Oh yeah, I see what you mean. I've just been looking on another cluster with LROC
drives and they have all disappeared. They are still listed in mmlsnsd, but
mmdiag --lroc shows the drives as "NULL"/Idle.
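
(If you want to catch this early, here is a rough local check, assuming the
"status Running" wording shown in the mmdiag output further down the thread;
it could run from cron on each node that is supposed to have an LROC device:

mmdiag --lroc | grep -q "status Running" \
    || echo "WARNING: LROC device not Running on $(hostname)"

It only checks the local node, so to sweep the whole cluster you would wrap it
in mmdsh or a parallel ssh of your choice.)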

Simon

From: "Oesterlin, Robert" <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Wednesday, 26 August 2015 13:27
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)

Yep, mine do too initially. It seems that after a number of days they get marked
as removed. In any case, IBM confirmed it. So… tread lightly.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413


From: <[email protected]>
on behalf of "Simon Thompson (Research Computing - IT Services)"
Reply-To: gpfsug main discussion list
Date: Wednesday, August 26, 2015 at 7:23 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)

Hmm, mine, which I created this morning (on a client node), seem to be working:


mmdiag --lroc

=== mmdiag: lroc ===
LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running
Cache inodes 1 dirs 1 data 1  Config: maxFile 0 stubFile 0
Max capacity: 190732 MB, currently in use: 4582 MB
Statistics from: Tue Aug 25 14:54:52 2015

Total objects stored 4927 (4605 MB) recalled 81 (55 MB)
      objects failed to store 467 failed to recall 1 failed to inval 0
      objects queried 0 (0 MB) not found 0 = 0.00 %
      objects invalidated 548 (490 MB)

This was running 4.1.1-1.
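
(For anyone wanting to reproduce this, a sketch of how such an LROC device is
typically defined, via an NSD stanza with usage=localCache; the device, NSD,
and node names below are illustrative, not the actual ones:

# lroc.stanza
%nsd:
  device=/dev/sdb1
  nsd=client1_lroc
  servers=client1
  usage=localCache

mmcrnsd -F lroc.stanza

The usage=localCache attribute is what tells GPFS to treat the NSD as a local
read-only cache for the node named in servers.)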

Simon

