[gpfsug-discuss] Hardening sudo wrapper?

2017-02-24 Thread Wei Guo

Re: [gpfsug-discuss] waiting for conn rdmas < conn maxrdmas

2017-02-24 Thread Sven Oehme
It's more likely you are running out of verbsRdmasPerNode, which is the top limit across all connections for a given node. Sven On Fri, Feb 24, 2017 at 11:31 AM Aaron Knister wrote: Interesting, thanks Sven! Could the "resources" I'm running out of include NSD server queues? On
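For readers hitting the same ceiling, the tunable Sven names can be inspected and raised with the standard Spectrum Scale configuration commands. A minimal sketch, assuming the node names and the example value are purely illustrative (this is not a tuning recommendation; check with IBM support before changing RDMA settings):

```shell
# Show the current RDMA-related limits on this cluster
mmlsconfig verbsRdmasPerNode verbsRdmasPerConnection

# Raise the per-node RDMA limit (example value and node names only;
# the change takes effect after GPFS restarts on the affected nodes)
mmchconfig verbsRdmasPerNode=514 -N nsdserver01,nsdserver02
```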

Re: [gpfsug-discuss] waiting for conn rdmas < conn maxrdmas

2017-02-24 Thread Aaron Knister
Interesting, thanks Sven! Could the "resources" I'm running out of include NSD server queues? On 2/23/17 12:12 PM, Sven Oehme wrote: All this waiter shows is that you have more in flight than the node or connection can currently serve. The reasons for that can be misconfiguration, or you simply run
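For anyone wanting to look at NSD server queue usage directly, the mmfsadm dump interface often mentioned on this list can show it. A sketch only: mmfsadm is an unsupported diagnostic tool, its output format varies by release, and the grep pattern below is a guess at what to look for:

```shell
# Dump NSD state on an NSD server and pull out the queue lines
# (unsupported diagnostic interface; format differs between releases)
mmfsadm saferdump nsd | grep -i -A 2 "queue"
```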

[gpfsug-discuss] NFS Permission matchup to mmnfs command

2017-02-24 Thread Shaun Anderson
I have a customer currently using native NFS and we are going to move them over to CES. I'm looking at the mmnfs command and trying to map the NFS export arguments to the CES arguments. My customer currently has these: no_wdelay, nohide, rw, sync, no_root_squash, no_all_squash I have
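As a rough sketch of the mapping: CES NFS is backed by NFS-Ganesha, so the kernel-NFS option names don't carry over one-to-one. The export path and client spec below are hypothetical, and the exact option syntax should be verified against the mmnfs man page for your release:

```shell
# Native /etc/exports line being replaced (the customer's options):
#   /gpfs/export  *(no_wdelay,nohide,rw,sync,no_root_squash,no_all_squash)

# Approximate CES equivalent: rw -> Access_Type=RW,
# no_root_squash -> Squash=no_root_squash; sync, no_wdelay and nohide
# have no direct Ganesha counterparts in the mmnfs client options.
mmnfs export add /gpfs/export \
  --client "client.example.com(Access_Type=RW,Squash=no_root_squash,SecType=sys)"
```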

Re: [gpfsug-discuss] Fw: Flash (Alert) IBM Spectrum Scale V4.2.1/4.2.2 parallel log recovery function may result in undetected data corruption

2017-02-24 Thread Sanchez, Paul
Can anyone from IBM confirm whether this only affects manager nodes or if parallel log recovery is expected to happen on any other nodes? Thx Paul From: gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Bryan Banister Sent: Friday, February

Re: [gpfsug-discuss] Fw: Flash (Alert) IBM Spectrum Scale V4.2.1/4.2.2 parallel log recovery function may result in undetected data corruption

2017-02-24 Thread Bryan Banister
Has anyone been hit by this data corruption issue, and if so, how did you determine the file system had corruption? Thanks! -Bryan From: gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Oesterlin, Robert Sent: Thursday, February 23, 2017
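For others wondering how such corruption is usually looked for: the standard structural checker is mmfsck. A minimal sketch, assuming a hypothetical file system name; a complete offline check requires the file system to be unmounted, and -n makes the run report-only:

```shell
# Report-only structural check of file system "gpfs0"
# (unmount the file system first for a complete offline check;
# -n reports problems without repairing anything)
mmfsck gpfs0 -n
```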

Re: [gpfsug-discuss] Performance Tests using Bonnie++ forces expel of the client running the test

2017-02-24 Thread Achim Rehor
Well, expel of a node from the cluster happens when the client fails to renew its lease with the config manager. In your case, with an I/O benchmark running, I would guess that either the node was too busy running I/O to keep up with the lease renewal inside the leaseDuration timeframe, or some
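The lease timers Achim refers to can be inspected with mmlsconfig, and mmdiag can show whether a node is keeping its lease current. A sketch to run on the affected client; mmdiag output format varies by release:

```shell
# Show the disk-lease timing parameters for the cluster
mmlsconfig leaseDuration leaseRecoveryWait failureDetectionTime

# Check this node's cluster-membership and network state,
# including disk-lease status
mmdiag --network
```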

[gpfsug-discuss] Performance Tests using Bonnie++ forces expel of the client running the test

2017-02-24 Thread Engeli Willi (ID SD)
Dear all, Does any of you know if the Bonnie++ I/O test is compatible with GPFS and, if so, what could force expulsion of the client from the cluster? Thanks Willi