[gpfsug-discuss] Hardening sudo wrapper?

2017-02-24 Thread Wei Guo

--

Message: 2
Date: Fri, 24 Feb 2017 14:31:08 -0500
From: Aaron Knister <aaron.s.knis...@nasa.gov>
To: <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] waiting for conn rdmas < conn maxrdmas
Message-ID: <ce8c2096-c56c-1df1-7c71-6a015df70...@nasa.gov>
Content-Type: text/plain; charset="windows-1252"; format=flowed

Interesting, thanks Sven!

Could "resources" I'm running out of include NSD server queues?

On 2/23/17 12:12 PM, Sven Oehme wrote:
> All this waiter shows is that you have more in flight than the node or
> connection can currently serve. The reasons for that can be
> misconfiguration, or you simply run out of resources on the node, not the
> connection. With the latest code you shouldn't see this anymore for node
> limits, as the system automatically adjusts the maximum number of RDMAs
> according to the node's capabilities:
>
> You should see messages in your mmfs.log like:
>
> 2017-02-23_06:19:50.056-0800: [I] VERBS RDMA starting with
> verbsRdmaCm=no verbsRdmaSend=yes verbsRdmaUseMultiCqThreads=yes
> verbsRdmaUseCompVectors=yes
> 2017-02-23_06:19:50.078-0800: [I] VERBS RDMA library libibverbs.so
> (version >= 1.1) loaded and initialized.
> 2017-02-23_06:19:50.078-0800: [I] VERBS RDMA verbsRdmasPerNode increased
> from 3072 to 3740 because verbsRdmasPerNodeOptimize is set to yes.
> 2017-02-23_06:19:50.121-0800: [I] VERBS RDMA discover mlx5_5 port 1
> transport IB link  IB NUMA node 16 pkey[0] 0x gid[0] subnet
> 0xFEC00013 id 0xE41D2D0300FDB9CD state ACTIVE
> 2017-02-23_06:19:50.137-0800: [I] VERBS RDMA discover mlx5_4 port 1
> transport IB link  IB NUMA node 16 pkey[0] 0x gid[0] subnet
> 0xFEC00015 id 0xE41D2D0300FDB9CC state ACTIVE
> 2017-02-23_06:19:50.153-0800: [I] VERBS RDMA discover mlx5_3 port 1
> transport IB link  IB NUMA node  1 pkey[0] 0x gid[0] subnet
> 0xFEC00013 id 0xE41D2D0300FDB751 state ACTIVE
> 2017-02-23_06:19:50.169-0800: [I] VERBS RDMA discover mlx5_2 port 1
> transport IB link  IB NUMA node  1 pkey[0] 0x gid[0] subnet
> 0xFEC00015 id 0xE41D2D0300FDB750 state ACTIVE
> 2017-02-23_06:19:50.185-0800: [I] VERBS RDMA discover mlx5_1 port 1
> transport IB link  IB NUMA node  0 pkey[0] 0x gid[0] subnet
> 0xFEC00013 id 0xE41D2D0300FDB78D state ACTIVE
> 2017-02-23_06:19:50.201-0800: [I] VERBS RDMA discover mlx5_0 port 1
> transport IB link  IB NUMA node  0 pkey[0] 0x gid[0] subnet
> 0xFEC00015 id 0xE41D2D0300FDB78C state ACTIVE
>
> We want to eliminate all these configurable limits eventually, but that
> takes time; as you can see above, we make progress with each release :-)
>
> Sven
>
>
>
>
> On Thu, Feb 23, 2017 at 9:05 AM Aaron Knister <aaron.s.knis...@nasa.gov> wrote:
>
> On a particularly heavily loaded NSD server I'm seeing a lot of these
> messages:
>
> 0x708B63E0 (  15539) waiting 0.004139456 seconds, NSDThread: on
> ThCond 0x7FFFA80772C8 (0x7FFFA80772C8) (VERBSEventWaitCondvar), reason
> 'waiting for conn rdmas < conn maxrdmas'
> 0x708EED80 (  15584) waiting 0.004075718 seconds, NSDThread: on
> ThCond 0x7FFF680008F8 (0x7FFF680008F8) (VERBSEventWaitCondvar), reason
> 'waiting for conn rdmas < conn maxrdmas'
> 0x708FDF00 (  15596) waiting 0.003965504 seconds, NSDThread: on
> ThCond 0x7FFF8C00E288 (0x7FFF8C00E288) (VERBSEventWaitCondvar), reason
> 'waiting for conn rdmas < conn maxrdmas'
> 0x709185A0 (  15617) waiting 0.003916346 seconds, NSDThread: on
> ThCond 0x7FFF9000CB18 (0x7FFF9000CB18) (VERBSEventWaitCondvar), reason
> 'waiting for conn rdmas < conn maxrdmas'
> 0x7092B380 (  15632) waiting 0.003659610 seconds, NSDThread: on
> ThCond 0x1DB04B8 (0x1DB04B8) (VERBSEventWaitCondvar), reason 'waiting
> for conn rdmas < conn maxrdmas'
>
> I've tried tweaking verbsRdmasPerConnection but the issue seems to
> persist. Has anyone encountered this, and if so, how'd you fix it?
>
> -Aaron
>
> --
> Aaron Knister
> NASA Center for Climate Simulation (Code 606.2)
> Goddard Space Flight Center
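
(A minimal way to confirm on a given node whether the automatic adjustment
Sven describes has kicked in, assuming the default log location
/var/adm/ras/mmfs.log.latest; treat this as a sketch, not a procedure.)

# show the VERBS RDMA startup lines, including the verbsRdmasPerNode increase
grep 'VERBS RDMA' /var/adm/ras/mmfs.log.latest

# show the verbs RDMA limits the daemon is currently running with
mmdiag --config | grep -i verbsRdmas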

Re: [gpfsug-discuss] waiting for conn rdmas < conn maxrdmas

2017-02-24 Thread Sven Oehme
It's more likely you run out of verbsRdmasPerNode, which is the top limit
across all connections for a given node.

Sven
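
(Following Sven's pointer, a rough sketch of how one might compare the
per-connection and per-node limits and raise the per-node one on older code
that does not auto-adjust it. The node class name 'nsdNodes' and the value
3740 are placeholders, and changed verbs settings may only take effect after
GPFS is restarted on the affected nodes.)

# effective values on this node
mmdiag --config | grep -i verbsRdmas

# raise the per-node limit on the NSD servers (placeholder node class/value)
mmchconfig verbsRdmasPerNode=3740 -N nsdNodes

# disruptive: restart GPFS on those nodes for the change to take effect
mmshutdown -N nsdNodes
mmstartup -N nsdNodes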


On Fri, Feb 24, 2017 at 11:31 AM Aaron Knister 
wrote:

Interesting, thanks Sven!

Could "resources" I'm running out of include NSD server queues?


--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



[gpfsug-discuss] NFS Permission matchup to mmnfs command

2017-02-24 Thread Shaun Anderson
I have a customer currently using native NFS and we are going to move them over
to CES. I'm looking at the mmnfs command and trying to map the NFS export
arguments to the CES arguments. My customer currently has these:


no_wdelay,
nohide,
rw,
sync,
no_root_squash,
no_all_squash


I have this so far:

mmnfs export add /gpfs/ltfsee/ --client XX.XX.XX.XX (

Access_Type=RW,

Squash=no_root_squash,noidsquash,

NFS_COMMIT=true

)

So the only argument that doesn't appear to be accounted for is the 'nohide' parameter.

Does this look right?
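
(For comparison, roughly what the full command could look like as a single
invocation, with a placeholder client network; the attribute names simply
mirror the ones above, so please verify them against the mmnfs man page for
your release before running it.)

mmnfs export add /gpfs/ltfsee --client \
  "10.0.0.0/24(Access_Type=RW,Squash=no_root_squash,noidsquash,NFS_COMMIT=true)"

# verify the export definition afterwards
mmnfs export list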

SHAUN ANDERSON
STORAGE ARCHITECT
O 208.577.2112
M 214.263.7014


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Fw: Flash (Alert) IBM Spectrum Scale V4.2.1/4.2.2 parallel log recovery function may result in undetected data corruption

2017-02-24 Thread Sanchez, Paul
Can anyone from IBM confirm whether this only affects manager nodes or if 
parallel log recovery is expected to happen on any other nodes?

Thx
Paul

From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Bryan Banister
Sent: Friday, February 24, 2017 9:08 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Fw: Flash (Alert) IBM Spectrum Scale V4.2.1/4.2.2 
parallel log recovery function may result in undetected data corruption

Has anyone been hit by this data corruption issue, and if so, how did you
determine that the file system had corruption?

Thanks!
-Bryan

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Fw: Flash (Alert) IBM Spectrum Scale V4.2.1/4.2.2 parallel log recovery function may result in undetected data corruption

2017-02-24 Thread Bryan Banister
Has anyone been hit by this data corruption issue, and if so, how did you
determine that the file system had corruption?

Thanks!
-Bryan
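
(Not an answer to the root cause, but the usual way to scan a file system for
inconsistencies is an offline mmfsck in report-only mode; a sketch, assuming
the file system device is gpfs0. Note that mmfsck checks metadata, so silently
corrupted user data would not necessarily be flagged.)

# unmount on all nodes, then run a read-only (no repair) check
mmumount gpfs0 -a
mmfsck gpfs0 -n

# remount when done
mmmount gpfs0 -a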

From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Oesterlin, Robert
Sent: Thursday, February 23, 2017 9:46 AM
To: gpfsug main discussion list 
Subject: [gpfsug-discuss] Fw: Flash (Alert) IBM Spectrum Scale V4.2.1/4.2.2 
parallel log recovery function may result in undetected data corruption

For those not subscribed, see below.

Bob Oesterlin
Sr Principal Storage Engineer, Nuance


From: "dw-not...@us.ibm.com" 
>
Reply-To: "dw-not...@us.ibm.com" 
>
Date: Thursday, February 23, 2017 at 9:42 AM
Subject: [EXTERNAL] [Forums] 'g...@us.ibm.com' replied to the 'IBM Spectrum 
Scale V4.2.2 announcements' topic thread in the 'General Parallel File System - 
Announce (GPFS - Announce)' forum.

Flash (Alert) IBM Spectrum Scale V4.2.1/4.2.2 parallel log recovery function
may result in undetected data corruption

Abstract

IBM has identified a problem with the IBM Spectrum Scale parallel log recovery 
function in V4.2.1/V4.2.2, which may result in undetected data corruption 
during the course of a file system recovery.



See the complete Flash at 
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009965





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Performance Tests using Bonnie++ forces expell of the client running the test

2017-02-24 Thread Achim Rehor
Well, expel of a node from the cluster happens when the client misses
updating its lease with the config manager. In your case of an IO benchmark
running, I would guess that either the node was too busy doing IO to keep up
with the lease renewal inside the leaseDuration timeframe, or some other node
had a communication issue with that node and asked for an expel of the
non-answering node. You will find traces of the expel communication in the
mmfs.log on the config/cluster manager node. For specific details, you might
want to raise a PMR.
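
(A short sketch of where to start digging, assuming the default log location;
the grep patterns are only examples.)

# find the cluster manager node
mmlsmgr -c

# on that node, look for the expel conversation around the time of the test
grep -i expel /var/adm/ras/mmfs.log.latest

# check the lease / failure detection settings currently in effect
mmdiag --config | grep -i -e lease -e failureDetection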
Kind regards

Achim Rehor
Software Technical Support Specialist AIX / EMEA HPC Support
IBM Deutschland
E-Mail: achim.re...@de.ibm.com

From: "Engeli Willi (ID SD)"
To: "gpfsug-discuss@spectrumscale.org"
Date: 02/24/2017 01:39 PM
Subject: [gpfsug-discuss] Performance Tests using Bonnie++ forces expell of the client running the test
Sent by: gpfsug-discuss-boun...@spectrumscale.org

Dear all,

Does one of you know if the Bonnie++ IO test is compatible with GPFS and, if
so, what could force an expel of the client from the cluster?

Thanks
Willi

[attachment "smime.p7s" deleted by Achim Rehor/Germany/IBM]

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Performance Tests using Bonnie++ forces expell of the client running the test

2017-02-24 Thread Engeli Willi (ID SD)
Dear all,

Does one of you know if the Bonnie++ IO test is compatible with GPFS and, if
so, what could force an expel of the client from the cluster?

 

Thanks

Willi
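
(Bonnie++ only issues ordinary POSIX file IO, so it generally runs on GPFS;
a typical invocation against a GPFS directory might look like the sketch
below. The path, size, user and label are placeholders, and -s should be at
least twice the node's RAM so the page cache does not mask the file system.)

bonnie++ -d /gpfs/fs0/bonnie-test -s 64g -n 128 -u someuser -m client01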

 



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss