Re: [gpfsug-discuss] GPFS long waiter

2017-11-16 Thread Olaf Weiser
Even though I think this is something to open a PMR for, you might be able to help yourself out by finding pending messages to this node. So check the mmfsadm dump tscomm output on that node; if you find pending messages to a specific node, go to that node and debug further. If it is not an important
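The steps Olaf describes can be sketched as a short shell session (a sketch, not a definitive procedure: the command names come from the message, the output format varies by Spectrum Scale release, and the dump path is illustrative). These commands only run on a node of a live GPFS cluster:

```shell
# 1. List the long waiters on the node that reports them:
mmdiag --waiters

# 2. Dump the cluster-communication (tscomm) state on that node
#    (path /tmp/tscomm.out is illustrative):
mmfsadm dump tscomm > /tmp/tscomm.out

# 3. Search the dump for pending messages; if they all point at one
#    destination node, log in there and continue debugging:
grep -i pending /tmp/tscomm.out
```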

Re: [gpfsug-discuss] Write performances and filesystem size

2017-11-16 Thread Olaf Weiser
Hi Ivano, from this output the performance degradation is not explainable. In my current environments, having multiple file systems (so multiple vdisks on one BB) works fine. As said, just open a PMR; I wouldn't consider this the "expected behavior". The only thing is the MD

[gpfsug-discuss] Latest Technical Blogs on Spectrum Scale

2017-11-16 Thread Sandeep Ramesh
Dear User Group members, here are the Development Blogs from the last 3 months on Spectrum Scale technical topics. Spectrum Scale Monitoring – Know More … https://developer.ibm.com/storage/2017/11/16/spectrum-scale-monitoring-know/ IBM Spectrum Scale 5.0 Release – What’s coming !

Re: [gpfsug-discuss] Write performances and filesystem size

2017-11-16 Thread Ivano Talamo
Hi, as additional information I paste the recovery group information for the full- and half-size cases. In both cases: - data is on sf_g_01_vdisk01 - metadata is on sf_g_01_vdisk02 - sf_g_01_vdisk07 is not used in the filesystem. This is with the full-space filesystem:
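The recovery-group and vdisk details Ivano pastes are typically produced with the standard GNR listing commands (a sketch: the recovery-group name sf_g_01 is inferred from the vdisk names in the message and may differ). These require a live GPFS Native RAID cluster:

```shell
# Show the recovery group with its declustered arrays and vdisks,
# including capacity details (-L for the long listing):
mmlsrecoverygroup sf_g_01 -L

# List all vdisks with their block size, RAID code and size:
mmlsvdisk
```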

Re: [gpfsug-discuss] Write performances and filesystem size

2017-11-16 Thread Dorigo Alvise (PSI)
Hi Olaf, yes, we have separate vdisks for MD: 2 vdisks, each 100 GB in size, with a 1 MB block size and 3-way replication. From: gpfsug-discuss-boun...@spectrumscale.org [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Olaf Weiser [olaf.wei...@de.ibm.com]
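A metadata vdisk with the parameters described above (100 GB, 1 MB block size, 3-way replication) would typically be defined in an mmcrvdisk stanza file along these lines. This is a sketch under assumptions: the vdisk, recovery-group and declustered-array names are illustrative, not taken from the actual configuration.

```shell
# Stanza file for: mmcrvdisk -F md_vdisk.stanza  (illustrative names)
%vdisk: vdiskName=sf_g_01_vdisk02
  rg=sf_g_01
  da=DA1
  blocksize=1m
  size=100g
  raidCode=3WayReplication
  diskUsage=metadataOnly
```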

Re: [gpfsug-discuss] Write performances and filesystem size

2017-11-16 Thread Olaf Weiser
Thx, that makes it a bit clearer. As your vdisk is big enough to span all pdisks, each of your tests (1/1, 1/2 or 1/4 of capacity) should bring the same performance. You mentioned something about vdisk layout. So in your test, for the full capacity test, you use just one

Re: [gpfsug-discuss] Write performances and filesystem size

2017-11-16 Thread Ivano Talamo
Hello Olaf, yes, I confirm that it is the Lenovo version of the ESS GL2, so 2 enclosures/4 drawers/166 disks in total. Each recovery group has one declustered array with all disks inside, so vdisks use all the physical disks, even in the case of a vdisk that is 1/4 of the total size.