Even though I think this is something to open a PMR for, you might help yourself out by checking for pending messages to this node: look at the mmfsadm dump tscomm output on that node, and if you find pending messages to a specific node, go to that node and debug further. If it is not an important
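The check described above amounts to scanning the dump output for pending messages. A minimal sketch of that filtering step, in Python; the sample dump text below is entirely made up for illustration, since the real mmfsadm dump tscomm format varies by release:

```python
# Hypothetical excerpt of "mmfsadm dump tscomm" output; the real format
# differs by GPFS release, so treat this purely as an illustration.
sample = """\
Pending messages:
  msg_id 12345, service 13.1, msg 'sgmMsgTellOpen', n_dest 1, status pending
  msg_id 12346, service 13.1, msg 'nsdMsgRead', dest <c0n7>, status pending
"""

def pending_lines(dump_text):
    """Return the lines that describe a pending message."""
    return [ln.strip() for ln in dump_text.splitlines()
            if "msg_id" in ln and "pending" in ln]

for line in pending_lines(sample):
    print(line)
```

If this list is non-empty for a specific destination node, that node is the next place to look.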
Hi Ivano,
from this output, the performance degradation is not explainable. In my current environments, having multiple file systems (so multiple vdisks on one building block) works fine. As said, just open a PMR; I wouldn't consider this the "expected behavior". The only thing is... the MD
Dear User Group members,
Here are the Development Blogs from the last 3 months on Spectrum Scale
Technical Topics.
Spectrum Scale Monitoring – Know More …
https://developer.ibm.com/storage/2017/11/16/spectrum-scale-monitoring-know/
IBM Spectrum Scale 5.0 Release – What’s coming !
Hi,
as additional information I paste the recovery group information for the
full-size and half-size cases.
In both cases:
- data is on sf_g_01_vdisk01
- metadata on sf_g_01_vdisk02
- sf_g_01_vdisk07 is not used in the filesystem.
This is with the full-space filesystem:
Hi Olaf,
yes, we have separate vdisks for MD: 2 vdisks, each 100 GBytes, with 1 MByte
blocksize and 3WayReplication.
A
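A quick back-of-the-envelope check of the metadata footprint described above (2 vdisks of 100 GB each with 3WayReplication). One assumption here: the 100 GB figure is taken as the vdisk's logical size, so each logical byte costs three physical bytes on the pdisks:

```python
# Assumption: 100 GB is the logical size of each MD vdisk; with
# 3WayReplication every block is stored three times, so the physical
# footprint in the recovery group is three times the logical capacity.
logical_gb_per_vdisk = 100
n_vdisks = 2
replication = 3  # 3WayReplication keeps three copies of every block

logical_gb = n_vdisks * logical_gb_per_vdisk
physical_gb = logical_gb * replication  # pdisk space consumed

print(f"{logical_gb} GB of metadata occupies about {physical_gb} GB of pdisk space")
```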
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Olaf Weiser
[olaf.wei...@de.ibm.com]
Thx, that makes it a bit clearer. Since your vdisk is big enough to span all
pdisks, each of your tests (1/1, 1/2, or 1/4 of capacity) should deliver
the same performance.
You mentioned something about the vdisk layout...
So in your test, for the full capacity test, you use just one
Hello Olaf,
yes, I confirm that it is the Lenovo version of the ESS GL2, so 2
enclosures / 4 drawers / 166 disks in total.
Each recovery group has one declustered array with all disks inside, so
vdisks use all the physical disks, even in the case of a vdisk that is
1/4 of the total size.
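The point above is why vdisk size alone should not change throughput: in a declustered array, a vdisk's strips are spread over every pdisk regardless of how large the vdisk is. A toy model, with made-up numbers (166 disks split over 2 recovery groups, so 83 pdisks assumed per declustered array; the strip counts are purely illustrative):

```python
# Toy model of a declustered array: strips of a vdisk are placed
# round-robin over all pdisks in the array. Numbers are illustrative
# (83 pdisks assumed per declustered array, strip counts made up).
n_pdisks = 83
strips_full = 83_000                 # a "full size" vdisk
strips_quarter = strips_full // 4    # a vdisk 1/4 of the total size

def pdisks_touched(n_strips, n_pdisks):
    """How many distinct pdisks hold at least one strip."""
    return len({i % n_pdisks for i in range(n_strips)})

# Both the full-size and the quarter-size vdisk land on all 83 pdisks,
# so both can drive the same number of spindles.
print(pdisks_touched(strips_full, n_pdisks),
      pdisks_touched(strips_quarter, n_pdisks))
```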