As long as the -n setting of the FS (the estimated number of nodes that will mount the fs) more or less matches the actual number of mounts, seeing no degradation until 99.x % full is expected. If that -n estimate is far off, say set to 32 while the actual number of mounts is in the thousands, then degradation happens earlier, since the distribution of free blocks across the allocation map regions does not match the actual setup as well as it could.
Naturally, this also depends on how you fill the FS. If only a small percentage of the nodes are doing the creates, the distribution can be 'wrong' as well: those nodes run out of allocation map space early and need to look for free blocks elsewhere, costing RPC cycles and thus performance.
Putting this in numbers seems quite difficult ;)
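A toy model can at least illustrate the shape of the effect Achim describes (this is a sketch of the idea, not GPFS's actual allocator): the allocation map is split into regions, roughly sized for the -n node estimate; a node allocates cheaply from "its" region until it runs dry, then has to probe other regions, which stands in for the extra RPC cycles.

```python
# Toy model of allocation-map regions; illustrative only, not GPFS internals.
# Each of n_regions holds blocks_per_region free blocks. A writer prefers
# its home region; once that is exhausted it probes further regions
# (the analogue of the extra RPC round trips mentioned above).

def avg_probes(n_regions, blocks_per_region, active_writers, fill_frac):
    """Average number of regions probed per allocation while filling
    the filesystem up to fill_frac."""
    free = [blocks_per_region] * n_regions
    total_probes = 0
    allocs = int(n_regions * blocks_per_region * fill_frac)
    for i in range(allocs):
        r = i % active_writers          # each writer prefers its home region
        steps = 1
        while free[r] == 0:             # home region empty: search elsewhere
            r = (r + 1) % n_regions
            steps += 1
        free[r] -= 1
        total_probes += steps
    return total_probes / allocs

# All 32 nodes writing: home regions last until the fs is nearly full.
print(avg_probes(32, 1000, 32, 0.9))   # 1.0 probes per allocation
# Only 4 of 32 nodes writing: their regions empty at ~12% full,
# and the average search cost climbs well before the fs is full.
print(avg_probes(32, 1000, 4, 0.5))
```

In real GPFS the node estimate is the -n parameter given to mmcrfs (visible via mmlsfs); the model above only shows why a skewed writer set or a badly mismatched estimate moves the degradation point earlier.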
Mit freundlichen Grüßen / Kind regards
Achim Rehor
Software Technical Support Specialist AIX / EMEA HPC Support
IBM Certified Advanced Technical Expert - Power Systems with AIX
TSCC Software Service, Dept. 7922
Global Technology Services
Phone: +49-7034-274-7862
E-Mail: [email protected]
IBM Deutschland, Am Weiher 24, 65451 Kelsterbach, Germany
From: Peter Smith <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 11/06/2017 09:17 AM
Subject: Re: [gpfsug-discuss] Performance of GPFS when filesystem is almost full
Sent by: [email protected]
Hi Carl.
When we commissioned our system we ran an NFS stress tool, and filled the system to the top.
No performance degradation was seen until it was 99.7% full.
I believe that after this point it takes longer to find free blocks to write to.
YMMV.
On 6 November 2017 at 03:35, Carl <[email protected]> wrote:
Hi Folk,
Does anyone have much experience with the performance of GPFS as it becomes close to full? In particular I am referring to split data/metadata, where the data pool goes over 80% utilisation.
How much degradation do you see above 80% usage? Above 90%?
Cheers,
Carl.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
--
Peter Smith · Senior Systems Engineer
London · New York · Los Angeles · Chicago · Montréal
T +44 (0)20 7344 8000 · M +44 (0)7816 123009
19-23 Wells Street, London W1T 3PQ
Twitter · Facebook · framestore.com
