From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of Uwe Falke
Sent: Monday, 28 February 2022 10:17
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] IO sizes
Hi, Kumaran,
that would explain the smaller IOs before the reboot.
[quoted throughput summary; only the totals row survived]
Total      137711.70      137713.23      275424.92
My two cents,
-Kums
Kumaran Rajaram
From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of Uwe Falke
Sent: Wednesday, February 23, 2022 8:04 PM
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] IO sizes
ist"
CC:
Betreff: [EXTERNAL] Re: [gpfsug-discuss] IO sizes
Datum: Do, 24. Feb 2022 13:41
Hi Uwe,
first of all, glad to see you back in the GPFS space ;)
agreed, groups of subblocks being written will end up in IO sizes
smaller than the 8MB file system block size.
From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of Uwe Falke
Sent: Wednesday, February 23, 2022 8:04 PM
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] IO sizes
Hi,

the test bench is gpfsperf running on up to 12 clients with 1...64
threads doing sequential reads and writes; file size per gpfsperf
process is 12TB (with 6TB I saw caching effects, in particular for
large thread numbers ...).

As I wrote initially: GPFS is issuing nothing but 8MiB IOs to the NSDs.
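(Not gpfsperf itself, just a minimal Python stand-in for the write pattern
described above: one large file written front to back in fixed-size records,
so every application write matches the 8 MiB file system block size. The path
and sizes below are placeholders, and the real runs used many threads and a
12TB file per process.)

#!/usr/bin/env python3
# Minimal stand-in for a gpfsperf-style sequential write: one file written
# sequentially in fixed-size records.  Path and sizes are examples only.
import os

PATH     = "/gpfs/fs1/test/seqwrite.dat"   # hypothetical GPFS path
REC_SIZE = 8 * 1024 * 1024                 # 8 MiB records, matching the block size
N_RECS   = 1280                            # 1280 x 8 MiB = 10 GiB test file

buf = os.urandom(REC_SIZE)                 # incompressible payload

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    for _ in range(N_RECS):
        os.write(fd, buf)                  # sequential, record-sized writes
    os.fsync(fd)                           # make sure data really leaves the page cache
finally:
    os.close(fd)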
From: "Andrew Beattie"
To: "gpfsug main discussion list"
Date: 23/02/2022 22:20
Subject: [EXTERNAL] Re: [gpfsug-discuss] IO sizes
Sent by: gpfsug-discuss-boun...@spectrumscale.org

Alex,

Metadata will be 4 KiB.

Depending on the filesystem version you will also have subblocks to
consider: V4 filesystems have 1/32 subblocks, V5 filesystems have 1/1024
subblocks (assuming metadata and data block size is the same).

My first question would be: "Are you sure that Linux OS is ..."
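(A quick back-of-the-envelope check of those fractions, assuming the 8 MiB
data block size discussed elsewhere in this thread:)

# Subblock sizes implied by the fractions mentioned above, for an 8 MiB block.
BLOCK = 8 * 1024 * 1024                    # 8 MiB file system block size

subblock_v4 = BLOCK // 32                  # V4: 1/32 of a block
subblock_v5 = BLOCK // 1024                # V5: 1/1024 of a block

print(f"V4 subblock: {subblock_v4 // 1024:4d} KiB")   # 256 KiB
print(f"V5 subblock: {subblock_v5 // 1024:4d} KiB")   #   8 KiB

So a write that only dirties a handful of subblocks can legitimately reach
the storage as an IO far smaller than the full 8 MiB block.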
Dear all,
sorry for asking a question which seems not directly GPFS related:
In a setup with 4 NSD servers (old-style, with storage controllers in
the back end), 12 clients and 10 Seagate storage systems, I do see in
benchmark tests that just one of the NSD servers does send smaller IO
requests to its storage back end than the other three.
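(One rough way to compare the servers is to look at average request sizes at
the Linux block layer on each NSD server. A sketch reading /sys/block/<dev>/stat
follows; it assumes the back-end LUNs show up as sd* devices, so adjust the
glob for multipath devices. The counters are cumulative since boot, so sample
before and after a benchmark run and take the difference for interval averages.)

#!/usr/bin/env python3
# Per-disk average request sizes derived from /sys/block/<dev>/stat.
import glob

SECTOR = 512   # the stat counters report sectors of 512 bytes

for path in sorted(glob.glob("/sys/block/sd*/stat")):
    dev = path.split("/")[3]
    f = open(path).read().split()
    r_ios, r_sec = int(f[0]), int(f[2])    # reads completed, sectors read
    w_ios, w_sec = int(f[4]), int(f[6])    # writes completed, sectors written
    avg_r = r_sec * SECTOR / r_ios / 1024 if r_ios else 0.0
    avg_w = w_sec * SECTOR / w_ios / 1024 if w_ios else 0.0
    print(f"{dev}: avg read {avg_r:9.1f} KiB   avg write {avg_w:9.1f} KiB")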