Re: [gpfsug-discuss] IO sizes

2022-02-28 Thread Grunenberg, Renar
From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Uwe Falke Sent: Monday, 28 February 2022 10:17 To: gpfsug-discuss@spectrumscale.org Subject: Re: [gpfsug-discuss] IO sizes Hi, Kumaran, that would explain the smaller IOs before the reboot

Re: [gpfsug-discuss] IO sizes

2022-02-28 Thread Uwe Falke
--- Total   137711.70   137713.23   275424.92 My two cents, -Kums Kumaran Rajaram From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Uwe Falke Sent: Wednesday, February 23, 2022 8:04 PM To: gpfsug-discuss@spectrumscale.org Subject: Re: [gpfsug-discuss] I

Re: [gpfsug-discuss] IO sizes

2022-02-25 Thread Uwe Falke
ist" CC: Subject: [EXTERNAL] Re: [gpfsug-discuss] IO sizes Date: Thu, 24 Feb 2022 13:41 Hi Uwe, first of all, glad to see you back in the GPFS space ;) agreed, groups of subblocks being written will end up in IO sizes being smaller than the 8 MB filesys
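To make the subblock effect concrete, here is a minimal sketch of the arithmetic, assuming an 8 MiB block size and a V5 filesystem with 1024 subblocks per full block (the numbers are illustrative, not taken from this thread):

    # Hedged sketch: subblock arithmetic for an 8 MiB block size on a V5
    # filesystem (1024 subblocks per full block); all values are assumptions.
    echo $(( 8 * 1024 * 1024 / 1024 ))   # -> 8192 bytes, i.e. an 8 KiB subblock
    # A write that dirties only some of a block's subblocks can be flushed as an
    # IO smaller than the full 8 MiB block, which is one way sub-block-size IOs
    # can show up on the back end.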

Re: [gpfsug-discuss] IO sizes

2022-02-24 Thread Kumaran Rajaram
lf Of Uwe Falke Sent: Wednesday, February 23, 2022 8:04 PM To: gpfsug-discuss@spectrumscale.org Subject: Re: [gpfsug-discuss] IO sizes Hi, the test bench is gpfsperf running on up to 12 clients with 1...64 threads doing sequential reads and writes, file size per gpfsperf process is 12 TB (wit

Re: [gpfsug-discuss] IO sizes

2022-02-24 Thread Olaf Weiser
t" > Date: 23/02/2022 22:20 > Subject: [EXTERNAL] Re: [gpfsug-discuss] IO sizes > Sent by: gpfsug-discuss-boun...@spectrumscale.org > Alex, Metadata will be 4 KiB. Depending on the filesystem version you will also have subblocks to consider: V4 filesystems have 1/32 subblocks,

Re: [gpfsug-discuss] IO sizes

2022-02-24 Thread Achim Rehor
/2022 22:20:11: > From: "Andrew Beattie" > To: "gpfsug main discussion list" > Date: 23/02/2022 22:20 > Subject: [EXTERNAL] Re: [gpfsug-discuss] IO sizes > Sent by: gpfsug-discuss-boun...@spectrumscale.org > Alex, Metadata will be 4 KiB. Depending on th

Re: [gpfsug-discuss] IO sizes

2022-02-23 Thread Uwe Falke
Hi, the test bench is gpfsperf running on up to 12 clients with 1...64 threads doing sequential reads and writes, file size per gpfsperf process is 12 TB (with 6 TB I saw caching effects, in particular for large thread numbers ...) As I wrote initially: GPFS is issuing nothing but 8 MiB IOs to
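For reference, a minimal sketch of a gpfsperf run along these lines; gpfsperf ships as source under the GPFS samples directory, and the path, file name, and exact option values below are assumptions, so check them against the tool's usage message:

    # Hedged sketch of a sequential gpfsperf write/read test; paths, sizes and
    # option values are illustrative assumptions, not copied from this thread.
    cd /usr/lpp/mmfs/samples/perf && make                          # build the sample tool
    ./gpfsperf create seq /gpfs/fs0/testfile -n 12t -r 8m -th 32   # sequential write
    ./gpfsperf read   seq /gpfs/fs0/testfile -n 12t -r 8m -th 32   # sequential read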

Re: [gpfsug-discuss] IO sizes

2022-02-23 Thread Andrew Beattie
Alex, Metadata will be 4 KiB. Depending on the filesystem version you will also have subblocks to consider: V4 filesystems have 1/32 subblocks, V5 filesystems have 1/1024 subblocks (assuming metadata and data block size is the same). My first question would be: “Are you sure that the Linux OS is
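To check which case applies, the block size, subblock (fragment) size, and format version of a filesystem can be read with mmlsfs; a minimal sketch, assuming a filesystem named gpfs0 (the name is an assumption, and the flags should be verified against your release):

    # Hedged sketch: inspecting block/subblock geometry of a filesystem
    # named "gpfs0" (the name is a placeholder).
    mmlsfs gpfs0 -B     # full block size
    mmlsfs gpfs0 -f     # minimum fragment (subblock) size
    mmlsfs gpfs0 -V     # filesystem format version (V4 vs V5 subblock behaviour)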

[gpfsug-discuss] IO sizes

2022-02-23 Thread Uwe Falke
Dear all, sorry for asking a question which seems not directly GPFS related: In a setup with 4 NSD servers (old-style, with storage controllers in the back end), 12 clients and 10 Seagate storage systems, I see in benchmark tests that just one of the NSD servers sends smaller IO
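One way to see the request sizes an NSD server actually submits to its back-end devices is iostat; a minimal sketch, with placeholder device names:

    # Hedged sketch: watch average request sizes on an NSD server's back-end
    # devices (device names are placeholders).
    iostat -xm 5 sdb sdc
    # Newer sysstat reports areq-sz in KiB (8 MiB IOs ~ 8192); older versions
    # report avgrq-sz in 512-byte sectors (8 MiB IOs ~ 16384).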