It would be interesting to see in which chunks the data arrive at the NSDs -- if those chunks are bigger than the individual I/Os (i.e. multiples of the record size), then some data coalescing is going on and it just needs to have its path well paved... If not, there might indeed be something odd in the configuration.
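To see what is actually going on, you could compare the syscall pattern on the client (e.g. with "strace -c" on the writing process) against the request sizes the NSD servers see (e.g. in "mmdiag --iohist" output). Below is a minimal, hypothetical sketch of the client-side difference between raw record-sized writes and user-space buffering -- which is also what the putchar()-style library behaviour Jonathan describes below comes down to. The /gpfs/fs0 paths are placeholders only, and error handling is mostly omitted:

/* Sketch: contrast per-record write(2) calls with stdio-buffered
 * fwrite(3). The first loop hands the file system one 80-byte
 * request per record; the second coalesces the records in a 1 MiB
 * user-space buffer first. Whether the small requests still arrive
 * coalesced at the NSDs is what mmdiag --iohist should tell you.
 * NOTE: the /gpfs/fs0 paths are placeholders for a test directory. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define REC  80
#define NREC 13107          /* ~1 MiB worth of 80-byte records */

int main(void)
{
    char rec[REC];
    memset(rec, 'x', sizeof rec);

    /* 1) one syscall (and one small request to GPFS) per record */
    int fd = open("/gpfs/fs0/raw.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    for (int i = 0; i < NREC; i++)
        if (write(fd, rec, sizeof rec) != REC)
            return 1;
    close(fd);

    /* 2) records coalesced in a 1 MiB stdio buffer, flushed in
     *    large chunks */
    FILE *fp = fopen("/gpfs/fs0/buffered.dat", "w");
    setvbuf(fp, NULL, _IOFBF, 1 << 20);
    for (int i = 0; i < NREC; i++)
        fwrite(rec, sizeof rec, 1, fp);
    fclose(fp);
    return 0;
}

Under strace the first case should show roughly 13100 write() calls and the second only a handful; if mmdiag --iohist on the NSD servers nevertheless shows large requests in both cases, the coalescing happens inside GPFS and the tiny records are not the problem per se.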
Kind regards

Dr. Uwe Falke
IT Specialist
High Performance Computing Services / Integrated Technology Services / Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefa...@de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Management: Thomas Wolter, Sven Schooß
Registered office: Ehningen / Court of registration: Amtsgericht Stuttgart, HRB 17122

gpfsug-discuss-boun...@spectrumscale.org wrote on 11/04/2018 12:14:21:

> From: Jonathan Buzzard <jonathan.buzz...@strath.ac.uk>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: 11/04/2018 12:14
> Subject: Re: [gpfsug-discuss] Confusing I/O Behavior
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
>
> On Tue, 2018-04-10 at 23:43 +0200, Uwe Falke wrote:
> > Hi Aaron,
> > to how many different files do these tiny I/O requests go?
> >
> > Mind that write aggregation collects the I/O over a limited time
> > (5 secs or so) and ***per file***.
> > It therefore makes a large difference whether you write small chunks
> > all to one file or to a large number of individual files.
> > To fill a 1 MiB buffer you need about 13100 chunks of 80 bytes
> > ***per file*** within those 5 secs.
>
> Something else to bear in mind is that you might be using a library
> that converts everything into putchar() calls. I have seen this in the
> past with Office on the Mac platform, and it made the performance of
> saving a file over SMB/NFS appalling. I mean really, really bad: a
> "save as", which didn't do that, would take a second or two, while a
> plain save would take something like 15 minutes. To the local disk it
> was just fine.
>
> The GPFS angle is that this was all on a self-rolled clustered
> Samba/GPFS setup back in the day. It took a long time to track down,
> and performance turned out to be just as appalling with a real Windows
> file server.
>
> JAB.
>
> --
> Jonathan A. Buzzard                         Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss