Chris,

The job runs across multiple nodes, and the tiny 8K writes *should* be to different files that are unique per rank.

-Aaron

On 4/10/18 12:18 PM, Chris Hoffman wrote:
Hi Stumped,


Is this MPI job on one machine? Multiple nodes? Are the tiny 8K writes to the same file or different ones?


Chris

------------------------------------------------------------------------
*From:* gpfsug-discuss-boun...@spectrumscale.org <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] <aaron.s.knis...@nasa.gov>
*Sent:* Tuesday, April 10, 2018 9:00 AM
*To:* gpfsug main discussion list
*Subject:* [gpfsug-discuss] Confusing I/O Behavior
I hate admitting this, but I’ve found something that’s got me stumped.

We have a user running an MPI job on the system. Each rank opens up several output files to which it writes ASCII debug information. The net result across several hundred ranks is an absolute smattering of teeny tiny I/O requests to the underlying disks, which they don’t appreciate. Performance plummets. The I/O requests are 30 to 80 bytes in size. What I don’t understand is why these write requests aren’t getting batched up into larger write requests to the underlying disks.
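
For illustration, here’s roughly the kind of pattern I’m imagining on each rank. I haven’t seen the user’s code, so this is purely a hypothetical sketch of “lots of tiny ASCII debug records to a per-rank file”, not their actual program:

#include <mpi.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank writes to its own debug file. */
    char path[64];
    snprintf(path, sizeof(path), "debug.%04d.log", rank);
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    char rec[128];
    for (int step = 0; step < 100000; step++) {
        int n = snprintf(rec, sizeof(rec),
                         "rank %d step %d: some debug text\n", rank, step);
        write(fd, rec, n);   /* one 30-80 byte write per record */
    }

    close(fd);
    MPI_Finalize();
    return 0;
}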

If I do something like “dd if=/dev/zero of=foo bs=8k” on a node, I see that the nasty unaligned 8K I/O requests are batched up into nice 1M I/O requests before they hit the NSD.
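
In C terms that dd test is basically just the following loop (again, just a sketch of the equivalent pattern, not the exact test I ran):

#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const size_t bs = 8192;              /* bs=8k */
    char *buf = calloc(1, bs);           /* zeros, like /dev/zero */
    int fd = open("foo", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    for (int i = 0; i < 131072; i++)     /* 131072 * 8 KiB = 1 GiB */
        write(fd, buf, bs);              /* sequential 8 KiB writes */

    close(fd);
    free(buf);
    return 0;
}

Same general idea -- many smallish sequential writes from one process -- yet these get coalesced before they hit the NSDs.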

As best I can tell, the application isn’t doing any fsyncs and isn’t doing direct I/O to these files.
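
(To be clear about what I’m ruling out: either of the following would be an obvious explanation for tiny writes going straight through to the disks, but as far as I can tell the application does neither. Illustration only:)

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Direct I/O: O_DIRECT bypasses the buffer cache, so writes reach
       the disks at whatever size the application issued them. */
    int fd_direct = open("debug_direct.log",
                         O_WRONLY | O_CREAT | O_DIRECT, 0644);

    /* fsync after every record: even buffered writes get flushed out
       immediately, record by record. */
    int fd = open("debug.log", O_WRONLY | O_CREAT, 0644);
    const char *rec = "rank 0 step 0: some debug text\n";
    write(fd, rec, strlen(rec));
    fsync(fd);

    close(fd);
    close(fd_direct);
    return 0;
}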

Can anyone explain why seemingly very similar I/O workloads appear to result in well-formed NSD I/O in one case and awful I/O in the other?

Thanks!

-Stumped





--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
