GPFS doesn't flush on close by default unless explicitly asked to by the application itself, but you can configure that.
mmchconfig flushOnClose=yes

If you use O_SYNC or O_DIRECT, then each write ends up on the media before we return.

sven

On Wed, Apr 11, 2018 at 7:06 AM Peter Serocka <pesero...@gmail.com> wrote:
> Let’s keep in mind that line buffering is a concept
> within the standard C library;
> if every log line triggers one write(2) system call,
> and it’s not direct I/O, then multiple writes still get
> coalesced into a few larger disk writes (as with the dd example).
>
> A logging application might choose to close(2)
> a log file after each write(2) — that produces
> a different scenario, where the file system might
> guarantee that the data has been written to disk
> when close(2) returns success.
>
> (Local Linux file systems do not do this with default mounts,
> but networked file systems usually do.)
>
> Aaron, can you trace your application to see
> what is going on in terms of system calls?
>
> — Peter
>
> > On 2018 Apr 10 Tue, at 18:28, Marc A Kaplan <makap...@us.ibm.com> wrote:
> >
> > Debug messages are typically unbuffered or "line buffered". If that is
> > truly causing a performance problem AND you still want to collect the
> > messages -- you'll need to find a better way to channel and collect those
> > messages.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss