* Florian Weimer:

> * Eric Blake:
>
>> On 4/29/19 2:45 PM, Florian Weimer wrote:
>>> I get that error checking is important. But why not just use ferror and
>>> fflush? Closing the streams is excessive and tends to introduce
>>> use-after-free issues, as evidenced by the sanitizer workarounds.
>>
>> If I recall the explanation, at least some versions of NFS do not
>> actually flush on fflush(), but wait until close(). If you want to avoid
>> data loss and ensure that things written made it to the remote storage
>> while detecting every possible indication when an error may have
>> prevented that from working, then you have to go all the way through
>> close().
>
> Any file system on Linux does this to a varying degree (unlike Solaris
> and Windows, I think). If you want to catch low-level I/O errors, you
> need to call fsync after fflush. And I doubt this is something we want
> to do because it will result in bad-looking performance.
>
> But the NFS aspect is somewhat plausible at least.
>
> I can try to figure out if NFS makes a difference for Linux here,
> i.e. if there are cases where a write will succeed, but only an
> immediately following close will report an error condition that is
> known, in principle, even at the time of the write. A difference
> between hard and soft NFS mounts could matter in this context.
Start of thread: <https://lists.gnu.org/r/bug-gnulib/2019-04/msg00059.html>

I've been told that on Linux, close does not report writeback errors.
So the only way to get a reliable error indicator (beyond what you get
from the write system call) would be fsync. And I doubt you want to
call that, purely for performance reasons.

This means that for Linux at least, close_stdout should just call
fflush, not fclose.

Thanks,
Florian
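
[Editor's note: a minimal sketch of the fflush-plus-ferror exit check
discussed above. This is not the actual gnulib close_stdout; the
function name flush_stdout_or_die, the messages, and the exit code are
illustrative assumptions only.]

/* Sketch: check stdout for write errors at exit using fflush and
   ferror, without calling fclose.  Illustrative only.  */

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void
flush_stdout_or_die (void)
{
  /* Push buffered stdio data to the kernel and check both the fflush
     result and the stream's sticky error indicator.  Note that errno
     may be stale if the error happened on an earlier write.  */
  if (fflush (stdout) != 0 || ferror (stdout))
    {
      fprintf (stderr, "write error on standard output: %s\n",
               strerror (errno));
      /* Use _exit: calling exit from an atexit handler is not allowed.  */
      _exit (EXIT_FAILURE);
    }
  /* Without an additional fsync (fileno (stdout)), errors that the
     kernel only detects during later writeback are not caught; as
     noted above, close on Linux would not report them either.  */
}

int
main (void)
{
  atexit (flush_stdout_or_die);
  puts ("hello");
  return 0;
}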