If the problem occurs with fflush, most likely it will also occur
with fwrite, fclose, etc. fflush is merely the guinea pig here.
We may have to send bug reports for the systems where it does not always work.
However, even on those systems, this change is likely to be a big improvement.
That's essentially *ALL* of them. How many times do I have to say it --
nonblocking mode violates the fundamental assumptions about how files are
supposed to behave.
It didn't convince me the first time, and repeating it won't convince
me either. We need to get this bug fixed, and
I just checked FreeBSD libc, and it appears that fflush fails once its
underlying write fails with EAGAIN. So it appears that this approach
won't work under FreeBSD. That's not a good sign.
Isn't that a bug in FreeBSD?
We can try to work around bugs in various systems, but it is
I wrote the code, checking the specs, and got it to compile correctly.
I don't have any easy way to test it--Emacs is not well suited to that
purpose. Could you test it?
/* The idea of this module is to help programs cope with output
streams that have become nonblocking.
To use it,
Not exactly. If a solution like this were to go into GLIBC, then it
would become much more likely that it could be shared with GNULIB, which means
that the infrastructure for it is already in CVS and that supporting it
won't rely exclusively on the extremely small and overworked CVS
Why not have glibc incorporate the moral equivalent of the unlocked
stdio routines (fwrite_unlocked, printf_unlocked, etc.), like
fwrite_block?
I am not sure what unlocked means, but any solution like this would
only work on GNU. It would be better to use an equally simple
approach
could not do any harm on any system. If it fixes the problem on GNU
systems, that's an important improvement, so why not do it?
It might not do any additional harm. I won't claim to understand the
issue completely, but I was told that there might be data loss since
Currently CVS does a single select on the stdout fileno
before dumping as much data as it last received from the network to
stdout via fwrite. There is *no looping or checking for EAGAIN
returns*, which I believe you advocated, but I am informed by Larry
Jones and Paul
The code currently works by making the assumption that network reads
will always return chunks of data small enough to be written to stdout
in a single call to fwrite after a single select on fileno (stdout).
What would make some data too long? The nmemb argument of fwrite is a
It's possible that this fix is a full and correct solution. If the
descriptor has room for at least one byte, `write' won't return
EAGAIN; it will write at least one byte. It may write less than the
whole buffer that was specified. If the stdio code handles that case
* client.c (handle_m): Workaround to deal with stdio getting put
into non-blocking mode via redirection of stderr and interaction with
ssh on some platforms. On those boxes, stdio can put stdout
unexpectedly into non-blocking mode which may lead to fwrite() or
I am trying to bring about a fix for the bad interaction between
CVS and SSH that causes data to be lost.
See http://lists.gnu.org/archive/html/bug-cvs/2002-07/msg00423.html for
a good explanation.
After studying that message, I think I understand the problem. It
occurs when the stdout