The code currently assumes that a network read will always return a chunk
of data small enough to be written to stdout in a single call to fwrite,
after a single select on fileno (stdout); roughly the pattern sketched below.

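To make sure we are talking about the same thing, here is a minimal sketch
of that pattern as I understand it.  The function name, descriptor name,
and buffer size are placeholders of mine, not the actual CVS code:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>

    /* Placeholder sketch, not the real CVS code: one network read,
       one select on stdout, one fwrite of the whole chunk.  */
    static void
    copy_one_chunk (int netfd)
    {
      char buf[8192];
      fd_set wfds;
      ssize_t n;

      n = read (netfd, buf, sizeof buf);        /* one network read */
      if (n <= 0)
        return;                                 /* EOF or read error */

      FD_ZERO (&wfds);
      FD_SET (fileno (stdout), &wfds);
      select (fileno (stdout) + 1, NULL, &wfds, NULL, NULL); /* one select */

      fwrite (buf, 1, (size_t) n, stdout);      /* one fwrite */
    }
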
What would make a chunk of data too long to write?  The nmemb argument of
fwrite is a size_t, so unless fwrite itself has a bug, it should handle
however much data you ask it to write in one call.

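Put another way, the claim is that a check like the following (my own
illustration, not code from CVS; buf and len stand for whatever chunk was
read) should never report a short write on a healthy stdout:

    #include <stdio.h>

    /* Illustration only, not CVS code.  fwrite is declared as

         size_t fwrite (const void *ptr, size_t size, size_t nmemb,
                        FILE *stream);

       so the item count is a size_t with no artificial limit.  */
    static int
    write_all (const char *buf, size_t len)
    {
      size_t written = fwrite (buf, 1, len, stdout);

      return written == len ? 0 : -1;   /* -1 would mean a short write */
    }
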
I don't see why a large amount of data would cause a failure.
Previous messages claimed that it could, but I think I've refuted that
claim.  Is there a flaw in the argument I gave?




