Yann Ylavic <ylavic....@gmail.com> writes:

> Why would buffered reads always be full reads (i.e. until the given
> size or EOF)?
>
> For apr_file_read() I don't think this is expected, users might want
> read() to return what is there when it's there (i.e. they handle short
> reads by themselves), so implicit full reads will force them to use
> small buffers artificially and they'll lose in throughput. On the
> other hand, users that want full reads can do that already/currently
> in their own code (by calling read() multiple times).
>
> For apr_file_gets() this is the same issue: why, when some read returns
> the \n, would we need to continue reading to fill the buffer? In a pipe
> scenario the peer would apr_file_write("Hello\n") and the consumer's
> apr_file_read() would not return that to the user immediately?
>
> This has nothing to do with buffered vs non-buffered IMHO, there are
> apr_file_{read,write}_full() already for that.

One comment I have here is that this commit does not change the existing
behavior of apr_file_read() for buffered files.  The read_buffered() helper
isn't new per se; it was merely factored out of apr_file_read().

So, doing full reads for buffered files has been the behavior on both
Unix and Win32 for a long time, I think.  For example, see
unix/readwrite.c:file_read_buffered().  This applies to the Unix version
of apr_file_gets() as well.
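For context, the caller-side pattern Yann mentions -- assembling a full read
out of possibly-short reads by calling read() in a loop -- can be sketched
with plain POSIX read().  This is a sketch of the idea only, not APR's
actual implementation of apr_file_read_full(); the helper name read_full()
is made up for illustration:

```c
#include <unistd.h>
#include <errno.h>
#include <sys/types.h>

/* Read up to `len` bytes, looping over possibly-short reads until the
 * buffer is full or EOF is reached.  Returns the number of bytes read,
 * or -1 on error.  This is the caller-side equivalent of what a
 * "full read" API layers on top of a short-read primitive. */
static ssize_t read_full(int fd, void *buf, size_t len)
{
    size_t total = 0;

    while (total < len) {
        ssize_t n = read(fd, (char *)buf + total, len - total);

        if (n == 0)             /* EOF: return what we have so far */
            break;
        if (n < 0) {
            if (errno == EINTR)
                continue;       /* interrupted by a signal: retry */
            return -1;          /* real error */
        }
        total += (size_t)n;
    }
    return (ssize_t)total;
}
```

The point of keeping the short-read primitive is that callers who want
data as soon as it arrives (the "Hello\n" pipe case above) get it
immediately, while callers who want full reads can layer a loop like
this on top.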


Regards,
Evgeny Kotkov
