Can someone explain how the core output filter's buffering is supposed to work?
If you look at
httpd-test/perl-framework/c-modules/test_pass_brigade/mod_test_pass_brigade.c,
you'll see that it intentionally creates a brigade containing just one bucket
and calls ap_pass_brigade() with that single-bucket brigade, repeating until
the requested amount of data has been sent (a rough sketch of the pattern
follows). You can make a request like so:
http://localhost:8529/test_pass_brigade?1024,500000
which means: use a buffer size of 1024 bytes and send 500,000 bytes total.
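For concreteness, here is a minimal sketch of that pattern. It is not the
actual module source: it assumes the current httpd 2.x brigade API
signatures, hardcodes the two query-string values instead of parsing them,
and omits content-type setup, so treat the names and details as
illustrative only.

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include <string.h>

static int test_pass_brigade_like_handler(request_rec *r)
{
    apr_size_t buffer_size = 1024;   /* first query arg, e.g. ?1024,500000 */
    apr_size_t total       = 500000; /* second query arg                   */
    apr_size_t remaining   = total;
    char *buf = apr_palloc(r->pool, buffer_size);

    memset(buf, 'a', buffer_size);

    while (remaining > 0) {
        apr_size_t len = remaining < buffer_size ? remaining : buffer_size;
        apr_bucket_brigade *bb =
            apr_brigade_create(r->pool, r->connection->bucket_alloc);
        apr_bucket *b =
            apr_bucket_transient_create(buf, len,
                                        r->connection->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);

        /* one bucket, one brigade, one pass down the filter chain */
        if (ap_pass_brigade(r->output_filters, bb) != APR_SUCCESS) {
            return HTTP_INTERNAL_SERVER_ERROR;
        }
        remaining -= len;
    }

    /* finally send EOS so anything still buffered downstream is flushed */
    {
        apr_bucket_brigade *bb =
            apr_brigade_create(r->pool, r->connection->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb,
            apr_bucket_eos_create(r->connection->bucket_alloc));
        ap_pass_brigade(r->output_filters, bb);
    }
    return OK;
}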
With the tracing patch at the end of this message applied, I see this in the error_log:
writev 15543 bytes
writev 16384 bytes
writev 10240 bytes
writev 8192 bytes
... a bunch of 8192's ...
writev 7456 bytes
writev 0 bytes
Then with 8192,500000:
writev 41143 bytes
writev 8192 bytes
... a bunch of 8192's ...
writev 288 bytes
writev 0 bytes
Is this the expected behavior? The reason I'm asking is that mod_ssl does
pretty much what mod_test_pass_brigade.c does, with 8192-byte buffers. I
have a patch in the works to optimize that, but I want to make sure the
core output filter is behaving as expected first. I thought it would buffer
until it could fill AP_MIN_BYTES_TO_WRITE * MAX_IOVEC_TO_WRITE (a sketch of
the check I expected is below). Then again, I guess there is a reason it
doesn't, since the OLD_WRITE filter does its own buffering. Any insight
would be greatly appreciated, thanks.
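To be clear about what I expected, here is a sketch of that policy. This is
NOT a quote of server/core.c: core_ctx_t, saved_bb, contains_flush_or_eos()
and write_it_all() are made-up names, and a real filter would have to set
the buckets aside into a longer-lived pool rather than simply concatenating
brigades. Only the two macros come from the real headers.

static apr_status_t expected_core_output_filter(ap_filter_t *f,
                                                apr_bucket_brigade *bb)
{
    core_ctx_t *ctx = f->ctx;           /* hypothetical per-connection ctx */
    apr_off_t queued = 0;

    APR_BRIGADE_CONCAT(ctx->saved_bb, bb);      /* accumulate the input    */
    apr_brigade_length(ctx->saved_bb, 1, &queued);

    if (queued < AP_MIN_BYTES_TO_WRITE * MAX_IOVEC_TO_WRITE
        && !contains_flush_or_eos(ctx->saved_bb)) {
        return APR_SUCCESS;                     /* keep buffering, no I/O  */
    }
    return write_it_all(f, ctx->saved_bb);      /* writev() the whole lot  */
}

In other words, I expected one big writev() per threshold's worth of data,
rather than one writev() per 8192-byte brigade.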
P.S.
I realize the answer is probably buried in the archives; maybe somebody
wants to write up a documented summary?
--- server/core.c 2001/11/19 22:36:20 1.100
+++ server/core.c 2001/11/20 22:52:56
@@ -3201,7 +3201,7 @@
}
else {
apr_size_t unused_bytes_sent;
-
+ fprintf(stderr, "writev %d bytes\n", (int)nbytes);
rv = writev_it_all(net->client_socket,
vec, nvec,
nbytes, &unused_bytes_sent);