On Wed, 2002-10-16 at 00:10, Bojan Smojver wrote:
> Coming back to what William is proposing, the three pool approach... I'm
> guessing that with Response put in-between Connection and Request, the
> Response will store responses for multiple requests, right? Then, once
> the core-filter is done with it, the Response will be notified, it will
> finalise the processing (e.g. log all the requests) and destroy itself.
> Sounds good, but maybe we should just keep the requests around until
> they have been dealt with?
There's a small benefit to cleaning up the request instead of keeping it
around: you can free the memory that was used for the request and recycle
it for some other request, which helps reduce the server's total memory
footprint. On the other hand, creating separate pools for the request and
the response could end up increasing total memory usage, because we
currently use at least 8KB per pool. We'll probably need to do some
research to figure out which approach works best in practice.

> BTW, is there a way in 2.0 to force every request to be written to the
> network in order? Maybe I should focus on making an option like that in
> order to make it work in 2.0?

I think you could achieve this result by appending a bucket of type
apr_bucket_flush to the output brigade. That would be a reasonable
workaround for 2.0.

Brian
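For what it's worth, the flush-bucket idea could be sketched as a small
output filter along these lines. This is only an illustration, not tested
code: the filter name and the hook/registration are omitted or made up,
but apr_bucket_flush_create, APR_BRIGADE_INSERT_TAIL, and ap_pass_brigade
are the standard 2.0 calls:

```c
/* Sketch: force data in a brigade out to the network by appending a
 * FLUSH bucket before passing the brigade on. Assumes the Apache 2.0
 * output filter API; "force_flush_filter" is a hypothetical name and
 * the filter would still need to be registered with
 * ap_register_output_filter() and inserted for the request. */
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t force_flush_filter(ap_filter_t *f,
                                       apr_bucket_brigade *bb)
{
    /* Allocate a flush bucket from the connection's bucket allocator
     * and append it to the tail of the brigade, so that downstream
     * filters (and ultimately the core output filter) write everything
     * accumulated so far to the network. */
    apr_bucket *flush = apr_bucket_flush_create(f->c->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, flush);

    /* Hand the brigade (now ending in a FLUSH bucket) to the next
     * filter in the chain. */
    return ap_pass_brigade(f->next, bb);
}
```

That wouldn't strictly order concurrent requests by itself, but it would
guarantee each request's output is pushed out rather than buffered.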
