On Thu, 2002-10-17 at 00:36, William A. Rowe, Jr. wrote:
> At 02:10 AM 10/16/2002, Bojan Smojver wrote:
> 
> >Coming back to what William is proposing, the three pool approach... I'm
> >guessing that with Response put in-between Connection and Request, the
> >Response will store responses for multiple requests, right?
> 
> No, I'm suggesting a single response pool for each request.  As soon as
> the request is processed, the request pool disappears (although some 
> body may still be sitting in brigades, or set aside within filters).  As soon
> as the body for -that- request has been flushed completely, that response
> pool will disappear.
> 
> Since this isn't too clear by the names, perhaps 'connection', 'request'
> and an outer 'handler' pool make things more clear?  In that case, the
> handler pool would disappear at once when the response body had been
> constructed, and the 'request' pool would disappear once it was flushed.

I was associating this way too much with Request/Response in Servlet
architecture. I guess I understand what you mean now.

> > Then, once the core-filter is
> >done with it, the Response will be notified, it will finalise the processing
> >(e.g. log all the requests) and destroy itself. Sounds good, but maybe we should 
>just keep the requests around until they have been dealt with?
> 
> Consider pipelining.  You can't start keeping 25 requests hanging around
> for 1 page + 24 images now, can you?  The memory footprint would soar
> through the roof.

Yep, get it.

> >BTW, is there a way in 2.0 to force every request to be written to the network
> >in order? Maybe I should focus on making an option like that in order to make it
> >work in 2.0?
> 
> How do you mean?  Everything to the network is serialized.  What is out
> of order today, date stamps in the access logs? That is because it takes
> different amounts of time to handle different sorts of requests, availability
> from kernel disk cache of the pages, etc.

Maybe a wrong choice of words. What I meant was in order on the
timeline, like this:

- request received
- request processed
- response flushed to network
- request received
- request processed
- response flushed to network
...

rather than what can happen now:

- request received
- request processed
- request received
- request processed
- request received
- request processed
- responses flushed to network
...

That's what the code I sent in my other e-mail is all about.
Basically, I put the output filter in mod_logio (AP_FTYPE_NETWORK - 1),
whose only purpose is to replace the EOS bucket with a FLUSH bucket
(the actual counting is done in the core output filter). Here it is again:

-----------------------------------------------------------------
static apr_status_t logio_out_filter(ap_filter_t *f,
                                     apr_bucket_brigade *bb) {
    apr_bucket *b = APR_BRIGADE_LAST(bb);

    /* End of data: swap the EOS bucket for a FLUSH bucket so the
     * core output filter writes this response out immediately. */
    if (APR_BUCKET_IS_EOS(b)) {
        APR_BRIGADE_INSERT_TAIL(bb,
                 apr_bucket_flush_create(f->c->bucket_alloc));
        APR_BUCKET_REMOVE(b);
        apr_bucket_destroy(b);
    }

    return ap_pass_brigade(f->next, bb);
}
-----------------------------------------------------------------

When the core output filter receives the brigade, it will flush because
of the FLUSH bucket at the end. My main problem is: how do I test this
sucker? What do I have to do to make sure two or more requests are in
fact pipelined?
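For what it's worth, one way to force pipelining from the client side is to
write two requests onto the same TCP connection back-to-back, before reading
any response. Here is a sketch of that idea as a self-contained Python script;
it uses Python's stdlib HTTP server as a stand-in (so it runs anywhere), but
against httpd you'd point the client socket at localhost:80 instead and watch
when the flushes happen:

```python
# Sketch: verify that a server answers two pipelined requests on one
# TCP connection. The stdlib HTTPServer here is only a stand-in target.
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # keep-alive, so pipelining is possible

    def do_GET(self):
        body = ("hello from %s" % self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send both requests before reading anything -- that is what makes
# them pipelined rather than sequential.
sock = socket.create_connection(("127.0.0.1", port))
sock.sendall(
    b"GET /one HTTP/1.1\r\nHost: localhost\r\n\r\n"
    b"GET /two HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
)

response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()
server.shutdown()

# Both responses come back on the same connection, in order.
print(response.count(b"HTTP/1.1 200"))                  # 2
print(response.find(b"/one") < response.find(b"/two"))  # True
```

Against httpd you could combine this with a strace/truss on the child process
(or a packet capture) to see whether each response goes out in its own
write to the socket.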

Bojan
