> -----Original Message-----
> From: Graham Leggett
> Sent: Dienstag, 16. November 2010 10:53
> To: [email protected]
> =====================================================================
> >> --- httpd/httpd/trunk/modules/proxy/proxy_util.c (original)
> >> +++ httpd/httpd/trunk/modules/proxy/proxy_util.c Tue Nov 16
> >> 00:23:37 2010
> >> @@ -2643,19 +2643,17 @@ PROXY_DECLARE(int) ap_proxy_connection_c
> >> apr_sockaddr_t *backend_addr = conn->addr;
> >> int rc;
> >> apr_interval_time_t current_timeout;
> >> - apr_bucket_alloc_t *bucket_alloc;
> >>
> >> if (conn->connection) {
> >> return OK;
> >> }
> >>
> >> - bucket_alloc = apr_bucket_alloc_create(conn->scpool);
> >> /*
> >> * The socket is now open, create a new backend server
> connection
> >> */
> >> conn->connection = ap_run_create_connection(conn->scpool, s,
> >> conn->sock,
> >> 0, NULL,
> >> - bucket_alloc);
> >> + c->bucket_alloc);
> >
> > -1 Veto. This does not work.
>
> Just to clear up any possible perception of otherwise, I have spent
> hours and hours trying to pick apart this code and try to understand
> exactly what both mod_proxy and mod_ssl are doing, and why
> mod_proxy_http is behaving so dramatically differently to
> mod_proxy_ftp and mod_proxy_connect, in order to fix a very real
> problem in a very real set of datacentres. I would appreciate it if
> you could help me get to the bottom of any issues I may not be
> understanding so that this can be fixed once and for all. You don't
> need to veto anything to get my attention, you can just say "this
> won't work and this is why". :)
Sorry for having been in grumpy mode this morning, but I saw this code
doing exactly what I had pointed out before would not work :-).
>
> > The socket bucket of the backend connection is created from this
> > bucket allocator. Hence the reuse of backend connections will be
> > broken, as c->bucket_alloc will be gone by the time the backend
> > connection, and hence the socket bucket, is reused.
>
> OK, I originally understood that the conn_rec was recreated each
> time, but this isn't the case.
>
> This is making more sense.
>
> What it seems we need to do is keep wrapping buckets in transient
> buckets as we were doing before, and then, for the final EOS case,
> force the setaside to guarantee that that last set of buckets has
> been physically moved to the frontend connection, so the backend can
> be safely released and reused.
That looks like a plan. *Before* we send the final brigade (which has had
its buckets transformed into transient buckets of c->bucket_alloc)
containing the EOS bucket, do a setaside on each bucket in this brigade
(maybe via ap_save_brigade). Maybe something along the following lines
(completely untested and only typed on a scratchpad):
    /* Switch the allocator lifetime of the buckets */
    ap_proxy_buckets_lifetime_transform(r, bb, pass_bb);

    /* found the last brigade? */
    if (APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(pass_bb))) {
        apr_bucket_brigade *saved_bb;

        /* signal that we must leave */
        finish = TRUE;
        saved_bb = apr_brigade_create(r->pool,
                                      r->connection->bucket_alloc);
        ap_save_brigade(NULL, &saved_bb, &pass_bb, r->pool);
        pass_bb = saved_bb;
        apr_brigade_cleanup(bb);

        /* make sure we release the backend connection as soon
         * as we know we are done, so that the backend isn't
         * left waiting for a slow client to eventually
         * acknowledge the data.
         */
        ap_proxy_release_connection(backend->worker->scheme,
                                    backend, r->server);
    }

    /* try to send what we read */
    if (ap_pass_brigade(r->output_filters, pass_bb) != APR_SUCCESS
        || c->aborted) {
        /* Ack! Phbtt! Die! User aborted! */
        backend->close = 1; /* this causes socket close below */
        finish = TRUE;
        apr_brigade_cleanup(bb);
    }

    /* make sure we always clean up after ourselves */
    if (!finish) {
        apr_brigade_cleanup(bb);
    }
    apr_brigade_cleanup(pass_bb);
Regards
Rüdiger