Re: Optimize behaviour of reverse and forward worker
Jim Jagielski wrote:
> On Feb 14, 2009, at 9:09 AM, Ruediger Pluem wrote:
>> Currently we set is_address_reusable to 0 for the reverse and forward
>> worker. Is this really needed? IMHO we could reuse the connection if it
>> goes to the same target (we already check this).
>>
>> Regards
>>
>> Rüdiger
>
> For the generic proxy workers yes; if we are sure it goes to the exact
> same host we could reuse. The current impl is from a time when, IIRC,
> we didn't check...

Could we also export a routine to create a worker?

Cheers

Jean-Frederic
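No such exported routine existed at the time; as a rough illustration of what a "create a worker" helper could look like, here is a minimal, self-contained sketch. The struct fields and the function name `create_worker` are invented for this example and are not the actual mod_proxy API (the real code manages workers inside the proxy server config and allocates from APR pools):

```c
#include <stdlib.h>
#include <string.h>

/* Toy model of a proxy worker; invented for illustration, not httpd's
 * real proxy_worker struct. */
typedef struct {
    char *name;                /* worker URL / name */
    int   is_address_reusable; /* may backend connections be reused? */
    int   disablereuse;        /* admin-configured override */
} proxy_worker_sketch;

/* Allocate and initialize a worker with reuse-friendly defaults.
 * Returns NULL on allocation failure. */
proxy_worker_sketch *create_worker(const char *name)
{
    proxy_worker_sketch *w = calloc(1, sizeof(*w));
    if (w == NULL)
        return NULL;
    size_t len = strlen(name) + 1;
    w->name = malloc(len);
    if (w->name == NULL) {
        free(w);
        return NULL;
    }
    memcpy(w->name, name, len);
    /* Default to reusable connections, per the thread's suggestion;
     * disablereuse stays 0 unless configured otherwise. */
    w->is_address_reusable = 1;
    return w;
}
```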
Re: Optimize behaviour of reverse and forward worker
On Feb 14, 2009, at 9:09 AM, Ruediger Pluem wrote:
> Currently we set is_address_reusable to 0 for the reverse and forward
> worker. Is this really needed? IMHO we could reuse the connection if it
> goes to the same target (we already check this).
>
> Regards
>
> Rüdiger

For the generic proxy workers yes; if we are sure it goes to the exact
same host we could reuse. The current impl is from a time when, IIRC, we
didn't check...
Re: Optimize behaviour of reverse and forward worker
On 02/14/2009 10:46 PM, Rainer Jung wrote:
> On 14.02.2009 15:09, Ruediger Pluem wrote:
>> Currently we set is_address_reusable to 0 for the reverse and forward
>> worker. Is this really needed? IMHO we could reuse the connection if it
>> goes to the same target (we already check this).
>
> By check you mean the code in ap_proxy_determine_connection()?

Yes.

> The check there seems only to happen in the case where the client
> reuses a keepalive connection. I have the feeling that disablereuse and
> is_address_reusable are used almost in the same way at the moment,
> except for mod_proxy_ftp.

IMHO disablereuse is a configurable option, whereas is_address_reusable
is an internal flag set / unset by the code in various situations.

> Both attributes are always checked together, so both imply the same
> behaviour. What's the expected case where you can actually reuse the
> backend connection? A client using HTTP Keep-Alive and a backend
> connection that's not too busy, so that consecutive client requests to
> the same backend can be sent via the same backend connection?

Especially I have the case in mind where HTTPD acts as a forward proxy
in a proxy chain and forwards all requests to the next proxy in the
chain. IMHO it is a pity that each request creates a new connection to
this proxy.

> Could that be generalized to concurrent client connections C1, C2, ...
> mapping to different backend connections B1, B2, ..., each of them
> reused for the same client connection as long as it lasts (C1 - B1,
> C2 - B2, ...)? If so, we would also need to find good default pool
> configuration for the reverse and forward worker.

IMHO there is no size that fits all, so I would like to make this
configurable by defining special worker names like _forward_ and
_reverse_ which can be configured via

ProxySet _forward_ ...
ProxySet _reverse_ ...

> There's also a use case where proxy requests are defined via
> RewriteRule. In case the host in the rewrite rule is a constant string,
> we would benefit from initializing a real worker, not using the default
> workers.

As said, this can be done today by

<Proxy [common prefix of rewriterule]>
    ProxySet ...
</Proxy>

Regards

Rüdiger
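The workaround described above can be sketched as a concrete configuration. The hostname and the pool parameters below are invented for illustration; `ProxySet` inside a `<Proxy>` container and the worker parameters `keepalive`, `ttl`, `max`, and `smax` are standard mod_proxy options:

```
# Hypothetical example: the RewriteRule proxies to a fixed backend, and
# the matching <Proxy> block defines a real worker for it, so mod_proxy
# pools and reuses backend connections instead of falling back to the
# generic (non-reusing) reverse worker.
RewriteEngine On
RewriteRule ^/app/(.*)$ http://backend.example:8080/app/$1 [P]

<Proxy http://backend.example:8080>
    ProxySet keepalive=On ttl=120 max=50 smax=10
</Proxy>
```

The same `ProxySet key=value` syntax is what the proposed `_forward_` / `_reverse_` worker names would reuse for configuring the default workers' pools.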
Optimize behaviour of reverse and forward worker
Currently we set is_address_reusable to 0 for the reverse and forward
worker. Is this really needed? IMHO we could reuse the connection if it
goes to the same target (we already check this).

Regards

Rüdiger
Re: Optimize behaviour of reverse and forward worker
On 14.02.2009 15:09, Ruediger Pluem wrote:
> Currently we set is_address_reusable to 0 for the reverse and forward
> worker. Is this really needed? IMHO we could reuse the connection if it
> goes to the same target (we already check this).

By check you mean the code in ap_proxy_determine_connection()? The check
there seems only to happen in the case where the client reuses a
keepalive connection.

I have the feeling that disablereuse and is_address_reusable are used
almost in the same way at the moment, except for mod_proxy_ftp. Both
attributes are always checked together, so both imply the same
behaviour.

What's the expected case where you can actually reuse the backend
connection? A client using HTTP Keep-Alive and a backend connection
that's not too busy, so that consecutive client requests to the same
backend can be sent via the same backend connection? Could that be
generalized to concurrent client connections C1, C2, ... mapping to
different backend connections B1, B2, ..., each of them reused for the
same client connection as long as it lasts (C1 - B1, C2 - B2, ...)? If
so, we would also need to find good default pool configuration for the
reverse and forward worker.

There's also a use case where proxy requests are defined via
RewriteRule. In case the host in the rewrite rule is a constant string,
we would benefit from initializing a real worker, not using the default
workers.

Regards,

Rainer