On 10/22/2008 05:33 PM, Jim Jagielski wrote:
>
> On Oct 22, 2008, at 11:05 AM, Manik Taneja wrote:
>
>>
>> With regards to the patch, it seems that the changes in event.c have
>> no effect; in any case, we have seen the leak with both the worker and
>> event MPMs. The changes in scoreboard.c just ensure that the memory
>> pointed to by new_sbh is not lost every time ap_create_sb_handle() is
>> called.
>>
>
> But this implies that any code which calls ap_create_sb_handle() needs
> to ensure that *new_sbh isn't NULL, and, IIRC, most just define them
> (e.g.: ap_sb_handle_t *sbh;) so the value is garbage. Thus, this seems
> to also imply a change to the API (since current code will break
> with the ap_create_sb_handle() change).
>
>> I have tested this fix and the memory consumption has dropped
>> drastically; however, I still observe a slow leak at times. I
>> haven't had the opportunity to spend a whole lot of time testing the
>> fix. It's on my plate, however, and I'll send out my findings to the
>> group sometime next week.
>>
>
> Agreed that the mod_proxy_http.c patch looks reasonable at first
> blush, but we need to look into all possible interactions with the
> change to a shorter-lived pool.
There is a comment above the code explaining why we should use connection->pool,
but IMHO it is outdated and dates back to the time when there was a direct
coupling between the backend connection and the frontend connection.
Since the introduction of the connection pools this is no longer the case, and I
cannot think of any reason not to use r->pool instead.
I used a slightly modified version of the patch:
Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c (revision 707022)
+++ modules/proxy/mod_proxy_http.c (working copy)
@@ -1869,23 +1869,9 @@
     const char *u;
     proxy_conn_rec *backend = NULL;
     int is_ssl = 0;
-
-    /* Note: Memory pool allocation.
-     * A downstream keepalive connection is always connected to the existence
-     * (or not) of an upstream keepalive connection. If this is not done then
-     * load balancing against multiple backend servers breaks (one backend
-     * server ends up taking 100% of the load), and the risk is run of
-     * downstream keepalive connections being kept open unnecessarily. This
-     * keeps webservers busy and ties up resources.
-     *
-     * As a result, we allocate all sockets out of the upstream connection
-     * pool, and when we want to reuse a socket, we check first whether the
-     * connection ID of the current upstream connection is the same as that
-     * of the connection when the socket was opened.
-     */
-    apr_pool_t *p = r->connection->pool;
+    apr_pool_t *p = r->pool;
     conn_rec *c = r->connection;
-    apr_uri_t *uri = apr_palloc(r->connection->pool, sizeof(*uri));
+    apr_uri_t *uri = apr_palloc(p, sizeof(*uri));
 
     /* find the scheme */
     u = strchr(url, ':');
@@ -1893,7 +1879,7 @@
         return DECLINED;
     if ((u - url) > 14)
         return HTTP_BAD_REQUEST;
-    scheme = apr_pstrndup(c->pool, url, u - url);
+    scheme = apr_pstrndup(p, url, u - url);
     /* scheme is lowercase */
     ap_str_tolower(scheme);
     /* is it for us? */
and it passes all Perl framework tests.
Regards
Rüdiger