On Sun, 2012-12-23 at 18:33 -0800, vigna wrote:
> I'm following up on myself.
>
> Apparently there was a mistake on our part: we were allocating an
> asynchronous client for each of our 64 worker threads, instead of having a
> single one. In this way the memory allocation per client of the routeToPool
> (~256M) skyrocketed to 16G. We are now using a single async client for the
> whole application (~20,000 simultaneous connections) and everything seems to
> work much better.
>
> Nonetheless, the routeToPool map will apparently never shrink. For a long-term
> application accessing millions of sites this might be a problem. We will see
> whether it is possible to modify the class so as to use Google Guava's caches
> for this purpose.
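For reference, a single shared async client backed by one connection manager can be set up roughly as in the sketch below. This is a minimal illustration assuming the HttpAsyncClient 4.x API (class names differed in the beta releases current at the time of this thread); the pool sizes and the URL are placeholders, not the poster's actual configuration.

    import java.util.concurrent.Future;

    import org.apache.http.HttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
    import org.apache.http.impl.nio.client.HttpAsyncClients;
    import org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager;
    import org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor;

    public class SharedAsyncClient {

        public static void main(String[] args) throws Exception {
            // One I/O reactor and one connection manager for the whole application.
            DefaultConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
            PoolingNHttpClientConnectionManager cm =
                    new PoolingNHttpClientConnectionManager(ioReactor);
            // Size the pool for many simultaneous connections instead of
            // creating one client per worker thread (illustrative values).
            cm.setMaxTotal(20000);
            cm.setDefaultMaxPerRoute(2);

            CloseableHttpAsyncClient client = HttpAsyncClients.custom()
                    .setConnectionManager(cm)
                    .build();
            client.start();

            try {
                // All worker threads submit requests through this single client instance.
                Future<HttpResponse> future =
                        client.execute(new HttpGet("http://example.com/"), null);
                System.out.println(future.get().getStatusLine());
            } finally {
                client.close();
            }
        }
    }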
One needs to call the #closeExpiredConnections and/or #closeIdleConnections methods on the connection pool in order to proactively evict expired and/or idle connections from the pool (a sketch of such a periodic eviction task follows below).

I think the reason for the large memory footprint is not the routeToPool map itself but rather all sorts of stuff still stuck in the I/O session context from the last request execution. Generally, it is the responsibility of the caller to remove objects from the local context upon request completion. However, certain cleanups could (and should) be done by the framework. Feel free, though, to raise a JIRA for this issue and I will make sure it gets looked into.

Oleg
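A periodic eviction task along these lines is shown in the following minimal sketch. It assumes a PoolingNHttpClientConnectionManager instance is available; the 30-second idle timeout and the 5-second schedule are arbitrary placeholders to be tuned per application.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager;

    public class ConnectionEvictor {

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public ConnectionEvictor(final PoolingNHttpClientConnectionManager cm) {
            // Periodically evict expired connections and connections that have
            // been idle for more than 30 seconds, so the pool can shrink.
            scheduler.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    cm.closeExpiredConnections();
                    cm.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }, 5, 5, TimeUnit.SECONDS);
        }

        public void shutdown() {
            scheduler.shutdownNow();
        }
    }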
