Following up on my own message. Apparently there was a mistake on our part: we were allocating a separate asynchronous client for each of our 64 worker threads instead of sharing a single one. As a result, the per-client memory footprint of the routeToPool map (~256 MB) was multiplied 64-fold to roughly 16 GB. We are now using a single async client for the whole application (~20,000 simultaneous connections) and everything seems to work much better.
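For reference, with HttpAsyncClient 4.x a single shared client can be configured roughly as below; the pool bounds shown are illustrative assumptions for our workload, not recommendations:

```java
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

// One async client for the whole application; it is thread-safe and
// should be shared by all 64 worker threads, not built once per thread.
CloseableHttpAsyncClient client = HttpAsyncClients.custom()
        .setMaxConnTotal(20000)   // illustrative: total connection bound
        .setMaxConnPerRoute(100)  // illustrative: per-route bound
        .build();
client.start();
// ... hand this same instance to every worker ...
```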
Nonetheless, the routeToPool map apparently never shrinks. For a long-running application accessing millions of sites this could be a problem. We will see whether it is possible to modify the class to use Google Guava's caches for this purpose.

--
View this message in context: http://httpcomponents.10934.n7.nabble.com/AbstractNIOConnPool-memory-leak-tp18554p18555.html
Sent from the HttpClient-User mailing list archive at Nabble.com.
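A Guava cache (CacheBuilder with maximumSize and/or expireAfterAccess) would give routeToPool bounded, self-shrinking behaviour. As a minimal sketch of the same eviction idea using only the JDK, an access-ordered LinkedHashMap can evict the least-recently-used route once a bound is exceeded (the route keys, pool values, and bound here are made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RoutePoolCache {
    public static void main(String[] args) {
        final int maxRoutes = 3; // small bound, just for demonstration

        // Access-ordered LinkedHashMap that drops the least-recently-used
        // entry once the bound is exceeded -- the shrinking behaviour that
        // Guava's CacheBuilder.maximumSize() would provide for routeToPool.
        Map<String, String> routeToPool =
                new LinkedHashMap<String, String>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                        return size() > maxRoutes;
                    }
                };

        for (int i = 1; i <= 5; i++) {
            routeToPool.put("http://site" + i + ".example", "pool-" + i);
        }

        // Only the three most recently used routes survive; older ones
        // were evicted instead of accumulating forever.
        System.out.println(routeToPool.keySet());
        // prints [http://site3.example, http://site4.example, http://site5.example]
    }
}
```

Guava would add expiry by time and size in one declaration, but the map above shows the core point: with an eviction policy the pool map stays bounded even across millions of sites.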
