This topic has been addressed before, but the answer at the time was usually to use (or wait for) 0.2.1.x, or to switch to another allocator. There must be a real solution to this somewhere.
I run a reasonably fast (500KB/s) node with the Guard+Fast+Stable flags, so it's a popular destination and runs at bandwidth capacity at all times. The only problem is the massive memory usage that results: at the moment, Tor has 748MB resident (RES) with almost 7 days of uptime. After a restart it generally grows by 100-200MB per day before topping out around that figure.

My understanding is that most of the memory is tied to the open connections: socket buffers, SSL buffers, and so on. Right now (according to /proc/<pid>/fd), Tor has 5,364 open connections. Short of limiting available FDs, which might hurt the node's performance, what can I do to lower memory usage?

It's currently running the Debian testing build, 0.2.1.20-2, with openssl 0.9.8g-15+lenny6. I'm not against doing custom builds of Tor or OpenSSL if it will help.

I had similar problems on my previous machine running Gentoo, where I tried the different allocator configurations, and they had little or no effect. Somebody else must have run into this and found some sort of solution; I can't have Tor taking up half the available memory on my system. Suggestions would be much appreciated.

Thanks,
- John Brooks

***********************************************************************
To unsubscribe, send an e-mail to [email protected] with
unsubscribe or-talk in the body. http://archives.seul.org/or/talk/
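(For anyone who wants to reproduce the /proc check above, here is a minimal sketch. It assumes a Linux procfs; the PID used here is the shell's own, as a stand-in for Tor's actual PID, which you would get from e.g. `pidof tor`.)

```shell
#!/bin/sh
# Count a process's open file descriptors via /proc.
# $$ (this shell's PID) is a placeholder -- substitute Tor's PID.
pid=$$
fd_count=$(ls "/proc/$pid/fd" | wc -l)
echo "PID $pid has $fd_count open fds"
# Open sockets appear as symlinks of the form "socket:[inode]",
# so counting those separates connections from plain files:
sock_count=$(ls -l "/proc/$pid/fd" 2>/dev/null | grep -c 'socket:')
echo "of which $sock_count are sockets"
```

Comparing the socket count against the total fd count over time should show whether the growth tracks connection count, as suspected above.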

