On Aug 16, 2006, at 4:21 PM, Leo Lapworth wrote:

Memory is cheap / CPU is cheap (for when you reach Apache::SizeLimit and need to spawn a new process) - your (and other developers') time is not.

Practical response:
Because I'm using some modules with known large memory leaks (OpenSSL wrappers), which I have to deal with.

I'm also the sysadmin on the machine, so I can either spend 16 hours dealing with the mod_perl leaks now, or just ignore them and spend 2 hours a day for eternity dealing with the ramifications of the leaks and keeping things up and running.
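(For what it's worth, a throwaway cleanup handler is usually enough to see which requests are actually growing the children. A rough sketch, mod_perl 1 style -- the module name is made up, and it assumes GTop.pm / libgtop is installed, so check the GTop docs before trusting the numbers:

  # MyApp/SizeLog.pm -- logs each child's size after every request
  package MyApp::SizeLog;
  use strict;
  use GTop ();

  my $gtop = GTop->new;
  my $last = 0;

  sub handler {
      my $r    = shift;
      my $size = $gtop->proc_mem($$)->size;   # this child's size, in bytes
      $r->log_error(sprintf("pid %d %s grew %d bytes (now %.1fMB)",
          $$, $r->uri, $size - $last, $size / (1024*1024))) if $last;
      $last = $size;
      return 0;
  }

  1;

  # httpd.conf:
  #   PerlCleanupHandler MyApp::SizeLog

Run it for a while, grep the error log, and you know which handlers are leaking and roughly how much per hit.)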

If someone else had to manage Apache and the server, I'd say f-it and make them deal. But it's me on both ends.

Devil's advocate response:
I'm not saying I want things perfect; I'm fine with some loss. But a bunch of misc functions I've got going on are leaking 1-4k per request each, on average. Multiply that by the 10-20 such functions in play and that's roughly 40k per request. Yeah, I can just use SizeLimit, but SizeLimit is there to deal with normal process growth (increasing result sets, POST data) and the leaks I can't control. At 40k/request against an average size of 12MB per child, I've doubled the size of the process every ~300 requests (12MB / 40k ≈ 300) from those bugs alone. I'd rather keep MaxRequestsPerChild at 1000 before I start doubling the size of the process. I'd rather have spare memory on my machine and leaner code than something that breaks.
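(For concreteness, the setup being weighed here looks roughly like the sketch below. The numbers are just the ones above, and the Apache::SizeLimit variable names should be double-checked against its docs before relying on them:

  # startup.pl -- rough sketch; values mirror the figures above
  use Apache::SizeLimit;

  # a 12MB child leaking ~40k/request doubles in 12*1024/40 ~= 300 requests,
  # so cap total process size at ~24MB (Apache::SizeLimit takes KB)
  $Apache::SizeLimit::MAX_PROCESS_SIZE       = 24 * 1024;
  $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;

  # httpd.conf:
  #   MaxRequestsPerChild 1000          # recycle children regardless
  #   PerlFixupHandler    Apache::SizeLimit

The point of the numbers is that with the leaks in place SizeLimit ends up recycling children around request ~300, long before MaxRequestsPerChild would.)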



