On Fri, Jun 14, 2013 at 12:40 PM, Steven D'Aprano <steve+comp.lang.pyt...@pearwood.info> wrote:
> On Thu, 13 Jun 2013 20:15:42 +0000, Giorgos Tzampanakis wrote:
>
>>> Therefore: if the leak seems to be small, it may be much more advisable
>>> to restart your process periodically (during times when a restart does
>>> not hurt much) rather than try to find (and fix) the leaks. Only when
>>> the leak is large enough that it would force you into too-frequent
>>> restarts may a deeper analysis be advisable (large leaks are easier to
>>> locate as well).
>>
>> Am I the only one who thinks this is terrible advice?
>
> Sub-optimal, maybe, but terrible? Not even close. Terrible advice would
> be "open up all the ports on your firewall, that will fix it!"
> ...
>
> My advice is to give yourself a deadline:
>
> "If I have not found the leak in one week, or found and fixed it in three
> weeks, then I'll probably never fix it and I should just give up and
> apply palliative reboots to work around the problem."
>
> Either that or hire an expert at debugging memory leaks.
It's terrible advice in general, because it encourages sloppy thinking:
"Memory usage doesn't matter, we'll just instruct people to reset
everything now and then."

When you have a problem on your hands, you always have to set a
deadline [1], but sometimes you have to set the deadline the other way,
too: "I'll just reboot it now, but if it runs out of memory within a
week, I *have* to find the problem."

Also, I think everyone should have at least one shot at a project that
has to stay up for multiple months, preferably a year. Even if you never
actually achieve a whole year of uptime, *think* that way. It'll help
you put things into perspective: "If I were running this all year, that
might be an issue; but who cares about a memory leak in a script that's
going to be finished in an hour?"

[1] cf. http://www.gnu.org/fun/jokes/last.bug.html

ChrisA
--
http://mail.python.org/mailman/listinfo/python-list
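For anyone taking that one-week shot at actually finding the leak, one stdlib-only first step (a sketch, not the thread's prescribed method) is to snapshot live-object counts via the `gc` module before and after the suspect code and diff them. The `leaky` list below just simulates a leak for illustration; in a real long-running process you'd compare snapshots taken over time:

```python
import gc
from collections import Counter

def object_census():
    """Count live gc-tracked objects, keyed by type name."""
    return Counter(type(o).__name__ for o in gc.get_objects())

before = object_census()
leaky = [list(range(10)) for _ in range(1000)]  # simulated leak
after = object_census()

# Counter subtraction keeps only positive deltas, i.e. types that grew.
for name, growth in (after - before).most_common(5):
    print(name, growth)
```

A jump of ~1000 in the `list` count points straight at the simulated culprit; on a real leak, whichever type keeps climbing between snapshots is where to start digging.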