From: Tom Metro <[email protected]>
Date: Tue, 08 Mar 2011 22:52:21 -0500
Some code I'm working on is triggering an out-of-memory error, and I'd
like to figure out what specifically is responsible. (It's a complex
system with dozens of libraries and it runs in parallel across a cluster
of machines. Running the code in a debugger isn't a practical option.)
Figures.
Any recommendation for tools to do this?
0. I assume you are already using ulimit to trigger the error
sooner, rather than later . . .
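For the record, a rough sketch of what I mean (the limit value is in
kilobytes for bash's ulimit; the script name is a stand-in for yours):

```shell
# Cap the virtual address space for a subshell and its children to
# ~512 MB, so the OOM hits in minutes instead of hours.  Doing it in
# a subshell keeps the limit from sticking to your login shell.
(
  ulimit -v 524288      # value is in KB for bash's ulimit
  ulimit -v             # confirm the cap took effect
  perl your_script.pl   # hypothetical script name; substitute yours
)
```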
1. Is it OOP? If you can take a count of "new" method calls, and
emit warnings periodically, you might be able to rule some things out.
2. Is memory being used for Perl objects, or for non-Perl library
objects? Maybe you could recompile Perl and/or other libraries to use a
different allocator, and see which one uses the most at crap-out time.
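For the Perl side of that, a rough sketch of the rebuild, assuming you
have a source tree handy (the prefix is arbitrary; whether the exit-time
statistics dump is available depends on how the malloc was compiled):

```shell
perl -V:usemymalloc    # shows whether the current perl uses its own malloc

# Build a perl that uses its bundled allocator instead of the system one:
sh Configure -des -Dusemymalloc=y -Dprefix=$HOME/perl-mymalloc
make && make test && make install

# With perl's own malloc, PERL_DEBUG_MSTATS=2 asks for allocator
# statistics to be dumped, which can hint at where the memory went.
PERL_DEBUG_MSTATS=2 $HOME/perl-mymalloc/bin/perl your_script.pl
```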
. . .
Is it possible to trap the OOM error? I don't think a __DIE__ handler
catches it . . .
You are correct:
Out of memory!
(X) The malloc() function returned 0, indicating there was
insufficient remaining memory (or virtual memory) to satisfy the
request. Perl has no option but to exit immediately.
(Quoth perldiag.) However, I also notice this in perldiag:
Out of memory during "large" request for %s
(F) The malloc() function returned 0, indicating there was
insufficient remaining memory (or virtual memory) to satisfy the
request. However, the request was judged large enough (compile-time
default is 64K), so a possibility to shut down by trapping this
error is granted.
So:
3. You might be able to use a "large" request to force a trappable
OOM error, and inspect memory then.
You have my sympathies. HTH,
-- Bob Rogers
http://www.rgrjr.com/
_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm