I have a quick question, just out of curiosity, about the whole ulimit and
core-dumping thing being discussed here... could one implement something
in the code to estimate how big a core the mud will dump at any given
time (that way you'd only have to do it once as your mud expands in size),
then use a signal handler to catch the segfault, check the space currently
allowed for core dumps, and use a system() call to expand that limit with
a ulimit command before the mud actually tries to dump a core? That way
you'd never have to worry about there not being enough space...

Richard Lindsey 

-----Original Message-----
From: Tom Whiting [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 12, 2004 2:40 PM
To: [email protected]
Subject: Re: Segmentation Fault

On Wed, 2004-05-12 at 15:04, [EMAIL PROTECTED] wrote:
> > I keep getting this message in my shell. No crashes, no core dumps,
> > just a Segmentation fault, mud kills, then restarts.
> 
> Are you running on Redhat?  A couple years ago when I first moved our
> mud to Redhat it would no longer dump core.  The problem, as it turned
> out, was the default system resource limit for core files was set at
> something like 2 meg.  Our mud tends to dump cores over 10meg, so this
> prevented it from making any core files at all.
> 
> Here is a snippet that will help you see if that kernel restriction is
> stopping core files from being generated.
> 
No snippet is needed, actually: simply type ulimit -a inside the
shell and you'll see what your current limits are. If you're dumping cores
that are larger than your limit, then yes, it will bail on you. The best
way to find the problem in that case would be to run the mud in gdb (see
the ROM FAQ on how to do that), set breakpoints, and use backtrace when
it dies. Either that, or use valgrind, which is a decent tool as well.
Valgrind, however, will (usually) only detect memory problems.
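(A quick sketch of that check in the shell, assuming a bash-style shell; the binary name and port in the comments are placeholders, not from anyone's actual setup:)

```shell
# Show all current per-process resource limits; look for the
# "core file size" line (0 or a small number means truncated cores).
ulimit -a

# Show just the core-file limit on its own (in 512-byte blocks,
# or "unlimited"):
ulimit -c

# To lift the cap for this shell and everything it starts, then run
# the mud under gdb and get a backtrace when it dies:
#   ulimit -c unlimited
#   gdb ./rom
#   (gdb) run 4000
#   (gdb) bt          # after the segfault, shows the call stack
```

The limit is inherited by child processes, so it has to be raised in the same shell (or startup script) that launches the mud.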


-- 
ROM mailing list
[email protected]
http://www.rom.org/cgi-bin/mailman/listinfo/rom
