On Thu, May 16, 2002 at 05:24:47PM +0100, Jonathan E. Paton wrote:
> > ln -s /dev/null core
I thought you deserved a beating...

> just stick into any directory where you could
> have a core dump.  You must be doing something
> mission critical to be desperate to stop these,
> but not mission critical enough to want to use
> them.

....until I read this previous paragraph.  Phew! ;-)

> Why not just get cron to recurse over your
> directories eliminating core files when the
> filesystem gets a little cramped?

In that case I'd recommend the above solution as the lesser evil, since it
at least gives you a bit of control over which cores you see and which you
don't.

Well, a *nice* daemon would chdir to '/' anyway, so putting that symlink
there would be the same as traversing the tree and clobbering each and
every coredump one can find, which again sucks.

> I know it is dumb, but why not just wrap your
> perl script in one of these coredump eliminating
> shell scripts - the child processes will get
> those properties.

Or set the limit to disallow cores inside the shellscript.

> Oh, and Perl probably does have a way of doing
> this... try POSIX.pm, or the syscall function
> (you'll need to know what to look for, ask a
> C programmer how they do it).

Nope, it's not in POSIX as far as I know.

Hmm, never used syscall...  Let's harvest some information:

---------- perldoc -f syscall ----------
    syscall LIST
    ...
        The arguments are interpreted as follows: if a given argument
        is numeric, the argument is passed as an int.  If not, the
        pointer to the string value is passed.  You are responsible
        to make sure a string is pre-extended long enough to receive
        any result that might be written into a string.
    ...
            require 'syscall.ph';           # may need to run h2ph
            $s = "hi there\n";
            syscall(&SYS_write, fileno(STDOUT), $s, length $s);
---------- perldoc -f syscall ----------

Hmmmm, looks easy enough...

---------- man getrlimit ----------
    ...
    int getrlimit(int resource, struct rlimit *rlim);
    ...
    getrlimit and setrlimit get and set resource limits respectively.
    resource should be one of:

        RLIMIT_CPU      /* CPU time in seconds */
        RLIMIT_FSIZE    /* Maximum filesize */
        RLIMIT_DATA     /* max data size */
        RLIMIT_STACK    /* max stack size */
        RLIMIT_CORE     /* max core file size */
        RLIMIT_RSS      /* max resident set size */
        RLIMIT_NPROC    /* max number of processes */
        RLIMIT_NOFILE   /* max number of open files */
        RLIMIT_MEMLOCK  /* max locked-in-memory address space */
        RLIMIT_AS       /* address space (virtual memory) limit */
    ...
    The rlimit structure is defined as follows:

        struct rlimit {
            rlim_t rlim_cur;
            rlim_t rlim_max;
        };
    ...
    RETURN VALUE
        On success, zero is returned.  On error, -1 is returned, and
        errno is set appropriately.
---------- man getrlimit ----------

That should get us started...

---------- snip ----------
nijushiho:~$ grep RLIMIT_CORE /usr/include/*/* 2>/dev/null
/usr/include/asm/resource.h:#define RLIMIT_CORE         4   /* max core file size */
/usr/include/bits/resource.h:  RLIMIT_CORE = 4,
/usr/include/bits/resource.h:#define RLIMIT_CORE RLIMIT_CORE
---------- snip ----------

Aha, RLIMIT_CORE is 4...

---------- snip lim.pl ----------
#!/usr/bin/perl

use warnings;
use strict;

require 'syscall.ph';           # for &SYS_getrlimit; may need to run h2ph

my $lim = pack('LL', 0, 0);     # Yepp, I grepped some more in /usr/include/*
                                # to find out the size of rlim_t

print "Returned $!\n" if syscall(&SYS_getrlimit(4, $lim)) < 0;

my ($cur, $max) = unpack('LL', $lim);

print "CUR: $cur\n";
print "MAX: $max\n";
---------- snip lim.pl ----------

Let's start it...

---------- snip ----------
nijushiho:~$ perl lim.pl
Returned Invalid argument
CUR: 0
MAX: 0
nijushiho:~$
---------- snip ----------

Hmmmm, what's up here?!?

Anybody any ideas?
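
P.S.: Rereading the perldoc excerpt above, its example hands the arguments to
syscall itself -- syscall(&SYS_write, fileno(STDOUT), $s, length $s) -- while
lim.pl hands them to &SYS_getrlimit.  Maybe that's where the "Invalid
argument" comes from.  An untested variant, still assuming RLIMIT_CORE == 4
and a 4-byte rlim_t on this box:

---------- snip ----------
#!/usr/bin/perl

use warnings;
use strict;

require 'syscall.ph';           # for &SYS_getrlimit; may need to run h2ph

# struct rlimit { rlim_t rlim_cur; rlim_t rlim_max; };
my $lim = pack('LL', 0, 0);     # assumes rlim_t is a 4-byte unsigned long

# 4 == RLIMIT_CORE; the arguments follow the syscall number
print "Returned $!\n" if syscall(&SYS_getrlimit, 4, $lim) < 0;

my ($cur, $max) = unpack('LL', $lim);

print "CUR: $cur\n";
print "MAX: $max\n";
---------- snip ----------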
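
P.P.S.: If the getrlimit call works out that way, the same idea should turn
cores off from inside the script itself via setrlimit, which would also cover
any children it spawns.  Again untested, same assumptions as above:

---------- snip ----------
#!/usr/bin/perl

use warnings;
use strict;

require 'syscall.ph';           # for &SYS_setrlimit; may need to run h2ph

# rlim_cur = rlim_max = 0 means "no core files at all"
my $lim = pack('LL', 0, 0);

print "Returned $!\n" if syscall(&SYS_setrlimit, 4, $lim) < 0;

# from here on this process (and anything it forks) shouldn't dump core
---------- snip ----------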
-- 
If we fail, we will lose the war.

Michael Lamertz   |   +49 221 445420 / +49 171 6900 310
Nordstr. 49       |   [EMAIL PROTECTED]
50733 Cologne     |   http://www.lamertz.net
Germany           |   http://www.perl-ronin.de

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]