On Fri, 11 Feb 2005 13:18:07 +1300, Carl Cerecke wrote:

> 1. It has a built-in death by comparing the current time to a future
> time that is stored in a shared memory segment accessible by all
> processes. If that time has arrived or passed, then it all shuts down,
> and no harm done.
If it didn't do the time comparison and the message to stderr, you could
have a tighter loop and therefore a higher rate of child processes being
spawned. Would this make it more lethal?

> 2. There is some sort of limit, probably per-user, resulting in a
> maximum of just over 4000 processes (probably 12-bit: 2^12 == 4096)
>
> 3. The rest of the system is surprisingly responsive, but anything that
> involves retrieving process information (e.g. ps or top) takes pretty
> much forever. root can still cd and ls etc.
>
> 4. running killall as root (Q: Why is it pointless running killall as
> the user that started the fork program?) will sometimes kill a handful
> of the processes, sometimes none. Maximum niceness (or is that
> nastiness?) doesn't seem to help. The 30-second timeout happens anyway.

Are you talking about the niceness of 'fork' or of 'killall'?

> 5. ctrl-c in the shell that started the fork program will kill all of
> them straight away. I'm not yet sure how the shell does that.

What about 'nohup ./fork &'? Does ctrl-c work in tight loops that don't
do any i/o? Can fork.c contain a handler for SIGINT and SIGHUP that
basically ignores these, making it nastier? Would doom ps be a fun way
to kill these processes?

> 6. You will get some impressive load averages. Unfortunately, the
> integer part of the load average is only 10 bits long, so no averages
> above 1024. (details in /usr/src/linux/include/linux/sched.h)
>
> 7. This was tried on suse 9.1 (linux 2.6.4-52-default)

Yuri

--
** WARNING to mailing list repliers ** Gmail overrides the "Reply-To:"
field. Check your "To:" address before sending a reply to this post.
