My question is, should setuid() fail if the target user's maximum number
of processes (RLIMIT_NPROC) would be exceeded?

Background: in an attempt to manage our webserver to keep too many CGIs
from taking down the machine, I've been experimenting with RLIMIT_NPROC.
This appears to work fine when forking new processes, causing the fork
to fail with error EAGAIN.
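
Here's a rough sketch of the sort of thing I mean (the limit of 16 is
arbitrary, and the paused children are deliberately left around so they
keep counting against the uid's limit):

    #include <sys/types.h>
    #include <sys/resource.h>
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            /* Arbitrary low limit, just for illustration. */
            struct rlimit rl = { 16, 16 };

            if (setrlimit(RLIMIT_NPROC, &rl) == -1) {
                    perror("setrlimit");
                    return 1;
            }

            /* Fork until this uid's process count hits the limit. */
            for (;;) {
                    pid_t pid = fork();

                    if (pid == -1) {
                            if (errno == EAGAIN)
                                    printf("fork: EAGAIN, as expected\n");
                            else
                                    perror("fork");
                            break;
                    }
                    if (pid == 0) {
                            pause();        /* child just sits, counting against the limit */
                            _exit(0);
                    }
            }
            /* A real test would signal and reap the children here. */
            return 0;
    }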

However, this didn't solve our problem. We're using Apache with suexec,
and CGIs would still multiply far beyond the specified resource limit.

Apache forks suexec, which is setuid root; fork1() increments the number
of processes for root, unless root's RLIMIT_NPROC has been exceeded, in
which case the fork fails with EAGAIN.

suexec then calls setuid() (before it calls execv), which decrements
root's process count and increments the target user's process count, but
RLIMIT_NPROC is not consulted, and voila, we've just exceeded the target
user's maximum process count.
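
For concreteness, the sequence suexec runs through looks roughly like
this (the target uid and CGI path are made up, and it has to be running
setuid root, as suexec is, for the setuid() to succeed):

    #include <sys/types.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define TARGET_UID 1001         /* made-up uid for illustration */

    int
    main(void)
    {
            char *args[] = { "cgi-prog", NULL };

            /*
             * Up to here we're still root, so fork1() charged this
             * process against root's per-uid count, not the target
             * user's.
             */
            if (setuid(TARGET_UID) == -1) {
                    /*
                     * As things stand, this doesn't return EAGAIN when
                     * the target uid is already at its RLIMIT_NPROC;
                     * only permission-type failures are reported.
                     */
                    perror("setuid");
                    exit(1);
            }

            /* Now the process counts against the target uid, limit or not. */
            execv("/usr/local/www/cgi-bin/cgi-prog", args); /* made-up path */
            perror("execv");
            exit(1);
    }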

So should the setuid() fail with EAGAIN (or some such) if the target
user's maximum number of processes would be exceeded? Or would this
break too many programs?

Matt

