I did a little research.  The overcommitment policy on Linux is configurable:
http://lxr.linux.no/linux/Documentation/vm/overcommit-accounting

Of course, almost everyone will use the default "heuristic" policy, and in
that case the COW memory after fork() is subject to overcommit accounting,
which *may* cause the fork to fail:
http://lkml.indiana.edu/hypermail/linux/kernel/0902.1/01777.html

If a solution using clone(CLONE_VM ...) can be made to work, subprocess
creation will be a little cheaper and significantly more reliable.
Martin

On Sat, May 23, 2009 at 12:09, Andrew Haley <a...@redhat.com> wrote:
> Martin Buchholz wrote:
>
> > I confess to incomplete understanding of the situation on Linux, but...
> > I agree that the limit is artificial,
> > in that there is no doubling of actual memory.
> > It's a monitoring problem, whether internal to the Linux kernel
> > or perhaps some other external software "accounting" entity.
> > Nevertheless, I believe this limit prevents a process currently using
> > 75% of memory from starting a small subprocess, and I
> > expect that clone() with CLONE_VM will fix that.
>
> AFAIAA that should not happen.  I'd be interested to try a test case.
>
> Andrew.