> On the pc, Plan 9 currently limits user-mode stacks to 16MB.
> On a CPU server with 200 processes (fairly typical), that's
> 3.2GB of VM one would have to commit just for stacks.  With
> 2,000 processes, that would rise to 32GB just for stacks.

There's probably no simple answer that's correct for every set of
goals.

For an embedded widget, you might want to create a small number
of processes and be utterly sure none of them would run out of
RAM (which might be small).  If you think your stuff fits in
small stacks you'd probably like to know as early as possible
if it doesn't, so the kernel "helpfully" giving you 16-meg
stacks might not be so helpful.
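
The "fail as early as possible" idea can be sketched in C: touch
every page of the stack space you intend to use at startup, so the
kernel has to commit (or refuse) it immediately rather than killing
the program later at some arbitrary point.  STACKSZ and the
4096-byte page size here are assumptions for illustration, not
anything the kernel promises:

```c
#include <stddef.h>

enum { STACKSZ = 64 * 1024, PAGESZ = 4096 };

/* Pre-touch STACKSZ bytes of stack.  Writing one byte per page
 * forces each page to be committed now, while an out-of-memory
 * failure is still easy to diagnose. */
static void
prefault_stack(void)
{
	volatile char buf[STACKSZ];
	size_t i;

	for (i = 0; i < sizeof buf; i += PAGESZ)
		buf[i] = 0;
}
```

Under an overcommitting kernel this only helps if stack-growth
failures are delivered at the faulting write; with a committed
small stack it's unnecessary, which is rather the point.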

For a web server you probably want some number of parallel
requests to run to completion and excess requests to be queued
and/or retried by the (remote) browser.  Overcommitting seems
likely to be harmful here, since each process that dies when
it tries to grow its stack won't complete, and may return
a nonsense answer to the client.  It seems like you could thrash,
with most processes running for a while before getting killed.

Overcommitted 16-meg stacks are probably fine for lightly-loaded
CPU servers running random mixes of programs... but I suspect
other policies would also be fine for this case.

Personally I am not a fan of programs dying randomly because of
the behavior of other programs.  So I guess my vote would be for
a small committed stack (a handful of pages) with an easy way for
processes with special needs to request a (committed) larger size.
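
As a rough POSIX analogy for that escape hatch (request_stack() is
a hypothetical helper, not an existing Plan 9 interface), a process
with special needs could ask for a larger stack limit up front and
learn immediately whether it was granted:

```c
#include <sys/resource.h>

/* Hypothetical helper, POSIX analogy only: raise the soft stack
 * limit to `bytes`, failing up front if the hard limit forbids it,
 * so the process never dies mid-run for lack of stack it assumed
 * it had. */
static int
request_stack(rlim_t bytes)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_STACK, &rl) < 0)
		return -1;
	if (rl.rlim_max != RLIM_INFINITY && bytes > rl.rlim_max)
		return -1;	/* refused: caller finds out now */
	rl.rlim_cur = bytes;
	return setrlimit(RLIMIT_STACK, &rl);
}
```

Ordinary processes would keep the small committed default; only the
few that know they recurse deeply would pay for more.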

But I'd probably prefer an OHCI USB driver first :-)

Dave Eckhardt
