> One might allocate at least 3.2GB of swap for a 4GB machine, but many
> of our machines run with no swap, and we're probably not alone.  And
> 200 processes are not a lot.  Would you really have over 32GB of swap
> allocated for a 4GB machine with 2,000 processes?
> 
> Programs can use a surprising amount of stack space.  A recent notable
> example is venti/copy when copying from a nightly fossil dump score.
> I think we want to be generous about maximum stack sizes.
> 
> I don't think that estimates of VM usage would be an improvement.  If
> we can't get it exactly right, there will always be complaints.

perhaps venti/copy's current behavior could be worked around by allocating
on the heap instead of using the stack.  we don't have to base the design
around what venti/copy does today.
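
for example (a made-up sketch, not venti/copy's actual code; Bufsize
and the function names are illustrative):

	#include <u.h>
	#include <libc.h>

	enum { Bufsize = 8*1024*1024 };	/* illustrative size */

	/* before: Bufsize bytes carved out of the stack */
	void
	stackbuf(void)
	{
		uchar buf[Bufsize];

		memset(buf, 0, sizeof buf);
	}

	/* after: same space from the heap; allocation can fail cleanly */
	void
	heapbuf(void)
	{
		uchar *buf;

		buf = malloc(Bufsize);
		if(buf == nil)
			sysfatal("malloc: %r");
		memset(buf, 0, Bufsize);
		free(buf);
	}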

why would it be unacceptable to have a maximum stack allocation
system-wide?  say 16MB.  this would allow us not to overcommit memory.

if we allow overcommitted memory, *any* access of brk'd memory might page
fault.  this seems like a real step backwards in error recovery, as most
programs assume that malloc either returns n bytes of valid memory or fails.
since this assumption is false under overcommit, either we need to make it
true or fix most programs.
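
a minimal sketch of the idiom i mean (this emalloc is illustrative, not
any particular program's):

	#include <u.h>
	#include <libc.h>

	/* what most code assumes: n valid bytes or nil */
	void*
	emalloc(ulong n)
	{
		void *p;

		p = malloc(n);
		if(p == nil)
			sysfatal("out of memory");	/* the recoverable path */
		/* with overcommit the failure moves here, to first touch,
		   where the program can do nothing about it */
		memset(p, 0, n);
		return p;
	}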

upas/fs fails in this way for us all the time.

this would have more serious consequences if, say, venti or fossil suffered
a similar fate.

- erik
