> If system calls were the only way to change memory allocation, one
> could probably keep a strict accounting of pages allocated and fail
> system calls that require more VM than is available. But neither Plan
> 9 nor Unix works that way. The big exception is stack growth. The
> kernel automatically extends a process's stack segment as needed. On
> the pc, Plan 9 currently limits user-mode stacks to 16MB. On a CPU
> server with 200 processes (fairly typical), that's 3.2GB of VM one
> would have to commit just for stacks. With 2,000 processes, that
> would rise to 32GB just for stacks.
16MB for stacks seems awfully high to me. are there any programs that need even 1/32nd of that? 512k still allows 32k levels of recursion for a function taking 4 long arguments (16 bytes of arguments per frame on the pc). a quick count on my home machine and some coraid servers doesn't show any processes using more than 1 page of stack. i think strict accounting of the pages allocated would be an improvement, and i don't see a reason not to shrink the maximum stack size either. the current behavior seems pretty exploitable to me, even remotely, if one can force stack/brk allocation via smtp, telnet or whatnot.

- erik
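
a rough way to sanity-check the per-frame figure (this sketch is illustrative, not something posted in the thread): measure the distance between two recursive frames of a 4-long-argument function and divide 512k by it.

#include <u.h>
#include <libc.h>

/* illustrative only: estimate one frame of a function taking 4 long
   arguments by comparing the address of a local across two levels of
   recursion (the stack grows down on the pc), then see how many such
   levels fit in 512k. */
static ulong top;

static int
f(long a, long b, long c, long d, int depth)
{
	ulong here;

	here = (ulong)&here;
	if(depth == 0){
		top = here;
		return f(a, b, c, d, 1);
	}
	return top - here;	/* bytes consumed by one frame */
}

void
main(void)
{
	int frame;

	frame = f(0, 0, 0, 0, 0);
	print("one frame is about %d bytes; roughly %d levels fit in 512k\n",
		frame, 512*1024/frame);
	exits(nil);
}

on the 32-bit pc each frame also carries the return pc, the depth argument and any locals, so the measured frame comes out somewhat larger than 16 bytes; the order of magnitude, tens of thousands of levels in 512k, is the same either way.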
