On Fri, Jul 5, 2013 at 6:43 PM, Thomas Daede <[email protected]> wrote:
> On Fri, Jul 5, 2013 at 5:05 PM, Daniel Micay <[email protected]> wrote:
>> You can rely on it, it's the standard behaviour on Linux. The actual
>> consumed memory will be equal to the size of the pages that have been
>> touched.
>
> Has anyone actually tested the performance of a highly fragmented
> page table resulting from very small increments in stack usage? If
> Linux lazily allocates pages, wouldn't that involve a potentially
> large lookup cost in the kernel, similar to or greater than Rust's
> userspace segmented stack usage? Why can the kernel supposedly perform
> so much better?
It's very fast and has little overhead. We're not going to come close to
the performance of the kernel and the MMU. The kernel asks the MMU to
fault on writes to the pages, so it's close to free.

If you allocate 100k 2MiB stacks and touch one page in each, the memory
usage in userland is 390MiB as expected. If you touch 4 pages in each,
the userland memory usage is 1560MiB, and there's only a space overhead
of maybe 20% separate from the process (it quickly drops to less than 1%
as you touch more pages, but it's hard to measure accurately).

_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
