On Fri, Jul 5, 2013 at 5:05 PM, Daniel Micay <[email protected]> wrote:
> You can rely on it, it's the standard behaviour on Linux. The actual
> consumed memory will be equal to the size of the pages that have been
> touched.
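
For reference, the demand-paging behaviour being described is easy to observe directly. Here is a minimal C sketch (mine, not from the quoted message) that reserves a large anonymous mapping and then shows, via VmRSS in /proc/self/status, that resident memory only grows for the pages actually touched:

/* Sketch: Linux demand paging. An anonymous mmap reserves address
 * space, but resident memory only grows for pages that are touched. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Print the current resident set size (VmRSS) from /proc/self/status. */
static void print_rss(const char *label)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    if (!f)
        return;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            printf("%s: %s", label, line);
            break;
        }
    }
    fclose(f);
}

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t len = 1024UL * 1024 * 1024;   /* reserve 1 GiB of address space */

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    print_rss("after mmap (untouched)");

    /* Touch only the first 16 pages; only these become resident. */
    for (size_t i = 0; i < 16 * page; i += page)
        p[i] = 1;

    print_rss("after touching 16 pages");

    munmap(p, len);
    return 0;
}

VmRSS should barely move after the mmap itself, then grow by roughly 16 pages after the touch loop.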
Has anyone actually tested the performance of a highly fragmented page table resulting from very small increments in stack usage? If Linux allocates pages lazily, wouldn't that involve a potentially large lookup cost in the kernel, similar to or greater than the cost of Rust's userspace segmented stacks? Why would the kernel supposedly perform so much better?
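
One way to get a first number for this (a rough sketch of my own, not a definitive test) is a micro-benchmark that times first-touch faults on a lazily allocated anonymous mapping, which approximates the per-page cost the kernel pays when it grows a mapping one page at a time:

/* Sketch: time the first-touch (minor) page faults on a lazily
 * allocated anonymous mapping, one page at a time. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t npages = 256 * 1024;              /* ~1 GiB with 4 KiB pages */
    size_t len = npages * page;

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* The first write to each page triggers one fault into the kernel. */
    for (size_t i = 0; i < len; i += page)
        p[i] = 1;

    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns per first-touch page fault\n", ns / npages);

    munmap(p, len);
    return 0;
}

That only measures the fault path, of course, not the cost (if any) of later TLB misses and page-table walks over a sparsely populated region, which is the part of the question I'd most like to see real numbers for.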
