Quoting Andrew Lentvorski <[EMAIL PROTECTED]>:
> So, you allocate a nice set of pages you think is plenty big enough to
> hold your very long, very complicated computation that's going to use
> all that memory and halfway through *BOOM*--out of memory. So, your
> choices are to touch *every single byte* up front in order to make
> Linux actually give you the blasted memory (stupid when I'm actually
> going to touch the memory when I use it anyhow), or bash some
> undocumented things to make sure that the kernel can and will give you
> the memory when you get there.
That's not what (or why) they're doing, though. If it were, they wouldn't ask for the massive amounts they do without actually using or testing any of it. They don't use it; they just malloc it, in which case, because of the default Linux behavior of overcommitting, they won't know whether they'll get halfway through and crash anyway when the app actually goes to use it. They specifically designed this "feature" for speed... it has nothing to do with reliability.
--
Mike Marion - Unix/Linux Admin - http://www.miguelito.org
Marge: Homer, the plant called. They said if you don't show up tomorrow,
don't bother showing up on Monday.
Homer: Woo-hoo! Four-day weekend! ==> Simpsons

--
KPLUG-List@kernel-panic.org
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list