Mike Marion wrote:

Don't get me started on their incredibly stupid-ass code that can be broken by future kernel changes to the values in /proc/meminfo -- doing things like causing a variable overflow that makes a program malloc() anywhere from 400 MB to 980 GB at startup. This was due to a feature they deliberately added to try to malloc all the RAM the app thought it might need at startup... because "it's faster that way."

As much as I hate EDA companies, I'm going to defend them on this even though they got the explanation (sorta) wrong.

The issue is not that it is faster for the application.

The issue is that the Linux kernel *lies*.

If you ask the Linux kernel for memory, it immediately returns "Here ya go, boss," *but doesn't actually allocate the memory* until you touch it. Because, well, that's faster.

So, you allocate a nice set of pages you think is plenty big enough to hold your very long, very complicated computation, and halfway through *BOOM* -- out of memory. So your choices are to touch *every single page* up front in order to make Linux actually give you the blasted memory (silly, since I'm going to touch it all when I use it anyhow), or to bash on some poorly documented knobs to make sure the kernel can and will give you the memory when you get there.

This kind of stuff is the reason I avoid Linux when I want reliability. Yeah, it only bites you at the edges, but there are *lots* of edges. And the Linux community takes its cue from Linus and ignores things that fail 1% of the time if they get in the way of "performance" (see async writes to disk as the default for another example).

-a


--
KPLUG-List@kernel-panic.org
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
