[ Memory overcommit ]

> One important way to gain confidence that your little box won't
> silently crash at the worst possible time for the customer is to
> be able to *prove* to yourself that it can't happen, given certain
> assumptions. Those assumptions usually include things like "the
> hardware is working properly" (e.g., no ECC errors) and "the compiler
> compiled my C code correctly".
> 
> Given these basic assumptions, you go through and check that you've
> properly handled every possible case of input (malicious or otherwise)
> from the outside world. Part of the "proof" is verifying that you've
> checked all of your malloc(3) return values for NULL... and assuming
> that if malloc(3) returns != NULL, then the memory is really there.
> 
> Now, if malloc can return NULL and the memory *not* really be there,
                 ^^^
I assume you meant 'can't' here, right?

> there is simply no way to prove that your code is not going to crash.

Even in this case, there's no way to prove your code is not going to
crash.

The kernel has bugs, and your software will have bugs (unless you've
proved that it doesn't, and proving that for any significant piece of
software will probably take longer than the time you've spent writing
and debugging it).

And what's to say that your correctly working software won't go bad
right in the middle of a run?

There is no such thing as 100% fool-proof.

> This memory overcommit thing is the only case that I can think of
> where this happens, given the basic assumptions of correctly
> functioning hardware, etc. That is why it's especially annoying to
> (some) people.

If you need 99.9999999% fool-proof, memory overcommit can be one of the
many classes of problems that bite you.  However, in embedded systems,
most folks design the system with particular software in mind.
Therefore, you know ahead of time how much memory should be used, and
can plan for how much memory is needed (overcommit or not) in your
hardware design.  (We're doing this right now in our 3rd generation
product at work.)

If the amount of memory is unknown (because of changing load conditions,
and/or lack-of-experience with newer hardware), then overcommit *can*
allow you to actually run 'better' than a non-overcommit system, though
it doesn't necessarily give you the same kind of predictability when
you 'hit the wall' that a non-overcommit system does.

Our embedded OS doesn't do memory overcommit, but sometimes I wish it
did, because it would give us some things for free.  However, *IF* it
did, we'd need some sort of mechanism (e.g., AIX's SIGDANGER) to warn
that memory was getting tight, so the application could start dumping
unused memory, or at least know that something bad was happening so it
could attempt to clean up before it got whacked. :)



Nate
