I think the reasoning behind erik's argument is that some error
scenarios are difficult to simulate, and fairly improbable.  Even
given conscientious programmers who code carefully and test reasonably
thoroughly, will testing simulate a failure of every possible attempt
to allocate memory to see what happens?  How about simulating I/O
errors on every possible read or write?  More generally, will it
simulate an error return from every possible library and system call?
All possible error returns from every possible call?
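To make the point concrete, here is a minimal C sketch (names and paths
are mine, not from any particular program) of what those rarely-tested
error paths look like: every allocation, open, write, and close has a
failure branch, and exercising those branches means forcing malloc,
fopen, or fputs to fail on demand, which few test rigs do.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Copy one line into a scratch buffer and append it to a log file.
     * Every call that can fail is checked and has its own cleanup path. */
    static int log_line(const char *line, const char *path)
    {
        char *copy = malloc(strlen(line) + 1);
        if (copy == NULL)                    /* allocation failure path */
            return -1;
        strcpy(copy, line);

        FILE *fp = fopen(path, "a");
        if (fp == NULL) {                    /* open failure path */
            free(copy);
            return -1;
        }
        if (fputs(copy, fp) == EOF) {        /* write failure path (e.g. disk full) */
            fclose(fp);
            free(copy);
            return -1;
        }
        free(copy);
        return fclose(fp) == EOF ? -1 : 0;   /* even close can fail */
    }

    int main(void)
    {
        if (log_line("hello\n", "/tmp/example.log") != 0)   /* hypothetical path */
            fprintf(stderr, "log_line failed: %s\n", strerror(errno));
        return 0;
    }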

To pick a real example of recovery, in C News, we made sure that
incoming messages would not be dropped even if a filesystem filled at
an awkward time.  The code might exit prematurely (since there's usually
no point in banging one's [disk] head against a full filesystem), but
that would just result in the in-process batch being preserved and input
processing stalling until the filesystem got some free space.
Eventually we also started checking free space before processing a
batch.
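For illustration only, a hedged sketch of that pattern in modern C (this
is not the C News source; the function names, spool path, and threshold
are all mine): bail out on a write error without deleting the input, and
refuse to start a batch when the spool filesystem looks full.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/statvfs.h>

    #define MIN_FREE_BYTES (5ULL * 1024 * 1024)   /* arbitrary threshold */

    /* Refuse to start a batch if the spool filesystem looks nearly full;
     * the unprocessed batch simply stays where it is. */
    static int enough_space(const char *spooldir)
    {
        struct statvfs sv;
        if (statvfs(spooldir, &sv) != 0)
            return 0;                              /* be conservative on error */
        return (unsigned long long)sv.f_bavail * sv.f_frsize >= MIN_FREE_BYTES;
    }

    /* On a write error mid-batch (e.g. the disk filled), exit without
     * deleting the input: processing stalls until space appears, but
     * nothing is lost. */
    static void write_or_bail(FILE *out, const char *buf)
    {
        if (fputs(buf, out) == EOF) {
            perror("spool write failed; leaving batch intact");
            exit(1);
        }
    }

    int main(void)
    {
        if (!enough_space("/var/spool/news"))      /* hypothetical spool path */
            return 1;                              /* stall: try again later */
        write_or_bail(stdout, "article data...\n");
        return 0;
    }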

On the other hand, resource exhaustion nowadays can generally be
prevented at little cost: add a few gigabytes of RAM, add a few 500GB
disks for swap or file storage.  woot.com was selling 250GB disks
(admittedly from Western Digital, which I'm not a fan of) for $50 each
the other night.  malloc failure was a genuine possibility on the
PDP-11, with a 16-bit address space, and full file systems happened,
when 300MB disks and drives cost thousands of dollars (I no longer
remember prices; they were costly enough that the 11/70 I ran had two
300MB disks and that seemed like an enormous amount of space [it still
seems like a lot to me]).  These days I would imagine that it is
primarily embedded systems (including games) and supercomputer-like
applications that have problems with resource exhaustion in practice.
