24.02.2014 05:07, Alan McKinnon wrote:
[...]

We don't do error handling. We don't even try and deal with it at the
point it occurred, we just chuck it back up the stack, essentially
giving the message "stuff it, I'm not dealing with this. You called me,
you fix it."

Doesn't sound like good design, does it? Sounds more like do whatever you
think you can get away with. Good design in this area gives you
something conceptually along the lines of try...catch...finally (with
possibly some work done to avoid throwing another exception in the
finally).

try...catch...finally *does* leave error handling to *the caller*. It merely provides a more object-oriented approach to error handling; it *does not* *handle* errors.

Unix error "design" does this:

exit <some arb number>
and an error message is in $@ if you feel like looking for it

Please propose a sounder design. Take e.g. jQuery, where all errors are handled by the library: it sometimes takes ages to debug why it doesn't work as expected. After a while you quickly figure out why error handling *should* be done by the caller, and that the only thing the callee can do reliably is pass an error message upstream. Good error messages (and error codes, or an error class hierarchy) are a different problem, but I haven't seen a more proven solution yet.

Strangely, this approach is exactly why Unix took off and got such
widespread adoption throughout the 70s. An engineer will understand that
a well-thought-out design that is theoretically correct requires an
underlying design that is consistent. In the 70s, hardware consistency
was a joke - every installation was different. Consistent error handling
would severely limit the arches this new OS could run on. By taking a
"Stuff it, you deal with it coz I'm not!" approach, the handling was
fobbed off to a higher layer that was a) not really able to deal with it
and b) at least in a position to try *something*.

By ripping out the theoretical correctness aspects, devs were left with
something that actually could compile and run. You had to bolt on your
own fancy bits to make it reliable but eventually over time these things
too stabilized into a consistent pattern (mostly by hardware vendors
going bankrupt and their stuff leaving the playing field).

And so we come to what "Unix design" probably really is:

"You do what you have to to get the job done, the simpler the better,
but I'm not *really* gonna hold you to that."

A good design is based on:
- consistency
- isolation and substitution of components
- component reuse
- thorough documentation
(a free interpretation of [1])

This almost always leads to many simple components, and that, AFAIU, is what is called the "Unix design principles".

The problem with Unix is that it doesn't follow the "Unix design principles" any more. But that doesn't invalidate *the principles*.

I still don't like what Lennart has done with this project, but I also
fail to see what design principle he has violated.

As per [1], I fail to see what design principle he has followed.

[1] http://en.wikipedia.org/wiki/Software_design#Design_concepts

--
Regards,
Yuri K. Shatroff
