On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via Digitalmars-d wrote:
> You're using a different definition of "component".

System granularity is decided by the designer. You either allow people to design their own systems or you force your design on them; doing both at once is a contradiction.

> An inconsistency in a transaction is a problem with the input, not a
> problem with the program logic itself.

The distinction between kinds of failures doesn't matter. A reliable system manages any failure, especially unexpected and unforeseen ones, even when no diagnostic is available.

> If something is wrong with the input, the program can detect it and
> recover by aborting the transaction (rolling back the wrong data). But
> if something is wrong with the program logic itself (e.g., it committed
> the transaction instead of rolling back when it detected a problem),
> there is no way to recover within the program itself.

Not the case for an airplane: it recovers from any failure within itself. That's another indication that the airplane example contradicts Walter's proposal. See my post about the big picture.
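
For reference, the distinction quoted above is roughly what D's Exception/Error split encodes: an input problem is an Exception you can recover from by rolling back, while a logic problem is an assert failure (an Error) that nothing inside the process should try to continue past. A minimal sketch; the Transaction type and its commit/rollback are hypothetical, purely for illustration:

import std.exception : enforce;
import std.stdio : writeln;

struct Transaction
{
    bool open = true;
    void commit()   { open = false; writeln("committed"); }
    void rollback() { open = false; writeln("rolled back"); }
}

void process(int input)
{
    auto tx = Transaction();
    scope (failure) tx.rollback();  // bad input: undo and keep running

    // A problem with the *input* is an Exception: recoverable by
    // aborting the transaction.
    enforce(input >= 0, "negative input rejected");
    tx.commit();

    // A problem with the *program logic* is an assert failure: no
    // handler inside the process should attempt to continue from it.
    assert(!tx.open, "logic bug: transaction still open after commit");
}

void main()
{
    try
    {
        process(-1);
    }
    catch (Exception e)  // input problem: recovered, keep going
    {
        writeln("recovered: ", e.msg);
    }
    process(42);  // an assert failure here would abort the program
}

Note that scope(failure) only does cleanup during unwinding; nothing here catches the Error an assert failure would produce.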

> A failed component, OTOH, is a problem with program logic. You cannot
> recover from that within the program itself, since its own logic has
> been compromised. You *can* roll back the wrong changes made to data
> by that malfunctioning program, of course, but the rollback must be
> done by a decoupled entity outside of that program. Otherwise you
> might end up causing even more problems (for example, due to the
> compromised / malfunctioning logic, the program commits the data
> instead of reverting it, thus turning an intermittent problem into a
> permanent one).

No misunderstanding; I think Walter's idea is good, just not always practical, and real critical systems don't work the way he describes: they make more complicated tradeoffs.
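
For what it's worth, the "decoupled entity" from the quote above is typically just a separate supervisor process: it watches the worker's exit status, and because the worker's own logic is suspect, the supervisor, not the worker, performs the rollback and the restart. A minimal sketch, assuming a hypothetical worker binary ./worker and a placeholder rollback step:

import std.process : spawnProcess, wait;
import std.stdio : writeln;

// Placeholder: revert whatever the failed worker may have half-written.
void undoLastTransaction()
{
    writeln("rolling back the worker's uncommitted changes");
}

void main()
{
    foreach (attempt; 1 .. 4)
    {
        auto pid = spawnProcess(["./worker"]);
        if (wait(pid) == 0)
            return;  // worker finished cleanly

        // The worker's logic is suspect, so the supervisor does the
        // rollback and restarts from a known-good state.
        undoLastTransaction();
        writeln("worker failed, restarting (attempt ", attempt, ")");
    }
    writeln("giving up after repeated failures");
}

Real critical systems layer far more policy on top of this (backoff, failover, operator alerts), which is part of the tradeoff point above.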
