On Saturday, 18 January 2014 at 01:46:55 UTC, Walter Bright wrote:
> The autopilot software was designed by someone who thought it should keep operating even if it detects faults in the software.

I would not write autopilot or life-support software in D. So that is kind of out of scope for the language. But:

Keep the system simple, select a high-level language, and verify correctness with an automated proof system.

Use three independently implemented systems and shut down the one that produces deviant values. That covers more ground than the unlikely null pointer in critical systems. No need to self-detect anything; a minimal voting sketch follows below.
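
A minimal sketch of such a 2-out-of-3 voter, assuming the three channels compute comparable floating-point outputs and that a fixed tolerance is enough to call a value deviant (the names and the tolerance are made up purely for illustration):

```d
// A minimal 2-out-of-3 voter. The names and the fixed tolerance are
// illustrative assumptions, not anything from the post.
import std.math : abs;
import std.stdio : writeln;

struct VoteResult
{
    bool valid;        // true when at least two channels agree
    double value;      // the agreed-upon output
    int deviant = -1;  // index of the channel to shut down, or -1 if none
}

// Compare the outputs of three independently implemented channels and
// flag the one that deviates, if any.
VoteResult vote(double a, double b, double c, double tol = 1e-6)
{
    immutable ab = abs(a - b) <= tol;
    immutable ac = abs(a - c) <= tol;
    immutable bc = abs(b - c) <= tol;

    if (ab && ac) return VoteResult(true, a, -1);          // all agree
    if (ab)       return VoteResult(true, (a + b) / 2, 2); // channel 2 deviates
    if (ac)       return VoteResult(true, (a + c) / 2, 1); // channel 1 deviates
    if (bc)       return VoteResult(true, (b + c) / 2, 0); // channel 0 deviates
    return VoteResult(false, double.nan, -1);              // no majority: escalate
}

void main()
{
    auto r = vote(4.2, 9.9, 4.2); // channel 1 produced a deviant value
    writeln("valid=", r.valid, ", value=", r.value, ", shut down channel ", r.deviant);
}
```

The point is that the fault is detected by disagreement between independent implementations, not by any single implementation inspecting itself.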

> Consider also the Toyota. My understanding from reading reports (admittedly journalists botch up the facts) is that a single computer controls the brakes, engine, throttle, ignition switch, etc. Oh joy. I wouldn't want to be in that car when it keeps on going despite having self-detected faults.

So you would rather have the car drive off the road because the anti-skid software abruptly turned itself off during an emergency manoeuvre?

But would you stay in a car where the driver talks on a cell phone while driving, or would you tell him to stop? That is probably much more dangerous, if you measured the correlation between accidents and system features. So you demand perfection from a computer, but not from a human being who is exhibiting risky behaviour. That's an emotional assessment.

The rational action would be to improve the overall safety of the system, rather than optimizing a single part. So spend the money on installing a cell-phone jammer and an accelerator limiter rather than investing in more computers. Clearly, the computer is not the weakest link; the driver is. He might not agree, but he is, and he should be forced to exhibit low-risk behaviour. Direct effort to where it has the most effect.

(From a system analytical point of view. It might not be a good sales tactic, because car buyers aren't that rational.)
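
To put some numbers on "direct effort to where it has the most effect" (the figures below are invented solely to illustrate the arithmetic; they are not from any study):

```d
// Entirely illustrative figures, not from the post or any study: compare
// where a fixed safety budget buys the most overall risk reduction.
import std.stdio : writefln;

void main()
{
    double driverRisk   = 1e-4; // assumed per-trip accident risk from the driver
    double softwareRisk = 1e-7; // assumed per-trip accident risk from the computer

    immutable baseline = driverRisk + softwareRisk;

    // Option A: make the control software perfect (more redundancy, more proofs).
    immutable optionA = driverRisk;

    // Option B: cut driver-related risk by 20% (jammer, accelerator limiter).
    immutable optionB = driverRisk * 0.8 + softwareRisk;

    writefln("perfect software:  %.2f%% overall risk reduction",
             (baseline - optionA) / baseline * 100);
    writefln("20%% safer driver: %.2f%% overall risk reduction",
             (baseline - optionB) / baseline * 100);
}
```

Under assumptions like these, perfecting the software buys a fraction of a percent of overall risk reduction, while a modest change in driver behaviour buys a couple of orders of magnitude more.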
