On 1/17/2014 6:22 PM, "Ola Fosheim Grøstad" <[email protected]> wrote:
On Saturday, 18 January 2014 at 01:46:55 UTC, Walter Bright wrote:
The autopilot software was designed by someone who thought it should keep
operating even if it detects faults in the software.

I would not write autopilot or life-support software in D. So that is kind of
out of scope for the language. But:

Keep the system simple, select a high-level language, and verify correctness
with an automated proof system.

Use 3 independently implemented systems and shut down the one that produces
deviant values. That covers more ground than the unlikely null pointers do in
critical systems. No need to self-detect anything.
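
As a rough illustration of that 2-out-of-3 voting idea, here is a minimal sketch in D; the channel values, the tolerance, and the function name are invented for the example, and a real system would of course run the three channels on independent hardware.

import std.math : abs;
import std.stdio : writeln;

/// Returns the index (0, 1 or 2) of the channel whose output deviates
/// from the other two by more than the tolerance, or -1 if no single
/// channel can be singled out.
int deviantChannel(double a, double b, double c, double tolerance)
{
    immutable ab = abs(a - b) <= tolerance;
    immutable ac = abs(a - c) <= tolerance;
    immutable bc = abs(b - c) <= tolerance;

    if (ab && ac && bc) return -1;   // all three agree
    if (bc && !ab && !ac) return 0;  // channel 0 is the outlier
    if (ac && !ab && !bc) return 1;  // channel 1 is the outlier
    if (ab && !ac && !bc) return 2;  // channel 2 is the outlier
    return -1;                       // ambiguous; a real system needs a policy here
}

void main()
{
    // Shut down the deviant channel and continue on the remaining two.
    writeln(deviantChannel(10.0, 10.1, 17.3, 0.5)); // prints 2
}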

I didn't mention that the dual autopilots also have a comparator on the output, and if they disagree they are both shut down. The deadman is an additional check. The dual system has proven itself; a third is not needed.
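
In code terms, that comparator-plus-deadman arrangement amounts to something like the following D sketch; the state names, the tolerance parameter, and the deadman flags are assumptions for illustration, not the actual avionics design.

import std.math : abs;

enum AutopilotState { engaged, disengaged }

/// If either channel's deadman has tripped, or the two outputs disagree
/// by more than the tolerance, both channels are shut down and the pilot
/// takes over; otherwise the autopilot stays engaged.
AutopilotState compareOutputs(double channelA, double channelB,
                              double tolerance,
                              bool deadmanA, bool deadmanB)
{
    if (deadmanA || deadmanB)
        return AutopilotState.disengaged;   // a channel went silent
    if (abs(channelA - channelB) > tolerance)
        return AutopilotState.disengaged;   // channels disagree: shut both down
    return AutopilotState.engaged;
}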


Consider also the Toyota. My understanding from reading reports (admittedly
journalists botch up the facts) is that a single computer controls the brakes,
engine, throttle, ignition switch, etc. Oh joy. I wouldn't want to be in that
car when it keeps on going despite having self-detected faults.

So you would rather have the car drive off the road because the anti-skid
software abruptly turned itself off during an emergency manoeuvre?

Please reread what I wrote. I said it shuts itself off and engages the backup, and if there is no backup, you have failed at designing a safe system.
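
As an illustration of that fail-over policy, here is a minimal D sketch; the Health enum, the command values, and the safe-state fallback are hypothetical, not a description of any particular system.

enum Health { ok, faulted }

/// On a self-detected fault the primary disengages and the backup takes
/// over; if no healthy backup remains, revert to a predefined safe state
/// rather than keep running known-faulty software.
double selectCommand(Health primary, Health backup,
                     double primaryCmd, double backupCmd, double safeCmd)
{
    if (primary == Health.ok) return primaryCmd;
    if (backup == Health.ok) return backupCmd;   // backup engages
    return safeCmd;   // no backup left: the design has already failed
}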


But would you stay in a car where the driver talks on a cell phone while
driving, or would you tell him to stop? That is probably much more dangerous, if
you measured the correlation between accidents and system features. So you
demand perfection from a computer, but not from a human being who is exhibiting
risky behaviour. That's an emotional assessment.

The rational action would be to improve the overall safety of the system, rather
than optimizing a single part. So spend the money on installing a cell-phone
jammer and an accelerator limiter rather than investing in more computers.
Clearly, the computer is not the weakest link; the driver is. He might not
agree, but he is, and he should be forced to exhibit low-risk behaviour. Direct
effort to where it has the most effect.

(From a systems-analysis point of view. It might not be a good sales tactic,
because car buyers aren't that rational.)

I have experience with this stuff, Ola, from my years at Boeing designing flight-critical systems. What I outlined is neither irrational nor emotionally driven, and it has the safety record to prove its effectiveness.

I also ask that you please reread what I wrote - I explicitly do not demand perfection from a computer.
