On 10/4/2014 9:09 AM, Sean Kelly wrote:
On Saturday, 4 October 2014 at 08:15:51 UTC, Walter Bright wrote:
On 10/3/2014 8:43 AM, Sean Kelly wrote:
My point, and I think Kagamin's as well, is that the entire plane is a system
and the redundant internals are subsystems.  They may not share memory, but they
are wired to the same sensors, servos, displays, etc.

No, they do not share sensors, servos, etc.

Gotcha.  I imagine there are redundant displays in the cockpit as well, which
makes sense.  Thus the unifying factor in an airplane is the pilot.

Even the pilot has a backup!

Next time you go flying, peek in the cockpit. You'll see dual instruments and displays. If you examine the outside, you'll see two (or three) pitot tubes (which measure airspeed).


Right.  So the system relies on the intelligence and training of the pilot for
proper operation: deciding which systems are in error and which are correct,
and so on.

A lot of design revolves around making it obvious which component is the failed one, the classic being a red light on the instrument panel.


I still think an argument could be made that an entire airplane, pilot
included, is analogous to a server infrastructure, or even to a memory-isolated
program (the Erlang example).

Anyone with minimal training can fly an airplane. Heck, you can go to any flight school and they'll take you up on an introductory flight and let you try out the controls in flight. Most of a pilot's training consists of learning how to deal with failure.


My only point in all this is that while the OS process is a good default
boundary when considering the potential scope of undefined behavior, it's not
the only possible one.  The pilot misinterpreting sensor data and making a bad
judgment call is equivalent to the failure of distinct subsystems corrupting
the state of the entire system to the point where the whole thing fails.  The
sensors were communicating confusing information to the pilot, and his
programming, as it were, was not up to the task of separating the good
information from the bad.
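
Not from the thread, but as a minimal sketch of the idea under discussion
(the OS process as the unit whose state can be thrown away and rebuilt),
here is an Erlang-style supervisor loop in C, assuming a POSIX system;
do_work() is a hypothetical stand-in for the real task.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Hypothetical worker.  Any memory corruption is confined to this
       process: it shares no address space with the supervisor. */
    static void do_work(void)
    {
        /* ... real work here ... */
        exit(EXIT_SUCCESS);
    }

    int main(void)
    {
        for (;;)
        {
            pid_t pid = fork();
            if (pid < 0)
            {
                perror("fork");
                return 1;
            }
            if (pid == 0)
                do_work();   /* child: crashes stay inside this process */

            int status;
            if (waitpid(pid, &status, 0) < 0)
            {
                perror("waitpid");
                return 1;
            }
            if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                break;       /* clean exit: we're done */

            /* Worker failed; restart with known-good initial state. */
            fprintf(stderr, "worker died, restarting\n");
        }
        return 0;
    }

The point of the sketch is only that the supervisor never inspects or
repairs the worker's state; it discards the whole process and starts over,
which is what makes the process a useful default boundary.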

That's true. Many accidents have resulted from the pilot becoming confused by the failures being reported to him and failing to properly grasp the situation and what to do about it. Each of these results in a reevaluation of how failures are presented to the pilot, and of the pilot's training and procedures.

On the other hand, many failures have not resulted in accidents because of the pilot's ability to "think outside the box" and come up with a creative solution on the spot. It's why we need human pilots. These solutions then become part of standard procedure!


Do you have any thoughts concerning my proposal in the "on errors" thread?

Looks interesting, but I haven't gotten to it yet.
