> So anyone have a guess at which tools they mean?  The compiler itself
> or testing and verification tools?

When you are working on a life-critical avionics (LCA) project,
everything must be certified to the same stringent requirements.
That means the compiler must be certified as strictly as
the software being compiled. The same goes for hardware: you can't
just use any old off-the-shelf processor for LCA.

As for validation tools, the 777 ran the code on the actual hardware.
Testing was done by halting the code, inserting the data, letting
it run for a frame, stopping the code, and getting the results.
The idea, I guess, was to verify the source code, the compiled code,
and the hardware all at once.

>> The rule should be to only let programmers who know what they are
>> doing write safety critical code.
>
> I hope only the best write this kind of code, but if the #1 rule
> was that where we need perfect systems we will get them by using
> perfect people,

No. Building an LCA black box isn't about perfect individuals.
It's a team effort. At an absolute bare minimum, assuming you
start with certified hardware and a certified compiler, the
FAA requires that you have at least two people working on the project:
one person designs, the other verifies.
If you want, you can have Alice write the code for the
take-off/go-around piece, and have Bob verify it; and
then have Bob write the code for the stall recovery piece,
and have Alice verify it.

But no single person gets to write code and ship it without
at least one other pair of eyes verifying that it's good.

At which point, you get the skydiving analogy.
One parachute has a 1-in-N chance of failure.
Two parachutes, failing independently, have a 1-in-(N*N)
chance of failure. If N is 1000, one parachute means
1 in a thousand skydivers die from chute failure,
and two parachutes means 1 in a million do.
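
Spelled out (a minimal sketch in C; the arithmetic assumes the
two chutes fail independently of each other):

    #include <stdio.h>

    int main(void)
    {
        double p = 1.0 / 1000.0;            /* single-chute failure rate, N = 1000 */
        printf("one chute:  %g\n", p);      /* 0.001 -> 1 in a thousand */
        printf("two chutes: %g\n", p * p);  /* 1e-06 -> 1 in a million  */
        return 0;
    }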

> in it.  It does seem that some embedded programmers use this practice
> of avoiding parts of their languages.  Perhaps they have some
> justifications.

Most embedded applications have to refresh at a certain rate,
say 30 Hz. If you have exceptions, dynamically allocated
variables with garbage collection, objects being instantiated
and destroyed on the fly, and any other kind of open-ended
software construct in your code, how do you guarantee that you
can refresh at 30 Hz?
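
For flavor, here's a minimal sketch of the closed-ended
alternative in C. The sensor and output routines are
hypothetical stand-ins; the point is that everything is
statically allocated and every loop has a fixed bound, so the
worst-case cost of a frame can be added up ahead of time:

    #define NUM_SENSORS 8

    static int readings[NUM_SENSORS];         /* sized at build time -- no malloc */

    static int  read_sensor(int i)           { return i; }   /* hypothetical stub */
    static void update_outputs(const int *r) { (void)r; }    /* hypothetical stub */

    static void run_frame(void)
    {
        for (int i = 0; i < NUM_SENSORS; i++)  /* fixed iteration count */
            readings[i] = read_sensor(i);
        update_outputs(readings);
        /* no allocation, no exceptions, no unbounded work anywhere */
    }

    int main(void) { run_frame(); return 0; }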

It's not a language problem. It's not a construct problem.
It's not that embedded folks don't know that exceptions
are nothing more than jumps.

When it comes down to it, it's a *system* issue.

When all the pieces come together, and they are designed
from the ground up to "do this, and if an interrupt happens
stop and do that", then how do you guarantee a 30 Hz refresh
rate?

What if you're refreshing the data that controls the fuel
injectors in your car or the shifter on your automatic
transmission?

If you code from the ground up using constructs and designs
that don't take into account refresh rate, then you cannot
guarantee refresh rate. And if you don't refresh fast enough
you may over-accelerate your car because you're feeding too
much gas, you may strip the gears in your transmission because
you're not shifting fast enough, or you might crash your airplane
because you're not responding to the pilot's commands or the
values on the pitot tube.

Embedded engineers have to use constructs that can guarantee
refresh rate. They have to design the system so that there
are no exceptions, because they can't guarantee how long any
particular exception will take, or how many exceptions they
might get at any particular time.

That generally means you have to poll all your sensors at a
fixed frequency, and you can only respond to incoming data
at a fixed rate.
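
A minimal sketch of that fixed-rate loop in C, using the POSIX
monotonic clock (poll_sensors and compute_outputs are
hypothetical stand-ins for the real per-frame work):

    #define _POSIX_C_SOURCE 200112L
    #include <time.h>

    #define NS_PER_SEC 1000000000L
    #define FRAME_NS   (NS_PER_SEC / 30)       /* ~33.3 ms per frame at 30 Hz */

    static void poll_sensors(void)    { /* read every input, every frame */ }
    static void compute_outputs(void) { /* bounded, fixed-cost work */ }

    int main(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            poll_sensors();
            compute_outputs();
            next.tv_nsec += FRAME_NS;          /* absolute deadlines: no drift */
            if (next.tv_nsec >= NS_PER_SEC) {
                next.tv_sec  += 1;
                next.tv_nsec -= NS_PER_SEC;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }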

Folks who are used to writing code for desktop applications
want their code to run as fast as possible and only deal with
odd bits and pieces when they absolutely have to.

It all really comes down to focusing on best-case or worst-case
execution speeds:

Desktop people generally focus on trying to make their best-case
speed as fast as possible, and every once in a while, their
worst case will bog down because someone is loading a web page
with video while Netflix is streaming a movie to their hard drive
while they're listening to an MP3 that is decoding in software,
and their computer freezes up for a few seconds, and everyone
just shrugs it off as nothing more than an annoyance.

Embedded engineers have to focus on making sure their worst-case
delay stays below some maximum. They *have* to maintain a refresh
rate of 30 Hz or something physical breaks or someone could get hurt.
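
That's why embedded code tends to instrument the worst frame
ever seen rather than the average. A sketch, with do_frame
standing in for a hypothetical frame's worth of work:

    #define _POSIX_C_SOURCE 200112L
    #include <assert.h>
    #include <time.h>

    #define NS_PER_SEC 1000000000L
    #define FRAME_NS   (NS_PER_SEC / 30)

    static void do_frame(void) { /* hypothetical frame's worth of work */ }

    int main(void)
    {
        long worst = 0;
        for (int i = 0; i < 1000; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            do_frame();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            long dt = (t1.tv_sec - t0.tv_sec) * NS_PER_SEC
                    + (t1.tv_nsec - t0.tv_nsec);
            if (dt > worst)
                worst = dt;                    /* track the worst case, not the mean */
        }
        assert(worst < FRAME_NS);              /* the deadline must hold even then */
        return 0;
    }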

I remember learning this lesson when I was working on
that cockpit avionics project. I had an idea for how
I could speed up some line-drawing algorithm we had
written. The architect rejected it because even though
it would have been a third faster most of the time,
there were cases where it would have been 50%
slower.

Embedded is all about minimizing the worst-case
execution time and making it conform to some fixed
maximum delay.

Desktop applications are all about minimizing their
best-case execution time and letting the worst-case
delay be occasional and sometimes an undefined,
open-ended amount of time.






