Kagamin:

> > In a situation like the Ariane one, I think the good solution is 
> > to introduce a fuzzy control system that degrades in 
> > effectiveness as conditions drift outside its specs, but avoids a total 
> > failure. This is what biological designs do too. It's a kind of 'defensive 
> > programming'.
> > 
> From what I heard, the software for Ariane was physically unable to handle 
> Ariane, so no matter what assertions you put into it, it would crash.

In that last paragraph I was talking about something that doesn't use 
assertions, something like:
http://en.wikipedia.org/wiki/Fuzzy_Control_System

If well designed, such systems show a graceful degradation of functionality even 
when you step outside their specs. Systems like this are used today in critical 
applications like the brake control systems of subway trains, where a sharp shutdown 
like the one on the Ariane can cause hundreds of deaths. When well designed, such 
fuzzy systems do work very well. All this is kind of the opposite of the design 
strategy behind DbC :-)
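To make the idea concrete, here is a minimal fuzzy-control sketch (the membership thresholds and the brake-effort numbers are purely illustrative, not taken from any real subway system). Out-of-range inputs just saturate the membership functions, so the controller keeps producing a sane output instead of stopping abruptly the way a failed assertion would:

```python
def membership_slow(speed):
    # 1.0 at 0 km/h, falling linearly to 0.0 at 60 km/h
    return max(0.0, min(1.0, (60.0 - speed) / 60.0))

def membership_fast(speed):
    # 0.0 at 40 km/h, rising linearly to 1.0 at 100 km/h
    return max(0.0, min(1.0, (speed - 40.0) / 60.0))

def brake_effort(speed):
    """Weighted average of two fuzzy rules:
    'if slow then brake gently (0.1)', 'if fast then brake hard (0.9)'."""
    slow = membership_slow(speed)
    fast = membership_fast(speed)
    return (slow * 0.1 + fast * 0.9) / (slow + fast)

print(brake_effort(0.0))    # gentle braking: 0.1
print(brake_effort(100.0))  # hard braking: 0.9
print(brake_effort(500.0))  # far outside spec, yet still 0.9, no crash
```

The point is the last call: an input wildly outside the designed range produces a degraded but still usable answer, where a DbC precondition would have aborted the program.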

My theory is that DbC is good for designing and testing critical systems because it 
lets you spot and fix design bugs efficiently. But when the critical 
system is running, it's better to put beside it another system that shows 
graceful degradation and doesn't just stop working abruptly when some parameter 
falls outside its designed specs. This is how most control systems in vertebrate 
brains are designed: they generally never just shut down.
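That pairing can be sketched like this (a toy example of my own, not real avionics code): the contract check is active during development so design bugs fail loudly, while the production path clamps the input and keeps producing a degraded output:

```python
DEBUG = False  # True during development and testing

def controller_step(sensor_value, spec_min=0.0, spec_max=100.0):
    if DEBUG:
        # DbC-style precondition: fail fast so the design bug
        # is caught during testing
        assert spec_min <= sensor_value <= spec_max, "input out of spec"
    # Production behavior: clamp instead of shutting down,
    # so effectiveness degrades but the system keeps working
    clamped = max(spec_min, min(spec_max, sensor_value))
    return clamped / spec_max  # normalized control output in [0, 1]

print(controller_step(250.0))  # out of spec: degraded output 1.0, no abort
```

With DEBUG on, the same out-of-spec input would trip the assertion immediately, which is exactly what you want while hunting design bugs.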

Bye,
bearophile
