One thing I would very much like to see done in a functional language is fault-tree 
analysis.
A fault tree has as nodes various undesirable events, with as top node some disaster 
(for example, 
nuclear reactor meltdown) and as leaves various faults which can occur, with their 
probabilities 
(valve fails, operator doesn't notice red flashing light, and so on).  Each internal
node has a logical operator (often just AND or OR) associated with it; the fault
associated with that node cannot happen unless the condition formed by applying the
operator to its child nodes holds, e.g. "the water pressure won't get too high unless
the valve fails AND the operator doesn't notice the red flashing light".
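Such a tree falls out naturally as an algebraic data type.  A minimal sketch (all the
names here are my own invention, not any existing library's); note the direct product
rule is only exact when the faults are independent and no basic fault appears more
than once in the tree:

```haskell
-- A minimal fault tree: leaves are basic faults with their
-- probabilities; internal nodes combine children with AND or OR.
data Gate = And | Or
  deriving (Show, Eq)

data FaultTree
  = Leaf String Double      -- fault name and its probability
  | Node Gate [FaultTree]
  deriving Show

-- Top-event probability, assuming independent faults each of which
-- appears at most once in the tree (otherwise this is only a rough
-- guide, not an exact answer).
prob :: FaultTree -> Double
prob (Leaf _ p)    = p
prob (Node And ts) = product (map prob ts)
prob (Node Or  ts) = 1 - product (map (\t -> 1 - prob t) ts)

-- The water-pressure example from the text, with made-up numbers:
overpressure :: FaultTree
overpressure = Node And
  [ Leaf "valve fails"                    1.0e-3
  , Leaf "operator misses flashing light" 1.0e-2
  ]
```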
Thus you can (assuming all the faults at the leaves are independent) bound the 
probability of the
top node happening by bounding the probability of a particularly horrendous logical 
expression
in a number of independent binary variables being true.  Unfortunately working out the 
probability
is extremely hard (it's #P-complete), and simulation (running it lots of times) is
inaccurate, since the top probability is (hopefully) extremely low and so needs an
awful lot of trials to estimate well.  So the commercial software for this (extremely
important) problem has to
adopt a very large number of heuristic approaches, for example trying to split the 
problem into
smaller parts which share only a few nodes, or trying to come up with a set of "cut
sets" of faults such that the top event cannot occur unless all the faults in at least
one cut set occur, and so on.  There are lots of potential logical reorderings you can attempt
on the tree
to try to simplify things.  The challenge is to come up with a reasonable way of 
finding a good upper
bound for the top probability, while keeping your code sufficiently simple that those 
of us who
live near nuclear reactors can trust it.  
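Given a list of cut sets, one standard conservative bound is the union (Boole) bound:
the top probability is at most the sum, over cut sets, of the product of the fault
probabilities in each, assuming independent faults.  A sketch (the names are mine):

```haskell
-- A cut set: basic faults, with their probabilities, whose joint
-- occurrence suffices for the top event.
type CutSet = [(String, Double)]

-- Union bound: P(top) <= sum over cut sets of P(all faults in the
-- cut set occur).  Conservative, and reasonably tight when each
-- cut-set probability is small.
upperBound :: [CutSet] -> Double
upperBound cuts = min 1 (sum [ product (map snd c) | c <- cuts ])
```

For instance, with cut sets {valve, operator} and {pump} the bound is
1e-3 * 1e-2 + 1e-4, i.e. about 1.1e-4.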

I think my approach would be to regard this as a graph transformation problem where 
the aim is
to transform the input fault tree into one of a simpler form (where it is not too hard 
to bound
the top probability) by a number of well-defined operations guaranteed not to decrease 
the top
probability.  The heuristic part of the program (the largest part) would use a number 
of ad hoc
methods (such as simulations and attempting to compute partial derivatives with respect to
node probabilities) to search for the reduction which increases the top probability
the least;
it would hand over the result of its search to a verification part which would compute 
the actual
bound, so that only the verification part would have to be guaranteed bug-free.
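One concrete example of such a guaranteed-safe operation, as a sketch (the type and
function names are my own; the tree type is repeated here to keep the fragment
self-contained): since AND/OR gates are monotone, deleting a child of an AND gate
makes that subtree more likely to fire, and hence can only increase the top
probability.  The simple verification part could then just evaluate the reduced tree.

```haskell
data Gate = And | Or deriving (Show, Eq)
data FaultTree = Leaf String Double | Node Gate [FaultTree]
  deriving Show

prob :: FaultTree -> Double
prob (Leaf _ p)    = p
prob (Node And ts) = product (map prob ts)
prob (Node Or  ts) = 1 - product (map (\t -> 1 - prob t) ts)

-- Weaken an AND gate by deleting its i-th child.  Because every gate
-- is monotone (AND/OR only), this rewrite can never decrease the top
-- probability; on anything else it is the identity.
dropConjunct :: Int -> FaultTree -> FaultTree
dropConjunct i (Node And ts)
  | length ts > 1, 0 <= i, i < length ts
      = Node And (take i ts ++ drop (i + 1) ts)
dropConjunct _ t = t
```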

Has anyone done anything like this already, by the way?  There ought to be money in it 
. . .

_______________________________________________
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell