Phil Henshaw wrote:
> Is it related to what I
> was asking about, the chain of conditions getting trapped in increasing
> complexity, or does it mostly refer to just non-existent address faults
> and things like that?   
>   
Some relatively ordinary software can do that.  A generational garbage 
collector, as in a Java virtual machine, can crash far away from the 
actual bug if it is not implemented just right.  Imagine that all of the 
objects in a room are swirling around you and you must depend on a 
slightly misinformed, color-blind assistant to find what you need in a 
short window of time.  If that assistant helpfully but smugly nails an 
object down, the room might catch fire, or he might become one of the 
objects swirling around.
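
A toy sketch of that failure mode (invented for this note; a real collector is nothing this simple): a compacting collector must rewrite every pointer into the heap, and if it misses one, the program crashes much later, far from the buggy line.

```python
# Toy "heap": a list of cells, with live objects compacted to the front.
# A correct compactor rewrites every index that points into the heap; a
# buggy one misses one, and the crash surfaces far from compact() itself.
heap = ["a", None, "b", None, "c"]
pointers = {"x": 0, "y": 2, "z": 4}  # names -> heap indices

def compact(heap, pointers, buggy=False):
    new_heap = []
    remap = {}
    for i, cell in enumerate(heap):
        if cell is not None:
            remap[i] = len(new_heap)  # where this live cell moves to
            new_heap.append(cell)
    for name, i in pointers.items():
        if buggy and name == "z":
            continue  # the "nailed down" object: its pointer never gets updated
        pointers[name] = remap[i]
    return new_heap

heap = compact(heap, pointers, buggy=True)
# Much later, an innocent-looking access blows up, nowhere near the bug:
try:
    print(heap[pointers["z"]])
except IndexError:
    print("crashed far from the bug in compact()")
```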

It sounds like you are interested in something else: a sort of Tower of 
Babel, where computer software goes mad by trying to correct the 
uncorrectable.  Overall, I think, most software just crashes and has no 
real mechanism for self-correction that might drive it into an 
escalating cycle of self-reflection and ineffectual adaptation.  
Machine learning codes or agent models certainly could, though...

> Is 'insanity' just
> anything that doesn't work, or is it more specific in the kinds of
> things that are the persistent dangers of faulty programming?
Once an iterative floating point calculation has drifted into a very 
large or very small range, it's useful to have the system alert the user 
so that it doesn't continue and produce nonsense results.  In principle 
a calculation could adapt to these signals, but typically they're just 
something to be understood and then avoided. 
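A minimal sketch of such a guard in Python (the helper name and thresholds are invented for illustration): stop the iteration as soon as the value leaves a trusted range, instead of silently carrying nonsense forward.

```python
import math

def iterate_with_guard(x0, step, n, lo=1e-300, hi=1e300):
    """Run an iterative calculation, alerting as soon as the value
    drifts into a range where the results stop being meaningful.
    (Hypothetical helper; the thresholds are illustrative.)"""
    x = x0
    for i in range(n):
        x = step(x)
        if not math.isfinite(x) or abs(x) > hi or (x != 0 and abs(x) < lo):
            raise OverflowError(f"iteration {i}: value {x!r} left the trusted range")
    return x

# A divergent iteration trips the guard instead of quietly returning inf:
try:
    iterate_with_guard(1.0, lambda x: x * x + 1, 50)
except OverflowError as e:
    print("alerted:", e)
```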
> Do any of the kinds of common exceptions have developmental process curves?
>   
Indifference to memory use is probably a general downward spiral in 
software today, now that we have $500 computers with a billion places to 
put something without even hitting disk.  It's possible to have gross 
leaks in a program (or gross over-allocation in a garbage-collected 
system) and not notice them.  Under different system loads such programs 
will behave very differently (as effective memory access speed goes down 
and down).
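Such a leak is easy to demonstrate with Python's standard `tracemalloc` module (the cache and sizes below are invented for illustration): a memoizing function that never evicts old entries quietly retains megabytes that nothing ever reclaims.

```python
import tracemalloc

cache = {}  # grows without bound: a "gross leak" that's easy to miss

def lookup(key):
    # Hypothetical memoized function; nothing ever evicts old entries.
    if key not in cache:
        cache[key] = [0.0] * 1024  # roughly 8 KB retained per distinct key
    return cache[key]

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for k in range(1000):
    lookup(k)
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"retained ~{(after - before) / 1e6:.1f} MB for 1000 distinct keys")
```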
> You mentioned a "hierarchy of exception types" in your last post.  Would
> that include some that take off right away and some that sputter and
> then blow up to fry the chip, and things like that?   If there are
> torrents of signals that push the physical limits of the hardware I
> think they'd have locally unique emergent properties if you looked at
> them closely with that in mind.
>   
A modern CPU has multiple execution units that are all looking for work 
to do from a program.  Most programs fail to keep all of that circuitry 
busy, but occasionally a special code will really get the whole engine 
pumping.  A system without top-notch cooling (but one built to spec) can 
then have CPUs miscalculating things or wedging up.  Since this can be 
hard to reproduce, testing by the manufacturer may not turn the problem 
up.  That is why, for example, when the national labs buy or lease big 
systems, they run long burn-in periods testing all kinds of scientific 
codes to look for non-determinism and unexplained crashes, not only to 
get the hardware working right, but also to avoid expensive scientific 
mistakes.  
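
The burn-in idea can be sketched in miniature (the kernel below is a stand-in invented for illustration, not a real scientific code): rerun the same numeric workload and check that the results are bit-identical, so that even a one-ulp wobble from flaky hardware is caught.

```python
import hashlib
import struct

def kernel(n=10000):
    # A small deterministic floating-point workload standing in for a
    # scientific code (purely illustrative).
    s = 0.0
    x = 1.0
    for i in range(1, n):
        x = (x * 1.0000001) % 1000.0
        s += x / i
    return s

def fingerprint(value):
    # Hash the exact bit pattern of the double, so any difference at all
    # between runs is detected.
    return hashlib.sha256(struct.pack("<d", value)).hexdigest()

baseline = fingerprint(kernel())
for run in range(5):
    assert fingerprint(kernel()) == baseline, f"non-deterministic result on run {run}"
print("5 runs bit-identical:", baseline[:12])
```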

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
