I would say the problem of debugging (or introspection if you insist) is like 
finding yourself at some random place, never seen before, where the task is to 
develop a map and learn the local language and customs.  If one is given the 
job of law enforcement (debugging violations of law), it is necessary to 
collect quite a bit of information, e.g. the laws of the jurisdiction, the 
sensitivities and conflicts in the area, and detailed geography.  In 
haphazardly-developed software, learning about one part of a city teaches you 
nothing about another part of the city.  In well-designed software, one can 
orient oneself quickly because there are many easily-learnable conventions to 
follow.  I would say this distinction between the modeler and the modeled is 
not that helpful.  To really avoid bugs, one wants to have metaphorical 
citizens that are genetically incapable of breaking laws.  Privileged access 
is kind of beside the point because in practice software is often far too big 
to fully rationalize.

From: Friam <[email protected]> on behalf of "[email protected]" 
<[email protected]>
Reply-To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Date: Saturday, January 25, 2020 at 11:57 AM
To: 'The Friday Morning Applied Complexity Coffee Group' <[email protected]>
Subject: Re: [FRIAM] Abduction and Introspection

Thanks, Marcus,

Am I correct that all of your examples fall within this frame:

[inline image]
I keep expecting you guys to scream at me, “Of course, you idiot, 
self-perception is partial and subject to error!  HTF could it be otherwise?”   
I would love that.  I would record it and put it on loop for half my colleagues 
in psychology departments around the world.

Nick
Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[email protected]<mailto:[email protected]>
https://wordpress.clarku.edu/nthompson/


From: Friam <[email protected]> On Behalf Of Marcus Daniels
Sent: Saturday, January 25, 2020 12:16 PM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] Abduction and Introspection

Nick writes:


As software engineers, what conditions would a program have to fulfill to say 
that a computer was monitoring "itself"?



It is common for codes that calculate things to periodically test invariants 
that should hold.   For example, a physics code might test for conservation of 
mass or energy.   A conversion from a data structure with one index scheme to 
another is often followed by a check to ensure the total number of records 
did not change, or if it did change, that it changed by an expected amount.   It 
is also possible, but less common, to write a code so that proofs are 
constructed by virtue of the code being compilable against a set of types.   
The types describe all of the conditions that must hold regarding the behavior 
of a function.    In that case it is not necessary to detect whether something 
goes haywire at runtime because it is simply not possible for something to go 
haywire.  (A computer could still miscalculate due to a cosmic ray, or some 
other physical interruption, but assuming that did not happen, a complete 
proof-carrying code would not fail within its specifications.)
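A minimal sketch of the record-count invariant check described above might look 
like this in Python (the conversion function and record scheme are hypothetical, 
just for illustration):

```python
def reindex(records_by_id):
    """Convert a dict keyed by id into a list sorted by id
    (a hypothetical change of index scheme)."""
    return [records_by_id[k] for k in sorted(records_by_id)]

def checked_reindex(records_by_id):
    out = reindex(records_by_id)
    # Invariant: the conversion must not drop or duplicate records.
    assert len(out) == len(records_by_id), "record count changed during reindex"
    return out

print(checked_reindex({3: "c", 1: "a", 2: "b"}))  # -> ['a', 'b', 'c']
```

If the conversion ever violated the invariant, the assertion would raise 
immediately rather than letting a corrupted structure propagate.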

A weaker form of self-monitoring is to periodically check for memory or disk 
usage, and to raise an alarm if they are unexpectedly high or low.   Such an 
alarm might trigger cleanups of old results, otherwise kept around for 
convenience.



Marcus


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
