I thought the question was about software engineering, not about predicting 
emergent behavior?  Detecting undesirable behaviors is easier than predicting 
all behaviors.

From: Friam <[email protected]> on behalf of Pieter Steenekamp 
<[email protected]>
Reply-To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Date: Saturday, January 25, 2020 at 10:48 PM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] Abduction and Introspection

I would go along with Joshua Epstein's "if you did not grow it, you did not 
explain it". Keep in mind that this motto applies to problems involving 
emergence. So what I'm saying is that in many cases it's futile to apply 
logical reasoning to find answers - and I refer to the emergent properties of 
the human brain as well as to ABM (agent-based modeling) software. But even if 
the problem involves emergence, it's easy for both humans and computers to 
apply validation logic. Similar to the P=NP problem*, it's difficult to find 
the solution, but easy to verify it.

So my answer to "As software engineers, what conditions would a program have to 
fulfill to say that a computer was monitoring 'itself'?" is simply: explicitly 
verify the results. There are many approaches to this verification: applying 
logic, checking against measured actual data, checking for violations of the 
laws of physics, etc.
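A minimal Python sketch of what "explicitly verify the results" can look like 
(the functions here are invented for illustration, not from the thread): the 
verifier checks the defining property of the answer without caring how the 
answer was produced.

```python
import math

def solve_sqrt(x):
    # Stand-in for some opaque or emergent computation whose
    # internals we cannot easily reason about.
    return math.sqrt(x)

def verify_sqrt(x, candidate, tol=1e-9):
    # Verification is independent of the solver: we only check the
    # defining property candidate**2 == x, within a tolerance.
    return abs(candidate * candidate - x) <= tol

result = solve_sqrt(2.0)
assert verify_sqrt(2.0, result)
```

The point is the asymmetry: the verifier can be simple and trustworthy even 
when the solver is a black box.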

*I know you all know it, just a refresher: the P=NP problem is one of the 
biggest unsolved problems in computer science. P is the class of problems that 
are easy to solve (solvable in polynomial time), and NP is the class of 
problems whose solutions are easy to verify. The P=NP problem asks the 
following: if a problem's solutions are easy to verify, is the problem itself 
necessarily easy to solve? For many NP problems the best known algorithms take 
exponential time, and even for a moderately sized instance that means more time 
than the age of the universe on a supercomputer.
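The solve-versus-verify asymmetry can be illustrated with subset sum, a classic 
NP problem (an illustrative Python sketch, not part of the original post): the 
brute-force solver is exponential in the number of items, while checking a 
candidate subset is linear.

```python
from itertools import combinations

def solve_subset_sum(numbers, target):
    # Brute force: tries every subset, so the work grows
    # exponentially with len(numbers).
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

def verify_subset_sum(numbers, target, candidate):
    # Verification is cheap: confirm the candidate really is a
    # sub-multiset of the input and that it sums to the target.
    pool = list(numbers)
    for x in candidate:
        if x in pool:
            pool.remove(x)
        else:
            return False
    return sum(candidate) == target

witness = solve_subset_sum([3, 34, 4, 12, 5, 2], 9)
assert witness is not None
assert verify_subset_sum([3, 34, 4, 12, 5, 2], 9, witness)
```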

Pieter

On Sat, 25 Jan 2020 at 23:04, Marcus Daniels <[email protected]> wrote:
I would say the problem of debugging (or introspection if you insist) is like 
finding yourself in some random place, never seen before, with the task of 
developing a map and learning the local language and customs.  If one is given 
the job of law enforcement (debugging violations of law), it is necessary to 
collect quite a bit of information, e.g. the laws of the jurisdiction, the 
sensitivities and conflicts in the area, and detailed geography.  In 
haphazardly-developed software, learning about one part of a city teaches you 
nothing about another part of the city.  In well-designed software, one can 
orient oneself quickly because there are many easily-learnable conventions to 
follow.  I would say this distinction between the modeler and the modeled is 
not that helpful.  To really avoid bugs, one wants metaphorical citizens that 
are genetically incapable of breaking laws.  Privileged access is kind of 
beside the point because in practice software is often far too big to fully 
rationalize.

From: Friam <[email protected]> on behalf of 
"[email protected]" <[email protected]>
Reply-To: The Friday Morning Applied Complexity Coffee Group 
<[email protected]>
Date: Saturday, January 25, 2020 at 11:57 AM
To: 'The Friday Morning Applied Complexity Coffee Group' 
<[email protected]>
Subject: Re: [FRIAM] Abduction and Introspection

Thanks, Marcus,

Am I correct that all of your examples fall within this frame?

[inline image]
I keep expecting you guys to scream at me, “Of course, you idiot, 
self-perception is partial and subject to error!  HTF could it be otherwise?”   
I would love that.  I would record it and put it on loop for half my colleagues 
in psychology departments around the world.

Nick
Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[email protected]
https://wordpress.clarku.edu/nthompson/


From: Friam <[email protected]> On Behalf Of Marcus Daniels
Sent: Saturday, January 25, 2020 12:16 PM
To: The Friday Morning Applied Complexity Coffee Group 
<[email protected]>
Subject: Re: [FRIAM] Abduction and Introspection

Nick writes:


 As software engineers, what conditions would a program have to fulfill to say 
that a computer was monitoring "itself"?



It is common for codes that calculate things to periodically test invariants 
that should hold.   For example, a physics code might test for conservation of 
mass or energy.   A conversion from a data structure with one index scheme to 
another is often followed by a check to ensure the total number of records did 
not change, or if it did change, that it changed by an expected amount.   It is 
also possible, but less common, to write a code so that proofs are constructed 
by virtue of the code being compilable against a set of types.   The types 
describe all of the conditions that must hold regarding the behavior of a 
function.    In that case it is not necessary to detect whether something goes 
haywire at runtime, because it is simply not possible for something to go 
haywire.  (A computer could still miscalculate due to a cosmic ray or some 
other physical interruption, but assuming that did not happen, complete 
proof-carrying code would not fail within its specifications.)
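A hedged Python sketch of the record-count invariant described above (the 
function and data are invented for illustration):

```python
def reindex(records_by_id):
    # Convert from one index scheme (dict keyed by id) to another
    # (list ordered by id).
    converted = [records_by_id[k] for k in sorted(records_by_id)]
    # Self-monitoring: the conversion must preserve the record count.
    # If this invariant breaks, fail loudly rather than proceed.
    assert len(converted) == len(records_by_id), "records lost in conversion"
    return converted

# Example: two records survive the reindexing intact.
rows = reindex({2: "beta", 1: "alpha"})
assert rows == ["alpha", "beta"]
```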

A weaker form of self-monitoring is to periodically check for memory or disk 
usage, and to raise an alarm if they are unexpectedly high or low.   Such an 
alarm might trigger cleanups of old results, otherwise kept around for 
convenience.
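A minimal illustration of this weaker self-monitoring in Python (the thresholds 
and alarm messages are invented for the example):

```python
import shutil

def disk_alarm(free_fraction, low=0.05, high=0.95):
    # Classify the free-space fraction; anything outside the
    # expected [low, high] band raises an alarm string.
    if free_fraction < low:
        return "alarm: disk unexpectedly full"
    if free_fraction > high:
        return "alarm: disk unexpectedly empty"
    return "ok"

def monitor_disk(path="/"):
    # Periodic self-check: measure real usage and classify it.
    # An "unexpectedly full" alarm might trigger cleanup of old results.
    usage = shutil.disk_usage(path)
    return disk_alarm(usage.free / usage.total)
```

Separating the measurement (`monitor_disk`) from the classification 
(`disk_alarm`) keeps the alarm logic trivially testable.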



Marcus


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
============================================================