Well, we'll start with an initial theory of how this particular cognitive 
system (PAM-P2) should work, and improve it incrementally as experimentation 
shows the holes, and redesign completely when it becomes too brittle.  
I think that's the best that can be done under present circumstances.
Ideally, the cognitive system will improve itself in a developmental fashion.  
(Will keep you posted on that.)
Your comments and suggestions are always welcome and appreciated, Jim.
Best.
~PM.
Date: Wed, 25 Jul 2012 13:08:19 -0400
Subject: Re: [agi] Prediction is not a reliable method of verification
From: [email protected]
To: [email protected]



Piaget Modeler,


As I pointed out, a predicted event can be tied to any kind of theory, so even 
when a predicted event is observed, the observation can be used to "verify" all 
kinds of nonsense. You might try to limit the relationship between the 
prediction and the theory, but the only way to do this effectively is to say 
that the observation of some easy-to-observe event A verifies the prediction 
that event A would occur. That is a triviality, and even it applies only to 
events that are easy to observe.

 

You really need to tie more imaginative theories to a variety of kinds of 
observable events if you want to use observations as a basis for AGI. But once 
you do this, it will inevitably lead to the false verification of poor 
theories, like the person who finds that a weakly associated prediction 
"proves" some crackpot conjecture he is peddling. This is one reason why AGI 
programs have yet to achieve minimal sustainable mentation.

The fact that you are talking about some separation of the sentences that the 
AGI program would make from the verification of the truth of those sentences 
does show that there are some old-fashioned flaws in your theory (it was good 
enough for Tarski and it is good enough for me), and this flaw could be used 
to let some light in. But the meandering of the stream of thought about how 
this might work shows that you are almost totally unprepared for dealing with 
the issue that I am talking about.
Since easy-to-observe events are too scarce to use as a tool of verification, 
we have to use more elaborate methods to "verify" theories about the world. If 
an AGI program is not carefully designed, it will tend to act like a crackpot 
who thinks that overly simple tests can verify his insipid theories. And since 
AGI programs have yet to attain a minimal level of mentation, an AGI program 
that was not designed well would end up pursuing not only crackpot theories 
but also theories that were meaningless. That is one reason why current AGI 
programs do not even show that they are capable of sustained weak intelligent 
thought.
So when a supposed verification is found for some targeted subject, other 
theories about that subject should also be tried, to cross-examine the 
supposed validity of the theory and the reliability of the tests and 
observations used in the verification process. These theories will be 
relatively complicated, and the observation methods used to try to verify or 
disqualify them will also be relatively complicated. (They will not be based 
only on simple observation events like the appearance of a pixel set.) So we 
can try, for example, developing opposing theories tied to an observation 
event that might prove the conjectured theory wrong, just to make sure the 
theory can withstand some opposition. If an AGI program thinks it has made an 
observation that verifies one of its theories, it should then try to develop 
methods to prove the theory wrong as well. It should also try to modify the 
theory in some way, to see whether the modification can be proven and whether 
it can be falsified. As our (hypothetical) AGI programs become better able to 
use components that have proven useful and withstood cross-testing, they can 
then start to use these more viable components in more controlled testing 
procedures.
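[Editor's note: the cross-examination procedure above — accept a verification 
only after rival theories have been scored against the same observations — can 
be sketched in a few lines of Python. This is a toy illustration under my own 
assumptions: theories are modeled as simple predicates over observations, and 
the name `cross_examine` and the scoring rule are not from any actual AGI 
system.]

```python
def cross_examine(theory, observations, rival_theories):
    """Return (fit, verified): a theory's fit on the observations,
    and whether it beats every rival theory on the same data.

    Theories here are plain predicates over observations -- a toy
    stand-in for the richer theory structures discussed above.
    """
    def fit(t):
        return sum(1 for obs in observations if t(obs)) / len(observations)

    base = fit(theory)
    # A "verification" only counts if the theory outperforms its rivals;
    # otherwise the observations are too weakly tied to the theory.
    verified = all(base > fit(rival) for rival in rival_theories)
    return base, verified
```

Note the crackpot case falls out naturally: a theory that fits the data 
perfectly still fails cross-examination against a vacuous rival ("anything can 
happen") that fits the data equally well.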
Jim Bromer



 On Tue, Jul 24, 2012 at 6:19 PM, Piaget Modeler <[email protected]> 
wrote:







In my view, there is behavioral expectation verification, then there is 
linguistic utterance verification.  These are two distinct processes. 
My theory is this: 

We form expectations (predictions) constantly, and then correlate the 
expectations with observation.  We regulate behavior based on this simple 
process.  Where predictions are successful, we reinforce behaviors, and where 
predictions fail, we correct the behaviors.
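[Editor's note: the expectation loop described here can be sketched as a short 
Python loop. The multiplicative weight update and all names are illustrative 
assumptions of mine, not part of PM's PAM-P2 design.]

```python
def regulate(behaviors, observe, rounds=10):
    """Maintain a weight per behavior: reinforce it when its
    prediction matches observation, correct (decay) it when not.

    `behaviors` maps a name to a zero-argument prediction function;
    `observe` returns the actual outcome for the current round.
    """
    weights = {name: 1.0 for name in behaviors}
    for _ in range(rounds):
        actual = observe()
        for name, predict in behaviors.items():
            if predict() == actual:
                weights[name] *= 1.1   # successful prediction: reinforce
            else:
                weights[name] *= 0.9   # failed prediction: correct
    return weights
```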
Regarding language, we perceive and store utterances, and we may ascribe 
properties to the utterances such as certainty, origin, etc.  We don't 
necessarily have to prove all utterances as true or false; rather, we store 
them initially as opinion and register them as a known belief of some entity 
(the source of the utterance).  It would be up to a reasoner process to 
actually determine, to some degree of certainty, whether the utterances are 
factual (true or false), counterfactual, hypothesis, or simply opinion.

For my purposes, and from my perspective, determining the truth of utterances 
is less important than recording that they were made.
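[Editor's note: a minimal sketch of this record-first, assess-later storage, 
assuming a simple dataclass. The field names (`certainty`, `status`) and the 
helper `record` are hypothetical illustrations, not part of PM's actual 
design.]

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """What was said and by whom; truth assessment is deferred."""
    text: str
    source: str               # the entity whose belief this registers
    certainty: float = 0.5    # speaker's apparent confidence
    status: str = "opinion"   # a reasoner may later reclassify this as
                              # factual, counterfactual, or hypothesis

beliefs: list[Utterance] = []

def record(text, source, **props):
    """Store the utterance without judging whether it is true."""
    u = Utterance(text, source, **props)
    beliefs.append(u)         # registered as a known belief of `source`
    return u
```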


~PM


  
    
      


      
    
  

                                          


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
