Sounds good. 
I'd like to see a picture.
Cheers,
~PM
Date: Wed, 12 Mar 2014 16:26:11 -0500
Subject: Re: [agi] Decomposition of Intelligence
From: [email protected]
To: [email protected]

It sounds like your design is a slightly finer-grained decomposition than my own: the goal-based portion consists of the observers and reflectors taken together, and the understanding-based portion consists of the coordinators together with the model itself.


On Wed, Mar 12, 2014 at 11:08 AM, Piaget Modeler <[email protected]> wrote:

Perhaps I misunderstood what you meant by physical separation. I took separation to mean 'disconnected' in the sense that at times there would be no communication between the two components.

In PAM-P2 (diagrams attached) we have three categories of agents: observers (that interact with the outside world), coordinators (that build the model through inferencing), and reflectors (that attend to goal creation, goal satisfaction, learning, and equilibration). We've created a language called Premise in which the agents are written and through which they perform their respective tasks. All the agents share a global memory. This is one way to decompose a cognitive architecture. Whether there is any intelligence in it remains to be seen, but we are hopeful.
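For concreteness, here is a minimal sketch of the decomposition in plain Python (not Premise; the names, stub bodies, and scheduling loop are illustrative assumptions, not our actual implementation):

    # Three agent categories sharing one global memory (illustrative only).
    global_memory = {"percepts": [], "model": {}, "goals": []}

    class Observer:
        """Interacts with the outside world: deposits percepts into memory."""
        def step(self, mem):
            mem["percepts"].append({"source": "world", "data": None})  # stub sense

    class Coordinator:
        """Builds the model by inferencing over percepts."""
        def step(self, mem):
            for i, p in enumerate(mem["percepts"]):
                mem["model"][i] = p  # stand-in for real inference

    class Reflector:
        """Attends to goal creation, satisfaction, learning, equilibration."""
        def step(self, mem):
            if not mem["goals"]:
                mem["goals"].append("explore")  # stub goal creation

    agents = [Observer(), Coordinator(), Reflector()]
    for _ in range(3):              # a few scheduler ticks
        for agent in agents:
            agent.step(global_memory)
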
Cheers,

~PM

Date: Wed, 12 Mar 2014 08:55:56 -0500
Subject: Re: [agi] Decomposition of Intelligence
From: [email protected]
To: [email protected]

That is interesting, and I would take it as validation of the sanity (meta-application!) of at least the fundamental concept of decomposing intelligence into modeling and goal-seeking. However, I don't come to the same conclusion you do.


Physical separation does not imply functional independence; the modeling system would necessarily validate its sanity against information received from the goal-seeking agents. Part of its algorithm could well be to slow or even stop its activities during periods of reduced validation opportunity. Also, with separation comes the ability to have multiple agents sharing the same model but pursuing different goals, including some whose sole purpose might be to ask questions and otherwise seek information to validate the model's conclusions.
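A sketch of that arrangement (illustrative Python; every name here is an assumption): several goal agents share one model, and one agent's only job is to validate the model's conclusions:

    class SharedModel:
        def __init__(self):
            self.hypotheses = {"corridor is clear": 0.5}  # hypothesis -> confidence

        def report(self, hypothesis, confirmed):
            c = self.hypotheses.get(hypothesis, 0.5)
            target = 1.0 if confirmed else 0.0
            self.hypotheses[hypothesis] = c + 0.2 * (target - c)  # nudge confidence

    class GoalAgent:
        def __init__(self, goal, model):
            self.goal, self.model = goal, model
        def act(self):
            pass  # pursue self.goal, consulting self.model

    class ValidatorAgent(GoalAgent):
        """Sole purpose: ask questions and feed answers back to the model."""
        def act(self):
            for h in list(self.model.hypotheses):
                self.model.report(h, confirmed=self.ask_world(h))
        def ask_world(self, hypothesis):
            return True  # stand-in for an external query

    model = SharedModel()
    agents = [GoalAgent("fetch", model), GoalAgent("patrol", model),
              ValidatorAgent("validate", model)]
    for a in agents:
        a.act()
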


Additionally, with the separation of goals, it would be possible to assign some agents to the sole task of improving the system's design, while all others are built to remain strictly hands-off with respect to the system's design and to the agents tasked with improving it. This would simplify many of the complicated feedback loops that could produce ill-defined behavior, as discussed in other threads relating to the friendly AI problem.
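A toy rendering of that division of labor (all names assumed): design changes go through a single gate that most agents are built never to use:

    def apply_design_change(change):
        print("applying design change:", change)  # hypothetical hook

    class Agent:
        CAN_MODIFY_DESIGN = False  # hands-off by construction

        def propose_design_change(self, change):
            if not self.CAN_MODIFY_DESIGN:
                raise PermissionError("this agent may not touch the system's design")
            apply_design_change(change)

    class DesignImprover(Agent):
        CAN_MODIFY_DESIGN = True   # the one role tasked with improvement

    DesignImprover().propose_design_change("widen the percept buffer")  # allowed
    # Agent().propose_design_change("anything")  -> PermissionError
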




On Wed, Mar 12, 2014 at 12:45 AM, Piaget Modeler <[email protected]> wrote:

I heard an interesting story about this some months ago. It turns out that people form a mental model of their world, and they continually validate that model by asking others questions. This validation helps them maintain "sanity" and keeps them from thinking thoughts that are completely out of touch with reality. Prisoners in solitary confinement do not have the luxury of asking questions to validate their mental models, and consequently their thinking may quickly lose correlation with "reality".

The conclusion is that your AGI needs a means of continually checking whether its hypotheses are correct. Separating sensory input from action or behavior (particularly when one of those behaviors is question asking) may then not be prudent.
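A toy version of that continual checking (illustrative Python; the constants are arbitrary): confidence in each hypothesis decays unless it can be re-confirmed by asking questions:

    import random

    hypotheses = {"the door is unlocked": 0.9, "it is raining": 0.6}
    can_ask_questions = True   # set False to simulate solitary confinement

    for tick in range(20):
        for h in list(hypotheses):
            hypotheses[h] *= 0.95                       # confidence decays with time
            if can_ask_questions and random.random() < 0.3:
                confirmed = random.random() < 0.5       # stand-in for a real answer
                hypotheses[h] = 0.9 if confirmed else 0.1
    # With can_ask_questions = False, every confidence drifts toward zero:
    # the model loses its correlation with "reality".
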


~PM
--------------

> From: [email protected]
> Date: Wed, 12 Mar 2014 00:37:38 -0500
> Subject: [agi] Decomposition of Intelligence
> To: [email protected]
>
> The concept of the "dispassionate observer" got me wondering: can
> intelligence be decomposed? I think a lot of the confusion regarding
> the definition of intelligence comes down to confusion between
> intelligent thought/understanding and intelligent action/behavior.
>
> The understanding-based definition of intelligence: Intelligence is
> the accurate modeling of the environment through information attained
> by the senses, irrespective of any behavior taken based on that
> understanding.
>
> The behavior-based definition of intelligence: Intelligence is
> effective goal-seeking behavior within the environment, irrespective
> of any model of the environment used to determine that behavior.
>
> It seems to me that if we were to build a system capable of
> constructing an accurate model of the environment, goal-seeking
> behavior would be relatively trivial to implement on top of it. This
> suggests a possible solution to the "friendly AI" problem: keep the
> modeling system physically separate from the goal-seeking system. In
> the event the goal-seeking system goes awry, throw a kill switch that
> prevents it from accessing the modeling system. Without the capability
> for understanding, it ceases to behave intelligently and is
> effectively contained.
>
> Aaron Hosford
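
(As an aside, a toy rendering of the kill-switch proposal above; every name is assumed, and the gate is the only path from goal-seeker to model:)

    class Model:
        def predict(self, situation):
            return "expected outcome"  # stand-in for real understanding

    class ModelGate:
        """The only path from the goal-seeker to the modeling system."""
        def __init__(self, model):
            self._model, self._killed = model, False
        def kill(self):
            self._killed = True
        def predict(self, situation):
            if self._killed:
                raise RuntimeError("kill switch thrown: no access to the model")
            return self._model.predict(situation)

    class GoalSeeker:
        def __init__(self, gate):
            self.gate = gate
        def act(self, situation):
            try:
                plan = self.gate.predict(situation)    # intelligent behavior
            except RuntimeError:
                plan = None                            # contained: acts blindly
            return plan

    gate = ModelGate(Model())
    seeker = GoalSeeker(gate)
    print(seeker.act("novel situation"))   # uses the model
    gate.kill()
    print(seeker.act("novel situation"))   # contained
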
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com