Heard an interesting story on this some months ago. It turns out that people
form a mental model of their world, and they continually validate that model
by asking others questions. This model validation helps them maintain
"sanity" and avoid thinking thoughts that are completely out of touch with
reality. Prisoners in solitary confinement do not have the luxury of asking
questions to validate their mental models, and consequently their thinking may
quickly lose correlation with "reality".
The conclusion is that your AGI needs a means of continually checking
whether its hypotheses are correct. Separating sensory input from action or
behavior (particularly when one of those behaviors is question asking) may
therefore not be prudent.
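
To make the point concrete, here is a toy sketch (everything in it is
hypothetical, invented for illustration): an agent whose beliefs drift unless
it can "ask questions" of its environment. The agent that keeps its
perception-action loop closed stays anchored to the world; the one cut off
from validation wanders, like the prisoner in solitary.

```python
import random

# Hypothetical toy world: a single hidden value the agent tries to model.
WORLD_VALUE = 10.0

def ask_question():
    """Querying the environment returns a noisy but unbiased answer."""
    return WORLD_VALUE + random.gauss(0, 0.5)

def drift(estimate):
    """Unchecked thinking: the model wanders with a slight bias, like rumination."""
    return estimate + random.gauss(0.5, 1.0)

def run(steps, can_ask):
    """Simulate an agent's belief over time, with or without validation."""
    estimate = 0.0
    for _ in range(steps):
        estimate = drift(estimate)
        if can_ask:
            # Validate the hypothesis against sensory input and correct it.
            answer = ask_question()
            estimate += 0.5 * (answer - estimate)
    return estimate

random.seed(0)
anchored = run(100, can_ask=True)    # stays near WORLD_VALUE
isolated = run(100, can_ask=False)   # loses correlation with "reality"
print(abs(anchored - WORLD_VALUE), abs(isolated - WORLD_VALUE))
```

Note that question-asking here is itself a behavior: cut the agent off from
acting (asking) and its model degrades, which is exactly why separating the
model from behavior looks risky.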
~PM
--------------

> From: [email protected]
> Date: Wed, 12 Mar 2014 00:37:38 -0500
> Subject: [agi] Decomposition of Intelligence
> To: [email protected]
> 
> The concept of the "dispassionate observer" got me wondering: can
> intelligence be decomposed? I think a lot of the confusion regarding the
> definition of intelligence comes down to conflating intelligent
> thought/understanding with intelligent action/behavior.
> 
> The understanding-based definition of intelligence: Intelligence is
> the accurate modeling of the environment through information attained
> by the senses, irrespective of any behavior taken based on that
> understanding.
> 
> The behavior-based definition of intelligence: Intelligence is
> effective goal-seeking behavior within the environment, irrespective
> of any model of the environment used to determine that behavior.
> 
> It seems to me that if we were to build a system capable of
> constructing an accurate model of the environment, goal-seeking
> behavior would be relatively trivial to implement on top of this. This
> suggests a possible solution to the "friendly AI" problem: Keep the
> modeling system physically separate from the goal-seeking system. In
> the event the goal seeking system goes awry, throw a kill switch that
> prevents it from accessing the modeling system. Without the capability
> for understanding, it ceases to behave intelligently, and is
> effectively contained.
> 
> Aaron Hosford
> 
> 
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com

