===Vlad===>
the root of the problem lies in projecting reality onto the mind, in
trying to draw a clear-cut distinction between the parts of an AI that
need to find out things about reality and the parts that act on that
reality, when in fact the distinction shouldn't be that sharp

===Porter===>
The distinction is not as sharp as Josh made it seem.  But the human brain
does appear to have wetware substantially dedicated to selecting what to
attend to and what to do.  As most on this list know, the frontal lobe,
basal ganglia, and thalamus are believed to play a key role in attention
and behavior selection.

Of course the selection of what to attend to and what action to take is
often a function of what is being perceived and/or imagined, or of what
goals and drives one is currently laboring under.  Selecting a behavior
often involves comparing its perceived cost/benefit with that of other
options.  And as I stated in my response to Josh, instantiating a behavior
usually requires feedback against perceived reality, often through repeated
cycles of such feedback, so that the behavior is appropriate in that
reality.
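To make that concrete, here is a toy sketch in Python (all names and
numbers are hypothetical, not any particular system's API): one function
compares candidate behaviors by their perceived cost/benefit, and the
other runs the repeated feedback cycles against perceived reality:

from typing import Callable, List, Tuple

def select_behavior(options: List[Tuple[str, float, float]]) -> str:
    # Each option is (name, perceived_benefit, perceived_cost); pick the
    # one whose perceived benefit most exceeds its perceived cost.
    return max(options, key=lambda o: o[1] - o[2])[0]

def instantiate(target: float,
                perceive: Callable[[], float],
                actuate: Callable[[float], None],
                gain: float = 0.5,
                max_cycles: int = 20) -> None:
    # Repeated feedback cycles: perceive, compare against the goal,
    # issue a correction, then re-perceive before acting again.
    for _ in range(max_cycles):
        error = target - perceive()
        if abs(error) < 1e-3:
            break
        actuate(gain * error)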

So you're right: Josh's up-then-down scheme is obviously too much of a
simplification.

-----Original Message-----
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 21, 2008 4:49 PM
To: [email protected]
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

On Tue, Apr 22, 2008 at 12:18 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]>
wrote:
>
>  At a very high level of abstraction, most of the AGI (and AI, for that
>  matter) schemes I've seen can be caricatured as follows:
>
>  1. Receive data from sensors.
>  2. Interpret into higher-level concepts.
>  3. Then a miracle occurs.
>  4. Interpret high-level actions from 3 into motor commands.
>  5. Send to motors.
>
>  What's wrong with this? It implicitly assumes that data flows from 1 to
>  5 in waterfall fashion, and that feedback, if any, occurs either within
>  3 or as a loop thru the external world.
>

Thanks Josh, it clarifies the picture a bit. I think the root of the
problem lies in projecting reality onto the mind, in trying to draw a
clear-cut distinction between the parts of an AI that need to find out
things about reality and the parts that act on that reality, when in
fact the distinction shouldn't be that sharp. It looks that way
because low-level perception is mainly about reality, just as
low-level action can be regarded as a subgoal-generation process, and
the systems we have built (including the disciplines of deliberative
reasoning) operate this way. But the facts that get extracted are
selected for their usefulness for action, and the process of
perceiving them can proceed seamlessly into actuation. High-level
concepts are handles for supervised training, not guardians between
perception and action.
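
To make the contrast concrete, here's a toy sketch in Python (a
one-dimensional caricature with hypothetical names, not anyone's actual
architecture): the first function is the 1-to-5 waterfall, the second
interleaves perception and actuation on every cycle.

# Toy world: a scalar the agent tries to drive to zero.
world = 5.0

def sense() -> float:
    return world                      # 1. receive data from sensors

def actuate(command: float) -> None:
    global world
    world += command                  # 5. send to motors (acts on reality)

def waterfall_step() -> None:
    # Steps 1-5 in strict order; feedback only as a loop thru the world.
    data = sense()                                    # 1.
    concept = "too high" if data > 0 else "too low"   # 2. higher-level concept
    plan = -1.0 if concept == "too high" else 1.0     # 3. "a miracle occurs"
    command = plan                                    # 4. action -> motor command
    actuate(command)                                  # 5.

def interleaved_run(max_cycles: int = 50) -> None:
    # Perception proceeding seamlessly into actuation: every cycle
    # re-perceives and corrects; no fixed boundary between the two.
    for _ in range(max_cycles):
        error = sense()
        if abs(error) < 1e-3:
            break
        actuate(-0.5 * error)  # what is perceived is selected for its
                               # usefulness to the next action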

-- 
Vladimir Nesov
[EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com
