Josh,

I don't think your 5 steps do justice to the more sophisticated views of AGI
that are out there.  They don't describe how I presume a Novamente system
would work.  In the system I have envisioned, all links in the hierarchical
memory work in both directions and support both top-down and bottom-up
processing, and there is also lateral implication.  No miracles occur, other
than massively complex spreading activation, implication, constraint
relaxation, thresholding, attention selection and focusing, and selection
and context-appropriate instantiation of mental and physical behaviors.
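As a toy illustration of that kind of bidirectional, thresholded spreading
activation (the node names, decay constant, and threshold here are my own
illustrative assumptions, not anything taken from Novamente or MicroPsi):

```python
# Toy sketch of bidirectional spreading activation in a hierarchical
# memory.  Links are symmetric, so activation flows bottom-up, top-down,
# and laterally; thresholding prunes weakly activated nodes.

THRESHOLD = 0.2   # activation below this is pruned (thresholding)
DECAY = 0.5       # attenuation per hop

# Each entry activates its neighbors in both directions.
links = {
    "edge":    ["contour"],           # low-level feature
    "contour": ["edge", "face"],      # mid-level, links up and down
    "face":    ["contour", "person"], # high-level concept
    "person":  ["face", "voice"],     # lateral implication
    "voice":   ["person"],
}

def spread(seed, steps=3):
    """Propagate activation outward from a seed node for a few steps."""
    activation = {seed: 1.0}
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            for nbr in links[node]:
                new[nbr] = max(new.get(nbr, 0.0), act * DECAY)
        # thresholding: keep only sufficiently active nodes
        activation = {n: a for n, a in new.items() if a >= THRESHOLD}
    return activation
```

Starting from a low-level "edge" node, activation climbs to "face" but
"person" stays below threshold — a crude stand-in for attention selection.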

If you have read my responses in this thread, one of their common themes is
how both perception up from lower levels and instantiation of higher-level
concepts and behaviors are made context appropriate.  Being context
appropriate involves a combination of bottom-up, top-down, and lateral
implication.

So I don't view your alleged missing conceptual piece as actually missing
from the better AGI thinking.  But until we actually try building systems
like Novamente or larger versions of Joscha Bach's MicroPsi architecture, we
won't know for sure exactly how complex getting the bottom-up, top-down, and
lateral implications and constraints to all work together well will be.  I'm
hoping and expecting it will just be a quite complicated AI engineering
task, made much easier by cheap hardware that will make searching the space
of possible solutions much cheaper and faster --- but it might turn out to
be a full-blown major conceptual piece.

Ed Porter

-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 4:18 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

(Apologies for inadvertent empty reply to this :-)

On Saturday 19 April 2008 11:35:43 am, Ed Porter wrote:
> WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

In a single word: feedback.

At a very high level of abstraction, most of the AGI (and AI, for that
matter) schemes I've seen can be caricatured as follows:

1. Receive data from sensors.
2. Interpret into higher-level concepts.
3. Then a miracle occurs.
4. Interpret high-level actions from 3 into motor commands.
5. Send to motors.

What's wrong with this? It implicitly assumes that data flows from 1 to 5 in
waterfall fashion, and that feedback, if any, occurs either within 3 or as a
loop thru the external world.
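Rendered literally (with placeholder stage functions of my own invention,
not anyone's actual architecture), the caricature is a one-way pipe with no
channel for feedback:

```python
# The five-step caricature as a strict waterfall: each stage sees only
# the output of the stage before it, and nothing flows backwards.

def sense():                       # 1. receive data from sensors
    return "raw pixels"

def interpret(raw):                # 2. interpret into higher-level concepts
    return f"concepts({raw})"

def miracle(concepts):             # 3. then a miracle occurs
    return f"plan({concepts})"

def to_motor_commands(plan):       # 4. high-level actions -> motor commands
    return f"commands({plan})"

def act(commands):                 # 5. send to motors
    return f"sent({commands})"

# Strict waterfall: there is no way for, say, step 3 to re-query step 2.
output = act(to_motor_commands(miracle(interpret(sense()))))
```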

Problem is, in brains, there are actually more nerve fibers transmitting
data from higher numbers to lower, i.e. backwards, than forwards. I think
that the interpretation of sensory input is a much more active process than
we AGIers realize, and that doing things requires a lot more sensing.
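One minimal way to sketch "interpretation as an active process" — a
top-down expectation feeding back to bias bottom-up evidence — might look
like this (the blending rule, mixing weight, and toy numbers are my own
assumptions, not a claim about how brains actually do it):

```python
# Sketch of perception as an active loop: a top-down prior repeatedly
# biases bottom-up evidence, instead of data flowing only one way.

def interpret_actively(evidence, prior, top_down_weight=0.6, rounds=2):
    """Blend bottom-up evidence with top-down expectation for a few rounds."""
    belief = dict(prior)
    for _ in range(rounds):
        for concept in belief:
            bottom_up = evidence.get(concept, 0.0)
            # feedback: the current belief (top-down) pulls the estimate
            belief[concept] = (top_down_weight * belief[concept]
                               + (1 - top_down_weight) * bottom_up)
    return max(belief, key=belief.get)

# Ambiguous input slightly favoring "dog", but a strong prior for "cat":
evidence = {"cat": 0.4, "dog": 0.5}
prior = {"cat": 0.9, "dog": 0.1}
# A pure bottom-up reading would pick "dog"; the active loop picks "cat".
```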

Here's a quip that feels like it has some relevance:
"What's the difference between a physicist and an engineer? A physicist is 
someone who spends all his time building machinery, to help him write an 
equation. An engineer is someone who spends all his time writing equations, 
in order to build machinery."

Josh

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com
