On 5/3/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

 James,

It's interesting - there is a huge general block here - and culture-wide -
against thinking about intelligence in terms of the problems, as opposed to the
means and media of solution (or, if you like, the tasks vs. the tools).



What you seem to be missing, Mike, is that the approach you are advocating
is EXACTLY THE APPROACH TAKEN BY THE MAINSTREAM OF ACADEMIC AI RESEARCH
TODAY.

If you go to the AAAI conference, you can see 1000 papers presented
discussing AI in the context of particular problems, problem classes, etc.

There is really nothing unusual about what you are suggesting.  It's just
that this particular email list is dominated by people who have decided a
different approach is more promising.


Your list is all about means - AGI that uses this language or that, and that
uses a body or not. Similarly, Pei's and Ben's expositions of their systems
are all about how-it-works rather than what-it-does.



Because how-it-works is the hard part!   What-it-does is not the hard part.



What I suggest is an AGI - and almost certainly it will be a robot - that
is given a general set of concepts and education about "moving" and
"navigating" past "obstacles" towards "goals" - much, I guess, like an
infant first learns about navigating round its environment in a very general
way, before it gets down to complex, specific activities.



Ok, that's fine ;-) ... But that is already what we are doing, with the
exception that it's a virtual robot in a sim world rather than a physical
robot in the real world.

Enumerating such goals is neither very hard nor very fascinating; it's the
how-it-works that has been the bottleneck in AGI.

Obviously, a huge number of people have worked on the robotics goals you mention
above over the last decades.  The bottleneck has been knowing how the
software should work ... not articulating the goal itself ...


Note that infants - and the human brain - do have this central capacity to
hold very general concepts - to think in terms of "go there" or "move a bit"
- which are supremely general - and to understand that "go" can mean "crawl",
"run", "hop", "jump", "ride on scooter", "walk", etc.; that "obstacle" or
"something in the way" can refer to literally an infinity of differently
shaped objects, from a carpet to a human being to a tricycle; and that
"move" can mean "move any part of your body - arms, legs, etc.".

[All this fits, I suspect, if loosely, with Hawkins' ideas.]

Once you have an AGI whose brain is structured in this way - with a tree
of generality/particularity and abstractness/concreteness - so that it
understands there are many ways of moving towards goals - then you can teach
it, or it can learn, an in-principle infinite variety of physical,
navigational, goal-seeking activities - from navigating mazes, to searching
buildings, to hunting and chasing other agents/animals, to navigating
videogame mazes, etc. - for it will know that there are many ways to move its
body along many different kinds of paths, past many different kinds of
obstacles, to many different kinds of goals.
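[As a toy illustration only - this is not a sketch of any actual system, and all names in it are made up - the "tree of generality/particularity" above amounts to a concept hierarchy in which a general action like "go" subsumes many concrete specializations, any of which the agent may substitute for another:]

```python
# Toy sketch of a generality/particularity tree for actions.
# Purely illustrative: "go", "crawl", etc. are hypothetical concept names,
# not part of any real AGI architecture.

class Concept:
    def __init__(self, name, specializations=None):
        self.name = name
        self.specializations = specializations or []

    def concrete_options(self):
        """Return all leaf (fully concrete) actions under this concept."""
        if not self.specializations:
            return [self.name]
        options = []
        for s in self.specializations:
            options.extend(s.concrete_options())
        return options

# "go" is supremely general; its leaves are specific ways of moving.
go = Concept("go", [
    Concept("crawl"),
    Concept("walk"),
    Concept("run"),
    Concept("ride", [Concept("ride scooter"), Concept("ride tricycle")]),
])

print(go.concrete_options())
# ['crawl', 'walk', 'run', 'ride scooter', 'ride tricycle']
```

[The point of the sketch is just that knowing the abstract concept "go" gives the agent the whole set of concrete options at once, so learning a new specialization extends every goal-seeking behavior that uses "go".]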

I thought I made much of that last para. clear already - but obviously it
didn't communicate. I'm curious why not - do try and explain what you found
confusing.


It's not that you didn't communicate these ideas ... it's just that,
frankly, these are fairly obvious ideas, and articulating them doesn't take
you very far toward creating an AGI!  ;-)

Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
