Ben,

It took a while for me to communicate to you what I meant by defining "problem 
classes," etc. I suspect that something similar is happening here. What I am 
saying LOOKS obvious to you, I understand. But I don't think it is at all. I 
don't think you're grasping the actual meaning - because the words here are so 
familiar that you're not appreciating that they can be, and are being, used in 
radically different ways and on different levels.

Let's cut to the heart of it -

How does your system, or these other systems you are talking about, represent 
"goal," "move," "obstacle," "path"? Literally, what form do those concepts take 
within any of these systems, what meanings/senses/referents are attached to 
them, and how? Do they actually use the general concept "goal" as such - as 
distinct, obviously, from having their own specific goals?

You see, if any computer system can represent those concepts as the human brain 
actually does, then I would suggest it has at least half solved the problem of 
AGI.
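To make the question concrete - and this is purely my own illustrative sketch, 
not a description of anyone's actual system - consider how a classical 
STRIPS-style planner represents a "goal": as nothing but a set of symbolic 
literals to be satisfied. The system has its specific goals, but the general 
concept "goal" exists only in the programmer's head:

```python
# Purely illustrative sketch of a classical STRIPS-style representation.
# A "goal" here is nothing but a set of symbolic literals; the general
# concept "goal" is nowhere represented inside the system itself.

state = {("at", "robot", "roomA"), ("clear", "doorway")}
goal = {("at", "robot", "roomB")}  # a specific goal: just literals to satisfy

def goal_satisfied(state, goal):
    # To the system, the entire "meaning" of a goal is this subset test.
    return goal <= state

# Hand-simulate the effect of a "move" action (delete and add literals):
state = (state - {("at", "robot", "roomA")}) | {("at", "robot", "roomB")}
```

Nothing in this sketch knows what "at" or "robot" refers to; the referents are 
attached by the programmer, which is exactly the gap being pointed at.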
  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: [email protected] 
  Sent: Thursday, May 03, 2007 2:45 PM
  Subject: Re: [agi] The University of Phoenix Test [was: Why do you think your 
AGI design will work?]





  On 5/3/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
    James,

    It's interesting - there is a huge, culture-wide block here against 
thinking about intelligence in terms of problems as opposed to the means and 
media of solution (or, if you like, the tasks vs. the tools).


  What you seem to be missing, Mike, is that the approach you are advocating is 
EXACTLY THE APPROACH TAKEN BY THE MAINSTREAM OF ACADEMIC AI RESEARCH TODAY.

  If you go to the AAAI conference, you can see 1000 papers presented 
discussing AI in the context of particular problems, problem classes, etc. 

  There is really nothing unusual about what you are suggesting.  It's just 
that this particular email list is dominated by people who have decided a 
different approach is more promising.




    Your list is all about means - AGI that uses this language or that, and 
that uses a body or not. Similarly, Pei's and Ben's expositions of their 
systems are all about how-it-works rather than what-it-does.


  Because how-it-works is the hard part!   What-it-does is not the hard part.
   


    What I suggest is an AGI - and almost certainly it will be a robot - that 
is given a general set of concepts and education about "moving" and 
"navigating" past "obstacles" towards "goals" - much, I guess, like an infant 
first learns about navigating round its environment in a very general way, 
before it gets down to complex, specific activities.


  Ok, that's fine ;-) ... But that is already what we are doing, with the 
exception that it's a virtual robot in a sim world rather than a physical robot 
in the real world. 

  Enumerating such goals is neither very hard nor very fascinating; it's the 
how-it-works that has been the bottleneck in AGI.

  Obviously, a huge number of people have worked on the robotics goals you 
mention above over the last few decades. The bottleneck has been knowing how 
the software should work, not articulating the goal itself.
   


    Note that infants - and the human brain - do have this central capacity to 
hold very general concepts - to think in terms of "go there" or "move a bit," 
which are supremely general - and to understand that "go" can mean "crawl," 
"run," "hop," "jump," "ride on scooter," "walk," etc.; that "obstacle" or 
"something in the way" can refer to literally an infinity of differently shaped 
objects, from a carpet to a human being to a tricycle; and that "move" can mean 
"move any part of your body: arms, legs, etc."

    [All this fits, I suspect, if loosely, with Hawkins' ideas.]

    Once you have an AGI whose brain is structured in this way - with a tree of 
generality/particularity and abstractness/concreteness - so that it understands 
there are many ways of moving towards goals, then you can teach it, or it can 
learn, an in-principle infinite variety of physical, navigational, goal-seeking 
activities: from navigating mazes, to searching buildings, to hunting and 
chasing other agents/animals, to navigating videogame mazes. For it will know 
that there are many ways to move its body along many different kinds of paths, 
past many different kinds of obstacles, to many different kinds of goals.
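The "tree of generality" described above could be sketched - purely as a toy 
illustration of the idea, not as anyone's actual architecture - as an is-a 
hierarchy in which an abstract action concept subsumes many concrete 
realizations:

```python
# Toy illustration (hypothetical) of a generality/particularity tree:
# abstract action concepts map to their more concrete specializations.

hierarchy = {
    "move": ["go", "move-limb"],
    "go": ["crawl", "run", "hop", "jump", "walk", "ride-scooter"],
    "move-limb": ["move-arm", "move-leg"],
}

def specializations(concept):
    """All concrete ways of performing an abstract action (transitive closure)."""
    children = hierarchy.get(concept, [])
    result = list(children)
    for child in children:
        result.extend(specializations(child))
    return result

print(specializations("go"))
# -> ['crawl', 'run', 'hop', 'jump', 'walk', 'ride-scooter']
```

An agent with such a tree could, in principle, substitute any specialization of 
"go" when one route to a goal is blocked - though encoding the tree is of 
course the easy part; grounding the concepts is the hard part.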

    I thought I had already made much of that last paragraph clear - but 
obviously it didn't get through, and I'm curious why not. Do try to explain 
what you found confusing.



  It's not that you didn't communicate these ideas ... it's just that, frankly, 
these are fairly obvious ideas, and articulating them doesn't take you very far 
toward creating an AGI!  ;-) 

  Ben

------------------------------------------------------------------------------
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;

