Ben,

Very interesting and helpful to get this general position so clearly stated.

However:

"To put it simply, once an AGI can understand human language we can teach it 
stuff."

you don't give any prognostic view about the acquisition of language. Mine is - 
"in your dreams." Arguably, most AGI-ers still see handling language as a 
largely logical exercise of translating between symbols in dictionaries and 
texts, with perhaps a little grounding. I see language as an extremely 
sophisticated worldpicture, and a system for handling that picture, which is 
actually, even if not immediately obvious, a multimedia exercise that is both 
continuously embodied in our system and embedded in the real world. Not just a 
mode of, but almost the whole of, the brain in action, interacting with the 
whole of the world. No AGI system will be literate for an awfully long time. 
Your view?

And:

"I think we're at the stage where a team of a couple dozen could do it in 5-10 
years"

I repeat - this is outrageous. You don't have the slightest evidence of 
progress - you [the collective you] haven't solved a single problem of general 
intelligence - a single mode of generalising - so you don't have the slightest 
basis for making predictions of progress other than wish-fulfilment, do you? 

  Ben: A few points...

  1)  
  Closely associating embodiment with GOFAI is just flat-out historically 
wrong.  GOFAI refers to a specific class of approaches to AI that were pursued a 
few decades ago, which were not centered on embodiment as a key concept or 
aspect.

  2)
  Embodiment-based approaches to AGI certainly have not been extensively tried 
and failed in any serious way, simply because of the primitive nature of real 
and virtual robotic technology.  Even right now, real and virtual robotics 
tech is not *quite* there to enable us to pursue embodiment-based AGI in a 
really tractable way.  For instance, humanoid robots like the Nao cost $20K and 
have all sorts of serious actuator problems ... and virtual world tech is not 
built to allow fine-grained AI control of agent skeletons ... etc.   It would 
be more accurate to say that we're 5-15 years away from a condition where 
embodiment-based AGI can be tried out without immense time-wastage on making 
not-quite-ready supporting technologies work...
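
  To make concrete what I mean by "fine-grained AI control of agent 
skeletons", here is a toy Python sketch (all names hypothetical, not any real 
virtual-world API) of the kind of joint-level interface an embodiment-based 
AGI would need a virtual world to expose:

from dataclasses import dataclass

@dataclass
class JointCommand:
    joint_name: str      # e.g. "left_elbow"
    target_angle: float  # radians
    max_torque: float    # crude stand-in for actuator limits

class AgentSkeleton:
    """Toy stand-in for the per-joint avatar control that current
    virtual worlds do not expose (they offer canned animations only)."""
    def __init__(self, joint_names):
        self.angles = {name: 0.0 for name in joint_names}

    def apply(self, cmd):
        # No physics, no clamping: the point is only the *shape* of the
        # interface an embodied AGI would need, not a simulation.
        self.angles[cmd.joint_name] = cmd.target_angle

skeleton = AgentSkeleton(["left_elbow", "right_elbow"])
skeleton.apply(JointCommand("left_elbow", 1.2, 5.0))
print(skeleton.angles)  # {'left_elbow': 1.2, 'right_elbow': 0.0}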

  3)
  I do not think that either humanlike NL understanding or humanlike embodiment 
is in any way necessary for AGI.   I just think that they seem to represent the 
shortest path to getting there, because they represent a path that **we 
understand reasonably well** ... and because AGIs following this path will be 
able to **learn from us** reasonably easily, as opposed to AGIs built on 
fundamentally nonhuman principles.

  To put it simply, once an AGI can understand human language we can teach it 
stuff.  This will be very helpful to it.  We have a lot of experience in 
teaching agents with humanlike bodies, communicating using human language.  
Then it can teach us stuff too.   And human language is riddled through 
and through with embodiment metaphors, suggesting that solving the 
disambiguation problems in linguistics will be much easier for a system with 
vaguely humanlike embodied experience.
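
  As a toy illustration of that last point (purely hypothetical code, not 
OCP's NLP pipeline): an embodied agent has sensorimotor context available as 
an extra disambiguation signal that a disembodied text system simply lacks:

# Recent sensorimotor context biases which sense of an ambiguous word wins.
SENSES = {
    "grasp": {
        "physical": "close the hand around an object",
        "mental": "comprehend an idea",
    },
}

def disambiguate(word, recent_percepts):
    # An embodied agent that has just felt hand contact has a strong
    # prior for the physical sense; a text-only system has no such signal.
    if "hand_contact" in recent_percepts:
        return SENSES[word]["physical"]
    return SENSES[word]["mental"]

print(disambiguate("grasp", {"hand_contact", "object_in_view"}))
print(disambiguate("grasp", {"reading_text"}))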

  4)
  I have articulated a detailed proposal for how to make an AGI using the OCP 
design together with linguistic communication and virtual embodiment.  Rather 
than just a promising-looking assemblage of in-development technologies, the 
proposal is grounded in a coherent holistic theory of how minds work.

  What I don't see in your counterproposal is any kind of grounding of your 
ideas in a theory of mind.  That is: why should I believe that loosely coupling 
a bunch of clever narrow-AI widgets, as you suggest, is going to lead to an AGI 
capable of adapting to fundamentally new situations not envisioned by any of 
its programmers?   I'm not completely ruling out the possibility that this kind 
of strategy could work, but where's the beef?  I'm not asking for a proof, I'm 
asking for a coherent, detailed argument as to why this kind of approach could 
lead to a generally-intelligent mind.
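
  To make the architectural distinction concrete, here is a minimal Python 
sketch (illustrative only; neither half is OCP's actual design). Loosely 
coupled widgets exchange opaque outputs, while an integrated design shares one 
knowledge store, which is what lets a novel situation seen by one module 
reshape what the others do:

# "Loosely coupled widgets": each module returns an opaque string, so
# nothing one module learns can reshape what another module does.
def vision_widget(image):
    return "label:cup"

def language_widget(text):
    return "parse:(pick up, ???)"  # cannot see what vision saw

# Integrated alternative: modules read/write one shared knowledge store
# (a crude stand-in for a common semantic memory), so a novel percept
# in one module changes the behavior of the others.
shared_store = {}

def integrated_vision(image):
    shared_store["visible_object"] = "cup"

def integrated_language(text):
    return ("pick up", shared_store.get("visible_object"))

integrated_vision("<image bytes>")
print(integrated_language("pick it up"))  # ('pick up', 'cup')

  The question in point 4 is exactly whether the first style can ever yield 
the second style's cross-domain adaptation.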

  5)
  It sometimes feels to me like the reason so little progress is made toward 
AGI is that the 2000 people on the planet who are passionate about it are 
moving in 4000 different directions ;-) ... 

  OpenCog is an attempt to get a substantial number of AGI enthusiasts all 
moving in the same direction, without claiming this is the **only** possible 
workable direction.  

  Eventually, supporting technologies will advance enough that some smart guy 
can build an AGI on his own in a year of hacking.  I don't think we're at that 
stage yet -- but I think we're at the stage where a team of a couple dozen 
could do it in 5-10 years.  However, if that level of effort can't be 
systematically summoned (through gov't grants, industry funding, open-source 
volunteerism or wherever) then maybe AGI won't come about till the supporting 
technologies develop further.  My hope is that we can overcome the existing 
collective-psychology and practical-economic obstacles that hold us back from 
creating AGI together, and build a beneficial AGI ASAP ...

  -- Ben G

  On Mon, Oct 6, 2008 at 2:34 AM, David Hart <[EMAIL PROTECTED]> wrote:

    On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:

      So, it has, in fact, been tried before.  It has, in fact, always failed. 
Your comments about the quality of Ben's approach are noted.  Maybe you're 
right.  But, it's not germane to my argument which is that those parts of Ben 
G.'s approach that call for human-level NLU, and that propose embodiment (or 
virtual embodiment) as a way to achieve human-level NLU, have been tried 
before, many times, and have always failed.  If Ben G. knows something he's not 
telling us, then when he does tell us, I'll consider modifying my views.  But, 
remember, my comments were never directed at the OpenCog project or Ben G. 
personally.  They were directed at an AGI *strategy* not invented by Ben G. or 
OpenCog.

    The OCP approach/strategy, both in crucial specifics of its parts and 
particularly in its total synthesis, *IS* novel; I recommend a closer 
re-examination!

    The mere resemblance of some of its parts to past [failed] AI undertakings 
is not enough reason to dismiss those parts, IMHO, dislike of embodiment or NLU 
or any other aspect that has a GOFAI past lurking in the wings notwithstanding.

    OTOH, I will happily agree to disagree on these points to save the AGI list 
from going down in flames! ;-)

    -dave


  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome "  - Dr Samuel Johnson



