John,

You're making a massively important point, which I have been thinking about 
recently.

I think it's more useful to say that AGI-ers are thinking in terms of building 
a *complete AGI system* (rather than a person), which could range from a simple 
animal robot to fantasies of an all-intelligent brain-in-a-box.

No AGI-er has (and no team of supercreative AGI-ers could have) even a remotely 
realistic understanding of how massively complex a feat this would be.

I've recently come round to thinking that realistic AGI in the near future will 
have to concentrate instead (or certainly have one major focus) on what might 
be called "local AGI" as opposed to "global AGI" - getting a robot able to do 
just *one* or two things in a truly general way, with a very well-defined goal, 
rather than a true all-round AGI robot system. (More on this another time.)

Look at Venter - he is not trying to build a complete artificial cell in one go. 
That would be insane, and yet it would not amount to even a tiny fraction of 
the insanity of present AGI system-builders' goals. He is taking it one narrow 
step at a time - one relatively narrow part at a time. That is a law of both 
natural and machine evolution to which I don't think there are any exceptions: 
from simple to complex in gradual, progressive stages.




From: John G. Rose 
Sent: Thursday, June 24, 2010 6:20 PM
To: agi 
Subject: RE: [agi] The problem with AGI per Sloman


I think some confusion occurs where AGI researchers want to build an artificial 
person versus artificial general intelligence. An AGI might be just a 
computational model running in software that can solve problems across domains. 
An artificial person would involve much else in addition to AGI.


With intelligence engineering and other engineering, that artificial person 
could be built, or at least some interface where it appears to be a person. And 
a huge benefit is in having artificial people to do things that real people do. 
But pursuing AGI need not be the pursuit of building artificial people.


Also, an AGI need not be able to solve ALL problems initially. Coming out and 
asking why some AGI theory wouldn't be able to figure out how to solve some 
problem like, say, world hunger - I mean, WTF is that?


John


From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, June 24, 2010 5:33 AM
To: agi
Subject: [agi] The problem with AGI per Sloman


"One of the problems of AI researchers is that too often they start off with an 
inadequate
understanding of the problems and believe that solutions are only a few years 
away. We need an educational system that not only teaches techniques and 
solutions, but also an understanding of problems and their difficulty - which 
can come from a broader multi-disciplinary education. That could speed up 
progress."

A. Sloman


(& who else keeps saying that?)
