I have been an admirer of your robotic work for many years, but I just
couldn't help asking you a question.

 

I have created a few robots using sonar and other sensors, and I totally
agree that the real world is very messy.  However, most of my thinking happens
at a much higher level than what my senses provide.  The messy world is turned
into my symbolic world through a complex maze of systems, but in the end my
conscious brain, at least, uses the symbols created, not messy reality.

 

The idea that the whole AGI puzzle can be solved exclusively with
predicate logic doesn't fit, I agree, with what we both know about sensor
data in the real world.  On the other hand, I don't see why a huge chunk
(and, I would argue, the more difficult part) of the "intelligence part" of an
AGI can't be done using words, numbers, and pictures (idealized pictures, I
believe, are more valuable than real ones) in a vast series of models.

 

I can't see why AGI researchers would have to take an either/or approach to
constructing their AGI when dealing with the real world AND intelligently
correlating the symbols created.  Having both seems like an obviously better
approach.  Neither approach by itself seems even remotely plausible to me.
If both areas can't be covered by a single researcher or group, then I see no
reason why each group can't do what it does best and then develop an
interface so that the final AGI has the benefit of both.

 

I am not proposing that all AGI designs are equal or useful, nor am I
proposing that an AGI should be created as an amalgamation of all types of
AGI.  I am referring specifically to symbolic intelligence (in some
form) and the systems that would turn messy real-world data into symbols
that could be used by the symbolic system.

 

I have great respect for the people working on turning pictures and
vision into useful symbolic objects.  I believe this work, speech recognition,
etc., are greatly needed and would be very helpful to an AGI, but I don't
believe these problems need to be solved as a first step toward AGI, or that
they are strictly necessary to at least get close to human-level intelligence.
Is a blind man who is also a paraplegic necessarily considered less
intelligent than an able-bodied person?

 

David Clark

 

From: Bob Mottram [mailto:[EMAIL PROTECTED] 
Sent: March-04-08 8:58 AM
To: [email protected]
Subject: Re: [agi] would anyone want to use a commonsense KB?

 

On 04/03/2008, Mark Waser <[EMAIL PROTECTED]> wrote:

>> But the question is whether the internal knowledge representation of the
AGI needs to allow ambiguities, or whether we should use an ambiguity-free
representation.  It seems that the latter choice is better. 

 

An excellent point.  But what if the representation is natural language with
pointers to the specific intended meaning of any words that are possibly
ambiguous?  That would seem to be the best of both worlds.
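A minimal sketch of that "natural language with pointers" idea, assuming a
hypothetical Token type and WordNet-style sense keys (names chosen purely for
illustration, not from any particular system):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Token:
    """A surface word plus an optional pointer to its intended sense."""
    surface: str
    sense_id: Optional[str] = None  # e.g. a WordNet-style key; None if unambiguous

def render(tokens: List[Token]) -> str:
    """Reconstruct the plain natural-language text from annotated tokens."""
    return " ".join(t.surface for t in tokens)

# Only "bank" is ambiguous here, so only it carries a sense pointer;
# the surface text remains ordinary natural language.
sentence = [
    Token("she"),
    Token("sat"),
    Token("by"),
    Token("the"),
    Token("bank", sense_id="bank.n.01"),  # the river bank, not the institution
]

print(render(sentence))
```

The representation stays readable as plain language, while any consumer that
needs disambiguation can follow the sense pointers.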



This is fine provided that the AGI lives inside a chess-like, ambiguity-free
world, which could be a simulation or maybe some abstract data-mining
environment.

 


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com
