I think probably every "AGI-curious" person has intuitions about this subject.
Here are mine:
 
Some people, especially those espousing a modular, software-engineering type of
approach, seem to think that a perceptual system should basically spit out a
token for "chair" when it sees a chair, after which a reasoning system can take
over to reason about chairs and what you might do with them -- and further, it
is thought that the "reasoning about chairs" part is the real essence of
intelligence, whereas chair detection is just discardable pre-processing.  My
personal intuition says that by the time you have taken experience and boiled
it down to a token labeled "chair", you have discarded almost everything
important about the experience, and all that is left is something that can be
used by our logical inference systems.  And although that ability to do logical
inference (probabilistic or pure) is a super-cool thing that humans can do, it
is a fairly minor part of our intelligence.
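The modular pipeline I'm describing -- perception emitting a token, reasoning consuming only that token -- might be caricatured in code like this (every name here is illustrative, not anyone's actual system):

```python
def perceive(image):
    """Stand-in perception module: collapses a rich sensory input
    down to a single discrete token.  In this caricature, everything
    else about the experience is discarded at this interface."""
    # (hypothetical placeholder -- a real detector would go here)
    return "chair"

# Toy knowledge base the reasoning module can consult.
KNOWLEDGE = {
    "chair": ["sit on it", "stand on it to reach a shelf"],
}

def reason(token):
    """Stand-in reasoning module: manipulates tokens and stored
    facts about tokens, never the underlying pixels or experience."""
    return KNOWLEDGE.get(token, [])

scene = object()          # arbitrary stand-in for raw experience
token = perceive(scene)   # the information bottleneck
actions = reason(token)   # all further "thought" sees only the token
```

The point of the sketch is the narrowness of the interface between the two functions: whatever `perceive` throws away, `reason` can never get back.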
 
Often I see AGI types referring to physical embodiment as a costly sideshow or 
as something that would be nice if a team of roboticists were available.  But 
really, a simple robot is trivial to build, and even a camera on a pan/tilt 
base pointed at an interesting physical location is way easier to build than a 
detailed simulation world.  The next objection is that "image processing" is 
too expensive and difficult.  I guess my only thought is that it doesn't
inspire confidence in an approach if the very first layer of neural processing 
is too hard.  I suspect the real issue is that even if you do the "image 
processing", then what?  What do you do with the output?
 
Ignoring those issues -- inventing a way of representing and manipulating 
"knowledge", and assuming that sensory processes can create those data 
structures if built properly -- can work IF it turns out that brains are just 
really really bad at being "intelligent".  That is, if the extreme tip of the 
evolutionary iceberg (some thousands of generations of lightly-populated 
species) finally stumbled on the fluid symbol-manipulating abilities that 
define intelligence, and the rest of the historical structures are only mildly 
more important than organs that pump blood -- if that's true, thinking about 
all this low-level grunk is a waste of time.  I actually hope that it's true, 
but I doubt it.  To the first people who thought they could code our magical
symbol-processing abilities on a machine, it must have seemed like an exciting
theory.
 
 
 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email