On Mon, 19 Feb 2007, John Scanlon wrote:

) Is there anyone out there who has a sense that most of the work being 
) done in AI is still following the same track that has failed for fifty 
) years now?  The focus on logic as thought, or neural nets as the 
) bottom-up, brain-imitating solution just isn't getting anywhere?  It's 
) the same thing, and it's never getting anywhere.

Yes, they are mostly building robots and trying to pick up blocks or catch 
balls.  Visual perception and motor control for these tasks were first 
demonstrated, in a limited context, in the 1960s.  You are correct that the 
bottom-up approach is not a theory-driven approach.  People invoke mystical 
words such as Emergence or Complexity to explain how their very simple 
model of mind could ultimately think like a human.  Top-down design of an 
A.I. requires a theory of what abstract thought processes do.

) The missing component is thought.  What is thought, and how do human 
) beings think?  There is no reason that thought cannot be implemented in 
) a sufficiently powerful computing machine -- the problem is how to 
) implement it.

Right, there are many theories of how to implement an AI.  I wouldn't 
worry too much about trying to define Thought.  It has different 
definitions depending on the problem-solving context in which it is used.  
If you focus on making a machine solve problems, then you may find that 
some part of the machine you build resembles your many uses of the term 
Thought.

) Logical deduction or inference is not thought.  It is mechanical symbol 
) manipulation that can be programmed into any scientific pocket 
) calculator.

Logical deduction is only one way to think.  As you say, there are many 
other ways.  Some are simple reactive processes; others are more 
deliberative and form multistep plans; still others are reflective and 
respond to problems in the planning and inference processes themselves.
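
To make that layering concrete, here is a minimal sketch in Python.  It is 
only an illustration under my own assumptions: the class names 
(ReactiveLayer, DeliberativeLayer, ReflectiveLayer) and their toy behaviors 
are hypothetical, not any particular published architecture.

class ReactiveLayer:
    """Maps percepts directly to actions; no planning involved."""
    def act(self, percept):
        return "turn" if percept == "obstacle" else "forward"

class DeliberativeLayer:
    """Forms a multistep plan toward an explicit goal."""
    def plan(self, state, goal):
        # Toy decomposition: three generic steps toward the goal.
        return [f"step {i} toward {goal}" for i in range(1, 4)]

class ReflectiveLayer:
    """Watches the planner itself and reacts to failures in planning."""
    def review(self, plan, outcome):
        # If the plan failed, change how we plan rather than how we act.
        return "switch planning strategy" if outcome == "stuck" else "continue"

reactive = ReactiveLayer()
deliberative = DeliberativeLayer()
reflective = ReflectiveLayer()

print(reactive.act("obstacle"))                      # immediate reaction
plan = deliberative.plan("start", "reach the door")  # multistep plan
print(reflective.review(plan, "stuck"))              # reflection on the plan

The only point of the toy is the division of labor: the reactive layer 
answers the world, the deliberative layer answers goals, and the reflective 
layer answers trouble inside the deliberative layer.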

) Human intelligence is based on animal intelligence.

No.  Human intelligence evolved from animal intelligence, but it is not 
necessarily a simple subsumption of animal intelligence.

) The world is continuous, spatiotemporal, and non-discrete, and simply is 
) not describable in logical terms.  A true AI system has to model the 
) world in the same way -- spatiotemporal sensorimotor maps.  Animal 
) intelligence.

Logical parts of the world are describable in logical terms.  We think in 
many different ways, and each of those ways uses a different representation 
of the world.  We have many specific solutions to specific types of problem 
solving, but to make a general problem solver we need ways to map these 
representations from one specific problem solver to another.  That mapping 
gives us alternatives to pursue when one problem solver gets stuck, and 
this kind of robust problem solving requires reasoning by analogy.
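
As a rough illustration of what I mean, here is a small Python sketch.  
Everything in it is hypothetical (the two solvers and the crude translation 
rule are my own toy assumptions): a general solver tries several specific 
solvers, and when one cannot read the problem's representation, a mapping 
step re-describes the problem in that solver's terms.

def geometric_solver(problem):
    # Understands problems posed as coordinates; returns None when stuck.
    if "coords" in problem:
        return f"path through {problem['coords']}"
    return None

def symbolic_solver(problem):
    # Understands problems posed as logical facts; returns None when stuck.
    if "facts" in problem:
        return f"deduction from {problem['facts']}"
    return None

def translate(problem, target):
    # Crude analogy step: re-describe a coordinate problem as facts.
    if target is symbolic_solver and "coords" in problem:
        return {"facts": [f"at{c}" for c in problem["coords"]]}
    return problem

def general_solver(problem, solvers):
    # Try each specific solver on a translated view of the same problem.
    for solver in solvers:
        answer = solver(translate(problem, solver))
        if answer is not None:
            return answer
    return "all solvers stuck"

print(general_solver({"coords": [(0, 0), (1, 2)]},
                     [symbolic_solver, geometric_solver]))

The translate step is where the analogy lives; in a real system that 
mapping would be the hard part, not a one-line rewrite of a dictionary.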

) Ask some questions, and I'll tell you what I think.

People always have a lot to say, but what we need more of are working 
algorithms and demonstrations of robust problem solving.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
