On 2/19/07, Bo Morgan <[EMAIL PROTECTED]> wrote:


On Mon, 19 Feb 2007, John Scanlon wrote:

) Is there anyone out there who has a sense that most of the work being
) done in AI is still following the same track that has failed for fifty
) years now?  The focus on logic as thought, or neural nets as the
) bottom-up, brain-imitating solution just isn't getting anywhere?  It's
) the same thing, and it's never getting anywhere.

Yes, they are mostly building robots and trying to pick up blocks or catch
balls.  Visual perception and motor control for this kind of task were
first demonstrated, in a limited context, in the 1960s.  You are correct
that the bottom-up approach is not a theory-driven approach.  People invoke
mystical words, such as Emergence or Complexity, to explain how their very
simple model of mind could ultimately think like a human.  Top-down design
of an AI requires a theory of what abstract thought processes do.

) The missing component is thought.  What is thought, and how do human
) beings think?  There is no reason that thought cannot be implemented in
) a sufficiently powerful computing machine -- the problem is how to
) implement it.

Right, there are many theories of how to implement an AI.  I wouldn't
worry too much about trying to define Thought.  It has different
definitions depending on the problem-solving context in which it is used.
If you focus on making a machine solve problems, you may find that some
part of the machine you build resembles your many uses of the term
Thought.

) Logical deduction or inference is not thought.  It is mechanical symbol
) manipulation that can be programmed into any scientific pocket
) calculator.

Logical deduction is only one way to think.  As you say, there are many
other ways.  Some are simple reactive processes; others are more
deliberative and form multistep plans; still others are reflective and
react to problems that arise in the planning and inference processes
themselves.
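The three kinds of thinking above can be sketched in code.  This is a toy
illustration only, not anyone's actual architecture; all class names,
method names, and the "distance to goal" model are invented for the sake
of the example.

```python
# Toy sketch of three layers of thinking: reactive (percept -> action),
# deliberative (forms multistep plans), reflective (watches the planner
# and reacts to problems in the planning process itself).
# All names here are illustrative assumptions, not an established design.

class ReactiveLayer:
    """Maps percepts directly to actions, with no lookahead."""
    def act(self, percept):
        return "flinch" if percept == "threat" else "idle"

class DeliberativeLayer:
    """Forms a multistep plan toward a goal (toy integer 'distance' model)."""
    def plan(self, state, goal):
        steps = []
        while state < goal:
            state += 1
            steps.append(f"step to {state}")
        return steps

class ReflectiveLayer:
    """Reacts to problems in the planning process itself, not in the world."""
    def review(self, plan, limit=10):
        if len(plan) > limit:
            return "plan too long; try a different representation"
        return "plan accepted"

reactive = ReactiveLayer()
deliberative = DeliberativeLayer()
reflective = ReflectiveLayer()

print(reactive.act("threat"))          # immediate reaction, no planning
plan = deliberative.plan(state=0, goal=3)
print(plan)                            # a multistep plan
print(reflective.review(plan))         # judgment about the plan itself
```

The point of the layering is that the reflective layer's inputs are the
deliberative layer's outputs, not the world's.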

) Human intelligence is based on animal intelligence.

No.  Human intelligence has evolved from animal intelligence.  Human
intelligence is not necessarily a simple subsumption of animal
intelligence.

) The world is continuous, spatiotemporal, and non-discrete, and simply is
) not describable in logical terms.  A true AI system has to model the
) world in the same way -- spatiotemporal sensorimotor maps.  Animal
) intelligence.

Logical parts of the world are describable in logical terms.  We think in
many different ways, and each way uses a different representation of the
world.  We have many specific solutions to specific types of problem
solving, but to build a general problem solver we need ways to map these
representations from one specific problem solver to another.  That gives
us alternatives to pursue when a specific problem solver gets stuck.  This
kind of robust problem solving requires reasoning by analogy.
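The re-mapping idea can be made concrete with a tiny sketch: one solver
gets stuck on a representation it does not understand, so a dispatcher
translates the problem into a representation it does understand and tries
again.  The solver, the translation table, and the problem formats below
are all invented for illustration.

```python
# Hedged sketch: when a solver gets stuck, map the problem into another
# representation it can handle.  Both representations and the translator
# are toy assumptions, not a real system.

def arithmetic_solver(problem):
    """Handles problems posed as 'a+b' strings; returns None when stuck."""
    try:
        a, b = problem.split("+")
        return int(a) + int(b)
    except ValueError:
        return None  # stuck: this is not the representation it expects

def translate_words_to_arithmetic(problem):
    """The analogical mapping: 'sum of a and b' -> 'a+b'."""
    words = problem.split()  # ["sum", "of", a, "and", b]
    return f"{words[2]}+{words[4]}"

def solve(problem):
    answer = arithmetic_solver(problem)
    if answer is None:
        # stuck: re-map the representation and pursue the alternative route
        answer = arithmetic_solver(translate_words_to_arithmetic(problem))
    return answer

print(solve("2+3"))             # solved directly
print(solve("sum of 2 and 3"))  # solved after re-mapping the representation
```

The robustness comes from the mapping step, not from either solver alone.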


I hope my ignorance does not bother this list too much.

Regarding what may or may not be done through logical inference and other
sufficiently expressive symbolic approaches: given unlimited resources,
would it not be possible to implement a universal Turing machine (UTM)
with at most a finite overhead?  That in turn would mean that any
algorithm running on a UTM could also run on a sufficiently expressive
symbolic system, whether it "learns" or not.  I do not deny that this may
be inefficient, both in running speed and in implementation effort.  In
some cases the logical inference overhead may even be eliminated entirely,
and it is obviously sometimes more efficient to implement a system
directly on certain substrates.  I do not think, however, that such a
strict and poorly formulated position is rationally justified, since it is
not clear (at least not to me) that the logical inference can be
efficiently reduced for every algorithm expressed in the logical language.

Just rambling, and perhaps unrelated: maybe the brain's operations do not
even allow for UTMs, since they are not so clean, and there might not be
appropriate transformations.  If we assume the Church-Turing thesis, we
might then find that there are problems that artificial components can
solve that humans cannot, even given unlimited resources.  Perhaps this is
not very likely, since we can simulate the steps of a UTM by hand, and
even the errors can be corrected given enough time.
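The simulation argument above can be illustrated with a few lines of code:
a Turing machine is nothing more than a symbolic rule table of
(state, symbol) -> (state, symbol, move) entries, and a fixed interpreter
simulates it with constant overhead per step.  The particular machine
below (appending a 1 to a unary string) is an invented example.

```python
# Illustration: a symbolic rule system (a lookup table of transition
# rules) simulating a Turing machine step-for-step.  The interpreter is
# fixed; only the rule table changes per machine, which is the sense in
# which the symbolic system incurs at most a finite overhead.

def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]  # pure symbol lookup
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip("_")

# Toy machine: scan right over the 1s, write one more 1, then halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt",  "1", "R"),
}

print(run_tm(rules, "111"))  # unary 3 becomes unary 4
```

Whether the brain admits such a clean rule-table description is exactly
the open question raised above; the code only shows that symbol
manipulation suffices for universality, not that it is efficient.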

) Ask some questions, and I'll tell you what I think.

People always have a lot to say, but what we need more of are working
algorithms and demonstrations of robust problem solving.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

