--- Eugen Leitl <[EMAIL PROTECTED]> wrote:

> On Tue, Mar 27, 2007 at 07:46:07PM -0700, Matt Mahoney wrote:
> > a program that can beat you in chess, then it is not AI because it is just
> > executing an algorithm that very roughly models your thought process when
> 
> What makes you think it models your thought process? We don't know a lot
> of what the brain does when it's playing chess. A lot of it lights up,
> though.

When I need to come up with an algorithm, I sometimes work out a few examples
by hand until I understand the logical steps I took, then write them in code. 
That is what I mean.  For example, if I wanted to write a chess playing
program, I would first play a few games and think about how I think.  I would
then realize the solution is a tree search with heuristic pruning, e.g. "if I
move my pawn then he takes my queen, so I don't need to follow that path any
further."  Obviously this is an approximation, a model of what is really going
on.  Deep Blue had to compute 200,000,000 chess positions per second to beat
Kasparov, who needed to compute only 3 positions per second.  Up until then,
chess was considered an AI problem.  Now it is just a brute force engineering
problem: raw computation making up for our ignorance of the pruning heuristics
grandmasters use.
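
To make that concrete, here is a rough sketch in Python of the kind of search
I mean: depth-limited negamax with alpha-beta pruning.  The Position interface
(legal_moves, play, evaluate) and the little Nim game are stand-ins I made up
for illustration, not anything Deep Blue or a real chess engine actually uses;
a real program would plug in a chess move generator and a far better
evaluation function.

# Depth-limited negamax with alpha-beta pruning.  "Position" just needs
# legal_moves(), play(move), and evaluate(); Nim below is a toy stand-in.
def negamax(pos, depth, alpha=float("-inf"), beta=float("inf")):
    """Best achievable score for the side to move, searching depth plies."""
    moves = pos.legal_moves()
    if depth == 0 or not moves:
        return pos.evaluate()            # heuristic (or terminal) value
    best = float("-inf")
    for move in moves:
        score = -negamax(pos.play(move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                # "he takes my queen": skip the rest
            break
    return best

class Nim:
    """Toy game: take 1-3 stones per turn; taking the last stone wins."""
    def __init__(self, stones):
        self.stones = stones
    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]
    def play(self, n):
        return Nim(self.stones - n)
    def evaluate(self):
        # No stones left means the opponent just took the last one,
        # so the side to move has lost.
        return -1 if self.stones == 0 else 0

if __name__ == "__main__":
    print(negamax(Nim(7), depth=10))     # 1: the side to move can force a win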

But my point is that we will never solve AI with this attitude.  We have a
long history of attacking AI problems like theorem proving or speech
recognition, and then once we solve them they are not AI any more.  If a
machine passes the Turing test by using a giant lookup table or some other
horribly inefficient brute force algorithm executed on a billion processors,
is that AI?  Or is it required that we understand how the brain works?

> In order to understand human language all the time, the system has to pass
> the Turing test.  I think you will know when any system passes it.

What if a system would have passed except that it answered questions faster
than any human could possibly type?  What if it made too few errors?  What if 
it never slept?  We already have many examples of machines that surpass human
intelligence in some areas but fail in others.  There is no economic incentive
to reproduce human weaknesses, or to reproduce capabilities that don't
increase value.  If the goal of AGI is to replace all human labor, we can do
that without ever passing the Turing test.  Machines don't need knowledge
unrelated to their jobs.  They will serve us better if we do not mistake them
for humans.  Does that mean we have failed to solve AGI?


-- Matt Mahoney, [EMAIL PROTECTED]
