> From: Dan Minette <[EMAIL PROTECTED]>

> From: "Erik Reuter" <[EMAIL PROTECTED]>
 
> 
> > On Sun, Mar 10, 2002 at 09:26:51PM -0600, Dan Minette wrote:
> >
> > > Before I answer this, Erik, let me ask you a question that will
> > > help me frame an answer.  Do you think science is about the Truth,
> > > or is it a means by which we model, predict, and manipulate
> > > phenomena?
> >
> > Sniff. Either this is a trap or you must not know my views as well
> > as I would have guessed.
> 
> No trap at all.  I now know I need to argue from a practical point of
> view.  Let us consider the practicalities of AI.  I remember when
> they were first touted 20 years ago, when LISP was a hot language.
> Expert systems were expected to come close to replacing various human
> experts within 5 years.  Anyone from a log analyst to a radiologist
> could be replaced by an expert system.
> 
> It's now 20 years later, and the horizon for such uses of AI appears
> to be further away than when they were first touted.  Yes, there are
> uses for expert systems, and for neural networks.  But they are far
> more limited than they were expected to be when computer power was
> almost a million times as expensive.
> 
> Second, when computers are used, the algorithms are carefully
> written, debugged, tested, rewritten, redebugged, retested, etc.  At
> every step there is a designer (or designers) figuring out what went
> wrong and what has to be fixed.  Thus, algorithms seem to be an
> expression of how carefully a designer thinks, not something that
> just happens.
> 
> With real AI, programs would have to be self-modifying.  This has
> been touted as being just around the corner, again, for a couple of
> decades.  From what I've heard, the successes have been limited to
> very restricted toy models (a technical term, not inherently
> derisive).  In most cases, the program soon blows up.
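
As a concrete illustration of the "soon blows up" point, here is a
deliberately naive toy of my own (nothing like this is from Dan's
post): a program that randomly mutates its own source.  Most mutations
fail even to parse, and none is guided by any understanding of a goal.

    # Toy illustration (my assumption, not from the thread): naive
    # "self-modification" that flips one random character in the source
    # and checks whether the mutant even compiles.  (Syntax check only;
    # the mutants are never executed.)
    import random

    SOURCE = "def f(x):\n    return x * 2 + 1\n"

    def mutate(src):
        i = random.randrange(len(src))
        return src[:i] + chr(random.randint(32, 126)) + src[i + 1:]

    valid = 0
    trials = 1000
    for _ in range(trials):
        try:
            compile(mutate(SOURCE), "<mutant>", "exec")
            valid += 1
        except SyntaxError:
            pass

    print(f"{valid}/{trials} random mutations even parse")
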
> 
> Third, humans appear to be able to do things that have been proven
> not to be reducible to algorithms.  Handling self-referential
> statements is one of these things.  I realize that Dennett argues
> that humans only appear to be able to do this: it's just that they
> have algorithms that search the possibilities until they stumble over
> them, like a chess playing program.  However, it is very curious that
> humans would have such an algorithm in their heads without being able
> to access most of the results, since such access would be
> evolutionarily favored.
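
For reference, the "search the possibilities" style of algorithm
Dennett invokes looks something like the following toy (mine, purely
illustrative): exhaustive minimax on the game of Nim.  A chess program
does the same thing at vastly larger scale, and nothing in it
"understands" the game; it just enumerates moves.  That is the picture
Dan finds implausible as an account of minds.

    # Toy minimax for Nim (illustrative sketch, not from the thread):
    # players alternately take 1-3 stones; taking the last stone wins.
    # Pure exhaustive search over the game tree.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_move(stones):
        """Return (stones_to_take, can_force_win) for the mover."""
        for take in (1, 2, 3):
            if take == stones:
                return take, True    # taking the last stone wins
            if take < stones and not best_move(stones - take)[1]:
                return take, True    # leave opponent a losing position
        return 1, False              # every move loses; take 1 and hope

    for n in (4, 7, 10):
        move, winning = best_move(n)
        print(f"{n} stones: take {move}, forced win: {winning}")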

In other words, a human mind can do non-polynomial (NP) calculations
in polynomial time, like a quantum computer.
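
To make that concrete (the example is mine, not anything from the
thread): for an NP problem such as Boolean satisfiability, *checking*
a proposed solution takes polynomial time, while *finding* one by
brute-force search doubles in cost with every added variable.

    # Illustrative sketch (my example): CNF-SAT.  A clause is a tuple
    # of literals; +i means variable i is True, -i means variable i is
    # False (variables are 1-indexed).
    from itertools import product

    def check(clauses, assignment):
        """Polynomial-time verification of a candidate assignment."""
        return all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses)

    def brute_force(clauses, n):
        """Exponential-time search: try all 2**n assignments."""
        for bits in product((False, True), repeat=n):
            if check(clauses, bits):
                return bits
        return None

    clauses = [(1, -2), (-1, 3), (2, 3)]   # (x1 or ~x2) & (~x1 or x3) & (x2 or x3)
    print(brute_force(clauses, 3))         # -> (False, False, True)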

> Fourth, from the time of Bohr's early writings on the implications of
> QM for biology (in the '30s, IIRC), it has been thought that QM
> renders the prediction of brain states impossible.  Even though, as
> Zimmy pointed out, the neuron is rather large on a quantum scale, it
> is also very complex (as Zimmy also pointed out).  Thus, quantum
> effects can grow quite quickly.  Indeed, in a post a while back, I
> showed how on an idealized pool table, quantum chaos takes effect
> within 1-2 seconds of the positions and momenta of the balls being
> well established.
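
Dan's amplification argument can be sketched numerically.  Every
figure below is my own rough assumption (ball mass, speed, spacing),
not a number from his earlier post.  The idea: each collision
multiplies an angular error by roughly (distance between balls / ball
radius), so an initial uncertainty at the hbar-limited scale grows
exponentially.

    # Back-of-envelope sketch (all numbers are my assumptions): how
    # many collisions until a quantum-limited angular uncertainty in a
    # ball's direction becomes order 1?
    HBAR = 1.054e-34      # J*s
    mass = 0.17           # kg, pool ball (assumed)
    speed = 1.0           # m/s (assumed)
    radius = 0.028        # m, ball radius (assumed)
    free_path = 0.3       # m, typical distance between balls (assumed)

    # Minimum initial angular spread allowed by the uncertainty
    # principle: dtheta ~ hbar / (momentum * free_path).
    dtheta = HBAR / (mass * speed * free_path)
    gain = free_path / radius     # error amplification per collision

    n = 0
    while dtheta < 1.0:           # until the direction is order-1 uncertain
        dtheta *= gain
        n += 1
    print(f"~{n} collisions, ~{n * free_path / speed:.1f} s at {speed} m/s")

With these numbers the spread reaches order 1 after a few dozen
collisions; the exact number of seconds depends entirely on the
assumed speeds and spacings.
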
> 
> Once we saw how computers worked, it was reasonable to suppose that
> our brains might also work algorithmically.  However, the last 50
> years of experimentation have not yielded the results expected by the
> AI enthusiasts.  Instead, very serious arguments have been raised
> against that hypothesis.  Practically speaking, while the hypothesis
> has not been falsified, it doesn't seem to be a leading candidate.
> 
> > Anyway, I am a pragmatist, and an experimentalist. Truth be damned!
> > :-)
> 
> As far as I can see, the last 20 years of experimentation show that
> the claims for AI are vastly overstated: if it works at all, it will
> be on a much longer time frame than touted.  My practical experience
> is that when someone starts talking about AI solutions, one should
> count the silverware _very_ carefully before they leave.
