----- Original Message -----
From: "Joshua Bell" <[EMAIL PROTECTED]>
To: "Brin-L" <[EMAIL PROTECTED]>
Sent: Tuesday, March 12, 2002 1:37 AM
Subject: Re: SCOUTED: Science Meets Spirituality, and Wireless Nanotech VR


> "Dan Minette" <[EMAIL PROTECTED]> wrote:
> >
> >Third, humans appear to be able to do things that have been proven to not
> >be capable of being reduced to algorithms.
>
> Can you give an example that can't be handled by the following algorithm:
>
> - Make a handful of well-educated guesses a second
> - Filter using a massively parallel pattern matcher with billions of
> computational elements trained with the task for 18+ years
> - Repeat a few times. Say "Hmmmm...." while doing this.
> - After some stability threshold is reached, say "Aha!"
> - Try out the result. If it's wrong, say "Ooops!" and color your cheeks
> red.

The self-reference cases I talked about have been proven impossible to handle
algorithmically.  Dennett suggested a workaround for this: he proposes that
the mind runs through millions or billions of possible theories in order to
find one that handles the self-reference case (this is a three-year-old
memory, so my exact understanding of his argument may be a bit fuzzy).

If he is right, we don't make a handful of guesses per second; we make at
least thousands.  And we evaluate them in a very peculiar manner: we do a
very good job of getting to the result, but have no ability at all to access
the information in the intermediate steps.
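If I have Dennett's picture roughly right, it amounts to a generate-and-filter loop along the lines of the sketch below.  This is entirely my own illustration, not Dennett's or anyone else's code; the candidate pool, scoring function, and names are invented.  The point it tries to capture is that only the winning guess is returned, so the rejected intermediate candidates are inaccessible to the caller:

```python
import random

def generate_and_filter(candidates, score, good_enough, tries=10_000, seed=0):
    """Blindly sample many candidate 'theories'; surface only the best one."""
    rng = random.Random(seed)
    best, best_score = None, float('-inf')
    for _ in range(tries):                # thousands of guesses, not a handful
        c = rng.choice(candidates)        # a guess
        s = score(c)                      # the filter passing judgment on it
        if s > best_score:
            best, best_score = c, s
        if best_score >= good_enough:     # stability threshold reached: "Aha!"
            break
    # Everything except `best` has been discarded -- the caller has no way
    # to inspect the rejected intermediate guesses.
    return best
```

On this toy version, `generate_and_filter(list(range(10)), lambda c: c, 9)` churns through thousands of draws internally but hands back only the single candidate that cleared the threshold.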

Further, as someone who has evaluated new ideas as part of his job, I find
the idea that I'm actually going through thousands of them per second rather
implausible, given that it seems to take me hours or days to properly
evaluate even a small subset of those ideas.  On that picture, I would be
processing candidates subconsciously at a kHz rate, and then painstakingly
reworking steps I had already done.

> I think you're underestimating how often humans get things wrong.

Actually, that's not really the problem.  Self-reference is literally
impossible to handle algorithmically.  Algorithms are limited because they
are an expression of a formal system, and any formal system is limited.
Human thought does not show this limitation.  In order to reduce human
thought to algorithms, one needs to show how such a limited system can
appear to transcend those limits.  It is clear that any attempt to do so
requires assuming a lot of hidden horsepower that operates in an extremely
peculiar manner.


>
> That just sounds silly. A billion element parallel pattern matcher is
> great at extracting the wheat from the chaff. It's even great at finding
> wheat where there isn't any (e.g. the Face on Mars).

I'm clearly not communicating my thoughts.  Let's go back to chess.  It's as
though we had a tree search algorithm that used its analysis at ply 4 to
pick a branch to follow down to ply 8, without considering the fact that at
ply 5 there is a catastrophic result that should have terminated the search.
Even the simplest computer algorithms don't make this mistake, and even the
best grandmasters do.  This is strong evidence, IMHO, that they go about
things in different manners.
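To make the contrast concrete, here is a minimal fixed-depth minimax sketch.  It is my own illustration, not a real chess engine; the toy game tree, its values, and all the names are invented.  What it shows is structural: the recursion scores every node on the way down, so a catastrophic value one ply below the current choice is always seen and propagates up, steering the search away from that branch rather than past it:

```python
# Minimal fixed-depth minimax -- an illustrative sketch, not a chess engine.

def minimax(position, depth, maximizing, moves_fn, eval_fn):
    """Best achievable score from `position`, looking `depth` plies ahead."""
    moves = moves_fn(position)
    if depth == 0 or not moves:
        return eval_fn(position)          # horizon or leaf: static evaluation
    scores = [minimax(m, depth - 1, not maximizing, moves_fn, eval_fn)
              for m in moves]
    # Every child is scored before committing to a branch, so a catastrophe
    # one ply down cannot be skipped on the way to deeper plies.
    return max(scores) if maximizing else min(scores)

# Toy tree: branch 'a' looks best to a shallow search (static value 9),
# but hides a catastrophic reply ('a1', value -100) one ply deeper.
TREE = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1']}
VALUES = {'root': 0, 'a': 9, 'b': 1, 'a1': -100, 'a2': 5, 'b1': 3}
moves_fn = lambda p: TREE.get(p, [])
eval_fn = VALUES.__getitem__
```

On this toy tree, a 1-ply search scores the root at 9, lured toward branch 'a'; a 2-ply search sees the -100 reply, scores the root at 3, and is steered to branch 'b'.  The failure mode described above, where the shallow attraction of a branch overrides a disaster on the path into it, is exactly what this structure rules out.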

In addition, when asked, chess players do not describe doing this kind of
search.  Instead, there is a general weighing of positions and a
painstakingly slow analysis of selected decision trees.  I know how long it
takes me to walk through a fairly simple decision tree compared to the speed
of a computer.  I can beat a computer because of an ability I have that is
not available to the computer.

It's not that a mistake was made.  Algorithm-based systems, as well as
humans, certainly come up with wrong results.  I've written plenty of code
that produced erroneous results, as well as having made plenty of mistakes
myself.  What is critical is the pattern of the mistakes.  Human thought
shows a pattern of insight and error that is at odds with what would be
expected from an algorithm-based system.

Dan M.
