"Dan Minette" <[EMAIL PROTECTED]> wrote:
>
>The self reference stuff I had talked about has been proven to be
>impossible
>to do algorithmically.
[and later]
>I'm clearly not communicating my thoughts. Let's go back to chess. It's as
>though we had a tree search algorithm that utilized analysis of ply 4 to
>find a branch to go down to 8 plies without considering the fact that at ply
>5 there is a catastrophic result that should have terminated the search.
>Even the simplest computer algorithms don't make this mistake, and even the
>best grandmasters do. This is strong evidence, IMHO, that they go about
>things in different manners.
I agree entirely that we can't "solve" these sorts of problems
algorithmically. What I think we can do is make guesses and then verify
them. Just as NP-complete problems can't (so far as anyone knows) be solved
in polynomial time but candidate solutions can be verified in polynomial
time, humans can make educated guesses and then eliminate the bad ones. The
fact that a master chess player can make incredibly silly mistakes indicates
to me that there isn't a reliable process going on, merely some form of
guesswork and verification.
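To make the guess-and-verify idea concrete, here's a toy sketch (entirely my
own construction, nobody's actual algorithm) using subset-sum, a problem where
guessing is unreliable but checking a guess is trivial:

    import random

    def verify(subset, target):
        # Verification is the cheap, reliable part: just check the sum.
        return sum(subset) == target

    def guess_and_verify(numbers, target, tries=10000):
        # Guessing is the cheap, *unreliable* part: throw candidates at
        # the verifier and keep only one that survives.
        for _ in range(tries):
            guess = [n for n in numbers if random.random() < 0.5]
            if verify(guess, target):
                return guess
        return None  # no lucky guess this time around

    print(guess_and_verify([3, 7, 12, 5, 8, 21, 2], 20))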
In your chess example, I see the human "algorithm" for chess as having tens
of thousands of 3-dimensional patterns handy (2D + time T), each one
yielding a new pattern at T+N. The patterns don't span the board, don't
necessarily occupy a single step in time, and are fuzzy on both the input
side and the output side. Throwing the current state of the board at these
patterns yields thousands of matches which cascade into thousands more.
There are also flags on the patterns which scream "good!" and "bad!"; the
bad ones prune the tree slightly while the good ones enhance searches in
that area.
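For concreteness, here's a crude sketch of the sort of machinery I'm imagining
(the patterns, the matching rule and the good/bad flags are all made up for
illustration - this isn't a real chess engine):

    # Toy sketch of fuzzy pattern matching with good/bad flags.
    # Each "pattern" matches a fragment of the board and suggests follow-ups.

    PATTERNS = [
        # (fragment it matches, follow-up fragment it predicts, flag)
        ({"e4": "P", "d5": "p"}, {"d5": "P"}, "good"),   # capture looks promising
        ({"f2": "P", "g2": "P", "h2": "P"}, {}, "good"), # intact king-side shelter
        ({"h5": "q", "f7": "p"}, {"f7": "q"}, "bad"),    # mate threat pattern
    ]

    def matches(fragment, board):
        # Fuzzy in spirit; here just "every listed square agrees".
        return all(board.get(sq) == piece for sq, piece in fragment.items())

    def suggest(board):
        # Fire all matching patterns; good ones boost a line, bad ones prune it.
        boost, prune = [], []
        for fragment, follow_up, flag in PATTERNS:
            if matches(fragment, board):
                (boost if flag == "good" else prune).append(follow_up)
        return boost, prune

    board = {"e4": "P", "d5": "p", "h5": "q", "f7": "p"}
    print(suggest(board))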
Note that I'm not describing the process as something I see the brain
walking through like a program, but as the underlying steps taken by the
neural machinery. There are probably a few thousand neurons involved in each
pattern (but overlapped - the same neurons participate in many patterns).
Applying this capability on a step-by-step basis is a very artificial thing
(contrast with walking, or throwing a stone at a moving target, where there
is a single application of a learned process), and thus closest to the
conscious mind. I think that's why we assume that's how humans play chess -
it's how we learn (before the patterns are built up) and how we apply rigor
(walking through each step logically) - but I think it's only the surface
layer.
From the 1960s until a few years ago we've only had computers that could do
this sort of monotonous stepwise work. Only when computers gained access to
large libraries of learned experiences and patterns - and a hell of a lot of
brute force to make up for a lack of experience and rich guessing
capabilities - were they able to beat the best humans.
>Further, as someone who has evaluated new ideas as part of his job, I find
>the idea that I'm actually going through thousands per second, when it
>seems
>to take me hours or days to properly evaluate a small subset of those ideas
>rather implausible. I process algorithms subconsciously at a KHz rate, and
>then have to painstakingly rework the steps that I've already done.
Indeed - and I would describe the first step of that process as applying
learned pattern matching and state chaining ("out of A, B or C it seems like
it might be B, which would imply one of P, Q or R... but those all seem
wrong so it's probably not B...") - you spent umpteen years learning the
patterns and rules which make up your job just like you spent years learning
how not to fall over when learning to walk.
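In cartoon form, the chaining I mean looks something like this (a made-up
sketch, not a model of any real reasoning process):

    # Cartoon of "out of A, B or C it seems like it might be B, which would
    # imply one of P, Q or R... but those all seem wrong, so probably not B."

    TRANSITIONS = {            # learned associations: state -> plausible follow-ups
        "A": ["X"],
        "B": ["P", "Q", "R"],
        "C": ["Y", "Z"],
    }
    SEEMS_WRONG = {"P", "Q", "R"}   # the verification step rejects these

    def chain(candidates):
        for state in candidates:
            followups = TRANSITIONS.get(state, [])
            if followups and all(f in SEEMS_WRONG for f in followups):
                continue        # every consequence fails the check: back up
            return state        # keep the first candidate whose consequences survive
        return None

    print(chain(["B", "A", "C"]))   # rejects B once P/Q/R all fail, settles on A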
In the latter case, the inputs are proprioceptors and vision, which are
reasonably simple - and yet after trying for 40+ years we're only now able
to get a robot to walk up a flight of stairs. (Side note: it's the
processing power, not the algorithms.) In the former case, there was the
little matter of your natural language processing capabilities developed
over millions of years, hundreds of years of painstaking evolution of the
science, plus years of formal education on the subject. That's a very large
amount of training placed into the system. :)
> Self referential is literally
>impossible to do algorithmically.
The fact that I *can* say that I understand the sentence "This sentence is
false." doesn't mean I *do* understand it in any useful way. I'll admit I've
never quite grokked this - I fail to see why "cheating" (leaping to
conclusions without being able to connect the dots in a formal manner) is
anything more than plucking axioms out of thin air because they're fun to
play with and the others are boring.
I definitely need to learn more here. It's my weak spot in these debates.
> In order to reduce human thought to
>algorithms, one needs to show a manner in which such a limited system can
>appear to transcend those limits. It is clear that any attempt to do so
>requires the assumption of a lot of hidden horsepower that operates in an
>extremely peculiar manner.
Why? We aren't close to having billion-element pattern matchers that have
been training for 18+ years. We've seen that far, far smaller systems
exhibit "emergent" properties (in the "non-obvious" sense, not anything
mystical). Twenty years ago we had Eliza, which could fool a few people for
a few minutes. I'm not convinced that humans require anything more peculiar
than that kind of horsepower, just vastly more of it.
>It's not that a mistake was made. Algorithm based systems, as well as
>humans, certainly come up with the wrong results. I've written plenty of
>code that has come up with erroneous results, as well as having made plenty
>of mistakes. What is critical is the pattern of the mistakes. Human
>thought has a pattern of insight and error that is at odds with what would
>be expected from an algorithm based system.
Have you looked at the sort of mistakes that self-teaching systems make?
There was a lovely article about the Cyc project about 5 years back where
the program came up with a hypothesis and effectively asked the question,
"Am I a person?" More formally, it formed the hypothesis that the program
called Cyc might be a member of the class of people because of observed
similarities between what it knew about Cyc and what it knew about people.
The sort of programming that most people do is akin to programming a single
neuron. Snazzy stuff like digital signal processing is akin to what a
handful of cells and hairs in the ears do. The cutting edge of processing
using 2GHz chips is real-time image recognition and feature tracking (like
the face recognition systems) - and that's emulating a smidgen of incredibly
boring, repetitive tissue in the retina and visual cortex.
Going back to the chess example - we're at the point where a custom built
supercomputer can match a human grand master. Hypothesis: given the
comparison that 0.2g of neural tissue is equivalent to a 2GHz processor, I'd
wager that the processing power in a grand master's brain dedicated to
playing chess is equivalent to the processing power of Deep Blue, give or
take a factor of 10. Anyone got the numbers?
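Just to show how the back-of-envelope arithmetic would go (every number below
is a placeholder - the 0.2g-per-2GHz figure is the comparison above, the
tissue estimate is a wild guess I'd happily have corrected):

    # Back-of-envelope only; the inputs are placeholders, not measurements.
    grams_per_2ghz_chip = 0.2     # the comparison above: 0.2 g of tissue ~ one 2 GHz chip
    chess_tissue_grams = 50.0     # wild guess at tissue a grand master devotes to chess
    equivalent_chips = chess_tissue_grams / grams_per_2ghz_chip
    print(f"roughly {equivalent_chips:.0f} 2 GHz-chip equivalents, give or take a factor of 10")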
The gap between that and anything close to what the human brain is capable
of, even assuming it's algorithmic/guesswork, is staggering.
Joshua