----- Original Message -----
From: Robert Shaw <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, March 09, 2001 1:26 AM
Subject: Re: Kant QM


> "Dan Minette" wrote
> > From: Joshua Bell <[EMAIL PROTECTED]>
>
> > >
> > > I have to chuckle whenever I read statements like "the human mind must
> > > transcend the limitations of the universe, if we can defeat things like
> > > Godel's theorems". Or we're just reasonably efficient filters of random
> > > crap that - after much training - approximates things like following
> > > logical progressions.
> >
> > How do these filters work?  They cannot work algorithmically; that's been
> > proven rigorously.
>
> Has it? What is the proof?
> All known physics is technically computable, so the brain can't
> perform non-algorithmic processes without using unknown physics.
>
Right, that's Penrose's argument for new stuff going on.  If you apply
quantum mechanics to the brain, you find that the state of the brain will
quickly become indeterminate. Thus, the most precise scientific thing we can
say about human thought is that the precise results are indeterminate.

We can, of course, give probabilities of this result or that result.  But,
we are very close to proving that it is theoretically impossible to exactly
predict future actions of humans by studying the present state of their
brain and body and by controlling their environment from time t to t+dt.

>
> That's false. A brute force algorithm would work that way but the actual
> algorithms used pare the tree. Some lines are explored 8 deep and others
> only 3 deep.
>
That is true, but I specifically referred to the same line.  Indeed, I can
understand general strategy along a line 20 or 30 deep, and yet miss a 1
mover along that same line.  I've done that a number of times.  Karpov has
missed a 2 mover.  No chess program rated over 2000 does that, and he was
rated 2700 at the time.
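The tree-paring described above can be sketched with alpha-beta cutoffs.  This is a toy example (the hand-made tree and the `visited` bookkeeping are my own illustration, not how a real engine like the ones mentioned here is organized), but it shows how whole continuations get skipped so that different lines end up searched to different depths:

```python
import math

def alphabeta(node, alpha, beta, maximizing, visited):
    """Minimax with alpha-beta cutoffs on a toy tree.
    Leaves are numbers (static evaluations); internal nodes are lists of
    children.  `visited` records which leaves were actually evaluated,
    to show that whole branches are pared away."""
    if not isinstance(node, list):          # leaf: static evaluation
        visited.append(node)
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:               # cutoff: rest of this line ignored
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, visited))
            beta = min(beta, value)
            if alpha >= beta:               # cutoff
                break
        return value

tree = [[3, 5], [2, 9], [0, 1]]             # a tiny 2-ply game tree (hypothetical)
seen = []
best = alphabeta(tree, -math.inf, math.inf, True, seen)
print(best, seen)                           # → 3 [3, 5, 2, 0]
```

Note that the leaves 9 and 1 are never evaluated at all: once a refutation is found, the rest of that line is cut off, which is exactly why some lines get deep attention and others almost none.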

> > Humans can see a subtle 8 deep combo and miss a simple 1 mover at the
> > same time.
> >
> >
> That's because we don't consider all possible initial moves. We recognise
> patterns in the relative positions of the pieces. If a good move doesn't
> fit those patterns it doesn't get considered.

But, that's not all we do.  We generalize patterns in ways that do not
seem to work algorithmically.  We know that the halting problem for a
Turing machine cannot be solved in general.  Yet, our minds make
generalizations akin to that.  This is used as evidence that the workings of
our mind are not reducible to physics...consistent with, mind you, but not
reducible.
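For what it's worth, the unsolvability result is about *unbounded* halting.  A bounded check is perfectly computable, which is worth keeping in mind when comparing minds to machines.  A minimal Python sketch (the generator "programs" and the `halts_within` name are my own illustration, not standard terminology):

```python
def halts_within(make_prog, max_steps):
    """Run a generator-based 'program' for at most max_steps steps.
    Returns True if it finished, False if the step budget ran out.
    A bounded check like this is computable; the halting theorem says
    the unbounded version -- correct for every program, with no step
    limit -- is not."""
    prog = make_prog()
    for _ in range(max_steps):
        try:
            next(prog)
        except StopIteration:
            return True                 # program finished within the budget
    return False                        # budget exhausted: no verdict either way

def short_loop():                       # terminates after 5 steps
    for i in range(5):
        yield i

def long_loop():                        # never terminates
    i = 0
    while True:
        yield i
        i += 1

print(halts_within(short_loop, 100))    # → True
print(halts_within(long_loop, 100))     # → False
```

The catch is in the last line: `False` only means "didn't halt within 100 steps".  No finite budget can distinguish "runs a very long time" from "runs forever", and the theorem says no clever algorithm can close that gap in general.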

Dan M.
