On Feb 6, 10:54 am, John Clark <johnkcl...@gmail.com> wrote:
> On Sun, Feb 5, 2012  Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > The only understanding of Chinese going on is by those Chinese speakers
> > outside the room who are carrying on a one-sided conversation with a rule
> > book.
>
> So you say, but Searle says his idiotic thought experiment has PROVEN it;
> and yet one key step in the "proof" is "if there is understanding it can
> only be in the little man but the little man does not understand so there
> is no understanding involved".

If you are proving that a computer in the position of the man has no
understanding, then the thought experiment proves it. If you are
trying to prove that there is no understanding anywhere in the
universe, it does not. The whole idea of there being 'understanding
involved' is a non sequitur. It takes consciousness for granted, like
some free-floating glow. If I understand how to cook and then walk
into a building, does the building, now that it includes me, know how
to cook?

> But if you start the thought experiment with that as one of the
> axioms then what the hell is the point of the thought experiment in
> the first place, how can you claim to have proven what you just
> assumed? I stand by my remarks that Clark's Chinese Room, described
> previously, has just as much profundity (or lack thereof) as Searle's
> Chinese Room.
>
> >> OK fine, the man does not understand Chinese, so what? How does that
> >> prove that understanding was not involved in the room/outside-people
> >> conversation?
>
> > Because there is nobody on the inside end of the conversation.
>
> So what?

So nothing is understanding you on the other end. It's not a live
performance, it's a canned recording.
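
To make the 'canned recording' point concrete, here is a toy sketch (a
hypothetical Python example of mine, not anything from Searle) of what
the man in the room does: pure table lookup, with no meaning attached
to the symbols at either end.

# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is just a lookup table; nothing in this process
# attaches meaning to the symbols it shuffles.

RULE_BOOK = {
    "你好吗?": "我很好,谢谢。",        # "How are you?" -> "Fine, thanks."
    "你会做饭吗?": "会,我喜欢做菜。",  # "Can you cook?" -> "Yes, I like to cook."
}

def man_in_room(symbols: str) -> str:
    # Copy out whatever reply the book prescribes; the operator never
    # knows what either string means.
    return RULE_BOOK.get(symbols, "对不起,我不明白。")  # "Sorry, I don't understand."

print(man_in_room("你好吗?"))  # a fluent reply, zero understanding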

> The point of the thought experiment was to determine if
> understanding was involved at the room end,

Huh? The point of the thought experiment was to show that AI doesn't
necessarily understand the data it is processing, and it does show
that. I like my example of a truck carrying a piano over a bumpy road
better, but Searle's still makes the important point that accounting
is not understanding.

> not how many people were inside
> the room, you can write and scream that there was no understanding from now
> to the end of time but you have not proven it, and neither has Searle.

And you can do the same by denying it. Searle is relying on the common
sense of the audience to show them that carrying on a conversation in
a language you don't understand cannot constitute understanding, but
he underestimates the power of comp to obscure common sense.

> It's
> not uncommon for a mathematical "proof" to contain a hidden assumption of
> the very thing you're trying to prove, but usually this error is subtle and
> takes some close analysis and digging to find the mistake, but in the case
> of the Chinese Room the blunder is as obvious as an angry elephant in your
> living room and that is why I have no hesitation in saying that John Searle
> is a moron.

I don't think he's a moron, but he may not understand that comp
already denies any distinction between trivial or prosthetic
intelligence and subjective understanding, so it doesn't help to make
examples which highlight that distinction.

>
> > I suspect the use of the man in the room is a device to force people to
> > identify personally with (what would normally be) the computer.
>
> Yes that's exactly what he's doing, and that's what makes Searle a con
> artist, he's like a stage magician who waves his right hand around and
> makes you look at it so you don't notice what his left hand is doing,

No, I think it's an honest device to help people get around their
prejudices. If someone claims that a program is no different from a
person, this is a way we can imagine what it is actually like to do
what a program does. The result is that rather than being forced to
accept that yes, AI must be sentient, we see clearly that no, AI
appears to be an automatic and unconscious mechanism.

> and
> the thing that makes him an idiot is that he believes his own bullshit. It's
> as if I forced you to identify with the neurotransmitter acetylcholine and
> then asked you to derive grand conclusions from the fact that acetylcholine
> doesn't understand much.

If the claim of strong AI were that it functioned exactly like
acetylcholine, then what would be wrong with that?

>
> > yes I only have first hand knowledge of consciousness. Because the nature
> > of sense is to fill the
> > gaps, connect the dots, solve the puzzle, etc, we are able to generalize
> > figuratively. We are not limited to solipsism
>
> By "fill in the gaps" you mean we accept certain rules of thumb and axioms
> of existence to be true even though we can not prove them,

No. It means we make sense of patterns. We figure them out. The rules
and axioms are a posteriori.

> like induction
> and that intelligent behavior implies consciousness.

No, it requires sense. When we look at yellow and blue dots from far
away and see green, that pattern is not arrived at as an intellectual
response to rules or axioms. The rules would say that blue is blue and
yellow is yellow, that there is no such thing as green, and that there
is no point at which 'green' becomes a more accurate description than
the dots.

> That is the only way
> to avoid solipsism.

Some aspects of our consciousness are solipsistic, some are
mechanistic.

>
> >> Take a sleeping pill and your brain organization, its chemistry,
> >> changes and your consciousness goes away; take a pep pill and the
> >> organization reverses itself and your consciousness comes back.
>
> > The organization of the brain is still the same in either case.
>
> Bullshit.

The organization of my kitchen sink does not change with the
temperature of the water coming out of the faucet.

>
> > the brain retains the capacity for consciousness the whole time.
>
> As long as the drug is in your brain consciousness is not possible, it is
> only when chemical processes break down the drug and the concentration of
> it is reduced (it wears off, in other words) does consciousness return.

That's exactly why I say the organization of the brain did not change.
The absence of the drug doesn't control the organization of the brain;
the brain is metabolizing the drug and returning to its own
homeostasis.

>
> > If the pill killed you, a pep pill would not bring you back.
>
> And if the pill did kill you that would certainly change brain
> organization.

If that were all it changed, then all you would need is another pill
to change it back.

> And by the way, why do you believe that dead people are not
> conscious? Because they no longer behave intelligently.

No, people who have never behaved intelligently can die too.

>
> > What we sense is not imposed from the outside,
>
> Well it had better be! If the outside world could be anything we wanted it
> to be then our senses would be of no value and Evolution would never have
> had a reason to develop them.

Evolution didn't develop them, any more than it developed velocity or
charge. If sense were imposed from the outside, we could never
understand it. Part of us has to internalize it as meaningful;
otherwise we can't relate to it as real.

> In reality if we project our wishes on how we
> interpret the information from our senses too much our life expectancy will
> be very short; I don't like that saber toothed tiger over there so I'll
> think of him as a cute little bunny rabbit.

If we did not project our wishes onto our experience, then our
experience would have no meaning. We would not care what our life
expectancy was. In reality we project our wishes onto our sensory
experience all the time. It's a combination of interior and exterior
experience that gives rise to realism.

Craig
