On Feb 5, 11:55 am, John Clark <johnkcl...@gmail.com> wrote:
> On Sat, Feb 4, 2012 at Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > You don't understand Searle's thought experiment.
>
> I understand it one hell of a lot better than Searle did, but that's not
> really much of a boast.
>
> > The whole point is to reveal the absurdity of taking understanding for
> > granted in data manipulation processes.
>
> And Searle takes it for granted that if the little man doing a trivial task
> does not understand Chinese then Chinese is not understood, and that
> assumption simply is not bright.

No, I can see clearly that Searle is correct. You are applying a
figurative sense of understanding when a literal sense is required.
The only understanding of Chinese going on is by those Chinese
speakers outside the room, who are carrying on a one-sided conversation
with a rule book. To say that Chinese is understood by the Chinese
Room system is to say that the entire universe understands Chinese.

>
> > None of the descriptions of the argument I find online make any mention
> > of infinite books, paper, or ink.
>
> Just how big do you think a book would need to be to contain every possible
> question and every possible answer to those questions?

It doesn't need to answer every possible question; it only needs to
approximate a typical conversational capacity. It can ask 'what do
you mean by that?'
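
To make the point concrete, here is a minimal sketch (in Python, with
made-up rules; the entries and the respond() function are purely
illustrative, not anything from Searle's paper) of what a rule-book
responder amounts to: a syntactic lookup from input patterns to canned
replies, with the clarifying question as a fallback, and no
comprehension anywhere in the loop.

    # Hypothetical sketch of a "rule book" responder: purely syntactic
    # pattern-to-reply lookup; the rules are invented for illustration.
    RULE_BOOK = {
        "hello": "Hello. How are you today?",
        "how are you": "I am fine, thank you. And you?",
        "do you understand chinese": "Why do you ask?",
    }

    def respond(message: str) -> str:
        """Look the message up in the rule book; nothing is understood."""
        key = message.lower().strip("?!. ")
        # The fallback keeps the conversation going without comprehension.
        return RULE_BOOK.get(key, "What do you mean by that?")

    if __name__ == "__main__":
        for line in ["Hello!", "How are you?", "What is it like to be a room?"]:
            print("> " + line)
            print(respond(line))

The man in the room plays the part of respond(): he applies the lookup
by rote, and nothing in that process requires him to understand Chinese.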

>
> > All I find is a clear and simple experiment:
>
> Yes simple, as in stupid.

It seems like you take its contradiction of your position personally.
I assume that you don't mean that literally, right? You don't think
that the thought experiment has a low I.Q., right? Thinking that it
does would be entirely consistent with what you are saying, though.

>
> > The fact that he can use the book to make the people outside think they
> > are carrying on a conversation with him in Chinese reveals that it is only
> > necessary for the man to be trained to use the book, not to understand
> > Chinese or communication in general.
>
> OK fine, the man does not understand Chinese, so what? How does that prove
> that understanding was not involved in the room/outside-people
> conversation?

Because there is nobody on the inside end of the conversation.

> You maintain that only humans can have understanding while I
> maintain that other things can have it too.

No, I don't limit understanding to humans; I just limit human-quality
understanding to humans. Not that it's the highest-quality
understanding, but it is the only human understanding.

>To determine which of us is
> correct Searle sets up a cosmically impractical and complex thought
> experiment in which a human is a trivial part. Searle says that if
> understanding exists anywhere it must be concentrated in the human and
> nowhere else, but the little man does not understand Chinese so Searle
> concludes that understanding is not involved. What makes Searle such an
> idiot is that determining if humans are the only thing that can have
> understanding or not is the entire point of the thought experiment, he's
> assuming the very thing he's trying to prove. If Siri or Watson had behaved
> as stupidly as Searle did their programers would hang their heads in shame!

I'm not sure if Searle maintains that understanding is forever limited
to humans, but I suspect the use of the man in the room is a device to
force people to identify personally with (what would normally be) the
computer. This way he makes you confront the reality that looking up
reactions in a rule book is not the same thing as reacting
authentically and generating responses personally.

>
> > Makes sense to me.
>
> I know, that's the problem.

No, because I understand why the way you are looking at it misses the
point, and I understand that you aren't willing to entertain my way of
looking at it.

>
> > We know for a fact that human consciousness is associated with human
> > brains
>
> That should be "with a human brain" not "with human brains"; you only know
> for a fact that one human brain is conscious, your own.

Again, there are literal and figurative senses. In the most literal
sense of 'you only know for a fact', yes, I only have first-hand
knowledge of my own consciousness. But because the nature of sense is
to fill the gaps, connect the dots, solve the puzzle, etc., we are able
to generalize figuratively. We are not limited to solipsism or to
formal proofs that other people are conscious; we have a
multi-contextual human commonality. We share many common senses and can
create new senses through the existing sense channels we share. Knowing
whether a person is conscious or not is therefore only an issue under
very unusual conditions.

>
> > but we do not have much reason to suspect the rooms can become conscious
>
> Because up to now rooms have not behaved very intelligently, but the room
> containing the Watson supercomputer is getting close.

Close to letting us use it to fool ourselves, is all. It's still only a
room with a large, fast rule book.

>
> > Organization of the brain does not make the difference between being
> > awake and being unconscious.
>
> Don't be ridiculous! Take a sleeping pill and your brain organization, its
> chemistry, changes and your consciousness goes away; take a pep pill and
> the organization reverses itself and your consciousness comes back.

The organization of the brain is still the same in either case. The
sleeping pill is not killing you and then resurrecting you; the brain
retains the capacity for consciousness the whole time. If the pill
killed you, a pep pill would not bring you back. It depends on the
sense in which you apply the term 'organization', but in the context of
determining whether something other than a human brain can have human
consciousness, it is clear to me that exporting some mathematical
description of a neuron's or a brain's measured behavior to an
inanimate system could only yield the qualities of experience that were
factored into the model. If your description can't measure the blueness
of blue, then neither can the machine being controlled by the model.

>
> > Organization is certainly important, but only if it arises organically.
>
> Electricity is not organic but it will change your organization and
> dramatically change your consciousness if that electricity is applied to
> your brain, or any other part of your body for that matter.

Electricity is organic to all matter as far as I know.

>
> > Organization imposed from the outside doesn't cause that organization to
> > become internalized as awareness.
>
> So the light entering your eye and other inputs from your senses about the
> outside world have no effect on your awareness. How unfortunate for you,
> your nose must be sore by now from walking into so many walls that you were
> not aware of.

What we sense is not imposed from the outside; we have developed sense
organs to probe the outside world in our native physiological
language. If that were not the case, then we could just drill holes in
our heads and learn to treat the 'inputs' as extra nostrils or eyes.

>
> > Yet if someone pulls the plug on a coma patient, they can go to prison,
>
> > but iPhones can be disposed of at will.
>
> Good heavens, do you really expect to understand the nature of reality by
> studying the legal system?!
>

I'm giving you an example of the absurdity of the position you suggest
by pointing out the overwhelming affront to common sense, as
represented (sometimes poorly, but still relevantly) by the law.

Craig
