On 27 Jan 2014, at 06:07, Craig Weinberg wrote:
On Saturday, January 25, 2014 11:36:11 PM UTC-5, stathisp wrote:
On 26 January 2014 01:35, Craig Weinberg <whatsons...@gmail.com>
wrote:
>> But that doesn't answer the question: do you think (or understand, or
>> whatever you think the appropriate term is) that the Chinese Room
>> COULD POSSIBLY be conscious or do you think that it COULD NOT POSSIBLY
>> be conscious?
>
>
> NO ROOM CAN BE CONSCIOUS. NO BODY CAN BE CONSCIOUS. NO FORM CAN BE
> CONSCIOUS.*
>
> *Except within the fictional narrative of a conscious experience. Puppets
> can seem conscious. Doors, door-knobs, and Chinese rooms can SEEM to be
> conscious.
Do you think Barack Obama is conscious? If you do, then in whatever
sense you understand that, can the Chinese Room also be conscious?
Or do you think that is impossible?
Yes, I think that Barack Obama is conscious, because he is different
from a building or machine. Buildings and machines cannot be
conscious, just as pictures of people drinking pictures of water do
not experience relief from thirst.
To compare a brain with a machine can make sense.
To compare a brain with a picture cannot.
Bruno
>> Or do you claim that the question is meaningless, a
>> category error (which ironically is a term beloved of positivists)? If
>> the latter, how is it that the question can be meaningfully asked
>> about humans but not the Chinese Room?
>
>
> Because humans are not human bodies. We don't have to doubt that humans are
> conscious, as to do so would be to admit that we humans are the ones
> choosing to do the doubting and therefore are a priori certainly conscious.
> Bodies do not deserve the benefit of the doubt, since they remain when we
> are personally unconscious or dead. That does not mean, however, that our
> body is not itself composed on lower and lower levels by microphenomenal
> experiences which only seem to us at the macro level to be forms and
> functions...they are forms and functions relative to our
> perceptual-relativistic distance from their level of description. Since
> there is no distance between our experience and ourselves, we experience
> ourselves in every way that we can be experienced without being outside of
> ourselves, and are therefore not limited to mathematical descriptions. The
> sole purpose of mathematical descriptions is to generalize measurements - to
> make phenomena distant and quantified.
Wouldn't the Chinese Room also say the same things, i.e. "We Chinese
Rooms don't have to doubt that we are conscious, as to do so would be
to admit that we are the ones choosing to do the doubting and
therefore are a priori certainly conscious."?
Why would the things a doll says make any difference? If a
puppet moves its mouth and you hear words that seem to be coming out
of it, does that mean that the words are true, and that they are the
true words of a puppet?
>> > I like my examples better than the Chinese Room, because they are
>> > simpler:
>> >
>> > 1. I can type a password based on the keystrokes instead of the letters
>> > on the keys. This way no part of the "system" needs to know the letters,
>> > indeed, they could be removed altogether, thereby showing that data
>> > processing does not require all of the qualia that can be associated
>> > with it, and therefore it follows that data processing does not
>> > necessarily produce any or all qualia.
>> >
>> > 2. The functional aspects of playing cards are unrelated to the suits,
>> > their colors, the pictures of the royal cards, and the participation of
>> > the players. No digital simulation of playing card games requires any
>> > aesthetic qualities to simulate any card game.
>> >
>> > 3. The difference between a game like chess and a sport like basketball
>> > is that in chess, the game has only to do with the difficulty for the
>> > human intellect to compute all of the possibilities and prioritize them
>> > logically. Sports have strategy as well, but they differ fundamentally
>> > in that the real challenge of the game is the physical execution of the
>> > moves. A machine has no feeling so it can never participate meaningfully
>> > in a sport. It doesn't get tired or feel pain, it need not attempt to
>> > accomplish something that it cannot accomplish, etc. If chess were a
>> > sport, completing each move would be subject to the possibility of
>> > failure and surprise, and the end can never result in checkmate, since
>> > there is always the chance of weaker pieces getting lucky and
>> > overpowering the strong. There is no Cinderella Story in real chess, the
>> > winning strategy always wins because there can be no difference between
>> > theory and reality in an information-theoretic universe.
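Point 1 above can be sketched concretely: a password checker that compares raw key positions (scancodes) and never consults any character mapping at all. This is only an illustrative sketch; the scancode values and function name are hypothetical, not from the original post.

```python
# Sketch of the keystroke example: the "system" verifies a password
# purely from key positions (scancodes). No table mapping scancodes to
# letters exists anywhere in the program, so the keycaps could be blank.
# Scancode values below are hypothetical illustrations.

STORED_SCANCODES = [30, 48, 46, 32]  # positions recorded when the password was set

def check_password(typed_scancodes):
    """Compare raw key positions; no letter mapping is ever consulted."""
    return typed_scancodes == STORED_SCANCODES

# The same physical keystrokes succeed regardless of what is printed on the keys:
print(check_password([30, 48, 46, 32]))  # True
print(check_password([30, 48, 46, 33]))  # False
```

The point being illustrated is that the check processes the data successfully while the letter-qualia associated with the keys play no role in the computation.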
>>
>> How can you start a sentence "a machine has no feeling so..." and
>> purport to discuss the question of whether a machine can have feeling?
>>
>> > So no, I do not "believe" this, I understand it. I do not think that
>> > the Chinese Room is valid because wholes must be identical to their
>> > parts. The Chinese Room is valid because it can (if you let it)
>> > illustrate that the difference between understanding and processing is a
>> > difference in kind rather than a difference in degree. Technically, it
>> > is a difference in kind going one way (from the quantitative to the
>> > qualitative) and a difference in degree going the other way. You can
>> > reduce a sport to a game (as in computer basketball) but you can't turn
>> > a video game into a sport unless you bring in hardware that is
>> > physical/aesthetic rather than programmatic. Which leads me to:
>>
>> The Chinese Room argument is valid if it follows that if the parts of
>> the system have no understanding then the system can have no
>> understanding.
>
>
> You aren't listening to me - which may not be your fault. Your
> psychological specialization may not permit you to see any other
> possibility than the mereological argument that you keep turning to. Of
> course the whole can have properties that the parts do not have, that is
> not what I am denying at all. I am saying that there is no explanation of
> the Chinese Room which requires that it understands anything except one in
> which understanding itself is smuggled in from the real world and attached
> to it arbitrarily on blind faith.
Then you don't consider the Chinese Room argument valid. You agree
with the conclusion and premises but you don't agree that the
conclusion follows from the premises in the way Searle claims.
The Chinese Room is not important. You are missing the whole point.
Consciousness is beyond reason and cannot be discovered through
evidence or argument, but only through sensory experience.
>> It is pointed out (correctly) by Searle that the person
>> in the room does not understand Chinese, from which he CONCLUDES that
>> the room does not understand Chinese,
>
>
> Rooms don't understand anything. Rooms are walls with a roof. Walls and
> roofs are planed matter. Matter is bonded molecules. Molecules are sensory
> experiences frozen in some externalized perceptual gap.
The claim is that the consciousness of the room stands in relation
to the physical room as the consciousness of a person stands in
relation to the physical person.
There is no 'physical person'; there is a public-facing body. A
person is not a body. On one level of an animal's body there are
organs which cannot survive independently of the body as a whole,
but on another level all of those organs are composed of living
cells which have more autonomy. Understanding this theme of
coexisting but contrasting levels of description suggests that a
room need not be comparable to the body of a living organism. Since
the room is not something which naturally evolves of its own motives
and sense, we need not assume that the level at which it appears to
us as a room or machine is in fact the relevant level of description
when considering its autonomy and coherence. In my view, the machine
expresses only the lowest levels of immediate thermodynamic
sensitivity according to the substance which is actually reacting,
and the most distant levels of theoretical design, but with nothing
in between. We do not have to pretend that there is no way to guess
whether a doll or a cadaver might be conscious. With an adequate
model of qualitative nesting and its relation to quantitative scale,
we can be freed from sophism and pathetic fallacy.
>> and uses this conclusion to
>> support the idea that the difference between understanding and
>> processing is a difference in kind, so no matter how clever the
>> computer or how convincing its behaviour it will never have
>> understanding.
>
>
> The conclusion is just the same if you use the room as a whole instead of
> the person. You could have the book be a simulation of John Wayne talking
> instead. No matter how great the collection of John Wayne quotes, and how
> great a job the book does at imitating what John Wayne would say, the
> room/computer/simulation cannot ever become John Wayne.
It could not become John Wayne physically, and it could not become
John Wayne mentally if the actual matter in John Wayne is required
to reproduce John Wayne's mind, but you have not proved that the
latter is the case.
It has nothing to do with matter. There can only ever be one John
Wayne. A person is like a composite snapshot of a unique human
lifetime, and the nesting of that lifetime within a unique cultural
zeitgeist. It's all made of the expression of experience through
time. The matter is just the story told to us by the experiences of
eyeballs and fingertips, microscopes, etc.
>> I don't think your example with the typing is as good as the Chinese
>> Room, because by changing the keys around a bit it would be obvious
>> that there is no real understanding, while the Chinese Room would
>> be able to pass any test that a Chinese speaker could pass.
>
>
> Tests are irrelevant, since the pass/fail standard can only be subjective.
> There can never be a Turing test or a Voight-Kampff test which is
> objective, but there will always be tests which designers of AI can use to
> identify the signature of their design.
That's what Searle claims, which is why he makes the Room pass a
Turing test in Chinese and then purports to prove (invalidly,
according to what you've said) that despite passing the test it
isn't conscious.
The question of whether or not a Turing test is possible is beyond
the scope of the Chinese Room. The Room assumes, for the sake of
argument, that Computationalist assumptions are true, and that a
Turing type test would be useful, and that anything which could pass
such a test would have to be conscious. Searle rightly identifies
the futility of looking for outward appearances to reveal the
quality of interior awareness. He successfully demonstrates that
blind syntactic approaches to producing symbols of consciousness
could indeed match any blind semantic approach of expecting
consciousness. My hypotheses go further into the ontology of
awareness, so that we are not limited to the blindness of measurable
communication in our empathy, and that our senses extend beyond
their own accounts of each other. Our intuitive capacities can be
more fallible than empirical views can measure, but they can also be
more veridical than information based methods can ever dream of.
Intuition, serendipity, and imagination are required to generate the
perpetual denationalization of creators ahead of the created. This
doesn't mean that some people cannot be fooled all of the time or
that all of the people can't be fooled some of the time, only that
all of the people cannot be fooled all of the time.
Craig
--
Stathis Papaioannou
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.
http://iridia.ulb.ac.be/~marchal/