On Tue, Aug 2, 2011 at 11:37 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Aug 1, 8:07 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
>> 1. You agree that it is possible to make something that behaves as if
>> it's conscious but isn't conscious.
> Noooo. I've been trying to tell you that there is no such thing as
> behaving as if something is conscious. It doesn't mean anything
> because consciousness isn't a behavior, it's a sensorimotive
> experience which sometimes drives behaviors.

Behaviour is what can be observed. Consciousness cannot be observed.
The question is, can something behave like a human without being
conscious?

> If you accept that, then it follows that whether or not someone is
> convinced as to the consciousness of something outside of themselves
> is based entirely upon them. Some people may not even be able to
> accept that certain people are conscious... they used to think that
> infants weren't conscious. In my theory I get into this area a lot and
> have terms such as Perceptual Relativity Inertial Frame (PRIF) to help
> illustrate how perception might be better understood
> (http://s33light.org/post/8357833908).
> How consciousness is inferred is a special case of PR Inertia which I
> think is based on isomorphism. In the most primitive case, the more
> something resembles what you are, in physical scale, material
> composition, appearance, etc, the more likely you are to identify
> something as being conscious. The more time you have to observe and
> relate to the object, the more your PRIF accumulates sensory details
> which augment your sense-making of the thing, and context,
> familiarity, interaction, and expectations grow to overshadow the
> primitive detection criteria. You learn that a video Skype of someone
> is a way of seeing and talking to a person and not a hallucination or
> talking demon in your monitor.
> So if we build something that behaves like Joe Lunchbox, we might be
> able to fool strangers who don't interact with him, and an improved
> version might be able to fool strangers with limited interaction but
> not acquaintances, the next version might fool everyone for hours of
> casual conversation except Mrs. Lunchbox cannot be fooled at all, etc.
> There is not necessarily a possible substitution level which will
> satisfy all possible observers and interactors, pets, doctors, etc and
> there is not necessarily a substitution level which will satisfy any
> particular observer indefinitely. Some observers may just think that
> Joe is not feeling well. If the observers were told that one person in
> a lineup was an android, they might be more likely to identify Joe as
> the one.

The field of computational neuroscience involves modelling the
behaviour of neurons. Even a philosopher such as John Searle, who
doesn't believe that a computer model of a brain can be conscious, at
least allows that a computer model can accurately predict the behaviour
of a brain. Searle points out that a model of a storm may predict its
behaviour accurately, but it won't actually be wet: that would require
a real storm. By analogy, a computer inside someone's head may model
the behaviour of his brain sufficiently well so as to cause his
muscles to move in a perfectly human way, but according to Searle that
does not mean that the resulting being would be conscious. If you
disagree that even the behaviour can be modelled by a computer then
you are claiming that there is something in the physics of the brain
which is non-computable. But there is no evidence for such
non-computable physics in the brain; it's just ordinary chemistry.
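
To make this concrete, here is a minimal sketch, in Python, of the
kind of model computational neuroscientists build: a leaky
integrate-and-fire neuron. The parameter values are illustrative
only, not fitted to any real cell.

# Leaky integrate-and-fire neuron: the membrane potential v decays
# toward its resting value and is pushed up by input current; when v
# crosses a threshold the model "fires" and v is reset. All values
# here are illustrative, not measurements of a real neuron.

V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # millivolts
TAU = 10.0   # membrane time constant, ms
R = 10.0     # membrane resistance, megaohms
DT = 0.1     # integration step, ms

def simulate(input_current, steps):
    """Return the times (in ms) at which the model neuron spikes."""
    v = V_REST
    spikes = []
    for t in range(steps):
        # Euler step of dv/dt = (-(v - V_REST) + R*I) / TAU
        v += (-(v - V_REST) + R * input_current) / TAU * DT
        if v >= V_THRESH:
            spikes.append(t * DT)
            v = V_RESET
    return spikes

print(simulate(2.0, 5000))   # 2 nA of constant input for 500 ms

Whether such a model captures everything a real neuron does is
exactly what is in dispute, but the point stands: the behaviour being
modelled is ordinary, computable dynamics.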

> In any case, it all has nothing to do with whether or not the thing is
> actually conscious, which is the only important aspect of this line of
> thinking. We have simulations of people already - movies, TV, blow up
> dolls, sculptures, etc. Computer sims add another layer of realism to
> these without adding any reality of awareness.

So you *are* conceding the first point, that it is possible to make
something that behaves as if it's conscious without actually being
conscious? We don't even need to talk about brain physics: for the
purposes of the philosophical discussion it can be a magical device
created by God. If you don't concede this then you are essentially
agreeing with functionalism: that if something behaves as if it's
conscious then it is necessarily conscious.

>> 2. Therefore it would be possible to make a brain component that
>> behaves just like normal brain tissue but lacks consciousness.
> Probably not. Brain tissue may not be any less conscious than the
> brain as a whole. What looks like normal behavior to us might make the
> difference between cricket chirps and a symphony and we wouldn't
> know.

If you concede point 1, you must concede point 2.

>> 3. And since such a brain component behaves normally the rest of the
>> brain should behave normally when it is installed.
> The community of neurons may graciously integrate the chirping
> sculpture into their community, but it doesn't mean that they are
> fooled and it doesn't mean that the rest of the orchestra can be
> replaced with sculptures.

If you concede point 2 you must concede point 3.

>> 4. So it is possible to have, say, half of your brain replaced with
>> unconscious components and you would both behave normally and feel
>> that you were completely normal.
> It's possible to have half of your cortex disappear and still behave
> and feel relatively normally.
> http://www.newscientist.com/article/dn17489-girl-with-half-a-brain-retains-full-vision.html
> http://www.pnas.org/content/106/31/13034

People with brain damage can have other parts of their brain take over
the function of the damaged part. But this is not the point I am
making: if a part of the brain is removed and replaced with artificial
components that function normally, then the rest of the brain also
continues functioning normally.

>> If you accept the first point, then points 2 to 4 necessarily follow.
>> If you see an error in the reasoning can you point out exactly where
>> it is?
> If you see an error in my reasoning, please do the same.

You contradict yourself in saying that it is not possible for a
non-conscious being to behave as if it's conscious, and then claiming
that there are examples of non-conscious beings behaving as if they
are conscious (although your examples of videos and computer sims are
not good ones: we don't actually have anything today that comes near
to replicating the full range of human intelligence). You don't seem
to appreciate the difference between a technical problem and a
philosophical argument which considers only what is theoretically
possible. You don't explain where a computer model of neural tissue
would fail, how you know there is non-computable physics in a neuron,
or where that physics is located. You seem to think that even if the
behaviour of a neuron could be replicated by an artificial neuron, or
for example by a normal neuron missing its nucleus, the other neurons
would somehow know that something was wrong and not behave normally;
or, worse, that they would behave normally but the person would still
experience an alteration in consciousness.

Stathis Papaioannou
