On Aug 2, 8:59 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Tue, Aug 2, 2011 at 11:37 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> > On Aug 1, 8:07 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
>
> >> 1. You agree that it is possible to make something that behaves as if
> >> it's conscious but isn't conscious.
>
> > Noooo. I've been trying to tell you that there is no such thing as
> > behaving as if something is conscious. It doesn't mean anything
> > because consciousness isn't a behavior, it's a sensorimotive
> > experience which sometimes drives behaviors.
>
> Behaviour is what can be observed. Consciousness cannot be observed.
> The question is, can something behave like a human without being
> conscious?

Does a cadaver behave like a human? If I string it up like a
marionette? If the puppeteer is very good? What is the meaning of
these questions when they have nothing to do with whether the thing feels
like a human?

> > If you accept that, then it follows that whether or not someone is
> > convinced as to the consciousness of something outside of themselves
> > is based entirely upon them. Some people may not even be able to
> > accept that certain people are conscious... they used to think that
> > infants weren't conscious. In my theory I get into this area a lot and
> > have terms such as Perceptual Relativity Inertial Frame (PRIF) to help
> > illustrate how perception might be better understood
> > (http://s33light.org/post/8357833908).
>
> > How consciousness is inferred is a special case of PR Inertia which I
> > think is based on isomorphism. In the most primitive case, the more
> > something resembles what you are, in physical scale, material
> > composition, appearance, etc, the more likely you are to identify
> > something as being conscious. The more time you have to observe and
> > relate to the object, the more your PRIF accumulates sensory details
> > which augment your sense-making of the thing,  and context,
> > familiarity, interaction, and expectations grow to overshadow the
> > primitive detection criteria. You learn that a video Skype of someone
> > is a way of seeing and talking to a person and not a hallucination or
> > talking demon in your monitor.
>
> > So if we build something that behaves like Joe Lunchbox, we might be
> > able to fool strangers who don't interact with him, and an improved
> > version might be able to fool strangers with limited interaction but
> > not acquaintances, the next version might fool everyone for hours of
> > casual conversation except Mrs. Lunchbox cannot be fooled at all, etc.
> > There is not necessarily a possible substitution level which will
> > satisfy all possible observers and interactors, pets, doctors, etc and
> > there is not necessarily a substitution level which will satisfy any
> > particular observer indefinitely. Some observers may just think that
> > Joe is not feeling well. If the observers were told that one person in
> > a lineup was an android, they might be more likely to identify Joe as
> > the one.
>
> The field of computational neuroscience involves modelling the
> behaviour of neurons. Even philosophers such as John Searle, who
> don't believe that a computer model of a brain can be conscious, at
> least allow that a computer model can accurately predict the behaviour
> of a brain.

Because he doesn't know about the essential-existential relation. Can
a computer model of your brain accurately predict what is going to
happen to you tomorrow? Next week? If I pull the name of a random
country out of a hat today and put you on a plane to that country,
will the computer model already have predicted what you will see and
say during your trip? That is what a computer model would have to do
to predict the 'behavior of a brain' - that means predicting signals
which correlate to the images processed in the visual regions of the
brain. How can you predict that without knowing what country you will
be going to next week?
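
To make concrete how little a 'behavior model' of a neuron contains, here is
a minimal leaky integrate-and-fire sketch in Python (the general equation form
is standard, but the function name and parameter values are just illustrative
placeholders, not anyone's actual model). It transforms whatever input current
we hand it into spikes; nothing in it says what input the world will actually
deliver next week.

import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Return a membrane-voltage trace and spike times for a given input."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_ext in enumerate(input_current):
        # Leaky integration: voltage decays toward rest and is driven by input.
        dv = (-(v - v_rest) + resistance * i_ext) / tau
        v += dv * dt
        if v >= v_threshold:        # threshold crossing: spike, then reset
            spike_times.append(step * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spike_times

# The stimulus below is made up; the model has no way of knowing what
# stimulus a trip to a randomly chosen country would actually produce.
current = np.concatenate([np.zeros(100), 2.5 * np.ones(400)])
trace, spikes = simulate_lif(current)
print(len(spikes), "spikes")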

This is the limitation of a-signifying modeling. You can only do so
much comparing the shapes of words and the order of the letters
without really understanding what the words mean. Knowing the letters
and order is important, but it is not sufficient for understanding
what either a human or their brain experiences. It's the meaning that
is essential. This should be especially evident after the various
derivative-driven market crashes. The market can't be predicted
indefinitely through statistical analysis alone, because the only
thing the statistics represent is driven by changing human conditions
and desires. Still, people will keep beating the dead horse of
quantitative invincibility.

> Searle points out that a model of a storm may predict its
> behaviour accurately, but it won't actually be wet: that would require
> a real storm.

We may be at the limit of practical meteorological modeling. Not only
will the virtual storm not be wet, it won't even necessarily
behave like a real storm when it really needs to. Reality is a
stinker. It doesn't like to be pinned down for long.

> By analogy, a computer inside someone's head may model
> the behaviour of his brain sufficiently well so as to cause his
> muscles to move in a perfectly human way, but according to Searle that
> does not mean that the ensuing being would be conscious. If you
> disagree that even the behaviour can be modelled by a computer then
> you are claiming that there is something in the physics of the brain
> which is non-computable.

There is something in the physics of all matter which is
non-computable; it's just that in order for the non-computable part of a
chip to be like the non-computable part of the brain, it needs to feel
like it's living inside of a brain inside of a body on a planet with a
history. A chip is probably not going to feel like that. A chip
doesn't know what it means for something to be hard or easy or painful
or dangerous. Cells know that, not because of their structure but
because of their capability to sustain that structure.

> But there is no evidence for such
> non-computable physics in the brain; it's just ordinary chemistry.

There is no evidence that chemistry is any less ordinary than we are
either, or that it doesn't have non-computable interiority. Obviously it
either must have that interiority or have the potential to give rise
to that interiority in specific large groups, but either way it comes
from somewhere. It's a feature of the cosmos just as is computation.

> > In any case, it all has nothing to do with whether or not the thing is
> > actually conscious, which is the only important aspect of this line of
> > thinking. We have simulations of people already - movies, TV, blow up
> > dolls, sculptures, etc. Computer sims add another layer of realism to
> > these without adding any reality of awareness.
>
> So you *are* conceding the first point, that it is possible to make
> something that behaves as if it's conscious without actually being
> conscious?

You're either not reading or not understanding what I'm writing. There
is no such thing as 'behaving as if it's conscious'. It's a category
error along the lines of 'feeling as if it's unconscious'.

> We don't even need to talk about brain physics: for the
> purposes of the philosophical discussion it can be a magical device
> created by God. If you don't concede this then you are essentially
> agreeing with functionalism: that if something behaves as if it's
> conscious then it is necessarily conscious.

See above: There is no such thing as 'behaving as if it's conscious'.


> >> 2. Therefore it would be possible to make a brain component that
> >> behaves just like normal brain tissue but lacks consciousness.
>
> > Probably not. Brain tissue may not be any less conscious than the
> > brain as a whole. What looks like normal behavior to us might make the
> > difference between cricket chirps and a symphony and we wouldn't
> > know.
>
>  If you concede point 1, you must concede point 2.

If you don't understand my rejection of point 1, you can still
understand point 2, because you're a living human being, capable of
figuring out things in many different ways, not just through a
scripted linear logic.

> >> 3. And since such a brain component behaves normally the rest of the
> >> brain should behave normally when it is installed.
>
> > The community of neurons may graciously integrate the chirping
> > sculpture into their community, but it doesn't mean that they are
> > fooled and it doesn't mean that the rest of the orchestra can be
> > replaced with sculptures.
>
> If you concede point 2 you must concede point 3.

Did I mention that the occidental side of the psychological continuum
can lead to robotic formalism?

> >> 4. So it is possible to have, say, half of your brain replaced with
> >> unconscious components and you would both behave normally and feel
> >> that you were completely normal.
>
> > It's possible to have half of your cortex disappear and still behave
> > and feel relatively normally.
>
> >http://www.newscientist.com/article/dn17489-girl-with-half-a-brain-re...
> >http://www.pnas.org/content/106/31/13034
>
> People with brain damage can have other parts of their brain take over
> the function of the damaged part. But this is not the point I am
> making: if a part of the brain is removed and replaced with artificial
> components that function normally, then the rest of the brain also
> continues functioning normally.

As long as the person is alive, the brain is going to try to make do
with whatever it has. If it can use whatever artificial prosthetics
have been implanted, then it will, but those implants will not likely
be mistaken for functioning normally, and they cannot replace the
entire brain and be expected to function 'normally' for an indefinite
period of time. Again, like the Wall Street quants, we need to
understand that when it comes to consciousness, there is no normal.

> >> If you accept the first point, then points 2 to 4 necessarily follow.
> >> If you see an error in the reasoning can you point out exactly where
> >> it is?
>
> > If you see an error in my reasoning, please do the same.
>
> You contradict yourself in saying that it is not possible for a
> non-conscious being to behave as if it's conscious,

I don't say that; I say that there is no such thing as behaving as if
it's conscious. Awareness isn't a behavior; it's inherent and
non-computable.

> then claiming that
> there are examples of non-conscious beings behaving as if they are
> conscious (although your examples of videos and computer sims are not
> good ones: we don't actually have anything today that comes near to
> replicating the full range of human intelligence).

Those examples are intended to show the dubious nature of claims to
machine consciousness, and how being temporarily fooled does not
equate with functional equivalence.

>You don't seem to
> appreciate the difference between a technical problem and a
> philosophical argument which considers only what is theoretically
> possible.

I understand that it seems like that to you, but in this case the
philosophical argument is a red herring from the start. My hypothesis
explains why this is the case. My view is that it is theoretically
possible to embody consciousness that is like human consciousness in
something that is not human, but that depends as much on the
capabilities of the physical substance you make it out of as on the
machine itself.

> You don't explain where a computer model of neural tissue
> would fail, how you know there is non-computable physics in a neuron
> and where it is.

See above - you can only do so much quantitatively. You can't predict
what a neuron is going to do under every possible circumstance because
it's a living thing; it tries to do what it wants. Don't you try to do what
you want?

> You seem to think that even if the behaviour of a
> neuron could be replicated by an artificial neuron, or for example by
> a normal neuron missing its nucleus, the other neurons would somehow
> know that something was wrong and not behave normally; or even worse,
> that they would behave normally but the person would still experience
> an alteration in consciousness.

It depends on how artificial the neuron is. On what it's made of.

Craig
