On Wed, Aug 3, 2011 at 3:35 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

> Does a cadaver behave like a human? If I string it up like a
> marionette? If the puppeteer is very good? What is the meaning of
> these questions when they have nothing to do with whether the thing feels
> like a human?

The question is whether it can behave exactly like a human but lack
consciousness. A cadaver does not behave like a human. If it is
manipulated by a human puppeteer then it can behave like a human and it
will in fact be conscious, since its "brain", the puppeteer, is
conscious. What if the cadaver is animated by a computer so that it
behaves just like a human: would it be conscious then or would it be
unconscious, or do you think the question is meaningless?

> Because he doesn't know about the essential-existential relation. Can
> a computer model of your brain accurately predict what is going to
> happen to you tomorrow? Next week? If I pull a name of a random
> country out of a hat today and put you on a plane to that country,
> will the computer model already have predicted what you will see and
> say during your trip? That is what a computer model would have to do
> to predict the 'behavior of a brain' - that means predicting signals
> which correlate to the images processed in the visual regions of the
> brain. How can you predict that without knowing what country you will
> be going to next week?

If it is an accurate model of my brain, it will predict what is going
to happen to me next week just as well as I would - which may not be
very well, especially if I am going to an unknown country.

> This is the limitation of a-signifying modeling. You can only do so
> much comparing the shapes of words and the order of the letters
> without really understanding what the words mean. Knowing the letters
> and order is important, but it is not sufficient for understanding
> either what a human or their brain experiences. It's the meaning that
> is essential. This should be especially evident after the various
> derivative-driven market crashes. The market can't be predicted
> indefinitely through statistical analysis alone, because the only
> thing the statistics represent is driven by changing human conditions
> and desires. Still people will beat the dead horse of quantitative
> invincibility.

The computer models in this case were not bettered by models created
in the heads of human traders. Anyway, that has nothing to do with the
theoretical possibility of accurate brain simulations.

>  Searle points out that a model of a storm may predict its
>> behaviour accurately, but it won't actually be wet: that would require
>> a real storm.
>
> We may be at the limit of practical meteorological modeling. Not only
> will the virtual storm not be wet, it won't even necessarily
> behave like a real storm when it really needs to. Reality is a
> stinker. It doesn't like to be pinned down for long.

There is a difference between modelling a particular storm and
modelling a storm. Not even another real storm can model a particular
storm, due to chaotic effects. That is, if you tried to replicate a
particular storm on Earth by building a copy of the entire planet, the
equivalent storm would evolve differently, since you can't replicate
the initial conditions to an arbitrary level of precision.
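
As a concrete illustration (my own sketch, not anything from the earlier
posts): integrate the textbook Lorenz system twice, with starting points
that differ by one part in a billion, and the two trajectories end up far
apart within a few thousand small time steps.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz equations (crude but fine here).
        x, y, z = state
        dxdt = sigma * (y - x)
        dydt = x * (rho - z) - y
        dzdt = x * y - beta * z
        return state + dt * np.array([dxdt, dydt, dzdt])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])  # imperfectly replicated initial conditions

    for _ in range(5000):
        a = lorenz_step(a)
        b = lorenz_step(b)

    print("separation after 5000 steps:", np.linalg.norm(a - b))

Nothing about the equations is non-computable; the divergence comes purely
from the impossibility of specifying the initial state exactly.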

>>By analogy, a computer inside someone's head may model
>> the behaviour of his brain sufficiently well so as to cause his
>> muscles to move in a perfectly human way, but according to Searle that
>> does not mean that the ensuing being would be conscious. If you
>> disagree that even the behaviour can be modelled by a computer then
>> you are claiming that there is something in the physics of the brain
>> which is non-computable.
>
> There is something in the physics of all matter which is non-
> computable, it's just that in order for the non-computable part of a
> chip to be like the non-computable part of the brain, it needs to feel
> like it's living inside of a brain inside of a body on a planet with a
> history. A chip is probably not going to feel like that. A chip
> doesn't know what it means for something to be hard or easy or painful
> or dangerous. Cells know that, not because of their structure but
> because of their capability to sustain that structure.
>
>> But there is no evidence for such
>> non-computable physics in the brain; it's just ordinary chemistry.
>
> There is no evidence that chemistry is any less ordinary than we are
> either, or that it doesn't have non-computable interiority. Obviously it
> either must have that interiority or have the potential to give rise
> to that interiority in specific large groups, but either way it comes
> from somewhere. It's a feature of the cosmos just as is computation.

You're saying (as far as I can understand) that there are some basic
physical processes which follow no mathematical law. Can you give a
specific example of such a process? For example, do you think that the
metabolism of glucose or the propagation of an action potential along
an axon is non-computable, and that generations of biochemists and
physiologists have been wrong?
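
To be concrete about the action potential case: spiking is one of the most
routinely computed phenomena in physiology. Here is a minimal sketch (my
own illustration; it uses the simple leaky integrate-and-fire model rather
than the full Hodgkin-Huxley equations, and the parameters are generic
textbook values, not measurements from any real neuron):

    def simulate_lif(i_nA=2.0, t_max_ms=100.0, dt_ms=0.1, tau_ms=10.0,
                     r_MOhm=10.0, v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0):
        # Leaky integrate-and-fire: dV/dt = (-(V - v_rest) + R*I) / tau,
        # with an instantaneous reset whenever V crosses threshold.
        v = v_rest
        spike_times_ms = []
        for step in range(int(t_max_ms / dt_ms)):
            v += (-(v - v_rest) + r_MOhm * i_nA) * dt_ms / tau_ms
            if v >= v_thresh:
                spike_times_ms.append(round(step * dt_ms, 1))
                v = v_reset
        return spike_times_ms

    print(simulate_lif())  # a regular train of spike times for a constant input

For a constant 2 nA input this prints a regular spike train; the full
Hodgkin-Huxley equations are just a larger system of the same kind of
ordinary differential equations, solved the same way.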

>> > In any case, it all has nothing to do with whether or not the thing is
>> > actually conscious, which is the only important aspect of this line of
>> > thinking. We have simulations of people already - movies, TV, blow up
>> > dolls, sculptures, etc. Computer sims add another layer of realism to
>> > these without adding any reality of awareness.
>>
>> So you *are* conceding the first point, that it is possible to make
>> something that behaves as if it's conscious without actually being
>> conscious?
>
> You're either not reading or not understanding what I'm writing. There
> is no such thing as 'behaving as if it's conscious'. It's a category
> error along the lines of 'feeling as if it's unconscious'.

A human behaves as if he's conscious when he declares that he has
feelings and perceptions, is aware of his surroundings and so on.
That's my definition of "behaves as if he's conscious". If a robot
behaves in a similar way, does that mean the robot is necessarily
conscious?

>> We don't even need to talk about brain physics: for the
>> purposes of the philosophical discussion it can be a magical device
>> created by God. If you don't concede this then you are essentially
>> agreeing with functionalism: that if something behaves as if it's
>> conscious then it is necessarily conscious.
>
> See above: There is no such thing as 'behaving as if it's conscious'.
>
>
>> >> 2. Therefore it would be possible to make a brain component that
>> >> behaves just like normal brain tissue but lacks consciousness.
>>
>> > Probably not. Brain tissue may not be any less conscious than the
>> > brain as a whole. What looks like normal behavior to us might make the
>> > difference between cricket chirps and a symphony and we wouldn't
>> > know.
>>
>>  If you concede point 1, you must concede point 2.
>
> If you don't understand my rejection of point 1 then you still can
> understand point 2, because you're a living human being, capable of
> figuring out things in many different ways, not just through a
> scripted linear logic.
>
>> >> 3. And since such a brain component behaves normally the rest of the
>> >> brain should behave normally when it is installed.
>>
>> > The community of neurons may graciously integrate the chirping
>> > sculpture into their community, but it doesn't mean that they are
>> > fooled and it doesn't mean that the rest of the orchestra can be
>> > replaced with sculptures.
>>
>> If you concede point 2 you must concede point 3.
>
> Did I mention that the occidental side of the psychological continuum
> can lead to robotic formalism?
>
>> >> 4. So it is possible to have, say, half of your brain replaced with
>> >> unconscious components and you would both behave normally and feel
>> >> that you were completely normal.
>>
>> > It's possible to have half of your cortex disappear and still behave
>> > and feel relatively normally.
>>
>> >http://www.newscientist.com/article/dn17489-girl-with-half-a-brain-re...
>> >http://www.pnas.org/content/106/31/13034
>>
>> People with brain damage can have other parts of their brain take over
>> the function of the damaged part. But this is not the point I am
>> making: if a part of the brain is removed and replaced with artificial
>> components that function normally, then the rest of the brain also
>> continues functioning normally.
>
> As long as the person is alive, the brain is going to try to make do
> with whatever it has. If it can use whatever artificial prosthetics
> have been implanted, then it will, but those implants will not likely
> be mistaken for functioning normally, and they cannot replace the
> entire brain and be expected to function 'normally' for an indefinite
> period of time. Again, like the Wall Street quants, we need to
> understand that when it comes to consciousness, there is no normal.

How will the brain know the implants are not normal when by definition
they function normally? (Yes, it would be hard to make such implants,
but this is not a discussion about technical difficulty).

>> >> If you accept the first point, then points 2 to 4 necessarily follow.
>> >> If you see an error in the reasoning can you point out exactly where
>> >> it is?
>>
>> > If you see an error in my reasoning, please do the same.
>>
>> You contradict yourself in saying that it is not possible for a
>> non-conscious being to behave as if it's conscious,
>
> I don't say that, I say that there is no such thing as behaving as if
> it's conscious. Awareness isn't a behavior, it's inherent and non-
> computable.

I'm assuming your point - that consciousness is non-computable but
behaviour is computable - to see where it leads, but you don't seem to
understand this.

>> then claiming that
>> there are examples of non-conscious beings behaving as if they are
>> conscious (although your examples of videos and computer sims are not
>> good ones: we don't actually have anything today that comes near to
>> replicating the full range of human intelligence).
>
> Those examples are intended to show the dubious nature of claims to
> machine consciousness, and how being temporarily fooled does not
> equate with functional equivalence.

But being permanently fooled by a good enough simulation does.

>>You don't seem to
>> appreciate the difference between a technical problem and a
>> philosophical argument which considers only what is theoretically
>> possible.
>
> I understand that it seems like that to you, but in this case the
> philosophical argument is a red herring from the start. My hypothesis
> explains why this is the case. My view is that it is theoretically
> possible to embody consciousness that is like human consciousness in
> something that is not human, but that depends as much on the
> capabilities of the physical substance you make it out of as on the
> machine itself.

So make a substitute that IGNORES consciousness and just models
BEHAVIOUR and see where it leads.

>> You don't explain where a computer model of neural tissue
>> would fail, how you know there is non-computable physics in a neuron
>> and where it is.
>
> See above - you can only do so much quantitatively. You can't predict
> what a neuron is going to do under every possible circumstance because
> it's a living thing; it tries to do what it wants. Don't you try to do what
> you want?

It's a living thing, but it's also a physical system composed of
atoms, which follow well-defined physical laws. I do what I want but
in so doing, the matter in my body obeys the laws of physics. Are you
proposing that there is magic involved as well?

>> You seem to think that even if the behaviour of a
>> neuron could be replicated by an artificial neuron, or for example by
>> a normal neuron missing its nucleus, the other neurons would somehow
>> know that something was wrong and not behave normally; or even worse,
>> that they would behave normally but the person would still experience
>> an alteration in consciousness.
>
> It depends how artificial the neuron is. What it's made of.

It's magic, so it can fool the other neurons all the time. Does this
magic neuron necessarily preserve consciousness?


-- 
Stathis Papaioannou
