> Date: Tue, 12 Jul 2011 15:50:12 -0700
> Subject: Re: Bruno's blasphemy.
> From: [email protected]
> To: [email protected]
>
> Thanks, I always seem to like Chalmers' perspectives. In this case I
> think that the hypothesis of physics I'm working from changes how I
> see this argument compared to how I would have a couple years ago. My
> thought now is that although organizational invariance is valid,
> molecular structure is part of the organization. I think that
> consciousness is not so much a phenomenon that is produced, but an
> essential property that is accessed in different ways through
> different organizations.
But how does this address the thought-experiment? If each neuron were
indeed replaced, one by one, by a functionally indistinguishable substitute,
do you think the qualia would somehow change without the person's behavior
changing in any way, so that they still maintained they noticed no
difference?
>
> I'll just throw out some thoughts:
>
> If you take an MRI of a silicon brain, it's going to look nothing like
> a human brain. If an MRI can tell the difference, why can't the brain
> itself?
Because neurons (including those controlling muscles) don't see each other
visually; they only "sense" one another through certain information channels,
such as the neurotransmitter molecules that cross the synaptic gap from one
neuron to another. So if the artificial substitutes gave all the same types
of outputs that other neurons can sense, like sending neurotransmitter
molecules to other neurons (and perhaps other influences, like creating
electromagnetic fields that would affect action potentials traveling along
nearby neurons), then the system as a whole should behave identically in
terms of neural outputs to muscles, including speech acts reporting inner
sensations of color and whether or not the qualia are "dancing" or remaining
constant. This would hold even if some other system that can sense
information about neurons that neurons themselves cannot (like a brain scan,
which can show something about the material or even the shape of neurons)
could tell the difference.
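
A loose software analogy (just a toy sketch of my own, with made-up class
names, not a claim about real neural chemistry): a downstream component that
only interacts with an interface cannot distinguish two implementations that
produce identical outputs at that interface.

class BiologicalNeuron:
    def fire(self, stimulus: float) -> float:
        # emits some amount of neurotransmitter per unit of stimulus
        return 2.0 * stimulus

class SiliconNeuron:
    def fire(self, stimulus: float) -> float:
        # different internal substrate, identical output at the "synapse"
        return 2.0 * stimulus

def downstream_response(upstream, stimulus: float) -> str:
    # a downstream neuron can only sense the neurotransmitter level,
    # not the material or shape of the upstream neuron
    level = upstream.fire(stimulus)
    return "fires" if level > 1.0 else "silent"

# swapping implementations leaves downstream behavior unchanged
assert downstream_response(BiologicalNeuron(), 1.0) == \
       downstream_response(SiliconNeuron(), 1.0)

A brain scan is like a debugger that inspects the implementation directly;
the downstream code (the other neurons) never gets that view.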
>
> Can you make synthetic water? Why not?
You can simulate the large-scale behavior of water using only the basic
quantum laws that govern interactions between the charged particles making
up the atoms in each water molecule; see
http://www.udel.edu/PR/UDaily/2007/mar/water030207.html for a discussion. If
you had a robot whose external behavior was somehow determined by the
behavior of water in an internal hidden tank (say it had scanners watching
the motion of water in that tank and sending signals to the robotic limbs
based on what they saw), then the external behavior of the robot should be
unchanged if you replaced the actual water tank with a sufficiently detailed
simulation of a water tank of that size.
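
Just to illustrate the flavor of "large-scale behavior from basic
interactions between charged particles", here is a minimal classical toy in
Python (made-up units, naive Euler integration; nothing like the actual ab
initio quantum method described in the linked article):

import numpy as np

def coulomb_forces(pos, charges):
    # pairwise Coulomb force on each particle (toy units, k = 1)
    n = len(charges)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            d = np.linalg.norm(r) + 1e-9  # softened to avoid division by zero
            forces[i] += charges[i] * charges[j] * r / d**3
    return forces

def step(pos, vel, charges, dt=1e-3):
    # one explicit Euler update (all masses set to 1 for simplicity)
    vel = vel + coulomb_forces(pos, charges) * dt
    pos = pos + vel * dt
    return pos, vel

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
vel = np.zeros_like(pos)
charges = np.array([1.0, -1.0, 1.0])
for _ in range(1000):
    pos, vel = step(pos, vel, charges)

The point is only that nothing beyond the particle-level rules needs to be
added by hand; the collective motion falls out of iterating them.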
>
> If consciousness is purely organizational, shouldn't we see an example
> of non-living consciousness in nature? (Maybe we do, but then why don't
> we recognize it as such?) At least we should see an example of an
> inorganic organism.
I don't see why that follows; we don't see Darwinian evolution in
non-organic systems either, but that doesn't prove that Darwinian evolution
somehow requires something more than just a physical system with the right
type of organization (basically, a system that can self-replicate, whose
structure is stable enough to preserve hereditary information to a high
degree, but with enough instability for "mutations" in this information from
one generation to the next). In fact, I think most scientists would agree
that intelligent, purposeful, and flexible behavior must have something to
do with Darwinian or quasi-Darwinian processes in the brain (quasi-Darwinian
to cover something like the way an ant colony selects the best paths to
food, which does involve throwing up a lot of variants and then creating new
variants closer to successful ones, but doesn't really involve anything
directly analogous to "genes" or self-replication of scent trails). That
said, since I am philosophically inclined towards monism, I do lean towards
the idea that perhaps all physical processes might be associated with some
very "basic" form of qualia, even if the sort of complex, differentiated,
and meaningful qualia we experience are only possible in adaptive systems
like the brain. (Chalmers discusses this sort of panpsychist idea in his
book "The Conscious Mind", and there's also a discussion of "naturalistic
panpsychism" at http://www.hedweb.com/lockwood.htm#naturalistic .)
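
Coming back to the Darwinian point: to make those bare requirements
(self-replication, stable heredity, occasional mutation, selection)
concrete, here is a toy Python sketch of my own, where every parameter is an
arbitrary choice and a fixed "fitness" stands in for whatever the
environment happens to reward:

import random

TARGET = [1] * 20  # an arbitrary "fit" genome for the toy environment

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome, mutation_rate=0.01):
    # heredity is high-fidelity, but each gene has a small chance of mutating
    return [g if random.random() > mutation_rate else 1 - g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    # selection: the fitter half replicates, the rest are discarded
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [replicate(g) for g in survivors]

print(max(fitness(g) for g in population))  # approaches 20

Nothing here is "organic"; self-replication, stable heredity, and occasional
mutation are all the Darwinian logic requires.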
>
> My view of awareness is now subtractive and holographic (think pinhole
> camera), so that I would read fading qualia in a different way. More
> like dementia... attenuating connectivity between different aspects of
> the self, not changing qualia necessarily. The brain might respond to
> the implanted chips, even ruling out organic rejection, the native
> neurology may strengthen its remaining connections and attempt to
> compensate for the implants with neuroplasticity, routing around the
> 'damage'.
But here you seem to be rejecting the basic premise of Chalmers' thought
experiment, which supposes that one could replace neurons with *functionally*
indistinguishable substitutes, so that the externally observable behavior of
other nearby neurons would be no different from what it would be if the
neurons hadn't been replaced. Physical reductionism is the idea that the
external behavior (as opposed to the inner qualia) of any physical system is
in principle always reducible to the interactions of all its basic
components, such as subatomic particles, interacting according to the same
universal laws (like how the behavior of a collection of water molecules can
be reduced to the interaction of all the individual charged particles
obeying basic quantum laws). If you accept physical reductionism, then it
seems to me you should also accept that as long as an artificial neuron
created the same physical "outputs" as the neuron it replaced (such as
neurotransmitter molecules and electromagnetic fields), the behavior of the
surrounding neurons would be unaffected. If instead you object to physical
reductionism, or you accept it but still reject the idea that a computer
simulation could predict a real neuron's "outputs", or reject the idea that
the other neurons wouldn't behave any differently as long as the outputs at
the boundary of the original neuron were unchanged, please make that clear,
so I can understand which specific premise of Chalmers' thought-experiment
you are rejecting.
Jesse