On Jul 13, 2011, at 7:04 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron, any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

The assumption is that there is a meaningful difference between the
processes physically within the cell and those that are input and
output between the cells. That is not my view. Just as the glowing
blue chair you are imagining now (is it a recliner? A futuristic
cartoon?) is not physically present in any neuron or group of neurons
in your skull -

If it is not present physically, then what causes a person to say "I am imagining a blue chair"?

under any imaging system or magnification. My idea of
'interior' is different from the physical inside of the cell body of a
neuron. It is the interior topology. It's not even a place, it's just
a sensorimotive

Could you please define this term? I looked it up but the definitions I found did not seem to fit.

awareness of itself and its surroundings - hanging on
to its neighbors, reaching out to connect, expanding and contracting
with the mood of the collective. This is what consciousness is. This
is who we are. The closer you get to the exact nature of the neuron,
the closer you get to human consciousness.

There is such a thing as too low a level. What leads you to believe the neuron is the appropriate level to find qualia, rather than the states of neuron groups or the whole brain? Taking the opposite direction, why not say it must be explained in terms of chemistry or quarks? What led you to conclude it is the neurons? After all, are rat neurons very different from human neurons? Do rats have the same range of qualia as we do?

If you insist upon using
inorganic materials, that really limits the degree to which the
feelings it can host will be similar.

Assuming qualia supervene on the individual cells or their chemistry.

Why wouldn't you need DNA to
feel like something based on DNA in practically every one of its
cells?

You would have to show that the presence of DNA in part determines the evolution of the brain's neural network. If not, it is as relevant to you and your mind as the neutrinos passing through you.



The idea is just that *some* sufficiently detailed digital simulation would behave just like real neurons and a real brain, and "functionalism" as a philosophical view says that this simulation would have the same mental properties (such as qualia, if the functionalist thinks of "qualia" as something more than just a name for a certain type of physical process) as the original brain

A digital simulation is just a pattern in an abacus.

The state of an abacus is just a number, not a process. I think you may not have a full understanding of the differences between a Turing machine and a string of bits. A Turing machine can mimic any process that is definable and does not take an infinite number of steps. Turing machines are dynamic, self-directed entities. This distinguishes them from cartoons, YouTube videos and the state of an abacus.

Since they have such a universal capability to mimic processes, the idea that the brain is a process leads naturally to the idea of intelligent computers which could function identically to organic brains.

Then, if you deny the logical possibility of zombies, or fading qualia, you must accept that such an emulation of a human mind would be equally conscious.
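To make the machine-versus-bit-string distinction concrete, here is a minimal Turing machine sketch in Python (the binary-increment rule table and the state names are my own invented example, not anything from the thread). The machine is the read-write-move *loop*; the tape by itself is just a static string of bits, like the beads of the abacus.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Apply rules until the machine halts; the *process* is this loop,
    not the static tape contents."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Binary increment: scan right to the blank, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "R", "halt"),
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "1011"))  # prints "1100" (11 + 1 = 12)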

If you've got a
gigantic abacus and a helicopter, you can make something that looks
like whatever you want it to look like from a distance, but it's still
just an abacus. It has no subjectivity beyond the physical materials
that make up the beads.

The idea behind a computer simulation of a mind is not to make something that looks like a brain but to make something that behaves and works like a brain.



Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones.

It's a dynamic system,

So is a Turing machine.

there is no boundary like that. The
neurotransmitters are produced by and received within the neurons
themselves. If something produces and metabolizes biological
molecules, then it is functioning at a biochemical level and not at
the level of a digital electronic simulation. If you have a heat sink
for your device it's electromotive. If you have an insulin pump it's
biological, if you have a serotonin reuptake receptor, it's
neurological.

So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected.

No, I don't think that's how living things work. Remember that
people's bodies often reject living tissue transplanted from other
human beings.

Rejection requires the body knowing there is a difference, which is against the starting assumption.
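The boundary point can be sketched in a few lines of Python (the class names and the toy input/output rule are invented purely for illustration, not a model of real neurochemistry). A downstream observer that only ever sees what crosses the boundary has no way to distinguish two implementations with identical emissions:

class BiologicalNeuron:
    def outputs(self, stimulus):
        # Stands in for the messy internal chemistry of a real cell.
        return [("glutamate", stimulus * 2), ("spike", stimulus > 3)]

class ArtificialNeuron:
    def __init__(self):
        # Totally different internals: a precomputed lookup table.
        self._table = {s: [("glutamate", s * 2), ("spike", s > 3)]
                       for s in range(10)}

    def outputs(self, stimulus):
        return self._table[stimulus]

def downstream_behavior(upstream, stimuli):
    # A neighboring cell only ever sees what crosses the boundary.
    return [upstream.outputs(s) for s in stimuli]

stimuli = [1, 4, 2, 5]
assert (downstream_behavior(BiologicalNeuron(), stimuli)
        == downstream_behavior(ArtificialNeuron(), stimuli))

Whether rejection could occur at all then turns on whether the body can detect anything the replacement fails to emit at that boundary.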



You didn't address my question about whether you agree or disagree with physical reductionism in my last post, can you please do that in your next response to me?

I agree with physical reductionism as far as the physical side of
things is concerned. Qualia is the opposite that would be subject to
experiential irreductionism. Which is why you can print Shakespeare on
a poster or a fortune cookie and it's still Shakespeare, but you can't
make enriched uranium out of corned beef or a human brain out of table
salt.

Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected

I'm rejecting the premise that there is a such thing as a functional
replacement for a neuron that is sufficiently different from a neuron
that it would matter.

I pasted real-life counterexamples to this: artificial cochleas and retinas.

You can make a prosthetic appliance which your
nervous system will make do with, but it can't replace the nervous
system altogether.

At what point does the replacement magically stop working?

The nervous system predicts and guesses. It can
route around damage or utilize a device which it can understand how to
use.

So it can use an artificial retina but not an artificial neuron?



first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

This is tautological. You are making a nonsense distinction between
its 'internal' structure and what it does. If the internal structure
is equivalent enough, then it will be functionally equivalent to other
neurons and the organism at large. If it's not, then it won't be.
Interior mechanics that produce organic molecules and absorb them
through a semipermeable membrane are biological cells. If you can make
something that does that out of something other than nucleic acids,
then cool, but why bother? Just build the cell you want
nanotechnologically.

Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different.

If you are interested in the behaviors of consciousness only, all you
have to do is watch a YouTube video and you will see a simulated
consciousness behaving. Can you produce something that acts like it's
conscious? Of course.
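For what it's worth, the water-tank robot can be sketched the same way (everything here, from the tank classes to the threshold rule, is an invented toy, assuming the scanners sample the level at discrete times). Since the limb controller reads only the water level, any stand-in that reproduces those readings produces the same external behavior:

import math

class PhysicalTank:
    """Stands in for the real tank of sloshing water."""
    def level(self, t):
        return 5.0 + math.sin(0.5 * t)

class SimulatedTank:
    """Very different internals: a table of precomputed samples,
    standing in for a 'sufficiently good' numerical simulation."""
    def __init__(self):
        self._samples = {t: 5.0 + math.sin(0.5 * t) for t in range(100)}

    def level(self, t):
        return self._samples[t]

def limb_command(tank, t):
    # External behavior depends only on the scanner's reading.
    return "raise" if tank.level(t) > 5.0 else "lower"

assert ([limb_command(PhysicalTank(), t) for t in range(20)]
        == [limb_command(SimulatedTank(), t) for t in range(20)])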

My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well".

It's a false equivalence. Darwinian evolution is a relational
abstraction and consciousness or life is a concrete experience. The
fact that we can call anything which follows a statistical pattern of
iterative selection 'Darwinian evolution' just means that it is a
basic relation of self-replicating elements in a dynamic mechanical
system. That living matter and consciousness only appear out of a
particular recipe of organic molecules doesn't mean that there can't
be another recipe; however, it does tend to support the observation
that life and consciousness are made out of some things and not others,
and certainly it supports that they are not likely phenomena which can
be produced by combinations of just anything physical, let alone something
purely computational.
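As an aside on the organizational point: self-replication with heredity, mutation and selection is easy to set up in a purely digital substrate. A minimal sketch in Python (the target string, population size and mutation rate are arbitrary choices for illustration):

import random

random.seed(0)          # reproducible toy run
TARGET = "11110000"     # arbitrary 'environment' the genomes adapt to

def fitness(genome):
    # How many bits match the environment's target.
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome, mutation_rate=0.05):
    # Heredity with occasional copying errors (mutation).
    return "".join(bit if random.random() > mutation_rate
                   else random.choice("01") for bit in genome)

population = ["".join(random.choice("01") for _ in range(8))
              for _ in range(20)]
for generation in range(50):
    # Selection: the fitter half replicates, the rest die out.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [replicate(g) for g in survivors]

best = max(population, key=fitness)
print(best, fitness(best))  # converges on (or near) the target

Whether any qualia ride along on such a process is of course exactly what is in dispute here; the sketch only shows that the organizational ingredients themselves are substrate-neutral.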

On Jul 13, 1:23 pm, Jesse Mazer <laserma...@hotmail.com> wrote:
Craig Weinberg wrote:

> It's weird, I get an error when I try to reply in any way to your last post. Here's what I'm trying to Reply: The crux of the whole issue is what we mean by functionally indistinguishable.

But I specified what I meant (and what I presume Chalmers meant)--that any physical influences such as neurotransmitters that other neurons respond to (in terms of the timing of their own electrochemical pulses, and the growth and death of their synapses) are still emitted by the substitute, so that the other neurons "can't tell the difference" and their behavior is unchanged from what it would be if the neuron hadn't been replaced by an artificial substitute.

> If you aren't talking about silicon chips or digital simulation, then you are talking about a different level of function. Would your artificial neuron synthesize neurotransmitters, detect and respond to neurotransmitters, even emulate genetics?

I said that it would emit neurotransmitters--whether it synthesized them internally or had a supply that was periodically replenished by nanobots or something is irrelevant. Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron, any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

> If you get down to the level of the pseudobiological, then the odds of being able to replace neurons successfully gets much higher to me. To me, that's not what functionalism is about though. I think of functionalism as confidence in a more superficial neural network simulation of logical nodes. Virtual consciousness.

I don't think functionalism means confidence that the extremely simplified "nodes" of most modern neural networks would be sufficient for a simulated brain that behaved just like a real one, it might well be that much more detailed simulations of individual neurons would be needed for mind uploading. The idea is just that *some* sufficiently detailed digital simulation would behave just like real neurons and a real brain, and "functionalism" as a philosophical view says that this simulation would have the same mental properties (such as qualia, if the functionalist thinks of "qualia" as something more than just a name for a certain type of physical process) as the original brain (see the first sentence defining 'functionalism' at http://plato.stanford.edu/entries/functionalism/)

> If you're going to get down to the biological substitution level of emulating the tissue itself so that the tissue is biologically indistinguishable from brain tissue, but maybe has some plastic or whatever instead of cytoplasm, then sure, that might work. As long as you've got real DNA, real ions, real sensitivity to real neurotransmitters, then yeah that could work.

No, that's not what I'm talking about. Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones. But all the relevant molecules and electromagnetic waves which leave the boundary of the original neuron (and which are relevant to the behavior of other neurons, so for example visible light waves probably don't need to be included) are still emitted by the artificial substitute, like neurotransmitters. As I said, a reductionist should believe that the behavior of a complex system is in principle explainable as nothing more than the sum of all the interactions of its parts. And if the reductionist grants that at the scale of neurons, entanglement isn't relevant to how they interact (because of decoherence), then we should be able to assume that the behavior of the system is a sum of *local* interactions between particles that are close to one another in space. So if we divide a large system into a bunch of small volumes, the only way processes happening within one volume can have any causal influence on processes happening within a second adjacent volume is via local interactions that happen at the *boundary* between the two volumes, or particles passing through this boundary which later interact with others inside the second volume. So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected. You didn't address my question about whether you agree or disagree with physical reductionism in my last post, can you please do that in your next response to me?

>> You can simulate the large-scale behavior of water using only the basic quantum laws that govern interactions between the charged particles that make up the atoms in each water molecule-

> Simulating the behavior of water isn't the same thing as being able to create synthetic water. If you are starving, watching a movie that explains a roast beef sandwich doesn't help you. Why would consciousness be any different?

Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected (both the behavior of other nearby neurons, and behavior of the whole person in the form of muscle movement triggered by neural signals, including speech about what the person was feeling). If you do accept that premise, then we can move on to Chalmers' argument about the implausibility of dancing/fading qualia in situations where behavior is completely unaffected--you also have not really given a clear answer to the question of whether you think there could be situations where behavior is completely unaffected but qualia are changing or fading. But one thing at a time, first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

> If you replaced the log in your fireplace with a fluorescent tube, it's not going to be the functional equivalent of fire if you are freezing in the winter. The problem with consciousness is that we don't know which functions, if any, make the difference between the possibility of consciousness or not. I see our human consciousness as an elaboration of animal experience, so that anything that can emulate human consciousness must be able to feel like an animal, which means feeling like you are made of meat that wants to eat, fuck, kill, run, sleep, and avoid pain.

Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different.

>> I don't see why that follows, we don't see darwinian evolution in non-organic systems either but that doesn't prove that darwinian evolution somehow requires something more than just a physical system with the right type of organization (basically a system that can self-replicate, and which has the right sort of stable structure to preserve hereditary information to a high degree but also with enough instability for "mutations" in this information from one generation to the next)

> If we can make an inorganic material that can self-replicate, mutate, and die, then it stands more of a chance to be able to develop its detection into something like sensation, then feeling, thinking, morality, etc. There must be some reason why it doesn't happen naturally after 4 billion years here, so I suspect that reinventing it won't be worth the trouble. Why not just use organic molecules instead?

I don't really want to get into the general question of the advantages and disadvantages of trying to have darwinian evolution in non-organic systems, I was just addressing your specific claim that if consciousness is just a matter of organization we should expect to see it already in non-organic systems. My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well".
Jesse

On Jul 12, 8:36 pm, Jesse Mazer <laserma...@hotmail.com> wrote:

Date: Tue, 12 Jul 2011 15:50:12 -0700
Subject: Re: Bruno's blasphemy.
From: whatsons...@gmail.com
To: everything-list@googlegroups.com

Thanks, I always seem to like Chalmers perspectives. In this case I
think that the hypothesis of physics I'm working from changes how I
see this argument compared to how I would have a couple years ago. My
thought now is that although organizational invariance is valid,
molecular structure is part of the organization. I think that
consciousness is not so much a phenomenon that is produced, but an
essential property that is accessed in different ways through
different organizations.

But how does this address the thought-experiment? If each neuron were indeed replaced one by one by a functionally indistinguishable substitute, do you think the qualia would change somehow without the person's behavior changing in any way, so they still maintained that they noticed no differences?

I'll just throw out some thoughts:

If you take an MRI of a silicon brain, it's going to look nothing like a human brain. If an MRI can tell the difference, why can't the brain
itself?

Because neurons (including those controlling muscles) don't see each other visually, they only "sense" one another by certain information channels such as neurotransmitter molecules which go from one neuron to another at the synaptic gap. So if the artificial substitutes gave all the same type of outputs that other neurons could sense, like sending neurotransmitter molecules to other neurons (and perhaps other influences like creating electromagnetic fields which would affect action potentials traveling along nearby neurons), then the system as a whole should behave identically in terms of neural outputs to muscles (including speech acts reporting inner sensations of color and whether or not the qualia are "dancing" or remaining constant), even if some other system that can sense information about neurons that neurons themselves cannot (like a brain scan which can show something about the material or even shape of neurons) could tell the difference.

Can you make synthetic water? Why not?

You can simulate the large-scale behavior of water using only the basic quantum laws that govern interactions between the charged particles that make up the atoms in each water molecule--see http://www.udel.edu/PR/UDaily/2007/mar/water030207.html for a discussion. If you had a robot whose external behavior was somehow determined by the behavior of water in an internal hidden tank (say it had some scanners watching the motion of water in that tank, and the scanners would send signals to the robotic limbs based on what they saw), then the external behavior of the robot should be unchanged if you replaced the actual water tank with a sufficiently detailed simulation of a water tank of that size.

If consciousness is purely organizational, shouldn't we see an example of non-living consciousness in nature? (Maybe we do but why don't we
recognize it as such). At least we should see an example of an
inorganic organism.

I don't see why that follows, we don't see darwinian evolution in non-organic systems either but that doesn't prove that darwinian evolution somehow requires something more than just a physical system with the right type of organization (basically a system that can self-replicate, and which has the right sort of stable structure to preserve hereditary information to a high degree but also with enough instability for "mutations" in this information from one generation to the next). In fact I think most scientists would agree that intelligent purposeful and flexible behavior must have something to do with darwinian or quasi-darwinian processes in the brain (quasi-darwinian to cover something like the way an ant colony selects the best paths to food, which does involve throwing up a lot of variants and then creating new variants closer to successful ones, but doesn't really involve anything directly analogous to "genes" or self-replication of scent trails). That said, since I am philosophically inclined towards monism I do lean towards the idea that perhaps all physical processes might be associated with some very "basic" form of qualia, even if the sort of complex, differentiated and meaningful qualia we experience are only possible in adaptive systems like the brain (chalmers discusses this sort of panpsychist idea in his book "The Conscious Mind", and there's also a discussion of "naturalistic panpsychism" at http://www.hedweb.com/lockwood.htm#naturalistic)

My view of awareness is now subtractive and holographic (think pinhole camera), so that I would read fading qualia in a different way. More like dementia... attenuating connectivity between different aspects of the self, not changing qualia necessarily. The brain might respond to the implanted chips, even ruling out organic rejection, the native neurology may strengthen its remaining connections and attempt to compensate for the implants with neuroplasticity, routing around the 'damage'.

But here you seem to be rejecting the basic premise of Chalmers' thought experiment, which supposes that one could replace neurons with *functionally* indistinguishable substitutes, so that the externally-observable behavior of other nearby neurons would be no different from what it would be if the neurons hadn't been replaced. If you accept physical reductionism--the idea that the external behavior (as opposed to inner qualia) of any physical system is in principle always reducible to the interactions of all its basic components such as subatomic particles, interacting according to the same universal laws (like how the behavior of a collection of water molecules can be reduced to the interaction of all the individual charged particles obeying basic quantum laws)-- then it seems to me you should accept that as long as an artificial neuron created the same physical "outputs" as the neuron it replaced (such as neurotransmitter molecules and electromagnetic fields), then the behavior of surrounding neurons should be unaffected. If you object to physical reductionism, or if you don't object to it but somehow still reject the idea that it would be possible to predict a real neuron's "outputs" with a computer simulation, or reject the idea that as long as the outputs at the boundary of the original neuron were unchanged the other neurons wouldn't behave any differently, please make it clear so I can understand what specific premise of Chalmers' thought-experiment you are rejecting.
Jesse
