>You're misunderstanding what I meant by "internal", I wasn't talking about
>subjective interiority (qualia), but *only* about the physical processes in
>the spatial interior of the cell. I am trying to first concentrate on
>external behavioral issues that don't involve qualia at all, to see whether
>your disagreement with Chalmers' argument is because you disagree with the
>basic starting premise that it would be possible to replace neurons by
>artificial substitutes which would not alter the *behavior* of surrounding
>neurons (or of the person as a whole), only after assuming this does
>Chalmers go on to speculate about what would happen to qualia as neurons
>were gradually replaced in this way. Remember this paragraph from my last
>post:

In my model, physical processes are just the exterior, like the clothing of the qualia (perceivable experiences). There is no such thing as external behavior that doesn't involve qualia; that's my point. It's all one thing - sensorimotive perception of relativistic electromagnetism. I think that in the best-case scenario, what happens when you virtualize your brain with a non-biological neuron emulation is that you gradually lose consciousness, but the remaining consciousness has more and more technology at its disposal. You can't remember your own name, but when asked, there would be a meaningless word that comes to mind for no reason. To me, the only question is how virtual is virtual. If you emulate the biology, that's a completely different scenario than running a logical program on a chip. Logic doesn't ooze serotonin.

>Are you suggesting that even if the molecules given off by foreign cells
>were no different at all from those given off by my own cells, my cells
>would nevertheless somehow be able to nonlocally sense that the DNA in the
>nuclei of these cells was foreign?

It's not about whether other cells would sense the imposter neuron, it's about how much of an imposter the neuron is. If it acts like a real cell in every physical way, if another organism can kill it and eat it and metabolize it completely, then you pretty much have a cell. Whatever cannot be metabolized in that way is what potentially detracts from the ability to sustain consciousness. It's not your cells that need to sense DNA, it's the question of whether a brain composed entirely, or significantly, of cells lacking DNA would be conscious in the same way as a person.

>Well, it's not clear to me that you understand the implications of physical
>reductionism based on your rejection of my comments about physical processes
>in one volume only being affected via signals coming across the boundary.
>Unless the issue is that you accept physical reductionism, but reject the
>idea that we can treat all interactions as being local ones (and again I
>would point out that while entanglement may involve a type of nonlocal
>interaction--though this isn't totally clear, many-worlds advocates say they
>can explain entanglement phenomena in a local way--because of decoherence,
>it probably isn't important for understanding how different neurons interact
>with one another).

It's not clear that you are understanding that my model of physics is not the same as yours. Imagine an ideal glove that is white on the outside and on the inside feels like latex. As you move your hand in the glove you feel all sorts of things on the inside: textures, shapes, etc. From the outside you see different patterns appearing on it. When you clench your fist, you can see right through the glove to your hand, but when you do, your hand goes completely numb and you can't feel the glove. What you are telling me is that if you make a glove that looks exactly like this crazy glove, if it satisfies all glove-like properties such that it makes these crazy designs on the outside, then it must be having the same effect on the inside. My position is no, not unless it is close enough to the real glove physically that it produces the same effects on the inside, which you cannot know unless you are wearing the glove.

>And is that because you reject the idea that in any volume of space,
>physical processes outside that volume can only be affected by processes in
>its interior via particles (or other local signals) crossing the boundary of
>that volume?

No, it's because the qualia possible in inorganic systems are limited to inorganic qualia. Think of consciousness as DNA. Can you make DNA out of string? You could make a really amazing model of it out of string, but it's not going to do what DNA does. You are saying, well, what if I make DNA out of something that acts just like DNA? I'm asking, like what? If it acts like DNA in every way, then it isn't an emulation, it's just DNA by another name.

>I don't know what you mean by "functionally equivalent" though, are you
>using that phrase to suggest some sort of similarity in the actual molecules
>and physical structure of what's inside the boundary?

I'm using that phrase because you are. I'm just saying that what the cell is causes what the cell does. You can try to change what the cell is while retaining what you think the cell does, but the more you change it, the greater the odds that you are changing something you have no way of knowing is important.

>My point is that it's perfectly possible to imagine replacing a neuron with
>something that has a totally different physical structure, like a tiny
>carbon nanotube computer, but that it's sensing incoming neurotransmitter
>molecules (and any other relevant physical inputs from nearby cells) and
>calculating how the original neuron would have behaved in response to those
>inputs if it were still there, and using those calculations to figure out
>what signals the neuron would have been sending out of the boundary, then
>making sure to send the exact same signals itself (again, imagine that it
>has a store of neurotransmitters which can be sent out of an artificial
>synapse into the synaptic gap connected to some other neuron). So it *is*
>"functionally equivalent" if by "function" you just mean what output signals
>it transmits in response to what input signals, but it's not functionally
>equivalent if you're talking about its actual internal structure.

But what the signals and neurotransmitters are coming out of is not functionally equivalent. The real thing feels and has intent; it doesn't calculate and imitate. You can't build a machine that feels and has intent out of basic units that can only calculate and imitate. It just scales up to a sentient being versus a spectacular automaton.

>If you do accept that it would be possible in principle to gradually replace
>real neurons with artificial ones in a way that wouldn't change the behavior
>of the remaining real neurons and wouldn't change the behavior of the person
>as a whole, but with the artificial ones having a very different internal
>structure and material composition than the real ones, then we can move on
>to Chalmer's argument about why this sort of behavioral indistinguishability
>suggests qualia probably wouldn't change either. But as I said I don't want
>to discuss that unless we're clear on whether you accept the original
>premise of the thought-experiment.

It all depends on how different the artificial neurons are. There might be other recipes for consciousness and life, but so far we have no reason to believe that inorganic logic can sustain either. For the purposes of this thread, let's say no. If it's artificial enough to be called artificial, then the consciousness associated with it is also inauthentic.

>That's just a recording of something that actually happened to a biological
>consciousness, not a simulation which can respond to novel external stimuli
>(like new questions I can think to ask it) which weren't presented to any
>biological original.

That's easy. You just make a few hundred YouTube clips and associate them with some AGI logic. Basically make a video ELIZA (which would actually be a fantastic doctoral thesis, I would think). Now you can have a conversation with your YouTube person in real time. You could even splice together phonemes so they could speak English in general, and then hook them up to Google translation. Would you then say that, if the AGI algorithms were good enough - functionally equivalent to human intelligence in every way - the YouTube person was conscious?
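
(Aside for concreteness: a text-only ELIZA is just shallow pattern matching over a table of canned replies. The sketch below is a minimal, hypothetical illustration of that kind of rule table in Python; every pattern and reply, and the idea of mapping a reply to a pre-recorded clip, is invented here for illustration and not taken from any actual system.)

    import random
    import re

    # Toy ELIZA-style rule table. All patterns, replies, and the clip-mapping
    # idea are made up for illustration; a "video ELIZA" would return a
    # pre-recorded clip ID alongside (or instead of) the text reply.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bare you conscious\b", re.I),
         ["What would convince you either way?"]),
        (re.compile(r"\b(?:mother|father|family)\b", re.I),
         ["Tell me more about your family."]),
    ]
    DEFAULTS = ["Please go on.", "What makes you say that?"]

    def reply(user_text: str) -> str:
        """Pick a canned response by shallow pattern matching, ELIZA-style."""
        for pattern, responses in RULES:
            match = pattern.search(user_text)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(DEFAULTS)

    if __name__ == "__main__":
        # e.g. "Why do you feel like an automaton?"
        print(reply("I feel like an automaton"))

The only point of the sketch is that the "intelligence" here is table lookup; scaling the table up, or swapping in fancier AGI logic, is exactly the scenario the question above is asking about.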

>But when you originally asked why we don't "see" consciousness in
>non-biological systems, I figured you were talking about the external
>behaviors we associate with consciousness, not inner experience. After all
>we have no way of knowing the inner experience of any system but ourselves,
>we only infer that other beings have similar inner experiences based on
>similar external behaviors.

That's what I'm trying to tell you. Consciousness is nothing but inner experience. It has no external behaviors; we just recognize our own feelings in other things when we see them do something that reminds us of ourselves.

> If you want to just talk about inner experience, again we should first
>clear up whether you can accept the basic premise of Chalmers' thought
>experiment, then if you do we can move on to talking about what it implies
>for inner experience.

I don't want to talk about inner experience unless you want to. I want to talk about a fundamental reordering of the cosmos which, if it were correct, would be staggeringly important and which I have not seen anywhere else:

   1. Mind and body are not merely separate, but perpendicular topologies of the same ontological continuum of sense.
   2. The interior of electromagnetism is sensorimotive, the interior of determinism is free will, and the interior of general relativity is perception.
   3. Quantum Mechanics is a misinterpretation of atomic quorum sensing.
   4. Time, space, and gravity are void. Their effects are explained by perceptual relativity and sensorimotor electromagnetism.
   5. The "speed of light" *c* is not a speed; it is a condition of nonlocality or absolute velocity, representing a third state of physical relation, the opposite of both stillness and motion.

It's not about meticulous logical deduction, it's about grasping the largest, broadest description of the cosmos possible, one which doesn't leave anything out. I just want to see if this map flies, and if not, why not?


