You're misunderstanding what I meant by "internal": I wasn't talking about 
subjective interiority (qualia), but *only* about the physical processes in 
the spatial interior of the cell. I am trying to concentrate first on 
external behavioral issues that don't involve qualia at all, to see whether 
your disagreement with Chalmers' argument is because you disagree with the 
basic starting premise that it would be possible to replace neurons with 
artificial substitutes that would not alter the *behavior* of surrounding 
neurons (or of the person as a whole); only after assuming this does 
Chalmers go on to speculate about what would happen to qualia as neurons 
were gradually replaced in this way. Remember this paragraph from my last 
post:


 In my model, physical processes are just the exterior, like clothing of the 
qualia (perceivable experiences). There is no such thing as external 
behavior that doesn't involve qualia; that's my point. It's all one thing - 
sensorimotive perception of relativistic electromagnetism. I think that in 
the best case scenario, what happens when you virtualize your brain with a 
non-biological neuron emulation is that you gradually lose consciousness, 
but the remaining consciousness has more and more technology at its 
disposal. You can't remember your own name, but when asked, a meaningless 
word comes to mind for no reason. To me, the only question is how virtual 
is virtual. If you emulate the biology, that's a completely different 
scenario than running a logical program on a chip. Logic doesn't ooze 
serotonin.

Are you suggesting that even if the molecules given off by foreign cells 
were no different at all from those given off by my own cells, my cells 
would nevertheless somehow be able to nonlocally sense that the DNA in the 
nuclei of these cells was foreign?

 It's not about whether other cells would sense the imposter neuron, it's 
about how much of an imposter the neuron is. If it acts like a real cell in 
every physical way, if another organism can kill it and eat it and 
metabolize it completely, then you pretty much have a cell. Whatever cannot 
be metabolized in that way is what potentially detracts from the ability to 
sustain consciousness. It's not your cells that need to sense DNA; it's the 
question of whether a brain composed entirely, or significantly, of cells 
lacking DNA would be conscious in the same way as a person.

Well, it's not clear to me that you understand the implications of physical 
reductionism, based on your rejection of my comments about physical 
processes in one volume only being affected via signals coming across the 
boundary. Unless the issue is that you accept physical reductionism but 
reject the idea that we can treat all interactions as local ones. (Again I 
would point out that while entanglement may involve a type of nonlocal 
interaction--though this isn't totally clear, since many-worlds advocates 
say they can explain entanglement phenomena in a local way--because of 
decoherence it probably isn't important for understanding how different 
neurons interact with one another.)
 
It's not clear that you are understanding that my model of physics is not 
the same as yours. Imagine an ideal glove that is white on the outside and 
on the inside feels like latex. As you move your hand in the glove you feel 
all sorts of things on the inside: textures, shapes, etc. From the outside 
you see different patterns appearing on it. When you clench your fist, you 
can see right through the glove to your hand, but when you do, your hand 
goes completely numb and you can't feel the glove. What you are telling me 
is that if you make a glove that looks exactly like this crazy glove, if it 
satisfies all glove-like properties such that it makes these crazy designs 
on the outside, then it must be having the same effect on the inside. My 
position is that no, not unless it is close enough to the real glove 
physically that it produces the same effects on the inside, which you 
cannot know unless you are wearing the glove.

And is that because you reject the idea that in any volume of space, 
physical processes outside that volume can only be affected by processes in 
its interior via particles (or other local signals) crossing the boundary of 
that volume?

No, it's because the qualia possible in inorganic systems are limited to 
inorganic qualia. Think of consciousness as DNA. Can you make DNA out of 
string? You could make a really amazing model of it out of string, but it's 
not going to do what DNA does. You are saying, well what if I make DNA out 
of something that acts just like DNA? I'm asking, like what? If it acts like 
DNA in every way, then it isn't an emulation, it's just DNA by another name.

I don't know what you mean by "functionally equivalent" though. Are you 
using that phrase to suggest some sort of similarity in the actual 
molecules and physical structure of what's inside the boundary?

I'm using that phrase because you are. I'm just saying that what the cell 
is causes what the cell does. You can try to change what the cell is but 
retain what you think the cell does, but the more you change it, the higher 
the odds that you are changing something that you have no way of knowing is 
important.

My point is that it's perfectly possible to imagine replacing a neuron with 
something that has a totally different physical structure, like a tiny 
carbon nanotube computer, which senses incoming neurotransmitter molecules 
(and any other relevant physical inputs from nearby cells), calculates how 
the original neuron would have behaved in response to those inputs if it 
were still there, uses those calculations to figure out what signals the 
neuron would have been sending out of the boundary, and then makes sure to 
send the exact same signals itself (again, imagine that it has a store of 
neurotransmitters which can be sent out of an artificial synapse into the 
synaptic gap connected to some other neuron). So it *is* "functionally 
equivalent" if by "function" you just mean what output signals it transmits 
in response to what input signals, but it's not functionally equivalent if 
you're talking about its actual internal structure.

But what the signals and neurotransmitters are coming out of is not 
functionally equivalent. The real thing feels and has intent; it doesn't 
calculate and imitate. You can't build a machine that feels and has intent 
out of basic units that can only calculate and imitate. It just scales up 
to a sentient being vs. a spectacular automaton.
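
For concreteness, the purely input/output sense of "functionally 
equivalent" at issue here can be pictured with a toy sketch in Python. This 
is only an illustration: the threshold firing rule and both class names are 
invented for the example, not a model of real neurons.

# Toy illustration of input/output "functional equivalence".
# The firing rule is invented for the example; real neurons are
# vastly more complicated.

def original_neuron_rule(inputs):
    """The behavior being imitated: fire if summed input exceeds a threshold."""
    return sum(inputs) > 1.0

class BiologicalNeuron:
    def respond(self, neurotransmitter_levels):
        # The real cell's physics directly produces the output signal.
        return original_neuron_rule(neurotransmitter_levels)

class NanotubeSubstitute:
    """Different internal structure; same boundary behavior."""
    def respond(self, neurotransmitter_levels):
        # Senses the same inputs, *calculates* what the original neuron
        # would have done, and emits the identical signal (imagine it
        # releasing stored neurotransmitters accordingly).
        return original_neuron_rule(neurotransmitter_levels)

# From outside the boundary, the two are indistinguishable:
for inputs in [[0.2, 0.3], [0.9, 0.4], [1.5]]:
    assert BiologicalNeuron().respond(inputs) == NanotubeSubstitute().respond(inputs)

The sketch captures only the boundary behavior; whether identical boundary 
behavior settles anything about what goes on inside is exactly what is in 
dispute here.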

If you do accept that it would be possible in principle to gradually replace 
real neurons with artificial ones in a way that wouldn't change the behavior 
of the remaining real neurons and wouldn't change the behavior of the person 
as a whole, but with the artificial ones having a very different internal 
structure and material composition than the real ones, then we can move on 
to Chalmers' argument about why this sort of behavioral indistinguishability 
suggests qualia probably wouldn't change either. But as I said I don't want 
to discuss that unless we're clear on whether you accept the original 
premise of the thought-experiment.

 It all depends on how different the artificial neurons are. There might be 
other recipes for consciousness and life, but so far we have no reason to 
believe that inorganic logic can sustain either. For the purposes of this 
thread, let's say no. If it's artificial enough to be called artificial, 
then the consciousness associated with it is also inauthentic.

That's just a recording of something that actually happened to a biological 
consciousness, not a simulation that can respond to novel external stimuli 
(like new questions I can think to ask it) that weren't presented to any 
biological original.

That's easy. You just make a few hundred YouTubes and associate them with 
some AGI logic. Basically make a video ELIZA (which would actually be a 
fantastic doctorate thesis, I would think). Now you can have a conversation 
with your YouTube person in real time. You could even splice together 
phonemes to make them able to speak English in general and then hook them 
up to Google translation. Would you then say that, if the AGI algorithms 
were good enough--functionally equivalent to human intelligence in every 
way--the YouTube was conscious?
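
To make the "video ELIZA" idea concrete, here is a toy sketch of the 
clip-selection step, in Python. The keyword rules and clip filenames are 
invented for illustration; the AGI logic the question imagines would be far 
richer than this.

# Toy "video ELIZA": keyword rules map a question to a pre-recorded
# video clip, giving the illusion of real-time conversation.
# Rules and filenames are invented for illustration.
import re

RULES = [
    (r"\bhow are you\b", "clip_feeling_fine.mp4"),
    (r"\bname\b",        "clip_introduce_self.mp4"),
    (r"\bwhy\b",         "clip_shrug_who_knows.mp4"),
]
FALLBACK = "clip_tell_me_more.mp4"  # classic ELIZA-style deflection

def pick_clip(question: str) -> str:
    """Return the video clip to play in response to a question."""
    for pattern, clip in RULES:
        if re.search(pattern, question.lower()):
            return clip
    return FALLBACK

print(pick_clip("What's your name?"))    # clip_introduce_self.mp4
print(pick_clip("Why is the sky blue?")) # clip_shrug_who_knows.mp4

However good the selection logic gets, the system only ever routes between 
recordings, which is what the question above is probing.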

But when you originally asked why we don't "see" consciousness in 
non-biological systems, I figured you were talking about the external 
behaviors we associate with consciousness, not inner experience. After all, 
we have no way of knowing the inner experience of any system but ourselves; 
we only infer that other beings have similar inner experiences based on 
similar external behaviors.

That's what I'm trying to tell you. Consciousness is nothing but inner 
experience. It has no external behaviors; we can just recognize our own 
feelings in other things when we see them do something that reminds us of 
ourselves.

 If you want to just talk about inner experience, again we should first 
clear up whether you can accept the basic premise of Chalmers' thought 
experiment; then, if you do, we can move on to talking about what it 
implies for inner experience.


I don't want to talk about inner experience. I want to talk about my 
fundamental reordering of the cosmos, which, if it were correct, would be 
staggeringly important, and which I have not seen anywhere else:

   1. Mind and body are not merely separate, but perpendicular topologies of 
   the same ontological continuum of sense. 
   2. The interior of electromagnetism is sensorimotive, the interior of 
   determinism is free will, and the interior of general relativity is 
   perception. 
   3. Quantum Mechanics is a misinterpretation of atomic quorum sensing.
   4. Time, space, and gravity are void. Their effects are explained by 
   perceptual relativity and sensorimotive electromagnetism.
   5. The "speed of light" *c* is not a speed; it's a condition of 
   nonlocality or absolute velocity, representing a third state of physical 
   relation, the opposite of both stillness and motion.

It's not about meticulous logical deduction; it's about grasping the 
largest, broadest description of the cosmos possible, one which doesn't 
leave anything out. I just want to see if this map flies, and if not, why 
not?
