On 10/21/2012 4:05 AM, Stathis Papaioannou wrote:
On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

The atoms in my brain don't have to know how to read Chinese. They only
need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex
behaviour which is reading Chinese comes from the interaction of billions of
these atoms doing their simple thing.

I don't think that is true. The other way around makes just as much sense, if
not more: reading Chinese is a simple behavior which drives billions of atoms
into a complex interaction. To me, it has to be both bottom-up and top-down. It
seems a completely arbitrary prejudice to presume one over the other just
because we think that we understand the bottom-up so well.

Once you can see that it must be both bottom-up and top-down at the same
time, the next step is to see that there is no possibility of a cause-effect
relationship between the two, but rather a dual-aspect ontological relation.
Nothing is translating the functions of neurons into a Cartesian theater of
experience - there is nowhere to put it in the tissue of the brain, and there
is no evidence of a translation from neural protocols to sensorimotive
protocols. They are clearly the same thing.
If there is a top-down effect of the mind on the atoms then we would
expect some scientific evidence of it. Evidence would consist of, for
example, neurons firing when measurements of transmembrane potentials,
ion concentrations, etc. suggest that they should not. You claim that such
anomalous behaviour of neurons and other cells due to consciousness is
widespread, yet it has never been experimentally observed. Why?
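
(To make "suggest that they should not" concrete: a minimal sketch of the
sort of model I mean, a standard leaky integrate-and-fire neuron in Python.
The parameter values are typical textbook numbers, not measurements; the
point is only that such a model predicts spike times from the membrane
physics, so an anomalous firing would be a spike the model cannot account
for.)

# Minimal leaky integrate-and-fire neuron - a standard textbook model,
# used here only to illustrate what "anomalous" firing would mean.
# All parameter values are order-of-magnitude illustrations, not data.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-75.0, r_m=10.0):
    """Return predicted spike times (ms) for an input current trace.

    input_current: injected current samples (nA), one per dt milliseconds.
    r_m is membrane resistance (megaohms), so r_m * current gives mV.
    """
    v = v_rest
    spikes = []
    for step, i_inj in enumerate(input_current):
        # Potential relaxes toward rest while the injected current drives it up.
        v += (-(v - v_rest) + r_m * i_inj) * (dt / tau)
        if v >= v_thresh:          # the model says the neuron "should" fire here
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A top-down effect of mind on atoms would show up as a recorded spike at a
# time when no such model, however refined, predicts one.
predicted = simulate_lif([2.0] * 1000)  # constant 2 nA for 100 ms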

Hi Stathis,

How would you set up the experiment? How do you control for an effect that may well be ubiquitous? Did you somehow miss the point that consciousness can only be observed in 1p? Why are you so insistent on a 3p account of it?


If the atoms in my brain were put into a Chinese-reading configuration,
either through a lot of work learning the language or through direct
manipulation, then I would be able to understand Chinese.

It's understandable to assume that, but no, I don't think it's like that. You
can't transplant a language into a brain instantaneously, because there is no
personal history of association. Your understanding of language is not a
lookup table in space; it is made out of you. It would be like walking around
with Google Translate in your brain: you could enter words and phrases and
turn them into your language, but you would never know the language first
hand. The knowledge would be impersonal - accessible, but not woven into
your proprietary sense.
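
(To put the lookup-table contrast concretely, a toy sketch in Python - the
phrasebook entries are hypothetical, chosen only to illustrate: a pure lookup
maps symbols to symbols with no history of association behind it.)

# A toy lookup-table "translator": illustrative entries only.
phrasebook = {
    "你好": "hello",
    "谢谢": "thank you",
}

def translate(phrase):
    # Pure lookup: no grounding, no personal history of association.
    # Anything outside the table is simply opaque to the system.
    return phrasebook.get(phrase, "<unknown>")

print(translate("你好"))   # "hello"
print(translate("再见"))   # "<unknown>" - the table has nothing to draw on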
I don't mean putting an extra module into the brain; I mean putting
the brain directly into the same configuration it is put into by
learning the language in the normal way.

How might we do that? Alter one neuron and you might not have the same mind.


I'm sorry, but this whole passage is a non sequitur as far as the fading
qualia thought experiment goes. You have to explain what you think would
happen if part of your brain were replaced with a functional equivalent.

There is no functional equivalent. That's what I am saying. Functional
equivalence, when it comes to a person, is a non sequitur. Not only is every
person unique, they are an expression of uniqueness itself. They define
uniqueness in a never-before-experienced way. This is a completely new way
of understanding consciousness and signal: not as mechanism, but as
animism-mechanism.


A functional equivalent would stimulate the remaining neurons the same as
the part that is replaced.

No such thing. Does any imitation function identically to an original?
In a thought experiment we can say that the imitation stimulates the
surrounding neurons in the same way as the original. We can even say
that it does this miraculously. Would such a device *necessarily*
replicate the consciousness along with the neural impulses, or could
the two be separated?

    Is the brain strictly a classical system?


The original paper says this is a computer chip, but that is not necessary
to make the point: we could just say that it is any device other than the
normal biological neurons. If consciousness is substrate-dependent (as you
claim) then the device could do its job of stimulating the neurons normally
while lacking or differing in consciousness. Since it stimulates the neurons
normally, you would behave normally. If you didn't, it would be a miracle,
since your muscles would be receiving normal stimulation and would have to
contract normally. Do you at least see this point, or do you think that your
muscles would do something different?

I see the point completely. That's the problem: you keep trying to explain
to me what is obvious, while I am trying to explain to you something much
more subtle and sophisticated. I can replace neurons which control my
muscles because muscles are among the most distant and replaceable parts of
'me'. These nerves are outbound efferent nerves, and the target muscle cells
are for the most part willing servants. The same goes for amputating my arm:
I can replace it in theory. What I am saying, though, is that amputating my
head is not even theoretically possible. Wherever my head is, that is where
I have to be. If I replace my brain with other parts, the more parts there
are, the less of me there is left.

The brain isn't like a computer, though. You can't just pull something out
and then put it back in if it doesn't work. In the brain, as soon as you
screw it up, you get coma, death, dementia, stroke, etc. It's part of a
living creature made of smaller living creatures. It doesn't matter how
closely you think your substitute brain acts like my brain: I am never going
to be found in your substitute brain, and the substitute brain will never
even get close to working properly. Computers do not work very well. Every
time I turn on my stupid phone there are like 25 updates, and I hardly do
anything with it. Can you imagine how unreliable a network the size of a
synthetic brain would be? How easy it would be to halt the thalamus program
and kill you? It's wildly overconfident and factually misguided to think of
the self and the brain in these terms. I see it as 19th-century Jules Verne
sci-fi now. It's just silly, and every week there are more studies which
suggest that our neuroscientific models are more and more inadequate. They
don't add up.
As I said, technical problems with computers are not relevant to the
argument. The implant is just a device that has the correct timing of
neural impulses. Would it necessarily preserve consciousness?
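
(As a sketch of what I mean by "just a device" - my own minimal illustration
in Python; the class and callback names are invented for the example. The
implant only replays a schedule of impulse timings, and the question is
whether consciousness necessarily comes along with that timing.)

import time

class ReplacementDevice:
    """Hypothetical implant from the thought experiment: it knows nothing
    about consciousness, only a schedule of impulse timings to reproduce."""

    def __init__(self, spike_times_ms, emit):
        self.spike_times_ms = sorted(spike_times_ms)
        self.emit = emit  # callback that stimulates the downstream neurons

    def run(self):
        start = time.monotonic()
        for t in self.spike_times_ms:
            # Wait until the recorded moment, then fire exactly as the
            # original tissue would have.
            delay = t / 1000.0 - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            self.emit(t)

# If this device drives the remaining neurons identically, is consciousness
# necessarily carried along with the timing, or could the two be separated?
device = ReplacementDevice([5, 12, 30], emit=lambda t: print(f"spike at {t} ms"))
device.run()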


Let's see. If I ingest psychoactive substances, there is a 1p observable effect... Is this a circumstance that is different in kind from that device?

--
Onward!

Stephen

