On Mon, Oct 22, 2012 at 1:48 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

>> If there is a top-down effect of the mind on the atoms then we
>> would expect some scientific evidence of this.
> These words are scientific evidence of this. The atoms of my brain are
> being manipulated from the top down. I am directly projecting what I want to
> say through my mind in such a way that the atoms of my brain facilitate
> changes in the tissues of my body. Fingers move. Keys click.

You assert that there is top-down manipulation of the atoms in your
brain but the scientific evidence is against you.

>> Evidence would
>> constitute, for example, neurons firing when measurements of
>> transmembrane potentials, ion concentrations etc. suggest that they
>> should not.
> Do not neurons fire when I decide to type?

Yes, but you decide to type because neurons fire. You can't have a
decision without the physical process, so every decision or other
mental process has a correlating physical process.

> What you are expecting would be nothing but another homunculus. If there was
> some special sauce oozing out of your neurons which looked like...what?
> pictures of me moving my fingers? How would that explain how I am inside
> those pictures. The problem is that you are committed to the realism of
> cells and neurons over thoughts and feelings - even when we understand that
> our ideas of neurons are themselves only thoughts and feelings. This isn't a
> minor glitch, it is The Grand Canyon.
> What has to be done is to realize that thoughts and feelings cannot be made
> out of forms and functions, but rather forms and functions are what thoughts
> and feelings look like from an exterior, impersonal perspective. The
> thoughts and feelings are the full-spectrum phenomenon, the forms and
> functions a narrow band of that spectrum. The narrowness of that band is
> what maximizes the universality of it. Physics is looking at a slice of
> experience across all phenomena, effectively amputating all of the meaning
> and perceptual inertia which has accumulated orthogonally to that slice.
> This is the looong way around when it comes to consciousness as
> consciousness is all about the longitudinal history of experience, not the
> spatial-exterior mechanics of the moment.

Craig, I have repeatedly explained how entertaining your hypothesis
that consciousness is substrate-dependent rather than
function-dependent (not unreasonable on the face of it) leads to
absurdity. You actually seem to agree with this below without
realising it.

>> You claim that such anomalous behaviour of neurons and
>> other cells due to consciousness is widespread, yet it has never been
>> experimentally observed. Why?
> Nobody except you and John Clark are suggesting any anomalous behavior. This
> is your blind spot. I don't know if you can see beyond. I am not optimistic.
> If there were any anomalous behavior of neurons, they would STILL require
> another meta-level of anomalous behaviors to explain them. Whatever level of
> description you choose for human consciousness - the brain, the body, the
> extended body, CNS, neurons, molecules, atoms, quanta... it DOESN'T MATTER
> AT ALL to the hard problem. There is still NO WAY for us to be inside of
> those descriptions, and even if there were, there is no conceivable purpose
> for 'our' being there in the first place.  This isn't a cause for despair or
> giving up, it is a triumph of insight. It is to see that the world is round
> if you are far away from it, but flat if you are on the surface. You keep
> trying to say that if the world were round you would see anomalous dips and
> valleys where the Earth begins to curve. You are not getting it. Reality is
> exactly what it seems to be, and it is many other things as well. Just
> because our understanding brings us sophisticated views of what we are from
> the outside in does not in any way validate the supremacy of the realism
> which we rely on from the inside out to even make sense of science.

If the behaviour of neurons cannot be described and predicted using
physical laws then there must be something anomalous at play. How else
could you explain it?

>> I don't mean putting an extra module into the brain, I mean putting
>> the brain directly into the same configuration it is put into by
>> learning the language in the normal way.
> That can't be done. It's like saying you will put New York City directly in
> the same configuration as Shanghai. It's meaningless. Even if you could move
> the population of Shanghai to New York or demolish New York and rebuild it
> in the shape of Shanghai, it wouldn't matter because consciousness develops
> through time. It is made of significance which accumulates through sense
> experience - *not just 'data'*.

Well, if you did disassemble New York and put the atoms into
Shanghai's configuration, including the population, then you would
have Shanghai. Not going to happen tomorrow but where is the
theoretical problem?

>> > No such thing. Does any imitation function identically to an original?
>> In a thought experiment we can say that the imitation stimulates the
>> surrounding neurons in the same way as the original.
> Then the thought experiment is garbage from the start. It begs the question.
> Why not just say we can have an imitation human being that stimulates the
> surrounding human beings in the same way as the original? Ta-da! That makes
> it easy. Now all we need to do is make a human being that stimulates their
> social matrix in the same way as the original and we have perfect AI without
> messing with neurons or brains at all. Just make a whole person out of
> person stuff - like as a thought experiment suppose there is some stuff X
> which makes things that human beings think is another human being. Like
> marzipan. We can put the right pheromones in it and dress it up nice, and
> according to the thought experiment, let's say that works.

The imitation human stimulating his surrounding humans in the same way
as the original could be a zombie or a very good actor. That's what we
need for the neural implant in the thought experiment: a zombie or a
very good actor that stimulates the surrounding neurons in the same
way as the original. Do you think this is logically impossible?
Logical possibility is all that is needed in order to establish
functionalism.

> You aren't allowed to deny this because then you don't understand the
> thought experiment, see? Don't you get it? You have to accept this flawed
> pretext to have a discussion that I will engage in now. See how it works?
> Now we can talk for six or eight months about how human marzipan is
> inevitable because it wouldn't make sense if you replaced a city gradually
> with marzipan people that New York would gradually fade into less of a New
> York or that New York becomes suddenly absent. It's a fallacy. The premise
> screws up the result.

To state it as clearly as I can again, what is required is an
artificial component that stimulates the other neurons in the same way
as the original did. Chalmers says that this component is a computer
chip, which is necessary to establish computationalism, but not to
establish functionalism. To establish functionalism, it is not
necessary to specify how the component works, only that it does work.

>> We can even say
>> that it does this miraculously. Would such a device *necessarily*
>> replicate the consciousness along with the neural impulses, or could
>> the two be separated?
> Would the marzipan Brooklyn necessarily replicate the local TV and Radio
> along with the traffic on the street or could the two be separated? Neither.
> The whole premise is garbage because both Brooklyn and brain are made of
> living organisms who are aware of their description of the universe. We
> can't imitate their description of the universe because we can only get our
> own description of our measuring instruments' description of their exterior
> descriptions.

Are you actually saying that it is *logically* impossible to replicate
a neuron's behaviour in stimulating its neighbours? Not just that the
behaviour is not computable, but that not even an omnipotent being
could replicate it? So where is the logical contradiction?

>> As I said, technical problems with computers are not relevant to the
>> argument. The implant is just a device that has the correct timing of
>> neural impulses. Would it necessarily preserve consciousness?
> The timing of neural impulses can only be made completely correct by direct
> experience. The implant can't work as a source of consciousness on a
> personal level, only as band-aid on a sub-personal level. Making a person
> out of band-aids doesn't work.

But you said there are no scientifically anomalous events in neurons,
and if so it would mean that the timing can be calculated. And if
that fails, there is always God, who is omnipotent. If God got the
timing right would consciousness necessarily be preserved? If so,
functionalism is established as true. If not, we would have the
possibility of partial (as well as full) zombies, lacking aspects of
their consciousness but unaware of this.

Stathis Papaioannou

You received this message because you are subscribed to the Google Groups 
"Everything List" group.