On Aug 21, 9:34 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Fri, Aug 19, 2011 at 11:39 PM, Craig Weinberg <whatsons...@gmail.com> 
> wrote:
> > 1. Visual qualia does not 'occur in' the visual cortex. Visual qualia
> > overlaps phenomenologically with the electromagnetic activity in the
> > cortex (which is really spread out in different areas of the brain),
> > but if visual qualia were actually present 'in' the brain in any way,
> > then we would have no trouble simply scooping it out with a scalpel.
> > That's a fundamental insight. If you don't accept that as fact, then
> > we have nothing to discuss.
> Qualia are not present as a substance which can be scooped out,
> although scooping out a particular part of the brain will destroy the
> qualia, and hence you can figure out which parts of the brain are
> involved in their production.

It doesn't destroy the qualia, it just destroys that person's access
to those qualia, in the same way that taking out the network card of
your computer doesn't destroy any part of the internet.

> > 2. There is no such thing, objectively, as 'information'. It has no
> > properties, so to say that the 'visual cortex, which is fed
> > information from downstream structures such as the retina and optic
> > nerve' is really a way of saying 'we have no idea how visual qualia are
> > transmitted, but we know that they must be, because the visual cortex is
> > informed by the retina.
> Many people have no idea how machines work, but they can easily figure
> out what parts of the machine are essential in their function. For
> example, if you unplug the radio it won't work, so electric power is
> important in radio reception. However, if you replace the AC power
> with a battery the radio will work again, so it can't be that AC power
> is essential for radio operation. I suppose you would say that the
> battery isn't *really* functionally equivalent, since it contains
> chemicals and lacks any AC hum; which is true, but irrelevant.

Radio doesn't need outside power. We add power just to amplify the
signal so we can hear it better. Radio comes free with the universe.

But the problem I have with your idea of functional equivalence is
that you seem to treat it as an objective property when the reality is
that equivalence is completely contingent upon what functions you are
talking about. AC power works until there is a blackout. Batteries
work in a blackout but they wear out. They are different ways of
achieving the same effect as far as getting what we want out of a
radio amplifier, but there is no absolute 'functional equivalence'
property out there beyond our own motives and senses.

> > 3. To say that there is 'no visual qualia' if the cortex is destroyed
> > is accurate, but only accurate from the subject's perspective.
> > Objectively there was never any qualia there to begin with. The retina
> > still responds to photostimulation, which in your view, it seems would
> > have to be the same thing as visual qualia. If you are going to
> > attribute thermal sense to a thermostat, you must also attribute sight
> > to the retina, no? I imagine you'll say that the qualia is produced by
> > the complex interaction of the visual cortex, which supports my view,
> > that the thin bandwidth of the raw sensation makes not just for
> > different intensity of sense but a deeper quality of its experience.
> If my optic nerves are cut I am blind, so if my retina continues to
> have qualia by itself (which is not impossible)

Yes, that's what I'm saying is possible, and likely.

> I don't know about it.
> "I" am the person who speaks and acts consciously, which appears to
> require my cerebral cortex.

Right, you don't know about it. Just like the furnace doesn't know
about the thermostat if the connection between them is cut.

> >> Any substitution will of course affect qualia IF it affects function.
> > Qualia has no objective function, which is why it is not objectively
> > detectable. That is the reason.
> But it is subjectively detectable. My qualia change if my brain
> changes, since my brain is generating the qualia.

Yes, it is subjectively detectable and has a subjective function. What
the brain is doing is not generating that though, any more than your
retina is generating the light you see now in front of your eyes. What
I think is happening is that your brain is embodying - pantomiming -
what your retina is doing, which is to embody what is happening
visually outside of the eyeball. It's an unbroken chain of
metaphorical recapitulation - not transduction.

Subjectively you are not seeing a linear process of photons being
converted to rod cell twitches to optic nerve twitches, you are seeing
what you are looking at in the outside world. It's not a simulation,
it's a shared sense of what the thing is in terms that you can make
sense of. Your brain is facilitating that, just as your computer
facilitates your access to this conversation, but it's a
misunderstanding to say that your computer is 'generating the
conversation'.

I understand that this is not scientific consensus, it's just my ideas
about it, but I do think it might be a more accurate model with more
explanatory power than that of dumb sense mechanisms.

> It is also
> objectively detectable by assumption: if you point a gun at someone
> and they change their behaviour, you assume that the gun has changed
> their visual qualia.

Sure, but only because you know to expect that someone can see it. If
you point a gun at a bird and it *seems* to change its behavior, you
can't really make that assumption.

> >> The lens in the eye is replaced in cataract surgery and that does not
> >> affect visual qualia at all, because the artificial lens is
> >> functionally equivalent. The artificial lens is not functionally
> >> identical under *all* circumstances, since it is not identical to the
> >> natural lens; for example, it won't go cloudy with prolonged
> >> ultraviolet light exposure. However, it *is* functionally identical as
> >> far as normal vision is concerned, and that is the thing we are
> >> concerned with. An artificial neuron is more difficult to make than an
> >> artificial lens or artificial joint, but in principle it is no
> >> different.
> > You keep going back to that, but it's not true. It's wall buffaloes:
> > “We’re like cavemen that, you draw a buffalo on the wall and you say,
> > ok I’m getting good at this, just give me another week, bring me some
> > meat, and pretty soon I’ll make buffaloes come out of that wall.”
> > http://www.youtube.com/watch?v=fTd8I0DoHNk (57:16). You can't make a
> > movie so complete that it replaces the audience. There is a
> > fundamental ontological difference between altering the substance of
> > what we ARE compared to altering the substance of what we USE.
> You *can* start with paintings and end up arbitrarily close to the
> real thing. You go from painting to sculpture to clockwork to
> computer-controlled to computer-controlled with emulation of the
> buffalo brain if you want to reproduce buffalo consciousness.

I understand that that's your belief system, and plenty of people
agree with you. I think that it's not correct though, and I agree with
the person in the video who said that quote.
You cannot reproduce buffalo consciousness on a computer. It's just a
better rendered painting/sculpture/model of our outsider's impressions
of a buffalo.

> If you
> are more interested in eating the buffalo than its intellect you will
> have to make muscles out of artificial protein. And so on: whatever
> aspect of the real buffalo you want to replicate, in theory you can.

Replicate, but not emulate. If you want to make muscles out of
something physical, what makes you so sure that it's possible to make
emotion without physical neurotransmitters? Our consciousness is human
meat. It exists nowhere else but in the context of living, healthy,
human tissue. That fact should be of interest to us. Sure, the things
that we do have patterns which we can neurologically abstract into
'logic' and apply that logic to other substances, but that doesn't
ever have to mean that we can turn other substances into a living
human or buffalo psyche.

> >>It just has to slot into the network of neurons in a way
> >> that is close enough to a natural neuron. It does not have to behave
> >> exactly the same as the neuron it is replacing, since even the same
> >> neuron changes from moment to moment, just close enough.
> > That view is not supported by neuroscience. There are no slots in the
> > network of the brain. It's a family, a dynamic community that has a
> > way of keeping out parasites and foreign biota.
> I'm sure you can imagine a way of doing it. For example, an
> intelligent nanobot could enter the neuron, study it closely for a
> period of time, then gradually replace its various cellular processes
> with more efficient non-biological ones.

No, it doesn't work like that. You're assuming that all neurons do is
the same thing over and over again. It's like saying you can write a
program that watches the activity of cnn.com for a while and then replaces
the news stories with better news.

> >> I am unclear as to what exactly you think the artificial neuron would
> >> do. If it replicated 99.99% of the behaviour of the biological neuron
> >> do you think it would result in a slight change in consciousness, such
> >> as things looking a little bit fuzzy around the edges if the entire
> >> visual cortex were replaced, or do you think there would be total
> >> blindness because machines can't support qualia?
> > Let's say that 100% of the behavior is only 50% of what a neuron is.
> > The other 50% is what is doing the behaving and the reasons it has for
> > behaving that way. It doesn't matter how great your I/O handlers are
> > if the computer is made of ground beef.
> So are you agreeing that the artificial neuron can in theory
> replicate almost 100% of the behaviour of the biological neuron? What
> would that be like if your entire visual cortex were replaced?

It depends 50% on what you replace it with. If it was nothing but pure
behavioral logic, maybe it would rapidly be compensated for with
synesthesia. You would get the same knowledge you do from vision, but
the qualia would be like using a GPS made of sound, feeling, smell,
taste, balance, etc. Your brain would learn to use the device.

As far as I know, even memories of visual qualia would be inaccessible
if the visual parts of the brain were damaged. If not, the brain could
maybe recover partial visual qualia by re-associating colors and forms
with the memories of colors and forms (which, as you can see, is not
the same thing: staring at the sun is not possible for long, whereas
you can imagine staring at the sun as long as you want. I assume.)

> >> I have asked several times if you believe that IF the behaviour of a
> >> device were exactly the same as a biological neuron THEN you would
> >> deduce that it must also have the consciousness of a neuron, and you
> >> haven't answered.
> > I keep answering but you aren't able to let go of your view long
> > enough to consider mind. The answer is that biological 'function' is
> > inseparable from BUT NOT IDENTICAL TO awareness. Awareness is not
> > 'caused by' biological function - biology is necessary but not
> > sufficient to describe sense. Biology is built on the interaction of
> > passive objects only, so there is no room for the ordinary experience
> > of interiority that we have to arise from or exist within other
> > phenomena in the universe.
> It seems to me that biology is sufficient since if you exactly
> replicate the biology, you would replicate awareness.

That's not the case. An identical twin is close to a biological
replicate, yet the awareness is not at all 'replicated'. Twins will
share some personality traits but are by no means the same person. My
own dad has an identical twin with a very different personality and
life path than his, so I can verify that.

What biology gives you is access to awareness. Two computers can have
the same hardware, but entirely different contents on their HD and
entirely different users who put that content there.

> However, I don't
> think biology is necessary, since as explained if you replace the
> brain with a non-biological equivalent awareness would continue.

Awareness, but not biological-level awareness. A plastic bag might
have awareness, in that it may have an interior correlate to its
responses to its environment (heat, stress, etc.) in a specific way,
but it's probably not the same thing as the feeling of a living
organism.
> >  It's hugely anthropocentric to say "We are the magic monkeys that
> > think we feel and see when of course we could only be pachinko
> > machines responding to complex billiard ball like particle impacts".
> Of course that's what we are. This is completely obvious to me and I
> have to make a real effort to understand how you could think
> otherwise.

You think that we are magic? That Homo sapiens invented awareness by
accident in a universe that has no awareness? That to me is like
saying that nuclear power plants invented radiation.

> >>I've asked the same question differently: could an
> >> omnipotent being make a device that behaves just like a neuron
> > No. There is no such thing as something that 'behaves *just like* a
> > neuron' that is not, in fact, a neuron.
> >  (but
> >> isn't a biological neuron) while lacking consciousness, and you
> >> haven't answered that either. A simple yes/no would do.
> > This part of the question is invalidated by the answer to the first
> > part. I've already answered it. It's like saying 'If God could make an
> > apple that was exactly like an orange, would it lack orange flavor'.
> > It's a non sequitur.
> So, you agree: any device that acts just like a neuron has to be a
> neuron. It doesn't have to look like a neuron, it could be a different
> colour for example, it just has to behave like a neuron. Right?

No. My answer is not going to change. If it doesn't look like a
neuron, then there must be SOME difference. Whatever difference that
is could either result itself in different interiority (≠Ψ) which
results in different behavior pattern (BP) accumulations over time
(let's call that Z factor {≠Ψ->Sum(≠BP/Δt)} ), or the visible
difference (≠v) could be the tip of the iceberg of subtle
compositional differences which result in the same Z thing. It could
be a different color with no Z factor - it depends on why it's a
different color.

> >> So you say. We assume that you are right and see where it leads. It
> >> leads to contradiction, as you yourself admit, but then you avoid
> >> discussing how to avoid the contradiction.
> > What contradiction? I think your view leads to contradiction.
> > Thermostats that can feel but retinas that can't see? It's de-
> > anthropomorphic prejudice.
> I'm agnostic about whether thermostats can feel or retinas can see on their 
> own.

But do you have an opinion on whether there is a fundamental
difference between the two? Does the thermostat sensor make more sense
than a rod cell?

> >> If our brain chemistry were the same then we would have the same
> >> opinions; but since our brain chemistry is different we have different
> >> opinions. Moreover, my opinions change because my brain chemistry
> >> changes.
> > So you think that you in fact have no control over your opinion, and
> > what we are doing here is a meaningless sideshow to some more
> > important and causally efficacious coordination of neurotransmitter
> > diffusion-reuptake gradients. To me that is really bending over
> > backward to put the cart before the horse... or does it make more
> > sense to just say
> > "CCK, cholecystokinin; CCK8, COOH-terminal CCK octapeptide; PC,
> > phosphatidylcholine; DAG, sn-1,2-diacylglycerol; PI,
> > phosphatidylinositol; PA, phosphatidic acid; PIP, phosphatidylinositol
> > 4-phosphate; PIP2, phosphatidylinositol 4,5-bisphosphate; PE,
> > phosphatidylethanolamine; PS, phosphatidylserine; TG, triacylglycerol;
> > protein kinase C, Ca2+-activated, phospholipid-dependent protein
> > kinase; protein kinase A, cyclic AMP-dependent protein kinase; TPA,
> > 12-O-tetradecanoylphorbol-13-acetate; IP3, inositol trisphosphate;
> > HEPES, 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid; EGTA,
> > [ethylenebis(oxyethylenenitrilo)]tetraacetic acid"
> > instead?
> If they are my opinions then I have control over them, don't I?

Only if you subscribe to a view like mine. In your view, you are
clearly stating that your opinions are biochemical processes, and
therefore any semantic conception of them is strictly metaphysical and
somewhat illusory.

>I think you may be referring to the compatibilism/incompatibilism
> debate. If you choose to say you lack free will under determinism,
> then you can say that - it's just a matter of semantics. But then,
> would you be happier saying that you have free will if your choices
> are random rather than determined? Or do you have some third option,
> neither random nor determined?

Yes, a third option: purposeful teleology. Determinism and randomness
are the 3-p view (teleonomy); the 1-p sensorimotive view is what
causes, or determines, determinism. The separation between perceptual
frames of reference (PRIF = Ψ², meaning a matrix of perceptual
relations as a coherent 'world') is what causes the appearance of
randomness. The further something is outside of your PRIF, the more it
is presented in binary terms of determinism or randomness. The closer
something in your world is to the interior of 'you', the more it is
presented in nuanced analog contingencies and participatory
phenomenology.

> >> Of course it does. It has absolutely everything to do with it. If you
> >> drank 10 liters of water the sodium concentration in your brain would
> >> fall and you would go into a coma. If you had a non-material soul that
> >> was unaffected by biochemistry you would have been OK.
> > The mind supervenes upon the body and brain, but the brain also
> > supervenes upon the mind. Consciousness is 100% real, but only 50% of
> > it can be described in exclusively physical terms - like mass,
> > density, scale, etc. Those terms do not apply to the phenomenology of
> > experience. If it was all biochemical, you could not read something
> > that 'makes you mad', since the ink of the writing doesn't enter your
> > bloodstream and cross the blood brain barrier.
> Consciousness is 100% real but none of it can be described in purely
> physical terms. However, the physical processes which generate it can
> be described.

That's a premature assumption. It's the same as saying that electronic
processes which generate the content of the internet can be described.
The internet is an electronic machine that connects human beings. A
brain is a neurological machine that hosts human identities (as well
as many other subconscious and unconscious entities, in all likelihood).

> >> > If he breaks his artificial knee, he won't experience the same pain as
> >> > if he broke his original knee. That is a function. The difference
> >> > means that it does not 'function normally'. He can take a 3/4" drill
> >> > and put a hole straight through his patella five inches and not feel a
> >> > damn thing. That is not 'functioning normally'. Why not admit it?
> >> Similarly with an artificial brain component of the right design, if
> >> the circulation is cut off he won't have a stroke. But that does not
> >> mean that he can't have normal cognition.
> > So you admit, finally, that a replacement part cannot be assumed to
> > 'function normally' just because it appears *to us*  to do what the
> > thing it replaces used to do.
> If it's not exactly the same then obviously there will be differences,
> in composition and also in function. But you don't seem to get that we
> are concerned with *relevant* differences.

Relevant to who? (To paraphrase Suicidal Tendencies), how can You say
what My brain's best interests are? Until we know how to make the
color blue from scratch or find the mathematical ingredient that makes
a joke funny, we can't even come close to saying that we can know what
is relevant to the production of awareness. The brain appears to be
not much more than a big soft colony of coral. Nothing any cell does
looks like it can wind up being funny or blue.

>If a person with a brain
> prosthesis can have a normal conversation with you for an hour on a
> wide range of topics, showing humour and emotion and creativity, that
> would say something about how well the prosthesis was working. So what
> if the prosthesis is a different colour and weighs a little bit more
> compared to the original?

In a real life medical situation, if a prosthesis was developed that
appeared to work by the reports of the subjects themselves and the
people around them, I would of course give it the benefit of the
doubt. I'm not asserting with certainty that no such appliance can
ever be developed. Philosophically however, there is no
epistemological support for it, since we can't observe someone else's
qualia. Like members of an isolated tribe being shown television for
the first time, our assumption that there are people in the television
set might be unfounded.

Because of that, if we are to determine the course of development of
artificial neurology, in the face of compelling reasons to the
contrary, I would make biological replication at the genetic level a
top priority and computational simulation a distant second. Unless and
until we have any success whatsoever in creating a device which seems
on casual inspection to possess free will and feeling out of an
inorganic material, it's really only academic. If someone thinks that
there is no significant difference between living organisms and
computer programs, then let them prove it, even on the most basic
level.

> >> >> No, the radioactive decay is truly random,
> >> > What makes you an authority on what is truly random?
> >> Look it up yourself.
> > I mean how do you know that the idea of randomness corresponds to
> > anything that can exist? Randomness is a category of information, which I think
> > doesn't physically exist.
> There is no way, even if you have complete knowledge of starting
> conditions, that you can predict when an atom will decay. Perhaps
> "indeterminate" is a better term than random.

Indeterminate is fine. It's still the same as predicting crowd
responses for a baseball game that has not yet been played. It doesn't
mean there's no sense going on from a 1p perspective, it just means
that it can't be predicted because it's inherently unpredictable.

> >> For example, do you imagine that an ion gate in
> >> a cell membrane can open apparently without being triggered by any
> >> change in the physical conditions such as binding of neurotransmitter?
> >> If so, it should be evident experimentally; can you cite any papers
> >> showing such amazing results?
> > You're citing the limitation of your own 50% correct view as a virtue.
> > The fact is that changes in our cell membranes of our brains are
> > triggered by our thoughts, and thoughts are triggered by neurological
> > changes as well. It. is. bidirectional. I know it seems bizarre, as
> > did Galileo's understanding of astronomy contradict the naive realism
> > that we should fly off the Earth if it was moving, but I am not
> > willing to discard the reality of consciousness in favor of the
> > reality of biology. They can and do coexist quite peacefully. Your
> > brain doesn't have a problem with you controlling what you think and
> > say, it's only your conditioning and stubbornness that prevents you
> > from seeing it as it is, an unassailable and ordinary fact.
> If thoughts are generated by biochemical reactions in the brain

They aren't. No more than this conversation is generated by electronic
reactions in our computers.

> and
> only biochemical changes in the brain then there is a deterministic
> chain of biochemical events from one brain state to another. That is,
> we can say,

Even that is doubtful. The biochemical changes in the brain, whether
or not the exclusive cause of thought (which they aren't), are
probably just like baseball games. Contingent upon unknowable outcomes
of contests and motives on a molecular and cellular level that we
can't understand and even they can't predict. Groups of living
organisms are not just a machine, they are also a community. They
collectively make decisions, as in photosynthesis and quorum sensing
in bacteria.

> (a) Release of dopamine from Neuron A triggers an action potential in
> Neuron B which causes Muscle C to contract which causes Hand D to
> rise,

What caused Neuron A to release the dopamine in the first place?
Nothing in the brain - it was caused by an event in the mind, or, more
accurately an experience of the Self, which constellates as many
overlapping events on different levels of sensation, emotion, and
cognition. That's the reason the neuron fires, because something is
happening to us personally. The neuron has no reason to fire or not
fire on its own. It doesn't care, it just wants to eat glucose and
participate in the society of the other neurons.
> or,
> (b) Release of dopamine from Neuron A generates a desire to lift one's
> hand up, the dopamine then triggers an action potential in Neuron B
> which is experienced as the intention of lifting one's hand up, and
> Neuron B stimulates Muscle C to contract which is experienced as one's
> hand actually rising.

Great. So we are dopamine puppets from a neuron puppet master. It's
not a legitimate possibility. If it were there would be no reason for
anything like a 'desire' to be generated. It's completely superfluous.
If Neuron A can trigger Neuron B without our help, then it surely
would. It's like saying that maybe your thermostat has a DVD player in
it that plays excerpts from the Wizard of Oz and then it turns on the
furnace and then the house is warmed up which makes the DVD player
choose a different scene of the movie.

I'm only continuing with this for the benefit of you or anyone else
who might be interested in reading it. There is nothing in your
arguments that I have not considered many times in many, many long
discussions. It's all very old news to me. It does help me communicate
my view more clearly though so I don't mind, just don't get frustrated
that I'm not going to ever go back to my (our) old worldview. I think
that I mentioned that I used to hold the same views that you have now
only a few years ago? It's almost correct, it's just inside out.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.