Maybe I should try to condense this a bit. The primary disagreement we
have is rooted in how we view the relation between feeling, awareness,
qualia, and meaning on one side, and calculation and complexity on the
other. I know from having gone through dozens of these conversations
that you are likely to adhere to your position, which I would
characterize as one that treats subjective qualities as trivial,
automatic consequences which arise unbidden "from relations that are
defined by..."

My view is that your position adheres to a very useful and widely held
model of the universe, one which is critically important for specialized
tasks of an engineering nature, but which wildly undervalues the
chasm separating ordinary human experience from neurology. Further, I
think that this philosophy is rooted in Enlightenment Era assumptions
which, although spectacularly successful during the 17th-20th
centuries, are no longer fully adequate to explain the realities of
the relation between psyche and cosmos.

What I'm giving you is a model which picks up where your model leaves
off. I'm very familiar with all of the examples you are working with -
color perception, etc. I have thought about all of these issues for
many years, so unless you are presenting something which is from a
source that is truly obscure, you can assume that I already have
considered it.

>I disagree with this.  Do you have an argument to help convince me to change
>my opinion?

You have to give me reasons why you disagree with it.

>There is no change in the wiring (hardware) of the computer, only a software
>change has occurred.

Right, that's what I'm saying. From the perspective of the wiring/
hardware/brain, there is no difference between consciousness and
unconsciousness. What you aren't seeing is that the unassailable fact
of our own consciousness is all the evidence that is required to
qualify it as a legitimate, causally efficacious phenomenon in the
cosmos rather than an epiphenomenon which magically appears
whenever it is convenient for physical mechanics. This is what I am
saying must be present as a potential within or through matter from
the beginning or not at all.

The next thing you would need to realize is that software is in the
eye of the beholder. Wires don't read books. They don't see colors. A
quintillion wires tangled in knots and electrified don't see colors or
feel pain. They're just wires. I can make a YouTube video of myself
sitting still and smiling, and I can sit there and do the same thing on
a live Skype video call, and it doesn't mean that the YouTube video is
conscious just because someone can't tell the difference.

It's not the computer that creates meaning, it's the person who is
using the computer. Not a cat, not a plant, not another computer, but
a person. If a cat could make a computer, we probably could not use it
either, although we might have a better shot at figuring it out.

>would it concern you if you learned you had been reconstructed by the medical
>device's own internal store of matter, rather than use your original atoms?

No, no, you don't understand who you're talking to. I'm not some bio-
sentimentalist. If I thought that I could be uploaded into a
billion-tongued omnipotent robot I would be more than happy to shed this
crappy monkey body. I'm all over that. I want that. I'm just saying
that we're not going to get there by imitating the logic of our higher
cortical functions in silicon. It doesn't work that way. Thought is an
elaboration of emotion, emotion of feeling, feeling of sense, and
sense of detection. Electronically stimulated silicon never gets
beyond detection, so ontologically it's like one big molecule in terms
of the sense it can make. It can act as a vessel for us to push human
sense patterns through serially as long as you've got a conscious
human receiver, but the conduit itself has no taste for human sense
patterns; it just knows thermodynamic electromotive sense. Human
experience is not that. A YouTube video of a person is not a person.

>Color is how nerve impulses from the optic nerve feel to us.

Why doesn't it just feel like a nerve impulse? Why invent a
phenomenology of color out of whole cloth to intervene between one
group of nerve cells and another? Color doesn't have to exist. It provides
no functional advantage over detection of light wavelengths through a
linear continuum. Your eyes could work just like your gall bladder,
detecting conditions and responding to them without invoking any
holographic layer of gorgeous 3D technicolor perception. One computer
doesn't need to use a keyboard and screen to talk to another, so it
would make absolutely no sense for such a thing to need to exist for
the brain to understand something that way, unless such qualities were
already part of what the brain is made of. It's not nerve impulses we
are feeling; we are nerves and we are the impulses of the nerves.
Impulses are nerve cells feeling, seeing, tasting, choosing. They just
look like nerve cells from the point of view of our body and its
technological extensions as it is reflected back to us through our own
perception of self-as-other.

>Data can be stored as magnetic poles on
>hard drives and tape, different levels of reflectivity on CDs...

Data is only meaningful when it is interpreted by a sentient organism.
Our consciousness is what makes the pattern a meaningful pattern. Read
a book, put it on tape, CD, flash drive, etc. It means nothing to the
cockroaches and deer foraging for food after the humans are gone.
Again, data is in the eye of the beholder; it is an epiphenomenon. We
are not data. We eat data but what we are is the sensorimotor topology
of a living human brain, body, lifetime, civilization, planet, solar
system, galaxy, universe. We have a name, but we are not a name.
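
To make that concrete, here's a toy Python sketch (the payload and names
are mine, purely illustrative) of how the very same bits yield entirely
different "data" depending on the interpreter that reads them:

    # The same two bytes, three different "meanings" -- none of which is
    # in the bits themselves; each depends on the convention of the reader.
    payload = b"\x48\x69"                        # two bytes on a disk, tape, CD...
    as_text = payload.decode("ascii")            # 'Hi' -- if read as ASCII text
    as_number = int.from_bytes(payload, "big")   # 18537 -- if read as an integer
    as_levels = list(payload)                    # [72, 105] -- if read as signal levels

    print(as_text, as_number, as_levels)

The bits never change; only the beholder does.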

>Nearly an infinite number could be constructed, and they are all accessible
>within this universe.  (If you accept computationalism).

Constructed out of what? Why can't we just imagine a color zlue if
it's no different from imagining a square sitting on top of a circle?
You're trying to bend reality to fit your assumptions instead of
expanding your framework to accommodate the evidence.

>> >How does it know to stop at a red light if it is not aware of anything?
>> It doesn't stop at a red light. The car stops at an electronic signal
>> that fires when the photosensor and its associated semiconductors
>> match certain quantitative thresholds which correspond to what we see
>> as a red light.

>Sounds very much like a description one could make for why a person stops
>at a red light.  There are inputs, some processing, and some outputs.  The
>difference is you think the processing done by a computer is meaningless,
>while the processing done by a brain is not.

You are assuming that the inputs and outputs have any significance
independent of the processing. The processing is everything.
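
To spell out what I mean on the machine's side, here's a minimal sketch
(the thresholds and names are hypothetical, not from any real controller)
of the stoplight logic described above:

    # Hypothetical threshold logic for "stopping at a red light."
    # Nothing below refers to red as an experience; it is only numeric
    # comparison, which we then interpret as obeying the light.
    RED_BAND_NM = (620.0, 750.0)     # wavelength range humans label "red"
    MIN_INTENSITY = 0.5              # arbitrary activation threshold

    def should_stop(wavelength_nm: float, intensity: float) -> bool:
        lo, hi = RED_BAND_NM
        return lo <= wavelength_nm <= hi and intensity >= MIN_INTENSITY

    print(should_stop(680.0, 0.9))   # True -- but nothing here *sees* red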

>> The word processor is just semiconductors which are activated and
>> controlled in a pattern we deem meaningful. There is no distinction for
>> the computer between correct or incorrect spelling, other than
>> different logic gates being held open or closed.

>If that is so, then point out where this logic fails: "There is no
>distinction for a human that is sad or happy, there are different
>collections of neurons either firing or not firing."

Right, you can't tell from the outside. If we discovered an alien word
processor in a crashed spaceship then we could not know whether or not
it is made out of something which understands what it's doing or
whether it's just an artifact which reflects the function of its use
by something else that understands what it's doing. Since we know how
our own word processors are made, however, I have no reason to infer
that electrified silicon cares whether a word is spelled correctly.
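
If it helps, the entire "knowledge" of spelling in a word processor
reduces to something like this toy lookup (the word list is made up;
real spell checkers are fancier, but not different in kind):

    # "Correct spelling," from the machine's side, is bare set membership:
    # the stored pattern either matches or it doesn't. No caring involved.
    DICTIONARY = {"wire", "color", "nerve", "impulse"}   # toy word list

    def is_spelled_correctly(word: str) -> bool:
        return word.lower() in DICTIONARY

    print(is_spelled_correctly("color"))   # True
    print(is_spelled_correctly("colr"))    # False -- a gate closed, nothing more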

>Qualia aren't directly connected to sensory measurements from the
>environment though.  If I swapped all the red-preferring cones in your eyes
>with the blue-preferring cones, then shone blue-colored light at your eyes,
>you would report it as red.

Right, you don't even need eyes. I can imagine or dream red without
there being anything there for the senses to measure. What it is
directly connected to though is the internally consistent logic of
visual awareness. The universe doesn't pick yellow out of a hat, or if
it did, where is the hat and what else is in it?
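
In wiring terms, the cone swap you describe is nothing but a relabeling
at the input stage; a toy version (channel names mine, for illustration):

    # Swap the red- and blue-preferring channels before any processing.
    # Downstream logic is untouched, yet every report of "red" and "blue"
    # inverts -- the quale floats free of the external measurement.
    def swap_cones(rgb):
        r, g, b = rgb
        return (b, g, r)

    print(swap_cones((0.0, 0.0, 1.0)))   # blue light in -> "red" channel out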

> The brain interprets the dots and dashes, and creates the
> experience of blue.  How?  It is certainly very complex,

It's only complex if you presume that blue is created. It isn't. It's
primary, like charge or spin. Blue is the human nervous system feeling
itself visually, just as language is the nervous system feeling itself
semantically. Blue is incredibly simple. It's probably what we have in
common with one-celled organisms and their experience of
photosynthesis dating back to the Precambrian Era. Nerve color is cell
color. It takes an elaborate architecture of different kinds of cells
to step that awareness up to something the size and complexity of a
human being, so the cells are sense-augmented and concentrated into
organs which share their experience with the sense-diminished cells of
the cortex.

>Am I only a lowly adding machine, processing meaningless symbols in the way my
> programming tells me to process them?

No, no. There's nothing inherently less marvelous about an
a-signifying machine of significant complexity compared to something
that can feel and think. I'm just saying that it's not the same thing.
Even an imitation can improve upon the original, but we are looking at
the wrong side of the Mona Lisa to accomplish that if we seek
consciousness from silicon.

On Jul 10, 11:48 pm, Jason Resch <> wrote:
> On Sun, Jul 10, 2011 at 8:52 PM, Craig Weinberg <> wrote:
> > > I don't think we can say what is or what wouldn't be possible with a
> > > machine of this complexity; all machines we have built to date are
> > > primitive and simplistic by comparison.  The machines we deal with day
> > > to day don't usually do novel things, exhibit creativity, surprise us,
> > > etc. but I think a machine as complex as the human brain could do these
> > > things regularly.
> > I do think that we can say, with the same certainty that we cannot
> > create a square circle, that it would not be possible at any level of
> > complexity. It's not that they can't create novelty or surprise, it's
> > that they can't feel or care about their own survival. I'm saying that
> > the potential for awareness must be built in to matter at the lowest
> > level or not at all.
> I disagree with this.  Do you have an argument to help convince me to change
> my opinion?
> > Complexity alone cannot cause awareness in
> > inanimate objects, let alone the kind of rich, idiopathic phenomena
> > we think of as qualia. The waking state of consciousness requires no
> > more biochemical complexity to initiate than does unconsciousness.
> The complexity of the wiring does not change between an unconscious and a
> conscious brain, but the complexity of what is transmitted
> over that wiring does.  It is like a computer that is turned off, vs. one
> which has loaded its programs into memory and begun executing them.  There
> is no change in the wiring (hardware) of the computer, only a software
> change has occurred.  Similarly, the presence or absence of large-scale
> firing patterns involving many brain regions makes the difference between
> consciousness and unconsciousness.  fMRI scans have shown that a stimulus in
> an anesthetized brain does not travel nearly as far as it would in a
> conscious brain.  There is a difference in complexity between a signal
> that reaches 10 billion neurons and one that reaches 1 billion.
> > In
> > this debate, the idea of complexity is a red herring which, together
> > with probability, acts as a veil over what I consider to be the religious
> > faith of promissory materialism.
> > > If one day humans succeeded in reverse engineering a brain, and
> > > executed it on a super computer, and it told you it was conscious and
> > > alive, and did not want to be turned off, would this convince you or
> > > would you believe it was only mimicking something that could feel
> > > something?  If not, it seems there would be no possible evidence that
> > > could convince you.  Is that true?
> > The only thing that would come close to convincing me that a
> > virtualized brain was successful in producing human consciousness
> > would be if a person could live with half of their brain emulated for
> > a while, then switch to the other half emulated for a while and report
> > as to whether their memories and experiences of being emulated were
> > faithful. I certainly would not exchange my own brain for a computer
> > program based on the computer program's assessment of its own
> > consciousness.
> Okay.
> > > I believe this is what computers allow us to do: explore alternate
> > > universes by defining new sets of logical rules.
> > Sure, but they can also blind us to the aspects of our own universe
> > which cannot ever be defined by any set of logical rules (such as the
> > experiential nature of qualia).
> > > Neural prostheses will be common some day; Thomas Berger has spent the
> > > past decade reverse engineering the hippocampus:
> > >
> > Prostheses are great but you can't assume that you can replace the
> > parts of the brain which host the conscious self without replacing the
> > self.
> Let's say there was advanced medical technology in the distant future which
> could heal a person from any wound, it could reassemble a person atom by
> atom or cell by cell if necessary.  If you were completely obliterated in
> some disaster and then perfectly restored with this machine, would it
> concern you if you learned you had been reconstructed by the medical
> device's own internal store of matter, rather than use your original atoms?
> If it does, then you must somehow justify why the continual replacement of
> matter in your body through normal metabolism does not alarm you; if it does
> not, then some part of you admits that what constitutes a person is not their
> matter but the patterns of the matter which define them.  The mechanist idea
> is that it is patterns above all that matter, and whether they are
> replicated with brain cells, silicon chips, or ping pong balls, the essence
> of the entity (its personality, thought patterns, memories, abilities,
> dreams, etc.) would all be preserved.
> This philosophy has already shown great success for anything that stores,
> transmits or processes information.  Data can be stored as magnetic poles on
> hard drives and tape, different levels of reflectivity on CDs and DVDs, as
> charges of electrons in flash memory, etc.  Data can be sent as vibrations
> in the air, electric fields in wires, photons in glass fibers, or ions
> between nerve cells.  Data can be processed by electromechanical machines,
> vacuum tubes, transistors, or biological neural networks.  These different
> technologies can be meshed together without causing any problem.  You can
> have packets sent over a copper wire in an Ethernet cable, then bridged
> to a fiber optic connection and represented as groups of photons,
> and then translated again to vibrations in the air, and then after being
> received by a cochlea, transmitted as releases of ions between nerve cells.
> Data can be copied from the flash memory in a digital camera, to a hard
> drive in a computer, and then encoded into a person's brain by way of a
> monitor.  To believe in the impossibility of an artificial brain is to
> believe there is some form of information which can only be transmitted by
> neurons, or some computation performed by neurons which cannot be reproduced
> by any other substrate.  Henry Markram and his team have thus far not
> found the need to incorporate any unknown physics in order to build
> biologically accurate models of large sections of interconnected neurons.
> If you think his team will ultimately fail you should go beyond the mere
> prediction that they will fail and provide a reasoning or explanation for
> what you think the roadblock will be.  Is it infinite complexity,
> non-computable functions, something else?  Unless you can point to something
> in the brain which cannot be modeled, logic leads directly to the idea
> that intelligent machines are possible.  Once at this stage, you may say the
> machines may be intelligent, but unconscious.  This leads to a belief in
> philosophical zombies.  Is this what you believe will happen, or am I
> missing some part of your theory?
> > If you lose an arm or a leg, fine, but if you lose a head and a
> > body, you're out of luck. To save the arm and replace the head with a
> > cybernetic one is not the same thing. Even if you get a brain grown
> > from your own stem cells, it's not going to be you. One identical twin
> > is not a valid replacement for the other.
> I agree, a re-grown brain or a twin's brain would not have the same
> memories.  However, nothing prevents the construction of a more faithful
> replica, complete with the original neuron links and connection weights.
> > >  If only one
> > > substrate is possible in any given universe, why do you think it just so
> > > happens to line up with the same materials which serve a biological
> > > function?  Do you subscribe to anthropic reasoning?
> > I don't know that only one substrate is possible, and I don't
> > necessarily think that consciousness is unique to biology, I just
> > think that human consciousness in particular is an elaboration of
> > hominid perception, animal sense, and organic molecular detection.
> Qualia aren't directly connected to sensory measurements from the
> environment though.  If I swapped all the red-preferring cones in your eyes
> with the blue-preferring cones, then shone blue-colored light at your eyes,
> you would report it as red.
> > The
> > more you vary from that escalation, the more you should expect the
> > interiority to diverge from our own. It's not that we cannot build a
> > brain based on plastic and semiconductors, it's that we should not
> > assume that such a machine would be aware at all, just as a plastic
> > flower is not a plant. It looks enough like a plant to fool our casual
> > visual inspection, but for every other animal, plant, or insect, the
> > plastic flower is nothing like a plant at all. A plastic brain is the
> > same thing. It may make for a decent android to serve our needs, but
> > it's not going to be an actual person.
> Do you think something can in principle act just like an intelligent person
> in every respect, without being at all conscious?
> > > Primary colors aren't physical properties, they are purely mental
> > > constructions.  There are shrimp which can see something like 16
> > > different primary colors.  It is a factor of the dimensionality of the
> > > inputs the brain has to work with when generating the environment you
> > > believe yourself to be in.
> > They are phenomena present in the cosmos, just as a quark or galaxy
> > is. Labeling them mental constructions is just a way of disqualifying
> > them by appealing to metaphysical speculation.
> This has been understood ever since the ancient Greeks, and by most
> scientists who have studied light:
> The Greek philosopher Democritus, who lived in the 5th and 4th centuries
> B.C., best known for his atomic theory of matter, said “By convention there
> are sweet and bitter, hot and cold, by convention there is color; but in
> truth there are atoms and the void.”  Galileo wrote in The Assayer,
> published in 1623, “I think that tastes, odors, colors, and so on are no
> more than mere names so far as the object in which we locate them is
> concerned, and that they reside in ...
