On Sat, Jul 9, 2011 at 7:42 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> > The difference between a life form and a mixture of chunks of coal and
> > water won't be found in comparing the chemicals, the difference is in
> > their organization. Organization is all that separates living matter
> > from non-living matter.
> Organization is only part of it.
How is it you are so sure that the organization is only part of it?
> You could try to make DNA out of
> something else - substituting sulfur for carbon for instance, and it
> won't work.
Sulfur is not functionally equivalent to carbon, it will behave differently
and thus it is not the same organization.
> It goes beyond mathematical considerations, since there is
> nothing inherently golden about the number 79 or carbon-like about the
> number 6. We can observe that in this universe these mathematical
> organizations correlate with particular behaviors and qualities, but
> that doesn't mean that they have to, in all possible universes,
> correlate in that way. Mercury could look gold to us instead. Life
> could be based on boron. In this universe, however, there is no such
> thing as living matter, there are only living tissues. Cells. Not
The special thing about carbon is that it has four valence electrons available
for bonding with other atoms (it serves as a glue for holding large molecules
together). While silicon also has four valence electrons, its atoms are much
larger; silicon doesn't tuck away between the atoms it holds together, so it
would get in the way. Anything that behaves like a carbon atom in all the same
ways could serve as a replacement for the carbon atom; it wouldn't have to be
carbon. For example, let's say we discovered a new quark that could be put
together into a super proton with a positive charge of 3 and the mass of 3
protons. A nucleus made of two of these super protons and six neutrons could
not rightfully be called carbon, yet it would have the same mass and chemical
properties, and the same electron shells. Do you think it would be impossible
to make a life form using these particles in place of carbon (assuming they
behaved the same under all the right conditions), or is there something
special about the identity of carbon?
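The bookkeeping for that thought experiment can be checked in a few lines
(the charge-3, mass-3 "super proton" is, of course, purely hypothetical):

```python
# Toy bookkeeping for the hypothetical "super proton" nucleus described above.
# A super proton is assumed to carry charge +3 and the mass of 3 protons.
SUPER_PROTON = {"charge": 3, "mass": 3}
NEUTRON = {"charge": 0, "mass": 1}

# Two super protons plus six neutrons:
nucleus_charge = 2 * SUPER_PROTON["charge"] + 6 * NEUTRON["charge"]
nucleus_mass = 2 * SUPER_PROTON["mass"] + 6 * NEUTRON["mass"]

# Carbon-12 has 6 protons and 6 neutrons: charge 6, mass number 12.
print(nucleus_charge, nucleus_mass)  # same charge -> same electron shells
```

Same nuclear charge means the same electron shells, and hence the same chemistry.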
> > Could we not build an artificial retina which sent the right signals down
> the optic nerve and allow someone to see?
> Sure, but it's still going to be a prosthetic antenna.
No, it is more than an antenna. The retina does processing. I chose the
retina example as opposed to replacing part of the optic nerve precisely
because the retina is more than an antenna.
> You can
> replicate the physical inputs from the outside world but you can't
> necessarily replicate the psychic outputs from the visual cortex to
> the conscious Self.
So the "psychic outputs" from the retina are reproducible, but not those of
the visual cortex? Why not? The idea of these psychic outputs sounds
somewhat like substance dualism or vitalism.
> It's no more reasonable than expecting the
> fingernails on an artificial hand to continue to grow and need
> clipping. We don't have the foggiest idea how to create a new primary
> color from scratch.
We have done this to monkeys already:
The interesting thing is that the brain was apparently able to automatically
adapt to the new signals received from the retina and process them for what
they were: a new primary color input. It only took the brain five months or so
to rewire itself to process this new color.
"It was as if they woke up and saw these new colours. The treated animals
unquestionably responded to colours that had been invisible to them," said
Jay Neitz, a co-author on the study at the University of Washington.
It is even thought that some small percentage of women see four primary
colors. It is not that they have different genes for processing colors
differently; they just have genes for a fourth type of light-sensitive cone,
and their brain software adapts accordingly. (Just as those with color
blindness do not have defective brains.)
> IMO, until we can do that - one of the most
> objective and simple examples of subjective experience, we have no
> hope of even beginning to synthesize consciousness from inorganic
I think it is wrong to say the subjective visual experience is simple. It
seems simple to us, but it has gone through massive amounts of processing
and filters before you are made aware of it. Some 30% of the gray matter in
your brain is used to process visual data.
Given that, I would argue we have already implemented consciousness in
inorganic materials. Consider that Google's self-driving cars must
discriminate between red and green street lights. Is the self-driving car
not aware of the street light's color?
> >And brains are just gelatinous tissue with cells squirting juices back and
> >forth. If you are going to use reductionism when talking about computers,
> >then to be fair you must apply the same reasoning when talking about minds
> >and brains.
> Exactly. If we didn't know for a fact that our brain was hosting
> consciousness through our first hand experience there would be
> absolutely no way of suspecting that such a thing could exist.
I am not as certain of that as you are. Imagine some alien probe came down
to Earth and observed apes pointing at a piece of red fruit up in a tree
amongst many green leaves. The probe might conclude that the ape was
conscious of the fruit and aware of different frequencies of light. Then
again, if you define consciousness out of the universe entirely, there would
be no way we could suspect anything, because there would be nothing we could
do at all.
> This is
> what I'm saying about the private topology of the cosmos. We can't
> access it directly because we are stuck in our own private topology.
> So to apply this to computers and planes - yes they could have a
> private topology, but judging from their lack of self-motivated
Check out the program "Smart Sweepers" on this page:
You will find software for evolving neural-network-based brains, which
control the behavior of little robots searching a plane for food. They start
off completely dumb, most running around in circles, but after a few hundred
generations they become quite competent. After a few thousand I have even
observed what could be described as social behavior: they all travel in the
same direction and never turn around backwards if they miss a piece of food.
When I first saw this I was completely shocked; I would not have guessed this
behavior would result, even though I understood how the program worked. There
is, however, an individual survival benefit to following the group movement
rather than traveling against the grain: the bots clear out food in a wave,
and going against the grain you would get lucky far less often than traveling
with it. (Note: press the "F" key to accelerate the evolution, rather than
animating each generation in its entirety, when you get bored watching.)
The bots and their evolution are self-directed. You will find no code in
the source files indicating how to find food, or stating that if they all
travel in one pack, members of the group will individually benefit. This is
a surprising result which the computer found; it was not programmed in, nor
was the computer told to do it. The evolved bots' movement in their search
for food can also be said to be self-directed, as much as the movement of a
bacterium or insect in its search for food is. You might go so far as to
say each bot IS conscious of the closest piece of food (that information is
fed into the neural network of each bot). Whether or not a computer
exhibits self-motivated behaviors is a matter of its programming; you
couldn't say your word processor is very self-directed, for example.
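A minimal sketch in the same spirit follows. To be clear, this is not the
actual Smart Sweepers code: the world, the one-layer "brain" of four weights,
and all names here are my own simplification of the idea.

```python
# Minimal neuroevolution: bots whose brains are evolved, not programmed,
# to seek food. Nothing below encodes a food-seeking strategy directly.
import math
import random

random.seed(1)

def nearest_food(pos, food):
    return min(food, key=lambda f: (f[0] - pos[0])**2 + (f[1] - pos[1])**2)

def run_bot(weights, steps=60):
    """Fitness = food items eaten. The bot's only input is the unit
    vector toward the nearest food; its brain is 4 weights."""
    rng = random.Random(42)  # same world for every bot, for fair comparison
    food = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(20)]
    x = y = 0.0
    eaten = 0
    for _ in range(steps):
        fx, fy = nearest_food((x, y), food)
        dx, dy = fx - x, fy - y
        d = math.hypot(dx, dy) or 1.0
        # the "brain": a 2x2 linear map from food direction to heading
        vx = weights[0] * dx / d + weights[1] * dy / d
        vy = weights[2] * dx / d + weights[3] * dy / d
        n = math.hypot(vx, vy) or 1.0
        x += vx / n
        y += vy / n                      # move one unit per time step
        if math.hypot(fx - x, fy - y) < 1.0:
            food.remove((fx, fy))
            eaten += 1
            if not food:
                break
    return eaten

def evolve(pop_size=30, generations=40):
    """Selection plus mutation discovers food-seeking weights."""
    pop = [[random.gauss(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=run_bot, reverse=True)
        elite = pop[:pop_size // 5]
        # next generation: mutated copies of the best performers
        pop = [[w + random.gauss(0, 0.3) for w in random.choice(elite)]
               for _ in range(pop_size)]
        pop[:len(elite)] = elite         # keep the elite unchanged
    return max(run_bot(b) for b in pop)

print("best evolved bot ate", evolve(), "food items")
```

The point mirrors the one above: no line of `evolve` says how to find food,
yet after a few dozen generations the surviving weight vectors steer the bots
toward it.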
> it makes more sense to think of them in terms of purely
> structural and electronic interiority rather than imagining that their
> assembly into anthropological artifacts confer some kind of additional
> A living cell is more than the sum of its parts. A dead cell is made
> of the same materials with the same organization as a living cell, it
> just doesn't cohere as an integrated cell anymore, so lower level
> processes overwhelm the whole.
I would say death, like unconsciousness, is the failure of higher-level
processes. When you stop breathing (a high-level process), mitochondria stop
producing ATP, which causes most other reactions in cells to cease. Likewise,
anesthetic chemicals cause very little difference in the operation of brain
cells at the lower levels, but globally, nerve signals won't travel as far
and different brain regions become isolated from each other. Thus the brain
still looks like it is alive and functioning, but there is no consciousness.
You cannot understand a conscious computer at the level of the silicon chip;
by far most of the complexity is in the memory of the computer. If we were to
talk about a computer program with the same complexity as the human brain, it
might have many petabytes (10^15 bytes) worth of memory. The processor, or
processors, serve only to provide a basic ruleset for relating the various
structures that exist in memory, just as the laws of physics do for our
biological brains. Compared to our brains, the laws of physics look very
simple, just like the architecture of any computer's CPU looks very simple
compared to the in-memory representation of a brain. This is the mistake
Searle made when he said that, as the rule-follower, he wouldn't understand
a thing: there is very little complexity in the following of the rules, and
the computer is more than just the CPU; it is the tape also.
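The CPU/tape distinction can be made concrete with a toy example (a made-up
three-instruction machine, not any real architecture): the "processor" below
knows only three rules, yet what the machine does is determined entirely by
what sits in its memory.

```python
# A toy "processor" with only three rules; all the interesting structure
# lives in the memory it operates on, not in the rule-follower itself.
def run(memory):
    regs = {}
    pc = 0
    program = memory["code"]
    while pc < len(program):
        op, a, b = program[pc]
        if op == "set":               # regs[a] = constant b
            regs[a] = b
        elif op == "add":             # regs[a] += regs[b]
            regs[a] = regs.get(a, 0) + regs.get(b, 0)
        elif op == "jnz":             # jump to instruction b if regs[a] != 0
            if regs.get(a, 0) != 0:
                pc = b
                continue
        pc += 1
    return regs

# The same trivial ruleset runs whatever program we put on the "tape".
# This one sums 5 + 4 + 3 + 2 + 1:
memory = {"code": [
    ("set", "i", 5),
    ("set", "neg", -1),
    ("set", "total", 0),
    ("add", "total", "i"),   # instruction 3: loop body
    ("add", "i", "neg"),     # decrement i
    ("jnz", "i", 3),         # loop while i != 0
]}
print(run(memory)["total"])  # -> 15
```

Following the rules inside `run` tells you nothing about what the program in
`memory` is doing, which is exactly the position Searle put himself in.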
> Decay is entropy for a body or a piece
> of fruit, but a bonanza of biological negentropy for bacteria and
> >Do you need another person to look at and interpret the firings of neurons
> >in your brain in order for there to be meaning for your thoughts? If not,
> >why must there be a user of the computer to impart meaning to its states?
> I'm not saying that there is no meaning to the states of
> semiconductors acting in concert within a microprocessor, I'm just
> saying that it's likely to be orders of magnitude more primitive than
> organic life.
Again, I think you may be confusing the processor for the computer. The
complexity of a life form is bounded by the amount of information it takes to
represent it, which roughly corresponds to how many base pairs its DNA has
(2 bits for each one), though there is a high degree of compressibility.
Human DNA can be compressed to about 25 MB, with about half of those genes
describing the human brain in particular. Thus the initial human infant
brain could be described in about 12.5 MB (100 million bits) of information.
Obviously, through interaction with the environment it becomes much more
complex and requires more bits to describe, but my point here is that there
are data sets and programs much larger in size than that necessary to
describe organic life. I agree with you that almost any life form is more
complex than the processor; instruction sets are typically small. The Java
Virtual Machine, for example, has only 256 possible instructions. It knows
how to do only a finite set of 256 different things. Yet the number of
possible Java programs is infinite; a program can be implemented to do just
about anything, despite the limited instruction set.
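Those figures are back-of-the-envelope; under the stated assumptions
(~3 billion base pairs, 2 bits per pair, ~25 MB compressed, half the genes
concerning the brain), the arithmetic goes like this:

```python
# Rough information-content arithmetic for the human genome, using the
# assumptions stated above (these are estimates, not measured constants).
base_pairs = 3_000_000_000            # ~3 billion base pairs
raw_bits = base_pairs * 2             # 2 bits pick one of A/C/G/T
raw_mb = raw_bits / 8 / 1_000_000     # uncompressed size in megabytes
compressed_mb = 25                    # assumed compressed size
brain_mb = compressed_mb / 2          # ~half the genes concern the brain
brain_bits = brain_mb * 1_000_000 * 8
print(raw_mb, brain_mb, brain_bits)   # -> 750.0 12.5 100000000.0
```

So the raw genome is around 750 MB, and the compressed brain-describing
portion is about 12.5 MB, i.e. the 100 million bits mentioned above.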
> To me, it's obvious that the interior experience of
> neurons firing is the important, relevant phenomenon while the neuron
> side is the generic back end.
> Since computers are a reflection of our own cognitive abilities rather
> than a self-organizing phenomenon, their important, relevant phenomena
> are the signifying side which faces the user. The guts of the computer
> are just means to an end. They don't know that they are computers, and
> they never will.
Dan Dennett said this belief is only a prejudice against non-neuron minds.
> Computation is not awareness. If it were, you could
> invent a new primary color simply by having someone understand a
> formula. It's a category error to conflate the two.
This is Searle's error again. The person would have to BE the new formula
to experience the new color, not have a meta-understanding of the formula.
Searle was not the mind he was processing the rules for; he was just the
CPU, so of course he did not experience things as the mind itself. The laws
of physics determine what your brain does, yet we don't say the laws of
physics know what it is like to be you; only you know what it is like to
be you.
> btw, these ideas are not what I have always believed. I have been
> thinking about these issues all of my life. My original orientation
> was as a strict materialist, so I know very well how to make sense out
> of the world that way. I'm just saying that it's missing half of the
> story based on an idea of the self as a transparent logical entity
> separate from the cosmos that it observes, which is not. Consciousness
> is an extension of perception and awareness, not a disembodied logical
> essence. Logic is metaphysical. Sense is physics.
> On Jul 9, 6:08 pm, Jason Resch <jasonre...@gmail.com> wrote:
> > On Sat, Jul 9, 2011 at 11:44 AM, Craig Weinberg <whatsons...@gmail.com
> > > > Why? Biological tissue is made out of protons, neutrons, and electrons,
> > > > just like computer chips. Why should anything other than their
> > > > input/output function matter?
> > > A cadaver is made out of the same thing too. You could pump food into
> > > it and fit it with an artificial gut, even give it a synthesized voice
> > > to make pre-recorded announcements and string it up like a marionette.
> > > That doesn't mean it's a person. Life does not occur on the atomic
> > > level, it occurs on the molecular level. There may be a way of making
> > > inorganic molecules reproduce themselves, but there's no reason to
> > > believe that their sensation or cognition would be any more similar
> > > than petroleum is to plutonium. The i/o function is only half of the
> > > story.
> > > > Just assertions. The question is whether something other than you
> > > > have them?
> > > Why couldn't it? As you say, I am made of the same protons, neutrons,
> > > and electrons as everything else. You can't have it both ways. Either
> > > consciousness is a natural potential of all material phenomena or it's
> > > a unique special case. In the former you have to explain why more
> > > things aren't conscious, and the latter you have to explain why
> > > consciousness could exist.
> > This is like having to argue why more atoms aren't alive. The difference
> > between a life form and a mixture of chunks of coal and water won't be
> > found in comparing the chemicals, the difference is in their organization.
> > Organization is all that separates living matter from non-living matter.
> > Mechanism says the same thing regarding intelligent entities vs.
> > non-intelligent entities. It comes down to their organization, not any
> > material difference. Computation can be performed by collections of cells
> > and by logic gates etched on silicon.
> > Most neurologists consider the retina part of the brain, since processing
> > is performed there. Could we not build an artificial retina which sent the
> > right signals down the optic nerve and allow someone to see? Such devices
> > already exist:
> > > My alternative is to see that everything
> > > has a private side, which behaves in a sensorimotor way rather than
> > > electromagnetic, so that our experience is a massive sensorimotor
> > > aggregate of nested organic patterns.
> > > > A computer flying an airliner is not very smart, but it would know what
> > > > a runway is, what a storm is, the shape of the Earth. A computer that
> > > > runs a hospital would know whether there were patients, doctors, or
> > > > nurses.
> > > Nah, a computer like that wouldn't know anything about runways,
> > > storms, shapes, or Earth or whether there were patients, doctors, or
> > > nurses. Computers are just mazes of semiconductors which know when
> > > they are free to complete some circuits and not others.
> > And brains are just gelatinous tissue with cells squirting juices back and
> > forth. If you are going to use reductionism when talking about computers,
> > then to be fair you must apply the same reasoning when talking about minds
> > and brains.
> > > A computer
> > > autopilot knows less what a plane is than a cat does. Computers are
> > > automated microelectronic sculptures through which we compute human
> > > sense. They have no actual sense of their own beyond microelectronic
> > > sense.
> > > > You beg the question by specifying "human meaning". Do you suppose
> > > > there is something unique about humans, or can there be dog meaning
> > > > fish meaning and computer meaning?
> > > There is certainly something unique about humans in the minds of
> > > humans. Of course there is dog meaning, fish meaning, liver cell
> > > meaning, neuron meaning, DNA meaning, carbon meaning. There isn't
> > > computer meaning though because it's only a computer to a person that
> > > can use a computer.
> > Do you need another person to look at and interpret the firings of neurons
> > in your brain in order for there to be meaning for your thoughts? If not,
> > why must there be a user of the computer to impart meaning to its states?
> > Jason
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to email@example.com.
To unsubscribe from this group, send email to
For more options, visit this group at