> How is it you are so sure that the organization is only part of it?

Because it makes sense to me that organization cannot create functions
which are not inherent potentials of whatever it is you are
organizing. It doesn't matter how many ping pong balls you have or how
you organize them, even if you put velcro or grease on them, you're
not going to ever get a machine that feels or thinks or tries to kill
you when you threaten its organization. Life or consciousness does
not follow logically from mechanical organizations of any kind. Those
qualities can only be perceived by a subjective participant.

> Sulfur is not functionally equivalent to carbon, it will behave differently
> and thus it is not the same organization.

That's why I'm saying that the assumption that inorganic matter will
behave in a way functionally equivalent to organic cells, let alone to
neurological networks, is not supported by any evidence. I think it's
a fantasy. Just because we can make a puppet seem convincingly
anthropomorphic to us doesn't mean that it can feel something.

> Do you think it
> would be impossible to make a life form using these particles in place of
> carbon (assuming they behaved the same in all the right conditions) or is
> there something special about the identity of carbon?

There is only something special about the identity of carbon because
organic chemistry relies upon it to perform higher level biochemical
acrobatics. There's no logical reason why sentience should occur in
one molecular arrangement and not another if you were designing a
cosmos from scratch. You could make a universe that makes sense where
noble gases stack up like cells and write symphonies. Consciousness
makes no more sense in a strictly physical universe than would time
travel, teleportation, or omnipotence. Less actually. Those magical
kinds of categories are at least variations on physical themes,
whereas feeling and awareness are wholly unprecedented and impossible
under purely mathematical and physical definitions. There is simply
no room in which subjectivity could take place.

> No, it is more than an antenna.  The retina does processing.  I chose the
> retina example as opposed to replacing part of the optic nerve precisely
> because the retina is more than an antenna.

A living retina is more than an antenna because it is composed of a
microbiological community of living cells. An electronic retina is a
prosthetic extension of the optic nerve that may or may not serve as a
functional equivalent to the person using it. As with a prosthetic
limb, which may be functionally equivalent in whatever ways its
designer deems feasible or important, that doesn't mean it's the same
thing, even if we can't consciously tell the difference.

Who knows, it may turn out that someone with an artificial eye has
more emotional distance toward the images they see, or maybe they will
have enhanced acuity for certain categories of things and not others,
etc. It's still not like replacing someone's amygdala or something.

> So the "psychic outputs" from the retina are reproducible, but not those of
> the visual cortex?  Why not?  The idea of these psychic outputs sounds
> somewhat like substance dualism or vitalism.

With the retina (or the cochlea, skin receptors, olfactory bulb, etc)
you are dealing with specialized tissues which, IMO, have concentrated
and centralized the sensorimotor functions inherent in all animal
cells into an organ for the larger organism. As such, their I/O is
more isomorphic to the physical phenomena they are interfacing with.
As with all tissues in the nervous system, they play a dual role,
subjugating their own psychic output as single-celled organisms and
animal tissues to some degree in order to facilitate a psychic I/O at
the organism level. A nervous system is like an organism within an
organism. So yes, the output of the retina that we make sense of can
be reproduced, but you're not fooling the rest of the nervous system
and body.

>The interesting thing is that the brain was apparently able to automatically
>adapt to the new signals received from the retina and process it for what it
>was, a new primary color input.

Making existing colors accessible to an individual monkey or person's
nervous system is completely different from inventing a new primary
color in the universe. Even tetrachromats do not perceive a new
primary color, they just perceive finer distinction between existing
hue combinations. Not that a new color couldn't be achieved
neurologically, maybe it could, but we have no idea how to conceive of
what that color could look like. We can't think of a replacement for
yellow. We don't know where yellow comes from, or what it's made of,
or what other possible spectrum could be created. It's literally
inconceivable, like a square circle. This is not a matter of technical
skill but of understanding that color is a visual feeling with no
mechanical logic that invokes it by necessity. It has its own logic
which is just as fundamental as the elements of the periodic table,
and not reducible to physical phenomena.

>I think it is wrong to say the subjective visual experience is simple.  It
>seems simple to us, but it has gone through massive amounts of processing
>and filters before you are made aware of it.

If it seems simple to us, so simple that an infant can relate to
colors even before they can grasp numbers or letters, that would have
to be explained. There is a lot of technology behind this conversation
as well, but that doesn't mean these words are a complex technology. From
my perspective, the view you are investing in is west-of-center, in
the sense that it compels us to privilege third person views of first
person phenomena, which I think is sentimental and unscientific. First
person phenomena are legitimate, causally efficacious manifestations
in the cosmos having properties and functions which cannot be
meaningfully defined in strictly physical, objective terms.

> Is the self-driving car
> not aware of the color the street light is?

No way. It's not aware of anything. The sensitivity of the CCD to
optical changes in the environment drives electronic changes in the
chips but that's as far as it goes. Nothing is felt or known; it's
just unconsciously reported through a sophisticated program.
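To make concrete what I mean by "unconsciously reported," here is a toy sketch of the kind of mapping such a program performs. This is my own illustration, nothing to do with Google's actual software; the function name and the comparison scheme are invented:

```python
def classify_light(rgb):
    """Map a raw (R, G, B) camera reading to a label.

    Nothing here experiences red or green; three numbers are compared
    and a string is returned. That is as far as it goes.
    """
    r, g, b = rgb
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    return "unknown"

print(classify_light((220, 40, 30)))  # prints: red
print(classify_light((30, 200, 60)))  # prints: green
```

However sophisticated the real pipeline is, it bottoms out in comparisons like these, and no step in the chain feels anything.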

>Then again, if you define consciousness out of the universe
>entirely, there would be no way we could suspect anything because there
>would be nothing we could do at all.

Right, but just for the sake of argument, let's say that there were some
other way of analyzing the universe without consciousness. What I'm
saying is that there would be no hint of any interior dimension such
as we experience in every waking moment. Even if the analysis could
detect the kinds of patterns and behaviors we are familiar with (which
it wouldn't), the idea of consciousness itself just would not follow
from observing a living brain any more than a brain coral or a dead
brain.

>You will find software for evolving neural network based brains...

Sure, we can definitely make artificial patterns which reflect
intelligence, which behave intelligently, but they still don't feel
anything or care about their own existence. They have no subjective
interiority, they are just automatic patterns. Part of what we are is
just like that. Our bodies are evolving genetic robots, but that's
only half of what we are. The other half is equally interesting but
not as reducible to quantified variables.
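For the sake of illustration, here is a minimal sketch of the kind of "brain" those evolved bots use. This is my own simplification of the general technique, not the actual Smart Sweepers code; the weights and the function are invented:

```python
import math

def steer(weights, bot_pos, food_pos):
    """One forward pass of a two-input, one-output neural 'brain'.

    The bot's 'decision' to turn toward food is nothing but a weighted
    sum squashed through tanh -- an automatic pattern, no interiority.
    """
    dx = food_pos[0] - bot_pos[0]
    dy = food_pos[1] - bot_pos[1]
    dist = math.hypot(dx, dy) or 1.0  # avoid dividing by zero
    # weighted sum of the normalized direction to the nearest food
    return math.tanh(weights[0] * dx / dist + weights[1] * dy / dist)

# With food directly to the right, the output is a positive turn.
angle = steer((1.0, 0.0), (0, 0), (1, 0))
```

Evolution just tunes the numbers in `weights`; at no point does anything in the arithmetic start caring about the food.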

>You cannot look at a
>conscious computer at the level of the silicon chip, by far most of the
>complexity is in the memory of the computer

Except that what the computer physically is can only be a collection
of silicon chips. It has no physical coherence of its own. Without a
human interpreter, the entire contents of the memory are just
a-signifying groupings of magnetized cobalt alloy. There is no
independent sentience there. In the absence of electric current and a
conscious creature to interact with it, the computer is just an
unusual collection of minerals.
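As a small illustration of what I mean by a-signifying groupings: the very same bytes in memory are an integer, a fraction, or noise depending entirely on the interpretation brought to them. A sketch using Python's standard `struct` module:

```python
import struct

# Four bytes sitting in memory, meaning nothing by themselves.
raw = b"\x00\x00\x80?"

as_int = struct.unpack("<i", raw)[0]    # read as a little-endian 32-bit integer
as_float = struct.unpack("<f", raw)[0]  # read as an IEEE 754 single-precision float

print(as_int)    # prints: 1065353216
print(as_float)  # prints: 1.0
```

Whether those bytes are 1065353216 or 1.0 isn't in the cobalt; it's in the reading.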

Sorry if I sound rude or anything, I'm not trying to be argumentative.
You're being very civil and knowledgeable, and I appreciate that. I'm
just naturally wordy and obnoxious on this subject. It's what I do
most of my blogging about (http://s33light.org).

On Jul 9, 10:02 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Sat, Jul 9, 2011 at 7:42 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > > The difference between a life form and a mixture of chunks of coal and
> > water won't be found
> > > in comparing the chemicals, the difference is in their organization.
> >  That
> > > is all that separates living matter from non-living matter
>
> > Organization is only part of it.
>
> How is it you are so sure that the organization is only part of it?
>
> > You could try to to make DNA out of
> > something else - substituting sulfur for carbon for instance, and it
> > won't work.
>
> Sulfur is not functionally equivalent to carbon, it will behave differently
> and thus it is not the same organization.
>
> > It goes beyond mathematical considerations, since there is
> > nothing inherently golden about the number 79 or carbon-like about the
> > number 6. We can observe that in this universe these mathematical
> > organizations correlate with particular behaviors and qualities, but
> > that doesn't mean that they have to, in all possible universes,
> > correlate in that way. Mercury could look gold to us instead. Life
> > could be based on boron. In this universe, however, there is no such
> > thing as living matter, there are only living tissues. Cells. Not
> > circuits.
>
> The special thing about carbon is that it has four free electrons to use to
> bond with other atoms (it serves as a glue for holding large molecules
> together).  While Silicon also has 4 free electrons, it is much larger, and
> doesn't hide away between the atoms it is holding together, it would get in
> the way.  Anything that behaves like a carbon atom in all the same ways
> could serve as a replacement for the carbon atom, it wouldn't have to be
> carbon.  For example, lets say we discovered a new quark that could be put
> together into a super proton with a positive charge of 3, and also it had
> the mass of 3 protons.  A nucleus made of two of these super protons and six
> neutrons could not rightfully be called carbon, yet it would have the same
> mass and chemical properties, and the same electron shells.  Do you think it
> would be impossible to make a life form using these particles in place of
> carbon (assuming they behaved the same in all the right conditions) or is
> there something special about the identity of carbon?
>
> > > Could we not build an artificial retina which sent the right signals down
> > the optic nerve and allow someone to see?
>
> > Sure, but it's still going to be a prosthetic antenna.
>
> No, it is more than an antenna.  The retina does processing.  I chose the
> retina example as opposed to replacing part of the optic nerve precisely
> because the retina is more than an antenna.
>
> > You can
> > replicate the physical inputs from the outside world but you can't
> > necessarily replicate the psychic outputs from the visual cortex to
> > the conscious Self.
>
> So the "psychic outputs" from the retina are reproducible, but not those of
> the visual cortex?  Why not?  The idea of these psychic outputs sounds
> somewhat like substance dualism or vitalism.
>
> > It's no more reasonable than expecting the
> > fingernails on an artificial hand to continue to grow and need
> > clipping. We don't have the foggiest idea how to create a new primary
> > color from scratch.
>
> We have done this to monkeys already:
> http://www.guardian.co.uk/science/2009/sep/16/colour-blindness-monkey...
> The interesting thing is that the brain was apparently able to automatically
> adapt to the new signals received from the retina and process it for what it
> was, a new primary color input.  It only took the brain five months or so to
> rewire itself to process this new color.
>
> "It was as if they woke up and saw these new colours. The treated animals
> unquestionably responded to colours that had been invisible to them," said
> Jay Neitz, a co-author on the study at the University of Washington in
> Seattle.
>
> It is even thought that some small percentage of women see four primary
> colors: http://www.post-gazette.com/pg/06256/721190-114.stm
> It is not that they have different genes for processing colors differently,
> they just have genes for a fourth type of light-sensitive cone, their brain
> software adapts accordingly.  (Just as those with color blindness do not
> have defective brains)
>
> > IMO, until we can do that - one of the most
> > objective and simple examples of subjective experience, we have no
> > hope of even beginning to synthesize consciousness from inorganic
> > materials.
>
> I think it is wrong to say the subjective visual experience is simple.  It
> seems simple to us, but it has gone through massive amounts of processing
> and filters before you are made aware of it.  Some 30% of the gray matter in
> your brain is used to process visual data.
>
> Given that, I would argue we have already implemented consciousness in
> in-organic materials.  Consider that Google's self driving cars must
> discriminate between red and green street lights.  Is the self-driving car
> not aware of the color the street light is?
>
> > >And brains are just gelatinous tissue with cells squirting juices back and
> > >forth.  If you are going to use reductionism when talking about computers,
> > >then to be fair you must apply the same reasoning when talking about minds
> > >and brains.
>
> > Exactly. If we didn't know for a fact that our brain was hosting
> > consciousness through our first hand experience there would be
> > absolutely no way of suspecting that such a thing could exist.
>
> I am not as certain of that as you are.  Imagine some alien probe came down
> to earth and observed apes pointing at a piece of red fruit up in a tree
> amongst many green leaves.  The probe might conclude that the ape was
> conscious of the fruit and has the awareness of different frequencies of
> light.  Then again, if you define consciousness out of the universe
> entirely, there would be no way we could suspect anything because there
> would be nothing we could do at all.
>
> > This is
> > what I'm saying about the private topology of the cosmos. We can't
> > access it directly because we are stuck in our own private topology.
>
> > So to apply this to computers and planes - yes they could have a
> > private topology, but judging from their lack of self-motivated
> > behaviors,
>
> Check out the program "Smart Sweepers" on this page:
> http://www.ai-junkie.com/ann/evolved/nnt1.html
> You will find software for evolving neural network based brains, which
> control behaviors of little robots on a plane searching for food.  They
> start off completely dumb, most running around in circles, but after a few
> hundred generations become quite competent, and after a few thousand I've
> even observed what could be described as social behavior (they all travel in
> the same direction and never turn around backwards if they miss a piece of
> food), when I first saw this I was completely shocked, I would not have
> guessed this behavior would result even though I understood how the program
> worked.  There is, however, an individual survival benefit from following
> the group movement rather than traveling against the grain (the waves of
> bots clear out food in a wave, and going against the grain you would get
> lucky far less often than traveling with it).  Note: Press the "F" key to
> accelerate the evolution rather than animating it in its entirety when you
> get bored watching the performance of each generation.
>
> The bots, and their evolution is self-directed.  You will find no code in
> the source files indicating how to find food, or how if they all travel in
> one pack, members of the group will individually benefit.  This is a
> surprising result which the computer found, it was not programmed in, nor
> was the computer told to do it.  The evolved bots movement, in their search
> for food can also be said to be self-directed, it is as much as movement in
> a bacterium or insect is in its search for food.  You might go so far as to
> say each bot IS conscious of the closest piece of food (that information is
> fed into the neural network of each bot).  Whether or not a computer
> exhibits self motivated behaviors is a matter of its programming; you
> couldn't say your word processor is very self-directed, for example.
>
> > it makes more sense to think of them in terms of purely
> > structural and electronic interiority rather than imagining that their
> > assembly into anthropological artifacts confer some kind of additional
> > subjectivity.
>
> > A living cell is more than the sum of it's parts. A dead cell is made
> > of the same materials with the same organization as a living cell, it
> > just doesn't cohere as an integrated cell anymore, so lower level
> > processes overwhelm the whole.
>
> I would say death, like unconsciousness is the failure of higher-level
> processes.  It is when you stop breathing (a high level process) that causes
> mitochondria to stop producing ATP, which causes most other reactions in
> cells to cease.  Likewise, anesthetical chemicals cause very little
> difference in the operation of brain cells at the lower levels, but
> globally, nerve signals won't travel as far, and different brain regions
> become isolated from each other.  Thus the brain still looks like it is
> alive and functioning but there is no consciousness.  You cannot look at a
> conscious computer at the level of the silicon chip, by far most of the
> complexity is in the memory of the computer.  If we were to talk about a
> computer program with the same complexity as the human brain, it might have
> many Petabytes (10^15 bytes) worth of memory.  The processor, or processors
> serve only to provide a basic ruleset for relating the various structures
> that exist in memory, just as the laws of physics do for our biological
> brains.  Compared to our brains, the laws of physics look very simple, just
> like the architecture of any computer's CPU looks very simple compared to
> the in-memory representation of a brain.  This is the mistake Searle made
> when he said as the rule-follower he wouldn't understand a thing, there is
> very little complexity in the following of the rules; the computer is more
> than just the CPU, it is the tape also.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
