>I don't think we can say what is or what wouldn't be possible with a machine 
>of these
>complexity; all machines we have built to date are primitive and simplistic
>by comparison.  The machines we deal with day to day don't usually do novel
>things, exhibit creativity, surprise us, etc. but I think a machine as
>complex as the human brain could do these things regularly.

I do think that we can say, with the same certainty that we cannot
create a square circle, that it would not be possible at any level of
complexity. It's not that such machines can't create novelty or
surprise, it's that they can't feel or care about their own survival.
I'm saying that the potential for awareness must be built into matter
at the lowest level or not at all. Complexity alone cannot cause
awareness in inanimate objects, let alone the kind of rich,
idiopathic phenomena we think of as qualia. The waking state of
consciousness requires no more biochemical complexity to initiate
than unconsciousness does. In this debate, the idea of complexity is
a red herring which, together with probability, acts as a veil for
what I consider to be the religious faith of promissory materialism.

> If one day humans succeeded in reverse engineering a brain, and executed it
> on a super computer, and it told you it was conscious and alive, and did not
> want to be turned off, would this convince you or would you believe it was
> only being mimicking something that could feel something?  If not, it seems
> there would be no possible evidence that could convince you.  Is that true?

The only thing that would come close to convincing me that a
virtualized brain had succeeded in producing human consciousness
would be if a person could live with half of their brain emulated for
a while, then switch to having the other half emulated for a while,
and report whether their memories and experiences of being emulated
were faithful. I certainly would not exchange my own brain for a
computer program based on the computer program's assessment of its
own consciousness.

> I believe this is what computers allow us to do: explore alternate universes
> by defining new sets of logical rules.

Sure, but they can also blind us to the aspects of our own universe
which cannot ever be defined by any set of logical rules (such as the
experiential nature of qualia).

> Neural prostheses will be common some day, Thomas Berger has spent the past
> decade reverse engineering the 
> hippocampus: http://www.popsci.com/scitech/article/2007-04/memory-hacker

Prostheses are great, but you can't assume that you can replace the
parts of the brain which host the conscious self without replacing
the self. If you lose an arm or a leg, fine, but if you lose a head
and a body, you're out of luck. Saving the arm and replacing the head
with a cybernetic one is not the same thing. Even a brain grown from
your own stem cells is not going to be you. One identical twin is not
a valid replacement for the other.

>  If only one possible
> substrate is possible in any given universe, why do you think it just so
> happens to line up with the same materials which serve a biological
> function?  Do you subscribe to anthropic reasoning?

I don't know that only one substrate is possible, and I don't
necessarily think that consciousness is unique to biology; I just
think that human consciousness in particular is an elaboration of
hominid perception, animal sense, and organic molecular detection.
The more you vary from that escalation, the more you should expect
the interiority to diverge from our own. It's not that we cannot
build a brain based on plastic and semiconductors; it's that we
should not assume that such a machine would be aware at all, just as
a plastic flower is not a plant. It looks enough like a plant to fool
our casual visual inspection, but to every other animal, plant, or
insect, the plastic flower is nothing like a plant at all. A plastic
brain is the same thing. It may make for a decent android to serve
our needs, but it's not going to be an actual person.

> Primary colors aren't physical properties, they are purely mental
> constructions.  There are shrimp which can see something like 16 different
> primary colors.  It is a factor of the dimensionality of the inputs the
> brain has to work with when generating the environment you believe yourself
> to be in.

They are phenomena present in the cosmos, just as a quark or a galaxy
is. Labeling them mental constructions is just a way of disqualifying
them by appealing to metaphysical speculation. Mentally constructed
where? From what? How? Why can't we mentally construct new colors
ourselves? Even if you had seen red and blue, you could not, in your
wildest imaginings or most rigorous quantitative expression, conceive
of what it is to see yellow if you had never seen it. Yellow is not
just a bluer version of red, even though electromagnetically that is
exactly what it should be; it's different from either blue or red,
and different in a self-explanatory, exquisitely signifying way.
Shrimp may not even see one color, let alone 16. They may just be
able to distinguish different qualities of grey. You're still not
accepting that color is not mechanical. It has no third-party
dimensionality. It is either seen first hand or it does not exist.
This is the way most of the phenomena we experience, and certainly
the experiences we care about, work.

> some people have reported experiencing "impossible"
> colors, such as reddish green and yellowish blue.

Yeah, I like that demo. It's not a new primary color, though; that's
just contradictory mixing of familiar colors. I'm not talking about
reddish green, I'm talking about Xed, Yhite, and Zlue. Because if you
are going to assert that the spectrum is a mental construct, then
there would need to be some explanation of how many such mental
universals can be constructed. Why not ten million completely and
utterly novel spectra? How do you make them cohere internally so that
you can have complements and opposites, color wheels, and additive
vs. subtractive mixing palettes?

>I think they are informational, rather than physical, but I tend to agree it
>may not be communicable without instantiating the same patterns in your own
>mind, or rewiring one's own brain to have the experience of someone else.

I think that 'informational' is metaphysical. It doesn't explain how
the effect is achieved. Imagine that color did not exist and you were
writing a program for a virtual world. How would you invent color?
How could you even conceive of the idea of it? It's like 'beef-
flavored nineteen'. There's no information there; it's pure
experiential sense. Visual feeling. It is physical, but it is the
interior of physicality: not electromagnetic, but the sensorimotive
topology of the sense which can be detected externally as
electromagnetism. Color is how electromagnetism feels to us, to our
brains, nerves, and retinas. This is a really big deal to realize.
It's a secret door to finding your own existence in a world of
materialistic reflections.

>How does it know to stop at a red light if it is not aware of anything?

It doesn't stop at a red light. The car stops at an electronic signal
that fires when the photosensor and its associated semiconductors
match certain quantitative thresholds which correspond to what we see
as a red light. It has no idea there is a car or a light. It knows
silicon, boron, germanium, and what electricity feels like.
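To make the point concrete, here is a minimal sketch of what such a "red light detector" reduces to mechanically. Every name and threshold value here is an illustrative assumption, not any real vehicle's code: the program only compares numbers against cutoffs someone chose to call "red".

```python
# A hypothetical, toy reduction of "seeing a red light" to threshold
# comparisons on numbers. Thresholds and names are illustrative only.

def looks_red(pixel):
    """True when RGB intensities fall in a band we chose to label 'red'."""
    r, g, b = pixel
    return r > 180 and g < 80 and b < 80  # arbitrary cutoffs

def should_brake(sensor_frame):
    # Count pixels crossing the threshold; fire the brake signal past a cutoff.
    red_pixels = sum(1 for px in sensor_frame if looks_red(px))
    return red_pixels > len(sensor_frame) * 0.1

frame = [(200, 40, 30)] * 20 + [(10, 10, 10)] * 80  # mostly dark, some "red"
print(should_brake(frame))  # True: thresholds crossed, nothing "seen"
```

Nowhere in the program is there a light, a car, or the color red; only quantities being compared, which is exactly the distinction at issue.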


>The human brain doesn't have a tiny homunculus inside of it watching a
>projector screen of conscious thought, the brain itself is a system which
>provides its own meaning and interpretation.

It's not a tiny homunculus; it's us. Meaning is not provided by the
brain any more than movies are provided by a DVD player.

>Likewise, a word processor
>distinguishes incorrectly spelled words from correctly spelled words whether
>or not someone is looking at or using the word processor.  Surely, the
>meaning to a person of a misspelled word is different from the meaning to
>the word-processor, yet there is still a distinction, and there is internal
>meaning between the status of correctly spelled vs. incorrectly spelled
>which affects the state of the word processor.

The word processor is just semiconductors which are activated and
controlled in a pattern we deem meaningful. There is no distinction
for the computer between correct and incorrect spelling, other than
different logic gates being held open or closed. It's just self-
referential machine language and has no sense of linguistic
significance whatsoever. Computation by itself can only simulate
intelligence; it can't know any meaning, just as a recipe can't be
served as a meal.
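As an illustration of the claim, a spell check "distinction" can be sketched as nothing more than set membership. The tiny dictionary below is an invented stand-in, not any real word processor's lexicon; the point is that the mechanism is pure lookup, with no linguistic significance anywhere in it.

```python
# Toy sketch: "distinguishing misspelled words" as bare set membership.
# The dictionary contents are an illustrative assumption.

DICTIONARY = {"the", "cat", "sat", "on", "mat"}

def misspelled(words):
    # Flag any token whose lowercase form is absent from the lookup table.
    return [w for w in words if w.lower() not in DICTIONARY]

print(misspelled(["The", "cat", "zat", "on", "teh", "mat"]))
# ['zat', 'teh']
```

The program never relates "teh" to "the" as a near-miss in English; it only reports that one byte string is in a table and the other is not.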

>You could say that about just about anything, a person, a city, the Earth,
>that they are all just unusual collections of minerals, but that misses a
>lot of the finer points.

Right, because the finer points cannot be reduced to physical
mechanics or calculation; they must be experienced first hand.

Craig

On Jul 10, 3:07 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Sun, Jul 10, 2011 at 12:29 AM, Craig Weinberg <whatsons...@gmail.com>wrote:
>
> > > How is it you are so sure that the organization is only part of it?
>
> > Because it makes sense to me that organization cannot create functions
> > which are not inherent potentials of whatever it is you are
> > organizing. It doesn't matter how many ping pong balls you have or how
> > you organize them, even if you put velcro or grease on them, you're
> > not going to ever get a machine that feels or thinks or tries to kill
> > you when you threaten it's organization. Life or consciousness does
> > not follow logically from mechanical organizations of any kind. Those
> > qualities can only be perceived by a subjective participant.
>
> In theory it is possible to build a computer using some system of ping pong
> balls.
> (Here is a cool example of what a component of such a computer might look
> like:http://www.youtube.com/watch?v=GcDshWmhF4A)
>
> Now imagine we link 100,000,000,000 of these ping pong ball based computers
> together, each one capable of processing signals received from other linked
> computers and sending out signals at a rate of 1000 per second.  Each of
> these computers is connected to around 10,000 other such computers.  It is
> beyond the ability of the human mind to fathom something of this complexity,
> but seeing something this complex (about as complex as the brain) makes it a
> little easier to accept by intuition, that a mind based on ping pong balls
> is possible.  When we typically try to imagine a ping-pong ball mind, we
> have trouble picturing more than a few hundred ping pong balls, but if you
> are to approach a mind as complex as the brain, you would need some thousand
> thousand thousand thousand thousand transactions per second.  I don't think
> we can say what is or what wouldn't be possible with a machine of these
> complexity; all machines we have built to date are primitive and simplistic
> by comparison.  The machines we deal with day to day don't usually do novel
> things, exhibit creativity, surprise us, etc. but I think a machine as
> complex as the human brain could do these things regularly.
>
>
>
> > > Sulfur is not functionally equivalent to carbon, it will behave
> > differently
> > > and thus it is not the same organization.
>
> > That's why I'm saying that to assume inorganic matter will behave in a
> > way that is functionally equivalent to organic cells, let alone
> > neurological networks, is not supported by any evidence. I think it's
> > a fantasy. Just because we can make a puppet seem convincingly
> > anthropomorphic to us doesn't mean that it can feel something.
>
> If one day humans succeeded in reverse engineering a brain, and executed it
> on a super computer, and it told you it was conscious and alive, and did not
> want to be turned off, would this convince you or would you believe it was
> only being mimicking something that could feel something?  If not, it seems
> there would be no possible evidence that could convince you.  Is that true?
>
>
>
> > > Do you think it
> > > would be impossible to make a life form using these particles in place of
> > > carbon (assuming they behaved the same in all the right conditions) or is
> > > there something special about the identity of carbon?
>
> > There is only something special about the identity of carbon because
> > organic chemistry relies upon it to perform higher level biochemical
> > acrobatics. There's no logical reason why sentience should occur in
> > one molecular arrangement and not another if you were designing a
> > cosmos from scratch.
>
> I believe this is what computers allow us to do: explore alternate universes
> by defining new sets of logical rules.
>
> > You could make a universe that makes sense where
> > noble gases stack up like cells and write symphonies. Consciousness
> > makes no more sense in a strictly physical universe than would time
> > travel, teleportation, or omnipotence. Less actually. Those magical
> > kinds of categories are at least variations on physical themes,
> > whereas feeling and awareness are wholly unprecedented and impossible
> > under purely mathematical and physical definitions. There is simply no
> > place for subjectivity to take place.
>
> > > No, it is more than an antenna.  The retina does processing.  I chose the
> > > retina example as opposed to replacing part of the optic nerve precisely
> > > because the retina is more than an antenna.
>
> > A living retina is more than an antenna because it is composed of a
> > microbiological community of living cells. An electronic retina is a
> > prosthetic extension of the optic nerve that may or may not serve as a
> > functional equivalent to the person using it. Just as a prosthetic
> > limb may be the functional equivalent in whatever ways it's designer
> > deems feasible, important, etc, it doesn't mean that it's the same
> > thing, even if we can't consciously tell the difference.
>
> > Who knows, it may turn out that someone with an artificial eye has
> > more emotional distance toward the images they see, or maybe they will
> > have enhanced acuity for certain categories of things and not others,
> > etc. It's still not like replacing someone's amygdala or something.
>
> Neural prostheses will be common some day, Thomas Berger has spent the past
> decade reverse engineering the 
> hippocampus:http://www.popsci.com/scitech/article/2007-04/memory-hacker
>
> Down the hall, Berger rises to greet me in his office. An imposing man with
>
> > a shock of gray hair, Berger, 56, has the thick build of an aging athlete
> > and the no-nonsense manner of a CEO. Can a chunk of silicon really stand in
> > for brain cells? I ask. "I don't need a grand theory of the mind to fix what
> > is essentially a signal-processing problem," he says. "A repairman doesn't
> > need to understand music to fix your broken CD player."
>
> > > So the "psychic outputs" from the retina are reproducible, but not those
> > of
> > > the visual cortex?  Why not?  The idea of these psychic outputs sounds
> > > somewhat like substance dualism or vitalism.
>
> > With the retina (or the cochlea, skin receptors, olfactory bulb, etc)
> > you are dealing with specialized tissues which, IMO, have concentrated
> > and centralized the sensorimotor functions inherent in all animal
> > cells into an organ for the larger organism. As such, their i/o is
> > more isomorphic to the physical phenomena they are interfacing with.
> > As with all tissues in the nervous system, they play a dual role,
> > subjugating their own psychic output as single celled organisms and
> > animal tissues to some degree in order to facilitate a psychic i/o at
> > the organism level. A nervous system is like an organism within an
> > organism. So yes, the output of the retina that we make sense of can
> > be reproduced, but you're not fooling the rest of the nervous system
> > and body.
>
> It seems to me a little too convenient, that the same biological material
> which is needed for self-reproducing cells, just happens to be the only
> viable substrate which can support consciousness.  If only one possible
> substrate is possible in any given universe, why do you think it just so
> happens to line up with the same materials which serve a biological
> function?  Do you subscribe to anthropic reasoning?
>
>
>
> > >The interesting thing is that the brain was apparently able to
> > automatically
> > >adapt to the new signals received from the retina and process it for what
> > it
> > >was, a new primary color input.
>
> > Making existing colors accessible to an individual monkey or person's
> > nervous system is completely different from inventing a new primary
> > color in the universe.
>
> Primary colors aren't physical properties, they are purely mental
> constructions.  There are shrimp which can see something like 16 different
> primary colors.  It is a factor of the dimensionality of the inputs the
> brain has to work with when generating the environment you believe yourself
> to be in.
>
> > Even tetrachromats do not perceive a new
> > primary color, they just perceive finer distinction between existing
> > hue combinations.
>
> The reason we say there are 3 primary colors, and TV screens have pixels of
> three different colors is because most humans have three different types of
> color sensitive cones in their eye.  Each cone cell can distinguish between
> 100 or so intensity levels, thus someone with red green color blindness can
> only see 100*100 different colors.  A typical human can see about a million
> 100*100*100, while a tetrachromat could distinguish between 100*100*100*100
> colors.
>
> > Not that a new color couldn't be achieved
> > neurologically, maybe it could, but we have no idea how to conceive of
> > what that color could look like.
>
> I agree we can't really conceive of these new colors without having a mind
> capable of representing them.  Perhaps there will be gene therapy in the
> near future which will allow any human to become a tetrachromat, and then
> like the monkeys you will one day wake up and see entirely novel colors.
> Hopefully people won't suddenly appear ugly to you after that switch occurs!
>
> > We can't think of a replacement for
> > yellow. We don't know where yellow comes from, or what it's made of,
> > or what other possible spectrum could be created. It's literally
> > inconceivable, like a square circle, not a matter of technical skill,
> > but an understanding that color is a visual feeling that has no
> > mechanical logic which invokes it by necessity. It has it's own logic
> > which is just as fundamental as the elements of the periodic table,
> > and not reducible to physical phenomena.
>
> For fun, see if you can have any success with 
> this:http://en.wikipedia.org/wiki/Opponent_process#Reddish_green_and_yello...http://en.wikipedia.org/wiki/Impossible_colors
> By putting red light into one eye and green light into another with the same
> level of brightness, some people have reported experiencing "impossible"
> colors, such as reddish green and yellowish blue.
>
>
>
> > >I think it is wrong to say the subjective visual experience is simple.  It
> > >seems simple to us, but it has gone through massive amounts of processing
> > >and filters before you are made aware of it.
>
> > If it seems simple to us, so simple that an infant can relate to them
> > even before they can grasp numbers or letters, that would have to be
> > explained. There is a lot of technology behind this conversation as
> > well, but it doesn't mean these words are a complex technology. From
> > my perspective, the view you are investing in is west-of-center, in
> > the sense that it compels us to privilege third person views of first
> > person phenomena, which I think is sentimental and unscientific. First
> > person phenomena are legitimate, causally efficacious manifestations
> > in the cosmos having properties and functions which cannot be
> > meaningfully defined in strictly physical, objective terms.
>
> I think they are informational, rather than physical, but I tend to agree it
> may not be communicable without instantiating the same patterns in your own
> mind, or rewiring one's own brain to have the experience of someone else.
>
>
>
> > > Is the self-driving car
> > > not aware of the color the street light is?
>
> > No way. It's not aware of anything.
>
> How does it know to stop at a red light if it is not aware of anything?
>
> > The sensitivity of the ccd to
> > optical changes in the environment drives electronic changes in the
> > chips but that's as far as it goes. Nothing is felt or known, it's
> > just unconsciously reported through a sophisticated program.
>
> Someone could say the same thing about you, could they not?  Without being
> the self-driving car, you really can't assert that it is aware of nothing.
>
>
>
> > >Then again, if you define consciousness out of the universe
> > >entirely, there would be no way we could suspect anything because there
> > >would be nothing we could do at all.
>
> > Right, but just the sake of argument, let's say that there were some
> > other way of analyzing the universe without consciousness. What I'm
> > saying is that there would be no hint of any interior dimension such
> > as we experience in every waking moment. Even if the analysis could
> > detect the kinds of patterns and behaviors we are familiar with (which
> > it wouldn't), the idea of consciousness itself just would not follow
> > from observing a living brain any more than a brain coral or a dead
> > brain.
>
> > >You will find software for evolving neural network based brains...
>
> > Sure, we can definitely make artificial patterns which reflect
> > intelligence, which behave intelligently, but they still don't feel
> > anything or care about their own existence. They have no subjective
> > interiority, they are just automatic patterns. Part of what we are is
> > just like that. Our bodies are evolving genetic robots, but that's
> > only half of what we are. The other half is equally interesting but
> > not as reducible to quantified variables.
>
> > >You cannot look at a
> > >conscious computer at the level of the silicon chip, by far most of the
> > >complexity is in the memory of the computer
>
> > Except that what the computer physically is can only be a collection
> > of silicon chips. It has no physical coherence of it's own. Without a
> > human interpreter, the entire contents of the memory is just a-
> > signifying groupings of magnetized cobalt alloy.
>
> The human brain doesn't have a tiny homunculus inside of it watching a
> projector screen of conscious thought, the brain itself is a system which
> provides its own meaning and interpretation.  Likewise, a word processor
> distinguishes incorrectly spelled words from correctly spelled words whether
> or not someone is looking at or using the word processor.  Surely, the
> meaning to a person of a misspelled word is different from the meaning to
> the word-processor, yet there is still a distinction, and there is internal
> meaning between the status of correctly spelled vs. incorrectly spelled
> which affects the state of the word processor.
>
> > There is no
> > independent sentience there. In the absence of electric current and a
> > conscious creature to interact with it, the computer is just an
> > unusual collection of minerals.
>
> You could say that about just about anything, a person, a city, the Earth,
> that they are all just unusual collections of minerals, but that misses a
> lot of the finer points.
>
>
>
> > Sorry if I sound rude or anything, I'm not trying to be argumentative.
> > You're being very civil and knowledgeable, and I appreciate that.
>
> I don't think you are rude or argumentative.  I appreciate the opportunity
> for a debate. :-)
>
> Jason
>
> > I'm
> > just naturally wordy and obnoxious on this subject. It's what I do
> > most of my blogging about (http://s33light.org).
>
> > On Jul 9, 10:02 pm, Jason Resch <jasonre...@gmail.com> wrote:
> > > On Sat, Jul 9, 2011 at 7:42 PM, Craig Weinberg <whatsons...@gmail.com
> > >wrote:
>
> > > > > The difference between a life form and a mixture of chunks of coal
> > and
> > > > water won't be found
> > > > > in comparing the chemicals, the difference is in their organization.
> > > >  That
> > > > > is all that separates living matter from non-living matter
>
> > > > Organization is only part of it.
>
> > > How is it you are so sure that the organization is only part of it?
>
> > > > You could try to to make DNA out of
> > > > something else - substituting sulfur for carbon for instance, and it
> > > > won't work.
>
> > > Sulfur is not functionally equivalent to carbon, it will behave
> > differently
> > > and thus it is not the same organization.
>
> > > > It goes beyond mathematical considerations, since there is
> > > > nothing inherently golden about the number 79 or carbon-like about the
> > > > number 6. We can observe that in this universe these mathematical
> > > > organizations correlate with particular behaviors and qualities, but
> > > > that doesn't mean that they have to, in all possible universes,
> > > > correlate in that way. Mercury could look gold to us instead. Life
> > > > could be based on boron. In this universe, however, there is no such
> > > > thing as living matter, there are only living tissues. Cells. Not
> > > > circuits.
>
> > > The special thing about carbon is that it has four free electrons to use
> > to
> > > bond with other atoms (it serves as a glue for holding large molecules
> > > together).  While Silicon also has 4 free electrons, it is much larger,
> > and
> > > doesn't hide away between the atoms it is holding together, it would get
> > in
> > > the way.  Anything that behaves like a carbon atom in all the same ways
> > > could serve as a replacement for the carbon atom, it wouldn't have to be
> > > carbon.  For example, lets say we discovered a new quark that could be
> > put
> > > together into a super proton with a positive charge of 3, and also it had
> > > the mass of 3 protons.  A nucleus made of two of these super protons and
> > six
> > > neutrons could not rightfully be called carbon, yet it would have the
> > same
> > > mass and chemical properties, and the same electron shells.  Do you think
> > it
> > > would be impossible to make a life form using these particles in place of
> > > carbon (assuming they behaved the same in all the right conditions) or is
> > > there something special about the identity of carbon?
>
> > > > > Could we not build an artificial retina which sent the right signals
> > down
> > > > the optic nerve and allow someone to see?
>
> > > > Sure, but it's still going to be a prosthetic antenna.
>
> > > No, it is more than an antenna.  The retina does processing.  I chose the
> > > retina example as opposed to replacing part of the optic nerve precisely
> > > because the retina is more than an antenna.
>
> > > > You can
> > > > replicate the physical inputs from the outside world but you can't
> > > > necessarily replicate the psychic outputs from the visual cortex to
> > > > the conscious Self.
>
> > > So the "psychic outputs" from the retina are reproducible, but not those
> > of
> > > the visual cortex?  Why not?  The idea of these psychic outputs sounds
> > > somewhat like substance dualism or vitalism.
>
> > > > It's no more reasonable than expecting the
> > > > fingernails on an artificial hand to continue to grow and need
> > > > clipping. We don't have the foggiest idea how to create a new primary
> > > > color from scratch.
>
> > > We have done this to monkeys already:
> >http://www.guardian.co.uk/science/2009/sep/16/colour-blindness-monkey...
> > > The interesting thing is that the brain was apparently able to
> > automatically
> > > adapt to the new signals received from the retina and process it for what
> > it
> > > was, a new primary color input.  It only took the brain five months or so
> > to
> > > rewire itself to process this new color.
>
> > > "It was as if they woke up and saw these new colours. The treated animals
> > > unquestionably responded to colours that had been invisible to them,"
> > said
> > > Jay Neitz, a co-author on the study at the University of Washington in
> > > Seattle.
>
> > > It is even thought that some small percentage of women see four primary
> > > colors:http://www.post-gazette.com/pg/06256/721190-114.stm
> > > It is not that they have different genes for processing colors
> > differently,
> > > they just have genes for a fourth type of light-sensitive cone, their
> > brain
> > > software adapts accordingly.  (Just as those with color blindness do not
> > > have defective brains)
>
> > > > IMO, until we can do that - one of the most
> > > > objective and simple examples of subjective experience, we have no
> > > > hope of even beginning to synthesize consciousness from inorganic
> > > > materials.
>
> > > I think it is wrong to say the subjective visual experience is simple.
> >  It
> > > seems simple to us, but it has gone through massive amounts of processing
> > > and filters before you are made aware of it.  Some 30% of the gray matter
> > in
> > > your brain is used to process visual data.
>
> > > Given that, I would argue we have already implemented consciousness in
> > > in-organic materials.  Consider that Google's self driving cars must
> > > discriminate between red and green street lights.  Is the self-driving
> > car
> > > not aware of the color the street light is?
>
> > > > >And brains are just gelatinous tissue with cells squirting juices back
> > and
> > > > >forth.  If you are going to use reductionism when talking about
> > computers,
> > > > >then to be fair you must apply the same reasoning when talking about
> > minds
> > > > >and brains.
>
> > > > Exactly. If we didn't know for a fact that our brain was hosting
> > > > consciousness through our first hand experience there would be
> > > > absolutely no way of suspecting that such a thing could exist.
>
> > > I am not as certain of that as you are.  Imagine some alien probe came
> > down
> > > to earth and observed apes pointing at a piece of red fruit up in a tree
> > > amongst many green leaves.  The probe might conclude that the ape was
> > > conscious of the fruit and has the awareness of different frequencies of
> > > light.  Then again, if you define consciousness out of the universe
> > > entirely, there would be no way we could suspect anything because there
> > > would be nothing we could do at all.
>
> > > > This is
> > > > what I'm saying about the private topology of the cosmos. We can't
> > > > access it directly because we are stuck in our own private topology.
>
> > > > So to apply this to computers and planes - yes they could have a
> > > > private topology, but judging from their lack of self-motivated
> > > > behaviors,
>
> > > Check out the program "Smart Sweepers" on this page:
> > > http://www.ai-junkie.com/ann/evolved/nnt1.html
> > > You will find software for evolving neural-network-based brains, which
> > > control the behavior of little robots searching a plane for food.  They
> > > start off completely dumb, most running around in circles, but after a
> > > few hundred generations they become quite competent, and after a few
> > > thousand I've even observed what could be described as social behavior:
> > > they all travel in the same direction and never turn around backwards
> > > if they miss a piece of food.  When I first saw this I was completely
> > > shocked; I would not have guessed this behavior would result, even
> > > though I understood how the program worked.  There is, however, an
> > > individual survival benefit to following the group movement rather than
> > > traveling against the grain: the bots clear out food in a wave, and
> > > going against the grain you would get lucky far less often than
> > > traveling with it.  (Note: press the "F" key to accelerate the
> > > evolution rather than animating it in its entirety when you get bored
> > > watching the performance of each generation.)
>
> > > The bots and their evolution are self-directed.  You will find no code
> > > in the source files indicating how to find food, or how, if they all
> > > travel in one pack, members of the group will individually benefit.
> > > This is a surprising result which the computer found; it was not
> > > programmed in, nor was the computer told to do it.  The evolved bots'
> > > movement in their search for food can also be said to be self-directed,
> > > as much as the movement of a bacterium or insect in its search for food
> > > is.  You might go so far as to say each bot IS conscious of the closest
> > > piece of food (that information is fed into the neural network of each
> > > bot).  Whether or not a computer exhibits self-motivated behaviors is a
> > > matter of its programming; you couldn't say your word processor is very
> > > self-directed, for example.
>
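A miniature version of the evolutionary setup described above can be sketched in a few dozen lines (hypothetical code, not the actual Smart Sweepers source): a genetic algorithm mutates the weight matrix of a trivial one-layer "brain" that steers a bot toward the nearest food, and elitism guarantees the best genome is never lost:

```python
import math
import random

random.seed(0)

# Fixed ring of "food" on the plane, so fitness is deterministic.
FOOD = [(math.cos(k), math.sin(k)) for k in range(12)]

def simulate(weights, steps=60):
    """Fitness = pieces of food one bot collects in `steps` moves.

    The bot's only sensor is the vector to the nearest remaining food;
    its 'brain' is a 2x2 weight matrix turning that reading into a move.
    """
    x = y = 0.0
    food = list(FOOD)
    for _ in range(steps):
        if not food:
            break
        fx, fy = min(food, key=lambda f: (f[0] - x) ** 2 + (f[1] - y) ** 2)
        dx, dy = fx - x, fy - y
        vx = weights[0] * dx + weights[1] * dy   # brain: linear map from
        vy = weights[2] * dx + weights[3] * dy   # sensor vector to velocity
        norm = math.hypot(vx, vy) or 1.0
        x += 0.15 * vx / norm
        y += 0.15 * vy / norm
        food = [f for f in food
                if (f[0] - x) ** 2 + (f[1] - y) ** 2 > 0.1 ** 2]
    return len(FOOD) - len(food)

def evolve(pop_size=30, generations=40):
    """Plain genetic algorithm: keep the top half, mutate it, repeat."""
    pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=simulate, reverse=True)[: pop_size // 2]
        children = [[w + random.gauss(0, 0.2) for w in p] for p in elite]
        pop = elite + children  # elitism: best genomes survive unchanged
    return max(pop, key=simulate)

best = evolve()
print("evolved bot collects", simulate(best), "of", len(FOOD), "pieces")
```

As in the quoted description, nothing in this sketch says how to find food; the weights that steer the bot are discovered by selection and mutation, not programmed in.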
> > > > it makes more sense to think of them in terms of purely
> > > > structural and electronic interiority rather than imagining that their
> > > > assembly into anthropological artifacts confers some kind of additional
> > > > subjectivity.
>
> > > > A living cell is more than the sum of its parts. A dead cell is made
> > > > of the same materials with the same organization as a living cell; it
> > > > just doesn't cohere as an integrated cell anymore, so lower-level
> > > > processes overwhelm the whole.
>
> > > I would say death, like unconsciousness, is the failure of higher-level
> > > processes.  When you stop breathing (a high-level process),
> > > mitochondria stop producing ATP, which causes most other reactions in
> > > cells to cease.  Likewise, anesthetic chemicals make very little
> > > difference to the operation of brain cells at the lower levels, but
> > > globally, nerve signals won't travel as far, and different brain
> > > regions become isolated from each other.  Thus the brain still looks
> > > like it is alive and functioning, but there is no consciousness.  You
> > > cannot look at a conscious computer at the level of the silicon chip;
> > > by far most of the complexity is in the memory of the computer.  If we
> > > were to talk about a computer program with the same complexity as the
> > > human brain, it might have many petabytes (10^15 bytes) worth of
> > > memory.  The processor, or processors, serve only to provide a basic
> > > ruleset for relating the various structures that exist in memory, just
> > > as the laws of physics do for our biological brains.  Compared to our
> > > brains, the laws of physics look very simple, just as the architecture
> > > of any computer's CPU looks very simple compared to the in-memory
> > > representation of a brain.  This is the mistake Searle made when he
> > > said that, as the rule-follower, he wouldn't understand a thing: there
> > > is very little complexity in the following of the rules.  The computer
> > > is more than just the CPU; it is the tape also.
>
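The CPU-versus-tape point can be made concrete with a toy example (hypothetical code, assuming nothing beyond the argument above): a complete "rule-follower" in the Turing-machine sense fits in a dozen lines, while everything the machine actually *does* lives in the transition table and tape it is fed.

```python
def run(rules, tape, state="start", pos=0, max_steps=10_000):
    """A complete 'rule-follower': read a symbol, look up one rule, act.

    All behavioral complexity lives in `rules` and `tape` (the memory);
    this loop itself understands nothing about what it is computing.
    """
    cells = dict(enumerate(tape))          # sparse tape, blank = "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# The 'program': a transition table that increments a binary number.
increment = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, carry onward
    ("carry", "0"): ("1", "L", "done"),    # 0 + carry -> 1, finished
    ("carry", "_"): ("1", "L", "done"),    # overflow: new leading 1
    ("done",  "0"): ("0", "L", "done"),    # walk back to the left edge
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run(increment, "1011"))  # binary 1011 + 1 -> 1100
```

The `run` loop never changes, whether the table increments a number or, in principle, encodes something brain-like; swapping the memory swaps the behavior, which is the sense in which the computer is the tape and not just the CPU.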
> > > ...
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
