--- Panu Horsmalahti <[EMAIL PROTECTED]> wrote:

> 2007/9/10, Matt Mahoney <[EMAIL PROTECTED]>:
> 
> > - Human belief in consciousness and subjective experience is universal and
> > accepted without question.
> 
> 
> It isn't.

I am glad you spotted the flaw in these statements.

> 
> > Any belief programmed into the brain through natural selection must be
> > true in any logical system that the human mind can comprehend.
> 
> 
> 1. Provide evidence that any belief at all is "programmed into the brain
> through natural selection"
> 2. Provide evidence for the claim that these supposed beliefs "must be true
> in any logical system that the human mind can comprehend."
> 
> I don't think natural selection has had enough time to program any beliefs
> about consciousness into our brains, as philosophical discussion of these
> issues has been around for only a couple of thousand years. Also, disbelief
> in consciousness doesn't mean that the individual suddenly stops reproducing
> or kills himself (I remember you claiming this, though I might be wrong).

Disagreements over the existence of consciousness often center on the
definition.  One definition is that consciousness is that which distinguishes
the human mind from that of animals and machines.  This definition has
difficulties.  Isn't a dog more conscious than a worm?  Are babies conscious? 
If so, at what point after conception?

I prefer to define consciousness as that which distinguishes humans from
p-zombies (see http://en.wikipedia.org/wiki/Philosophical_zombie).  For
example, if you poke a p-zombie with a sharp object, it will not
experience pain, although it will react just like a human.  It will say
"ouch", avoid behaviors that cause pain, and claim that it really does feel
pain, just like any human.  There is no test to distinguish a conscious human
from a p-zombie.

In this sense, belief in consciousness (but not consciousness itself) is
testable, even in animals.  An animal cannot say "I exist", but it will change
its behavior to avoid pain, evidence that it appears to believe that pain is
real.  You might not agree that learning by negative reinforcement is the same
as a belief in one's own consciousness, but consider all the ways in which a
human might not change his behavior in response to pain, e.g. coma,
anesthesia, distraction, enlightenment, etc.  Would you say that such a person
still experiences pain?
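
To make "learning by negative reinforcement" concrete, here is a toy sketch
in Python.  It is my own illustration, not anything from the earlier posts:
the actions, weights, and update rule are all invented.  An agent starts out
indifferent between poking a sharp object and staying away, and ends up
avoiding the action that causes pain.

import random

random.seed(1)
preference = {"touch_sharp_object": 0.5, "stay_away": 0.5}  # action weights

def choose(prefs):
    # Pick an action with probability proportional to its current weight.
    r = random.random() * sum(prefs.values())
    for action, weight in prefs.items():
        r -= weight
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

for trial in range(200):
    action = choose(preference)
    if action == "touch_sharp_object":            # poking hurts
        # Negative reinforcement: weaken whatever preference led to pain.
        preference[action] = max(0.01, preference[action] * 0.9)

print(preference)  # the painful action ends up strongly avoided

Whether you call the resulting avoidance a belief that pain is real is
exactly the point in dispute, but the avoidance itself is observable.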

I assume you agree that animals which react to stimuli as if they were real
have a selective advantage over those that do not.  Likewise, evolution favors
animals that retain memory, that seek knowledge through exploration (and so
appear to have free will), and that fear death.  These are all traits we
associate with consciousness in humans.

> Matt, you have frequently 'hijacked' threads about consciousness with these
> claims, so maybe you could tell us reasons to believe in them?

The question has important implications for the direction a singularity will
take.  Recursive self-improvement is a genetic algorithm that favors rapid
reproduction and acquisition of computing resources.  It does not favor
immortality, friendliness (whatever that means), or high fidelity of uploads. 
Humans, on the other hand, are motivated to upload by fear of death and the
belief that their consciousness depends on the preservation of their memories.
  How will human uploads driven by these goals fare in a competitive computing
environment?
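
To illustrate what I mean by a genetic algorithm favoring rapid reproduction
and resource acquisition, here is a toy selection loop in Python.  It is only
a sketch under made-up assumptions (the agent traits, the fitness rule, and
every number are invented for illustration): when resources are contested,
traits that speed up copying and grab more resources spread, while a trait
like copy fidelity that only costs resources dies out.

import random

random.seed(0)
POOL = 1000          # compute units contested each generation
GENERATIONS = 50

# Each agent is a tuple (replication_rate, acquisitiveness, copy_fidelity),
# all in [0, 1].
agents = [(random.random(), random.random(), random.random())
          for _ in range(20)]

for gen in range(GENERATIONS):
    total_grab = sum(a[1] for a in agents) or 1.0
    offspring = []
    for rep, grab, fidelity in agents:
        share = POOL * grab / total_grab          # resources won this round
        # Children scale with replication rate and resources won; note that
        # high-fidelity copying is modeled as pure overhead here.
        children = int(share * rep / (10.0 * (1.0 + fidelity)))
        for _ in range(children):
            offspring.append(tuple(
                min(1.0, max(0.0, trait + random.gauss(0, 0.05)))
                for trait in (rep, grab, fidelity)))
    # Bounded population: only the most acquisitive survive to compete again.
    agents = sorted(offspring, key=lambda a: a[1], reverse=True)[:20] or agents

means = [sum(a[i] for a in agents) / len(agents) for i in range(3)]
print("replication=%.2f acquisition=%.2f fidelity=%.2f" % tuple(means))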


-- Matt Mahoney, [EMAIL PROTECTED]
