>> AGIs (at least those that could run on current computers) cannot
>> really get excited about anything. It's like when you represent the
>> pain intensity with a number. No matter how high the number goes,
>> it doesn't really hurt. Real feelings - that's the key difference
>> between us and them and the reason why they cannot figure out on
>> their own that they would rather do something else than what they
>> were asked to do.

Mark> So what's the difference in your hardware that makes you have
Mark> real pain and real feelings?  Are you *absolutely positive* that
Mark> "real pain and real feelings" aren't an emergent phenomenon of
Mark> sufficiently complicated and complex feedback loops?  Are you
Mark> *really sure* that a sufficiently sophisticated AGI won't
Mark> experience pain?

Mark> I think that I can guarantee (as in, I'd be willing to bet a
Mark> pretty large sum of money) that a sufficiently sophisticated AGI
Mark> will act as if it experiences pain . . . . and if it acts that
Mark> way, maybe we should just assume that it is true.

If you accept the proposition (for which Turing gave compelling
arguments) that a computer with the right program could simulate the
workings of your brain in detail, then it follows that your feelings
are identifiable with some aspect or portion of the computation.

I claim that if feelings are identified with the decision-making
computations of a top-level module (which might reasonably be called
a homunculus), everything is concisely explained. What you are then
*unaware* of is all the many and varied computations done in
subroutines from which the decision-making module is isolated by an
abstraction boundary (this is by far most of the computation), as
well as most internal computations of the decision-making module
itself (which it is no more programmed to report than my laptop can
report its internal transistor voltages). What you feel and can
report, and the qualitative nature of your sensations, is then
determined by the code being run as it makes decisions. I claim that
the subjective nature of every feeling is very naturally explained
in this context.
Pain, for example, is the weighing of programmed-in negative
reinforcement. (How could you possibly modify the sensation of pain
to make it any clearer that it is negative reinforcement?) What is
Thought? ch. 14 goes through about 10 sensations that a philosopher
had claimed were not plausibly explainable by a computational model,
and argues that each has exactly the nature you'd expect evolution
to program in.
You then can't have a "zombie" that behaves the way you do but
doesn't have sensations, since to behave as you do it has to make
decisions, and it is precisely the decision-making computation that
is identified with sensation. (Computations that work better
preprogrammed because they require no decision, such as pulling away
from a hot stove or driving the usual route home for the thousandth
time, are dispatched to subroutines and are unconscious.)
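As a toy caricature (my illustration, not anything from What is
Thought? -- all names here are invented), the architecture sketched
above can be put in a few lines of Python: a top-level decision
module weighs programmed-in reinforcement values, while routine
responses are dispatched to a preprogrammed subroutine that involves
no weighing at all.

```python
# Toy caricature of the "homunculus" picture: the top-level decision
# module weighs programmed-in reinforcement; reflexes bypass it.

REINFORCEMENT = {          # programmed in (by "evolution")
    "touch_stove": -100,   # strongly negative: felt as pain
    "eat_food": 10,
    "rest": 1,
}

def reflex(stimulus):
    """Preprogrammed subroutine: no decision is made, so on this
    picture nothing here is conscious."""
    if stimulus == "hot_surface":
        return "withdraw_hand"
    return None

def decide(options):
    """Top-level decision module: the weighing of reinforcement
    values here is what the post identifies with sensation."""
    return max(options, key=lambda action: REINFORCEMENT[action])

print(reflex("hot_surface"))                        # withdraw_hand
print(decide(["touch_stove", "eat_food", "rest"]))  # eat_food
```

The point of the caricature: nothing inside reflex() is ever
reported to the decision module, mirroring the abstraction boundary
described above.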

This picture is subject to empirical test through psychophysics
(and also, increasingly, as we come to understand the genetic
programming that builds much of this code).
A good example is Ramachandran's amputee experiment. Amputees
frequently feel pain in their phantom (missing) limb. They can feel
themselves clenching their phantom hand so hard that their phantom
fingernails gouge their phantom palm, causing intense real pain.
Ramachandran predicted that this was caused by the mind sending a
"relax" signal to the phantom hand, getting no feedback, assuming
that the hand had not relaxed, and inferring that pain should be
felt (including computing the details of its nature).
He predicted that if he provided feedback telling the mind that
relaxation had occurred, the pain would go away. He then provided
that feedback through a mirror device in which patients could place
both the real and phantom limbs, relax both simultaneously, and get
visual feedback (in the mirror) that the phantom limb had relaxed.
Instantly the pain vanished, confirming the prediction that the pain
was purely computational.
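The inferential loop attributed to the mind here can be caricatured
in a couple of lines (my own hypothetical sketch, not Ramachandran's
model): a "relax" command is issued, and in the absence of confirming
feedback the mind concludes the hand is still clenched and that pain
should be felt; supplying the feedback (as the mirror does, visually)
removes the inference.

```python
def phantom_pain(relax_feedback: bool) -> bool:
    """Toy model: the mind sends 'relax' to the (phantom) hand.
    With no confirming feedback it assumes the hand stayed
    clenched and computes that pain should be felt; feedback
    cancels that inference."""
    return not relax_feedback  # pain iff no confirmation arrives

print(phantom_pain(relax_feedback=False))  # True  (phantom pain)
print(phantom_pain(relax_feedback=True))   # False (pain vanishes)
```

The mirror device, on this reading, simply flips the input from
False to True.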

-----
This list is sponsored by AGIRI: http://www.agiri.org/email