I haven't kept up with this thread.  But I wanted to counter the idea of a 
simple ordering of painfulness.

A simple ordering of painfulness is one way to think about pain that
might work in some simple systems, where resources are allocated
serially, but it may not work in systems where resource allocation
choices are neither serial nor mutually exclusive.

If our system has a heterarchy of goal-accomplishing resources--some of
which imply others and some of which exclude others--then a simple
ordering of painfulness may not be useful for thinking about this kind
of resource allocation, as the sketch below illustrates.
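
As a toy illustration (my own made-up numbers, nothing from the
thread): rank three resources a < b < c by standalone painfulness, but
let a and b conflict when used together.  The per-resource ordering
then picks the wrong bundle:

  # Toy example: a total ordering of standalone pain costs can misrank
  # composite allocations when resources interact (hypothetical numbers).
  PAIN = {"a": 1.0, "b": 2.0, "c": 3.0}        # standalone: a < b < c

  # Interaction: using a and b together incurs an extra conflict cost.
  CONFLICT = {frozenset(["a", "b"]): 5.0}

  def bundle_pain(bundle):
      """Pain of a set of resources: standalone costs plus interactions."""
      total = sum(PAIN[r] for r in bundle)
      for pair, extra in CONFLICT.items():
          if pair <= bundle:
              total += extra
      return total

  # The simple ordering prefers {a, b} (the two cheapest resources)...
  print(bundle_pain({"a", "b"}))   # 1 + 2 + 5 = 8.0
  print(bundle_pain({"a", "c"}))   # 1 + 3     = 4.0
  # ...but the interaction reverses the preference, so no per-resource
  # ordering decides the allocation by itself.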

--
Bo

On Sat, 16 Jun 2007, Matt Mahoney wrote:

) 
) --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
) 
) > Eric,
) > 
) > I'm not 100% sure that anyone/anything other than me feels pain, but
) > the considerable similarities between my own and other humans'
) > 
) > - architecture
) > - [triggers of] internal and external pain-related responses
) > - independent descriptions of subjective pain perceptions, which
) > correspond in certain ways with the internal body responses
) > 
) > make me think it's more likely than not that other humans feel pain
) > the way I do.
) 
) There is a simple proof for the existence of pain.  Define pain as a signal
) that an intelligent system has the goal of avoiding.  By the equivalence:
) 
)   (P => Q) = (not Q => not P)
) 
) if you didn't believe the pain was real, you would not try to avoid it.
) 
) (OK, that is "proof by belief".  I omitted the step (you believe X) =>
) (X is true).  If you believe it is true, that is good enough.)
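) 
) The equivalence itself is easy to check by enumerating truth values; a
) minimal sketch in Python:
) 
)   # Verify (P => Q) == (not Q => not P) for every truth assignment.
)   def implies(a, b):
)       return (not a) or b
) 
)   for P in (False, True):
)       for Q in (False, True):
)           assert implies(P, Q) == implies(not Q, not P)
)   print("contrapositive equivalence holds")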
) 
) > The further you move from a human-like architecture, the less you see
) > signs of pain-related behavior (e.g. avoidance behavior). An insect
) > keeps trying to use badly injured body parts the same way as if they
) > weren't injured, and (unlike in mammals) its internal responses to
) > the injury don't suggest that anything crazy is going on with it. And
) > when I look at software, I cannot find a good reason for believing it
) > can be in pain. The fact that we can use painkillers (and other
) > techniques) to get rid of pain and still remain complex systems
) > capable of general problem solving suggests that the pain quale takes
) > more than the complex problem-solving algorithms we are writing for
) > our AGI.
) 
) Pain is clearly measurable.  It obeys a strict ordering: if you prefer
) penalty A to B and B to C, then you will prefer A to C.  You can
) estimate, e.g., that B is twice as painful as A, and choose A twice
) over B once.  In AIXI, the reinforcement signal is a numeric quantity.
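) 
) A minimal sketch of that arithmetic (made-up penalty values):
) 
)   # Numeric penalties (hypothetical values) give a transitive ordering
)   # and let cumulative pains be compared directly.
)   penalty = {"A": 1.0, "B": 2.0, "C": 4.0}   # B is twice as painful as A
) 
)   def prefer(x, y):
)       # Prefer whichever penalty is smaller.
)       return penalty[x] < penalty[y]
) 
)   assert prefer("A", "B") and prefer("B", "C") and prefer("A", "C")
) 
)   # Two doses of A cost the same as one dose of B:
)   print(2 * penalty["A"], "vs", penalty["B"])   # 2.0 vs 2.0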
) 
) But how should pain be measured?
) 
) Pain results in a change in the behavior of an intelligent system.  If a
) system responds Y = f(X) to input X, followed by negative reinforcement, then
) the function f is changed to output Y with lower probability given input X. 
) The magnitude of this change is measurable in bits.  Let f be the function
) prior to negative reinforcement and f' be the function afterwards.  Then
) define
) 
)   dK(f) = K(f'|f) = K(f, f') - K(f)
) 
) where K() is algorithmic (Kolmogorov) complexity; the second equality
) holds up to an additive logarithmic term.  Then dK(f) is the number of
) bits needed to describe the change from f to f'.
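) 
) K() is uncomputable, but a real compressor gives a computable upper
) bound.  A minimal sketch (my choice of zlib and of string-encoded
) behavior tables is just an illustration):
) 
)   import zlib
) 
)   def C(s: bytes) -> int:
)       # Compressed length in bytes: a crude, computable stand-in for K().
)       return len(zlib.compress(s, 9))
) 
)   def dK_bits(f_desc: bytes, f2_desc: bytes) -> int:
)       # Approximate K(f'|f) ~ K(f, f') - K(f), converted to bits.
)       return 8 * (C(f_desc + f2_desc) - C(f_desc))
) 
)   # Hypothetical behavior tables before and after reinforcement:
)   before = b"00:1 01:1 10:1 11:1"
)   after  = b"00:1 01:1 10:1 11:0"   # one response suppressed
)   print(dK_bits(before, after), "bits of behavioral change (approx.)")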
) 
) Arguments for:
) - Greater pain results in a greater change in behavior (consistent with animal
) experiments).
) - Greater intelligence implies greater possible pain (consistent with the
) belief that people feel more pain than insects or machines).
) 
) Argument against:
) - dK makes no distinction between negative and positive reinforcement, or
) neutral methods such as supervised learning or classical conditioning.
) 
) I don't know how to address this argument.  Earlier I posted a program that
) simulates a programmable logic gate that you train using reinforcement
) learning.  Note that you can achieve the same state using either positive or
) negative reinforcement, or by a neutral method such as setting the weights
) directly.
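) 
) Roughly, the idea looks like this (a minimal sketch, not the exact
) program I posted; the weights, learning rule, and rate are arbitrary):
) 
)   import random
) 
)   class Gate:
)       # A trainable 2-input gate: one weight per input pattern, giving
)       # the probability of outputting 1 for that pattern.
)       def __init__(self):
)           self.w = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}
) 
)       def out(self, a, b):
)           return 1 if random.random() < self.w[(a, b)] else 0
) 
)       def reinforce(self, a, b, y, reward, rate=0.1):
)           # Positive reward makes output y more likely on input (a, b);
)           # negative reward (pain) makes it less likely.
)           delta = rate * reward if y == 1 else -rate * reward
)           self.w[(a, b)] = min(1.0, max(0.0, self.w[(a, b)] + delta))
) 
)   # Train toward AND by rewarding right answers and punishing wrong ones...
)   g = Gate()
)   for _ in range(2000):
)       a, b = random.randint(0, 1), random.randint(0, 1)
)       y = g.out(a, b)
)       g.reinforce(a, b, y, 1.0 if y == (a & b) else -1.0)
) 
)   # ...or reach (approximately) the same state by setting weights directly:
)   g2 = Gate()
)   g2.w = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}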
) 
) 
) -- Matt Mahoney, [EMAIL PROTECTED]
) 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e
