--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> Create a numeric "pleasure" variable in your mind, initialize it with
> a positive number and then keep doubling it for some time. Done? How
> do you feel? Not a big difference? Oh, keep doubling! ;-))
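
Taken literally, that exercise is just the following (a toy C++ sketch of my
own, not anything from autobliss.cpp):

  #include <cstdio>
  int main() {
    double pleasure = 1.0;           // "initialize it with a positive number"
    for (int i = 0; i < 100; ++i) {
      pleasure *= 2;                 // "keep doubling it for some time"
      std::printf("pleasure = %g\n", pleasure);
    }
    return 0;                        // done; feel any different?
  }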

The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
can somehow, through technology, AGI, and uploading, escape a world where we
are not happy all the time, where we sometimes feel pain, and where we fear
death and then die.  Obviously my result is absurd.  But where is the mistake
in my reasoning?  Is it the premise "if the brain is both conscious and
computable"?


> 
> Regards,
> Jiri Jelinek
> 
> On Nov 3, 2007 10:01 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> > > If bliss without intelligence is the goal of the machines you imagine
> > > running the world, for the cost of supporting one human they could
> > > probably keep at least 100 mice in equal bliss, so if they were driven to
> > > maximize bliss, why wouldn't they kill all the grooving humans and replace
> > > them with grooving mice?  It would provide one hell of a lot more bliss
> > > bang for the resource buck.
> >
> > Allow me to offer a less expensive approach.  Previously on the singularity
> > and sl4 mailing lists I posted a program that can feel pleasure and pain: a
> > 2-input programmable logic gate trained by reinforcement learning.  You give
> > it an input, it responds, and you reward it.  In my latest version, I
> > automated the process.  You tell it which of the 16 logic functions you want
> > it to learn (AND, OR, XOR, NAND, etc.), how much reward to apply for a
> > correct output, and how much penalty for an incorrect output.  The program
> > then generates random 2-bit inputs, evaluates the output, and applies the
> > specified reward or punishment.  The program runs until you kill it.  As it
> > dies it reports its life history (its age, what it learned, and how much
> > pain and pleasure it experienced since birth).
> >
> > http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
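
For anyone who doesn't want to fetch the file, the behavior described above
amounts to roughly the sketch below.  This is a reconstruction, not the posted
source; the weight update, the pain limit, and the report format are
illustrative guesses.

  // Sketch of the described mechanism (not the actual autobliss.cpp source).
  #include <cstdio>
  #include <cstdlib>
  #include <cstring>
  #include <csignal>
  #include <ctime>

  static volatile std::sig_atomic_t killed = 0;
  static void on_kill(int) { killed = 1; }  // report life history on Ctrl-C

  int main(int argc, char** argv) {
    if (argc != 4 || std::strlen(argv[1]) != 4) {
      std::fprintf(stderr,
        "usage: autobliss <4-bit truth table> <reward> <penalty>\n");
      return 1;
    }
    const char* table = argv[1];           // e.g. "0110" = XOR
    double reward  = std::atof(argv[2]);   // applied when the output is correct
    double penalty = std::atof(argv[3]);   // applied when the output is wrong
    double w[4] = {0, 0, 0, 0};            // one weight per 2-bit input pattern
    double pleasure = 0, pain = 0;         // totals accumulated since "birth"
    long age = 0;
    std::signal(SIGINT, on_kill);
    std::srand((unsigned)std::time(0));
    while (!killed) {
      int x = std::rand() % 4;             // random 2-bit input: 00, 01, 10, 11
      int out = (w[x] >= 0);               // act according to the learned weight
      double r = (out == table[x] - '0') ? reward : penalty;
      w[x] += (out ? r : -r);              // reinforce the action just taken
      if (r >= 0) pleasure += r; else pain -= r;
      ++age;
      if (pain > 1e6) break;               // safeguard: die before too much pain
    }
    std::printf("age %ld  learned %d%d%d%d  pleasure %g  pain %g\n",
                age, w[0] >= 0, w[1] >= 0, w[2] >= 0, w[3] >= 0,
                pleasure, pain);
    return 0;
  }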
> >
> > To put the program in an eternal state of bliss, specify two positive
> > numbers, so that it is rewarded no matter what it does.  It won't learn
> > anything, but at least it will feel good.  (You could also put it in
> > continuous pain by specifying two negative numbers, but I put in safeguards
> > so that it will die before experiencing too much pain).
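
With the sketch above, those two cases would look something like this
(same argument order as the real program: truth table, reward, penalty):

   autobliss 0110 5.0 5.0      bliss: rewarded either way, learns nothing
   autobliss 0110 -5.0 -5.0    pain: punished either way, dies at the safeguard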
> >
> > Two problems remain: uploading your mind to this program, and making sure
> > nobody kills you by turning off the computer or typing Ctrl-C.  I will
> > address only the first problem.
> >
> > It is controversial whether technology can preserve your consciousness after
> > death.  If the brain is both conscious and computable, then Chalmers' fading
> > qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
> > computer simulation of your brain would also be conscious.
> >
> > Whether you *become* this simulation is also controversial.  Logically
> > there are two of you with identical goals and memories.  If either one is
> > killed, then you are in the same state as you were before the copy was
> > made.  This is the same dilemma that Captain Kirk faces when he steps into
> > the transporter to be vaporized and have an identical copy assembled on the
> > planet below.  It doesn't seem to bother him.  Does it bother you that the
> > atoms in your body now are not the same atoms that made up your body a
> > year ago?
> >
> > Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
> > this goal; they just don't know it).  The problem is that you would forgo
> > food, water, and sleep until you died (we assume, from animal experiments).
> > The solution is to upload to a computer where this could be done safely.
> >
> > Normally an upload would have the same goals, memories, and sensory-motor
> > I/O as the original brain.  But consider the state of this program after
> > self-activation of its reward signal.  No other goals are needed, so we can
> > remove them.  Since you no longer have the goal of learning, experiencing
> > sensory input, or controlling your environment, you won't mind if we
> > replace your I/O with a 2-bit input and 1-bit output.  You are happy, no?
> >
> > Finally, if your memories were changed, you would not be aware of it,
> > right?  How do you know that all of your memories were not written into
> > your brain one second ago and you were some other person before that?  So
> > no harm is done if we replace your memory with a vector of 4 real numbers.
> > That will be all you need in your new environment.  In fact, you won't even
> > need that because you will cease learning.
> >
> > So we can dispense with the complex steps of making a detailed copy of
> > your brain and then having it transition into a degenerate state, and just
> > skip to the final result.
> >
> > Step 1. Download, compile, and run autobliss 1.0 in a secure location
> > with any 4-bit logic function and positive reinforcement for both right
> > and wrong answers, e.g.
> >
> >   g++ autobliss.cpp -o autobliss.exe
> >   autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)
> >
> > Step 2. Kill yourself.  Upload complete.
> >
> >
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]
