Mark,

If Google came along and offered you $10 million for your AGI, would you
give it to them?

No, I would sell services.

How about the Russian mob for $1M and your life and the
lives of your family?

How about the FBI? No? So maybe sell him a messed-up version for $2M
and then hire a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design an AGI,
you can likely figure out how to deal with such a guy. ;-)

Or, what if your advisor tells you that unless you upgrade him so that he
can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.

I would just let the system explain what actions it would then take.

I suggest preventing potential harm by making Friendliness the AGI's
top-level goal
(and unlike most, I actually have a reasonably implementable idea of what is
meant by that).

Tell us about it. :)

A sufficiently sophisticated AGI will act as if it experiences pain.

So could such an AGI then be forced by "torture" to break rules it
otherwise would not "want" to break? Can you give me an example of
something that would cause the "pain"? What do you think the AGI will
do when in extreme pain? BTW, it's just a bad design from my
perspective.

I don't see your point unless you're arguing that there is something
special about using chemicals for global environment settings rather
than some other method (in which case I
would ask "What is that something special and why is it special?").

The two points I was trying to make:
1) A sophisticated general intelligence system can work fine without the
ability to feel pain.
2) The von Neumann architecture lacks the components known to support the
sensation of pain.

Regards,
Jiri Jelinek
