Yes, and I don't see how the hole could easily be fixed.
Of course, the fact that this proof doesn't work means that
we still don't know whether or not friendliness can be proven
for very powerful AIs. I still suspect that it can't.
One of the main problems I found in trying to come up with
a negative proof was that it was difficult to define friendliness in
a way that I could work with. You need to somehow separate
out what the AGI has the power to do from what the AGI
actually does. That seems a bit hairy to me.
I still haven't given up on a negative proof; I've got a few more
ideas brewing. I'd also like to encourage anybody else to try
to come up with a negative proof, or, for that matter, a positive
one if you think that's more likely.
Shane
This list is sponsored by AGIRI: http://www.agiri.org/email
