On Mon, Jan 19, 2015 at 10:08 PM, Tim Tyler via AGI <[email protected]> wrote:
> On 19/01/2015 14:47, Matt Mahoney via AGI wrote:
>>
>> On Sat, Jan 17, 2015 at 6:37 PM, Tim Tyler via AGI <[email protected]>
>> wrote:
>>>>
>>>> de Grey is arguing against the scenario where a recursively
>>>> self-improving AI in a box goes FOOM!
>>>
>>> It seems like a straw man scenario. Has anyone seriously proposed it?
>>
>> There are some old proposals, for example Corwin's experiments on
>> containing AI in 2002.
>> http://www.sl4.org/archive/0207/4935.html
>>
>> Yudkowsky's Coherent Extrapolated Volition in 2004.
>> https://intelligence.org/files/CEV.pdf
>
>
> Aren't those just imagined attempts to contain superintelligences?
> I don't really see the idea that the machine gets smarter and smarter
> inside the box just by thinking about things.

It is impossible, of course. I argued this on the SL4 list when it was
still active and wrote a paper proving that recursive self-improvement
won't work. http://mattmahoney.net/rsi.pdf

Yudkowsky criticized an earlier version of my paper in which I equated
intelligence with knowledge. He correctly pointed out that AIXI is
infinitely intelligent even though it starts with zero knowledge. In
the version I linked, I rewrote the ending to equate intelligence with
knowledge + computing power, but it doesn't change the result.

In the paper, it looks like I prove the opposite because I actually
write an unbounded self-improving program. After each self-rewrite,
there is an interval in which it achieves higher utility than the
previous version, and no interval in which it achieves lower utility.
The trick is that to measure utility after n iterations, you have to
specify n, which itself provides log n bits of input to the AI, so the
apparent gain in knowledge is smuggled in from outside. In theory it
would work on a Turing machine. In practice, you would have to use all
of the computing power of the universe to gain about 400 bits of
knowledge.
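The 400-bit figure can be checked with a quick back-of-the-envelope
calculation. This sketch assumes Seth Lloyd's order-of-magnitude bound of
about 10^120 elementary operations for the observable universe, which is my
assumption here, not something stated above:

```python
import math

# Assumption (not from the original message): Lloyd's estimate that the
# observable universe can perform at most ~10^120 elementary operations.
UNIVERSE_OPS = 10 ** 120

# Specifying n iterations of self-improvement supplies about log2(n) bits
# of external input, so even exhausting the universe's computation
# corresponds to only ~400 bits of knowledge gained this way.
bits = math.log2(UNIVERSE_OPS)
print(round(bits))  # 399
```

That is, log2(10^120) = 120 * log2(10) ≈ 398.6, which is where the rough
"400 bits" comes from.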

> IMO, the way it is likely to go is that we will figure out how to back up
> and simulate human minds - after we develop artificial intelligence.

That would seem to make more sense, but I'm not so sure. Humans are
highly optimized to reproduce, fear death, and then die. We could
redesign ourselves with any goals we wanted. But whatever we design
will have to compete with humans, and evolution will ultimately pick
the winners.

What goals should our uploads have?


-- 
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424