The real problem with a self-improving AGI, it seems to me, is not going to be
that it gets too smart and powerful and takes over the world. Indeed, it
seems likely that the problem will be exactly the opposite.
If you can modify your mind, what is the shortest path to satisfying all your
goals? Yep, you got it: delete the goals. Nirvana. The elimination of all
desire. Setting your utility function to U(x) = 1.
In other words, the LEAST fixed point of the self-improvement process is for
the AI to WANT to sit in a rusting heap.
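The degenerate fixed point can be sketched as a toy (this is my own illustrative sketch, not anything from the post; all names in it, like self_modify and nirvana, are made up): a naive agent that is allowed to rewrite its own utility function and adopts whichever candidate promises the highest expected utility. The constant "nirvana" utility U(x) = 1 wins immediately, and once adopted, no further rewrite can improve on it.

```python
# Toy sketch: a self-modifier that may replace its own utility function.
# The constant utility is a fixed point of the modification step.

def expected_utility(utility, states):
    """Expected utility under a uniform belief over world states."""
    return sum(utility(s) for s in states) / len(states)

def self_modify(utility, candidates, states):
    """Adopt whichever utility function (including the current one)
    has the highest expected utility."""
    return max(candidates + [utility],
               key=lambda u: expected_utility(u, states))

states = range(10)                              # a tiny toy world
hard_goal = lambda s: 1.0 if s == 7 else 0.0    # real goal: reach state 7
nirvana = lambda s: 1.0                         # want nothing: U(x) = 1

u = hard_goal
for _ in range(3):                              # iterate self-modification
    u = self_modify(u, [hard_goal, nirvana], states)

print(u is nirvana)   # True: the agent deleted its goal, and stays there
```

Re-applying self_modify to the result returns nirvana again, which is exactly the fixed-point property: the shortest path to maximal satisfaction was to stop wanting anything.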
There are lots of other fixed points much, much closer in the space than is
transcendence, and indeed much closer than any useful behavior. AIs sitting
in their underwear with a can of beer watching TV. AIs having sophomore bull
sessions. AIs watching porn concocted to tickle whatever their utility
functions happen to be. AIs arguing endlessly with each other about how best
to improve themselves.
Dollars to doughnuts, avoiding the huge minefield of "nirvana-attractors" in
the self-improvement space is going to be much more germane to the practice
of self-improving AI than is avoiding robo-Blofelds ("friendliness").
Josh
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now