On Thu, Aug 23, 2012 at 9:58 AM, [email protected]
<[email protected]> wrote:
>
> Some might say, better a known enemy. Anyway, why all this stress on
> self-modifying AI? Wouldn't it be easier & safer to design an AI that
> doesn't want to modify itself than to design one that's supposed to stay
> friendly despite ongoing self-modification?

The safest AI would be one that doesn't want anything. It would have
no goals and no motivations, no reward button and no utility function
to optimize. It would be a vastly intelligent tool, a collection of
all the world's knowledge and the computing power to do whatever you
want with it. Rather than think for itself, it would be an extension
of our own brains: a place to store your memories, communicate with
anyone on the planet, and do the work that you would do if you knew
more and thought faster. It would be collectively owned, controlled
not by any single person but by everyone who uses it. It would be the
AI that we are actually building; the one in front of you that has
already surpassed human-level intelligence in all but a few domains as
it doubles in size every 1.5 years.
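
As a rough illustration of the stated doubling rate (the 1.5-year
period is the figure from the paragraph above; the function name and
example horizons are mine), capacity after t years scales as
2^(t / 1.5):

```python
# Sketch of the compound growth implied by a fixed doubling period.
# The 1.5-year period comes from the text; everything else here is
# illustrative.
def growth_factor(years, doubling_period=1.5):
    """Multiplicative growth after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(growth_factor(1.5))   # one doubling period -> 2.0
print(growth_factor(15))    # fifteen years = ten doublings -> 1024.0
```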


-- Matt Mahoney, [email protected]

