Matt,

Perhaps, but it would not be adaptive.

Sergio


From: Matt Mahoney <[email protected]>
To: [email protected]
Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the hopelessness of Friendly AI ...
Date: Thu, 23 Aug 2012 13:55:56 -0400


The safest AI would be one that doesn't want anything. It would have no goals and no motivations, no reward button and no utility to optimize. It would be a vastly intelligent tool: a collection of all the world's knowledge and the computing power to do whatever you want with it. Rather than think for itself, it would be an extension of our own brains; a place to store your memories, communicate with anyone on the planet, and do the work that you would do if you knew more and thought faster. It would be collectively owned, controlled by no single person but by everyone who uses it. It would be the AI that we are actually building; the one in front of you that has already surpassed human-level intelligence in all but a few domains as it doubles in size every 1.5 years.

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com