No, no! That came out as exactly the opposite of what I was trying to say, sorry. I 
was trying to convey that something is missing from today's machines, something 
that keeps them from doing what people do: the ability to deal with the 
accumulation of entropy. 

 

Entropy comes with information: the more information you bring into a system, 
the more entropy you bring in with it. But entropy is uncertainty, so the excess 
entropy has to be removed. Removing entropy causes information to self-organize 
and create the structures, the invariant representations, that we use to think 
and communicate. Our brains are very good at removing entropy. 
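(An aside to make "entropy = uncertainty" concrete: the identification here is just Shannon's. A minimal Python sketch, with example strings invented purely for illustration:)

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy, in bits per symbol, of the empirical distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly organized (repetitive) stream carries little uncertainty ...
low = shannon_entropy("aaaaaaab")
# ... while a stream using eight symbols uniformly carries the maximum
# for that alphabet: log2(8) = 3 bits per symbol.
high = shannon_entropy("abcdefgh")
print(low, high)
```

More distinct, equally likely possibilities means more uncertainty about the next symbol; removing entropy means collapsing that spread into structure.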

 

But consider a computer. As developers write more and more code into it, more 
and more entropy accumulates. If you are a developer you know what refactoring 
is: refactoring is the process of removing entropy from code. Developers say 
they are "improving the understandability", which means removing uncertainty, 
that is, entropy. 
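(Another aside, a toy example of my own rather than anything from a paper: under this framing, duplicated code leaves a reader uncertain whether the copies are meant to agree, and the refactor removes exactly that uncertainty while preserving behavior.)

```python
def totals_before(prices, tax):
    # The 10% discount rule is written out twice; a reader (or a machine)
    # cannot be sure the two copies are meant to stay in sync.
    net = [p * (1 - 0.1) for p in prices]
    gross = [p * (1 - 0.1) * (1 + tax) for p in prices]
    return net, gross

def totals_after(prices, tax):
    def discounted(p):
        # Single source of truth for the discount rule: the uncertainty
        # about whether the copies agree is gone.
        return p * (1 - 0.1)
    net = [discounted(p) for p in prices]
    gross = [discounted(p) * (1 + tax) for p in prices]
    return net, gross

# The refactor preserves behavior:
assert totals_before([100.0, 40.0], 0.2) == totals_after([100.0, 40.0], 0.2)
```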

 

Google needs an entropy processor. Google's patents on entropy processors cover 
only the compression of video signals; they are not general-purpose enough for 
general information. You can see my paper in Complexity 17(2): 19–38 for the 
basics, or my website <http://www.scicontrols.com/> for updates. 

 

Sergio

 

From: [email protected] [mailto:[email protected]] 
Sent: Thursday, August 23, 2012 4:17 PM
To: AGI
Subject: RE: [agi] Hugo de Garis on the Singhilarity Institute and the 
hopelessness of Friendly AI ...

 

I see humans themselves as machines of a sort, so I don't see a fundamental 
distinction between what a person can do and what a machine can do. But you are 
entitled to your beliefs, even if I remain unconvinced.



  _____  

On Aug 23, 2012 3:57 PM, Sergio Pissanetzky <[email protected]> wrote: 

That promise has been around for a very, very long time. I don't believe it can 
be done with machines alone; humans will have to remain in the loop. 

 

Sergio

 

From: [email protected] [mailto:[email protected]] 
Sent: Thursday, August 23, 2012 3:31 PM
To: AGI
Subject: RE: [agi] Hugo de Garis on the Singhilarity Institute and the 
hopelessness of Friendly AI ...

 

Not yet. But once we develop individual agents capable of it, I'm sure Google 
(or whoever is at the forefront then) will offer some sort of "service" that 
extends that level of intelligence to internet users.



  _____  

On Aug 23, 2012 3:07 PM, Sergio Pissanetzky <[email protected]> wrote: 

Can Google learn how to play chess? 

 

Sergio

 

From: [email protected] [mailto:[email protected]] 
Sent: Thursday, August 23, 2012 2:23 PM
To: AGI
Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the 
hopelessness of Friendly AI ...

 

Google is already adaptive. There is no way they could have built a search engine 
that effective if it weren't. That approach to adaptivity can be applied on a 
much bigger scale, and it will be.

  _____  

On Aug 23, 2012 1:52 PM, Sergio Pissanetzky <[email protected]> wrote: 

Matt, 

 

Perhaps, but it would not be adaptive. 

 

Sergio

 

 

From: Matt Mahoney <[email protected]>
To: [email protected]
Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the 
hopelessness of Friendly AI ...
Date: Thu, 23 Aug 2012 13:55:56 -0400

 

The safest AI would be one that doesn't want anything. It would have no goals 
and no motivations, no reward button and no utility to optimize. It would be a 
vastly intelligent tool, a collection of all the world's knowledge and the 
computing power to do whatever you want with it. Rather than think for itself, 
it would be an extension of our own brains; a place to store your memories, 
communicate with anyone on the planet, and do the work that you would if you 
knew more and thought faster. It would be collectively owned, controlled by no 
single person but by everyone that uses it. It would be the AI that we are 
actually building; the one in front of you that has already surpassed human 
level intelligence in all but a few domains as it doubles in size every 1.5 
years.

 

 




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
