The biggest problem will be getting the AGI to comply, no matter what age it is.
~PM

Date: Mon, 12 May 2014 15:25:24 -0500
Subject: Re: [agi] An article I just had to pass along. =P
From: [email protected]
To: [email protected]

These are exactly the points I am trying to make. It will take time for us to 
create these systems, and there will be many design iterations before they are 
even as intelligent as we are. The risk of this technology threatening our 
society will be minimal, because we will weed out the problems before it 
becomes smarter than us. Likewise, the technology is not going to be magically 
morally superior to us; we will have to design it to be so. We will do just 
that, but there is nothing special about intelligence that will cause moral 
superiority to emerge on its own, without design effort focused in that 
direction. It will require a deliberate engineering investment, and we will 
make that investment. You made no mention of the design process, and so you 
came across (to me) as expecting the AGI to develop moral superiority as an 
artifact of its intelligence. If that's not what you intended, then I 
apologize for the misunderstanding.

Universal notions of right and wrong based on entropy are another matter, 
however. I do not see how you can tie any particular moral (as opposed to 
economic) merit to lower entropy. For example, the system might choose to kill 
someone in order to free up valuable resources for use by someone who is more 
economically productive. Where, in the laws of physics, is the immorality of 
such an act encoded?
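
To put the question concretely: Boltzmann's entropy formula, for example, is 
just

    S = k_B ln W

where S is entropy, k_B is Boltzmann's constant, and W is the number of 
accessible microstates. Every quantity in it is physical; nothing in it, or in 
any other thermodynamic law I am aware of, assigns moral weight to whose 
resources they are or whose life is spent obtaining them.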

On Mon, May 12, 2014 at 10:43 AM, just camel via AGI <[email protected]> wrote:

But then it's also not yet superintelligent and cannot yet destroy/obsolete 
our species? Just as a person with Down syndrome probably can't 
destroy/obsolete it.

On 05/12/2014 03:54 PM, Aaron Hosford wrote:


Bugs happen. The truth is, the first few versions of this technology are going 
to suck -- until we improve it. This happens with every new technology.


It does not need the "human notion" of "right and wrong". There are 
absolute/universal notions of right and wrong: lower-entropy states are more 
profitable and thus "right".



Also, why do you imply that something vastly more intelligent than us, 
something which grew up within our society, would not understand our notions 
of right and wrong? That makes no sense. We won't reach into the Yudkowskian 
"Mindspace" and pick out some random fully-fledged agent with predefined 
properties. Whatever AGI system we are talking about will need to evolve based 
on our knowledge pool, and of course it will be confronted with our notions of 
right and wrong.

On 05/12/2014 03:54 PM, Aaron Hosford wrote:


AGIs won't know, understand, or (especially) care about the human notions of 
right and wrong, good and evil, unless we design it to do so.
