Some might say: better a known enemy. Anyway, why all this stress on
self-modifying AI? Wouldn't it be easier and safer to design an AI that
doesn't want to modify itself than to design one that's supposed to stay
friendly despite ongoing self-modification?



-- Sent from my Palm Pre
On Aug 23, 2012 7:34 AM, Ben Goertzel <[email protected]> wrote: 


You're focusing on the "Tool AI" aspect of Holden's argument, which is only
one part of it. His other arguments, about the utility of SI as an
organization, are largely independent of his Tool AI point.

Regarding the potential power of Tool AI, I would say that's unclear. The
"Very Powerful Optimization Process" that Eliezer has posited as part of one
version of CEV is, it seems to me, a kind of Tool AI. That VPOP seems
precisely a Tool AI designed to guide, and possibly avert, the development
of powerful Agent AI.

Personally, I fear powerful Tool AI more than I fear powerful Agent AI, because 
it's humans who will be using the tools, and the propensity of a certain 
percentage of humans to use powerful tools for harm against other humans is 
well known.  That is: the dangers of Tool AI are obvious, whereas the 
dangers of Agent AI are more confusing at present.

However, it's a fair point that SI is obsessing more about the potential 
dangers of self-modifying Agent AI, and largely side-stepping (in its public 
materials and discussions so far, anyway) the more obvious dangers of Tool AI 
in the hands of power-hungry or malevolent humans.

-- Ben G

On Thu, Aug 23, 2012 at 7:52 AM, Michael Anissimov 
<[email protected]> wrote:

Discerning, but wrong... it all rests on the assumption that there can be
sophisticated AI that behaves like Google Maps, and that assumption is flawed.


On Tue, Aug 21, 2012 at 8:07 PM, Ben Goertzel <[email protected]> wrote:


Hmmm... the reply on Tool AI is interesting, but Holden's original critique of 
SI is also worth reading:


http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

It's a good deal more discerning than Hugo's critique, I'd say...

ben


On Tue, Aug 21, 2012 at 6:43 PM, Michael Anissimov 
<[email protected]> wrote:



For those interested in current Singularity Institute research:
http://singularity.org/research/



Also possibly of interest, our executive director's recent reply to Holden 
Karnofsky on what he calls "tool AI":

http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/

We have recently hired two research fellows, Alex Altair and Kaj Sotala. Both
are exclusively focused on AI research.




-- 
Michael Anissimov
Singularity Institute
www.singularity.org





  
    
      
-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche









  
    
      
-- 
Michael Anissimov
Singularity Institute
www.singularity.org




  
    
      
-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche







  
    
      
