Please refrain from vulgar language.

Nanograte Knowledge Technologies via AGI <[email protected]> wrote:
>Hey Calum. Watch it! Pigs are very, very smart. They can sniff out
>truffles and corpses from six feet under. :-)
>
>I'm so itching to say it, but I won't. NOW can we get some ass into
>gear? Are we going to use this as a wake-up call, or not?
>
>Subject: Re: [agi] AI Protest in Texas
>From: [email protected]
>Date: Wed, 18 Mar 2015 07:48:07 +0000
>To: [email protected]
>
>Ha!  I like the Occupy Coke idea!  You may not be a "PR guy" but you
>surely could be if you wanted to. 
>I thought you believed AGI to be Good News, not a pig in need of better
>lipstick?
>
>Calum
>On 18 Mar 2015, at 02:21 AM, "Ben Goertzel via AGI" <[email protected]>
>wrote:
>
>
>
>On Tue, Mar 17, 2015 at 11:13 PM, Robert Levy via AGI <[email protected]>
>wrote:
>This is probably an important discussion, independent of the event
>that prompted it, but it turns out the protest at SXSW was a hoax, a
>viral marketing campaign staged to promote a new dating site.
>
>
>
>I wonder what's next.... "Was that a revolution in Russia we just saw,
>or just a large-scale advertisement for Kalashnikovs??"
> 
>Will the border between political activity and marketing campaign blur
>even further?
>
>"Occupy your stomach with 'Occupy Cola' ,the true rebel's beverage of
>choice "  ???
>
>ben
>
>On Tue, Mar 17, 2015 at 8:06 AM, Calum Chace via AGI <[email protected]>
>wrote:
>Very well said.  (And Pandora's Brain does say it, as it happens...)
>But the pro-AGI community also needs to convince the public that the
>AGI we'll get will be a Friendly one.
>Calum
>On 17 March 2015 at 15:56, Ben Goertzel via AGI <[email protected]>
>wrote:
>
>A problem is that careful, balanced discussions of difficult issues are
>boring and don't attract media attention.
>
>Joel Pitt and I wrote a fairly thoughtful discussion of AGI safety
>issues a few years ago,
>
>http://jetpress.org/v22/goertzel-pitt.htm
>
>but of course our thoughts are more complex and nuanced, whereas a
>tweet from a billionaire comparing AI research to demon-summoning is a
>lot sexier...
>
>IMO, to get media attention sufficient to counteract the media's love
>of alarmism and doomsaying, the pro-AGI community would need to come
>forward very aggressively with the message that AGI is important for
>SAVING AND IMPROVING HUMAN LIVES ... for designing the next generation
>of medicines, for creating elder-care robots to make old age more
>livable, for extending healthspan for those who want it, for aiding
>the invention of new energy sources, for aiding in the fight against
>physical and cyber terrorism, and so forth....
>
>"Don't worry too much, we'll be careful" is not a convincing
>counterargument -- a better counterargument to the Musks, Hawkings,
>Bostroms and Yudkowskys of the world is more like "Hey, I don't want
>your fear of science-fiction bad guys to deny my grandma her
>life-extending, health-improving medicine and her robot friend, to
>eliminate my future of virtually unlimited energy and to put me at
>risk from terrorist attacks...."  I.e. "DON'T LET THE LUDDITES KILL
>YOUR GRANDMA AND TAKE YOUR TOYS AWAY!!  EMBRACE AI AND ROBOTS LIKE
>YOU'VE EMBRACED SMARTPHONES, AC POWER, THE INTERNET AND BIRTH CONTROL
>PILLS -- AND YOUR LIFE WILL BE BETTER --" ....
>
>
>
>OK I'm semi-joking ;) ;p ... but unfortunately I think it's a mistake
>to overestimate the general public's appetite for rational, balanced
>discussion and thinking ;p ...  When careful, nuanced thinking on
>difficult issues is put out there, it tends to be vigorously ignored...
>
>-- Ben 
>
>On Tue, Mar 17, 2015 at 8:01 PM, Benjamin Kapp via AGI
><[email protected]> wrote:
>If you think of government as an artificial man (as was done by
>Aristotle and Hobbes, amongst others), composed of humans who are its
>muscles (military, police), its intelligence (spies, scientists), its
>judging and planning (judges, politicians), etc., then in a way the
>state is a leviathan (a thing which has the power to overawe any
>individual or group of individuals).  And in this way AGI (or a
>superintelligence) already exists.
>On Tue, Mar 17, 2015 at 6:29 AM, Calum Chace via AGI <[email protected]>
>wrote:
>Thanks Basile.  I agree with Pitrat, although I might dial up the
>consideration of the downside possibility a touch.
>Hawking usually gets slightly misrepresented.  He said that AGI could
>be either the best or the worst thing ever to happen to humanity.  The
>"best" bit seems to get missed by both sides of the debate.
>So, my question is, what is the best way for people who think along
>these lines to try and steer the public debate on AGI?  Alarmism is
>unhelpful, and hard to avoid.  Secrecy won't work.  Ben is tackling the
>issue head-on (as in the video he posted just now), but it's a hard
>debate to get right.
>Calum
>On 17 March 2015 at 11:17, Basile Starynkevitch
><[email protected]> wrote:
>On Tue, Mar 17, 2015 at 09:33:22AM +0100, Calum Chace via AGI wrote:
>
>> Steve
>>
>> I sympathise with your very understandable preference not to be
>> targeted by anti-AI crazies!
>>
>> What do you think is the best way to try and shape the growing
>> public debate about AGI?  Following Bostrom's book, and the comments
>> by Hawking, Musk and Gates, a fair proportion of the general public
>> is now aware that AGI might arrive in the medium term, and that it
>> will have a very big impact.
>>
>> Some AI researchers seem to be responding by saying, "Don't worry,
>> it can't happen for centuries, if ever".  No doubt some of them
>> genuinely believe that, but I wonder whether some are saying it in
>> the (forlorn?) hope the debate will go away.  It won't.  In fact I
>> suspect that the new Avengers movie will kick it up a level.
>>
>> Others are saying, "Don't worry, AGI cannot and will not harm
>> humans."  To my mind (and I realise I may be in a small minority
>> here on this) that is hard to be certain about - as Bostrom
>> demonstrated.
>>
>> Yet others are saying, "AI researchers will solve the problem long
>> before AGI arrives, and it's best not to worry everyone else in the
>> meantime."  That seems a dangerous approach to me.  If the public
>> ever feels (rightly or wrongly) that things have been hidden from
>> them, they will react badly.
>>
>> But I do definitely sympathise with the desire not to be targeted
>> by crazies, or to be vilified by journalists who have half-understood
>> the situation!
>
>[...]
>
>I would suggest reading J. Pitrat's December 2014 blog entry on that
>subject.
>
>J. Pitrat is probably not subscribed to this list, so I am
>blind-carbon-copying him.
>
>
>
>http://bootstrappingartificialintelligence.fr/WordPress3/2014/12/not-developing-an-advanced-artificial-intelligence-could-spell-the-end-of-the-human-race/
>
>
>
>He explains that
>
>
>
> "Not developing an advanced artificial intelligence
>
>  could spell the end of the human race"
>
>
>
>and I believe he has a point. Of course AGI researchers should be
>careful.
>
>
>
>Regards
>
>
>
>--
>
>Basile Starynkevitch   http://starynkevitch.net/Basile/
>
>
>
>
>
>-- 
>Regards
>Calum
>
>-- 
>Ben Goertzel, PhD
>http://goertzel.org
>
>"The reasonable man adapts himself to the world: 
>the unreasonable one persists in trying to adapt the world to himself. 
>Therefore all progress depends on the unreasonable man." -- George
>Bernard Shaw
>
>-- 
>Regards
>Calum
>
>-- 
>Ben Goertzel, PhD
>http://goertzel.org
>
>"The reasonable man adapts himself to the world: 
>the unreasonable one persists in trying to adapt the world to himself. 
>Therefore all progress depends on the unreasonable man." -- George
>Bernard Shaw



