Hi Calum

I've read through some of your post-Seville comments and your review on 
Bostrom's book. Thank you for a well-balanced and informative perspective. 
http://pandoras-brain.com/


I think your finest moment of this book review was contained in just 3 words.

"In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of the book he issues a powerful rallying cry:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”

Amen to that."

Rob
Date: Tue, 17 Mar 2015 09:33:22 +0100
Subject: Re: [agi] AI Protest in Texas
From: a...@listbox.com
To: a...@listbox.com

Steve
I sympathise with your very understandable preference not to be targeted by 
anti-AI crazies!  

What do you think is the best way to try and shape the growing public debate 
about AGI?  Following Bostrom's book, and the comments by Hawking, Musk and 
Gates, a fair proportion of the general public is now aware that AGI might 
arrive in the medium term, and that it will have a very big impact.
Some AI researchers seem to be responding by saying, "Don't worry, it can't 
happen for centuries, if ever".  No doubt some of them genuinely believe that, 
but I wonder whether some are saying it in the (forlorn?) hope the debate will 
go away. It won't.  In fact I suspect that the new Avengers movie will kick it 
up a level.
Others are saying, "Don't worry, AGI cannot and will not harm humans."  To my 
mind (and I realise I may be in a small minority here on this) that is hard to 
be certain about - as Bostrom demonstrated.
Yet others are saying, "AI researchers will solve the problem long before AGI
arrives, and it's best not to worry everyone else in the meantime."  That seems 
a dangerous approach to me.  If the public ever feels (rightly or wrongly) that 
things have been hidden from them, they will react badly.
But I do definitely sympathise with the desire not to be targeted by crazies, 
or to be vilified by journalists who have half-understood the situation!
I am not an AI researcher.  I am a retired businessman now resuming a lapsed 
career as a writer, and my subject is AGI.  My first novel, Pandora's Brain, is 
just out.  I chose the subject because I think it is the most important one in 
the world, bar none.
The most likely outcome of my writing, of course, is zilch.  Half a million new 
books are published in the US and the UK every year, and it would be arrogant 
to think mine will stand out.  But I work hard on them, and one can dream.
In the unlikely event that more than a handful of people read Pandora, I
want to make a responsible contribution to the debate.  That is why I chose the 
subject in the first place.
Having said all this, I'll understand if Ben just kicks me off the forum!
Regards
Calum


On 17 March 2015 at 08:36, Steve Richfield via AGI <a...@listbox.com> wrote:
The BIG problem that threatens the safety of everyone on this forum is crazies 
like some on this forum who publicly (this AGI forum is public and is Google 
searchable) claim that potentially dangerous AGIs will be up and running in 5
years or whenever. Regardless of the truth or lack thereof, movies (like the 
Terminator series) portraying the great social value of KILLING people who 
develop AGIs is enough to motivate other crazies out there to do really bad 
things to the crazies who make such statements.

Which of the crazies are more dangerous TO ME? That is easy - it is the crazies
on this forum, who might guide other crazies to my door. There are LOTS of 
murderous crazies in our world, which I can live with, so long as no one is 
guiding them TO ME.

It is really hard to believe that ANYONE who is SO stupid as to be publicly making
such obviously dangerous statements actually thinks they are smart enough to 
participate in making a genuine AGI. There is NO WAY that this could possibly 
happen. Stupid is forever.

Personally, I have absolutely NO expectation of any sort of dangerous AGIs 
emerging anytime soon, most especially from people who are stupid enough to 
make such obviously dangerous public statements.

The less mentally impaired participants on this forum are quietly working on 
their respective theories WITHOUT making such incendiary statements.

If you the reader have made such public statements in the past, then GIVE UP 
working on AGI technology, because you have failed the intelligence test and 
are obviously NOT smart enough to ever succeed. Further, your potential future 
colleagues will see you as being too dangerous to have as an associate.

If you are an assassin looking for someone to kill, then look elsewhere. These 
guys aren't worth your ammunition.

BEN: As an act of social responsibility, you should purge ALL postings, 
including this posting, that mention any sort of short time horizon for AGI
development, and go all the way back to the beginning of this forum to do so. 
That demonstrations like the one in Texas can even come about shows the 
pushback that such statements can attract. Do we really need to be seen as 
dangerous social pariahs?

Steve
==================


On Mon, Mar 16, 2015 at 7:21 PM, Nanograte Knowledge Technologies via AGI 
<a...@listbox.com> wrote:



If I may say something, please? To my understanding, Google would, and Elon Musk 
would. 

However, AI is not the real threat. In my most-humble opinion, it is the key to 
the solution to a real threat. The technology would still be developed, 
regardless, and is probably being hastened as we speak. Perhaps the following 
tenets should be applied to "their" rationale: "The cat's out of the bag. What 
they don't know, won't harm them. Let's just go underground and hurry it up 
some more." As such then, protests are insignificant, red herrings. Protests 
could be staged to support public statements, as a distraction and disabling 
strategy to detract from the real issues at hand. Who knows? Who cares? Most 
people don't even read. I agree more with the 5-year outlook, and it could even 
become 4, depending on how quickly the key constraints to such progress could 
be resolved by people like us.     

Date: Tue, 17 Mar 2015 08:39:22 +0800
Subject: Re: [agi] AI Protest in Texas
From: a...@listbox.com
To: a...@listbox.com
CC: a...@listbox.com


yeah, that's more consistent with what I've heard from Demis in the past...



On Tue, Mar 17, 2015 at 8:30 AM, Calum Chace <ccca...@gmail.com> wrote:
Sorry, Ben, it wasn't centuries for Hassabis.  It was decades.  Rather an 
important difference!
Last year, the American entrepreneur, Elon Musk, one of Deep Mind’s early 
investors, described AI as humanity’s greatest existential threat. “Unless you 
have direct exposure to groups like Deepmind, you have no idea how fast [AI] is 
growing,” he said. “The risk of something seriously dangerous happening is in 
the five year timeframe. Ten years at most.”
However, the Google team played down the concerns. “We agree with him there are 
risks that need to be borne in mind, but we’re decades away from any sort of 
technology that we need to worry about,” Hassabis said.
http://www.theguardian.com/technology/2015/feb/25/google-develops-computer-program-capable-of-learning-tasks-independently

Calum
On 17 March 2015 at 01:20, Ben Goertzel <b...@goertzel.org> wrote:

Did Demis really say AGI is hundreds of years away?   That surprises me....

I think Ng actually believes AGI is far off; he's conservative, but I believe 
he's a straight shooter.

I don't know Yann and Christof F2F so I don't have a strong opinion on their 
attitudes...

-- Ben

On Tue, Mar 17, 2015 at 8:13 AM, Calum Chace <ccca...@gmail.com> wrote:
Yes, but Austin, of all places.
Ben, why do you think Yann LeCun, Andrew Ng, Christof Koch and Demis Hassabis 
have all been lining up to say that AGI is hundreds of years away?  Are they 
worried about this sort of reaction?
On 17 March 2015 at 01:10, Ben Goertzel via AGI <a...@listbox.com> wrote:

And of course it has to be in Texas 8-D ...

On Tue, Mar 17, 2015 at 5:17 AM, Piaget Modeler via AGI <a...@listbox.com> 
wrote:



Straight out of Steven Spielberg's film: A.I.
~PM

> Date: Mon, 16 Mar 2015 12:59:25 -0700
> Subject: Re: [agi] AI Protest in Texas
> From: a...@listbox.com
> To: a...@listbox.com
> 
> On 3/16/15, Aaron Hosford <hosfor...@gmail.com> wrote:
> > This sort of thing was predicted 50 years ago.
> > http://en.wikipedia.org/wiki/Butlerian_Jihad
> >
> > Nonetheless, yes, mind blowing.
> >
> > On Mon, Mar 16, 2015 at 11:41 AM, Mike Archbold via AGI <a...@listbox.com>
> > wrote:
> >
> 
> Butlerian, named after a guy from Stanwood, WA.  I'm not far from
> there, actually, and there is a beautiful old Scandinavian farming
> community there, with falling down barns and images of tall blonde
> girls.
> 
> A woman in the building I live in told me I have to find Jesus right
> away after she took a look at my book, presently at position about
> 5,000,000 on amazon.  If I don't find Jesus right away it is all over.
> 
> What a strange world.
> 
> >>
> >> http://en.yibada.com/articles/19837/20150316/humans-hold-anti-ai-robot-protest-sxsw-texas.htm
> >>
> >> I find this kind of mind blowing.  Down with robots?  Down with AI?
> >>
> >>
> >>
> >
> 
> 




-- 
Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: 
the unreasonable one persists in trying to adapt the world to himself. 
Therefore all progress depends on the unreasonable man." -- George Bernard Shaw





  
    
      




-- 
Regards
Calum













  
    
      


      
    
  

                                          


  
    
      


      
    
  





-- 
Full employment can be had with the stroke of a pen. Simply institute a six-hour
workday. That will easily create enough new jobs to bring back full employment.






  
    
      


      
    
  









  
    
      


      
    
  

                                          


