Ben,

You are up to your ankles in pine needles and wondering where the forest is.

Continuing...
On Tue, Mar 17, 2015 at 7:56 AM, Ben Goertzel via AGI <[email protected]>
wrote:

>
> A problem is that careful, balanced discussions of difficult issues are
> boring and don't attract media attention
>

Would YOU bet the future of the human race on the word of a middle-aged guy
in dreadlocks who claims there is no problem? Well, you probably would, but
don't be surprised if you are relatively alone in this view.

Don't get me wrong - I am NOT arguing against your reasoning - just the
meta-level at which it is presented.

This is somewhat parallel to the issue of CERN creating tiny black holes
that might gobble up the Earth. Careful analysis shows that Hawking
radiation would evaporate them long before they could gobble up anything
else, but many people don't see betting the Earth on such "careful
analysis" as really worth it.

Another parallel discussion centered on the ability of early atomic
bombs to ignite fusion reactions in seawater, turning the Earth into a
gigantic hydrogen bomb. Again, careful analysis succeeded.

However, careful analysis has FAILED in MANY other instances. With soil
depletion, we have now passed the point where it is even possible to live
on the vegetables we grow without supplements for B-12, etc. We live in a
lead-poisoned world from the "ethyl" gasoline of the past. The roof of my
RV is radioactive from Fukushima fallout. I could go on and on.

My point here is that the "bar" for "careful analysis" must be raised
considerably. We have discussed the potential downsides for AGI here on
this forum, and it really boils down to relative opinions - in part
regarding how stupid an AGI implementer would have to be to build a
dangerous AGI.

Compare this with the Iranian nuke issue. All rational uses for nukes are
ONLY for defense, so there should be no reason to fear their spread.
However, there is a religion (Islam) that promises Heaven to those who die
in its cause, so we are VERY concerned when an Islamic theocracy seeks to
develop its own nukes. Here, careful analysis fails, because those who
should be doing the careful analysis are idiots.

Similarly, suppose an organization with the morals of ISIS were to have an
opportunity to create AGIs to kill the people they would like to kill, e.g.
all non-Sunni Muslims. Here, your "careful analysis" would clearly fail.
Right now this is impossible, but fast-forward a century or so, and they
will probably be able to order a toolkit for $20 and make whatever they
want.

You have been enjoying the freedom of a lax society heading toward eventual
destruction from failures in careful analysis, some of which have been
more "airtight" than yours.

Continuing...
> Joel Pitt and I wrote a fairly thoughtful discussion of AGI safety
> issues a few years ago,

>
> http://jetpress.org/v22/goertzel-pitt.htm
>
> but of course our thoughts are more complex and nuanced, whereas a tweet
> from a billionaire comparing AI research to demon-summoning is a lot
> sexier...
>

No, it is a lot more important. I pay a LOT more attention to things that
might kill ME than I pay to technologies that might help YOU. For the
average Joe, AGI presents much more threat than it does opportunity -
especially if there is ANY reason to SUSPECT that it MIGHT someday go
haywire.

>
> IMO, to get media attention sufficient to counteract the media's love of
> alarmism and doomsaying, the pro-AGI community would need to come forward
> very
> aggressively with the message that AGI is important for SAVING AND
> IMPROVING  HUMAN LIVES ...
>

... and in advancing this view, you simply come off as a nut case to a LOT
of people who are more concerned about their safety. "Those who would trade
their liberty for safety deserve neither" (Ben Franklin) but here I don't
see AGIs bringing anything but a REDUCTION in liberty.

> for designing the next generation of medicines,
>

The medicinal paradigm appears to be reaching its asymptote.

> for creating elder-care robots to make old age more livable,
>

This is a social and NOT a scientific issue. Who wants to relate to a
machine?!!! Give me a sexy nurse.

> for extending healthspan for those who want it,
>

Here, people are MUCH more effective than machines, because people can
sense what is happening in their bodies, in ways that FAR exceed present
sensor technology.


> for aiding the invention of new energy sources,
>

If you can ever get the oil lobby out of the way.


> for aiding in the fight against physical and cyber terrorism,
>

That may well bring an end to freedom as it is now understood.


> and so forth....   "Don't worry too
> much, we'll be careful" is not a convincing counterargument -- a better
> counterargument to the Musks, Hawkings, Bostroms and Yudkowskys of the
> world is more like  "Hey, I don't want your fear of science fiction bad
> guys to
> deny my grandma her life-extending, health-improving medicine and her
> robot friend, to eliminate my future of virtually unlimited energy and
> to put me at risk from terrorist attacks...."   I.e. "DON'T LET THE
> LUDDITES KILL YOUR GRANDMA AND TAKE YOUR TOYS AWAY!!   EMBRACE AI AND
> ROBOTS LIKE YOU'VE EMBRACED SMARTPHONES, AC POWER, THE INTERNET AND BIRTH
> CONTROL PILLS -- AND YOUR LIFE WILL BE BETTER -- " ....
>
> OK I'm semi-joking ;) ;p ... but unfortunately I think it's a mistake to
> overestimate the general public's appetite for rational, balanced
> discussion and thinking ;p ...
>

Just as it is a mistake to overestimate the value of "careful analysis"
when it involves existential threats.


> When careful nuanced thinking on difficult issues is put out there, it
> tends to be vigorously ignored...
>

For good reasons that you apparently don't yet grok.

Steve
===============

>
> -- Ben
>
> On Tue, Mar 17, 2015 at 8:01 PM, Benjamin Kapp via AGI <[email protected]>
> wrote:
>
>> If you think of governments as an artificial man (as was done by
>> Aristotle and Hobbes, amongst others) composed of humans who are the
>> muscles (military, police), the intelligence (spies, scientists), the
>> judging and planning (judges, politicians), etc., then in a way the
>> state is a leviathan (a thing which has the power to overawe any
>> individual or group of individuals). And in this way AGI (or a
>> superintelligence) already exists.
>>
>> On Tue, Mar 17, 2015 at 6:29 AM, Calum Chace via AGI <[email protected]>
>> wrote:
>>
>>> Thanks Basile.  I agree with Pitrat, although I might dial up the
>>> consideration of the downside possibility a touch.
>>>
>>> Hawking usually gets slightly misrepresented.  He said that AGI could
>>> be either the best or the worst thing ever to happen to humanity.  The
>>> "best" bit seems to get missed by both sides of the debate.
>>>
>>> So, my question is, what is the best way for people who think along
>>> these lines to try and steer the public debate on AGI?  Alarmism is
>>> unhelpful, and hard to avoid.  Secrecy won't work.  Ben is tackling the
>>> issue head-on (as in the video he posted just now), but it's a hard debate
>>> to get right.
>>>
>>> Calum
>>>
>>> On 17 March 2015 at 11:17, Basile Starynkevitch <
>>> [email protected]> wrote:
>>>
>>>> On Tue, Mar 17, 2015 at 09:33:22AM +0100, Calum Chace via AGI wrote:
>>>> > Steve
>>>> >
>>>> > I sympathise with your very understandable preference not to be
>>>> targeted by
>>>> > anti-AI crazies!
>>>> >
>>>> > What do you think is the best way to try and shape the growing public
>>>> > debate about AGI?  Following Bostrom's book, and the comments by
>>>> Hawking,
>>>> > Musk and Gates, a fair proportion of the general public is now aware
>>>> that
>>>> > AGI might arrive in the medium term, and that it will have a very big
>>>> > impact.
>>>> >
>>>> > Some AI researchers seem to be responding by saying, "Don't worry, it
>>>> can't
>>>> > happen for centuries, if ever".  No doubt some of them genuinely
>>>> believe
>>>> > that, but I wonder whether some are saying it in the (forlorn?) hope
>>>> the
>>>> > debate will go away. It won't.  In fact I suspect that the new
>>>> Avengers
>>>> > movie will kick it up a level.
>>>> >
>>>> > Others are saying, "Don't worry, AGI cannot and will not harm
>>>> humans."  To
>>>> > my mind (and I realise I may be in a small minority here on this)
>>>> that is
>>>> > hard to be certain about - as Bostrom demonstrated.
>>>> >
>>>> > Yet others are saying, "AI researchers will solve the problem long
>>>> before
>>>> > AGI arrives, and it's best not to worry everyone else in the
>>>> meantime."
>>>> >  That seems a dangerous approach to me.  If the public ever feels
>>>> (rightly
>>>> > or wrongly) that things have been hidden from them, they will react
>>>> badly.
>>>> >
>>>> > But I do definitely sympathise with the desire not to be targeted by
>>>> > crazies, or to be vilified by journalists who have half-understood the
>>>> > situation!
>>>> >
>>>>
>>>> [...]
>>>>
>>>> > >> > -------------------------------------------
>>>> > >> > AGI
>>>> > >> > Archives: https://www.listbox.com/member/archive/303/=now
>>>>
>>>> [....]
>>>>
>>>>
>>>> I would suggest reading J. Pitrat's December 2014 blog entry on that
>>>> subject.
>>>> J. Pitrat is probably not subscribed to this list,
>>>> so I am blind-carbon-copying him.
>>>>
>>>>
>>>> http://bootstrappingartificialintelligence.fr/WordPress3/2014/12/not-developing-an-advanced-artificial-intelligence-could-spell-the-end-of-the-human-race/
>>>>
>>>> He is explaining that
>>>>
>>>>  "Not developing an advanced artificial intelligence
>>>>   could spell the end of the human race"
>>>>
>>>> and I believe he has a point. Of course AGI researchers should be
>>>> careful.
>>>>
>>>> Regards
>>>>
>>>> --
>>>> Basile Starynkevitch   http://starynkevitch.net/Basile/
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards
>>>
>>> Calum
>>>
>>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "The reasonable man adapts himself to the world: the unreasonable one
> persists in trying to adapt the world to himself. Therefore all progress
> depends on the unreasonable man." -- George Bernard Shaw
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
