If you think of a government as an artificial man (as Aristotle and Hobbes,
among others, did), it is composed of humans who serve as the muscles
(military, police), the intelligence (spies, scientists), the judging and
planning (judges, politicians), and so on. In this sense the state is a
leviathan: a thing with the power to overawe any individual or group of
individuals. And in this way AGI (or a superintelligence) already exists.

On Tue, Mar 17, 2015 at 6:29 AM, Calum Chace via AGI <[email protected]>
wrote:

> Thanks Basile.  I agree with Pitrat, although I might dial up the
> consideration of the downside possibility a touch.
>
> Hawking usually gets slightly misrepresented.  He said that AGI could be
> either the best or the worst thing ever to happen to humanity.  The "best"
> bit seems to get missed by both sides of the debate.
>
> So, my question is, what is the best way for people who think along these
> lines to try and steer the public debate on AGI?  Alarmism is unhelpful,
> and hard to avoid.  Secrecy won't work.  Ben is tackling the issue head-on
> (as in the video he posted just now), but it's a hard debate to get right.
>
> Calum
>
> On 17 March 2015 at 11:17, Basile Starynkevitch <[email protected]>
> wrote:
>
>> On Tue, Mar 17, 2015 at 09:33:22AM +0100, Calum Chace via AGI wrote:
>> > Steve
>> >
>> > I sympathise with your very understandable preference not to be
>> targeted by
>> > anti-AI crazies!
>> >
>> > What do you think is the best way to try and shape the growing public
>> > debate about AGI?  Following Bostrom's book, and the comments by
>> Hawking,
>> > Musk and Gates, a fair proportion of the general public is now aware
>> that
>> > AGI might arrive in the medium term, and that it will have a very big
>> > impact.
>> >
>> > Some AI researchers seem to be responding by saying, "Don't worry, it
>> can't
>> > happen for centuries, if ever".  No doubt some of them genuinely believe
>> > that, but I wonder whether some are saying it in the (forlorn?) hope the
>> > debate will go away. It won't.  In fact I suspect that the new Avengers
>> > movie will kick it up a level.
>> >
>> > Others are saying, "Don't worry, AGI cannot and will not harm humans."
>> To
>> > my mind (and I realise I may be in a small minority here on this) that
>> is
>> > hard to be certain about - as Bostrom demonstrated.
>> >
>> > Yet others are saying, "AI researchers will solve the problem long before
>> > AGI arrives, and it's best not to worry everyone else in the meantime."
>> >  That seems a dangerous approach to me.  If the public ever feels
>> (rightly
>> > or wrongly) that things have been hidden from them, they will react
>> badly.
>> >
>> > But I do definitely sympathise with the desire not to be targeted by
>> > crazies, or to be vilified by journalists who have half-understood the
>> > situation!
>> >
>>
>> [...]
>>
>>
>> I would suggest reading J. Pitrat's December 2014 blog entry on that
>> subject.
>> J. Pitrat is probably not subscribed to this list,
>> so I am blind-carbon-copying him.
>>
>>
>> http://bootstrappingartificialintelligence.fr/WordPress3/2014/12/not-developing-an-advanced-artificial-intelligence-could-spell-the-end-of-the-human-race/
>>
>> He argues that
>>
>>  "Not developing an advanced artificial intelligence
>>   could spell the end of the human race"
>>
>> and I believe he has a point. Of course AGI researchers should be careful.
>>
>> Regards
>>
>> --
>> Basile Starynkevitch   http://starynkevitch.net/Basile/
>>
>>
>
>
> --
> Regards
>
> Calum
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
