I didn't get the impression that Musk was anti-AGI; rather, he expressed
some unfortunately worded concerns that the specific way in which AGI is
approached matters.  From this perspective it makes perfect sense
that he would want to direct the course of AGI development along the path he
believes to be safe for the future of humankind.  The problem isn't AGI
itself, it's the way those other people, who aren't me, might do it!

On Sun, Dec 13, 2015 at 11:46 AM, Steve Richfield <[email protected]
> wrote:

> Hi all,
>
> Am I missing something here, or is this really as stupid as it sounds?!!!
>
> On the one hand, Musk says that AI is "humanity's greatest existential
> threat," and then he pledges money to develop that threat?!!! Bad guys, e.g.
> the military industrial complex, can simply take whatever OpenAI develops
> and turn it on US.
>
> I have seen NOTHING suggesting any great value in AGI over fully funding
> human efforts in the same areas in which AGI is being promoted. Geniuses have
> always been able to get to the bottom of things - IF they can live well
> while doing so and not be impaired by competing interests. If you think AGI
> can somehow sidestep these influences, think again, as these influences are
> pervasive. Heck, even just living as we do is seen by some people as being
> SO much of a threat that they are willing to kill themselves just to impair
> a pleasant Friday evening in Paris.
>
> If not for drug company influence, I believe most chronic illnesses would
> have been cured long ago. If not for self-serving mismanagement of our
> economy, space travel would now be as routine as vacation travel. Thorium
> reactors appear to be the cheap and simple solution to limitless energy,
> with more thorium now being discarded than would be necessary to power the
> world, yet special interests have kept thorium reactors from being
> developed (see YouTube videos about this).
>
> Our system is SO mis-controlled that "our" government won't even reduce the
> length of a workweek to promote full employment - as some other countries
> have done. Having an AGI come up with these same sorts of solutions would
> be of ZERO value, because in present human society they would NOT be
> implementable, unless you are contemplating the AGI of *Colossus, the
> Forbin Project*.
>
> ONLY in the hands of unscrupulous entities (e.g. Skynet) could the AGI of
> people's misguided dreams truly thrive without effective impairment by the
> entirety of humanity.
>
> If these guys see SOME way their investments could do anything but create
> humanity's greatest existential threat, then PLEASE let me in on the secret.
>
> *Steve*
> =======
>
> On Sun, Dec 13, 2015 at 5:49 AM, <[email protected]> wrote:
>
>> http://futurism.com/links/19499/
>>
>>
>>
>> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac> |
>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>> <http://www.listbox.com>
>>
>
>
>
> --
> Full employment can be had with the stroke of a pen. Simply institute a six
> hour workday. That will easily create enough new jobs to bring back full
> employment.
>


