Robert,

You seem to be saying that a particular developmental route determines the
ultimate product, e.g. that a destructive AGI is a bad step along a route
to a good AGI result.

The challenge here is all wrapped up in the definitions of good and bad,
e.g. is it "good" to kill all of the "bad" people, or to protect us from
the results of our own destructive tendencies?

People are now adamantly dysinterested (new word) in anything resembling
control over the development of our genome, preferring instead random
selection. This would be an obvious area of early AGI intervention that
would be good for the human race, but would be seen as bad by ~99% of the
human race.

There are numerous other examples, e.g. China's present limitations on the
number of children.

I would like to see SOME "clear vision" of an ultimately "good" AGI before
even considering whether a particular route is necessary for getting there.
Until such a vision can be held up to close scrutiny, discussions of route
are EXTREMELY premature. I strongly suspect that ALL "good" AGI
descriptions are just wishful thinking about mechanisms that, if allowed
to follow their designs, would do very "bad" things (in the eyes of 99% of
our population).

Note that my discussion above is more about the flaws in us than in AGIs,
but it appears that it is OpenAI's goal to preserve those flaws in AI,
which will predictably lead to an even bigger social mess than we have now.
Right?

*Steve*
========

On Sun, Dec 13, 2015 at 11:52 AM, Robert Levy <[email protected]> wrote:

> I didn't get the impression that Musk was anti-AGI, but rather that he
> expressed some unfortunately worded concerns that the specific way in
> which AGI is approached matters.  From this perspective it makes
> perfect sense that he would want to direct the course of AGI development in
> the way he believes to be safe for the future of humankind.  The problem
> isn't AGI itself, it's the way those other people might do it, who aren't
> me!
>
> On Sun, Dec 13, 2015 at 11:46 AM, Steve Richfield <
> [email protected]> wrote:
>
>> Hi all,
>>
>> Am I missing something here, or is this really as stupid as it sounds?!!!
>>
>> On the one hand, Musk says that AI is "humanity's greatest existential
>> threat," and then he pledges money to develop that threat?!!! Bad guys,
>> e.g. the military-industrial complex, can simply take whatever OpenAI
>> develops and turn it on US.
>>
>> I have seen NOTHING suggesting any great value in AGI over fully funding
>> human efforts in the same areas in which AGI is being promoted. Geniuses have
>> always been able to get to the bottom of things - IF they can live well
>> while doing so and not be impaired by competing interests. If you think AGI
>> can somehow sidestep these influences, think again, as these influences are
>> pervasive. Heck, even just living as we do is seen by some people as being
>> SO much of a threat that they are willing to kill themselves just to impair
>> a pleasant Friday evening in Paris.
>>
>> If not for drug company influence, I believe most chronic illnesses would
>> have been cured long ago. If not for self-serving mismanagement of our
>> economy, space travel would now be as routine as vacation travel. Thorium
>> reactors appear to be the cheap and simple solution to limitless energy,
>> with more thorium now being discarded than would be necessary to power the
>> world, yet special interests have kept thorium reactors from being
>> developed (see YouTube videos about this).
>>
>> Our system is SO mis-controlled that "our" government won't even reduce the
>> length of a workweek to promote full employment - as some other countries
>> have done. Having an AGI come up with these same sorts of solutions would
>> be of ZERO value, because in present human society they would NOT be
>> implementable, unless you are contemplating the AGI of *Colossus: The
>> Forbin Project*.
>>
>> ONLY in the hands of unscrupulous entities (e.g. Skynet) could the AGI of
>> people's misguided dreams truly thrive without effective impairment by the
>> entirety of humanity.
>>
>> If these guys see SOME way their investments could do anything but create
>> humanity's greatest existential threat, then PLEASE let me in on the secret.
>>
>> *Steve*
>> =======
>>
>> On Sun, Dec 13, 2015 at 5:49 AM, <[email protected]> wrote:
>>
>>> http://futurism.com/links/19499/



-- 
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back
full employment.


