Tim,

Yeah - good point that USING a super-intelligent AGI doesn't make a lot of
sense.

There is an interesting real-world comparison - dogs and wolves. Dogs make
great pets - in part because they aren't extremely intelligent, so their
intelligence poses little threat. Wolves, however, are considerably more
intelligent - and make awful pets, because they are constantly plotting
subtle ways to take over your home. A super-intelligent AGI would
presumably be MUCH more intelligent than a wolf - and hence would probably
be rather antisocial.

I have previously posted here that there is plenty of evidence for an
optimal level of intelligence - beyond which you will be seen as dangerous
and lose the support of others.

As I see it, the BIG question is: Is there an extremely high level of
intelligence - WAY beyond any socially acceptable "optimal" intelligence -
that has enough value to offset being shunned by society? If so, then
watch out for a hard takeoff. If not, then the entire AGI effort will turn
out to be a bust. Neither way is good, so maybe we shouldn't bother
investing much to get there.

The rest of the world wasn't paying much attention when early man evolved,
but we ARE paying attention to machines, so machines will never have the
chance to refine their capabilities that early man enjoyed.

Thoughts?

*Steve*
=========

On Thu, Dec 17, 2015 at 8:08 PM, TimTyler <[email protected]> wrote:

> Scott Alexander tries to make the case against open source development
> of machine intelligence in a recent article. He claims that open source
> software is more dangerous.
>
> http://slatestarcodex.com/2015/12/17/should-ai-be-open/
>
> What has happened in cryptography may turn out to be a reasonable model
> for what will happen in machine intelligence. In cryptography there's
> a community of academic and commercial researchers who mostly publish
> their algorithms and allow their peers to criticize their work. Then
> there are massive governmental organizations who share practically
> nothing, and by most accounts are well ahead of everyone else.
>
> Scott appears worried that companies will avoid getting trapped by
> antitrust enforcement and the Invention Secrecy Act, and that their
> growth will outstrip the government's.
>
> Revolutions do happen - but governments that permit them are selected
> out - and the current crop of governments appears to be fairly well
> adapted to avoiding revolutions.
>
> Open source projects seem to me to be particularly unlikely to threaten
> the government. Like all other agents, the government can embrace and
> extend open source software. So, it is hard to see how Scott's fears
> are justified.
>
> --
> __________
>  |im Tyler http://timtyler.org/
>
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back
full employment.


