Scott Alexander makes the case against open source development
of machine intelligence in a recent article, claiming that openly
developed AI is more dangerous than AI developed behind closed doors.

http://slatestarcodex.com/2015/12/17/should-ai-be-open/

What has happened in cryptography may turn out to be a reasonable model
for what will happen in machine intelligence. In cryptography there's
a community of academic and commercial researchers who mostly publish
their algorithms and allow their peers to criticize their work. Then
there are massive governmental organizations who share practically
nothing, and by most accounts are well ahead of everyone else.

Scott appears worried that companies will evade antitrust regulators
and the Invention Secrecy Act, and that their growth will outstrip
the government's.

Revolutions do happen - but governments that permit them are selected
out - and the current crop of governments appears to be fairly well
adapted to avoiding revolutions.

Open source projects seem to me to be particularly unlikely to threaten
the government. Like all other agents, the government can embrace and
extend open source software. So, it is hard to see how Scott's fears
are justified.

--
__________
 |im Tyler http://timtyler.org/



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now