Thanks, Bill. FWIW, I found the 2017 Asilomar conference videos disturbing to watch.
Kurzweil has long held up the 1975 Asilomar conference on biotech as an example of industry self-governance and self-regulation. Reading between the lines, we can see signs of an industry hoping to avoid government regulation.
Is getting expert endorsement for a list of feel-good principles useful? That won't make me much more inclined to trust the organizations involved. The best way of showing that you will share the benefits of machine intelligence in the future is to share your resources now: via open source software. IMO, nothing else is going to be very convincing. Put your source code where your mouth is.
If you ask me to endorse a long list of principles, the chances of success go down as the list gets longer, and the chances of dubious statements go up. With 23 principles, I hardly need to read the list to know that there's likely to be something I'm not going to agree with.
I was disturbed to see the success of some of the more dubious FHI/MIRI memes at the conference. For example, numerous speakers referred to the importance of the "control problem". This refers to the "slavery" paradigm of superintelligence. IMO, if you are trying to enslave a superintelligence, you are probably going to fail. The "control problem" seems like a neat encapsulation of an approach that is not going to work. What we should aim for is to become superintelligences. Then there's no "control problem" because you are a superintelligence. Of course, this approach looks rather challenging - but the alternatives do not look very attractive to me.
Bill criticized a number of the principles in his article. Here I will restrict myself to just the first one:
"1. Research Goal: The goal of AI research should be to create not
undirected intelligence, but beneficial intelligence."
This statement, I feel, rests on an implicit misunderstanding of how intellectual progress takes place. Academic fields are not goal-directed systems. They are complex aggregates of different departments and researchers, which may have diverse and partly conflicting goals.
For example, aeronautical engineering contains some folks trying to make planes faster and other folks trying to make planes safer. The field thus has multiple research goals - and they conflict somewhat.
This diversity of goals is actually healthy. It allows humans to specialize in what they are good at. It also allows the field to evolve and change its overall direction in response to environmental changes.
IMO, one of the last things the field of machine intelligence needs
is a bunch of experts high-handedly dictating the overall direction
of the entire field. Instead, let a thousand flowers bloom.
On Fri, Feb 3, 2017 at 10:30 AM, Bill Hibbard <bi...@ssec.wisc.edu> wrote:
> Here is my response to the recently published Asilomar
> AI Principles, that they should include transparency
> about the purpose and means of advanced AI systems:
> http://hplusmagazine.com/2017/02/02/asilomar-ai-principles-include-transparency-purpose-means-advanced-ai-systems/
[...]
--
__________
|im Tyler http://timtyler.org/