Tim,
As you probably know, the leader of GiveWell has engaged fairly thoroughly with
SIAI/MIRI on these issues,
http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
so I would say he is much more informed on the SIAI/MIRI/FHI/Musk/etc. view on
AGI and existential risks than most philanthropists.
It seems that GiveWell is heavily biased toward philanthropic causes where the
benefit of the aid provided can be very clearly and unambiguously established.
This is a reasonable approach but also limiting in many ways...
I agree that SIAI/MIRI, FHI and others are playing on people's fears to get
their donation money. However, I also think that the SIAI/MIRI and FHI
principals genuinely share these fears....
-- Ben
On Saturday, January 17, 2015 10:03 PM, Tim Tyler via AGI
<[email protected]> wrote:
Dylan Evans takes aim at profiteering from the risks associated
with superintelligent machines in his article "The Great AI Swindle":
http://edge.org/response-detail/26073
Dylan argues that the proponents are performing a "Pascal's mugging" -
balancing the enormous loss of utility associated with civilization's
demise against the very small probability of it happening.
He says:
"in the past few years they have managed to convince some very
wealthy benefactors not only that the risk of unfriendly AI is
real, but also that they are the people best placed to mitigate it.
The result is a clutch of new organizations that divert philanthropy
away from more deserving causes. It is worth noting, for example,
that Give Well — a non-profit that evaluates the cost-effectiveness
of organizations that rely on donations — refuses to endorse any of
these self-proclaimed guardians of the galaxy."
I take exception to the idea that GiveWell are the best arbiters
of where humanity should invest its resources. GiveWell seem to
be focused on saving lives in third-world countries - where life
is cheap. They say this quite explicitly:
"Low-income people in the developing world have dramatically
lower standards of living than low-income people in the U.S.,
and we believe that a given dollar amount can provide more
meaningful benefits when targeting the former."
Helping poor people is widely regarded as being a good thing -
but it is far from clear that it is the best thing to be
doing right now.
If there were a big meteorite heading towards the Earth, I think
it would be prudent to allocate resources to the rocket scientists,
laser engineers and solar sailors with the best chance of diverting it.
That is - more-or-less - the kind of situation we face with
superintelligent machines. They are big, they are coming - and
poor people in third world countries do not seem to be in an
especially good position to help very much.
While GiveWell may have their hearts in the right place, they
aren't experts in this topic. Instead, they look to domain-specific
experts to provide advice. In this case, it isn't clear that they've
been getting the right advice. The situation is difficult and
confusing - but we should at least try.
IMO, the 'signalling' theory of charity is applicable here.
Giving to GiveWell's top charities signals that you care,
that you are smart - and that you are not conspicuously giving
for selfish signalling reasons. Attempting to get a positive
outcome from the transition to a machine-based civilization
just signals that you're a nerd. People are more interested
in sending the first set of signals.
--
__________
|im |yler http://timtyler.org/ [email protected] Remove lock to reply.
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/212726-deec6279
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com