On Thu, May 4, 2023 at 1:30 PM Mike Archbold <[email protected]> wrote:
>
> Finally, AI seems to have caught on, it's white hot! And such pessimism!

I'm not as pessimistic as Eliezer Yudkowsky. Maybe you read his
warning in Time magazine. He is certain that we are doomed unless we
shut down AI now. He is calling for international treaties with
decreasing limits on GPU power, and calling for air strikes on
noncompliant data centers.
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

I understand his reasoning. The future is uncertain. If there is a 1%
chance that AI will wipe out humanity and prevent 1 trillion future
humans from ever existing, then in expected lives lost that is worse
than killing most of the world's 8 billion people now. AI is also the
solution to the $100 trillion per year problem of having to pay
people to do work that machines aren't smart enough to do, so it is
not going to be stopped. People want this. Even people who are aware
of the problem, like Sam Altman, believe the best way to slow down AI
is to get there first, win the race, and then set the pace. Nope.
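
To make that expected-value comparison concrete, here is a
back-of-envelope sketch in Python using only the numbers above (1%
risk, 10^12 future humans, 8 billion people alive today). The code
and variable names are mine, just to show the arithmetic:

  # Expected lives lost from a 1% chance of AI wiping out humanity,
  # counting the 10^12 future humans it would prevent.
  p_doom = 0.01           # assumed 1% chance of extinction
  future_humans = 1e12    # potential future people foreclosed
  alive_today = 8e9       # current world population

  expected_loss = p_doom * future_humans
  print(expected_loss)                 # 1e10, i.e. 10 billion expected lives
  print(expected_loss > alive_today)   # True: more than everyone alive now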

Yudkowsky has spent 20 years on the alignment problem and is no closer
to a solution, so he assigns a very high probability of doom relative
to everyone else who has spent less time on it. People have proposed
to use intermediate level AI to communicate with ASI, kind of like
fleas trying to control humans by communicating through dogs. I don't
buy it, and neither does Geoffrey Hinton, who quit Google so he could
speak freely about the dangers of AI. He pointed out on CNN that there
are no cases of any species controlling a more intelligent species.

Let's enumerate the ways that AI might kill us, which I rank from
least to most likely below.

1. Self improving AI goes FOOM! Once humans create smarter-than-human
AI, it can do the same, but faster. This leads to a singularity. The
AI might or might not have goals that align with ours, but in general
we can't predict, and therefore can't control, agents that are more
intelligent than we are.

2. Self replicating nanotechnology. Yudkowsky mentions bacteria-sized
diamondoid robots extracting carbon, hydrogen, oxygen, and nitrogen
(CHON) from the air and running on solar power, which is already more
efficient than chlorophyll. These would have higher reproductive
fitness than plants, killing the food chain and all DNA-based life.
They could also be designed to infect people and kill everyone on a
timer.

3. AI controlled biology. Anyone with a cheap 3-D nanoscale printer
could engineer viruses as contagious as the common cold and as deadly
as rabies.

4. We solve alignment, and AI kills us by giving us everything we want.

First, I don't believe in singularities. There is no exact threshold
where machines exceed human intelligence. You can't compare the two,
because intelligence is a broad set of capabilities that machines are
achieving one by one. Passing the Turing test this year was a big
step, but the process really started in the 1950s, when computers
began beating humans at arithmetic. Self improvement started
centuries ago, when companies built machines faster and stronger than
humans and reinvested the profits to grow exponentially. But we still
have a long way to go in robotics. The human body carries 10^23 bits
in its DNA and performs 10^16 transcription operations per second at
10^-14 J per operation. We can't get there with transistors either.
Clock speeds stalled at 2-3 GHz around 2010; transistors are near
their size limit and still use about 10^5 times too much energy per
operation. It will take some time to switch to computation by moving
atoms instead of electrons.
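
For scale, here is the same kind of sketch using only the figures in
the paragraph above (10^16 operations per second, 10^-14 J per
operation, and a 10^5 energy penalty for transistors). The code is
mine; the numbers are from the post:

  # Power needed to match the body's transcription rate.
  ops_per_sec = 1e16       # transcription operations per second
  j_per_op_bio = 1e-14     # joules per operation in biology
  energy_penalty = 1e5     # assumed transistor energy overhead

  bio_watts = ops_per_sec * j_per_op_bio
  silicon_watts = bio_watts * energy_penalty
  print(bio_watts)       # 100 W, roughly the body's power budget
  print(silicon_watts)   # 1e7 W, about 10 megawatts with transistors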

Second, self replicating nanotechnology is about 60 years away at the
current rate of Moore's law, which doubles global computing capacity
every 1.5 years. The biosphere has 10^44 carbon atoms encoding 10^37
bits of DNA, with 10^29 DNA copy and 10^31 RNA and protein
transcription operations per second, using 500 TW of solar power out
of the 90,000 TW available at the Earth's surface.
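
The 60 year figure follows from the doubling time: 60 years at one
doubling every 1.5 years is about 40 doublings, a factor of roughly
10^12, which is the implied gap between today's global computing and
the biosphere's 10^31 transcription operations per second. A sketch
of that arithmetic (the 10^12 gap is inferred from the 60 year
estimate above, not measured independently):

  import math

  # How long Moore's law takes to close a given capability gap.
  doubling_years = 1.5    # doubling time assumed above
  gap = 1e12              # implied ratio of biosphere ops/s to today's computing

  doublings = math.log2(gap)              # about 40 doublings
  years = doublings * doubling_years
  print(round(doublings), round(years))   # 40 60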

Third, self replicating agents go back to the Morris worm in 1988.
But they are really hard to test. It might be easy to design a
pathogen with a 99% fatality rate, but not one with a 100% rate; an
attacker would have to release several, which is harder. We also
cannot assume that hobbyists are rational, don't want to infect or
kill themselves, or understand the risks. And even if they are
careful, they will make mistakes, because you only get one chance to
get it right.

Fourth was the scenario I described in my last post. Yudkowsky's
early attempt at human goal alignment was Coherent Extrapolated
Volition: give us what we would want for our future selves if we were
smarter. That won't work, because AI is worth $1 quadrillion and will
be funded by people buying what they want right now. What we want is
neural activity in our nucleus accumbens. That is not a long term
strategy for survival.


-- Matt Mahoney, [email protected]
