It's hard to know how to think about this kind of risk. It's safe to say EY
has done more thinking on this issue than just about anyone, and he's
probably one of the smartest people on the planet. I've been following him
for over a decade, since before his writings on LessWrong.

However, there's an interesting dynamic with highly intelligent people -
ironically, being really smart makes it possible to justify stupid
conclusions you're highly motivated to reach. A very smart Google engineer
I know is a conspiracy theorist who can convince himself of bullshit like
9/11 being a hoax, and even Sandy Hook being a hoax. I'm *not* saying EY's
conclusions are stupid or bullshit. I'm just saying that people clever
enough to jump through the required cognitive hoops can convince themselves
of things most people would dismiss as nonsense, and that irrational
motivations often drive this - conspiracy theorists, for example, are often
driven by psychological needs such as the need to be right, or the desire
for some kind of insider status.

So what irrational motivations might EY be prone to? Over the years I've
gotten the sense that he has a bit of a savior complex, so his way of
viewing AI could be rooted in that psychology. Obviously I'm speculating
here, but it does factor into how I process his message, which is quite
dire.

That's not to say he's wrong. As you say, John, it's totally unpredictable,
but I think there's room for less dire narratives about how it could all
go.  But one thing I do 100% agree with is that we're not taking this
seriously enough and that the usual capitalist incentives are pushing us
into dangerous territory.

Terren


On Fri, Jul 14, 2023 at 1:11 PM John Clark <johnkcl...@gmail.com> wrote:

> Recently  Eliezer Yudkowsky gave a TED talk and basically said the human
> race is doomed. You can see it on YouTube and I put the following in the
> comment section:
> --
>
> I think Eliezer was right when he said nobody can predict what moves a
> chess program like Stockfish will make, but you can predict that it will
> beat you in a game of chess. That's because Stockfish is super good at
> playing chess but can't do anything else; it can't even think of anything
> else. An AI program like GPT-4 is different: it can think of a great many
> things besides chess, so you can't really predict what it will do. Sure,
> it has the ability to beat you at a game of chess, but for its own
> inscrutable reasons it may deliberately let you win. So yes, in a few
> years an AI will have the ability to exterminate the human race, but will
> it actually do so? I don't know, and I can't even give a probability of
> it occurring; all I can say is that the probability is greater than zero
> and less than 100%.
>
> Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED
> <https://www.youtube.com/watch?v=Yd0yQ9yxSYY>
>
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
