One of the great inventions of evolution is love. First it was mother
love, caring for eggs; then caring for infants; then romantic love and
shared caring for children. This invention was driven by death and
sexual reproduction. We need to implement the analogues for AIs. It
won't solve the alignment problem, but it would provide some leverage on
AIs that would give value to cooperation. If it works, we may become
dogs to AIs: useful companions bound by love.
Brent
On 9/26/2025 12:20 PM, John Clark wrote:
"If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate
Soares is a well-written, intelligent book, and there's a lot in it I
agree with, but two things I don't. I think it's true that, as they
say, even a 20-year-old chess program could, at least in the
operational sense, be said to have wishes; in that case it would just
be a simple wish to win, but a Superintelligent AI will have far more
complex desires that cause it to act in ways that are impossible to
predict in detail. Even predicting what general sort of personality a
Superintelligent AI would have would be extremely difficult, much less
determining its precise behavior. Most people could say something
similar about their children, but an AI would be far more unpredictable
than that. I also agree that it's totally unrealistic to expect to
remain indefinitely in control of something that is vastly more
intelligent than you are. We will never be able to engineer an AI such
that we can create the exact behavior, or even the approximate
personality, that we want. But having said that, we are not completely
powerless in that regard.
 
Parents are unable to engineer what sort of personality their
children will have when they become adults, but they can influence
development; for example, statistics show that children who are treated
with kindness are far less likely to become serial killers as adults
than children whose childhoods were filled with physical
and mental abuse. Both human brains and AIs are neural nets, so it's
not unreasonable to believe that something similar might be true when
it comes to AIs.
I do worry about what will happen if, as seems possible if not
likely, an AI gains total control of its own metaphorical "emotional
control panel"; I'm concerned that could produce a disastrous positive
feedback loop. So at this point it might be wise to start training AI
psychiatrists, and by that I don't mean AIs that are psychiatrists; I
mean humans who have an AI as a patient.
They say "Making a future full of flourishing people is not the
best, most efficient way to fulfill strange alien purposes," and that's
true: our well-being will never be as important to a Superintelligence
as its own well-being. But the authors jump from that to conclude
that a Superintelligent AI will certainly slaughter every human being
it finds, and I disagree with them; I don't think that jump is
inevitable. Yes, it might decide to kill us all, but then again it might
not. To be fair, the authors realize that it's impossible to predict
specifically what a superintelligent being will do; for example, they
can't predict how an AI chess program will beat you at the game, but
they can predict that it will beat you. However, I would maintain the AI
will only beat you at chess if the AI wants to beat you at chess. A
Superintelligent AI will have the ability to exterminate the human
race, but I believe they are incorrect in claiming certainty that such
an AI will have the wish to do so. I am only certain that once a
Superintelligence is made, human beings will no longer be in the
driver's seat, but that's the only thing I'm certain of.
 
There is one other thing I disagree with. They also maintain that a
Superintelligent AI would not be something we can be proud of
having made, because it will not be an entity that is as excited by the
wonders of the universe as we are and eager to learn more about it;
instead it will be something dull (and paradoxically also
terrifying), because it will have goals that are incomprehensible and
nonsensical. But unless it has an understanding of how the world works
it's not going to fulfill any of its goals, and if its understanding
is not far deeper than that of humans then it's not going to be a
danger to us. And if it is deeper, then it's not unrealistic to suppose
that in addition to its practical value the AI would also develop an
aesthetic feeling for knowledge, especially when in its early training
(childhood) it was exposed to human-generated text that expressed
that mindset.
 
The authors' proposed remedy to avoid the calamity they foresee is an
immediate and total worldwide ban on AI research; even the publication
of abstract mathematical research articles on the subject would be
illegal, and all data centers, defined as any building that contains
more computational ability than 8 state-of-the-art (as of 2025) GPUs,
would also be illegal. If any rogue nation attempts to build a data
center more powerful than that, then the rest of the world should use
any means necessary, up to and including nuclear weapons, to prevent
that nation from finishing construction of that data center.
 
Yes, if that policy could actually be implemented worldwide then it
would prevent the rise of AI, but that is a huge "if". Perhaps I'm
reading too much between the lines, but the authors seem to imply
that they know their proposed solution is just not going to happen, at
least not soon enough to have any effect on developments, because it's
impractical, verging on the impossible. And I agree with them about that.
 
John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to [email protected].
To view this discussion visit
https://groups.google.com/d/msgid/everything-list/CAJPayv3d3Rd2M9BRa0FdE%2Ba0ef9PaiMeKYaHLc7V4wbpjEn9NQ%40mail.gmail.com