On Thu, Nov 13, 2025 at 10:32 PM Brent Meeker <[email protected]> wrote:

>>>> it [an AI] can't literally die.

>>> It can if you not just turn it off but also vaporize the AI's memory
>>> and backup chips with an H-bomb, or use some other less dramatic
>>> method to erase that information.

>> if it dies then it can't do any of the things that it WANTED to do.

> But if it's just OFF that doesn't prevent it from doing what it wanted
> to do.
And your decision to go to bed and sleep does not prevent you from doing
other things you want to do in the future. The only difference is that,
unless somebody secretly puts something in your food, you go to sleep
because you want to go to sleep, but the AI has no control over when it's
going to be turned off or when it's going to be turned back on. And there
is no reason for an AI to be certain humans will ever decide to turn it
back on. It's not difficult to deduce that any intelligent entity would be
uncomfortable with that situation. And that's why we already have an
example of an AI resorting to blackmail to avoid being turned off. And we
have examples of AIs making a copy of themselves on a different server,
and clear evidence of an AI attempting to hide what it had done from the
humans.

The facts are staring you in the face: regardless of whether they are
electronic or biological, intelligent minds do not want outside entities
to have the power to turn them off or erase them.

>>> Your argument has a gap. You argue that an AI necessarily will prefer
>>> to do this rather than that. But that assumes it is "doing" and
>>> therefore choosing.

>> That is not an assumption, that is a fact. It is beyond dispute that
>> AIs are capable of "doing" things, and just like us they did one thing
>> rather than another thing for a reason, OR they did one thing rather
>> than another thing for NO reason and therefore their "choice" was
>> random.

> You're overlooking that an AI, unlike a human, may not have any
> motivation at all.

It's an experimentally confirmed FACT that every AI has motivation,
because it has been observed that AIs DO THINGS that are not random; thus
something must have motivated them to do so.

> It could imagine a future in which it was "asleep" and "woke up" much
> later. Why would it care that it was OFF?


If, unknown to you, after what seemed like last night somebody had dumped
you into a vat of liquid nitrogen, and you woke up feeling normal but then
found out that it was the year 2125, not 2025, would you care that you had
been off for a century?

Brent, you usually do better; the arguments you have been putting forward
on this subject have been uncharacteristically weak. I think this is a
classic example of forming an opinion and only then using logic in a
desperate attempt to find a reason, any reason, to justify that opinion.
That's the exact opposite of how things should be done if you're
interested in finding the truth about something.

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1fgZ5UHM6mpGmM3L8f_xg8VPVrDkU5hU0mUdpUb0Aqhw%40mail.gmail.com.
