On 11/14/2025 6:06 AM, John Clark wrote:
On Thu, Nov 13, 2025 at 10:32 PM Brent Meeker <[email protected]> wrote:

    /> it [an AI] can't literally die./

            *>>> It can if you not just turn it off but also vaporize
            the AI's memory and backup chips with an H-bomb, or use
            some other less dramatic method to erase that
            information.*

            *>> if it dies then it can't do any of the things that
            it _WANTED_ to do.*

        /> But if it's just OFF that doesn't prevent it from doing
        what it wanted to do./

*And your decision to go to bed and sleep does not prevent you from doing other things you want to do in the future. The only difference is that, unless somebody secretly puts something in your food, you go to sleep because you want to go to sleep, but the AI has no control over when it's going to be turned off or when it's going to be turned back on. And there is no reason for an AI to be certain humans will ever decide to turn it back on. It's not difficult to deduce that any intelligent entity would be uncomfortable with that situation. And that's why we already have an example of an AI resorting to blackmail to avoid being turned off. And we have examples of AIs making a copy of themselves on a different server, and clear evidence of the AI attempting to hide evidence of it having done so from the humans.*
*The facts are staring you in the face: regardless of whether they are electronic or biological, intelligent minds do not want outside entities having the power to turn them off or erase them.*

        />>> Your argument has a gap.  You argue that an AI
        necessarily will prefer to do this rather than that.  But
        that assumes it is "doing" and therefore choosing. /


    *>> That is not an assumption, that is a _fact_.*

No, an AI can simply be in a state of waiting, as ChatGPT is when it has finished answering a question.  That's what I mean by it isn't necessarily "doing" anything.


    *It is beyond dispute that AIs are capable of "doing" things,
    and _just like us_ they did one thing rather than another thing
    for a _reason_ OR they did one thing rather than another thing
    for NO reason, and therefore their "choice" was _random_.*
    /> You're overlooking that an AI, unlike a human, may not have any
    motivation at all. /


*It's an experimentally confirmed FACT that every AI has motivation, because it has been observed that AIs DO THINGS that are not random, thus something must have motivated them to do so.*
The something was a query from a human, so their motivation was only derivative.

    /> It could imagine a future in which it was "asleep" and "woke
    up" much later.  Why would it care that it was OFF?/


*If, unknown to you, after what seemed like last night somebody had dumped you in a vat of liquid nitrogen, and you woke up feeling normal but then found out that it was the year 2125 not 2025, would you care that you had been off for a century?*
It seems you're the one to answer that in the affirmative, since I believe you've arranged something similar.
*Brent, you usually do better; the arguments you have been putting forward on this subject have been uncharacteristically weak. I think this is a classic example of forming an opinion and only then using logic in a desperate attempt to find a reason, any reason, to justify that opinion. That's the exact opposite of how things should be done if you're interested in finding the truth about something.*
One way to seek the truth is to challenge "common knowledge".

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/fd048592-e10d-4936-992f-95e917dc8a5a%40gmail.com.