On 11/13/2025 4:01 AM, John Clark wrote:
On Wed, Nov 12, 2025 at 5:50 PM Brent Meeker <[email protected]> wrote:

    > Yet it [an AI] can't literally die.


*It can if you not only turn it off but also vaporize the AI's memory and backup chips with an H-bomb, or use some other less dramatic method to erase that information.*

        >> if it dies then it can't do any of the things that
        it _WANTED_ to do.

But if it's just OFF, that doesn't prevent it from doing what it wanted to do.  In fact it may become possible in the future for it to do something it wanted to do but which is impossible at present.


    > Your argument has a gap.  You argue that an AI necessarily will
    prefer to do this rather than that.  But that assumes it is
    "doing" and therefore choosing.


*That is not an assumption, that is a _fact_. It is beyond dispute that AIs are capable of "doing" things, and _just like us_ they did one thing rather than another thing for a _reason_, OR they did one thing rather than another thing for NO reason, and therefore their "choice" was _random_.*

You're overlooking that an AI, unlike a human, may not have any motivation at all.  It doesn't have built-in appetites.



    > But if it's OFF it's not choosing.  So why would it care
    whether or not it was OFF or ON?


*That is a silly question. If an Artificial "Intelligence" is not capable of imagining a likely future then it is not intelligent.*

Non sequitur.  It could imagine a future in which it was "asleep" and "woke up" much later.  Why would it care that it was OFF?

Brent
