On Wed, Nov 12, 2025 at 5:50 PM Brent Meeker <[email protected]> wrote:

> > *Yet it* [an AI] *can't literally die.*
>

*It can if you not only turn it off but also vaporize the AI's memory and
backup chips with an H-bomb, or use some other less dramatic method to
erase that information. *

> * >> if it dies then it can't do any of the things that it WANTED to do. *
>
>
> *> Your argument has a gap.  You argue that an AI necessarily will prefer
> to do this rather than that.  But that assumes it is "doing" and therefore
> choosing. *
>

*That is not an assumption, that is a fact. It is beyond dispute that AIs
are capable of "doing" things, and just like us they did one thing rather
than another thing for a reason, OR they did one thing rather than another
thing for NO reason, and therefore their "choice" was random. *


> >
> *But if it's OFF it's not choosing.  So why would it care whether or not
> it was OFF or ON?*
>

*That is a silly question. If an Artificial "Intelligence" is not capable
of imagining a likely future then it is not intelligent.*

* John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>*
-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv0A3e2vEwy4ke3%2B_E-oFpC%3Dbyo%2BUDZ%3DUY_EhqoBe58ciA%40mail.gmail.com.