On Thu, Mar 13, 2025 at 8:42 PM Brent Meeker <meekerbr...@gmail.com> wrote:


*>> I understand the motivation of a modern large language model about as
>> well as I understand the motivation of one of my fellow human beings. I'm
>> sure there have been times when you saw somebody do something very strange,
>> so you asked "why did you do that?", and you did not find the response
>> satisfactory. Sometimes the only response possible was "because I wanted
>> to", because the person was unable to explain the detailed pattern of
>> neuron firings that caused him to do what he did. Exactly the same thing
>> could be said about a modern AI.*
>
>
> *> But now you've elided any reference to intelligence.  Yet you wouldn't
> infer that the person was not conscious, simply because he couldn't explain
> his action.  *
>

*It's true that I can't conclude a person is not intelligent and conscious
just because I don't know his motivations. I can't always explain, even to
myself, why I did what I did, other than "I just wanted to", and yet I know
for a fact that I am conscious, and sometimes my actions are slightly more
intelligent than a rock's. And EXACTLY the same thing can be said about an
AI; so you were wrong when you said we must understand the motivations of
an AI before we can say it is intelligent. *


> * My point is that "conscious" means different things, or may be said to
> have many components. *
>

*Use any definition of consciousness you like, but to be useful the
definition must be made out of observables, not just a bunch of synonyms
for the word "consciousness". And most important of all, whatever
definition you end up using, you've got to play fair and apply that same
definition when judging an AI.*


> *> One of them is making and carrying out plans, which you see as
> intelligent action.  But you must know from your own experience that this
> involves imagining the consequences of sequences of actions; that's what
> it means "to plan". *
>


*And the hot new thing right now is "Agentic AI", which can do precisely
what you described above, including simulating the consequences of
potential actions before it takes them.*

* > And that imagination is a kind of consciousness, which necessarily
> entails internal representations of the world.  Do you think an AI could
> do it some other way?*
>

*Making good plans without thinking about what the consequences of those
plans might be? No, I don't think that's possible.  *

* John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>*
