On 3/14/2025 4:47 AM, John Clark wrote:
On Thu, Mar 13, 2025 at 8:42 PM Brent Meeker <meekerbr...@gmail.com> wrote:


        *>> I understand the motivation of a modern large language
        model about as well as I understand the motivation of one of my
        fellow human beings. I'm sure there have been times when you
        saw somebody do something very strange so you asked "why did
        you do that?" and the response you received you did not
        consider satisfactory. Sometimes the only response possible
        was "because I wanted to" because the person is unable to
        explain the detailed pattern of neuron firings that caused him
        to do what he did. Exactly the same thing could be said about
        a modern AI.*


    /> But now you've elided any reference to intelligence.  Yet you
    wouldn't infer that the person was not conscious, simply because
    he couldn't explain his action. /


*It's true I can't conclude a person is not intelligent and conscious just because I don't know his motivations. I can't always explain, even to myself, why I did what I did other than "I just wanted to", and yet I know for a fact that I am conscious, and sometimes my actions are slightly more intelligent than a rock's actions. And EXACTLY the same thing can be said about an AI; so you were wrong when you said we have to understand the motivations of an AI before we can say it is intelligent.*
So if the AI loses every chess game we can say it's unintelligent without knowing whether it wanted to win?

    /My point is that "conscious" means different things, or may be
    said to have many components. /


*Use any definition of consciousness you like, but to be useful the definition must be made of observables and not just be a bunch of synonyms for the word "consciousness". And most important of all, whatever definition you end up using, you've got to play fair and use the same definition when judging an AI.*

    /> One of them is making and carrying out plans, which you see as
    intelligent action.  But you must know from your own experience
    that this involves imagining the consequence of sequences of
    action; that's what it means "to plan". /


*And the hot new thing right now is "Agentic AI", which can do precisely what you described above, including simulating the consequences of potential actions it could take. *

    /> And that imagination is a kind of consciousness, which
    necessarily entails internal representations of the world.  Do you
    think an AI could do it some other way./


*Making good plans without thinking about what the consequences of those plans might be? No, I don't think that's possible.*
Then we're in agreement that consciousness is not just a spandrel, and that all things with human-level intelligence probably have it.

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/97fbcd0e-1945-4ecf-a390-7c82865b80d5%40gmail.com.