On Fri, Nov 17, 2023, 4:25 PM <[email protected]> wrote:

> On Friday, November 17, 2023, at 10:15 PM, WriterOfMinds wrote:
>
> but what the entity is using that intelligence to achieve.
>
> So, maybe any ideas on how to choose goals other than learning from role
> models?
>

LLMs can pass the Turing test just fine without choosing any goals, unless
you count text compression as a goal.

The difference between having a goal and following some other algorithm is
that a goal means you don't know the inverse of the utility function, so
you have to search. Without this distinction you could describe a linear
regression algorithm as having a goal of fitting a line to a set of points.
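
To make that concrete, here's a rough sketch in Python/NumPy (mine, not
anything from the thread; the data points are made up). The regression has a
closed-form answer, so no search is needed; the same fit treated as a "goal"
that can only be scored has to be searched:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 3.0, 5.0, 7.0])
    A = np.vstack([x, np.ones_like(x)]).T

    # Case 1: linear regression. The inverse of the utility (negative
    # squared error) is known, so the best line comes from a closed-form
    # solve, with no search at all.
    slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

    # Case 2: treat the same fit as a "goal". We can only score a
    # candidate, not invert the score, so we have to search candidates.
    def utility(candidate):
        return -np.sum((A @ candidate - y) ** 2)

    rng = np.random.default_rng(0)
    candidates = [rng.standard_normal(2) for _ in range(1000)]
    best = max(candidates, key=utility)  # search, since no inverse is known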

We use goals as a shorthand for describing human behavior, but that framing
implies we search for the actions that would maximize reward. That's not what
we do. What we actually do is repeat actions that were rewarded in the past.
The difference is why you don't have an overwhelming desire to inject heroin
unless you have already tried it.
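
A toy sketch of that difference (again mine, with invented action names and
reward numbers): the agent below only raises its estimate of actions it has
actually tried, so an untried action never looks attractive, no matter how
rewarding it would be.

    import random

    actions = ["coffee", "exercise", "heroin"]
    value = {a: 0.0 for a in actions}    # learned value, starts neutral
    tried = {a: False for a in actions}

    def reward(action):
        # Invented numbers; "heroin" scores highest if ever sampled.
        return {"coffee": 1.0, "exercise": 2.0, "heroin": 100.0}[action]

    random.seed(0)
    for _ in range(50):
        tried_before = [a for a in actions if tried[a]]
        if tried_before and random.random() < 0.95:
            # Repeat whatever was rewarded in the past...
            action = max(tried_before, key=lambda a: value[a])
        else:
            # ...with only occasional blind exploration.
            action = random.choice(actions)
        tried[action] = True
        value[action] += 0.1 * (reward(action) - value[action])

    # value["heroin"] stays at 0.0 unless exploration happened to try it;
    # an agent that searched for the reward-maximizing action would go
    # straight for it.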

You are following an algorithm. You were born knowing to yank your hand out
of a fire without having to test every possible body movement to learn which
ones stop the pain. We are fortunate that an AI can learn millions of years
of evolved human behavior without having to repeat evolution, and that this
is also the safer way to do it.
