I wonder if Mike is nodding implicitly to the thesis of "The Promise of
Artificial Intelligence" by Brian Cantwell Smith (BCS), in which he identifies
"judgement" as a key feature of true intelligence as realized by living
agents, in contrast with a less authentic alternative he calls
"reckoning".  His aim is to work out what would be needed to build machines
that exercise true judgement.  BCS contends that judgement demands pragmatic
involvement in the world, such that something is genuinely at stake for the
agent.  For living agents, the ultimate source of this skin in the game is
the imperative of staying far from thermodynamic equilibrium, an objective
that steers every level of self-maintenant process, from self-repair to
immune response to attention and awareness.

On Fri, May 20, 2022 at 12:12 PM Boris Kazachenko <cogno...@gmail.com>
wrote:

> Then I guess your "judgement" is top-level choices. The problem is, GI
> can't have a fixed top level, forming incrementally higher levels of
> generalization is what scalable learning is all about. So, any choice on
> any level is "judgement", which renders the term meaningless.
>
