Right now I'm just trying to sort out my main motivations for holding
event(s) on meetup about judgments. Mostly it is that modern AI seems
hardly aware of what a judgment is. In the good old days, people would
scrutinize the rules and the lines of code and have a good idea of the
software's expected output (i.e., the "judgment"). Now, it seems, who
cares what decision the machine makes, so long as it corresponds to
the test data in a close enough statistical fashion?

-- the answer to "who cares?" seems to be AI ethicists and the problem of bias.
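
To make that contrast concrete, here is a toy sketch in Python (the
rule, the threshold, and the test data are all invented for
illustration, not drawn from any real system). The old style pins down
the exact judgment expected for every input; the statistical style only
asks that the model agree with held-out data often enough in aggregate:

# Old style: the rule is explicit and the expected output (the
# "judgment") is specified exactly; any deviation is a bug.
def approve_loan(income: float) -> str:
    return "approve" if income >= 50_000 else "deny"

assert approve_loan(60_000) == "approve"  # exact expected judgment
assert approve_loan(40_000) == "deny"

# New style: nobody specifies individual judgments; the model only
# has to match the test data closely enough in aggregate.
test_data = [(60_000, "approve"), (40_000, "deny"), (75_000, "approve")]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

assert accuracy(approve_loan, test_data) >= 0.95  # statistical acceptance

In the first style a single wrong judgment fails the build; in the
second, a model can pass while making individual decisions nobody ever
inspected.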

On 5/20/22, Mike Archbold <jazzbo...@gmail.com> wrote:
> Robert, thanks for the tip on the book -- I will order a copy. He
> sounds a bit like Kant, who basically held judgment to be central.
> From the Stanford Encyclopedia of Philosophy:
>
> Kant’s taking the innate capacity for judgment to be the central
> cognitive faculty of the human mind, in the sense that judgment, alone
> among our various cognitive achievements, is the joint product of all
> of the other cognitive faculties operating coherently and
> systematically together under a single higher-order unity of rational
> self-consciousness (the centrality thesis)
>
> On 5/20/22, Robert Levy <r.p.l...@gmail.com> wrote:
>> I wonder if Mike is nodding implicitly to the thesis of "The Promise of
>> Artificial Intelligence" by Brian Cantwell Smith, in which he defines
>> "judgement" as a key feature of true intelligence as realized by living
>> agents, in contrast with a less authentic alternative he calls
>> "reckoning". His aim is to work out what is needed to get to machines
>> that exercise true judgement. BCS contends that judgement demands a
>> pragmatic involvement in the world such that something really is at
>> stake for the agent. For living agents, the ultimate source of this
>> skin in the game is the imperative of staying far from thermodynamic
>> equilibrium, an objective that steers all levels of self-maintenant
>> process, from self-repair to immune response to attention and awareness.
>>
>> On Fri, May 20, 2022 at 12:12 PM Boris Kazachenko <cogno...@gmail.com>
>> wrote:
>>
>>> Then I guess your "judgement" is top-level choices. The problem is that
>>> GI can't have a fixed top level: forming incrementally higher levels of
>>> generalization is what scalable learning is all about. So a choice on
>>> any level is a "judgement", which renders the term meaningless.
