MOTIVATIONS

Is the following fair?

* There seems to be a prevailing, tacit climate of opinion in AI that
a judgment is correct primarily if it is 1) equivalent to prior human
judgments (supervised learning) or 2) reinforced by rewards.

* Thus there seems to be no need for developers to concern themselves
with the grounds of a decision; i.e., WHY *this* specific judgment?
(Just point to the data and precedent.)

* But the problem is combinatorial explosion: in real-world settings,
novel variations often occur, each nuance bearing a thousand little
preferences, values, and probabilities that have not specifically been
trained for.

* So it seems an AGI developer ought to know what a judgment is, and
design accordingly. For example, we can distinguish purely objective
judgments ("this apple is red") from more subjective judgments ("I
think blues is superior to jazz, but not in the future"). Presently,
modern AI seems to regard both as valid if the answer fits some
pattern.

~~~~~~~~~~

By the way, I know NARS holds "judgment" as one of its major
components; I need to re-examine it.

Mike A

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta80108e594369c8d-M39f2d1049f5f979faedd0394