Sorry for the confusion. I had "motivations" written there just because I
am motivated to do a study on the judgement faculty. Motivation, of course,
is also crucial, so thanks!

On Fri, May 20, 2022, 1:39 AM Daniel Jue <[email protected]> wrote:

>
> On point #1, maybe we can expound on the meaning of "judgement".  By the
> dictionary it's "an opinion or conclusion", but in common parlance we might
> interpret a judgement as "a belief that we desire others to agree with".
> An even more cynical take would be "a belief we expect to be accepted by
> others on grounds of appeal to authority, i.e. an expert or judge".  We
> might even establish a chain of social value, from greatest to least:
>
> 1) Correct Judgement
> 2) Correct Opinion
> 3) Incorrect Judgement
> 4) Incorrect Opinion
>
> Here correctness is based on social acceptance, and judgement vs. opinion
> is based on the source's expertise in the subject matter.  Social value is
> a measure of long-term durability and of the benefit of building upon the
> belief.  For instance, one might have a "Correct Judgement" that the
> "concept of laws" has long-term durability, and that there are societal
> benefits from building on (instantiating) laws.  Conversely, the
> "incorrect opinion" that some laws are meant to be broken may not have
> long-term durability, and may carry a negative social benefit if acted
> upon.
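>
> To make the ranking concrete, here is a minimal Python sketch; the two
> axes (social acceptance and source expertise) and all the names are just
> my reading of the scheme above, nothing formal:
>
>     from dataclasses import dataclass
>     from enum import IntEnum
>
>     class SocialValue(IntEnum):
>         # Greatest to least, per the chain above.
>         CORRECT_JUDGEMENT = 4
>         CORRECT_OPINION = 3
>         INCORRECT_JUDGEMENT = 2
>         INCORRECT_OPINION = 1
>
>     @dataclass
>     class Belief:
>         statement: str
>         socially_accepted: bool  # "correctness" = social acceptance
>         source_is_expert: bool   # judgement vs. opinion = expertise
>
>     def social_value(b: Belief) -> SocialValue:
>         if b.socially_accepted:
>             return (SocialValue.CORRECT_JUDGEMENT if b.source_is_expert
>                     else SocialValue.CORRECT_OPINION)
>         return (SocialValue.INCORRECT_JUDGEMENT if b.source_is_expert
>                 else SocialValue.INCORRECT_OPINION)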
>
> On point #2, I think only activist AI ethicists are fighting for control
> over the grounds of the decisions (because they see the danger if no one
> champions the social good); I believe you are right that a majority of AI
> developers are not concerning themselves with the grounds, beyond subtly
> allowing their own bias.  (Not pointing fingers; in most AI approaches I
> believe escaping your own group's biases is impossible.)
>
> AIKR (the Assumption of Insufficient Knowledge and Resources) taken to
> heart on point #3.
>
> Agreed on the premise of point #4, and I'll inject that an AGI developer
> ought to know that a judgement is an opinion within a space-time-context
> frame.
> I've continued work on something I'm calling Facet Theory, which is
> independent of Guttman's theory of the same name on Wikipedia.  The goal
> is to model contiguous space-time-context frames of understanding as
> something called a facet; facets have interesting properties where two or
> more of them meet along an edge, for instance where two similar but
> incompatible paradigms (groups of opinions) meet, e.g. Newtonian and
> relativistic physics.  In that example I like to visualize those as two
> facets on a 3D diamond.  They are both reasonable approximations of
> understanding for their own particular space, time, and context.  They may
> both be useful models of how the world works (even at the same time in
> history).  One may reign supreme for centuries, and just because a more
> accurate, "better" understanding is discovered in the future does not mean
> that it is totally replaced.  "Context" in "space-time-context" is a
> catch-all for other dimensions of understanding (a rough sketch of a facet
> as a data structure follows the list below), such as:
> * Socio-Cultural (body language in a certain social or cultural situation)
> * Information Source (the news being dire, but from a certain news channel)
> * Related Context (the food being good, related to a particular lunch
> event; a building having a good aesthetic, within the concept of Brutalist
> architecture)
> * etc.
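>
> Here is that rough sketch of a facet as a data structure, in Python; the
> fields and the trivial edge test are my own assumptions, not settled
> notation:
>
>     from dataclasses import dataclass, field
>
>     @dataclass
>     class Frame:
>         """A space-time-context region over which a paradigm holds."""
>         space: str                # e.g. "macroscopic scales"
>         time: str                 # e.g. "pre-1905 physics"
>         context: dict = field(default_factory=dict)  # source, culture, ...
>
>     @dataclass
>     class Facet:
>         """A contiguous frame of understanding, e.g. Newtonian physics."""
>         name: str
>         frame: Frame
>         opinions: set = field(default_factory=set)
>
>     def share_edge(a: Facet, b: Facet) -> bool:
>         """Placeholder test for two facets meeting 'along an edge':
>         the same region of space, but conflicting (disjoint) opinions."""
>         return (a.frame.space == b.frame.space
>                 and not a.opinions & b.opinions)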
>
>
> In this way the AGI may not only survive cognitive dissonance, but thrive
> in it by cognizing at levels above the compartmentalized facets
> it encounters.
>
> Something that is related to (my) Facet Theory but not quite the same is
> Prototype theory, where our esteemed Antonio Lieto gets a mention:
> https://en.wikipedia.org/wiki/Prototype_theory
> I suppose a prototype of a concept would exist at the centroid of one of
> my facets.
>
> What we think of as subjective judgements are just opinions whose variance
> is greatest along the contextual dimension (i.e. by person or place), with
> the second-greatest variance over time (your favorite music might change
> over your lifetime).
> However, objective judgements can be modeled as opinions with less
> variance over context, space, and time, but still subject to instances
> where they do not hold true.  "This apple" refers to an apple in a
> particular space and time, but in the context of an entity moving close to
> c, the apple may not be red.  It also may not be red, as distinguishable
> from green, in the context of a colorblind person.
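>
> A toy quantification of that variance idea (my own framing; the scores,
> dimensions, and threshold are invented for illustration):
>
>     from statistics import pvariance
>
>     # Agreement with "this apple is red" in [0, 1], sampled per dimension.
>     observations = {
>         "context": [1.0, 1.0, 0.2],  # includes a colorblind observer
>         "space":   [1.0, 1.0, 1.0],
>         "time":    [1.0, 1.0, 0.9],  # the apple eventually browns
>     }
>
>     def judgement_kind(obs: dict, threshold: float = 0.05) -> str:
>         """Objective: agreement varies little across every dimension;
>         subjective: it swings widely on at least one."""
>         worst = max(pvariance(scores) for scores in obs.values())
>         return "objective" if worst < threshold else "subjective"
>
>     print(judgement_kind(observations))  # -> "subjective" here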
>
> Self-referential statements such as "I think, therefore I am", or
> axiomatic systems like mathematics, absolve themselves of many dimensions
> and therefore seem profound to us because they are without counterexample
> across time, space, or context.
> Fleeting statements such as "my head itches right now" seem least profound
> simply because of the number of dimensional constraints on the facet.
>
> NARS has some great parts, and part of my substrate is based on what NARS
> can achieve.  However, there are a great many things which, once
> incorporated into NARS, allow a more human-compatible understanding to
> take place.  For instance, instead of only the frequency and confidence of
> an experience, also incorporating a learned confidence in the data source,
> when the experience was learned, etc.
>
> You may imagine a human case where you have to choose between believing
> your brother, who has old information, or a dubious politician.  An AGI
> will be put in analogous situations, and a conscious system also consumes
> the reflection on its own past decisions, not only the reputation of
> external data sources.
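>
> A minimal sketch of that extension (the (frequency, confidence) pair is
> NARS's; the trust and age discounting below is my own invention, not a
> NARS rule, and the half-life is arbitrary):
>
>     from dataclasses import dataclass
>     import time
>
>     @dataclass
>     class ExtendedBelief:
>         frequency: float     # NARS: proportion of positive evidence
>         confidence: float    # NARS: evidence weight w / (w + k)
>         source_trust: float  # learned reliability of the source, [0, 1]
>         learned_at: float    # timestamp when the experience was learned
>
>         def effective_confidence(self, half_life: float = 3.0e7) -> float:
>             """Discount confidence by source trust and by age in seconds
>             (3.0e7 s is roughly one year)."""
>             age = time.time() - self.learned_at
>             return (self.confidence * self.source_trust
>                     * 0.5 ** (age / half_life))
>
>     # Your brother: trusted but stale; the politician: fresh but dubious.
>     brother = ExtendedBelief(0.9, 0.8, 0.9, time.time() - 6.0e7)
>     politician = ExtendedBelief(0.9, 0.8, 0.4, time.time())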
>
> For the last part of point #4, saying "follow the money" may sound cynical
> but it's rarely wrong.  Modern AI developers (especially of weak AI) will
> likely build systems to maximize stakeholders' returns, whatever the
> judgements.
>
> I don't have it all figured out, but this past year of sabbatical has been
> a tremendous help.  Instead of my trying to cram my own opinions into the
> AGI, it needs to be able to interpret reality on its own, and learn to
> reflect on its own judgements like a child.  From an AI safety perspective
> I've embarked on what I call Pathology First Development, which is
> basically generating failure modes of being that are analogous to human
> neuropathologies and psychopathologies.  The motivation is that if
> pathological behavior patterns can be simulated and recognized, an AGI
> could be taught to avoid behaviors that lead to these patterns.
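>
> One way a pathology detector might look (entirely speculative on my part;
> the pathology analogue, signal, and threshold are invented):
>
>     from collections import Counter
>
>     def perseveration_score(recent_actions: list, window: int = 20) -> float:
>         """Analogue of perseveration: repeating one action despite
>         changing inputs.  Score = share of the window taken by the
>         single most common action."""
>         tail = recent_actions[-window:]
>         if not tail:
>             return 0.0
>         (_, count), = Counter(tail).most_common(1)
>         return count / len(tail)
>
>     # During simulation, a supervisor could veto or retrain whenever the
>     # score crosses a threshold, teaching the agent to avoid the pattern.
>     if perseveration_score(["retry"] * 18 + ["plan", "retry"]) > 0.8:
>         print("pathological repetition detected")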
>
> Daniel Jue
>
>
> On Fri, May 20, 2022 at 1:48 AM Mike Archbold <[email protected]> wrote:
>
>> MOTIVATIONS
>> 
>> Is the following fair?
>> 
>> * There seems to be a prevailing, tacit climate of opinion in AI that
>> a judgment is correct primarily if it is 1) equivalent to prior
>> human-made judgments (supervised) or 2) due to rewards.
>> 
>> *Thus there seems to be no need for developers to concern themselves
>> with the grounds of the decision; i.e., WHY *this* specific judgment?
>> (just point to the data and precedent)
>> 
>> *But the problem is the combinatorial explosion:  in real-world
>> settings novel variations often occur, each nuance bearing a thousand
>> little preferences, values, and probabilities that have not
>> specifically been trained for.
>> 
>> *So it seems like an AGI developer ought to know what a judgment is,
>> and to design accordingly. For example, we can think of purely
>> objective judgments ("this apple is red") or more subjective judgments
>> ("I think blues is superior to jazz, but not in the future").
>> Presently it seems like modern AI regards both as valid if the answer
>> fits some pattern.
>> 
>> ~~~~~~~~~~
>> 
>> By the way, I know NARS holds "judgment" as one of its major
>> components (need to re-examine).
>> 
>> Mike A
>
>
> --
> Daniel Jue
> Cognami LLC
> 240-515-7802
> www.cognami.ai
>
