Mike

You're correct. However, for AGI it's more than a component. In itself, 
quantum-based decision making constitutes a separate reasoning platform. To 
circumvent decades of R&D, many AI houses opted for the biotech (implant or 
biomimicking) options; that is, integrating with human DNA to try to gain 
digital functionality. This is doomed to limited success, with terrible 
consequences for the medical guinea pigs.

How so? A scrambled biological brain has no auto-restore function. We know this 
from a mature scientific body of knowledge on insults to the brain and the 
restoration of bodily functions.

Further, the mind image, which the experimenters would obtain digitally, cannot 
back up outcomes-based (highest) functionality, such as evolutionary 
decision-making competency, aka judgment.

There are theorems with which to digitize such competency for effective 
complexity (optimal efficiency), but not without a quantum-based, UI-reasoning 
platform (the other AGI component). So far, my methodology took 25 years to 
develop and test manually in the field.

As noted, my methodology is just one component of such a reasoning system. The 
KIM method is compatible for adding AI-based, statistical reasoning to 
'Essence' (my methodology). That forms the quantum basis.

Then, almost any feature can be added to this to complete the platform. In 
other words, different kinds of AGI brains could be architected and 
synchronized via a digital-consciousness network.

I'm still busy with the actual machine consciousness, but I've had numerous 
breakthroughs and I'm convinced it can be achieved effectively. This is the 
'Po1' research, which I've been working on, ad hoc, for the last few years.




________________________________
From: Mike Archbold <[email protected]>
Sent: Thursday, 25 January 2024 23:41
To: AGI <[email protected]>
Subject: Re: [agi] The future of AGI judgments



On Wed, Jan 24, 2024 at 8:08 PM Nanograte Knowledge Technologies 
<[email protected]> wrote:
Mike

What you might be searching for is what I would refer to as 'ambiguity 
management'. It's still machine reasoning though, as algorithmic logic. I think 
it's vital to separate this area of reasoning from 'prediction management'.

Most learning models take the approach that a semantic engine could resolve and 
manage ambiguity. As experience would teach, it cannot do so on its own. As a 
consequence, a lexicon and taxonomy (a nightmare of tables) can result. Lookup 
tables for AGI? Go figure!
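
(Purely as an illustration of the table-driven approach being criticized here: 
the words, senses, and context cues below are invented, and this is a minimal 
Python sketch, not anyone's actual system.)

# Hypothetical lexicon lookup for word-sense ambiguity. Every ambiguity
# has to be enumerated by hand, which is exactly the "tables nightmare".
LEXICON = {
    "bank": {"river": "bank/land-beside-water", "money": "bank/financial-institution"},
    "charge": {"battery": "charge/store-energy", "court": "charge/formal-accusation"},
}

def resolve(word, context):
    """Return the tabled sense whose context cue appears, else give up."""
    for cue, sense in LEXICON.get(word, {}).items():
        if cue in context:
            return sense
    return None  # anything not enumerated in the tables stays ambiguous

print(resolve("bank", {"river", "fishing"}))  # -> bank/land-beside-water
print(resolve("bank", {"interest", "loan"}))  # -> None: cues not in the table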

For AGI, one has to step away from the notion of clever apps and think 
holistically in terms of seamlessly integrated platform design. Effectively, 
one is designing a universe. In other words, at the least, part of the 
"brain-to-be" would perform semantic functionality, while another feature would 
manage decision making.

Robert, you mention "another component" and I've long thought there should be a 
discrete oracle component in an AGI. The oracle would be where the buck stops 
in decision making. Contemporary models like LLMs have no such separate 
component. It's all just output!
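
(As a rough sketch of the idea: the generator proposes, and a separate oracle 
decides or abstains. The names, scores, and threshold below are hypothetical 
placeholders in Python, not any existing model's API.)

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    evidence: float  # how well-supported the answer is, in [0, 1]

def generate(prompt):
    """Stand-in for an LLM: propose several arguably valid answers."""
    return [Candidate("Option A", 0.62), Candidate("Option B", 0.55),
            Candidate("Option C", 0.20)]

def oracle(candidates, floor=0.5):
    """Where the buck stops: accept the best-supported candidate inside
    the acceptable boundary, otherwise explicitly abstain."""
    admissible = [c for c in candidates if c.evidence >= floor]
    if not admissible:
        return "ABSTAIN: no candidate meets the evidential floor"
    return max(admissible, key=lambda c: c.evidence).text

print(oracle(generate("some open-ended question")))  # -> Option A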



A specialized area would probably be termed 'judgment management'. Both these 
areas of expertise should theoretically fall into a class called 'ambiguity 
management'.

If you revisited the now-ancient publication of my abstract-reasoning method on 
ResearchGate, you'd find mention of ambiguity. In physics terms, we may as well 
have called that 'relativity management'. It's a great, but scientifically 
intensive, research area.

Enjoy your quest

Robert
________________________________
From: Mike Archbold <[email protected]>
Sent: Thursday, 25 January 2024 04:31
To: AGI <[email protected]>
Subject: Re: [agi] The future of AGI judgments

I suppose what I am looking for is really in that space beyond the benchmark 
tests, in which clearly more than one decision is arguably valid within 
acceptable boundaries. How does the machine gauge what such acceptable 
boundaries are? What does the machine judge in cases with a scarcity of 
evidence in multiple dimensions?

Most of the emphasis in large model testing is on "understanding and reasoning" 
(two words that appear repeatedly in papers) but not really judging. Judging is 
what we do about the output of the AI. But ultimately we want the machine to 
really judge within acceptable boundaries given a scarcity of objective 
evidence. Now the models usually output something like "I am not comfortable 
answering that" or "I am so-and-so model but don't do that" or such. Some of 
this comes down to intuition and gut feel in humans -- that is, when faced with 
a novel situation.

On Wed, Jan 24, 2024 at 1:31 PM Mike Archbold <[email protected]> wrote:
James,

Thanks for the lead. I know the general nature of AIXI but haven't read the 
paper. Basically what you are arguing, I think, is that everything done by a 
machine is a judgment, since ultimately it's only subjective. So we cannot 
readily distinguish "fact" from "judgment" in a machine, and the point is 
argued by Brian Cantwell Smith in "The Promise of Artificial Intelligence: 
Reckoning and Judgment."

But the climate of opinion and the practical nature of modern AI is about 
meeting benchmarks in tests, so there is some objectivity anyway, like it or 
not... the benchmark tests are more or less inescapably "objective", I think.

On Tue, Jan 23, 2024 at 2:55 PM James Bowery <[email protected]> wrote:
There are two senses in which "subjective" applies to AGI, and one must very 
carefully distinguish between them or you'll end up in the weeds:

1) One's observations (measurement instruments) are inescapably "localized" 
within the universe hence are, in that sense, "subjective".  See Hutter's paper 
"A Complete Theory of Everything (will be subjective)".   But note that one may 
nevertheless speak of the "ToE" which one constructs from one's "subjective" 
experiences, as an "objective" theory in the sense that one may shift one's 
perspective and measurement instruments without losing what one might think of 
as the canonical knowledge about the world aka "world model" that is abstracted 
from such localization parameters.

2) One's "judgements" as you call them, or "decisions" as AIXI calls them via 
Sequential Decision Theory, are inescapably subjective in the vernacular sense 
of "subjective", where one places values on one's experiences via the utility 
function that parameterizes SDT.
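
(For readers who haven't seen the formalism, here is a simplified sketch of the 
AIXI-style sequential decision rule being referred to, in LaTeX notation 
roughly after Hutter: the agent picks the action maximizing expected future 
reward under the universal mixture \xi, and the subjectivity mentioned above 
enters through the reward/utility terms r_k ... r_m.

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m)\, \xi(o_{k:m} r_{k:m} \mid a_{1:m}, o_{<k} r_{<k})

This is only a sketch; the full definition is in Hutter's work.)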

If you're going to depart from AIXI or elaborate it in some way, then it is 
important to understand where, in its very concise formalization, one is 
performing one's amputation and/or enhancement.


On Tue, Jan 23, 2024 at 3:55 PM Mike Archbold <[email protected]> wrote:
Hey everybody, I've been doing some research on the topic of judgments in AI. 
Looking for some leads on where the art/science of decision making is heading 
in AI/AGI. Note: by "judgment" I mean situations that involve a decision open 
to values within boundaries, not one that can be immediately and objectively 
judged correct or incorrect.

Lately I have been studying LLM-as-a-Judge theory. I might do a survey or such, 
not sure... looking for leads, comments etc.

Thanks Mike Archbold
