Here's an example of what I mean by AIXI being valuable in practice:

NVIDIA now couples its terms of service with the following injunction to
AI developers to avoid "algorithmic bias".

Ethical AI
> NVIDIA’s platforms and application frameworks enable developers to build a
> wide array of AI applications. Consider potential algorithmic bias when
> choosing or creating the models being deployed. Work with the model’s
> developer to ensure that it meets the requirements for the relevant
> industry and use case; that the necessary instruction and documentation are
> provided to understand error rates, confidence intervals, and results; and
> that the model is being used under the conditions and in the manner
> intended.


Well well well... just what _exactly_ is "algorithmic bias"?  Who decides
what counts as "algorithmic bias"?  And how do we prevent bias in the
decision of what constitutes "algorithmic bias"?

Ockham's Razor, as rigorously formalized by the size prior of AIXI's AIT
(algorithmic information theory) subtheory, is useful for sweeping away
the egregious politicization of the phrase "algorithmic bias".  Note that
this practical benefit is an _application_ of the advance in the
philosophy of science represented by algorithmic information theory.
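For concreteness, the size prior in question is Solomonoff's universal prior, which weights every hypothesis (program) by its length in bits:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

Here U is a universal prefix Turing machine, the sum runs over programs p whose output begins with x, and \ell(p) is the length of p in bits.  Shorter programs (simpler hypotheses) dominate the sum, which is exactly Ockham's Razor made quantitative.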

Until someone comes along and offers as clearly defined a philosophy of
science (ie: not "Not even wrong!"), those wielding influence in the field
of "algorithmic bias" are being UNETHICAL in failing to advance lossless
compression as the most unbiased model selection criterion.
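To make the criterion concrete, here is a minimal sketch of "lossless compression as model selection", using zlib's compressed length as a very crude stand-in for Kolmogorov complexity.  The two model sources and the data stream are invented for illustration; nothing here is from NVIDIA or the AIXI literature, it is just the two-part MDL idea in executable form:

```python
import zlib

def mdl_score(model_source: bytes, residual: bytes) -> int:
    """Two-part description length: compressed size of the model
    plus compressed size of the data left unexplained by it.
    zlib is a crude, computable proxy for Kolmogorov complexity."""
    return len(zlib.compress(model_source)) + len(zlib.compress(residual))

data = bytes(range(10)) * 100   # a stream with an obvious regularity

# Hypothetical model A captures the pattern, so nothing remains to encode.
score_a = mdl_score(b"def f(i): return i % 10", b"")

# Hypothetical model B explains nothing, so the whole stream is residual.
score_b = mdl_score(b"def f(i): return 0", data)

assert score_a < score_b  # the shorter total description wins
```

Whichever model yields the shorter total description is selected, with no committee deciding what counts as "biased" - the criterion is the same for every contender.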


On Mon, Mar 8, 2021 at 9:42 AM James Bowery <[email protected]> wrote:

> While I agree that the value of a theory is to be found in its
> contribution to practice, you are underestimating the value of AIXI in this
> regard.  Of course, you are far from alone and there is that old saw
> "safety in numbers."
>
> I simply assumed that the premiere AGI theory, which AIXI _is_, would be
> sufficiently familiar to the denizens of an "AGI" group to sustain the use
> of acronyms for its two sub-theories.
>
> On Mon, Mar 8, 2021 at 5:33 AM John Rose <[email protected]> wrote:
>
>> So with SDT you were alluding to AIXI Sequential Decision Theory....
>> sorry suffering from acronym overload here, need to rewind the Turing tape.
>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5bda8b8be25887f8-Md25f5ab14e9a7594fffa2359
Delivery options: https://agi.topicbox.com/groups/agi/subscription
