"Because real-world intelligence is largely about computational
efficiency – about making choices in real, bounded situations using bounded
space and time resources."

It's all well and good to define "intelligence" in terms of
resources/practicality, but you confuse the issue when you conflate such
"intelligence" with the value of Solomonoff Induction as a theoretical tool.

Unpacking the sedimentary layers of confusion here:

AIXI = Solomonoff Induction ∘ Sequential Decision Theory

Solomonoff Induction merely provides _theoretically_ optimal _predictions_,
without regard to computational resources, let alone the value system used
to make decisions.  It can't be an "agent" at all, let alone an
"intelligent agent" like AIXI.  It provides no decisions, hence it is
inadequate to define an "intelligent" agent in _any_ sense -- theoretical,
let alone "real-world".

It requires only two givens: an environment generated by some algorithm,
and a Universal Turing Machine.

Admission of those two dooms any critique of its _purpose_.
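For reference, the standard way those two givens combine (this is the
textbook Solomonoff prior, not anything beyond what the paragraph above
already assumes): given a Universal Turing Machine U, the prior probability
of an observation sequence x is

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

where the sum ranges over the (minimal) programs p whose output on U begins
with x, and \ell(p) is the length of p in bits.  Prediction is then just
conditioning: M(x_{t+1} \mid x_{1:t}) = M(x_{1:t} x_{t+1}) / M(x_{1:t}).
Nothing in this requires, or supplies, a decision rule.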

Now, having said all that, when I talk about "enemies of humanity" knocking
Occam's Razor I _do_ restrict myself to _only_ Solomonoff Induction as the
gold standard for _prediction_ and I do so with full recognition that it is
not computable but only approximated by "real-world" computation.  Why?
Why am I so adamant about prosecuting, convicting and hanging by the neck
until dead those who undermine this principle for their crimes against
humanity?

In short, because the powers that be will not permit sorting proponents of
social theories into governments that test them.  As a consequence we're
staring down the barrel of catastrophic suffering: the powers that be
imposing social experiments on billions of unwilling human subjects, based
on sophistry about the selection of unified models of society.  If they
cannot see fit to perform even randomized phase I safety trials for their
ridiculous beliefs, let alone phase II and phase III efficacy trials, let
alone ask permission of the humans subjected to these moronic experiments --
and if we can't hang the bastards for that _alone_ -- then at least require
them to admit that the smallest model of the data upon which they
purportedly rely is the one they _should_ be using to _predict_ what their
decisions will produce.  Note that I've thrown the sophist bastards a bone
here:  They can go ahead and parameterize their decision tree with their
own goddamn value system(s).  If they can't be happy with that much power
over the rest of us, then Let The Heavens Fall.
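The "smallest model of the data" criterion above is just two-part minimum
description length (MDL) model selection, the computable stand-in for
Solomonoff Induction.  A minimal sketch of my own (the model names, the toy
data, and the use of zlib compression as a crude proxy for program length
are all illustrative assumptions, not anything from the post):

```python
import zlib

def code_length(blob: bytes) -> int:
    # Compressed size in bits: a crude, computable proxy for
    # Kolmogorov complexity (which is itself uncomputable).
    return 8 * len(zlib.compress(blob, 9))

def two_part_cost(model_src: str, data: bytes, prediction: bytes) -> int:
    # Two-part MDL code: bits to state the model, plus bits to state
    # where the data deviates from the model's prediction (XOR residual).
    residual = bytes(a ^ b for a, b in zip(data, prediction))
    return code_length(model_src.encode()) + code_length(residual)

data = b"ab" * 100

# Two hypothetical rival "theories" of the same data: a short generative
# rule, and a model that just restates the data verbatim.
models = {
    "periodic": ("repeat 'ab'", b"ab" * 100),
    "verbose":  ("literal table: " + data.decode(), data),
}

costs = {name: two_part_cost(src, data, pred)
         for name, (src, pred) in models.items()}
best = min(costs, key=costs.get)  # the smallest total code length wins
```

Both models predict the data perfectly, so they are distinguished purely by
description length -- which is exactly the point: prediction quality alone
cannot separate a theory from a restatement of the evidence.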

On Thu, Sep 3, 2020 at 7:34 PM Ben Goertzel <[email protected]> wrote:

> Radical overhaul of my paper on the formal theory of simplicity (now
> saying a little more about pattern, multisimplicity, multipattern, and
> the underlying foundations of cognitive hierarchy and heterarchy and
> their synergy...) https://arxiv.org/abs/2004.05269 ... it's much nicer
> this time around
> 
> Occam's Razor 2020 becomes: *when in doubt, prefer hypotheses whose
> simplicity bundles are Pareto optimal* -- partly cuz this both permits
> and benefits from the construction of coherent dual networks
> comprising coordinated/consistent multipattern hierarchies and
> heterarchies.
> 
> This, I think, is the version of Occam's Razor that's really "as
> simple as possible but no simpler" where complex cognitive processing
> is concerned ... not coincidentally it ties closely w/ OpenCog's
> multi-goal-based control system and Weaver's Open-Ended Intelligence
> 
> --
> Ben Goertzel, PhD
> http://goertzel.org
> 
> β€œThe only people for me are the mad ones, the ones who are mad to
> live, mad to talk, mad to be saved, desirous of everything at the same
> time, the ones who never yawn or say a commonplace thing, but burn,
> burn, burn like fabulous yellow roman candles exploding like spiders
> across the stars.” -- Jack Kerouac

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M23d52964fe4af5b396b0086e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
