Re: [agi] Formal theory of simplicity

2020-09-04 Thread TimTyler

On 2020-09-04 12:19 PM, Ben Goertzel wrote:

> > The paper addresses what to do about the issue of there not
> > being any single completely satisfactory metric of
> > simplicity/complexity. It proposes a solution: use an array
> > of such metrics and combine them using Pareto optimality.
> >
> > I think that is basically correct. You are likely to
> > have multiple measures of simplicity/complexity, and
> > Pareto optimality seems like a fairly reasonable
> > approach to combining them.
>
> Well, it seems like weighted-averaging valid simplicity measures does
> not generally yield a valid simplicity measure with nice symmetries
> (even if you're doing simple stuff like weighted-averaging of program
> length and runtime, say...).  So you kinda have to go Pareto.

I am usually pretty skeptical about the relevance of Pareto optimality
to machine intelligence. It typically conflicts with utility-based
frameworks.

A utility calculation typically doesn't care if some parties are worse
off - and will happily sacrifice them in the name of the greater good -
whereas the notion of Pareto optimality will dismiss solutions if even
one party is a teeny tiny bit worse off. It seems like a childish way to
negotiate.


Perhaps, if I think it through further, I will find similar flaws in 
this proposal too.


A weighted average might be appropriate on log scales. Otherwise, maybe
a weighted product would be better. As well as weights, you need log
scaling if attempting to compare and combine things like program size
and runtime. I need to think about it all further, though.
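
To make that concrete, here is a rough sketch of the kind of
combination I have in mind (the candidate programs and all the numbers
are made up for illustration): a weighted average on log scales, which
is the same thing as a weighted product on the raw scales.

    import math

    # Hypothetical candidates: (description length in bits, runtime in steps).
    candidates = {
        "A": (100, 10_000),
        "B": (120, 2_000),
        "C": (90, 1_000_000),
    }

    def log_scale_cost(length, runtime, w_len=0.5, w_time=0.5):
        # A weighted average of logs is the log of a weighted product:
        # w_len*log(L) + w_time*log(T) = log(L**w_len * T**w_time)
        return w_len * math.log2(length) + w_time * math.log2(runtime)

    for name, (length, runtime) in sorted(candidates.items()):
        print(name, round(log_scale_cost(length, runtime), 2))

Note that any fixed choice of weights still imposes a total order on
the candidates - which is exactly what the Pareto approach avoids.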

--
__
 |im |yler http://timtyler.org/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M37486f6e56648c3988b223b8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread TimTyler

On 2020-09-04 15:24, Matt Mahoney wrote:

> The paper lacks an experimental results section. So I don't know how
> this simplicity measure compares to Solomonoff induction.

The paper does discuss some simplicity measures, but it is more like a
framework for combining simplicity measures.

> Distributions that favor fast programs are allowed but not favored by
> Occam's Razor. We only use them because of practical limitations. But
> we still believe that a multiverse is more likely than a universe
> because it is a simpler description of our observations, in spite of
> requiring more physics computation.

It is, I think, a widespread criticism of multiverse theories that they
require more compute. With our physics, runtime is frequently penalized -
but if the visible universe is part of a much bigger world, different
physical laws might hold there. Maybe runtime is heavily penalized there -
or maybe it is not. We don't really know. Anyway, we have evidence
supporting a multiverse from interference experiments. Occam's razor is
all about priors. Once data starts to flood in, priors diminish in
significance and are often soon swamped.
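
A toy illustration of that swamping (the prior odds and likelihood
ratio are made-up numbers):

    # Toy Bayesian updating: a hypothesis starts with low prior odds,
    # but each observation favors it by a modest likelihood ratio.
    odds = 1 / 1000            # hypothetical: 1000-to-1 against
    likelihood_ratio = 2.0     # each observation is twice as likely under it

    for n in range(1, 21):
        odds *= likelihood_ratio
        if n % 5 == 0:
            print(f"after {n} observations, odds = {odds:.3g}")

    # After about 10 observations the prior handicap is gone; after 20,
    # the hypothesis is heavily favored - whatever the prior was.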

--
__
 |im |yler http://timtyler.org/
 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M67321e9cf51a120e5c5e5d98
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread Ben Goertzel
On Fri, Sep 4, 2020 at 12:24 PM Matt Mahoney  wrote:
>
> The paper lacks an experimental results section.

So did Solomonoff's original papers ;-0)

> Remember that finding simple theories to fit the data is not computable, 
> which means that algorithms that do this well are necessarily complex. The 
> practical approach in data compression is to combine a lot of approaches 
> including lots of special cases.

Schmidhuber's "frontier search" is in the spirit of my multisimplicity
measures, and connects more closely to data compression.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-Mfd081234e59678a555e6ba40
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread Matt Mahoney
The paper lacks an experimental results section. So I don't know how this
simplicity measure compares to Solomonoff induction. Theoretically,
Solomonoff induction works because all possible probability distributions
over an infinite set of strings must favor shorter strings: for every
string, there must be an infinite set of strings that are longer and less
likely, and only finite sets of the other three possible combinations
(longer and more likely, shorter and less likely, shorter and more likely).
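
A concrete example of such a distribution (my own illustration): give every
binary string x the probability 2^-(2|x|+1). Each length class L then gets
total mass 2^-(L+1), the whole thing sums to 1, and every string is strictly
more likely than every longer string.

    from itertools import product

    # P(x) = 2**-(2*len(x) + 1) over all binary strings: there are 2**L
    # strings of length L, so each length class gets mass 2**-(L + 1).
    def P(x):
        return 2.0 ** -(2 * len(x) + 1)

    total = sum(P("".join(bits))
                for L in range(20)
                for bits in product("01", repeat=L))
    print(total)  # 0.999999..., approaching 1 as the length cutoff grows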

Distributions that favor fast programs are allowed but not favored by
Occam's Razor. We only use them because of practical limitations. But we
still believe that a multiverse is more likely than a universe because it
is a simpler description of our observations, in spite of requiring more
physics computation.
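
For contrast, here is a rough sketch of a runtime-penalized weighting in the
spirit of Schmidhuber's speed prior (the two programs and their numbers are
hypothetical): programs are charged for length and also for the log of their
running time.

    import math

    # (name, length in bits, runtime in steps) -- made-up programs.
    programs = [("short_but_slow", 50, 2**40),
                ("longer_but_fast", 70, 2**10)]

    for name, length, steps in programs:
        length_only = -length                      # log2-weight, Occam only
        with_time = -(length + math.log2(steps))   # also charge log(time)
        print(name, length_only, with_time)

    # Occam's Razor alone prefers the short, slow program (-50 > -70);
    # charging for runtime flips the preference (-90 < -80). The multiverse
    # is the short-but-slow description, preferred unless compute is
    # penalized.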

Remember that finding simple theories to fit the data is not computable,
which means that algorithms that do this well are necessarily complex. The
practical approach in data compression is to combine a lot of approaches
including lots of special cases.
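
A toy sketch of that combining strategy (a drastically simplified cousin of
the mixing used in real compressors; the three models and the update rule
are made up):

    # Mix several predictors of the next bit, weighting each model by how
    # well it has been predicting.
    def mix(preds, weights):
        s = sum(weights)
        return sum(p * w for p, w in zip(preds, weights)) / s

    def update(weights, preds, bit, lr=0.1):
        # Shift weight toward models that assigned high probability to the
        # bit that actually occurred.
        return [w * (1 - lr) + lr * (p if bit else 1 - p)
                for w, p in zip(weights, preds)]

    weights = [1.0, 1.0, 1.0]
    preds = [0.9, 0.5, 0.2]    # three hypothetical fixed models
    for bit in [1, 1, 0, 1, 1, 1, 0, 1]:
        print(round(mix(preds, weights), 3))
        weights = update(weights, preds, bit)
    print(weights)  # the model closest to the data ends up weighted highest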

On Fri, Sep 4, 2020, 12:21 PM Ben Goertzel  wrote:

> > The paper addresses what to do about the issue of there not
> > being any single completely satisfactory metric of
> > simplicity/complexity. It proposes a solution: use an array
> > of such metrics and combine them using Pareto optimality.
> >
> > I think that is basically correct. You are likely to
> > have multiple measures of simplicity/complexity, and
> > Pareto optimality seems like a fairly reasonable
> > approach to combining them.
>
> Well, it seems like weighted-averaging valid simplicity measures does
> not generally yield a valid simplicity measure with nice symmetries
> (even if you're doing simple stuff like weighted-averaging of program
> length and runtime, say...).  So you kinda have to go Pareto.
>
>
> I had this conclusion in practice in AGI design for a while -- as did
> Joscha Bach -- which is why OpenCog and MicroPsi get multiple
> top-level goals not a single top-level goal... where the regulation of
> goal-weightings is part of the cognitive dynamic...
>
>
> > One criticism is: why frame the theory in terms of
> > simplicity? Everyone else seems to use complexity
> > metrics. It is like describing your temperature metric
> > as "coldness". In both cases, there's a lower bound,
> > but no real upper bound. It makes sense for complex
> > systems to score highly, and simple systems to have
> > low scores. The "simplicity" framing suggests inverting
> > this. It seems wrong to me.
> >
> 
> 
> Either way is right; it doesn't matter, does it?
> 
> Just a matter of aesthetic taste...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M68e48ed86fc864642d65752d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread Mike Archbold
I read a lot of the introduction but I admit the math is a bit dense
for me (no PhD in math ;)

"In order for a system of inter-combining elements to effectively
understand the world, it should interpret itself and the world in the
context of some array of simplicity measures obeying certain basic
criteria.  Doing so enables it to build up coordinated hierarchical
and heterarchical pattern structures that help it interpret the world
in subjectively meaningful and useful ways."

This makes perfect sense. Philosophically I think that the system
should reduce the world in some context to essentialities which are ~=
simplicities.

My question centers on "effectively understand." While yes, it
seems like simplicity would help understanding, does your paper
(apologies for skimming a lot of the dense parts) tie in with
understanding per se? So given a simplified view of patterns, how then
does it understand? It seems like the understanding would "kick in" if
and only if the situation was as simple as possible but no simpler.
That helps, but I think a theory of understanding is still needed.




On 9/3/20, Ben Goertzel  wrote:
> Radical overhaul of my paper on the formal theory of simplicity (now
> saying a little more about pattern, multisimplicity, multipattern, and
> the underlying foundations of cognitive hierarchy and heterarchy and
> their synergy...) https://arxiv.org/abs/2004.05269 ... it's much nicer
> this time around
> 
> Occam's Razor 2020 becomes: *when in doubt, prefer hypotheses whose
> simplicity bundles are Pareto optimal* -- partly cuz this both permits
> and benefits from the construction of coherent dual networks
> comprising coordinated/consistent multipattern hierarchies and
> heterarchies.
> 
> This, I think, is the version of Occam's Razor that's really "as
> simple as possible but no simpler" where complex cognitive processing
> is concerned ... not coincidentally it ties closely w/ OpenCog's
> multi-goal-based control system and Weaver's Open-Ended Intelligence
> 
> --
> Ben Goertzel, PhD
> http://goertzel.org
> 
> “The only people for me are the mad ones, the ones who are mad to
> live, mad to talk, mad to be saved, desirous of everything at the same
> time, the ones who never yawn or say a commonplace thing, but burn,
> burn, burn like fabulous yellow roman candles exploding like spiders
> across the stars.” -- Jack Kerouac

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-Mf16669853b746600990d0293
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread Ben Goertzel
> The paper addresses what to do about the issue of there not
> being any single completely satisfactory metric of
> simplicity/complexity. It proposes a solution: use an array
> of such metrics and combine them using Pareto optimality.
>
> I think that is basically correct. You are likely to
> have multiple measures of simplicity/complexity, and
> Pareto optimality seems like a fairly reasonable
> approach to combining them.

Well, it seems like weighted-averaging valid simplicity measures does
not generally yield a valid simplicity measure with nice symmetries
(even if you're doing simple stuff like weighted-averaging of program
length and runtime, say...).  So you kinda have to go Pareto.
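
For concreteness, a minimal sketch of the Pareto filter (candidates and
numbers are illustrative only): keep every candidate whose (length, runtime)
pair is not dominated, instead of collapsing the two measures into one
number...

    # Keep candidates whose (length, runtime) pair is Pareto-optimal: no
    # other candidate is at least as good on both axes and strictly better
    # on one.  The numbers are hypothetical.
    candidates = {"A": (100, 10_000), "B": (120, 2_000),
                  "C": (90, 1_000_000), "D": (130, 5_000)}

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    front = {name for name, c in candidates.items()
             if not any(dominates(other, c)
                        for other in candidates.values())}
    print(sorted(front))  # ['A', 'B', 'C'] -- D is dominated by B

Any weighted average would be forced to rank A, B and C totally; the
Pareto filter keeps all three and discards only D.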


I came to this conclusion in practice in AGI design a while ago -- as did
Joscha Bach -- which is why OpenCog and MicroPsi get multiple
top-level goals rather than a single top-level goal... where the regulation
of goal-weightings is part of the cognitive dynamics...


> One criticism is: why frame the theory in terms of
> simplicity? Everyone else seems to use complexity
> metrics. It is like describing your temperature metric
> as "coldness". In both cases, there's a lower bound,
> but no real upper bound. It makes sense for complex
> systems to score highly, and simple systems to have
> low scores. The "simplicity" framing suggests inverting
> this. It seems wrong to me.
>


Either way is right; it doesn't matter, does it?

Just a matter of aesthetic taste...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M62279e7b87f891694f444e07
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread James Bowery
"Because real-world intelligence is largely about computational
efficiency – about making choices in real, bounded situations using bounded
space and time resources."

It's all well and good to define "intelligence" in terms of
resources/practicality, but you confuse the issue when you conflate such
"intelligence" with the value of Solomonoff Induction as a theoretic tool.
Unpacking the sedimentary layers of confusion here:

AIXI = Solomonoff Induction ∘ Sequential Decision Theory

Solomonoff Induction merely provides _theoretically_ optimal _predictions_,
without regard to computational resources, let alone the value system used
to make decisions.  It can't be an "agent" at all, let alone an
"intelligent agent" like AIXI.  It does not provide decisions, hence it is
inadequate to define an "intelligent" agent in _any_ sense -- theoretic,
let alone "real-world".
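
To make the distinction concrete, a minimal type-level sketch (my own
illustration; the interfaces are hypothetical, not anyone's actual API):

    from typing import Callable, Dict, List

    # A predictor only says what comes next; it makes no choices.
    Predictor = Callable[[str], Dict[str, float]]  # history -> P(next symbol)

    # An agent composes a predictor with a value system and a decision
    # rule -- the Sequential Decision Theory half of the composition.
    def agent_step(predict: Predictor,
                   utility: Callable[[str, str], float],
                   history: str, actions: List[str]) -> str:
        def expected_utility(a: str) -> float:
            return sum(p * utility(a, obs)
                       for obs, p in predict(history + a).items())
        return max(actions, key=expected_utility)

Nothing in the predictor alone picks an action; a choice appears only once
a utility function and an argmax are supplied.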

It requires only two givens: an environment generated by some algorithm and
a Universal Turing Machine.

Admitting those two dooms any critique of its _purpose_.

Now, having said all that, when I talk about "enemies of humanity" knocking
Occam's Razor, I _do_ restrict myself to _only_ Solomonoff Induction as the
gold standard for _prediction_, and I do so with full recognition that it is
not computable but only approximated by "real-world" computation.  Why?
Why am I so adamant about prosecuting, convicting and hanging by the neck
until dead those who undermine this principle, for their crimes against
humanity?

In short, because the powers that be will not permit sorting proponents of
social theories into governments that test them.  As a consequence, we're
staring down the barrel of catastrophic suffering resulting from the powers
that be imposing social experiments on billions of unwilling human subjects,
based on sophistry about selection of unified models of society.  If they
cannot see fit to perform even randomized phase I safety trials for their
ridiculous beliefs, let alone phase II and phase III efficacy trials, let
alone ask permission of the humans subjected to these moronic experiments,
and if we can't hang the bastards for that _alone_, then at least require
them to admit that the smallest model of the data upon which they purportedly
rely is the one they _should_ be using to _predict_ what their decisions will
produce.  Note, I've thrown the sophist bastards a bone here: they can go
ahead and parameterize their decision tree with their own goddamn value
system(s).  If they can't be happy with that much power over the rest of
us, then Let The Heavens Fall.

On Thu, Sep 3, 2020 at 7:34 PM Ben Goertzel  wrote:

> Radical overhaul of my paper on the formal theory of simplicity (now
> saying a little more about pattern, multisimplicity, multipattern, and
> the underlying foundations of cognitive hierarchy and heterarchy and
> their synergy...) https://arxiv.org/abs/2004.05269 ... it's much nicer
> this time around
> 
> Occam's Razor 2020 becomes: *when in doubt, prefer hypotheses whose
> simplicity bundles are Pareto optimal* -- partly cuz this both permits
> and benefits from the construction of coherent dual networks
> comprising coordinated/consistent multipattern hierarchies and
> heterarchies.
> 
> This, I think, is the version of Occam's Razor that's really "as
> simple as possible but no simpler" where complex cognitive processing
> is concerned ... not coincidentally it ties closely w/ OpenCog's
> multi-goal-based control system and Weaver's Open-Ended Intelligence
> 
> --
> Ben Goertzel, PhD
> http://goertzel.org
> 
> “The only people for me are the mad ones, the ones who are mad to
> live, mad to talk, mad to be saved, desirous of everything at the same
> time, the ones who never yawn or say a commonplace thing, but burn,
> burn, burn like fabulous yellow roman candles exploding like spiders
> across the stars.” -- Jack Kerouac

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M23d52964fe4af5b396b0086e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread TimTyler

On 2020-09-03 20:32, Ben Goertzel wrote:

> Radical overhaul of my paper on the formal theory of simplicity (now
> saying a little more about pattern, multisimplicity, multipattern, and
> the underlying foundations of cognitive hierarchy and heterarchy and
> their synergy...) https://arxiv.org/abs/2004.05269 ... it's much nicer
> this time around

The paper addresses what to do about the issue of there not
being any single completely satisfactory metric of
simplicity/complexity. It proposes a solution: use an array
of such metrics and combine them using Pareto optimality.

I think that is basically correct. You are likely to
have multiple measures of simplicity/complexity, and
Pareto optimality seems like a fairly reasonable
approach to combining them.

One criticism is: why frame the theory in terms of
simplicity? Everyone else seems to use complexity
metrics. It is like describing your temperature metric
as "coldness". In both cases, there's a lower bound,
but no real upper bound. It makes sense for complex
systems to score highly, and simple systems to have
low scores. The "simplicity" framing suggests inverting
this. It seems wrong to me.

--
__
 |im |yler http://timtyler.org/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-Mbc044b923e62fdd62054e10a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread Ben Goertzel
Dude... working out math-y theory related to AI has nothing to do with
earning money that's for sure... it has more to do with clarifying
core ideas in my own mind ...

The fact that you skimmed the first 20% of a paper and got nothing out
of it ... well... err... whatever. This is not a Hollywood
movie that needs to grab the bored viewer by the throat and not let
them go; it's an attempt to lay out core formal structures underlying
cognition...

Explaining things very simply for folks without much relevant
technical background is certainly of value... but writing things up
precisely for those (fewer people) with relevant technical background
is also useful IMO ...

The role of hierarchy in the mind, for instance, is core to modern
deep NNs and also to practical inference control heuristics, e.g. in
planning or automated theorem-proving ... one of the issues I was
trying to resolve in the theory presented in this paper is: In what
contexts is a pattern hierarchy a highly relevant and significant
meta-pattern?  What is the precise sense in which hierarchies and
heterarchies can/should be aligned in a mind?  These questions IMO are
quite relevant to AGI design, and I think I have made some progress on
understanding them ... in a way that will help inform some of the
design decisions we're now making in designing the OpenCog Hyperon
system...

The theoretical direction of which this paper is a part is outlined here,

https://multiverseaccordingtoben.blogspot.com/2020/05/gtgi-general-theory-of-general.html

although I am sure this will interest you very little, it could
interest some others on this list, who knows...



On Thu, Sep 3, 2020 at 11:24 PM  wrote:
>
> I'm with Tim. I gave the new paper a read, maybe about 20% of the top area,
> just too many filler words, unknown phrases, lots to read, and not doing
> anything for me. Got barely anything out of it.
>
> Are you selling this paper to earn basic income needs? It's like you aimed
> for word length, not simplicity. Why not earn money by writing up something
> useful? I thought you wanted AGI, Ben!!! I don't believe in any of yous
> anymore. I just see the wrong things going on. Maybe I'm asking for too
> much though.
>
> All my work is becoming simpler to read and smaller / unified; it's like a
> sugar rush, you get very high, very quick, and one triggers the other like a
> chain reaction. After all, you need to read your own work to get more out of
> it, so you must make it small and organized and update it instead of making
> new papers.



-- 
Ben Goertzel, PhD
http://goertzel.org

“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M64d642f3168ee18d1eeae574
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-04 Thread immortal . discoveries
I'm with Tim. I gave the new paper a read, maybe about 20% of the top area;
just too many filler words, unknown phrases, lots to read, and not doing
anything for me. Got barely anything out of it.

Are you selling this paper to earn basic income needs? It's like you aimed for
word length, not simplicity. Why not earn money by writing up something useful?
I thought you wanted AGI, Ben!!! I don't believe in any of yous anymore. I just
see the wrong things going on. Maybe I'm asking for too much though.

All my work is becoming simpler to read and smaller / unified; it's like a
sugar rush, you get very high, very quick, and one triggers the other like a
chain reaction. After all, you need to read your own work to get more out of
it, so you must make it small and organized and update it instead of making
new papers.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M82516e79c6966549f9b7c9f1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-03 Thread Ben Goertzel
Ah yes -- but see, the theory is much simpler than the phenomena it
explains ;) ... thus my invocation of the saw "simple as possible but
no simpler" ;)

On Thu, Sep 3, 2020 at 8:48 PM TimTyler  wrote:
>
> On 2020-09-03 20:32, Ben Goertzel wrote:
> > Radical overhaul of my paper on the formal theory of simplicity (now
> > saying a little more about pattern, multisimplicity, multipattern, and
> > the underlying foundations of cognitive hierarchy and heterarchy and
> > their synergy...) https://arxiv.org/abs/2004.05269 ... it's much nicer
> > this time around
> 
> For a paper about the virtues of simplicity, it seems pretty complex.
> 
> --
> 
> __
> |im |yler http://timtyler.org/
> 



-- 
Ben Goertzel, PhD
http://goertzel.org

“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M90f53be6352d94cfd13181c1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Formal theory of simplicity

2020-09-03 Thread TimTyler

On 2020-09-03 20:32, Ben Goertzel wrote:

> Radical overhaul of my paper on the formal theory of simplicity (now
> saying a little more about pattern, multisimplicity, multipattern, and
> the underlying foundations of cognitive hierarchy and heterarchy and
> their synergy...) https://arxiv.org/abs/2004.05269 ... it's much nicer
> this time around


For a paper about the virtues of simplicity, it seems pretty complex.

--

__
 |im |yler http://timtyler.org/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M93c59f1f611f7c21681de6ea
Delivery options: https://agi.topicbox.com/groups/agi/subscription