On 2020-09-04 12:19 PM, Ben Goertzel wrote:
> The paper addresses the issue of there not being any single
> completely satisfactory metric of simplicity/complexity. It
> proposes a solution: use an array of such metrics and combine
> them using Pareto optimality.

> I think that is basically correct. You are likely to
> have multiple measures of simplicity/complexity, and
> Pareto optimality seems like a fairly reasonable
> approach to combining them.
> Well, it seems like weighted-averaging valid simplicity measures does
> not generally yield a valid simplicity measure with nice symmetries
> (even if you're doing simple stuff like weighted-averaging of program
> length and runtime, say...). So you kinda have to go Pareto.
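
For concreteness, here is roughly what the Pareto filter being
discussed looks like - a minimal sketch in Python, where the measure
names and scores (program length, runtime) are made up for
illustration, not taken from the paper:

# Minimal sketch: keep only candidates not dominated on any measure.
# Scores are (program_length, runtime) pairs; lower is simpler on both.

def dominates(a, b):
    """True if a is at least as simple as b on every measure
    and strictly simpler on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Return the candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other != c)]

# Illustrative scores: (program length in bits, runtime in steps).
scores = [(120, 50), (80, 200), (80, 45), (300, 10), (310, 12)]
print(pareto_front(scores))   # -> [(80, 45), (300, 10)]

The filter keeps the whole trade-off frontier rather than picking a
single winner, which is exactly what it means to "go Pareto" here.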
I am usually pretty skeptical about the relevance of Pareto
optimality to machine intelligence. It typically conflicts with
utility-based frameworks. A utility calculation typically doesn't
care if some parties are worse off - it will happily sacrifice them
in the name of the greater good - whereas the Pareto criterion will
dismiss a solution if even one party is a teeny tiny bit worse off.
It seems like a childish way to negotiate.

Perhaps, if I think it through further, I will find similar flaws in
this proposal too.
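
To make the contrast concrete, here is a toy numeric example (the
payoffs are invented for illustration): a move from outcome A to
outcome B raises total utility but is vetoed by the
Pareto-improvement test, because one party loses slightly.

# Toy contrast between a utility sum and the Pareto-improvement test.
# Payoffs are (party_1, party_2); higher is better. Numbers made up.

outcome_a = (10, 10)
outcome_b = (9, 30)   # party 1 loses a little, party 2 gains a lot

def utility(outcome):
    """Simple additive utility: just sum the parties' payoffs."""
    return sum(outcome)

def pareto_improvement(old, new):
    """True only if no party is worse off and at least one gains."""
    return (all(n >= o for n, o in zip(new, old))
            and any(n > o for n, o in zip(new, old)))

print(utility(outcome_b) > utility(outcome_a))    # True: utility says switch
print(pareto_improvement(outcome_a, outcome_b))   # False: Pareto vetoes it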

A weighted average might be appropriate on log scales. Otherwise,
maybe a weighted product would be better - which comes to the same
thing, since a weighted product is just a weighted average of the
logs, exponentiated. As well as weights, you need log scaling if you
are attempting to compare and combine things like program size and
runtime.
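
A minimal sketch of that combination rule, assuming two illustrative
measures (program size in bits, runtime in steps) and made-up
weights; it checks that the weighted product and the weighted average
of logs rank candidates identically:

import math

# Combine heterogeneous simplicity measures on a log scale.
# Measures and weights are illustrative, not from the paper.

def log_average(measures, weights):
    """Weighted arithmetic mean of log-scaled measures; lower is simpler."""
    return sum(w * math.log(m) for m, w in zip(measures, weights))

def weighted_product(measures, weights):
    """Weighted geometric combination; the exp of the log average."""
    return math.prod(m ** w for m, w in zip(measures, weights))

weights = (0.7, 0.3)   # made-up preference favouring short programs
candidates = [(120, 50), (80, 200), (300, 10)]   # (bits, steps)

by_logs = sorted(candidates, key=lambda c: log_average(c, weights))
by_product = sorted(candidates, key=lambda c: weighted_product(c, weights))
print(by_logs == by_product)   # True: same ranking either way

Since exp is strictly increasing, the two combinations always order
candidates the same way - the choice between them is presentational.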

I currently need to think about it all further, though.

--
__________
 |im |yler http://timtyler.org/


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T7f31810a817f8496-M37486f6e56648c3988b223b8