Ben,

Are you claiming that the choice of "compiler constant" is not pragmatically
significant in the definition of the Solomonoff-Levin universal prior, and in
Kolmogorov complexity?  For finite binary sequences...

I really don't see this, so it would be great if you could elaborate.


In some cases it matters, in others it doesn't.  Solomonoff's prediction
error theorem shows that the total summed expected squared prediction error
is bounded by a constant when the true generating distribution \mu is
computable.  The constant is (ln 2)/2 times K(\mu), where K(\mu) is measured
in bits, and this K term depends on the choice of reference machine.  Moving
between reasonable reference machines might push the bound up by something
like 1000 bits.  If you are considering long running systems that will
process large amounts of data, an extra 1000 bits is tiny.
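
For concreteness, one standard way to write the bound (with M denoting the
universal predictor induced by the reference machine, and K(\mu) the length
in bits of a shortest program for \mu) is:

    \sum_{t=1}^{\infty} E_\mu [ ( M(x_t = 1 | x_{<t}) - \mu(x_t = 1 | x_{<t}) )^2 ]
        <=  (ln 2)/2 * K(\mu)

Changing the reference machine changes K(\mu) by at most an additive
constant (the invariance theorem), which is where the "1000 bits or so"
figure above comes from.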

On the other hand, if you want to know whether K(10) < K(147), your answer
will depend on which reference machine you use.  In short: Kolmogorov
complexity works well for reasonably big objects, but not for small ones.
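
To make the small-object problem concrete, here is a toy sketch (my own
illustration; the two "machines" below are invented, and this is of course
not real Kolmogorov complexity, which is uncomputable):

    # Two hypothetical description systems for small integers.  Machine A
    # charges one unit per decimal digit; machine B is the same except it
    # happens to have a one-symbol primitive for 147.
    def length_machine_a(n: int) -> int:
        return len(str(n))                        # cost = decimal digits

    def length_machine_b(n: int) -> int:
        return 1 if n == 147 else len(str(n))     # 147 is a built-in

    for machine in (length_machine_a, length_machine_b):
        print(machine.__name__, machine(10), machine(147))

Under machine A, 10 is simpler than 147; under machine B the order flips.
The invariance theorem only guarantees agreement up to a machine-dependent
additive constant, and for objects this small that constant swamps
everything.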

Probably the best solution is to condition the measure on information about
the world, in which case K(10 | lots of world data) < K(147 | lots of world
data) should work the way you expect.  Google complexity works this way.
In the case of Solomonoff induction, you let the predictor watch the world
for a while before you start trying to get it to solve prediction tasks.
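
A crude way to see the conditioning idea in practice (a rough sketch that
uses an off-the-shelf compressor as a stand-in for K, which it only very
loosely approximates; the "world data" below is made up):

    import zlib

    def extra_cost(x: bytes, context: bytes) -> int:
        # Compressed size of (context + x) minus compressed size of the
        # context alone: a rough proxy for the conditional K(x | context).
        return len(zlib.compress(context + x)) - len(zlib.compress(context))

    # Hypothetical stream of world observations in which "10" is common.
    world_data = b"10 " * 1000 + b"147 " * 10
    print(extra_cost(b"10", world_data), extra_cost(b"147", world_data))

In principle, whatever the context makes routine should cost fewer extra
bytes; with a real compressor the difference is noisy at this tiny scale,
but the shape of the argument is the same as conditioning K on world data.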


In a practical Novamente context, it seems to make a big difference.  If we
make different choices regarding the internal procedure-representation
language Novamente uses, that changes which internally-generated programs NM
thinks are simpler ... which in turn changes which ones it retains versus
forgets, and which ones it focuses its attention on and prioritizes for
generating actions.


I think that the universal nature we see in Kolmogorov complexity should
also apply to practical AGI systems.  By that I mean the following:

By construction, things which have high Kolmogorov complexity are complex
with respect to any reasonable representation system.  In essence, the
reference machine is your representation system.  Once an AGI system has
spent some time learning about the world, I expect it will also find that
certain types of representation systems work well for certain kinds of
problems.  For example, it might encounter a problem that seems complex, but
then realise that, say, if it views the problem as a certain type of algebra
problem then it knows how to find a solution quite easily.  I think that the
hardest part of finding a solution to a difficult problem often lies in
finding the right way to view the problem, in other words, the right
representation.
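
As a toy version of the representation point (my own illustration; sqrt(2)
is just a stand-in for "a problem with hidden algebraic structure"):

    import math, zlib

    # The object: the first ~4000 binary digits of sqrt(2), as ASCII text.
    n = 4000
    bits = bin(math.isqrt(2 << (2 * n)))[2:]
    raw = bits.encode()
    print(len(raw), "bytes raw,", len(zlib.compress(raw)), "compressed")

A generic compressor only strips the ASCII packing (eight bits per '0'/'1'
character) and finds no structure in the digit sequence itself, yet under a
representation that can say "sqrt" the whole object is the two lines of
code that generated it.  The difficulty sits in the representation, not in
the object.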

Cheers
Shane
