--- Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Matt, I really don't see why you think Hutter's work shows that "Occam's 
> Razor holds" in any
> context except AI's with unrealistically massive amounts of computing 
> power (like AIXI and AIXItl)
> 
> In fact I think that it **does** hold in other contexts (as a strategy 
> for reasoning by modest-resources
> minds like humans or Novamente), but I don't see how Hutter's work shows 
> this...

I admit Hutter did not make claims about machine learning frameworks or
Occam's razor, but we should not view his work in such a narrow context.
Hutter's conclusions about the optimal behavior of rational agents were proven
for the following cases:

1. Unrestricted environments (in which case the solution is not computable),
2. Space and time bounded environments (in which case the solution is
intractable),
3. Subsets of (1) or (2) such that the environment is consistent with past
interaction.

But the same reasoning he used in his proofs could just as well be applied to
practical cases of machine learning for which efficient solutions are known. 
The proofs all use the fact that shorter Turing machines are more likely than
longer ones (a Solomonoff prior).
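A minimal sketch of that weighting (the function name is mine, not Hutter's):
under a Solomonoff prior, a program's weight falls off as 2^(-length), so
shorter programs dominate the prior mass.

```python
# Sketch of a Solomonoff-style prior: a program of length L bits gets
# prior weight 2^(-L), so shorter programs are exponentially more likely.
def solomonoff_weight(length_bits):
    """Prior weight of a program of the given length in bits."""
    return 2.0 ** -length_bits

# A program 10 bits shorter is 2^10 = 1024 times more likely a priori.
ratio = solomonoff_weight(20) / solomonoff_weight(30)
```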

For example, Hutter does not tell us how to solve linear regression, fitting a
straight line to a set of points.  Instead, he tells us two other things:

1. Linear regression is a good predictor, even though a higher order
polynomial might have a better fit (because a low order polynomial has lower
algorithmic complexity).
2. Linear regression is useful, even though other machine learning algorithms
might be better predictors (because a general solution is not computable, so
we have to settle for a suboptimal solution).
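To make point 1 concrete, here is a small illustration of my own (not an
example Hutter gives), using NumPy: a degree-9 polynomial fits ten noisy
points from a straight line more closely than the line does, but the simpler
line predicts unseen points better.

```python
import numpy as np

# Ten noisy samples from a straight line y = 2x + 1.
rng = np.random.default_rng(0)
x_train = np.arange(10.0)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 1.0, size=10)

line = np.polyfit(x_train, y_train, deg=1)   # 2 coefficients: low complexity
poly = np.polyfit(x_train, y_train, deg=9)   # 10 coefficients: fits every point

def rmse(coeffs, x, y):
    """Root-mean-square error of the fitted polynomial on (x, y)."""
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

# On the training points, the degree-9 polynomial has the better fit...
train_line = rmse(line, x_train, y_train)
train_poly = rmse(poly, x_train, y_train)

# ...but on unseen midpoints of the true line, the simple model predicts better.
x_test = x_train[:-1] + 0.5
y_test = 2.0 * x_test + 1.0
test_line = rmse(line, x_test, y_test)
test_poly = rmse(poly, x_test, y_test)
```

The high-order fit is chasing the noise; the low-complexity model captures the
regularity, which is exactly the sense in which Occam's razor pays off here.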

So I did two things.  First, I used the fact that Occam's razor works
in both simulated and real environments (based on extensions of AIXI and
empirical observations respectively) to argue that the universe is consistent
with a simulation.  (This is disturbing because you are not programmed to
think this way).

Second, I used the same reasoning to guess about the nature of the universe
(assuming it is simulated), and the only thing we know is that shorter
simulation programs are more likely than longer ones.  My conclusion was that
bizarre behavior or a sudden end is unlikely, because such events would not
occur in the simplest programs.  This ought to at least be reassuring.

-- Matt Mahoney



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983
