Robert,

Hume and others were right.  Induction can't be proven because it's not
universally true.  However, induction works in universes in which induction
works.   We appear to be in the process of establishing by induction that
ours is such a universe.  That we exist and have invented science is, of
course, strong evidence for this claim, assuming that our prior puts
nonzero weight on our being in a universe in which induction works.

This is a variant of the anthropic principle -- my "Bayesian anthropic
principle."  Suppose you distributed your belief over possible universes in
such a way that universes in which induction works (i.e., universes which
learn about themselves) had nonzero mass.   Then by now we would have a
high posterior probability that ours is a universe in which induction
works.  Since most of us do believe that induction works, I think it's fair
to say that most of us started with a prior of this kind.
However, we as a society are just beginning to understand this stuff on a
mass scale, and we don't yet fully understand why we believe induction
works.
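
To make the update concrete, here is a toy version in Python.  The numbers
are arbitrary placeholders; the argument needs only that the prior on H is
nonzero and that sustained scientific success is far more likely under H
than otherwise.

    # Toy Bayesian update on H = "ours is a universe in which induction works".
    # All values below are illustrative; only p_h > 0 matters for the argument.
    p_h = 0.01            # prior mass on induction-friendly universes
    p_e_h = 0.99          # P(science keeps succeeding | H)
    p_e_not_h = 1e-6      # P(science keeps succeeding | not H)
    posterior = p_e_h * p_h / (p_e_h * p_h + p_e_not_h * (1 - p_h))
    print(posterior)      # ~0.9999: the likelihood ratio swamps the small prior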

So what does it mean to be in a universe in which induction works? How can
you formalize that notion in a way that you can put a prior on it and
update with evidence?  The consensus among those who study complexity seems
to be that such universes need to be simple enough to be understandable,
yet complex enough so that beings which understand can evolve.  The
catch-phrase for this is universes "at the edge of chaos."

But what does THAT mean?  Anyone who has looked at the issue realizes that
it gets slippery to try to define complexity.  One thing is certain -- you
can't say something "is" complex all by itself.  Complexity is relational
-- it depends on the relationship between the phenomenon and the
representation of the phenomenon.  Something simple in one representation
may be highly complex, or even apparently random, in another representation
that doesn't have the concepts to represent what's going on.
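
A standard toy example of this, in Python: the logistic map at r = 4.  Seen
only as a bit stream, the output looks patternless; seen in a representation
that includes the generating rule, it is a one-liner.

    # Simple in one representation, apparently random in another.
    x = 0.37                         # arbitrary seed in (0, 1)
    bits = []
    for _ in range(64):
        x = 4.0 * x * (1.0 - x)     # logistic map at r = 4: chaotic regime
        bits.append(1 if x > 0.5 else 0)
    print("".join(map(str, bits)))  # looks patternless without the rule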

One way to encode "simple enough to be learnable" might be that a sequence
of models with finite degrees of freedom can produce an increasingly
accurate representation of the behavior of our universe.  One way to encode
"complex enough so that beings which understand can evolve" might be that
the number of degrees of freedom needed to represent the fine structure is
unbounded. The problem with models with unbounded degrees of freedom is
that our theory of statistics (our best formal theory of induction) is
based on models with finitely many degrees of freedom.  However, we're
beginning to move beyond that.  The first glimmering I saw was one part of
one chapter in Bishop, Fienberg and Holland in which they do asymptotics
for the case in which the dimension of the model increases with the sample
size, but in such a way that the ratio of sample size to dimension still
tends to infinity.
More recently we have the fractal priors Radford Neal used, and the recent
work on mixtures of graphical models.
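
Here is a sketch of the "growing but finite" idea in Python, with an
arbitrary stand-in target and an arbitrary growth rate for the model
dimension; the only point is that each model is finite, yet the sequence of
models becomes an increasingly accurate representation.

    # Finite models whose dimension grows with the sample size n,
    # slowly enough that n / dimension still tends to infinity.
    import numpy as np
    rng = np.random.default_rng(0)
    target = lambda x: np.sin(3 * x)       # stands in for "the phenomenon"
    for n in [20, 160, 1280]:
        degree = max(1, int(round(n ** (1 / 3))))   # dimension grows with n
        x = rng.uniform(-1, 1, n)
        y = target(x) + rng.normal(0, 0.1, n)
        coeffs = np.polyfit(x, y, degree)
        grid = np.linspace(-1, 1, 200)
        err = np.max(np.abs(np.polyval(coeffs, grid) - target(grid)))
        print(n, degree, round(float(err), 4))      # error shrinks as n grows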

I believe the UAI community has some interesting things to say about "edge
of chaos" universes that most people in the complexity field seem unaware
of.  In particular, we seem to be groping toward a theory of prior
distributions that encode the knowledge that the phenomenon in question is
learnable in the representation of choice, without encoding much additional
knowledge beyond that.  This is a useful kind of "vague prior."
Essentially, we are saying, "the phenomenon can be described in my
representation to adequate approximation by many fewer degrees of freedom
than exist in the observations, but I'm not sure which of the degrees of
freedom my representation allows are needed and which are not."  Countable
mixtures of graphical models, where the dimension of the model is
unbounded, with fairly diffuse priors on the parameters, are a start in this
direction.  To apply these ideas to edge-of-chaos universes, we need to get
away from thinking about the scientist "in here" modeling reality "out
there," and realize that if we are talking about the universe, the
representer and the thing represented are the same.
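
As a minimal sketch (in Python) of what such a vague learnability prior
could look like, with a geometric rate and scale that are nothing but
placeholders: give every finite dimension nonzero mass, favor small ones,
and stay diffuse about the parameters once the dimension is drawn.

    # A countable mixture over model dimension with diffuse parameter priors.
    import numpy as np
    rng = np.random.default_rng(0)
    def sample_model(p_stop=0.2, scale=10.0):
        k = rng.geometric(p_stop)          # P(dim = k) > 0 for every k >= 1
        theta = rng.normal(0.0, scale, k)  # diffuse prior on the k parameters
        return k, theta
    # "Describable with far fewer degrees of freedom than the observations
    # have, but I don't know which ones": small k likely, no k ruled out.
    for _ in range(3):
        print(sample_model())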

I think we can actually build models of such "edge of chaos" universes, to
observe how they work and thereby learn some things about
self-learnability.  To think about how to do this, let's adopt a theistic
metaphor (excuse me if you don't like this, but if I'm going to try to
design an "edge of chaos" universe modeled after ours, it helps me to
imagine a "designer" making ours so that I can copy him/her).  Imagine that
God created our universe out of a family of such "edge of chaos" universes,
gave all the consciousnesses vague "edge of chaos" priors, and set the
system evolving.  If that were the kind of universe we were in, then we'd
eventually discover we were in such a universe, and we'd eventually learn
as much as was learnable about which one we were in.
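
The crudest possible sketch of "building" such a universe, in Python: a
one-dimensional cellular automaton.  Rule 110 is often cited as sitting near
the edge of chaos (a trivially simple rule with behavior rich enough to
support computation).  It captures only the simple-rule/rich-behavior part
of the story, nothing about priors or consciousness, but it is a start.

    # A toy "designed universe": elementary cellular automaton, rule 110.
    RULE = 110                     # new cell = (RULE >> neighborhood) & 1
    width, steps = 64, 32
    cells = [0] * width
    cells[width // 2] = 1          # a single live cell as the initial condition
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = [(RULE >> (cells[(i - 1) % width] * 4
                           + cells[i] * 2
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]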

For more on this idea, see some non-formal gropings in
      http://ite.gmu.edu/~klaskey/papers/Self-Representation.html

I wrote this for a conference on consciousness but never submitted it
because I didn't like it.  It's a little too flaky for my taste, but it has
the essential ideas, albeit in a form that doesn't communicate well enough,
especially to people who don't have the hooks for it.  I'll continue to
refine it, but I thought I'd point you to it because it's directly relevant
to your question.

Kathy
