David,

>Um, I hate to keep disagreeing with Kathryn,

No you don't hate it!  You're having the time of your life, and so am I.
(My p is approximately 0.7324589 for you, near unity for me.)

Please keep it up!

I HATE it when I spout off and no one challenges me.  It gets boring
listening to myself pontificate and hearing no nay-sayers.  Then I discover
they were holding their bewilderment or their disagreement inside and not
saying anything.  Nobody LEARNS anything that way!  (Assuming that learning
is even possible...)

> but I'm afraid her statement
>here is provably (i.e., mathematically) wrong. It amounts to the claim
>that cross-validation can establish the legitimacy of cross-validation,

That's not what I'm saying!

I'm not trying to establish deductively that induction works in our
universe! You yourself have proven that this is impossible.  You're in good
company, starting at least as early as Hume.

I'm making a perfectly good Bayesian argument.  You start with a prior that
places non-zero mass on "universe is complex but learnable in my
representation" (suitably formalized).  Then  you look at the observations
you are getting from our universe.  Are they what you'd expect from a
"complex but learnable" universe?  Are they NOT what you'd expect in either
a "simple" or a "chaotic" universe? If so, your posterior on "complex but
learnable" goes up.  If no to any of these questions, your posterior either
doesn't go up as much, or else it goes down, depending on the relative
strength of the evidence for the different hypotheses.
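
To make the shape of that argument concrete, here's a minimal sketch in
Python of the update I have in mind.  The three hypotheses, the prior, and
the likelihoods are all made-up illustrative numbers, not values I'd
defend; the point is only that the posterior falls out of Bayes' theorem
applied to an honest prior, rather than out of assuming the conclusion.

    # Toy Bayesian update; every number here is an illustrative assumption.
    hypotheses = ["simple", "complex but learnable", "chaotic/unlearnable"]
    prior      = [0.2, 0.5, 0.3]      # non-zero mass on every hypothesis
    likelihood = [0.05, 0.6, 0.1]     # P(observed track record | hypothesis)

    # Bayes' theorem: posterior is proportional to likelihood times prior.
    unnormalized = [l * p for l, p in zip(likelihood, prior)]
    total = sum(unnormalized)
    posterior = [u / total for u in unnormalized]

    for h, p in zip(hypotheses, posterior):
        print(f"{h}: {p:.3f}")
    # "complex but learnable" rises from 0.5 to about 0.88 because the
    # evidence was more probable under it than under its rivals.

With those particular numbers, "complex but learnable" climbs from 0.5 to
roughly 0.88.  With your numbers it may well not, which is exactly the
point about differing priors below.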

The stunningly accurate probability predictions from quantum physics seem
to give a resounding "yes" answer to "learnable at the micro level."  The
fact that we haven't duplicated this success all the way up to psychology,
sociology, and yes, theology, gives a "highly probable" to "either complex
or macro-unlearnable, but in any case not simple."  (A few centuries ago
many held out hope for "simple," but their numbers are dwindling.)  Some of
us appear to vote for complex; others for macro-unlearnable; others remain
agnostic.  That's what you'd expect, I'd say, at approximately this point
in the evolution of consciousness in a complex but learnable universe.  It
could also happen in a monkeys-at-the-keyboard universe, but it's unlikely
the monkeys would ever type out human beings capable of writing something
like a Shakespeare sonnet, let alone UAI folks slugging it out over whether
or not they're being typed out by monkeys.  What would a micro-learnable
but macro-unlearnable universe be like?  Who's got ideas?

It is also useful to point out that if you place probability 1 on "complex
but learnable" you'll never be able to infer otherwise (Phil Dawid has a
theorem, JASA 1985, that can be taken to establish this).  No matter how
strange the observations, you will always just come to the conclusion that
you don't yet have enough data to be well-calibrated. I think if I were
designing a universe, I'd want my consciousnesses to admit the possibility
of "simple" or "macro-unlearnable" along with "complex."

We in the aggregate obviously don't place probability 1 on "complex but
learnable."  Otherwise we wouldn't have Davids and Rolfs arguing
vociferously against assuming learnability and the ability to measure all
uncertainty with probability. Thank you -- you keep us honest.  On the
other hand, I've watched a major shift toward the Bayesian view occurring
over the past 20 years in statistics, philosophy, artificial intelligence,
and other fields concerned with the problem of induction. That's what I'd
expect at approximately this point in the evolution of a complex but
learnable universe.

But I certainly wouldn't want such a shift to become "groupthink" that you
disagree with at your peril.  I would hope that at some point humanity
would grow out of that. Anyway, it's important to keep the
"macro-unlearnable" hypothesis open in case it's what's "really" going on.
But my current probability happens to be low.

>i.e., that a learning algorithm can establish its own legitimacy. This is
>circular reasoning.

Bayes' theorem is not circular reasoning.

As I said, I'm not trying to establish DEDUCTIVELY that ours is a complex
but learnable universe.  I'm arguing that it's a perfectly respectable
INDUCTIVE inference from the observations all of us have.  I give it a
pretty high probability, but not unity!

You don't have to agree with me.  Your prior is different from mine, or
else you can't be modeled as "having a prior."  In either case, as a
subjectivist, I grant you the right to your opinion.  I also grant you not
only the right, but the responsibility as a scientist, to express your
disagreement.  That's how we all learn. Or at least, we all seem to suspect
we are learning, despite the impossibility of establishing beyond a shadow
of a doubt that we are indeed doing so.

Cheers,

Kathy

p.s.  For some "real engineering" (as opposed to airy philosophizing) on
the advantages in complex learning problems of information exchange among a
population of learners, see my papers with student Jim Myers, available for
downloading from
    http://ite.gmu.edu/~klaskey/lectures.html
Stay tuned for late-breaking results (not written up yet) on learning
hidden variables.  Also maybe Jim will send this list information on how to
obtain his dissertation.
