Hal wrote
> Brent Meeker wrote:
> > In practice we use coherence with other theories to guide our choice. With
> > that kind of constraint we may have trouble finding even one candidate
> > theory.

> Well, in principle there still should be an infinite number of theories,
> starting with "the data is completely random and just happens to
> look lawful by sheer coincidence". I think the difficulty we have in
> finding new ones is that we are implicitly looking for small ones, which
> means that we implicitly believe in Occam's Razor, which means that we
> implicitly adopt something like the Universal Distribution, a priori.
An intriguing way of putting it; yes, the amount of data compression
possible is necessarily related to both Occam's Razor and the UDist.
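The link between compressibility, Occam's Razor, and the Universal Distribution can be made concrete with a toy sketch (my own illustration, not from the thread): identify each hypothesis with a description string and weight it 2^(-length), so shorter, more compressible theories dominate the prior.

```python
# Toy "universal distribution" over hypotheses: weight each candidate
# theory by 2^(-description length), then normalize. This is only a
# caricature of the real construction (which sums over programs on a
# universal machine), but it shows the Occam effect directly.

def occam_prior(hypotheses):
    """Assign each hypothesis weight 2^(-len), then normalize."""
    raw = {h: 2.0 ** -len(h) for h in hypotheses}
    total = sum(raw.values())
    return {h: w / total for h, w in raw.items()}

# Two theories for the same data: a short lawful one, and a long
# "the data just happens to look lawful" enumeration of observations.
theories = ["F=ma", "x1=3,x2=7,x3=1,x4=9,x5=2"]
prior = occam_prior(theories)
for h, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{h!r}: {p:.6f}")
```

The short theory gets nearly all the prior mass, which is the sense in which believing in small theories amounts to adopting something like the Universal Distribution a priori.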
> > We begin with an intuitive physics that is hardwired into us by
> > evolution. And that includes mathematics and logic. There's an
> > excellent little book on this, "The Evolution of Reason" by Cooper.
>
> No doubt this is true. But there are still two somewhat-related problems.
> One is, you can go back in time to the first replicator on earth, and
> think of its evolution over the ages as a learning process. During this
> time it learned this "intuitive physics", i.e. mathematics and logic.
> But how did it learn it? Was it a Bayesian-style process? And if so,
> what were the priors? Can a string of RNA have priors?
I would say that the current state of the RNA string at any
given time can be regarded as its prior. After all, it survived
up to now, eh? The idea that evolution has to be pretty conservative
(that is, the mechanisms must not allow too many new guesses)
also follows at once.
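The "current state as prior" reading can be made formal: one generation of selection is exactly a Bayesian update, with the current genotype frequencies as the prior and relative fitness as the likelihood (the discrete replicator equation). A minimal sketch, my own framing rather than anything from the thread:

```python
# Selection as Bayes' rule: posterior frequency of each genotype is
# proportional to (current frequency) x (relative fitness).

def select(freqs, fitness):
    """One generation of selection: posterior = prior * likelihood, normalized."""
    post = {g: freqs[g] * fitness[g] for g in freqs}
    total = sum(post.values())
    return {g: p / total for g, p in post.items()}

freqs = {"A": 0.5, "B": 0.5}      # the "prior": today's population state
fitness = {"A": 1.2, "B": 0.8}    # the "likelihood": relative survival

for _ in range(10):               # ten generations of updating
    freqs = select(freqs, fitness)

print(freqs)
```

On this reading, a string of RNA "has priors" in the same sense that any Bayesian agent does: its present composition encodes everything selection has taught the lineage so far.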
> And more abstractly, if you wanted to design a perfect learning machine,
> one that makes observations and optimally produces theories based on
> them, do you have to give it prior beliefs and expectations, including
> math and logic? Or could you somehow expect it to learn those? But to
> learn them, what would be the minimum you would have to give it?
>
> I'm trying to ask the same question in both of these formulations.
> On the one hand, we know that life did it, it created a very good (if
> perhaps not optimal) learning machine. On the other hand, it seems like
> it ought to be impossible to do that, because there is no foundation.
I strongly urge you to read the new book "What is Thought", by
Eric Baum. He very insightfully and carefully attends to these
questions.
Lee