Hal Finney wrote:

Brent Meeker wrote:

[Hal Finney wrote:] When you observe evidence and construct your models, you need some basis for choosing one model over another. In general, you can create an infinite number of possible models to match any finite amount of evidence. It's even worse when you consider that the evidence is noisy and ambiguous. This choice requires prior assumptions, independent of the evidence, about which models are inherently more likely to be true or not.

[Brent Meeker wrote:] In practice we use coherence with other theories to guide our choice. With that kind of constraint we may have trouble finding even one candidate theory.

[Hal Finney wrote:] Well, in principle there still should be an infinite number of theories, starting with "the data is completely random and just happens to look lawful by sheer coincidence". I think the difficulty we have in finding new ones is that we are implicitly looking for small ones, which means that we implicitly believe in Occam's Razor, which means that we implicitly adopt something like the Universal Distribution, a priori.

[Brent Meeker wrote:] We begin with an intuitive physics that is hardwired into us by evolution. And that includes mathematics and logic. There's an excellent little book on this, "The Evolution of Reason" by Cooper.

[Hal Finney wrote:] No doubt this is true. But there are still two somewhat-related problems. One is, you can go back in time to the first replicator on earth, and think of its evolution over the ages as a learning process. During this time it learned this "intuitive physics", i.e. mathematics and logic. But how did it learn it? Was it a Bayesian-style process? And if so, what were the priors? Can a string of RNA have priors?

`An RNA string, arising naturally in a particular environment, can be modelled as expressing a prior about the probability of such RNA strings.`
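
Hal's "Universal Distribution" is Solomonoff's universal prior, which weights each hypothesis by roughly 2^-(its description length), so "implicitly looking for small ones" amounts to adopting such a prior. A minimal sketch of the idea, assuming a toy setting where "programs" are just bit-strings that generate data by repetition, rather than real programs for a universal machine:

```python
# Toy Occam/universal prior: hypotheses are bit-strings, each weighted
# 2**-length a priori; the posterior is prior * likelihood, renormalized.
# The "semantics" (data = hypothesis repeated) is a made-up stand-in for
# running a program on a universal machine.
from itertools import product

def hypotheses(max_len):
    """Enumerate all binary strings up to max_len as stand-in 'programs'."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predicts(h, data):
    """Toy semantics: h predicts the data if repeating h reproduces it."""
    return (h * (len(data) // len(h) + 1))[:len(data)] == data

data = "010101010101"
posterior = {}
for h in hypotheses(6):
    prior = 2.0 ** -len(h)                  # shorter hypotheses are a priori likelier
    if predicts(h, data):                   # likelihood is 1 if consistent, else 0
        posterior[h] = prior

total = sum(posterior.values())
for h, w in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(h, round(w / total, 4))
```

Among the hypotheses consistent with the data ("01", "0101", "010101"), the shortest one absorbs most of the posterior weight, which is Occam's Razor falling out of the prior rather than being added separately.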

And more abstractly, if you wanted to design a perfect learning machine, one that makes observations and optimally produces theories based on them, do you have to give it prior beliefs and expectations, including math and logic? Or could you somehow expect it to learn those? But to learn them, what would be the minimum you would have to give it?

`You'd have to give it the ability to reproduce and an environment in which it competed with other reproducing learners.`
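
One way to make this concrete: a discrete replicator update multiplies each type's frequency by its fitness and renormalizes, which is formally Bayes' rule with the frequencies as the prior and the fitnesses as likelihoods. A minimal sketch, with made-up fitness numbers standing in for how well each replicator "fits" its environment:

```python
# Minimal sketch: one generation of replicator dynamics has the same form
# as one Bayesian update (posterior = prior * likelihood / evidence).
# Fitness values below are illustrative, not measured.

def replicator_step(freqs, fitnesses):
    """Each type grows in proportion to its fitness, then renormalize."""
    weighted = [p * f for p, f in zip(freqs, fitnesses)]
    total = sum(weighted)                  # mean fitness plays the role of the evidence term
    return [w / total for w in weighted]

freqs = [1 / 3, 1 / 3, 1 / 3]   # uniform "prior" over three replicator types
fitnesses = [1.0, 1.2, 0.8]     # how well each type "predicts" its environment

for generation in range(20):
    freqs = replicator_step(freqs, fitnesses)
print([round(p, 3) for p in freqs])   # the best-fitting type comes to dominate
```

On this reading, a population of competing RNA strings is doing inference without representing anything: the frequency distribution is the prior, and selection is the update.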

I'm trying to ask the same question in both of these formulations. On the one hand, we know that life did it: it created a very good (if perhaps not optimal) learning machine. On the other hand, it seems like it ought to be impossible to do that, because there is no foundation.

`Why aren't elementary particles and entropy gradients enough foundation?`

[Hal Finney wrote:] Mathematics and logic are more than models of reality. They are pre-existent and guide us in evaluating the many possible models of reality which exist.

[Brent Meeker wrote:] I'd say they are *less* than models of reality. They are just consistency conditions on our models of reality. They are attempts to avoid talking nonsense. But note that not too long ago all the weirdness of quantum mechanics and relativity would have been regarded as contrary to logic.

[Hal Finney wrote:] I guess we could agree that they are "other" than models of reality? It still strikes me as paradoxical: ultimately we have learned our intuitions about mathematics and logic from reality, via the mechanisms of evolution and also our own individual learning experiences. And yet it seems that at some level a degree of logic, and certain mathematical assumptions, are necessary to get learning off the ground in the first place, and that they should not depend on reality.

`Why should they be any more independent of reality than say evolution or folk-physics? I highly recommend Cooper's book.`

Brent Meeker