However, what mainly interests me is what reason one could have for *not* taking the Pythagorean view, which does, after all, explain why the universe exists (or appears to exist).
Perhaps because most people believe that the Pythagorean view has been refuted.
Just consider the "little Pythagorean view", according to which:
-Every length can be measured by integers or a ratio of integers.
This has been refuted by the Pythagoreans themselves, when they discovered that the square root of 2 is *not* given by any ratio of integers. That was the discovery of the irrational numbers, a long time ago.
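To make the refutation concrete, here is a small Python sketch (an illustration only, not the classical parity proof): it searches exhaustively for a fraction p/q, with denominator up to a bound, whose square is exactly 2, and finds none. The function name `ratio_with_square_two` is mine.

```python
from fractions import Fraction

def ratio_with_square_two(max_den):
    """Search for a fraction p/q (1 <= q <= max_den) with (p/q)^2 == 2 exactly."""
    for q in range(1, max_den + 1):
        # p must be the integer just below or just above q * sqrt(2)
        p = int(2 ** 0.5 * q)
        for cand in (p, p + 1):
            if Fraction(cand, q) ** 2 == 2:
                return Fraction(cand, q)
    return None  # no exact ratio found up to this bound

print(ratio_with_square_two(10000))  # prints None
```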
Now Pythagoras could have "corrected" his doctrine with:
-Every length can be measured by integers, ratios of integers, or radicals of integers.
But this would have been refuted by Abel's discovery, in 1824, that polynomials of degree greater than 4 can have solutions which cannot be described in terms of ratios and radicals.
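The quintic x^5 - x - 1 is a standard example: it has a real root, but Galois theory shows that root cannot be written with integers, ratios and radicals. A small sketch, using Newton's method just to exhibit the root numerically (the implementation is mine):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Plain Newton iteration: x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# The real root of x^5 - x - 1 = 0 exists as a "length",
# yet (by Abel/Galois) it is not expressible in radicals.
root = newton(lambda x: x**5 - x - 1, lambda x: 5 * x**4 - 1, x0=1.0)
print(root)  # about 1.1673039782614187
```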
Now Pythagoras could have corrected his doctrine again with:
-Every length can be measured by the zeros of polynomials with integer coefficients.
But then Pythagoras would have been refuted by the discovery of the non-algebraic numbers: the transcendental numbers, like Euler's e and π.
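Concretely: the partial sums of the series for e are all rationals, yet they converge to a number which Hermite proved (1873) is not a zero of any polynomial with integer coefficients. A quick sketch with exact rational arithmetic:

```python
from fractions import Fraction

# Partial sums of e = 1/0! + 1/1! + 1/2! + ... are rational,
# but their limit e is transcendental (Hermite, 1873).
s = Fraction(0)
factorial = 1
for n in range(20):
    s += Fraction(1, factorial)
    factorial *= n + 1
print(float(s))  # about 2.718281828459045
```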
Perhaps at this stage Pythagoras would begin to think his Pythagorean view might not work after all.
And then he would have been destroyed by Cantor's discovery: Cantor showed, with his famous diagonalization, that the set of reals (the lengths) is not enumerable.
But then Pythagoras would perhaps have postulated the comp hypothesis, thinking that the *algorithmic* reals, which should obviously be enumerable, exist and are easily defined.
Alas, the more subtle Post-Turing-Markov-Church-Kleene-Gödel diagonalization makes the algorithmic reals not *algorithmically* enumerable.
Surely at this stage Pythagoras should abandon the Pythagorean view, shouldn't he?
NOT AT ALL. With *Church's thesis* you can still say:
-Every length can be measured by a FORTRAN program.
Only there is a price to pay:
FORTRAN programs will measure *much more* than "lengths": an enumeration of the algorithmic reals will enumerate the reals plus other objects, and no theory at all will give you an algorithmic way to distinguish the reals from those other objects. That is, the price is incompleteness, randomness, unpredictability, etc. (Click on the diagonalization posts at my URL, where I explain this with the notion of functions from N to N in place of the reals.)
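The situation can be caricatured with a dovetailer. In the toy sketch below (all names are mine), "programs" are functions that, given a step budget, either return an output or signal that they are still running. The dovetailer enumerates the outputs of the programs that halt within the budget, but no finite budget ever certifies that the third program will not eventually produce something; algorithmically separating the "reals" from the other objects is exactly what fails.

```python
# Toy "programs": given a step budget, return an output, or None if still running.
programs = [
    lambda steps: 3 if steps >= 1 else None,   # halts quickly, outputs 3
    lambda steps: 7 if steps >= 5 else None,   # halts later, outputs 7
    lambda steps: None,                        # never halts: an "other object"
]

def dovetail(programs, budget):
    """Run every program for up to `budget` steps; collect outputs of those that halted."""
    return {i: p(budget) for i, p in enumerate(programs) if p(budget) is not None}

print(dovetail(programs, 10))  # {0: 3, 1: 7} -- program 2 remains forever undecided
```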
But that is nice (for a platonic realist), and it shows that Church's thesis not only rehabilitates the little Pythagorean view (in terms of lengths), but makes consistent the large Pythagorean view, according to which:
-Everything emerges from the integers and their relations.
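One humble illustration of "emerging from the integers and their relations": the Cantor pairing function is a bijection between N x N and N, so pairs of integers (and from there the rationals, and codes of programs) already live inside the integers. A sketch:

```python
def pair(m, n):
    """Cantor pairing: a bijection from N x N to N."""
    return (m + n) * (m + n + 1) // 2 + n

def unpair(z):
    """Inverse of the Cantor pairing."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)  # w = m + n
    t = w * (w + 1) // 2                    # triangular number below z
    n = z - t
    return w - n, n

assert unpair(pair(12, 34)) == (12, 34)
print(pair(12, 34))  # 1115
```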
And my PhD result shows that, with the comp hyp, the appearance of physics *should* emerge in the average memory of the consistent anticipating universal machine/program/number. From that I derived a theorem prover for the logic of the physical propositions, but because that theorem prover is inefficient I still cannot decide whether it really gives a quantum logic, or which one... Another weakness: I have neither a semantics nor an axiomatization for that quantum logic, only a theorem prover, and a naive semantics in terms of maximal consistent computational histories.
I have written two thirds of a paper which summarizes the proof, and I intend to submit it to some international journal (once it is finished...).
Sure, there exist other reasons to believe the Pythagorean view is coming back, like more direct, astonishing relations between number theory and theoretical physics. Look at http://www.maths.ex.ac.uk/~mwatkins/zeta/renormalisation.htm