> Why do you say Q0 or R0 and not "simple type theory"? There are contentions
> that HOL is too weak a type theory for much of mathematics. Simple type
> theory is even weaker. So in what sense is it expressive? (Granted, you can
> do a great deal with just PA, and I'm sure your system is at least as
> strong. But there is plenty of mathematics that goes beyond this.)

Q0 is still simple type theory (the most elegant formulation, an improvement
of Church's type theory and equivalent to it), but R0 is much stronger. R0
has type abstraction and provides some of the means for dependent type
theory. It can naturally express group theory, which isn't possible in
polymorphic type theory (e.g., HOL) or the even weaker simple type theory
(e.g., Q0).

For a short overview (and comparison), see p. 1 of:
https://owlofminerva.net/files/fom_2018.pdf

For more details, please see the announcement:
https://owlofminerva.net/kubota/software-implementation-of-the-mathematical-logic-r0-available-for-download/

For all details, see:
https://www.owlofminerva.net/files/formulae.pdf#page=12

I believe that with "substitution at the type (subscript) level" (p. 12), the
only feature still missing, what can be expressed in the language of R0
coincides with (all of) mathematics. (Of course, meta-mathematical theorems /
metatheorems cannot be expressed within a Hilbert-style system.)
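To give a rough idea of the kind of statement meant here, consider the
following sketch in Lean 4 syntax (Lean's dependent type theory, not R0; the
names Group' and leftIdentity_forAllGroups are made up for this illustration).
The point is that the outermost quantifier binds the carrier type itself:

  -- A group structure over an arbitrary carrier type G.
  structure Group' (G : Type) where
    mul : G → G → G
    one : G
    inv : G → G
    mul_assoc : ∀ a b c : G, mul (mul a b) c = mul a (mul b c)
    one_mul : ∀ a : G, mul one a = a
    inv_mul : ∀ a : G, mul (inv a) a = one

  -- A single theorem quantifying over every carrier type and every group
  -- structure on it: the outer ∀ ranges over types within the logic.
  theorem leftIdentity_forAllGroups :
      ∀ (G : Type) (g : Group' G) (a : G), g.mul g.one a = a :=
    fun _ g a => g.one_mul a

In simple type theory there is no quantifier that binds a type at all, and in
HOL a type variable is only schematically general (implicitly fixed for the
whole theorem) and cannot be bound by ∀ inside a formula; type abstraction is
intended to make such binding available in the object language itself.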
>> In summary, the object language (the first step) is not important, except
>> as a specification for the second step. Only the second step matters for
>> correctness and practical usability.
>
> This is not quite correct.
> Not only is the first step the logical foundation on which the second is
> established.
> Moreover, a reference implementation is required to study certain cases.
> In particular, violations of the type restrictions have to be avoided, as
> otherwise the system is rendered inconsistent.
> This is quite tricky, and case studies are necessary, especially if you want
> to implement new features such as type abstraction and dependent types.
> Also, you want to show that in principle all of mathematics can be expressed
> in the reference implementation (and the second step is due to practical
> considerations only).
> While of course you need the second step for numbers such as 2^2^2^60, all
> questions of principle (such as whether an antinomy can be expressed) can be
> addressed with the first step.
>
> Finally, only in the first step (the ideal formal language) is the
> formulation very elegant and logical rigor preserved in a philosophical
> sense.
>
> But the move to the second step adds new axioms, so even if the ideal formal
> language is elegant and rigorous, that property may not be preserved by the
> second-level language, which is also the one that people actually use.

It depends on the purpose you use the system for. (See below.)

> For example, a certain condition in Q0 appears as two distinct (independent)
> conditions in Isabelle's metalogic (a natural deduction system).
>
> Quote:
>
> For example, in Isabelle's metalogic M by Larry Paulson, the Eigenvariable
> conditions appear as two distinct (independent) conditions:
> [...]
> By contrast, in the logic Q0 by Peter Andrews, the restrictions in these
> (derived) rules have their origin in the substitution procedure of (the
> primitive) Rule R', which is valid only provided that it is not the case
> that "x is free in a member of [the set of hypotheses] H and free in
> [A = B]." [Andrews, 2002, p. 214].
> [...]
> Hence, in a bottom-up representation (like Q0) - unlike in a top-down
> representation (like Isabelle's metalogic M) - it is possible to trace the
> origin of the two Eigenvariable conditions back to a common root, i.e., the
> restriction in Rule R'.
>
> https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2018-May/msg00013.html

> So the formal language obtained in the first step is much more elegant than
> a practical system obtained in the second step.
> For foundational and philosophical research, the first is preferred; for
> advanced mathematics such as with large numbers, the second is preferred.
>
> What is the computer implementation for then? As a practical tool, it is too
> limited, and in particular the exponential blowup means that you are pushing
> the upper bound caused by the finiteness of the hardware down to below many
> theorems that matter to people. If you want a system "for foundational and
> philosophical research", you can do that on paper. If you want to prove
> properties about the system, you need to formalize the system itself inside
> a theorem prover (presumably with a stronger axiom system). Within that
> system, you will be able to prove the provability of a lot of things you
> wouldn't be able to do directly in the system, by making use of the
> metalogical natural numbers to prove schematic constructions.

Arguing that way means that Russell and Whitehead should have skipped all
three volumes of Principia Mathematica, only sketched the formal system, and
done some demonstration proofs. But it was important for them to write and
publish all three volumes (today one would do this on the computer, which I
did for R0).

The difference is that type systems nowadays are much more sophisticated,
such that a software implementation is required even for step one. With an
advanced type system (in particular, with type abstraction), paper proofs
aren't reliable anymore, which holds for step one, too. So you need computer
verification for this. The type restrictions become more sophisticated, since
the Eigenvariable conditions have to be preserved at both the term level and
the type level, which are no longer strictly separated. See, for example:

Scope Violation in Lambda Conversion at Type Level
"[λt_τ.[λu_τ.x_t](t_τ)] u_τ"
https://www.owlofminerva.net/files/formulae.pdf#page=385
https://www.owlofminerva.net/files/formulae.pdf#page=796

Moreover, you need a type referencing mechanism similar to de Bruijn indices
in order to preserve the dependencies. Even in simple symbols such as the
universal quantifier (with type abstraction) you have dependencies over three
levels, which cannot be effectively handled on paper in a reliable way:

"definition of the universal quantifier with type abstraction
   ∀ := [λt_τ.[λp_ot.(=_o(ot)(ot) [λx_t.T_o] p_ot)_o]_o(ot)]
      = [λt.[λp.(= [λx.T] p)]]_o(o\3)τ,
 with depth/nesting index \3 in o(o\3)τ referring to τ, hence ∀ω is of type
 o(o\3)[ω / \3] = o(oω)."
https://www.owlofminerva.net/files/formulae.pdf#page=12
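The effect of such a depth index can be illustrated with a toy model, again
in Lean 4 syntax (the names Ty, substRef and forallBody are invented for this
sketch and have nothing to do with the actual R0 sources): a reference such
as \3 is just a number pointing back to an enclosing binder, and supplying
the type argument is a substitution on the type expression, as in
o(o\3)[ω / \3] = o(oω).

  -- Toy syntax of type expressions with numeric back references.
  inductive Ty where
    | base : String → Ty    -- named base types such as o, τ, ω
    | ref  : Nat → Ty       -- back reference such as \3 to an enclosing binder
    | app  : Ty → Ty → Ty   -- juxtaposition "αβ" (functions from β to α)
    deriving Repr

  open Ty

  -- Substitute the concrete type w for every reference with index idx.
  def substRef (idx : Nat) (w : Ty) : Ty → Ty
    | base s  => base s
    | ref n   => if n = idx then w else ref n
    | app a b => app (substRef idx w a) (substRef idx w b)

  -- The type expression o(o\3) occurring in the type o(o\3)τ of ∀.
  def forallBody : Ty := app (base "o") (app (base "o") (ref 3))

  -- Instantiating \3 with ω yields o(oω).
  #eval substRef 3 (base "ω") forallBody
  -- prints: Ty.app (Ty.base "o") (Ty.app (Ty.base "o") (Ty.base "ω"))

A full treatment would also have to adjust indices when a substitution
crosses further binders, exactly as with de Bruijn indices for terms; that
adjustment is what prevents the kind of capture shown in the scope violation
example above.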
One might consider formalizing the systems of both step one and step two and
proving them equivalent.

____________________________________________________

Ken Kubota

https://doi.org/10.4444/100
