I trust intuitively that these definitions are valid. The problem is that
an alias rule can still fail if the syntax of the definition doesn't meet
certain criteria: for instance, if the parentheses surrounding ↔ were
missing in the definition of the biconditional. If I understood how the
definition checker for the set.mm database works, I would have a better
picture. I have my own hypothesis, but I'm not sure whether it's fully
sufficient.
My hypothesis is as follows...
Let LS⁺ᴰ be the set of all WFFs in the system with the new definition D, and
let LS be the set of all WFFs in the system before adding the new definition.
Then let F ≔ { Ψ ⇒ Φ } be the substitution rule replacing the definiendum Ψ
with the definiens Φ.
Now the first criterion should be that for any member of LS⁺ᴰ, applying F a
finite number of times yields a member of LS.
The second criterion is that if we call F* the process of applying F a
finite number of times to obtain a member of LS, then F* should be a
function, i.e. the result should be unique. If the result ever depends on
the order of application (ambiguous nesting of the definiendum, or a clash
with other symbols), then the definition is going to be creative.
But are these two criteria alone enough?
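
To make this concrete, here is a minimal Python sketch of both criteria. It
is purely illustrative: the names are my own, it uses the biconditional as
the example definition, and it encodes WFFs as nested tuples rather than the
actual grammar a real checker would have to work with.

def definiens(p, q):
    # F: the substitution rule (p <-> q) => -. ((p -> q) -> -. (q -> p))
    return ("not", ("imply", ("imply", p, q), ("not", ("imply", q, p))))

def expand(wff, order="outermost"):
    # F*: apply F until no "equiv" node remains.  The first criterion says
    # this recursion must terminate for every member of LS+D.
    if isinstance(wff, str):          # atomic variable, already in LS
        return wff
    op, *args = wff
    if order == "innermost":
        args = [expand(a, order) for a in args]
    if op == "equiv":
        return expand(definiens(*args), order)
    return (op,) + tuple(expand(a, order) for a in args)

# The second criterion says F* must be a function: the result may not depend
# on the order of application, even with nested occurrences of the
# definiendum.
nested = ("equiv", ("equiv", "ph", "ps"), "ch")
assert expand(nested, "outermost") == expand(nested, "innermost")

For a tree-structured encoding with a non-circular definiens like this one,
the two orders agree; the interesting case is a raw symbol string, where
overlapping or ambiguous matches of the definiendum are exactly what could
make the two orders disagree and the definition creative.
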
On Monday, July 10, 2023 at 12:48:41 PM UTC-4 [email protected]
wrote:
> I was also disturbed by the issue of definitions being extra axioms
> according to Metamath, and solved it by interpreting only 'not' and
> 'imply' as part of the propositional core language, and all the other
> junctors as different ways to express something in terms of 'not' and
> 'imply' (which is also done in
> https://us.metamath.org/mmsolitaire/pmproofs.txt when proving statements
> which contain different junctors, see "Result of Proof" lines which differ
> from their theorems).
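>
> For instance, with the usual choices a disjunction \phi\or\psi would
> abbreviate \not\phi\imply\psi, a conjunction \phi\and\psi would abbreviate
> \not(\phi\imply\not\psi), and \phi\equiv\psi abbreviates
> \not((\phi\imply\psi)\imply\not(\psi\imply\phi)), the last of these being
> the alias used in the df-bi proof below.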
>
> The approach of building only on what is essentially required leads to much
> slimmer definitions that make use of a logical system (e.g. when defining
> semantics).
>
>
> Here is one of my exemplary syntax definitions:
>
>
> Alias step verifications can be done by converting both formulas into the
> core language and checking for equality.
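>
> For example, the alias step from line 8 to line 9 in the proof of mms-*4.2
> below is verified by converting P\equivP back into the core language, which
> gives \not(P\implyP\imply\not(P\implyP)), and checking that this matches
> line 8.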
>
> Based on
>
>
> (A1) ⊢\psi\imply(\phi\imply\psi)
> (A2)
> ⊢\psi\imply(\phi\imply\chi)\imply((\psi\imply\phi)\imply(\psi\imply\chi))
> (A3) ⊢\not\phi\imply\not\psi\imply(\psi\imply\phi)
> (MP) {\psi,\psi\imply\phi}⊢\phi
>
> I gave the below proof of df-bi. Most of it proves that a formula is
> equivalent to itself, from which a single alias step leads to df-bi, since
> df-bi, under the previous alias definitions, equals
> (\phi\equiv\psi)\equiv(\phi\equiv\psi).
>
> a2i:
> 1. \phi\imply(\psi\imply\chi)
> premise
> 2. \phi\imply(\psi\imply\chi)\imply(\phi\imply\psi\imply(\phi\imply\chi))
> (A2) ; 〈\phi, \psi〉, 〈\psi, \phi〉
> 3. \phi\imply\psi\imply(\phi\imply\chi)
> (MP):1,2
>
> mms-H6:
> 1.
> \not1\imply\not2\imply(2\imply1)
>
> (A3) ; 〈\phi, 1〉, 〈\psi, 2〉
> 2.
> \not1\imply\not2\imply(2\imply1)\imply(0\imply(\not1\imply\not2\imply(2\imply1)))
>
> (A1) ; 〈\phi, 0〉, 〈\psi, \not1\imply\not2\imply(2\imply1)〉
> 3.
> 0\imply(\not1\imply\not2\imply(2\imply1))
>
> (MP):1,2
> 4.
> 0\imply(\not1\imply\not2\imply(2\imply1))\imply(0\imply(\not1\imply\not2)\imply(0\imply(2\imply1)))
>
> (A2) ; 〈\chi, 2\imply1〉, 〈\phi, \not1\imply\not2〉, 〈\psi, 0〉
> 5.
> 0\imply(\not1\imply\not2)\imply(0\imply(2\imply1))
>
> (MP):3,4
>
> mms-T13:
> 1.
> \not\not0\imply(1\imply\not\not0)
>
> (A1) ; 〈\phi, 1〉, 〈\psi, \not\not0〉
> 2.
> \not\not0\imply(\not\not(1\imply\not\not0)\imply\not\not0)
>
> (A1) ; 〈\phi, \not\not(1\imply\not\not0)〉, 〈\psi, \not\not0〉
> 3.
> \not\not0\imply(\not\not(1\imply\not\not0)\imply\not\not0)\imply(\not\not0\imply(\not0\imply\not(1\imply\not\not0)))
>
> (mms-H6) ; 5 ; 〈0, \not\not0〉, 〈1, \not(1\imply\not\not0)〉, 〈2, \not0〉
> 4.
> \not\not0\imply(\not0\imply\not(1\imply\not\not0))
>
> (MP):2,3
> 5.
> \not\not0\imply(\not0\imply\not(1\imply\not\not0))\imply(\not\not0\imply(1\imply\not\not0\imply0))
>
> (mms-H6) ; 5 ; 〈0, \not\not0〉, 〈1, 0〉, 〈2, 1\imply\not\not0〉
> 6.
> \not\not0\imply(1\imply\not\not0\imply0)
>
> (MP):4,5
> 7.
> \not\not0\imply(1\imply\not\not0\imply0)\imply(\not\not0\imply(1\imply\not\not0)\imply(\not\not0\imply0))
>
> (A2) ; 〈\chi, 0〉, 〈\phi, 1\imply\not\not0〉, 〈\psi, \not\not0〉
> 8.
> \not\not0\imply(1\imply\not\not0)\imply(\not\not0\imply0)
>
> (MP):6,7
> 9.
> \not\not0\imply0
>
> (MP):1,8
>
> mpd:
> 1. \phi\imply\psi premise
> 2. \phi\imply(\psi\imply\chi) premise
> 3. \phi\imply\psi\imply(\phi\imply\chi) (a2i):2 ; 3
> 4. \phi\imply\chi (MP):1,3
>
> id:
> 1. \phi\imply(\phi\imply\phi) (A1) ; 〈\psi, \phi〉
> 2. \phi\imply(\phi\imply\phi\imply\phi) (A1) ; 〈\phi, \phi\imply\phi〉,
> 〈\psi, \phi〉
> 3. \phi\imply\phi (mpd):1,2 ; 4 ; 〈\chi, \phi〉,
> 〈\psi, \phi\imply\phi〉
>
> mms-T35:
> 1.
> \not\not(0\imply1)\imply(0\imply1)
>
> (mms-T13) ; 9 ; 〈0, 0\imply1〉
> 2.
> \not\not(0\imply1)\imply(0\imply1)\imply(\not\not(0\imply1)\imply0\imply(\not\not(0\imply1)\imply1))
>
> (A2) ; 〈\chi, 1〉, 〈\phi, 0〉, 〈\psi, \not\not(0\imply1)〉
> 3.
> \not\not(0\imply1)\imply0\imply(\not\not(0\imply1)\imply1)
>
> (MP):1,2
>
> mms-*4.2:
> 1.
> P\implyP
>
> (id) ; 3 ; 〈\phi, P〉
> 2.
> P\implyP\imply(\not\not(P\implyP\imply\not(P\implyP))\imply(P\implyP))
>
> (A1) ; 〈\phi, \not\not(P\implyP\imply\not(P\implyP))〉, 〈\psi, P\implyP〉
> 3.
> \not\not(P\implyP\imply\not(P\implyP))\imply(P\implyP)
>
> (MP):1,2
> 4.
> \not\not(P\implyP\imply\not(P\implyP))\imply(P\implyP)\imply(\not\not(P\implyP\imply\not(P\implyP))\imply\not(P\implyP))
>
> (mms-T35) ; 3 ; 〈0, P\implyP〉, 〈1, \not(P\implyP)〉
> 5.
> \not\not(P\implyP\imply\not(P\implyP))\imply\not(P\implyP)
>
> (MP):3,4
> 6.
> \not\not(P\implyP\imply\not(P\implyP))\imply\not(P\implyP)\imply(P\implyP\imply\not(P\implyP\imply\not(P\implyP)))
>
> (A3) ; 〈\phi, \not(P\implyP\imply\not(P\implyP))〉, 〈\psi, P\implyP〉
> 7.
> P\implyP\imply\not(P\implyP\imply\not(P\implyP))
>
> (MP):5,6
> 8.
> \not(P\implyP\imply\not(P\implyP))
>
> (MP):1,7
> 9.
> P\equivP
>
> alias:8
>
> df-bi:
> 1.
> \phi\equiv\psi\equiv(\phi\equiv\psi)
>
>
> (mms-*4.2) ; 9 ; 〈P, \phi\equiv\psi〉
> 2.
> \not(((\phi\equiv\psi)\imply\not((\phi\imply\psi)\imply\not(\psi\imply\phi)))\imply\not(\not((\phi\imply\psi)\imply\not(\psi\imply\phi))\imply(\phi\equiv\psi)))
>
> alias:1
>
> Here is a way to define these formal proofs:
>
> Reasons and alias steps are technically not even part of a formal proof,
> but they should in practice be provided for clarity.
> Depending on the alias step, one might even prefer several sub-steps
> because it can be a little too much for our sluggish minds to process.
> However, the theory is nice and clean this way, which I think was the
> focus here.
>
> ------------------------------
> *From:* [email protected] <[email protected]> on behalf of
> Marshall Stoner <[email protected]>
> *Sent:* Saturday, July 8, 2023 23:40:22
> *To:* Metamath
> *Subject:* Re: [Metamath] Uncomfortable with definitions as axioms ...
> help?
>
> New link.
>
> paper <https://1drv.ms/b/s!AuvNPSOQfN3xjK1_IfOJ7LIT5Pl3NQ?e=5Ou22H>
>
> On Saturday, July 8, 2023 at 5:22:17 PM UTC-4 Marshall Stoner wrote:
>
>> I'm currently working on a "textbook-like" introduction to the system
>> used in set.mm, and I'm thinking about how I should provide a proof
>> justifying the definition of '↔'.
>>
>> I write the abbreviation eq(𝜑, 𝜓) ≔ ¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 𝜑)).
>> Then the definition is just the expansion of eq((𝜑 ↔ 𝜓), eq(𝜑, 𝜓)).
>> The elimination rule becomes (𝜑 ↔ 𝜓) ≔ eq(𝜑, 𝜓). Applying the
>> elimination rule to the definition, you get eq(eq(𝜑, 𝜓), eq(𝜑, 𝜓)), which
>> can be proved as a theorem without any reference to the symbol '↔'. Any
>> theorem containing '↔' must have a proof tree that can be traced backwards
>> to a statement of the definition itself. Once you replace (𝜑 ↔ 𝜓) with
>> eq(𝜑, 𝜓) in the definition, you have proved a theorem. You then replace it
>> everywhere else you see it, tracing the same steps forward as you traced
>> backwards. Every line along the way should still be justified since
>> eq(𝜑, 𝜓) is a WFF that is self-contained.
>>
>> Anyways, I'm still working on the exact wording to make this
>> "meta-argument" absolutely 100% clear to myself and anyone else. Writing
>> out a formal induction meta-proof in the way the standard deduction theorem
>> is typically proved feels a bit cumbersome to me, but I don't want to just
>> handwave anything away either. I feel like I should maybe provide an
>> example showing how the elimination rule would work on a WFF containing
>> nested '↔'. The elimination rule has to be applied recursively until the
>> symbol is gone.
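>>
>> For instance, eliminating '↔' from ((𝜑 ↔ 𝜓) ↔ 𝜒) takes two passes: the
>> outer occurrence gives eq((𝜑 ↔ 𝜓), 𝜒), the inner one then gives
>> eq(eq(𝜑, 𝜓), 𝜒), and writing out the eq's yields
>> ¬ ((¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 𝜑)) → 𝜒) → ¬ (𝜒 → ¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 𝜑)))),
>> a WFF with no '↔' left in it.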
>>
>> paper <https://1drv.ms/b/s!AuvNPSOQfN3xjK1-DmOLCnCMd9hDrQ?e=5expKV>
>> On Wednesday, December 21, 2022 at 7:57:14 PM UTC-5 [email protected]
>> wrote:
>>
>>> On Wed, Dec 21, 2022 at 7:12 PM Samuel Goto <[email protected]> wrote:
>>>
>>>> Hey all,
>>>>
>>>> As I'm looking into metamath there is something lingering that is
>>>> making me uncomfortable: definitions are axioms.
>>>>
>>>> This is obviously intended and by design by the community (the book
>>>> has a whole chapter about it), but I'm still uncomfortable with the
>>>> justifications and consequences.
>>>>
>>>
>>> You are right to be concerned. I was also concerned several years ago,
>>> and the solution I came up with was to observe that definitions in
>>> set.mm follow a very regular structure, such that all definitions with
>>> that structure can be given a metatheoretic argument for conservative
>>> extension. So I added a "definition checker" module to mmj2, which checks
>>> the rules described in the "Additional rules for definitions" section of
>>> https://us.metamath.org/mpeuni/conventions.html . This program is run
>>> as part of the set.mm CI, so we can be sure that (with a small list of
>>> exceptions) every df-* axiom is actually conservative over the ZFC axioms.
>>>
>>>
>>>> A few irrational feelings I think I'm having:
>>>>
>>>> - I can't parse or understand df-bi trivially. What does it mean?
>>>>
>>>
>>> df-bi is one of the exceptions, because the first rule is that a
>>> definition has to have <-> or = as the root symbol and this doesn't work
>>> when we are defining <-> itself. That doesn't mean that the definition is
>>> not conservative, only that it can't be automatically checked. The way to
>>> read df-bi is that it is defining (𝜑 ↔ 𝜓) to be ¬ ((𝜑 → 𝜓) → ¬ (𝜓 →
>>> 𝜑)) (that is dfbi1), except that the root ↔ in the definition is itself
>>> expanded according to this expression. The justification that this is a
>>> conservative extension is that if you replace all instances of (𝜑 ↔ 𝜓)
>>> with ¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 𝜑)) in df-bi, you get bijust, which is
>>> provable without df-bi. So you can perform that replacement in any
>>> derivation to eliminate the defined symbol.
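>>>
>>> Spelled out: writing E for ¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 𝜑)), df-bi reads
>>> ¬ (((𝜑 ↔ 𝜓) → E) → ¬ (E → (𝜑 ↔ 𝜓))), and performing that replacement
>>> turns it into ¬ ((E → E) → ¬ (E → E)).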
>>>
>>>
>>>> - How do I know that the axioms of the definitions aren't
>>>> introducing contradictions?
>>>>
>>>
>>> As I mentioned, we can reduce the problem to a small list of exceptions
>>> like df-bi, plus the definition schema given by the definition checker's
>>> rules. The schema can be proved by a direct substitution argument like the
>>> one I just gave for df-bi, except that to handle bound variables we also
>>> need the ability to perform alpha-renaming. For example, _V = { x | x = x }
>>> can be used to prove the theorem { x | x = x } = { y | y = y }, so we need
>>> to know that this was already provable before (this is vjust). The ability
>>> to perform alpha-renaming in any formula is a metatheorem of set.mm's axiom
>>> system.
>>>
>>>
>>>> - It seems that definitions aren't supposed to "extend the
>>>> language", but I don't get that sentiment as I read them
>>>>
>>>
>>> I'm not sure I would say that. Definitions definitely are intended to
>>> extend the language, but a conservative extension means that no statements
>>> expressible in the original language become provable as a result of the
>>> introduction of the definition. (In particular, if |- F. was not provable
>>> before, it is still not provable, so the extension is sound.)
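>>>
>>> In symbols: for every formula 𝜑 expressible in the old language, T + D ⊢ 𝜑
>>> implies T ⊢ 𝜑, where T stands for the old axiom system and D for the new
>>> definitional axiom.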
>>>
>>>
>>>> - We say "all you need is propositional and first order logic and
>>>> set theory axioms" but then proceed to introduce a bunch by definitions
>>>>
>>>
>>> Yes, I tend to agree with this sentiment. It's not *really* true that
>>> you don't need definitions. You don't need them *in principle*, but if you
>>> don't like exponentially long proofs (which can make a qualitative
>>> difference if you have ultrafinitist tendencies and don't trust proofs that
>>> cannot be checked on any computer present or future) then you are more or
>>> less forced to accept them.
>>>
>>>
>>>> I'm sure this can't be a new sentiment, since an entire chapter in
>>>> the book was dedicated to it, but I was wondering if:
>>>>
>>>> (a) Does anyone have some explanation posted online that I could
>>>> read to inform myself and perhaps settle my anxiety?
>>>> (b) Is there a version of metamath and/or set.mm that don't rely
>>>> on definitions as axioms?
>>>>
>>>
>>> For the reasons I stated above, a set.mm without definitions would not
>>> be remotely usable. And metamath doesn't really support definitions in any
>>> other way.
>>>
>>> Even with the definition checker, the situation was still somewhat
>>> problematic to me, since the definition checker is now essentially a part
>>> of the "trusted code base" of metamath (unless you want to personally
>>> review all of those thousand definitions-as-axioms in set.mm) and yet
>>> it is not a required part of metamath verifier operation, and it has only
>>> one implementation, which moreover is tailored to set.mm notions. So I
>>> tried to make a version of metamath that includes definitions as part of
>>> the kernel, and the result is Metamath Zero (
>>> https://github.com/digama0/mm0). It unfortunately complicates several
>>> things to take definitions as primitive because you need to have a notion
>>> of bound variable to handle examples like df-v, as well as a conversion
>>> judgment to handle definition unfolding without having to also bake in an
>>> equality operator (since not all metamath databases have one, or only have
>>> one for some sorts and not others). You can read about MM0 in
>>> https://arxiv.org/abs/1910.10703 .
>>>
>>> Another Metamath variant which adds definitions as a primitive part of
>>> the language is GHilbert by Raph Levien, although this diverges a bit more
>>> from metamath and is closer to lambda calculus / LF in terms of its
>>> primitive operators.
>>>