On Sat, Jun 7, 2025 at 8:20 PM samiro.d...@rwth-aachen.de <
samiro.disc...@rwth-aachen.de> wrote:

> In particular, I noticed that Mario Carneiro gave a talk
> <https://us.metamath.org/other.html#natded> on "*Natural Deduction in the
> Metamath Proof Language*", but it doesn't seem to have gained much
> traction; at least I do not see many theorems in Metamath that would be
> crucial for applying proof design based on natural deduction.
>
On the contrary, I would say that the idea behind that talk was quite
influential on set.mm: writing large theorems in "deduction style"
basically became the default, replacing both the more limited usage of
inference style plus the weak deduction theorem and the "theorem style"
closed inferences, which require a lot of context manipulation. Personally,
I consider writing deduction proofs in Metamath / set.mm to be basically a
solved problem, enough that I felt no need to introduce deduction-style
features into MM0 (which would have added complexity to the kernel) and
instead did it all in a simulation layer styled after the set.mm approach.
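
To make "deduction style" concrete: instead of building proofs out of bare
inferences like ax-mp,

ph, ph -> ps |- ps

every step is stated under an antecedent carrying the open hypotheses, so
one uses its deduction form mpd throughout:

ph -> ps, ph -> ( ps -> ch ) |- ph -> ch

and the context ph never has to be discharged until the very end. (These
are the set.mm statements of ax-mp and mpd, modulo display.)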

>
>    - Most rules can be enabled by using a corresponding theorem. For
>    example, p→(q→(p∧q)) (pm3.2 <https://us.metamath.org/mpeuni/pm3.2.html>)
>    can be used, in combination with two modus ponens applications, to
>    apply *conjunction introduction*, i.e. ∧I: Γ∪{p,q}⊢(p∧q). There may
>    be multiple rule-enabling theorems; for example, p→(q→(q∧p)) (pm3.21
>    <https://us.metamath.org/mpeuni/pm3.21.html>) can accomplish the same
>    thing by changing the order of arguments. I provided a table of
>    rule-enabling theorems at nd/NdConverter.h
>    <https://github.com/xamidi/pmGenerator/blob/1.2.2/nd/NdConverter.h?ts=4>,
>    many of which do not have set.mm correspondents: of six ∨E-enabling
>    theorems there is only (jao <https://us.metamath.org/mpeuni/jao.html>),
>    and I didn't find any of (p→⊥)→¬p, ¬p→(p→⊥), p→(¬p→⊥), (¬p→⊥)→p, i.e.
>    no enabling theorems for any of ¬I, ¬E, IP.
>
* If I want to do conjunction introduction, I use jca (which is named iand
in MM0, a name I find rather more intuitive).
* For or elimination I use jaod or jaodan (and again, I find eord to be a
better name).
* For not elimination/introduction I will usually use absurd (~a -> a -> b)
and cases ({a -> b, ~a -> b} |- b). Negation can also be handled in various
other ways via lemmas like contraposition; I would say that using the
natded axiom directly is not always the most convenient option here.
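
For reference (quoting from memory, so modulo the exact hypothesis order),
the deduction-style statements of these in set.mm are

( ph -> ps ), ( ph -> ch ) |- ( ph -> ( ps /\ ch ) )   (jca)
( ph -> ( ps -> ch ) ), ( ph -> ( th -> ch ) ),
( ph -> ( ps \/ th ) ) |- ( ph -> ch )   (jaod)

where the antecedent ph plays the role of the natural deduction context Γ.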

In general, you should check out the table
https://us.metamath.org/mpeuni/natded.html , which contains equivalents for
every intuitionistic logic axiomatization we have seen in the literature.

>
>    - *However*, in natural deduction proofs, there are blocks at certain
>    depths, each starting with an assumption.
>    For example, ∧I: Γ∪{p,q}⊢(p∧q) at depth 3 is actually
>    Γ∪{a→(b→(c→p)),a→(b→(c→q))}⊢a→(b→(c→(p∧q))). Fortunately, such
>    variants can easily be constructed from the zero-depth rule-enabling
>    theorems: for symbols 1 := (ax-1 <https://us.metamath.org/mpeuni/ax-1.html>)
>    and 2 := (ax-2 <https://us.metamath.org/mpeuni/ax-2.html>), the proof
>    σ_mpd(d) defined by σ_mpd(0) := D and σ_mpd(n+1) := (σ_mpd(n))²(D1)ⁿ2
>    can be used to apply modus ponens at depth d. For example, σ_mpd(0) is
>    (ax-mp <https://us.metamath.org/mpeuni/ax-mp.html>), σ_mpd(1) is (mpd
>    <https://us.metamath.org/mpeuni/mpd.html>), and σ_mpd(2) is (mpdd
>    <https://us.metamath.org/mpeuni/mpdd.html>). But the σ_mpd(d) for
>    d ≥ 3 seem to still be missing from Metamath.
>    Every theorem can be shifted one level deeper by adding an antecedent,
>    i.e. by preceding its proof with D1, a single application of (a1i
>    <https://us.metamath.org/mpeuni/a1i.html>).
>    In combination with σ_mpd, rule-enabling theorems can thereby be
>    applied at any depth. I gave detailed constructions of all supported
>    rules at nd/NdConverter.cpp#L538-L769
>    <https://github.com/xamidi/pmGenerator/blob/1.2.2/nd/NdConverter.cpp?ts=4#L538-L769>.
>
As I explain in the talk, you don't actually need depth > 1, since you can
use conjunction instead. That's exactly why this whole setup works
pragmatically and doesn't require an infinite list of axioms. Instead of
stacking things up as

ph -> a -> b -> c -> p,
ph -> a -> b -> c -> q
|- ph -> a -> b -> c -> (p /\ q)

if you use

ph /\ a /\ b /\ c -> p,
ph /\ a /\ b /\ c -> q
|- ph /\ a /\ b /\ c -> (p /\ q)

then this is just a substitution instance of jca again. You use imp/ex to
push assumptions onto the context and pop them off again, but otherwise you
never have to worry about larger depths.
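
Concretely, the push and pop steps are (these are the set.mm statements of
imp and ex):

( ph -> ( ps -> ch ) ) |- ( ( ph /\ ps ) -> ch )   (imp)
( ( ph /\ ps ) -> ch ) |- ( ph -> ( ps -> ch ) )   (ex)

Opening a block with assumption ps turns the context ph into ( ph /\ ps ),
with the assumption itself available as simpr: ( ( ph /\ ps ) -> ps ), and
closing the block again is a single application of ex.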

>
>    - We cannot simply make use of some rule-enabling theorem to translate
>    *conditional introduction*, i.e. →I: from Γ∪{p}⊢q infer Γ⊢(p→q), since
>    it handles the elimination of blocks and depth, which is necessary
>    because Hilbert-style proofs operate on a global scope everywhere.
>    Other rules just call it in order to eliminate a block and then
>    operate on the resulting conditional.
>
I don't understand this sentence, but I think I just answered it: the
equivalent of ->I is ex: {Γ /\ p -> q} |- Γ -> p -> q.
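
And since contexts are conjunctions, →I at greater depth is just iterated
ex; schematically, with two open assumptions p and q:

(Γ /\ p) /\ q -> r
|- (Γ /\ p) -> q -> r   (ex)
|- Γ -> p -> q -> r   (ex)

so there is no need for a new lemma per depth.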

> *Question*
>
> Regarding Metamath's databases, shouldn't there be more elements to
> support proof design based on natural deduction? One cannot prepare all the
> elements, of course, since translations of Fitch-style proofs of unlimited
> depth require infinitely many derivations — at least one per depth per
> rule-enabling theorem.
>
I hope I've already answered this, but just to reiterate: one *can* prepare
all the elements, and set.mm actually does so; this is used in practice and
is incredibly effective at making it unnecessary to play with the context
nearly as much as used to be the case. The only infinite families you
really need are the simp-*r and ad*antlr series, for implementing the
hypothesis rule. (You can still live with a finite axiomatization; you just
need to use ad10antr when you reach the limit.) (PS: I also didn't like
these names, and use an*lr and anw*l respectively in MM0.)
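
For example, fetching a hypothesis from under a few layers of context looks
like (quoting from memory):

( ( ph /\ ps ) -> ps )                        (simpr)
( ( ( ph /\ ps ) /\ ch ) -> ps )              (simplr)
( ( ( ( ph /\ ps ) /\ ch ) /\ th ) -> ps )    (simpllr)

and the ad*ant* series does the same starting from a deduction, e.g.
ad2antrr: ( ph -> ps ) |- ( ( ( ph /\ ch ) /\ th ) -> ps ).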

The way I think of this is that in Metamath there is a low "cost"
associated with manipulating expressions near the root of the expression
tree. Therefore ((a /\ b) /\ c) -> d is optimized for manipulating c and d,
while a and b are harder to get to. So if I have a bunch of deduction
theorems to apply, I don't want everything to be in the form a -> (b -> (c
-> d)), where now a and b are easiest to reach and c and d are farther from
the root, since d is the key thing I want to manipulate. This is why
high-depth intuitionistic equivalents like {a -> b -> c -> p, a -> b -> c
-> q} |- a -> b -> c -> (p /\ q) are a pain and should be avoided.
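
For example, rewriting d in ((a /\ b) /\ c) -> d takes a single syl step,

( ph -> ps ), ( ps -> ch ) |- ( ph -> ch )   (syl)

with ph := ((a /\ b) /\ c), whereas doing the same in a -> (b -> (c -> d))
needs the depth-3 variant (syl8, if I recall the name correctly), and every
additional level of depth needs yet another lemma.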

Mario
