On Fri, Aug 9, 2019 at 6:39 PM Noam Tene <noam.t...@gmail.com> wrote:

> A single mmj2 "unify" automatically created 4 oveq1i proof steps to
> generate the proof for this larger expression.
> I am impressed that mmj2 can do this manipulation.
> This feature can be very useful for generating parts of long proofs.
> The proof above generated 4 steps for a tree depth of 6.  Longer
> expressions could easily require more of these intermediate steps.
>

By the way, this magic was brought to you by the mmj2 macro system,
specifically "transformations.js" in the macro folder of mmj2. The way it
works is simply to apply any theorem that unifies with the goal and
produces no additional metavariables, except for a big blacklist of
theorems that are not reversible (e.g. theorems that drop hypotheses). This
turns out to be really effective at proving equality theorems (provided
both sides of the equality are known), as well as closure theorems like

( ( 1 + 2 ) x. ( A - B ) ) e. RR

but it sometimes applies some bad theorems anyway, and you have to manually
delete the bad steps. You can use the macro system to write your own
tactics, although no one has attempted it except me so far, so the
documentation is probably pretty bad.
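
For the curious, the core of that strategy can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual transformations.js code: the term representation, `match`, `metavars`, and `applicable` helpers are all invented for this sketch.

```python
# Hypothetical sketch of the mmj2 macro's search strategy: apply any
# theorem whose conclusion matches the goal without leaving unassigned
# metavariables in the hypotheses, skipping a blacklist of
# non-reversible theorems. Terms are nested tuples; metavariables are
# strings starting with '?'.

def match(pat, term, subst):
    """One-way matching: instantiate metavariables in pat to equal term."""
    if isinstance(pat, str) and pat.startswith('?'):
        if pat in subst:
            return subst if subst[pat] == term else None
        s = dict(subst)
        s[pat] = term
        return s
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pat == term else None

def metavars(t):
    """Collect the metavariables occurring in a term."""
    if isinstance(t, str):
        return {t} if t.startswith('?') else set()
    return set().union(*(metavars(x) for x in t)) if t else set()

def applicable(theorems, goal, blacklist):
    """Yield (name, substitution) for theorems that unify with the goal
    and introduce no additional metavariables in their hypotheses."""
    for name, hyps, concl in theorems:
        if name in blacklist:
            continue
        s = match(concl, goal, {})
        if s is None:
            continue
        # "no additional metavariables": every hypothesis metavariable
        # must already be determined by matching the conclusion
        if all(metavars(h) <= set(s) for h in hyps):
            yield name, s

# Toy database: an oveq1i-style congruence step,
# hyp ?a = ?b, conclusion ( ?a + ?c ) = ( ?b + ?c )
thms = [('oveq1i', [('=', '?a', '?b')],
         ('=', ('+', '?a', '?c'), ('+', '?b', '?c')))]
goal = ('=', ('+', '1', '2'), ('+', 'x', '2'))
print(list(applicable(thms, goal, set())))
```

A real implementation also has to handle disjoint variable conditions and the distinction between wff, set, and class variables, which this toy version ignores.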


> I still wonder if there is some way to use something like dfsbcq to:
>     * reduce proof length (number of steps) and size (number of symbols
> displayed)
>     * make proofs more readable.
>     * Use metamath's built in *substitution* to eliminate the need for
> multiple versions of the Algebra helpers
>     * Reduce the size of set.mm by eliminating multiple versions of
> useful patterns.
>     * Facilitate proofs by variable substitution.
>
> Can we use Metamath substitution to unify both the antecedent and the
> result as instances of the expression:
>
> A = ( ( ( ( x x. D ) + E ) x. F ) ^ G )
>
> Can we build that capability into a proof?
>

Not without changing the metamath spec. There is a rigid specification of
what metamath proofs can and can't do, which is used as the basis for the
implementation of the several dozen metamath verifiers, and the stability
of this spec is seen as a virtue, but it implies some fundamental
limitations on what the proof format itself will support. That said, you
can still obtain most of your goals without changing the spec by adding
tooling on top.

To reduce the number of proof steps and symbols displayed, you can change
the tool you are using to display the proof, so that it ignores some steps
or hides some symbols. It already does that today; neither the metamath web
site nor mmj2 display syntax steps, but they nevertheless appear in the
proof that is stored in the .mm file. Compare this proof of id1, displayed
using metamath.exe:

MM> sh p id1/all
 1     wph=wph   $f wff ph
 2       wph=wph   $f wff ph
 3       wps=wph   $f wff ph
 4   4:wps=wi    $a wff ( ph -> ph )
 5 5:wph=wi    $a wff ( ph -> ( ph -> ph ) )
 6   wps=4     $a wff ( ph -> ph )
 7     wph=wph   $f wff ph
 8     wps=wph   $f wff ph
 9   min=ax-1  $a |- ( ph -> ( ph -> ph ) )
10       wph=wph   $f wff ph
11         wph=4     $a wff ( ph -> ph )
12         wps=wph   $f wff ph
13       wps=wi    $a wff ( ( ph -> ph ) -> ph )
14     wph=wi    $a wff ( ph -> ( ( ph -> ph ) -> ph ) )
15       wph=5     $a wff ( ph -> ( ph -> ph ) )
16       wps=4     $a wff ( ph -> ph )
17     wps=wi    $a wff ( ( ph -> ( ph -> ph ) ) -> ( ph -> ph ) )
18       wph=wph   $f wff ph
19       wps=4     $a wff ( ph -> ph )
20     min=ax-1  $a |- ( ph -> ( ( ph -> ph ) -> ph ) )
21       wph=wph   $f wff ph
22       wps=4     $a wff ( ph -> ph )
23       wch=wph   $f wff ph
24     maj=ax-2  $a |- ( ( ph -> ( ( ph -> ph ) -> ph ) ) -> ( ( ph -> ( ph -> ph ) ) -> ( ph -> ph ) ) )
25   maj=ax-mp $a |- ( ( ph -> ( ph -> ph ) ) -> ( ph -> ph ) )
26 id1=ax-mp $a |- ( ph -> ph )

to the same proof without syntax steps:

MM> sh p id1
 9   min=ax-1  $a |- ( ph -> ( ph -> ph ) )
20     min=ax-1  $a |- ( ph -> ( ( ph -> ph ) -> ph ) )
24     maj=ax-2  $a |- ( ( ph -> ( ( ph -> ph ) -> ph ) ) -> ( ( ph -> ( ph -> ph ) ) -> ( ph -> ph ) ) )
25   maj=ax-mp $a |- ( ( ph -> ( ph -> ph ) ) -> ( ph -> ph ) )
26 id1=ax-mp $a |- ( ph -> ph )

(Actually even this form reveals the missing steps, since you can see that
the step numbers are not sequential. They are renumbered to be sequential
in the web site version http://us2.metamath.org/mpeuni/id1.html .)

We've talked a lot about how to identify and hide "inessential" steps from
the proof display, but I've never seen anything fully satisfactory. I would
just recommend learning how to skim a metamath proof display for the
relevant information.

>     * Use metamath's built in *substitution* to eliminate the need for
> multiple versions of the Algebra helpers
>

We already make extensive use of metamath's substitution to minimize proof
work. For example, if we have a function like

df-foo $a |- foo = ( x e. CC , y e. CC |-> ( ( x + y ) x. ( x x. y ) ) )

then the very first theorem we will usually prove is the value of this
given some (class) arguments:

fooval $p |- ( ( A e. CC /\ B e. CC ) -> ( A foo B ) = ( ( A + B ) x. ( A x. B ) ) )

This may look like a silly thing since all I've done is use capital letters
instead of lowercase, but it performs an important function. Inside the
proof of this theorem will be a proof of

|- ( ( x = A /\ y = B ) -> ( ( x + y ) x. ( x x. y ) ) = ( ( A + B ) x. ( A x. B ) ) )

which will apply oveq12d and so on. The important part is that this is
"doing the equality proof so you don't have to", in the sense that without
the theorem I would have to prove this equality theorem for every choice of
input to foo, while with the theorem it's only one step, and I can easily
instantiate it to get

|- ( ( 1 e. CC /\ A e. CC ) -> ( 1 foo A ) = ( ( 1 + A ) x. ( 1 x. A ) ) )
|- ( ( ( 2 + 2 ) e. CC /\ 4 e. CC ) -> ( ( 2 + 2 ) foo 4 ) = ( ( ( 2 + 2 ) + 4 ) x. ( ( 2 + 2 ) x. 4 ) ) )

and so on, with one application of fooval each. If fooval were stated as

fooval2 $p |- ( ( x e. CC /\ y e. CC ) -> ( x foo y ) = ( ( x + y ) x. ( x x. y ) ) )

instead, then although it is logically equivalent to fooval, you wouldn't
be able to leverage metamath's built-in substitution to apply it to the
case x = ( 2 + 2 ), y = 4, because set variables can only be substituted
for other set variables. You would have to prove the substitution inside
the logic, which means lots of oveq12d type steps.
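
To make the contrast concrete, here is a toy Python illustration of the one-step substitution that fooval permits. This is purely illustrative (real metamath verifiers substitute into parse trees, not flat strings), and the `subst` helper is invented for this sketch.

```python
# Toy illustration: class metavariables in a theorem statement can be
# replaced by arbitrary class expressions in a single metalevel
# substitution pass, with no equality reasoning inside the logic.
# (Real verifiers substitute into parse trees, not flat strings.)

def subst(template, assignment):
    """Replace whole tokens of a space-delimited Metamath statement."""
    return ' '.join(assignment.get(tok, tok) for tok in template.split())

fooval = ('( ( A e. CC /\\ B e. CC ) -> '
          '( A foo B ) = ( ( A + B ) x. ( A x. B ) ) )')

# One application of fooval with A := ( 2 + 2 ), B := 4:
inst = subst(fooval, {'A': '( 2 + 2 )', 'B': '4'})
print(inst)
```

The printed result is exactly the second instantiation above; no oveq12d-style steps are needed because the substitution happens at the metalevel. With fooval2 the tokens x and y could only be swapped for other set variables, never for ( 2 + 2 ).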

In summary, a metamath theorem encodes a "template" which will match
against the goal modulo substitution to the class and wff variables of the
statement. A fixed theorem cannot do anything more complicated than that,
which means in particular that you can't "pre-substitute" a term into an
expression - you have to give that subgoal back to the user. This is why
theorems like nnind (http://us2.metamath.org/mpeuni/nnind.html) have a
bunch of hypotheses like ( x = 1 -> ( ph <-> ps ) ), because the theorem is
trying to make it convenient to do the substitution (here ps is meant to
denote [ 1 / x ] ph), but the best it can do is set up the required
equality subproof, because it doesn't yet know what the expressions are so
it can't apply the right equality theorem. A theorem cannot conditionally
apply a theorem, and it can't do proofs by recursion on the structure of
the input expression.

>> You can't prove all of David's theorems with one proof. In fact, you can
>> almost never apply one proof (sequence of theorem applications) to prove
>> multiple theorems, unless one is a substitution instance of another, and we
>> generally try to avoid that in the library.
>>
>
> To use unification in a proof we would need the following steps:
>
>    1. template:   $e  A template to be unified in which a set variable
>    appears twice
>    2. substitute:  $e  Equality of two Class expression to be substituted
>    into different instances of the set variable.
>    3. instantiate: $p  theorem: the template unified with the two
>    substitutions.
>
> For relatively shallow expressions like David's Algebra Helpers, it may be
> easy to generate short proofs that do not use substitutions.
> As the tree depth of the expression grows, using substitution will make
> more sense and have more benefits.
>

Hopefully I've explained why this cannot be done in the language of
metamath for fundamental logical reasons. That of course does not mean that
it can't be done outside the language, in the tooling, and this is exactly
what mmj2 does. The choice to spam text to the screen on success as mmj2
does is an implementation detail; my new proof assistant for Metamath Zero
doesn't display autogenerated steps, which makes it significantly less
verbose (for better or worse).


> Metamath allows us to use unification on lemmas.
> I understand why we want to avoid having too many lemmas.
> Using lemmas (instead of a sequence of proof steps) inflates the library
> size and increases maintenance overhead.
>
> David's Algebra Helpers (and my more esoteric examples) can already be
> added as "lemmas" in personal toolboxes.
> I prefer using 3 proof steps (template, substitute, instantiate) to
> creating a new lemma for every template we might need.
> Is there a way to do that?
>

This procedure can be implemented as a macro or tactic in a proof
assistant, which is in some ways better than a lemma (they are more
flexible) and in some ways worse (the tactic itself is not stored, just the
result, so if you come back to the theorem later there will be no evidence
that you used a program to generate the proof).


>> The entropy of the statement is comparable to the entropy of the sequence
>> "eqtri oveq1i oveq1i addcomi" that was used to prove it, so you would need
>> a huge number of variations on the theorem and there probably wouldn't be
>> enough use of any individual such theorem to be worth the work of writing
>> the whole set.
>>
> That is exactly why I want to use substitution:  I want to avoid having to
> add every possibly useful variation as a lemma.
> It will also save David the trouble of moving his helpers into the "main
> body".
>
> In the long run, we may want to look for repeating patterns in set.mm
> proofs and see if we can extract "lemmas" that appear often enough that
> stating them as "theorems" will actually reduce the overall database
> size.
> For now, I just want to avoid having to write a lemma for every math rule
> I ever learned in high school and having to wonder how many other people
> might or might not want to use it.
>

This project is possible, but you need to have a more rigid definition of
"repeating pattern" for it to work. The kind of repeating pattern that can
be extracted to a lemma is the kind where you apply the same sequence of
theorems in the same order. For example, suppose you apply mpbird with
divmuld as the second hypothesis, that is, you have the proof structure
"mpbird ?a (divmuld ?b ?c ?d ?e)". You can imagine searching for this tree
fragment in the library (unifying on the leaves and looking in subterms).
You can also work out by unification what this proof fragment is a proof
of, namely:

$e |- ( ph -> ( A / B ) = C )
$e |- ( ph -> A e. CC )
$e |- ( ph -> B e. CC )
$e |- ( ph -> C e. CC )
$e |- ( ph -> B =/= 0 )
$p |- ( ph -> ( B x. C ) = A )

You can generate this statement entirely by looking at the proof and naming
the metavariables remaining after unification. This statement is proven by
"mpbird h1 (divmuld h2 h3 h4 h5)" by construction, and if we call it
"mvllmuld2" (the naming is not automatic, unfortunately), then we can
replace all instances of "mpbird ?a (divmuld ?b ?c ?d ?e)" by "mvllmuld2 ?a
?b ?c ?d ?e". This decreases the number of steps used in all these proofs,
as well as possibly the number of theorems referenced, and if it is used
enough, this may cause a net decrease in the total size of set.mm.

There are some small variations on this pattern, but this is
essentially how you can do lemma compression in an MM database. It's an
explicitly solvable problem exactly because we know that theorems are
simple unification matches and don't do anything fancy like analyze their
arguments.
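
The fragment search itself can be sketched in Python. This is a hypothetical illustration of the counting step only (the statement-generation step would use unification as described above); the `count_fragment` helper and the toy proof trees are invented for this sketch.

```python
# Hypothetical sketch of the fragment search described above: scan proof
# trees for a fixed two-theorem pattern such as "mpbird ?a (divmuld ...)"
# and count how often it occurs, to estimate the payoff of extracting a
# lemma like mvllmuld2. Proofs are tuples (theorem_name, arg1, arg2, ...).

def count_fragment(proof, outer, inner, pos):
    """Count subtrees of the form (outer, ..., (inner, ...), ...) where
    the inner theorem application sits at argument index pos."""
    n = 0
    if isinstance(proof, tuple):
        if proof[0] == outer and len(proof) > pos + 1:
            arg = proof[pos + 1]
            if isinstance(arg, tuple) and arg[0] == inner:
                n += 1
        # recurse into compound arguments
        n += sum(count_fragment(a, outer, inner, pos)
                 for a in proof[1:] if isinstance(a, tuple))
    return n

# Toy proof trees; 'h1'..'h6' stand for hypothesis steps.
proofs = [
    ('mpbird', 'h1', ('divmuld', 'h2', 'h3', 'h4', 'h5')),
    ('eqtrd', ('mpbird', 'h1', ('divmuld', 'h2', 'h3', 'h4', 'h5')), 'h6'),
    ('mpbird', 'h1', ('breqtrd', 'h2', 'h3')),  # different 2nd hyp: no match
]
total = sum(count_fragment(p, 'mpbird', 'divmuld', 1) for p in proofs)
print(total)  # each match could be replaced by one mvllmuld2 application
```

Each match found this way saves one proof step when rewritten, so comparing the total count against the size of the new lemma's statement and proof gives a rough estimate of whether the extraction shrinks the database.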

Personally, I think that "every lemma you learned in high school" is
already captured in the existing set.mm algebra theorems. But because of
the combinatorial explosion you get by combining lemmas in different ways,
we want to have a relatively small set of theorems that all play well
together and can be powerfully combined, not necessarily an exhaustive
enumeration of all ways to combine symbols. In this case, we are combining
the relatively complete prop calc library (mpbird) with the deduction-form
algebra library (divmuld). If you think of theorems as words, then you are
able to express most atomic operations in one word, and the way it applies
to your context in two. With this perspective, metamath is actually not
that verbose at all, and creating more compound words like mvllmuld2 is of
dubious advantage.

Mario

-- 
You received this message because you are subscribed to the Google Groups 
"Metamath" group.