I'm actually deliberately not answering the header question, as that's
Norm's place and not mine. But I can give my take on definitions and the
way metamath has historically handled them (following Norm's vision
outlined here).

As you may know, MM0 departs from metamath in its handling of definitions.
The MM definition mechanism is beautifully simple (and I think MM0 misses
the simplicity mark most in this aspect), but declaring that definitions
are "just axioms" doesn't really address the problem, because they aren't
just axioms: every new axiom is another potential unsoundness that requires
its own review. One can argue that the definition checker solves this
problem, but in some ways that's not much better. The mmj2 definition
checker, at least, massively increases the trust base: mmj2 is not a simple
program, and its definition checker leans on a lot of mmj2 code, like the
grammar parser and the automatic proof generation machinery. Definitions
are part of the core language, and they don't deserve to be treated as
second-class citizens.

Baking definitions into the kernel also makes it more obvious how to do
automatic definition unfolding, but this is a double-edged sword, since it
makes one form of the definition preferred over all others. Most MM
definitions you wouldn't actually want to unfold; there is almost always
some "definitional theorem" that should be used in preference to the raw
definition. It would be nice if some automation generated this theorem.

But this isn't what Norm is talking about. I agree with the overall concept
of parsimony of definitions, and I practice it whenever I can, although I
have to admit that definitions are much lower-cost when you get definition
unfolding for free. Achieving something similar in metamath requires a big
step up in either annotations or annotation inference, for things like
definitional theorems and general context management. I'm building
something along these lines as a tactic mode for MM0, but I think the ideas
may be back-portable to MM.

But even if definition-making were easy, there would remain the
responsibility to "complete the graph" of relations between defined
concepts. I think it is irresponsible to create a definition and not relate
it to everything else. That is my metric for proper cohesion of the
library: it ensures that you can painlessly work with multiple parts of the
library at once. For example, if you define a semigroup, then you had
better prove that a monoid is a semigroup, and you should also migrate any
monoid theorems that are really about semigroups so that they are stated
for semigroups. (This came up in the part 18 discussion: there was a
theorem about unital magmas, and we have that theorem, but its assumptions
have been "rounded up" to monoids, because that's the weakest structure
actually in set.mm that satisfies the conditions. If unital magmas are
added, this theorem will be adjusted accordingly, but that alone is not an
argument for adding them.)
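In Metamath terms (hypothetical labels here; set.mm's actual labels and
statement conventions may differ), "completing the graph" for semigroups
would mean, at minimum, something like:

```metamath
$( Every monoid is a semigroup: the inclusion that ties the new
   structure into the existing hierarchy. $)
mndsgrp $p |- ( M e. Mnd -> M e. Sgrp ) $= ... $.

$( And monoid theorems that only use associativity get their
   hypothesis weakened, e.g. (base/operation hypotheses elided): $)
sgrpass $p |- ( ( G e. Sgrp /\ ( X e. B /\ Y e. B /\ Z e. B ) ) ->
                ( ( X .+ Y ) .+ Z ) = ( X .+ ( Y .+ Z ) ) ) $= ... $.
```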

I am a *huge* fan of opaque definitions, though, and I need to bake them a
bit more explicitly into MM1. Dependent type theory has strengthened my
view on this by showing all the bad things that can happen when you can't
hide anything. Luckily for MM0, while there is definition unfolding, it is
not needed for typing, and there is an explicit proof term for it, so
opacity is really just a matter of selectively turning off the MM1 feature
that generates these unfolding proofs.
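For reference, a sketch of what this looks like on the MM0 side (syntax
from memory; see the mm0 spec for the exact grammar): a def in the .mm0
specification file can either expose its body or omit it, in which case the
proof file supplies the body and the definition is opaque to consumers of
the spec.

```mm0
-- A definition whose body is visible in the specification:
def wand (ph ps: wff): wff = $ ~(ph -> ~ps) $;

-- The same declaration with the body omitted: the proof file
-- provides it, so spec-level consumers cannot unfold it.
def wand (ph ps: wff): wff;
```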

On Fri, Jan 24, 2020 at 7:40 AM vvs <[email protected]> wrote:

> *"Rules" for introducing new definitions*
>>
>> Definitions are at the heart of achieving simplicity.  While developing
>> set.mm, two rules of thumb that evolved in my own work were (1) there
>> shouldn't be two definitions that mean almost the same thing and (2)
>> definitions shouldn't be introduced unless there is an actual need in the
>> work at hand.  Maybe I should add (3), don't formally define something that
>> doesn't help proofs and can be stated simply in English in a theorem's
>> comment.
>>
>
> Definitions are opaque in Metamath - that's a restriction which was part
> of the language from the beginning. And that prevents automatic unfolding
> of definitions using definitional equivalence. It's possible to do just
> that using conventions and external tools, but we don't have it at the
> moment.
>
> Mario probably could express more authoritative opinion, especially in
> light of his work on MM0.
>
> N.B. Yes, such technical details affect philosophical goals after all and
> must be taken into account.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Metamath" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/metamath/3fc7e623-798b-4ffc-83c6-e50976e46772%40googlegroups.com
> <https://groups.google.com/d/msgid/metamath/3fc7e623-798b-4ffc-83c6-e50976e46772%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
