Hi Corbin,

Welcome to the metamath mailing list. This is very cool, a very strong
first exploration into linear logic in metamath.

On Mon, May 20, 2019 at 1:13 PM Corbin Simpson <[email protected]>
wrote:

> * I tried several different flavors of syntax before picking this one. I
> tried having sequent-style symmetric turnstiles, but that had terrible
> bookkeeping problems. I thought about postfix negation, as is traditional
> for linear logic (it looks like exponentiation with _|_ ) but I had trouble
> writing the rules. (I am stopping myself from writing a Metamath syntax
> compiler.)
>
> * ⅋ is not in ASCII and there appears to be no good replacement. On the
> plus side, maybe this will lead to a demystification of this operator.
>

I think the syntax you've settled on is good, although the notation could
be better. My suggestion would be to use /\ and \/ in place of |"| and |_|
for notational similarity (I'm not sure what these are approximating). I
realize that the connectives behave slightly differently in linear logic
but it seems easier to borrow the usual notation for half of your operators
and invent new notation (like box and diamond) for the other half.

Have you thought about using /\ and \/ and && and || ? Also + and * seem
reasonable choices for the "plus" and "times" connectives, or if you
prefer, (+) and (x) to make circle-plus and circle-times. Also using & for
"with" is a no-brainer. One technique used in set.mm for variation symbols
is to have a marker like ~ or _ next to it; so you might try options like
"~&" or "^&" for par. Or just use \/ since that's not otherwise being used
and par is at least disjunctive-ish. Or just pick a random other symbol
like "%". It's not that important in the long run, people will get used to
whatever you use.

Obviously I've made an inconsistent set of suggestions; pick your favorite
subset. Below I will use (+) and (x) and & and \/.
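For concreteness, under those choices the constant declarations might look
something like this (just a sketch; the exact symbol set and the variable
names are of course up to you):

$c ( ) |- wff ~ \/ & (+) (x) -o ! ? 0 1 T F $.
$v ph ps ch th $.
wph $f wff ph $.
wps $f wff ps $.
wch $f wff ch $.
wth $f wff th $.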


> * As a category-theoretic desire, I'd like to be able to enter
> isomorphisms as axioms without incurring a double-entry penalty. I'd also
> like to not have to define both introduction and elimination axioms for new
> syntax built on old connectives.
>
> * Relatedly, I want to have to only express a single associative law for
> each connective. Digging down into parens is unpleasant. I was considering
> weakening the turnstile at the head of those axioms to `wff` or something
> along those lines, but I'm not yet thinking enough like the Metamath parser
> to be able to predict what will go wrong with that. (As a language
> designer, this is screaming to me "you're putting too much into the
> syntax", but aren't we putting *everything* into the syntax?)
>

One thing I thought while reading your file is "you desperately need iff".
There is an alarming number of axioms in the file, maybe more than 50% of
the statements. I'm not completely up on linear logic, but you should be
able to define a connective that means iff; I think ( A <-> B ) is
( ( A -o B ) & ( B -o A ) ). You
should introduce this early, give only the axioms you need to characterize
lolly and iff, and then use iff to define the other axioms like
distributivity of various kinds.
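For concreteness, one way to set that up (hypothetical labels; I've used a
pair of introduction/elimination rules rather than a single definitional
axiom, to avoid defining <-> in terms of itself):

wiff $a wff ( ph <-> ps ) $.
iffi $e |- ( ( ph -o ps ) & ( ps -o ph ) ) $a |- ( ph <-> ps ) $.
iffe $e |- ( ph <-> ps ) $a |- ( ( ph -o ps ) & ( ps -o ph ) ) $.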

Duality should allow you to cut down on most of the axioms. The postfix _|_
operator is not a negation in the logic, but rather a defined operation on
expressions. To encode this in metamath we need a judgment ( ph ~= ps )
expressing "ps is the dual of ph", for example:

$c ~= $.
$a wff ( ph ~= ps ) $.   $( This is slightly abusive, as we don't really
want ~= to be an operator, just a top-level judgment; but the alternative
requires more sorts. $)

$e |- ( ph ~= ps ) $a |- ( ps ~= ph ) $.
$a |- ( 1 ~= F ) $.
$a |- ( 0 ~= T ) $.
$e |- ( ph ~= ch ) $e |- ( ps ~= th ) $a |- ( ( ph (x) ps ) ~= ( ch & th ) ) $.
$e |- ( ph ~= ch ) $e |- ( ps ~= th ) $a |- ( ( ph (+) ps ) ~= ( ch \/ th ) ) $.
$e |- ( ph ~= ps ) $a |- ( ! ph ~= ? ps ) $.

We still need a negation in the logic to denote the negation of an atom:

$c ~ $.
$a wff ~ ph $.
$a |- ( ph ~= ~ ph ) $.
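As a quick check that these rules compose, instantiating the (x)/& rule
with the unit and atom axioms gives, for example:

1. |- ( 1 ~= F )                          ( axiom )
2. |- ( ph ~= ~ ph )                      ( axiom )
3. |- ( ( 1 (x) ph ) ~= ( F & ~ ph ) )    ( (x)/& rule on 1 and 2 )

which is the expected dual of a simple tensor.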

You might consider this not parsimonious, since ( ph ~= ps ) is provably
equivalent to ( ~ ph <-> ps ) once you have the axioms to say that. But you
have to introduce either equality or the dual judgment early on in order to
define the negation that is used in the axioms.

Linear logic lends itself well to a one-sided sequent calculus, but since
we have to model the comma as an operator on wffs it makes sense to use par
since the operator already exists. So then it just boils down to writing
the axioms of LL where par substitutes for the context disjunction. Since
the context disjunction is commutative and associative we need that for par:

$e |- ( ph \/ ps ) $a |- ( ps \/ ph ) $.
$e |- ( ( ph \/ ps ) \/ ch ) $a |- ( ph \/ ( ps \/ ch ) ) $.
$e |- ( ph \/ ( ps \/ ch ) ) $a |- ( ( ph \/ ps ) \/ ch ) $.

These on their own aren't good enough to put any disjunct wherever we want,
though, as there is no way to apply commutativity or associativity to a
subterm of the context. In classical logic, we would deduce
( ( ps /\ ph ) /\ ch ) from ( ( ph /\ ps ) /\ ch ) by using modus ponens to
reduce the goal to ( ( ( ph /\ ps ) /\ ch ) -> ( ( ps /\ ph ) /\ ch ) ),
and then use an implication lemma to reduce that to
( ( ph /\ ps ) -> ( ps /\ ph ) ), which is an instance of commutativity.
Having equality here would again be very nice; we know it's coming, but
maybe we can't wait, because we're still setting up the initial proof
rules.

Let's give in to temptation and add <-> as a primitive connective to the
logic. This means we don't need ~= anymore, and we have the following:

$c <-> $.
$a wff ( ph <-> ps ) $.
$a |- ( ph <-> ph ) $.
$e |- ( ph <-> ps ) $e |- ( ch <-> ps ) $a |- ( ph <-> ch ) $.
$e |- ph $e |- ( ph <-> ps ) $a |- ps $.
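A quick sanity check: reading the first rule as reflexivity and the second
as a Euclidean rule, symmetry of <-> falls out in one step:

1. |- ( ph <-> ps )    ( given )
2. |- ( ps <-> ps )    ( reflexivity )
3. |- ( ps <-> ph )    ( Euclidean rule on 2 and 1 )

and transitivity follows from symmetry plus one more Euclidean step.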

This should eliminate the need for stating most things as inference rules.
Let's get back to contexts:

$a |- ( ( ph \/ ps ) <-> ( ps \/ ph ) ) $.
$a |- ( ( ( ph \/ ps ) \/ ch ) <-> ( ph \/ ( ps \/ ch ) ) ) $.
$e |- ( ph <-> ps ) $e |- ( ch <-> th ) $a |- ( ( ph \/ ch ) <-> ( ps \/ th ) ) $.

Nice, only one of them is an inference rule now. It would be nice to have
the last one as a closed form lemma too but that needs the lolly and we
aren't there yet (and there are also linearity worries there). In the
spirit of parsimony, we can define half of the connectives in terms of
negation. (Let's say that ~, \/, &, 1, T, ! are primitive and the others
are defined.) Let's put in the other equality axioms too:

$e |- ( ph <-> ps ) $e |- ( ch <-> th ) $a |- ( ( ph & ch ) <-> ( ps & th ) ) $.
$e |- ( ph <-> ps ) $a |- ( ~ ph <-> ~ ps ) $.
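With congruence in hand, the subterm problem from the classical-logic
example dissolves; for instance, to commute inside the left disjunct:

1. |- ( ( ph \/ ps ) <-> ( ps \/ ph ) )                       ( commutativity )
2. |- ( ch <-> ch )                                           ( reflexivity )
3. |- ( ( ( ph \/ ps ) \/ ch ) <-> ( ( ps \/ ph ) \/ ch ) )   ( \/ congruence on 1 and 2 )

and the modus ponens rule for <-> transports any proof across step 3.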

We should also have a congruence axiom for equality itself, but we don't
really need to use <-> as a proper wff, and once we have the full axiom
system it will be provable. The context disjunction now works correctly, so
we can get on with the real axioms:

$a |- ( ~ ~ ph <-> ph ) $.
$a |- ( ~ ph \/ ph ) $.
$e |- ( ph \/ ps ) $e |- ( ~ ps \/ ch ) $a |- ( ph \/ ch ) $.
$a |- 1 $.
$a |- ( ph \/ T ) $.
$e |- ( ph \/ ps ) $e |- ( ph \/ ch ) $a |- ( ph \/ ( ps & ch ) ) $.
$e |- ( ~ ph \/ ps ) $e |- ( ~ ps \/ ph ) $a |- ( ph <-> ps ) $.
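As a tiny example of the additive behavior of the & rule, note that the
context ph is shared between the two hypotheses rather than split:

1. |- ( ph \/ T )          ( axiom )
2. |- ( ph \/ T )          ( axiom )
3. |- ( ph \/ ( T & T ) )  ( & rule on 1 and 2 )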

The only rule we need for negation is DNE because of our choice to define
the other connectives, but we add a rule for proving an equality because
otherwise we won't be able to characterize it. We define the dual
connectives:

$a |- ( ( ph (+) ps ) <-> ~ ( ~ ph & ~ ps ) ) $.
$a |- ( ( ph (x) ps ) <-> ~ ( ~ ph \/ ~ ps ) ) $.
$a |- ( ? ph <-> ~ ! ~ ph ) $.
$a |- ( 0 <-> ~ T ) $.
$a |- ( F <-> ~ 1 ) $.
$a |- ( ( ph -o ps ) <-> ( ~ ph \/ ps ) ) $.

The cost of having these as definitions is that we still need the rules
that were associated to them, but now they are rules for negated forms of
the original connectives. At least we have the lolly now so these look good.

$e |- ( ph -o ch ) $e |- ( ps -o th ) $a |- ( ( ph \/ ps ) -o ( ch \/ th ) ) $.
$a |- ( ( ph & ps ) -o ph ) $.
$a |- ( ( ph & ps ) -o ps ) $.
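As a sanity check, the identity |- ( ph -o ph ) is now derivable by
unfolding the lolly:

1. |- ( ( ph -o ph ) <-> ( ~ ph \/ ph ) )   ( definition of -o )
2. |- ( ( ~ ph \/ ph ) <-> ( ph -o ph ) )   ( symmetry of <-> )
3. |- ( ~ ph \/ ph )                        ( axiom )
4. |- ( ph -o ph )                          ( <-> modus ponens on 3 and 2 )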

The Wikipedia rules for the exponentials are a bit weird, since they
involve both of them at once. Possibly this can be optimized.

$e |- ph $a |- ( ph \/ ? ps ) $.
$e |- ( ph \/ ( ? ps \/ ? ps ) ) $a |- ( ph \/ ? ps ) $.
$e |- ( ? ph \/ ps ) $a |- ( ? ph \/ ! ps ) $.
$e |- ( ph \/ ps ) $a |- ( ph \/ ? ps ) $.
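As a two-step sample of how these combine, note that the ? context on the
left is exactly what licenses introducing the !:

1. |- ( ? ph \/ T )     ( the T axiom with ? ph for ph )
2. |- ( ? ph \/ ! T )   ( promotion on 1 )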

From these, I think all the axioms in your axiomatization are derivable.

> * set.mm rushes to set up implication and then to move deduction into the
> object language, where we have more powerful deductive tools. However,
> implication is not as primitive in (classical) linear logic, and is
> honestly more pleasantly defined in terms of the multiplicative
> connectives; this not only leaves a gap before the introduction of
> implication, but cannot solve all upcoming issues since the additive
> connectives do not interact well with implication.
>

The natural deduction style in set.mm doesn't work as well in linear logic
because of the need to split the context, but the natural analogue is to
have a disjunct ( ph \/ ps ) where ps is the thing you are trying to prove
and ph is the context. This basically says that ~ ph represents the
"resources" used to prove ps. With the lolly in place it's about the same.
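For example, a deduction form of simp (hypothetical) comes out of the cut
rule once an instance of the ( ( ph & ps ) -o ph ) axiom is unfolded:

1. |- ( ph \/ ( ps & ch ) )     ( hypothesis: ps & ch in context ph )
2. |- ( ~ ( ps & ch ) \/ ps )   ( ( ( ps & ch ) -o ps ) unfolded )
3. |- ( ph \/ ps )              ( cut on 1 and 2 )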

> Further, in this file so far, it seems like I can apply ~ ax-i or ~ id in
> order to convert the Metamath-level rules into lollipops. Maybe this isn't
> a big deal, just a matter of ergonomics and my own inexperience.
>

I might be missing something about how linear logic or this axiomatization
works, but I am suspicious of this claim. Suppose I have a rule $e |- foo
$a |- bar $. for some particular formulas foo and bar. This says that if
foo is derivable then so is bar, but this does not imply ( foo -o bar ). In
fact this is even more evident in LL because the lolly implies some kind of
resource consumption, but I can pick foo and bar so that doesn't work. For
example, ( 1 \/ 1 ) is not provable because there is no way to get rid of
that second resource, so ( F -o 1 ), which unfolds to ( ~ F \/ 1 ), that is
( 1 \/ 1 ), is similarly not derivable. But a rule $e |- F $a |- 1 $. is
conservative, because |- F is not provable, so the rule can never be used.
So if I add the axiom $e |- F $a |- 1 $. there is no way you can derive
$p |- ( F -o 1 ) $.


> * Nomenclature is difficult, as always.
>

Try to use the set.mm style of giving label fragments to each symbol, and
string those together to make your axioms. For example \/ "par" & "wi" (+)
"pl" (x) "mu" and then you get things like ax-wi and df-pl and so on. Or if
there is a similar set.mm theorem, you could name it after that; for
example I would be inclined to call ( ( ph & ps ) -o ph ) ax-simpl. The
most important thing is to find a convention and stick to it as rigidly as
you can, at least until you get comfortable with it.


> * This was a little hard, but mostly what is hard is looking at nearly
> 600KLOC of set.mm and trying to imagine how much of that I'll be able to
> get to. I know that there's a lot of intermediate stuff that I can pick and
> choose to prove, depending on where I'm going, but I haven't even reached
> the integers.
>

As FL says, this isn't really in the same arena as set.mm, it's more of a
propositional logic extension. If you want to do set theory you should work
out how first order logic works in this setting, and then also figure out
what you want to achieve with this. It might be personal bias, but I'm not
really sure what LL has to offer set theory. I would be more interested in
using LL for type theory and program logics, which is where it is usually
seen.

> * I need to replace 0 and 1, probably, if I *do* reach the integers. Z and
> O maybe?
>

That's certainly planning too far ahead. I would actually recommend
something more like 0. and 1. , and the dots would go on the primitive wffs
not the numbers. This already has precedent in set.mm; this is how the
lattice top and bottom functions are defined (
http://us2.metamath.org/mpeuni/df-p0.html).


> * I'm not a mathematician. This is probably a problem, or at least
> problematic.
>

Eh, it's overrated.


> * The proof assistant is alright. I'm able to integrate it with my editor
> somewhat, and to prove one or two proofs at a time, slowly.
>

What is your editor? Which prover interface are you using? It might
surprise you to learn that there is more than one; there is a whole
ecosystem of different verifiers, so if your setup isn't working for you,
try a different one. In particular I recommend mmj2, which I have spent
some time on (as a user and as a developer) and find to be quite usable.


> * Proof verification is so fast that it changes how I'm thinking about my
> workflow compared to other languages like Coq.
>

I'm glad someone noticed. I started with metamath and have since branched
out to more mainstream theorem provers, and I can't say how much I feel
this every day. When verification is literally 4 orders of magnitude faster
with an alternate strategy, you have to conclude that something has gone
wrong in the community process for this to be considered okay.

> A lot of people seem to think that performance is about doing the same
> thing, just doing it faster, and that is not true. That is not what
> performance is all about. If you can do something really fast, really
> well, people will start using it differently. -Linus Torvalds

I can tell you that being able to check the entire math library in a few
seconds means that you can do it cheaply on startup and always have an
accurate view of what theorems are working and which need work. This is
just not possible in systems like Coq and Lean where building the library
means sending it off to CI and not hearing back for a few hours (and
burning a lot of CPU cycles the whole time). Mainstream theorem provers
have a lot to learn from metamath, and this is a big one, I think. I want
to scale MM to industrial grade projects to take advantage of this, because
that's when people notice.

Mario Carneiro
