Thanks Alexander for your analysis and feedback.

> I would really appreciate it if you continued your work to find more 
> shortenings using OpenAI.

That's definitely the plan :+1:

-stan

On Sat, Mar 28, 2020 at 10:25 PM 'Alexander van der Vekens' via
Metamath <[email protected]> wrote:
>
> Hi Stan,
> I had a look at the proofs - very impressive results! Especially because we 
> ran a global minimization recently, and your method nevertheless found much 
> shorter proofs.
> I had a closer look at the proofs of the theorems I contributed (theorems 
> from my first year working with Metamath, when I was still lacking 
> experience):
>
> * opabbrex: reduction from 12 to 7 essential steps, using ~ssopab2
> * 2eluzge0: reduction from 5 to 3 essential steps, using ~nn0uz
> * elfz0add: reduction from 16 to 10 essential steps, using ~uzaddcl
> * elfzmlbm: reduction from 33 to 16 essential steps, using ~lesub1dd
>
> Observations:
> * the number of essential steps was almost halved
> * in each case, a single theorem (indicated above) accounted for most of the 
> shortening
> * the proofs are still comprehensible - in fact easier to understand, because 
> they are shorter
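(A quick sanity check, not part of the original exchange: plugging the four per-theorem counts listed above into a few lines of plain Python confirms the "almost halved" observation for the totals.)

```python
# Essential-step counts taken from the list of theorems above.
before = {"opabbrex": 12, "2eluzge0": 5, "elfz0add": 16, "elfzmlbm": 33}
after = {"opabbrex": 7, "2eluzge0": 3, "elfz0add": 10, "elfzmlbm": 16}

total_before = sum(before.values())  # 66 essential steps originally
total_after = sum(after.values())    # 36 essential steps after shortening
ratio = total_after / total_before   # remaining fraction of steps

print(total_before, total_after, round(ratio, 2))
```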
>
> I would really appreciate it if you continued your work to find more 
> shortenings using OpenAI.
>
> Alexander
>
> On Thursday, March 26, 2020 at 5:24:02 PM UTC+1, Stanislas Polu wrote:
>>
>> Hi!
>>
>> At OpenAI we've been exploring the use of deep learning for formal
>> math for the past couple of months. We've worked a lot with Metamath,
>> whose simplicity makes it very amenable to deep learning.
>>
>> Today, I submitted a PR containing a sample of proofs that were found
>> by our models and that are substantially shorter than their
>> human-curated counterparts:
>>
>> https://github.com/metamath/set.mm/pull/1547
>>
>> We're very curious to hear from the community on whether this is
>> useful. And if the authors of the original proofs shortened by this
>> PR have qualitative feedback on the proposed versions, we'd of course
>> be keen to hear from them as well.
>>
>> As mentioned in the PR, the models were run on a sample of ~1k proofs
>> from set.mm, so we could probably get 45 times as many shorter proofs
>> if there is interest!
>>
>> Looking forward to your feedback and comments.
>>
>> -stan
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Metamath" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/metamath/1c23b864-6b53-4ede-a2df-23851315e523%40googlegroups.com.

