That paper is interesting; I was wondering a similar thing, about whether it 
would be possible to generate training data for a Metamath prover just by 
making random substitutions. It looks like they have gone one better and 
trained the generator on human proofs to make it produce more reasonable 
statements.
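To make the random-substitution idea concrete, here is a minimal sketch (names and the tiny term grammar are made up for illustration; a real generator would substitute well-formed formulas for Metamath metavariables like ph and ps):

```python
import random

# Hypothetical sketch: generate synthetic statements by plugging randomly
# built terms into the metavariables of a known theorem schema.

TERMS = ["x", "y", "( x + y )", "0"]

def random_term(depth=0):
    """Return a leaf term, or recurse to build a small compound term."""
    if depth >= 2 or random.random() < 0.5:
        return random.choice(TERMS)
    return f"( {random_term(depth + 1)} + {random_term(depth + 1)} )"

def substitute(schema, metavars):
    """Replace each metavariable token in the schema with a random term."""
    assignment = {v: random_term() for v in metavars}
    tokens = [assignment.get(tok, tok) for tok in schema.split()]
    return " ".join(tokens), assignment

# Commutativity schema with metavariables A and B (illustrative only).
stmt, used = substitute("( A + B ) = ( B + A )", {"A", "B"})
print(stmt)
```

Every instance produced this way is true by construction, which is what would make it usable as training data without needing a human proof.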

I'm finding it a bit hard to interpret their results. Does Table 3 mean 
their best result is proving 574/2720 of their test theorems? That would 
be about 21%, up from Holophrasm's 14%.

One thing I wonder about the future of theorem proving: would there be 
value in having a list of results that people want formally proven but 
which are not yet? So, basically, a formalised version of a theorem, with 
hypotheses and the theorem statement but no proof. For both AI and people, 
one challenge is working out which statements are worth proving, and then 
a second challenge is to prove them.
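A minimal sketch of what one such "open problem" entry might look like in Metamath syntax (labels hypothetical; the `?` is the standard placeholder the metamath tool accepts for an incomplete proof):

```
$( Hypothetical open-problem entry: hypotheses and statement, no proof. $)
${
  conj.1 $e |- ( ph -> ps ) $.
  conj   $p |- ( ph -> ( ph /\ ps ) ) $= ? $.
$}
```

A database of such stubs would give both human contributors and AI provers a shared, machine-checkable target list.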

-- 
You received this message because you are subscribed to the Google Groups 
"Metamath" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/metamath/c448460f-6850-403e-b087-b6b355f764b8%40googlegroups.com.
