https://arxiv.org/abs/1905.09381:

*Learning to Prove Theorems via Interacting with Proof Assistants*
Kaiyu Yang, Jia Deng
(Submitted on 21 May 2019)

Humans prove theorems by relying on substantial high-level reasoning and 
problem-specific insights. Proof assistants offer a formalism that 
resembles human mathematical reasoning, representing theorems in 
higher-order logic and proofs as high-level tactics. However, human experts 
have to construct proofs manually by entering tactics into the proof 
assistant. In this paper, we study the problem of using machine learning to 
automate the interaction with proof assistants. We construct CoqGym, a 
large-scale dataset and learning environment containing 71K human-written 
proofs from 123 projects developed with the Coq proof assistant. We develop 
ASTactic, a deep learning-based model that generates tactics as programs in 
the form of abstract syntax trees (ASTs). Experiments show that ASTactic 
trained on CoqGym can generate effective tactics and can be used to prove 
new theorems not previously provable by automated methods. Code is 
available at https://github.com/princeton-vl/CoqGym
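The core idea of ASTactic is to generate a tactic not as a flat string but by expanding an abstract syntax tree top-down, following the tactic grammar, with a neural decoder choosing each production. Here is a minimal illustrative sketch of that decoding scheme, using a toy grammar and a plain callback in place of the neural model; the grammar and names are my own, not CoqGym's API.

```python
# Hypothetical sketch: top-down generation of a tactic AST from a toy
# grammar, in the spirit of ASTactic. The grammar below is illustrative;
# the real model follows Coq's tactic grammar and conditions each choice
# on the current proof goal.

TOY_GRAMMAR = {
    "tactic": [["apply", "term"], ["intro"], ["split"]],
    "term": [["H"], ["conj"]],
}

def expand(symbol, choose):
    """Recursively expand `symbol` into an AST.

    `choose(symbol)` picks a production index; in ASTactic this choice
    would come from a learned decoder, here it is a plain callback.
    """
    if symbol not in TOY_GRAMMAR:          # terminal token
        return symbol
    production = TOY_GRAMMAR[symbol][choose(symbol)]
    children = [expand(s, choose) for s in production]
    return children if len(children) > 1 else children[0]

def flatten(node):
    """Serialize the AST back into a tactic string to send to Coq."""
    if isinstance(node, str):
        return node
    return " ".join(flatten(child) for child in node)

# Always picking the first production yields the tactic "apply H".
print(flatten(expand("tactic", lambda s: 0)))
```

Running the sketch with the trivial policy above prints `apply H`; swapping in a different `choose` function produces other tactics from the grammar, which is the search space a learned model would explore.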


developed at Princeton Vision & Learning Lab: http://pvl.cs.princeton.edu/

@philipthrift

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.