On Wed, Mar 24, 2021 at 6:35 AM James Bowery <[email protected]> wrote:
>
> Agreed.  I came at this as someone doing practical work with a CAS, frustrated
> with how stupid it seemed, but you're right that I should have dropped the
> CAS references and gone for the jugular -- particularly given my management
> of the Laws of Form mailing list arising from my association with The
> Boundary Institute's purpose (which Faggin funded).
>
> Treating "a full-on Automated Theorem Proving system" as a Go (or Chess, 
> or...) player makes sense to me.
>
> Does it to you?

So far, though, deep RL has worked much less well for ATP than for these games.

Check out the AITP conferences (run each year by my son's thesis
advisor Josef Urban) to get a sense of the state of the art in
AI-meets-theorem-proving.  Papers and videos from the last few AITPs
are online, e.g. AITP 2020:

http://aitp-conference.org/2020/

Slides 81-88 of my talk at AITP in 2019

https://goertzel.org/wp-content/uploads/2019/04/AITP-19-keynote-v1.pdf

discussed the issue of identifying which theorems are "interesting."

One goal of this is: if you can tractably make a looooong list of
interesting conjectures, then you can use it to guide current ATPs to
create a very large corpus of theorems and proofs (much larger than
current human math).  This large corpus can be used to train RL
systems for controlling ATPs.  This would be comparable to what's
been done with deep RL for chess and Go, where a large number of
simulated games have been used to train the RL models.
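To make the pipeline concrete, here is a minimal Python sketch of the corpus-building step: rank conjectures by an interestingness score, hand each one to an ATP, and keep the (conjecture, proof) pairs that succeed as training data for a later RL stage.  All names here (score_interestingness, try_prove, etc.) are illustrative stand-ins, not the API of any real ATP or RL library; the bodies are toy stubs.

```python
from dataclasses import dataclass


@dataclass
class ProofRecord:
    conjecture: str
    proof_steps: list  # sequence of inference steps found by the ATP


def score_interestingness(conjecture: str) -> float:
    # Placeholder heuristic: shorter statements score as "more interesting".
    # A real system would use a learned model or theory-based criteria.
    return 1.0 / (1 + len(conjecture))


def try_prove(conjecture: str, step_budget: int = 100):
    # Stand-in for a call to a real ATP (e.g. E or Vampire via a wrapper).
    # Toy rule: anything of the form "... = x" is "provable" in one step.
    if conjecture.endswith("= x"):
        return ["refl"]
    return None


def build_corpus(conjectures, min_interest=0.01):
    """Filter conjectures by interestingness, run the ATP on each,
    and keep the (conjecture, proof) pairs that succeed.  The resulting
    corpus is what an RL proof-search policy would be trained on."""
    corpus = []
    for c in sorted(conjectures, key=score_interestingness, reverse=True):
        if score_interestingness(c) < min_interest:
            continue
        proof = try_prove(c)
        if proof is not None:
            corpus.append(ProofRecord(c, proof))
    return corpus


corpus = build_corpus(["x + 0 = x", "x * 1 = x", "x < 0"])
```

In this toy run, "x < 0" is attempted first (highest score) but fails, so the corpus ends up holding the two provable identities; the real analogue of the final step would be feeding such a corpus to an RL trainer the way self-play games feed AlphaZero-style training.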

AITP 2021 is described here btw,

http://aitp-conference.org/2021/

-- Ben

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf8bb7754cbb517a4-M9be49286df4cd8a70740feb3