I agree, David, it's a really interesting time for all this, and an AI-powered
proof assistant could be really helpful.
"Second, I think we should keep thinking about ways we could output
Metamath proofs
to make it easier for machine learning systems"
Re this: one thing I wondered about is whether adding English text between
the statements might help a language model understand what is happening. It
would be very repetitive, so maybe it makes no sense, but here is an example.
Given that we have df-2 which states
50::df-2 |- 2 = ( 1 + 1 )
We can apply theorem oveq2i to step 50 to get
51:50:oveq2i |- ( 2 + 2 ) = ( 2 + ( 1 + 1 ) )
Given that we have df-4 which states
52::df-4 |- 4 = ( 3 + 1 )
Given that we have df-3 which states
53::df-3 |- 3 = ( 2 + 1 )
We can apply theorem oveq1i to step 53 to get
54:53:oveq1i |- ( 3 + 1 ) = ( ( 2 + 1 ) + 1 )
Given that we have 2cn which states
55::2cn |- 2 e. CC
Given that we have ax-1cn which states
56::ax-1cn |- 1 e. CC
We can apply theorem addassi to steps 55,56,56 to get
57:55,56,56:addassi
|- ( ( 2 + 1 ) + 1 ) = ( 2 + ( 1 + 1 ) )
We can apply theorem 3eqtri to steps 52,54,57 to get
58:52,54,57:3eqtri |- 4 = ( 2 + ( 1 + 1 ) )
We can apply theorem eqtr4i to steps 51,58 to get
qed:51,58:eqtr4i |- ( 2 + 2 ) = 4
Which finishes the proof.
I think maybe that makes it easier to understand what is happening and
gives the language model something to grip onto while training?
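To make that concrete, here is a rough sketch of how such annotations could be
generated automatically. The one-line step format (step:refs:label |- assertion)
is assumed from the sample above, and the parsing is simplified; this is not an
existing metamath tool.

```python
import re

# Turn one-line proof steps of the form  step:refs:label |- assertion
# into English-annotated text like the 2 + 2 = 4 example above.
# The listing format here is an assumption based on that sample.
LINE_RE = re.compile(r'^(\w+):([\w,]*):([\w.-]+)\s+(\|-\s.*)$')

def annotate(lines):
    out = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue  # skip anything that isn't a recognizable step
        step, refs, label, assertion = m.groups()
        if refs:
            plural = "s" if "," in refs else ""
            out.append(f"We can apply theorem {label} to step{plural} {refs} to get")
        else:
            out.append(f"Given that we have {label} which states")
        out.append(f"{step}:{refs}:{label} {assertion}")
    return out

print("\n".join(annotate([
    "50::df-2 |- 2 = ( 1 + 1 )",
    "51:50:oveq2i |- ( 2 + 2 ) = ( 2 + ( 1 + 1 ) )",
])))
```

Something like this could be run over a whole database to produce a training
corpus in that style.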
On Thursday, May 4, 2023 at 5:21:43 PM UTC+1 David A. Wheeler wrote:
> On March 25, 2023, I posted some speculations on how recent AI/ML
> improvements might be applied to create Metamath proofs. Like any
> hypothesis, experiments are necessary to see whether or not they're true.
> That said, I think good hypotheses are a helpful first step.
>
> I have a few other notes & speculations. My hope is that these posts will
> inspire some of those experiments. See below for more.
>
> --- David A. Wheeler
>
> ========
>
> First, there is a *lot* going on to make AI/ML technologies more widely
> available.
>
> For example, we have "OpenLLaMA: An Open Reproduction of LLaMA":
> https://github.com/openlm-research/open_llama
>
> That also includes efforts to make it easier to run these kinds of tools
> locally and/or on servers controlled by others.
> For example, WebGPU has recently been implemented in Chrome, and Firefox is
> expected to release it soon.
> I expect that will make it easier to run machine learning algorithms,
> including on client-side JavaScript
> (where it might be possible to run an ML algorithm without having to
> install anything, just use your browser).
>
> I can't summarize it all; there's too much going on. If someone can point
> us to sites which are keeping
> track of this, or to key new results, that'd be awesome.
>
> Second, I think we should keep thinking about ways we could output
> Metamath proofs
> to make it easier for machine learning systems
> to learn how to create new proofs. I discussed a lot of that in my March
> 2023 post, such
> as presenting proofs "backwards" (start from the goal).
>
> One challenge is that most outputs include step numbers and step
> references.
> Those make it challenging for an ML tool to
> figure out what the underlying relationship is, because it has to convert
> those numbers into
> "spatial" relationships that aren't obvious. For training purposes it
> might be best to do something
> else, though it's not clear what. One option would be to omit step
> numbers,
> and for references use relative steps (+1). Or instead of step numbers,
> use reference names
> based on the label of the statement that uses them.
> We could also add {...} to surround groups, to
> help the ML system learn groupings. Other ideas welcome.
>
>
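P.S. Re the relative step references idea in the quoted text, here is a quick
sketch of what that transformation might look like. The (refs, label,
assertion) input format is invented purely for illustration, not an existing
metamath output.

```python
# Illustrative sketch of the "relative step references" idea quoted above:
# replace each absolute step reference with how many steps back the
# referenced step is, so the model sees positional rather than absolute
# numbering. The input format is invented for this example.
def relativize(steps):
    out = []
    for i, (refs, label, assertion) in enumerate(steps):
        rel = ",".join(f"-{i - r}" for r in refs)  # e.g. -1 = previous step
        out.append(f"{rel}:{label} {assertion}")
    return out

print("\n".join(relativize([
    ([], "df-2", "|- 2 = ( 1 + 1 )"),                      # no references
    ([0], "oveq2i", "|- ( 2 + 2 ) = ( 2 + ( 1 + 1 ) )"),   # refers 1 step back
])))
```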
--
You received this message because you are subscribed to the Google Groups
"Metamath" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/metamath/6e58991d-d28d-429b-a497-672dc7beca59n%40googlegroups.com.