The recent AI Mathematical Olympiad competition on Kaggle 
<https://www.kaggle.com/competitions/ai-mathematical-olympiad-prize/overview> 
challenged participants to solve problems similar to those seen in 
mathematical olympiads using artificial intelligence. It was won by Project 
Numina <https://projectnumina.ai/>, a team developing fine-tuned LLMs to 
solve math problems. They solved 29 out of 40 problems in the closed test set.

This is their interview on YouTube 
<https://www.youtube.com/watch?v=zNplyggkjbY> after winning the competition.

In this interview, they explain their training solution and the two steps 
involved. The first is Chain of Thought training, where the model is trained 
to generate a textual description of the steps needed to solve the problem. 
The second step translates the chain-of-thought output into Python code, 
most of the time using SymPy.
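To give a flavor of that second step, here is a hand-written sketch (not taken from their repository) of the kind of SymPy code such a stage might emit for a toy problem like "find the sum of the real roots of x^2 - 5x + 6 = 0":

```python
from sympy import symbols, solve

# Symbolic variable for the unknown in the equation.
x = symbols('x')

# Solve x^2 - 5x + 6 = 0 symbolically; solve() returns the list of roots.
roots = solve(x**2 - 5*x + 6, x)  # [2, 3]

# The quantity the problem asks for: the sum of the real roots.
answer = sum(roots)
print(answer)  # prints 5
```

Emitting exact symbolic computations like this, rather than asking the LLM to do the arithmetic in text, is presumably what makes the SymPy step valuable.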

Their solution is open-source on GitHub 
<https://github.com/project-numina/aimo-progress-prize>. They also share on 
HuggingFace the two datasets they used to fine-tune an open-source LLM 
developed by a different team.

Their HuggingFace space hosts a web tool to run their model 
<https://huggingface.co/spaces/AI-MO/math-olympiad-solver>.

This work looks amazing, especially as it is fully open-source!

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.