Their paper appears to be an attempt at applying the transformer model used for language translation to symbolic math.
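For concreteness, SymPy's srepr gives a deterministic, prefix-style string for an expression tree, which is easy to split into tokens for a sequence-to-sequence model. A minimal sketch (the regex tokenizer below is only an illustrative assumption, not an existing SymPy or TensorFlow API):

```python
import re
from sympy import Symbol, exp, sin, srepr

x = Symbol('x')
expr = sin(x) * exp(2 * x)

# srepr gives the full prefix-style representation of the expression tree,
# e.g. "Mul(exp(Mul(Integer(2), Symbol('x'))), sin(Symbol('x')))"
s = srepr(expr)

# A naive tokenizer: split into names, quoted strings, parentheses, commas,
# and integers. This is only a sketch of what "add a tokenizer to the
# output of srepr" might look like.
tokens = re.findall(r"[A-Za-z_]\w*|'[^']*'|[(),]|-?\d+", s)
print(tokens)
```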
There is a Jupyter notebook with an example of how to create a translator from Portuguese to English using the transformer model: https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/transformer.ipynb

If someone has some spare time, it would be interesting to see how this model would perform with SymPy (just add a tokenizer to the output of *srepr* and replace the Portuguese-English dataset).

On Saturday, 28 September 2019 08:30:30 UTC+2, Aaron Meurer wrote:
> On Fri, Sep 27, 2019 at 11:56 PM Ondřej Čertík <[email protected]> wrote:
> >
> > On Fri, Sep 27, 2019, at 12:48 PM, Aaron Meurer wrote:
> > > There's a review paper for ICLR 2020 on training a neural network to
> > > do symbolic integration. They claim that it outperforms Mathematica by
> > > a large margin. Machine learning papers can sometimes make overzealous
> > > claims, so scepticism is in order.
> > >
> > > https://openreview.net/pdf?id=S1eZYeHFDS
> > >
> > > They don't seem to post any code. The paper is in double blind review,
> > > so maybe it will be available later. Or maybe it is available now and
> > > I don't see it. If someone knows, please post a link here.
> > >
> > > They do cite the SymPy paper, but it's not clear if they actually use SymPy.
> >
> > They wrote:
> >
> > "The validity of a solution itself is not provided by the model, but by
> > an external symbolic framework (Meurer et al., 2017)."
> >
> > So that seems to suggest they used SymPy to check the results.
> >
> > > I think it's an interesting concept. They claim that they generate
> > > random functions and differentiate them to train the network. But I
> > > wonder if one could instead take a large pattern matching integration
> > > table like RUBI and train it on that, and produce something that works
> > > better than RUBI.
> > > The nice thing about indefinite integration is that it's trivial to
> > > check if an answer is correct (just check if diff(integral(f)) - f == 0),
> > > so heuristic approaches that can sometimes give nonsense are tenable,
> > > because you can just throw out wrong answers.
> > >
> > > I'm also curious (and sceptical) about just how well a neural network
> > > can "learn" symbolic mathematics and specifically an integration
> > > algorithm. Another interesting thing to do would be to try to train a
> > > network to integrate rational functions, to see if it can effectively
> > > recreate the algorithm (for those who don't know, there is a complete
> > > algorithm which can integrate any rational function). My guess is that
> > > this sort of thing is still beyond the capabilities of a neural
> > > network.
> >
> > I saw this paper too today. My main question is whether their approach
> > is better than Rubi (say in Mathematica, as it doesn't yet work 100% in
> > SymPy). They show that their approach is much better than Mathematica,
> > but so is Rubi.
>
> It actually isn't clear to me yet that they've shown it. I want to see
> what their test suite of functions looks like.
>
> Aaron Meurer
>
> > The ML approach seems like brute force. So is Rubi. So it's fair to
> > compare ML with Rubi. On the other hand, I feel it's unfair to compare
> > brute force with an actual algorithm, such as Risch.
> >
> > Ondrej
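The differentiation check described in the quoted thread takes only a few lines in SymPy; a minimal sketch:

```python
from sympy import diff, integrate, simplify, sin, symbols

x = symbols('x')
f = x * sin(x)

# A proposed antiderivative (here produced by integrate; in the ML setting
# it would come from the model instead).
F = integrate(f, x)

# Verify the candidate: d/dx F - f should simplify to zero.
assert simplify(diff(F, x) - f) == 0

# A wrong candidate fails the same check and can be thrown out.
wrong = x * sin(x)
assert simplify(diff(wrong, x) - f) != 0
```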
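On the rational-function point: SymPy already ships the complete algorithm for rational integrands (its ratint routine, which integrate dispatches to), so it could serve as a ground-truth oracle when testing whether a network has recreated it. A small sketch:

```python
from sympy import diff, integrate, simplify, symbols

x = symbols('x')

# A rational function; the complete algorithm can integrate any such
# input, returning a rational part plus logarithmic terms.
f = (x**2 + 1) / (x**3 - x)
F = integrate(f, x)

# The result can be verified the same way as any other antiderivative.
assert simplify(diff(F, x) - f) == 0
print(F)
```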
-- 
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/sympy/f42e6f5d-a386-476b-ad15-bb962cfdcdba%40googlegroups.com.
