On Fri, Sep 27, 2019 at 11:56 PM Ondřej Čertík <[email protected]> wrote:
>
> On Fri, Sep 27, 2019, at 12:48 PM, Aaron Meurer wrote:
> > There's a review paper for ICLR 2020 on training a neural network to
> > do symbolic integration. They claim that it outperforms Mathematica by
> > a large margin. Machine learning papers can sometimes make overzealous
> > claims, so scepticism is in order.
> >
> > https://openreview.net/pdf?id=S1eZYeHFDS
> >
> > They don't seem to post any code. The paper is under double-blind review,
> > so maybe it will be available later. Or maybe it is available now and
> > I don't see it. If someone knows, please post a link here.
> >
> > They do cite the SymPy paper, but it's not clear if they actually use SymPy.
>
> They wrote:
>
> "The validity of a solution itself is not provided by the model, but by an 
> external symbolic framework (Meurer et al., 2017). "
>
> So that seems to suggest they used SymPy to check the results.
>
> >
> > I think it's an interesting concept. They claim that they generate
> > random functions and differentiate them to train the network. But I
> > wonder if one could instead take a large pattern matching integration
> > table like RUBI and train it on that, and produce something that works
> > better than RUBI. The nice thing about indefinite integration is it's
> > trivial to check if an answer is correct (just check if
> > diff(integral(f)) - f == 0), so heuristic approaches that can
> > sometimes give nonsense are tenable, because you can just throw out
> > wrong answers.
> >
> > I'm also curious (and sceptical) on just how well a neural network can
> > "learn" symbolic mathematics and specifically an integration
> > algorithm. Another interesting thing to do would be to try to train a
> > network to integrate rational functions, to see if it can effectively
> > recreate the algorithm (for those who don't know, there is a complete
> > algorithm which can integrate any rational function). My guess is that
> > this sort of thing is still beyond the capabilities of a neural
> > network.
>
> I saw this paper too today. My main question is whether their approach is 
> better than Rubi (say in Mathematica, as Rubi doesn't yet work 100% in 
> SymPy). They show that their approach is much better than Mathematica, but 
> so is Rubi.
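As an aside, the answer-checking idea above is already easy to try in SymPy. A minimal sketch (the helper name and the single-variable assumption are mine, not from the paper; note that simplify is a heuristic zero test, so it may fail to confirm a correct antiderivative, but a result of 0 is conclusive):

```python
# Verify a candidate antiderivative by differentiating it: a candidate F
# is a valid antiderivative of f iff diff(F) - f simplifies to 0.
from sympy import symbols, diff, simplify, integrate, exp, sin, atan

x = symbols('x')

def is_valid_antiderivative(candidate, integrand):
    """Heuristic check: does diff(candidate) - integrand simplify to 0?"""
    return simplify(diff(candidate, x) - integrand) == 0

# A correct answer passes:
print(is_valid_antiderivative(x*exp(x) - exp(x), x*exp(x)))  # True
# A wrong guess is thrown out:
print(is_valid_antiderivative(sin(x), x*exp(x)))  # False

# SymPy also handles the rational-function case mentioned in the thread,
# where a complete algorithm exists:
print(integrate(1/(x**2 + 1), x))  # atan(x)
```

This is exactly the kind of cheap filter that makes heuristic (or neural) integration tenable: generate candidates, keep only those that differentiate back to the integrand.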

It actually isn't clear to me yet that they've shown it. I want to see
what their test suite of functions looks like.

Aaron Meurer

>
> The ML approach seems like brute force. So is Rubi. So it's fair to compare 
> ML with Rubi. On the other hand, I feel it's unfair to compare brute force 
> with an actual algorithm, such as Risch.
>
> Ondrej
>
> --
> You received this message because you are subscribed to the Google Groups 
> "sympy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sympy/db41cf67-acc9-4a84-8267-2742b748de4d%40www.fastmail.com.
