After skimming the paper, I had the following questions:

1. Is integration really a tree-to-tree translation? The neural network 
predicts the resulting expression tree for the input expression, but 
integration is not inherently a predictive operation. Also, how do we 
decide whether the model is over-fitting the training data, and what was 
the size of the training set?

2. Does the data set of equations contain noise, or any other 
uncertainty? For example, when translating between natural languages, one 
word can map to several different words with the same meaning. That is 
not the case here: there may be multiple correct antiderivatives, but 
each candidate can be checked for correctness with 100% certainty.

3. Can this model generalise to arbitrary mathematical expressions? The 
way they generated the data sets is algorithmic and deterministic, not 
truly random (random number generators are themselves deterministic). So 
how can we say that this model outperforms any CAS?
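To make point 1 concrete: the tree-to-tree framing in the paper means each expression tree is flattened into a token sequence in prefix (Polish) order, and a seq2seq model maps one sequence to another. A rough sketch of that encoding in SymPy (the paper's exact tokenization may differ):

```python
import sympy as sp

def to_prefix(expr):
    """Flatten a SymPy expression tree into a prefix token list.
    A sketch of the kind of tree -> sequence encoding a seq2seq
    model would consume; the paper's actual tokenizer may differ."""
    if not expr.args:  # leaf: a symbol or a number
        return [str(expr)]
    tokens = [type(expr).__name__]  # internal node: the operator
    for arg in expr.args:
        tokens += to_prefix(arg)
    return tokens

x = sp.Symbol('x')
print(to_prefix(sp.sin(x) + x**2))
```

So "translation" here is between two such token sequences (integrand and antiderivative), not a semantic operation on the mathematics itself.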

Neural networks don't learn the way human beings do; they just imitate 
the underlying distribution of the data. But I don't think that 
mathematical expressions as a whole have any such underlying 
distribution. We can, however, take a subset of expressions that does 
have a distribution, and I think that is what their model captures.

PS - I may be wrong in many places above, as I am just a beginner in 
ML/DL/AI. Please correct me wherever possible. Thanks.

On Saturday, September 28, 2019 at 12:18:21 AM UTC+5:30, Aaron Meurer wrote:
>
> There's a review paper for ICLR 2020 on training a neural network to 
> do symbolic integration. They claim that it outperforms Mathematica by 
> a large margin. Machine learning papers can sometimes make overzealous 
> claims, so scepticism is in order. 
>
> https://openreview.net/pdf?id=S1eZYeHFDS 
>  
>
> They don't seem to post any code. The paper is in double-blind review, 
> so maybe it will be available later. Or maybe it is available now and 
> I don't see it. If someone knows, please post a link here. 
>
> They do cite the SymPy paper, but it's not clear if they actually use 
> SymPy. 
>
> I think it's an interesting concept. They claim that they generate 
> random functions and differentiate them to train the network. But I 
> wonder if one could instead take a large pattern matching integration 
> table like RUBI and train it on that, and produce something that works 
> better than RUBI. The nice thing about indefinite integration is it's 
> trivial to check if an answer is correct (just check if 
> diff(integral(f)) - f == 0), so heuristic approaches that can 
> sometimes give nonsense are tenable, because you can just throw out 
> wrong answers. 
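This verification step is easy to demonstrate in SymPy. Here the candidate antiderivative comes from sympy.integrate itself, but it could come from any heuristic (including a network's guess); the check is the same:

```python
import sympy as sp

x = sp.Symbol('x')
f = x * sp.cos(x)

# Candidate antiderivative. In the paper's setting this would be the
# network's output; here SymPy produces it, to keep the sketch short.
F = sp.integrate(f, x)

# A candidate is accepted only if its derivative simplifies back to f.
assert sp.simplify(sp.diff(F, x) - f) == 0
print(F)  # x*sin(x) + cos(x)
```

Any guess failing the assertion is simply thrown out, which is what makes heuristic integrators tenable.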
>
> I'm also curious (and sceptical) on just how well a neural network can 
> "learn" symbolic mathematics and specifically an integration 
> algorithm. Another interesting thing to do would be to try to train a 
> network to integrate rational functions, to see if it can effectively 
> recreate the algorithm (for those who don't know, there is a complete 
> algorithm which can integrate any rational function). My guess is that 
> this sort of thing is still beyond the capabilities of a neural 
> network. 
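For reference, SymPy's integrate already dispatches rational functions to such a complete algorithm, so it could supply both training pairs and ground truth for that experiment. A quick sanity check on one rational integrand:

```python
import sympy as sp

x = sp.Symbol('x')
# An arbitrary rational function; any such function has a closed-form
# antiderivative (in terms of rational functions, logs, and arctans).
f = (3*x + 5) / (x**2 + 4*x + 13)
F = sp.integrate(f, x)

# Verify the result by differentiating back.
assert sp.simplify(sp.diff(F, x) - f) == 0
print(F)
```

Whether a network could internalize the algorithm behind this, rather than memorize instances of it, is exactly the open question.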
>
> Aaron Meurer 
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sympy/fa4ee5ee-a112-425b-ba2e-61d299812855%40googlegroups.com.