FranckQC commented on PR #11423:
URL: https://github.com/apache/tvm/pull/11423#issuecomment-1138955023

   Hi everyone,
   
   @tkonolige :
   
   > Regarding 1), you are saying that right now 
SyntacticToSemanticComputations just happens to use ExprDeepEqual, but we 
intend to change it in the future? And so, right now (in main), 
SyntacticToSemanticComputations is essentially doing nothing? If so, could we 
just have SyntacticToSemanticComputations just be a no-op for now?
   
   Hmm, no, I wouldn't say it does nothing. It still transforms a hashtable into a vector, and how it does so is determined by the `EquivalentTerms()` function.
   
   `SyntacticToSemanticComputations()` transforms a table of syntactic entities (where x+y is not the same thing as y+x, even though they are equal modulo commutativity) into a vector of "semantic entities", where equivalent computations have been merged together according to the notion of "being equivalent". This function builds equivalence classes, if you like. And this notion of "being equivalent" is customizable: it is implemented by the `EquivalentTerms()` function, which at the moment simply calls `EqualTerms()`. So at the moment, the only way to have `x Equiv y` is for x and y to be syntactically the same term. But that's not a requirement, and it could be any equivalence relation: the pass has been written to work with any equivalence relation, in order to be able to perform even more commonings.
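   To make the idea concrete, here is a small Python sketch (purely illustrative, not the actual TVM C++ implementation; the names `equal_terms`, `equivalent_terms` and `syntactic_to_semantic` are hypothetical stand-ins for `EqualTerms()`, `EquivalentTerms()` and `SyntacticToSemanticComputations()`) of how a table of syntactic entities can be grouped into equivalence classes with a pluggable equivalence predicate:

   ```python
   # Hypothetical sketch of SyntacticToSemanticComputations(): merge a
   # {term: count} table into classes of equivalent computations.
   # Terms are represented as plain strings for simplicity.

   def equal_terms(a, b):
       # Stand-in for strict syntactic equality (ExprDeepEqual in TVM).
       return a == b

   def equivalent_terms(a, b):
       # Currently just delegates to syntactic equality, but any
       # equivalence relation could be plugged in here instead.
       return equal_terms(a, b)

   def syntactic_to_semantic(table, equiv=equivalent_terms):
       """Turn {term: count} into a list of (representative, total_count),
       where terms deemed equivalent by `equiv` are merged into one class."""
       classes = []  # list of [representative, total_count]
       for term, count in table.items():
           for cls in classes:
               if equiv(cls[0], term):
                   cls[1] += count  # merge into the existing class
                   break
           else:
               classes.append([term, count])  # new equivalence class
       return [(rep, n) for rep, n in classes]
   ```

   With the default (syntactic) equivalence, "x+y" and "y+x" remain two distinct classes; only a richer `equiv` would merge them.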
   
   As a use case of this kind of thing:
   For instance, in just 3 lines of code, @yuanfz98 proposed to change the equivalence relation so that it identifies terms modulo associativity and distributivity (of * over +) in a [PR](https://github.com/apache/tvm/pull/10544). Sadly I didn't have enough time to discuss this PR at the time (it got closed quickly). Some people were afraid it would add too much computational complexity to the pass. The complexity of the pass itself wouldn't change (we already look at each pair!), but this time the pass would actually take advantage of that. However, this new equivalence function relied on a pseudo-normalisation function (`arith::Analyzer::Simplify`, see the discussion [here](https://github.com/apache/tvm/issues/10211)), which necessarily needs some time to (try to) normalise the terms being manipulated. It should not take too long, as they did not implement a full decision procedure: just some quick attempts at applying known rewrite rules, which often lead to the most simplified term (but that is not guaranteed). (Said differently, `arith::Analyzer::Simplify` is correct but not complete.)
   It could still take too long in its current form; I don't know, I didn't try it as it was. But I think there's a way to make it more efficient.
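   The spirit of that approach can be sketched in a few lines of Python (a toy illustration only: the real thing uses `arith::Analyzer::Simplify` as the normaliser, whereas here `normalise` merely sorts the operands of a commutative `+`):

   ```python
   # Toy normalisation-based equivalence: compare normal forms instead of
   # raw terms. Here the "normal form" just sorts the operands of '+',
   # so "x+y" and "y+x" become equivalent. A real normaliser applies
   # rewrite rules and, like Simplify, need not be complete.

   def normalise(term):
       # Split on '+', sort the operands, and rejoin them.
       return "+".join(sorted(term.split("+")))

   def equivalent_modulo_commutativity(a, b):
       return normalise(a) == normalise(b)
   ```

   Since every pair of terms is already examined by the pass, only the cost of the normalisation itself is added on top.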
   
   @AndrewZhaoLuo and @tkonolige 
   
   I think I should be able to address the issue without completely removing the possibility of having more interesting comparisons in the future.
   I'll give it a go this weekend if you're happy with that.
   
   Thanks.

