vinx13 commented on PR #77:
URL: https://github.com/apache/tvm-rfcs/pull/77#issuecomment-1152992143

   Thanks for the discussion. To provide more context, the A0 approach we discussed is TIR-Relax layout rewriting (https://github.com/tlc-pack/relax/issues/162). The general idea is to lift such transformations out of TIR scheduling into the graph level, and then cancel out redundant intermediate transformations, either by proving that fusing a pair of post-compute and pre-compute transformations produces an identity TIR function, or by using high-level operator semantics. I think this is very similar to the [graph-level solution](https://discuss.tvm.apache.org/t/introducing-ty-nnp-backend-with-end2end-tensorir-integration/11807/4) mentioned by @wrongtest.

   In general, both A0 and A1 are valid approaches. The choice is mainly about how we would like to handle the complexity of the simplifications.
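   To illustrate the cancellation idea, here is a minimal sketch (not the actual Relax/TIR implementation; the function names and the NCHW/NCHW4c layouts are illustrative assumptions): a post-compute transform that packs a tensor into a vectorized layout, followed by the matching pre-compute transform that unpacks it, composes to the identity, which is what would let the graph-level pass elide the pair.

```python
import numpy as np

def pack_nchw_to_nchw4c(x):
    # Post-compute transform: split the channel axis by 4 and move the
    # inner factor to the last dimension (NCHW -> NCHW4c).
    n, c, h, w = x.shape
    assert c % 4 == 0, "channel axis must be divisible by the packing factor"
    return x.reshape(n, c // 4, 4, h, w).transpose(0, 1, 3, 4, 2)

def unpack_nchw4c_to_nchw(x):
    # Pre-compute transform of the consumer: the inverse packing
    # (NCHW4c -> NCHW).
    n, c_outer, h, w, c_inner = x.shape
    return x.transpose(0, 1, 4, 2, 3).reshape(n, c_outer * c_inner, h, w)

# Fusing the pair yields the identity, so a graph-level rewrite can
# drop both transforms instead of materializing the intermediate layout.
x = np.random.rand(1, 8, 5, 5)
assert np.array_equal(unpack_nchw4c_to_nchw(pack_nchw_to_nchw4c(x)), x)
```

   In the A0 design this identity check would be done on the TIR functions themselves (or short-circuited via operator semantics), rather than numerically as above.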
