lhutton1 commented on PR #107:
URL: https://github.com/apache/tvm-rfcs/pull/107#issuecomment-1944331637

   Thanks for taking a look @tqchen! Since scheduling will be completed with 
TensorIR, it will provide the building blocks for being plugged into an 
IRModule=>IRModule transformation pass. For our current use case, it's 
important to be able to fall back to previous optimizations in the form of TE 
schedules / TOPI where coverage of the TensorIR schedules doesn't exist. 
   
   From the [proposed 
strategy](https://discuss.tvm.apache.org/t/discuss-tvm-core-strategy-for-operator-scheduling-and-tuning/16352),
 I understand it's important to ensure the schedule can operate on a generic 
compute definition of the operation. In the case of matmul-style operations, 
we'd want to apply "array packing" to the input, which is currently expressed 
via the compute definition. Is it possible to express this through TIR 
scheduling alone?
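
   For context, the "array packing" referred to above is the layout rewrite 
from TVM's GEMM optimization tutorial: the second matmul operand is 
rearranged so the innermost tile accessed in the inner loop is contiguous in 
memory. A minimal NumPy sketch of that transformation (the function names 
here are illustrative, not TVM API, and the pack shape follows the 
tutorial's `(N // bn, K, bn)` convention):

   ```python
   import numpy as np

   def pack_b(B, bn):
       # Rearrange B (K x N) into (N // bn, K, bn): each B_packed[jo] is a
       # contiguous K x bn panel, i.e. columns jo*bn .. (jo+1)*bn of B.
       K, N = B.shape
       return B.reshape(K, N // bn, bn).transpose(1, 0, 2).copy()

   def matmul_packed(A, B_packed, bn):
       # Matmul over packed panels: the inner computation reads one
       # contiguous panel of B per outer-j iteration.
       M, _ = A.shape
       N = B_packed.shape[0] * bn
       C = np.zeros((M, N), dtype=A.dtype)
       for jo in range(N // bn):
           C[:, jo * bn:(jo + 1) * bn] = A @ B_packed[jo]
       return C
   ```

   In TE this layout change is baked into the compute definition (a packed 
intermediate tensor), which is what motivates the question of whether the 
same rewrite can instead be performed purely by TIR schedule primitives on a 
generic matmul definition.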

