giuseros edited a comment on pull request #6711:
URL: https://github.com/apache/incubator-tvm/pull/6711#issuecomment-716586250


   Hi @FrozenGene 
   Thanks for your reply! 
   1) Yes we can, but there is an issue: it is very hard to make the two 
quantized strategies (int8 + smull/sadalp vs int16 + smlal/smlal2) available to 
the auto-tuner at the same time (you already saw [this 
post](https://discuss.tvm.apache.org/t/quantized-models-and-legalization-pass/8253/3),
 and I will reply to your comment there). In the case of depthwise we only have 
the int16 strategy, so the issue does not arise. 
   2) Glad you asked, because I am trying to prototype the idea of a "tir 
pass" in which this tensorization (and possibly every Arm tensorization in 
`tensor_intrin.py`) happens. My hope is to make it work more or less like 
`vectorize`. One of the main reasons to have this is to enable Ansor support.
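   To make the "tir pass" idea concrete, here is a minimal toy sketch (not the TVM API; all names such as `Loop`, `MulAdd`, `Intrinsic`, and `tensorize_pass` are hypothetical) of a pass that rewrites a matching int16 multiply-accumulate loop into an intrinsic call automatically, the way `vectorize` rewrites annotated loops, instead of requiring an explicit `tensorize` call in every schedule:

   ```python
   from dataclasses import dataclass

   @dataclass
   class Loop:
       """A toy stand-in for a TIR loop node."""
       var: str
       extent: int
       body: object

   @dataclass
   class MulAdd:
       """Toy stand-in for an acc += a[i] * b[i] body (the smlal pattern)."""
       dtype: str

   @dataclass
   class Intrinsic:
       """Toy stand-in for an emitted hardware intrinsic."""
       name: str

   def tensorize_pass(stmt):
       # Walk the (toy) IR; when an int16 multiply-accumulate loop with a
       # suitable extent is found, replace the whole loop with the intrinsic.
       # No schedule-side tensorize() call is needed.
       if isinstance(stmt, Loop):
           body = tensorize_pass(stmt.body)
           if isinstance(body, MulAdd) and body.dtype == "int16" \
                   and stmt.extent % 4 == 0:
               return Intrinsic("smlal")
           return Loop(stmt.var, stmt.extent, body)
       return stmt

   print(tensorize_pass(Loop("i", 8, MulAdd("int16"))))
   # An int8 loop would be left untouched by this particular pattern.
   ```

   The point of the sketch is only the shape of the mechanism: pattern matching happens inside a pass over the IR, so auto-schedulers like Ansor, which do not emit `tensorize` calls, could still benefit from the intrinsics.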


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]