masahi commented on issue #10223:
URL: https://github.com/apache/tvm/issues/10223#issuecomment-1037881211


   Fixed in https://github.com/apache/tvm/pull/10235. You shouldn't be using 
`torch.jit.optimize_for_inference`: it does us no good, and it even 
introduces ops such as `aten::conv2d` that we don't recognize. We do already 
support PyTorch's conv2d op, of course, but we expect it to be represented as 
`aten::_convolution`, which is the case after you run `torch.jit.trace`. 
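
   As a minimal sketch of the workflow described above (the model here is a hypothetical example, not code from the issue): trace the module with `torch.jit.trace` and hand the traced module to TVM directly, without calling `torch.jit.optimize_for_inference` in between.

   ```python
   import torch

   # Hypothetical toy model for illustration only.
   class ConvNet(torch.nn.Module):
       def __init__(self):
           super().__init__()
           self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)

       def forward(self, x):
           return self.conv(x)

   model = ConvNet().eval()
   example = torch.randn(1, 3, 32, 32)

   # Trace the model. In the traced graph, convolution shows up in the
   # dispatcher-level form the TVM PyTorch frontend expects
   # (aten::_convolution in the PyTorch versions this thread discusses).
   traced = torch.jit.trace(model, example)

   # Inspect which conv ops the graph actually contains.
   conv_kinds = {n.kind() for n in traced.graph.nodes() if "conv" in n.kind()}
   print(conv_kinds)

   # Do NOT do this before importing into TVM: optimize_for_inference
   # rewrites the graph into forms (e.g. aten::conv2d) that the frontend,
   # prior to PR #10235, did not recognize.
   # traced = torch.jit.optimize_for_inference(traced)

   # The traced module is what you pass to tvm.relay.frontend.from_pytorch,
   # along with the input name/shape info.
   ```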

