ayeganov edited a comment on issue #9939:
URL: https://github.com/apache/tvm/issues/9939#issuecomment-1013786357


   @masahi Thank you for the helpful answers. I am going to go try compiling the model with the VM. But I think my questions were still vague enough that I managed to confuse myself with your answers :). Here is what I want to accomplish and what I currently think I understand:
   
   1. Take an existing ONNX model that performs well on CPU but uses too many compute resources to deploy in the wild as is, and compile it for the TVM runtime using Metal as the hardware accelerator.
   2. Load the compiled model into my own C++ library.
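   For step 1, here is a minimal sketch of what I have in mind on the Python side (the model path, input name, and shape are placeholders for my actual model):
   
   ```python
   # Sketch: import an ONNX model into Relay and compile it with the
   # Relay VM for a Metal target. Paths and shapes below are placeholders.
   import onnx
   import tvm
   from tvm import relay
   
   onnx_model = onnx.load("model.onnx")            # placeholder path
   shape_dict = {"input": (1, 3, 224, 224)}        # placeholder input name/shape
   mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
   
   # Metal for the device kernels, LLVM for the host-side code.
   target = tvm.target.Target("metal", host="llvm")
   with tvm.transform.PassContext(opt_level=3):
       vm_exec = relay.vm.compile(mod, target=target, params=params)
   ```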
   
   I don't care about compiling the model in C++, as long as I can run it in C++. In fact, I'd prefer to compile in Python, because it is easier to script. I did notice this comment in the implementation you linked:
   
   ```
   # Compile with Relay VM
   # ---------------------
   # Note: Currently only CPU target is supported. For x86 target, it is
   # highly recommended to build TVM with Intel MKL and Intel OpenMP to get
   # best performance, due to the existence of large dense operator in
   # torchvision rcnn models.
   ```
   
   Is this note outdated? I'll give this a shot in a few minutes, but I wanted to bring it to your attention in case there is a discrepancy between the examples and the actual implemented functionality.
   
   In essence, what I think I understand:
   
   1. Models can be compiled from Python and used through the TVM API in either C++ or Python.
   2. Compiling with the VM bakes it into the model, so running it from C++ won't require me to use a VM explicitly.
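   If point 2 above is right, I'd expect to serialize the compiled executable from Python roughly like this (assuming `vm_exec` is the result of `relay.vm.compile`, and the output file names are placeholders):
   
   ```python
   # Sketch: serialize a compiled Relay VM executable so it can be loaded
   # later without any Python dependency. Output paths are placeholders.
   # vm_exec = relay.vm.compile(mod, target=target, params=params)  # from the compile step
   
   code, lib = vm_exec.save()            # VM bytecode + compiled kernel library
   lib.export_library("model.dylib")     # shared library with the Metal kernels
   with open("model.ro", "wb") as f:
       f.write(code)                     # serialized bytecode
   ```
   
   My understanding is that the C++ side would then reconstruct the executable from these two artifacts (something like `tvm::runtime::vm::Executable::Load`), but please correct me if that's not how it works.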
   
   Also, your examples all point to PyTorch. Is that because they are good examples and I can do everything they do with ONNX as well, or should I use PyTorch to achieve the functionality I need?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]