[TVM Discuss] [Application] Deployment to Pytorch/dlpack

2020-03-25 Thread Yizhi Liu via TVM Discuss
TVM kernels support dynamic shapes, but the rank of the shape has to be fixed. We did some experimental work before to exhaust all the combinations of shape rank + op attribute ahead of time and compile them into a .so. It's doable but has some restrictions.
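A minimal sketch of that ahead-of-time idea, with plain Python closures standing in for the per-rank compiled kernels (all names here are illustrative, not TVM API):

```python
# Illustrative rank-based dispatch: "compile" one kernel per supported rank
# ahead of time, then pick the right one at call time. Within a fixed rank,
# each dimension's extent stays dynamic.

def make_addone_kernel(rank):
    """Stand-in for a kernel exported to .so for one fixed rank."""
    def kernel(nested):
        # Add 1 to every element of a rank-`rank` nested list of any extents.
        if rank == 1:
            return [x + 1 for x in nested]
        return [make_addone_kernel(rank - 1)(inner) for inner in nested]
    return kernel

# Exhaust the supported ranks ahead of time, as the post describes.
KERNELS = {r: make_addone_kernel(r) for r in range(1, 4)}

def infer_rank(x):
    return 1 + infer_rank(x[0]) if isinstance(x[0], list) else 1

def addone(x):
    rank = infer_rank(x)
    if rank not in KERNELS:
        # The restriction: ranks outside the precompiled set are unsupported.
        raise ValueError(f"no kernel compiled for rank {rank}")
    return KERNELS[rank](x)
```

For example, `addone([[1, 2], [3, 4]])` dispatches to the rank-2 kernel, while a rank-4 input raises an error because no kernel was compiled for it.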

2020-03-23 Thread tqchen via TVM Discuss
You are absolutely right. A libtvm_runtime-only library would be a good way to go. Right now (before 0.7) we can do that from source by typing `make runtime` instead of `make`.
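As a rough sketch of that runtime-only build (repo URL and steps as of early 2020; exact details may differ by version):

```shell
# Build only the runtime shared library; the full compiler stack is not
# needed on the deployment machine.
git clone --recursive https://github.com/apache/incubator-tvm tvm
cd tvm
make runtime   # builds libtvm_runtime.so instead of the full libtvm.so
```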

2020-03-23 Thread Sasha Rush via TVM Discuss
Thanks. I wasn't sure if I was missing something. It seems pretty heavyweight to assume users have the full TVM installed, and the only other option seems to be shipping the generated CUDA code, which seems messy. Thanks for the awesome library, really interesting to use.

2020-03-23 Thread tqchen via TVM Discuss
We are in the process of doing quite a bit of refactoring of the runtime in this release cycle. We do hope to get a packaging story for 0.7, through pip or conda.

2020-03-23 Thread masahi via TVM Discuss
Unfortunately we don't have any pip package at the moment, but a runtime-only package sounds reasonable. cc @tqchen I'd imagine you'd build the TVM code outside of Torch first and export the build artifact as a shared lib. From Torch you can then load the TVM-generated shared lib in either Python or C++ code.
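A hypothetical end-to-end sketch of that flow, assuming a kernel named `addone` was previously exported with `export_library("addone.so")` and that the TVM runtime and `torch` are installed on the user's machine (the kernel name and file name are illustrative):

```python
# Load a TVM-built shared lib from PyTorch and exchange tensors via dlpack
# with zero copy. Only the TVM runtime, not the full compiler, is needed here.
import torch
import tvm
from tvm.contrib.dlpack import to_pytorch_func

mod = tvm.runtime.load_module("addone.so")     # TVM-generated artifact
addone_torch = to_pytorch_func(mod["addone"])  # wrap to accept torch tensors

x = torch.rand(8, device="cuda")
y = torch.empty_like(x)
addone_torch(x, y)  # runs the TVM kernel directly on torch-owned memory
```

The `to_pytorch_func` wrapper converts the torch tensors through dlpack, so no data is copied between the two frameworks.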

2020-03-23 Thread Sasha Rush via TVM Discuss
I would like to use TVM to develop a GPU PyTorch module. However, I cannot figure out how best to distribute the code. What is the easiest way to ensure that end users can run the function? Do they need to install TVM from source? Is there a pip package for the runtime alone?