[TVM Discuss] [Questions] Why convolution written in python

2020-03-24 Thread Wyushun via TVM Discuss
Got it! Thank you very much ~~ --- [Visit Topic](https://discuss.tvm.ai/t/why-convolution-written-in-python/6072/3) to respond. You are receiving this because you enabled mailing list mode. To unsubscribe from these emails, [click

[TVM Discuss] [Questions] [AutoTVM] Selective tuning of hotspots

2020-03-24 Thread Robert Bücs via TVM Discuss
Dear community, I'm currently trying to **reduce overall AutoTVM runtimes** by selectively tuning only the kernels that are actual hotspots in the application. **Hotspot detection** can be performed fairly easily, e.g. by using the **debug runtime**, which gives a detailed callgraph profile

[TVM Discuss] [Application] Unintuitive lowered code in TVM

2020-03-24 Thread Pratik Fegade via TVM Discuss
Hi all, I defined a toy computation and scheduled it in TVM. I am having some difficulty understanding how the lowered code that TVM produces corresponds to the schedule. I have reproduced both the Python and the lowered IR below. Python code:

```python
import tvm
from tvm import te
```

[TVM Discuss] [Questions] How does a Relay OP support variable length parameter list?

2020-03-24 Thread Lixiaoquan via TVM Discuss
I think Relay's convention is to convert multiple parameters into a tuple. --- [Visit Topic](https://discuss.tvm.ai/t/how-does-a-relay-op-support-variable-length-parameter-list/1753/3) to respond.
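The convention can be illustrated in plain Python (this is an analogy, not the Relay API): a variadic op such as concatenate accepts one tuple argument instead of N separate parameters, so the op's signature stays fixed-arity no matter how many inputs the caller has.

```python
# Plain-Python illustration (not the Relay API): variable-length inputs
# are packed into a single tuple argument, keeping the op signature fixed.
def concatenate(inputs):
    """`inputs` is one tuple of sequences, mirroring Relay's convention."""
    assert isinstance(inputs, tuple), "pack the operands into a tuple first"
    result = []
    for seq in inputs:
        result.extend(seq)
    return result

# Callers pack however many operands they have into a tuple up front.
print(concatenate(([1, 2], [3], [4, 5])))  # [1, 2, 3, 4, 5]
```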

[TVM Discuss] [Questions] [VM] The performance degradation of VM runtime and Dynamic Shape support compared to Graph Runtime

2020-03-24 Thread Lfengad via TVM Discuss
Thank you for the response! I tried it using the CPU backend with target "llvm". --- [Visit Topic](https://discuss.tvm.ai/t/vm-the-performance-degradation-of-vm-runtime-and-dynamic-shape-support-compared-to-graph-runtime/6076/4) to respond.

[TVM Discuss] [Questions] Why convolution written in python

2020-03-24 Thread Wheest via TVM Discuss
Since TVM is a compiler infrastructure, the convolution defined through the Python API is only a specification of the computation. When the operation runs, that computation has already been compiled to a backend, e.g. LLVM, OpenCL, or CUDA. So there isn't any inference-time overhead from using Python.

[TVM Discuss] [Questions] [VM] The performance degradation of VM runtime and Dynamic Shape support compared to Graph Runtime

2020-03-24 Thread Haichen Shen via TVM Discuss
Are you running this on GPU or CPU? The performance degradation is expected on GPU, as we need heterogeneous runtime support to avoid redundant memory copies between CPU and GPU. @zhiics is currently working on this. Besides, @jroesch is working on memory planning for dynamic-shape cases

[TVM Discuss] [Application] TOPI autotuning integration

2020-03-24 Thread Haichen Shen via TVM Discuss
Did you use the latest TVM master version? In the latest version, we moved to [Relay Op Strategy](https://docs.tvm.ai/dev/relay_op_strategy.html) to choose which implementation to compile for each op. You need to add your implementation to the strategy for it to be used during the
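The idea behind an op strategy can be sketched in plain Python (a simplified analogy, not the real Relay API): implementations register themselves per (op, target) pair, and the compiler later picks one from the registry. An implementation that is never registered, like an untuned TOPI schedule here, can never be selected.

```python
# Simplified analogy of an op-strategy registry (not the actual Relay API):
# each (op, target) pair maps to a list of candidate implementations.
STRATEGIES = {}

def register_strategy(op, target):
    def wrap(impl):
        STRATEGIES.setdefault((op, target), []).append(impl)
        return impl
    return wrap

@register_strategy("conv2d", "llvm")
def conv2d_generic():
    return "generic conv2d"

@register_strategy("conv2d", "llvm")
def conv2d_custom():
    return "my custom conv2d"  # must be registered to ever be chosen

def choose_impl(op, target):
    # A real strategy ranks candidates (e.g. by priority or tuning logs);
    # this sketch simply takes the most recently registered one.
    return STRATEGIES[(op, target)][-1]

print(choose_impl("conv2d", "llvm")())  # my custom conv2d
```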