[GitHub] [incubator-tvm] wweic merged pull request #4868: [doc][VM] Update the vm doc

2020-02-13 Thread GitBox
wweic merged pull request #4868: [doc][VM] Update the vm doc URL: https://github.com/apache/incubator-tvm/pull/4868 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub

[incubator-tvm] branch master updated (8d94587 -> a6c42b3)

2020-02-13 Thread wweic
This is an automated email from the ASF dual-hosted git repository. wweic pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 8d94587 Optimize x86 conv3d_ndhwc using data packing approach. (#4866) add c8e17dd fix vm doc add

[GitHub] [incubator-tvm] wweic commented on issue #4868: [doc][VM] Update the vm doc

2020-02-13 Thread GitBox
wweic commented on issue #4868: [doc][VM] Update the vm doc URL: https://github.com/apache/incubator-tvm/pull/4868#issuecomment-585608060 thanks @zhiics @tqchen

[GitHub] [incubator-tvm] masahi opened a new pull request #4874: [QNN] More doc fix on quantize and convolution

2020-02-13 Thread GitBox
masahi opened a new pull request #4874: [QNN] More doc fix on quantize and convolution URL: https://github.com/apache/incubator-tvm/pull/4874

[GitHub] [incubator-tvm] vizero1 opened a new issue #4875: Image preprocessing for darknet takes too long

2020-02-13 Thread GitBox
vizero1 opened a new issue #4875: Image preprocessing for darknet takes too long URL: https://github.com/apache/incubator-tvm/issues/4875 Hi, I was working on this tutorial https://docs.tvm.ai/tutorials/frontend/from_darknet.html#sphx-glr-tutorials-frontend-from-darknet-py and it se

[GitHub] [incubator-tvm] vinx13 commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-13 Thread GitBox
vinx13 commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378985967 ## File path: topi/tests/python/test_topi_relu.py ## @@ -20,11 +20,20 @@ import tvm import topi

[GitHub] [incubator-tvm] tqchen commented on issue #4875: Image preprocessing for darknet takes too long

2020-02-13 Thread GitBox
tqchen commented on issue #4875: Image preprocessing for darknet takes too long URL: https://github.com/apache/incubator-tvm/issues/4875#issuecomment-585876040 A PR is more than welcomed

[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-13 Thread GitBox
wpan11nv commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r379014124 ## File path: topi/tests/python/test_topi_relu.py ## @@ -20,11 +20,20 @@ import tvm import to

[GitHub] [incubator-tvm] wpan11nv opened a new pull request #4876: [CodeGen][CUDA] Fix issues in cuda codegen

2020-02-13 Thread GitBox
wpan11nv opened a new pull request #4876: [CodeGen][CUDA] Fix issues in cuda codegen URL: https://github.com/apache/incubator-tvm/pull/4876 - Do not emit __shared__ etc. as part of type for casting - Fix fp16 reduction kernels with compiler errors: "no operator "+" matches t

[GitHub] [incubator-tvm] wpan11nv commented on issue #4876: [CodeGen][CUDA] Fix issues in cuda codegen

2020-02-13 Thread GitBox
wpan11nv commented on issue #4876: [CodeGen][CUDA] Fix issues in cuda codegen URL: https://github.com/apache/incubator-tvm/pull/4876#issuecomment-585914125 This patch should fix the errors observed below (I did *not* verify, as I found no complete reproducer there). My own test works fine with

[GitHub] [incubator-tvm] tqchen commented on issue #4876: [CodeGen][CUDA] Fix issues in cuda codegen

2020-02-13 Thread GitBox
tqchen commented on issue #4876: [CodeGen][CUDA] Fix issues in cuda codegen URL: https://github.com/apache/incubator-tvm/pull/4876#issuecomment-585924576 cc @vinx13 @ZihengJiang please help to take a look

[GitHub] [incubator-tvm] anijain2305 commented on issue #3680: [TOPI] Update softmax compute and CPU schedule

2020-02-13 Thread GitBox
anijain2305 commented on issue #3680: [TOPI] Update softmax compute and CPU schedule URL: https://github.com/apache/incubator-tvm/pull/3680#issuecomment-585943014 Another suggestion - https://discuss.tvm.ai/t/softmax-sequence-of-relay-ops/5686 -

[GitHub] [incubator-tvm] masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution

2020-02-13 Thread GitBox
masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-585977753 @anijain2305 I find the usage of `channels` argument confusing and I think it is better to make `channels` a required argument. Other

[GitHub] [incubator-tvm] anijain2305 commented on issue #4874: [QNN] More doc fix on quantize and convolution

2020-02-13 Thread GitBox
anijain2305 commented on issue #4874: [QNN] More doc fix on quantize and convolution URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-585979392 Yes, I think the same problem exists with simple conv. We can make it a required argument, and change the parsers. Parsers shou

[GitHub] [incubator-tvm] kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-13 Thread GitBox
kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-586005848 Thank you for your valuable suggestions @tqchen @zhiics @FrozenGene ! I now changed the logic to try

[GitHub] [incubator-tvm] masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution

2020-02-13 Thread GitBox
masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-586012237 Tests passed, should be ready to go @anijain2305 @vinx13 @FrozenGene ---

[GitHub] [incubator-tvm] masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution

2020-02-13 Thread GitBox
masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-586020980 @anijain2305 do you think it also makes sense to make `units` param in qnn dense required? https://github.com/apache/incubator-tv

[GitHub] [incubator-tvm] tqchen opened a new pull request #4877: [REFACTOR][PY] Establish tvm.tir

2020-02-13 Thread GitBox
tqchen opened a new pull request #4877: [REFACTOR][PY] Establish tvm.tir URL: https://github.com/apache/incubator-tvm/pull/4877 - Move related files into the corresponding location as in C++ - Keep the top-level TVM API backward compatible to make minimum changes in topi ---

[GitHub] [incubator-tvm] masahi edited a comment on issue #4874: [QNN] More doc fix on quantize and convolution

2020-02-13 Thread GitBox
masahi edited a comment on issue #4874: [QNN] More doc fix on quantize and convolution URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-586020980 @anijain2305 do you think it also makes sense to make `units` param in qnn dense required? https://github.com/apache/incu

[GitHub] [incubator-tvm] tqchen commented on issue #4877: [REFACTOR][PY] Establish tvm.tir

2020-02-13 Thread GitBox
tqchen commented on issue #4877: [REFACTOR][PY] Establish tvm.tir URL: https://github.com/apache/incubator-tvm/pull/4877#issuecomment-586021509 cc @icemelon9 @ZihengJiang @yzhliu

[GitHub] [incubator-tvm] anijain2305 opened a new pull request #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops

2020-02-13 Thread GitBox
anijain2305 opened a new pull request #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops URL: https://github.com/apache/incubator-tvm/pull/4878 Discuss - https://discuss.tvm.ai/t/softmax-sequence-of-relay-ops/5686 @soiferj @yzhliu @kevinthesun Data sha
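
[Editorial note: the decomposition this PR proposes can be sketched outside of Relay. A numerically stable softmax is expressible as the op sequence max → subtract → exp → sum → divide; the NumPy sketch below is illustrative only, and the exact Relay ops and axis handling in #4878 may differ.]

```python
import numpy as np

def softmax_as_op_sequence(x, axis=-1):
    # Subtract the running max for numerical stability,
    # then exp / reduce-sum / divide -- each step maps to one op.
    m = np.max(x, axis=axis, keepdims=True)   # reduce max
    e = np.exp(x - m)                         # subtract + exp
    s = np.sum(e, axis=axis, keepdims=True)   # reduce sum
    return e / s                              # divide

x = np.random.randn(2, 5).astype("float32")
out = softmax_as_op_sequence(x)
assert np.allclose(out.sum(axis=-1), 1.0, atol=1e-5)
```

Expressing softmax this way lets generic elementwise and reduction schedules apply, which is the trade-off the schedule-removal discussion below is about.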

[GitHub] [incubator-tvm] soiferj opened a new pull request #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass

2020-02-13 Thread GitBox
soiferj opened a new pull request #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass URL: https://github.com/apache/incubator-tvm/pull/4879 This fixes a bug where call nodes are recursively processed more than once, potentially resulting in a composite function
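
[Editorial note: the class of bug described here (a recursive rewriter visiting a shared call node more than once) is commonly avoided by memoizing on node identity. The sketch below is a generic illustration, not the actual MergeComposite implementation.]

```python
class Node:
    """Minimal stand-in for an IR call node with child arguments."""
    def __init__(self, name, args=()):
        self.name, self.args = name, tuple(args)

def rewrite(node, memo=None):
    # Memoize on object identity so a subgraph shared by several
    # parents is rewritten exactly once, not once per reference.
    if memo is None:
        memo = {}
    if id(node) in memo:
        return memo[id(node)]
    new_node = Node(node.name, [rewrite(a, memo) for a in node.args])
    memo[id(node)] = new_node
    return new_node

shared = Node("conv")
root = Node("add", [shared, shared])
out = rewrite(root)
assert out.args[0] is out.args[1]  # the shared child was processed once
```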

[GitHub] [incubator-tvm] soiferj commented on issue #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass

2020-02-13 Thread GitBox
soiferj commented on issue #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass URL: https://github.com/apache/incubator-tvm/pull/4879#issuecomment-586044128 Sure, I'll work on adding a unit test.

[GitHub] [incubator-tvm] masahi commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops

2020-02-13 Thread GitBox
masahi commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586046390 please make sure you benchmark on 4d spatial inputs. In cuda softmax schedule, there is a special case handling

[GitHub] [incubator-tvm] anijain2305 commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops

2020-02-13 Thread GitBox
anijain2305 commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586047123 > Great job! > I'm not sure whether we want to keep softmax compute & schedule though. If someone build

[GitHub] [incubator-tvm] zhiics commented on issue #4459: [RUNTIME] Implement TVMDSOOp(TensorFlow custom op) for TVM runtime

2020-02-13 Thread GitBox
zhiics commented on issue #4459: [RUNTIME] Implement TVMDSOOp(TensorFlow custom op) for TVM runtime URL: https://github.com/apache/incubator-tvm/pull/4459#issuecomment-586048228 For python unit test, something similar in your RFC should be okay. We have TensorFlow in the CI. For gtest, it

[GitHub] [incubator-tvm] anijain2305 commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops

2020-02-13 Thread GitBox
anijain2305 commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586048850 > please make sure you benchmark on 4d spatial inputs. In cuda softmax schedule, there is a special case h

[GitHub] [incubator-tvm] zhiics commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops

2020-02-13 Thread GitBox
zhiics commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586049492 I think we may need to keep Softmax schedule as well. We can remove only if we treat them as batchnorm.

[GitHub] [incubator-tvm] masahi commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops

2020-02-13 Thread GitBox
masahi commented on issue #4878: [Relay][SimplifyInference] Express Softmax as sequence of Relay ops URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586049782 Correct, but look closer, it is for input dim more than 2d. See the PR below for background. https://

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-13 Thread GitBox
FrozenGene commented on a change in pull request #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r379218842 ## File path: src/relay/backend/build_module.cc ## @@ -437,28 +441,50 @

[GitHub] [incubator-tvm] masahi opened a new pull request #4880: [QNN] Add support for per channel weight scale in dense op

2020-02-13 Thread GitBox
masahi opened a new pull request #4880: [QNN] Add support for per channel weight scale in dense op URL: https://github.com/apache/incubator-tvm/pull/4880 QNN dense op does not accept a vector weight scale as argument at the moment, but this restriction can be fixed trivially. pleas
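
[Editorial note: for context, a per-channel weight scale means the requantization scale is a vector with one entry per output unit rather than a single scalar. The NumPy sketch below (zero points assumed zero; names and signatures are illustrative, not the QNN API) shows why lifting the restriction is cheap: broadcasting already covers both cases.]

```python
import numpy as np

def qnn_dense_dequant(x_q, w_q, x_scale, w_scale):
    # int8 inputs, int32 accumulation: acc[i, o] = sum_k x_q[i, k] * w_q[o, k]
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T
    # w_scale may be a scalar (per-tensor) or a vector of length
    # `units` (per-channel); NumPy broadcasting handles both.
    return acc.astype(np.float32) * (x_scale * np.asarray(w_scale))

x_q = np.array([[1, 2], [3, 4]], dtype=np.int8)
w_q = np.array([[1, 1], [2, 2], [3, 3]], dtype=np.int8)  # (units=3, in=2)
per_tensor = qnn_dense_dequant(x_q, w_q, 0.5, 0.1)
per_channel = qnn_dense_dequant(x_q, w_q, 0.5, [0.1, 0.2, 0.3])
```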

[GitHub] [incubator-tvm] FrozenGene commented on issue #4857: Windows Support for cpp_rpc

2020-02-13 Thread GitBox
FrozenGene commented on issue #4857: Windows Support for cpp_rpc URL: https://github.com/apache/incubator-tvm/pull/4857#issuecomment-586073161 Thanks @jmorrill for bringing C++ RPC to Windows and changing the build system to CMake! I may not have time to review it this week, but I have a glan

[GitHub] [incubator-tvm] masahi commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op

2020-02-13 Thread GitBox
masahi commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379234008 ## File path: python/tvm/relay/frontend/tflite.py ## @@ -982,13 +982,15 @@ def conver

[incubator-tvm] branch master updated (a6c42b3 -> b787ffa)

2020-02-13 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from a6c42b3 Update docs/dev/virtual_machine.rst add b787ffa [REFACTOR][PY] Establish tvm.tir No new revisi

[GitHub] [incubator-tvm] tqchen merged pull request #4877: [REFACTOR][PY] Establish tvm.tir

2020-02-13 Thread GitBox
tqchen merged pull request #4877: [REFACTOR][PY] Establish tvm.tir URL: https://github.com/apache/incubator-tvm/pull/4877

[GitHub] [incubator-tvm] tqchen commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-13 Thread GitBox
tqchen commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-586091707 Thanks @vinx13 @wpan11nv !

[GitHub] [incubator-tvm] tqchen merged pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-13 Thread GitBox
tqchen merged pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867

[incubator-tvm] branch master updated (b787ffa -> 7013fc9)

2020-02-13 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from b787ffa [REFACTOR][PY] Establish tvm.tir add 7013fc9 [TOPI][CUDA] Enable vectorization on fp16 type (#4

[incubator-tvm] branch master updated (7013fc9 -> 24c53a3)

2020-02-13 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 7013fc9 [TOPI][CUDA] Enable vectorization on fp16 type (#4867) add 24c53a3 [QNN] More doc fix on quanti

[GitHub] [incubator-tvm] tqchen merged pull request #4874: [QNN] More doc fix on quantize and convolution

2020-02-13 Thread GitBox
tqchen merged pull request #4874: [QNN] More doc fix on quantize and convolution URL: https://github.com/apache/incubator-tvm/pull/4874

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op

2020-02-13 Thread GitBox
anijain2305 commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379277612 ## File path: python/tvm/relay/frontend/tflite.py ## @@ -982,13 +982,15 @@ def c

[GitHub] [incubator-tvm] masahi commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op

2020-02-13 Thread GitBox
masahi commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379280705 ## File path: python/tvm/relay/frontend/tflite.py ## @@ -982,13 +982,15 @@ def conver

[GitHub] [incubator-tvm] masahi commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op

2020-02-13 Thread GitBox
masahi commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379286230 ## File path: python/tvm/relay/frontend/tflite.py ## @@ -982,13 +982,15 @@ def conver

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op

2020-02-13 Thread GitBox
FrozenGene commented on a change in pull request #4880: [QNN] Add support for per channel weight scale in dense op URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379286796 ## File path: python/tvm/relay/frontend/tflite.py ## @@ -982,13 +982,15 @@ def co

[GitHub] [incubator-tvm] masahi commented on issue #4790: Fast exponent

2020-02-13 Thread GitBox
masahi commented on issue #4790: Fast exponent URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-586136935 @tqchen please give an approval.