[GitHub] [incubator-tvm] kevinthesun commented on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
kevinthesun commented on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585575763 Thanks @alexgl-github @anijain2305

[GitHub] [incubator-tvm] kevinthesun merged pull request #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
kevinthesun merged pull request #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866

[incubator-tvm] branch master updated (70c6382 -> 8d94587)

2020-02-12 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository. kevinthesun pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 70c6382 [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess (#4543) add 8d94587 Optim

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4790: Fast exponent

2020-02-12 Thread GitBox
FrozenGene commented on a change in pull request #4790: Fast exponent URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r378637381 ## File path: topi/python/topi/math.py ## @@ -449,3 +449,19 @@ def reinterpret(x, dtype): The result. """ re

[GitHub] [incubator-tvm] FrozenGene commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
FrozenGene commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585527191 > I did try just commenting out the assert and it seems to work. However, I then ran into a new problem...

[GitHub] [incubator-tvm] FrozenGene edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
FrozenGene edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585527191 > I did try just commenting out the assert and it seems to work. However, I then ran into a new pro

[GitHub] [incubator-tvm] FrozenGene commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-02-12 Thread GitBox
FrozenGene commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-585513267 Thanks everyone, merged now

[GitHub] [incubator-tvm] FrozenGene merged pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-02-12 Thread GitBox
FrozenGene merged pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess URL: https://github.com/apache/incubator-tvm/pull/4543

[incubator-tvm] branch master updated (51a265a -> 70c6382)

2020-02-12 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository. zhaowu pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 51a265a [REFACTOR][PY][API-CHANGE] Establish tvm.target add 70c6382 [FRONTEND][TFLITE] Add support for

[GitHub] [incubator-tvm] alexgl-github commented on issue #4790: Fast exponent

2020-02-12 Thread GitBox
alexgl-github commented on issue #4790: Fast exponent URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585506768 > Right. I think `fast_exp` fits better with current naming style. @anijain2305 I've changed fastexp to fast_exp
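A minimal sketch of how the renamed op could be called from Python, assuming `topi.fast_exp` is available under the current `tvm.topi` namespace (not code from this PR):

```python
import numpy as np
import tvm
from tvm import te, topi

# Build and run an elementwise fast_exp kernel on CPU.
x = te.placeholder((1024,), dtype="float32", name="x")
y = topi.fast_exp(x)
s = te.create_schedule(y.op)
f = tvm.build(s, [x, y], target="llvm")

a = tvm.nd.array(np.random.uniform(-2, 2, 1024).astype("float32"))
b = tvm.nd.empty((1024,), "float32")
f(a, b)
# fast_exp trades a little accuracy for speed, so compare loosely.
np.testing.assert_allclose(b.numpy(), np.exp(a.numpy()), rtol=1e-4, atol=1e-4)
```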

[GitHub] [incubator-tvm] masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585505734 I don't know what the plan is for updating topi, but that is out of scope for this PR anyway. I think it's fine to disable the mobilenet v2 te

[GitHub] [incubator-tvm] alexgl-github edited a comment on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
alexgl-github edited a comment on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585502820 > Thank you for this work! It would be great if you can provide benchmarking data comparing tvm conv3d perfor

[GitHub] [incubator-tvm] alexgl-github edited a comment on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
alexgl-github edited a comment on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585502820 > Thank you for this work! It would be great if you can provide benchmarking data comparing tvm conv3d perfor

[GitHub] [incubator-tvm] alexgl-github edited a comment on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
alexgl-github edited a comment on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585502820 > Thank you for this work! It would be great if you can provide benchmarking data comparing tvm conv3d perfor

[GitHub] [incubator-tvm] alexgl-github commented on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
alexgl-github commented on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585502820 > Thank you for this work! It would be great if you can provide benchmarking data comparing tvm conv3d performance V

[GitHub] [incubator-tvm] wpan11nv commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-12 Thread GitBox
wpan11nv commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-585497119 Thanks all for the suggestions! Tests updated.
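A minimal sketch of what vectorizing an fp16 axis looks like in TE (this is not the PR's CUDA schedule, just an illustration on the LLVM backend):

```python
import tvm
from tvm import te

# Mark the inner loop of an fp16 elementwise op as vectorized; the lowered
# TIR then uses float16x2 loads/stores for that axis.
n = 1024
A = te.placeholder((n,), dtype="float16", name="A")
B = te.compute((n,), lambda i: A[i] + tvm.tir.const(1, "float16"), name="B")
s = te.create_schedule(B.op)
outer, inner = s[B].split(B.op.axis[0], factor=2)
s[B].vectorize(inner)
print(tvm.lower(s, [A, B], simple_mode=True))
```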

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh

2020-02-12 Thread GitBox
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378594559 ## File path: src/relay/backend/build_module.cc ## @@ -307,6 +307,10 @@ class RelayBuildModule

[incubator-tvm] branch master updated (79cfab0 -> 51a265a)

2020-02-12 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 79cfab0 [JVM] Update the runtime PackedFunc for module add 51a265a [REFACTOR][PY][API-CHANGE] Establish

[GitHub] [incubator-tvm] tqchen merged pull request #4872: [REFACTOR][PY][API-CHANGE] Establish tvm.target

2020-02-12 Thread GitBox
tqchen merged pull request #4872: [REFACTOR][PY][API-CHANGE] Establish tvm.target URL: https://github.com/apache/incubator-tvm/pull/4872

[GitHub] [incubator-tvm] anijain2305 commented on issue #4790: Fast exponent

2020-02-12 Thread GitBox
anijain2305 commented on issue #4790: Fast exponent URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585489041 Right. I think `fast_exp` fits better with current naming style.

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh

2020-02-12 Thread GitBox
anijain2305 commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378589877 ## File path: src/relay/backend/build_module.cc ## @@ -307,6 +307,10 @@ class RelayBuildM

[GitHub] [incubator-tvm] tqchen commented on issue #4790: Fast exponent

2020-02-12 Thread GitBox
tqchen commented on issue #4790: Fast exponent URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585487279 Overall looks OK; it would be great if we can decide on a consistent naming convention. In this case, we can have `fastexp` vs `fast_exp`

[GitHub] [incubator-tvm] anijain2305 commented on issue #4873: [Relay][FastMath] Relay pass to use fast exp/tanh

2020-02-12 Thread GitBox
anijain2305 commented on issue #4873: [Relay][FastMath] Relay pass to use fast exp/tanh URL: https://github.com/apache/incubator-tvm/pull/4873#issuecomment-585487055 Thanks @zhiics for the quick review. I will incorporate your comments once the parent PR gets merged in.

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh

2020-02-12 Thread GitBox
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378587946 ## File path: src/relay/pass/fast_math.cc ## @@ -0,0 +1,79 @@ +/* + * Licensed to the Apache S

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh

2020-02-12 Thread GitBox
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378588075 ## File path: src/relay/pass/fast_math.cc ## @@ -0,0 +1,79 @@ +/* + * Licensed to the Apache S

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh

2020-02-12 Thread GitBox
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378587789 ## File path: src/relay/backend/build_module.cc ## @@ -307,6 +307,10 @@ class RelayBuildModule

[GitHub] [incubator-tvm] anijain2305 commented on issue #4790: Fast exponent

2020-02-12 Thread GitBox
anijain2305 commented on issue #4790: Fast exponent URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585485172 @tqchen @FrozenGene Can you please check if the changes you requested are addressed?

[GitHub] [incubator-tvm] anijain2305 opened a new pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh

2020-02-12 Thread GitBox
anijain2305 opened a new pull request #4873: [Relay][FastMath] Relay pass to use fast exp/tanh URL: https://github.com/apache/incubator-tvm/pull/4873 As titled, dependent on the following PR - https://github.com/apache/incubator-tvm/pull/4790 @alexgl-github @zhiics @masahi
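A minimal sketch of the proposed pass, assuming it is exposed as `relay.transform.FastMath()` (as it is in current TVM), which rewrites exp/tanh calls into their fast approximate counterparts:

```python
import tvm
from tvm import relay

# Build a tiny Relay function containing exp and tanh.
x = relay.var("x", shape=(1, 16), dtype="float32")
f = relay.Function([x], relay.tanh(relay.exp(x)))
mod = tvm.IRModule.from_expr(f)

# The pass is registered at opt_level 4, so run it under that level.
with tvm.transform.PassContext(opt_level=4):
    mod = relay.transform.FastMath()(mod)
print(mod)  # exp/tanh now appear as fast_exp/fast_tanh
```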

[GitHub] [incubator-tvm] alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585414848 > Do you think you can fix the padding issue? I haven't looked into what is going on, we may need to update cuda topi schedule. >

[GitHub] [incubator-tvm] zhiics edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
zhiics edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585464161 For Q2, a simpler way would be feeding the first argument of CSourceModuleCreate with ";" instead of em

[GitHub] [incubator-tvm] alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585414848 > Do you think you can fix the padding issue? I haven't looked into what is going on, we may need to update cuda topi schedule. >

[GitHub] [incubator-tvm] zhiics edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
zhiics edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585464161 For Q2, a simpler way would be feeding it with ";" instead of empty string. But LLVM module should be f

[GitHub] [incubator-tvm] zhiics commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
zhiics commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585464161 For Q2, a simpler way would be feeding it with ";" instead of an empty string. But LLVM module should be faster for c

[GitHub] [incubator-tvm] tqchen edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
tqchen edited a comment on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585462552 The reason that LLVMModule simplifies compilation is that it remembers the correct target triple. We

[GitHub] [incubator-tvm] tqchen commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
tqchen commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585462552 The reason that LLVMModule simplifies compilation is that it remembers the correct target triple. We can tr
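A minimal sketch of that point, using the current tvm.te API and assuming an LLVM build that includes the AArch64 backend: a module built with the LLVM backend carries its target triple in the emitted IR, so downstream export does not have to guess it.

```python
import tvm
from tvm import te

# Build a trivial kernel for an AArch64 triple via the LLVM backend.
n = 16
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2, name="B")
s = te.create_schedule(B.op)
mod = tvm.build(s, [A, B], target="llvm -mtriple=aarch64-linux-gnu")

# The generated LLVM IR records the triple the module was built for.
ir = mod.get_source("ll")
print([line for line in ir.splitlines() if line.startswith("target triple")])
```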

[GitHub] [incubator-tvm] kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585454809 @mbarrett97 Thank you for your detailed explanation! I kind of understand what's going on here. For Q1,

[GitHub] [incubator-tvm] alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585414848 > Do you think you can fix the padding issue? I haven't looked into what is going on, we may need to update cuda topi schedule. >

[GitHub] [incubator-tvm] alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585414848 > Do you think you can fix the padding issue? I haven't looked into what is going on, we may need to update cuda topi schedule. >

[GitHub] [incubator-tvm] mbarrett97 commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
mbarrett97 commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585449674 I should probably explain my 'test case' :) Apologies if you know this all already. I'm using the external

[GitHub] [incubator-tvm] anijain2305 commented on issue #4790: Fast exponent

2020-02-12 Thread GitBox
anijain2305 commented on issue #4790: Fast exponent URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585443948 Can this get in? I will work on Relay changes.

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
anijain2305 commented on a change in pull request #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#discussion_r378538482 ## File path: topi/python/topi/x86/conv3d.py ## @@ -17,66 +17,359 @@ # pylint: dis
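A rough numpy illustration (not this PR's code) of the channel-packing idea behind the x86 conv3d_ndhwc schedule: split the channel axis into blocks so the innermost dimension is small, contiguous, and easy to vectorize. The block size of 16 used here is an arbitrary example value.

```python
import numpy as np

# Pack NDHWC data into N, C_outer, D, H, W, c_inner blocks.
N, D, H, W, C = 1, 8, 16, 16, 64
c_block = 16
data = np.random.rand(N, D, H, W, C).astype("float32")
packed = data.reshape(N, D, H, W, C // c_block, c_block).transpose(0, 4, 1, 2, 3, 5)
print(packed.shape)  # (1, 4, 8, 16, 16, 16): N, C_outer, D, H, W, c_inner
```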

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
anijain2305 commented on a change in pull request #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#discussion_r378488343 ## File path: topi/python/topi/nn/util.py ## @@ -47,6 +47,36 @@ def infer_pad(data,

[GitHub] [incubator-tvm] kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585438528 Hi @mbarrett97, sorry, I have some questions about the case you've mentioned. 1. As you sugg

[GitHub] [incubator-tvm] tqchen commented on issue #4872: [REFACTOR][PY][API-CHANGE] Establish tvm.target

2020-02-12 Thread GitBox
tqchen commented on issue #4872: [REFACTOR][PY][API-CHANGE] Establish tvm.target URL: https://github.com/apache/incubator-tvm/pull/4872#issuecomment-585433502 cc @ZihengJiang @icemelon9 @yzhliu @merrymercy @gussmith23

[GitHub] [incubator-tvm] tqchen opened a new pull request #4872: [REFACTOR][PY][API-CHANGE] Establish tvm.target

2020-02-12 Thread GitBox
tqchen opened a new pull request #4872: [REFACTOR][PY][API-CHANGE] Establish tvm.target URL: https://github.com/apache/incubator-tvm/pull/4872 Move the related target modules into tvm.target. API change: - tvm.target.current_target -> tvm.target.Target.current - tvm.datatype ->
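A minimal sketch of the renamed API, using the naming in current TVM: the old `tvm.target.current_target()` spelling becomes `tvm.target.Target.current()`.

```python
import tvm

# Entering a Target as a context makes it the "current" target.
with tvm.target.Target("llvm"):
    tgt = tvm.target.Target.current()
    print(tgt)  # e.g. llvm -keys=cpu ...
```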

[GitHub] [incubator-tvm] tqchen merged pull request #4871: [JVM] Update the runtime PackedFunc for module

2020-02-12 Thread GitBox
tqchen merged pull request #4871: [JVM] Update the runtime PackedFunc for module URL: https://github.com/apache/incubator-tvm/pull/4871

[incubator-tvm] branch master updated (aaf62e4 -> 79cfab0)

2020-02-12 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from aaf62e4 Fix optimize add 79cfab0 [JVM] Update the runtime PackedFunc for module No new revisions were

[GitHub] [incubator-tvm] Laurawly commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-12 Thread GitBox
Laurawly commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378516859 ## File path: topi/tests/python/test_topi_tensor.py ## @@ -84,18 +84,41 @@ def check_device(dev

[GitHub] [incubator-tvm] Laurawly commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-12 Thread GitBox
Laurawly commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378516074 ## File path: topi/tests/python/test_topi_relu.py ## @@ -87,12 +87,12 @@ def _prelu_numpy(x, W)

[GitHub] [incubator-tvm] alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585414848 > Do you think you can fix the padding issue? I haven't looked into what is going on, we may need to update cuda topi schedule. > > I

[GitHub] [incubator-tvm] masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585410718 Do you think you can fix the padding issue? I haven't looked into what is going on, we may need to update cuda topi schedule. It is ok

[GitHub] [incubator-tvm] vinx13 commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-12 Thread GitBox
vinx13 commented on a change in pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378503237 ## File path: topi/tests/python/test_topi_tensor.py ## @@ -84,18 +84,41 @@ def check_device(devic

[GitHub] [incubator-tvm] tqchen commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-12 Thread GitBox
tqchen commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-585395086 @vinx13 @Laurawly please help to review the PR

[GitHub] [incubator-tvm] alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585393595 > @alexwong Reverting the commit #4787 fixed the mobilenet issue for me. Yes, that does seem to be the issue. Commenting out the call

[GitHub] [incubator-tvm] paddyhoran commented on issue #2885: [SGX] Use Fortanix EDP

2020-02-12 Thread GitBox
paddyhoran commented on issue #2885: [SGX] Use Fortanix EDP URL: https://github.com/apache/incubator-tvm/pull/2885#issuecomment-585389205 I just thought about this yesterday. I'm not sure I have time to contribute much but development on Rust can't continue without this getting merged as i

[GitHub] [incubator-tvm] tqchen opened a new pull request #4871: [JVM] Update the runtime PackedFunc for module

2020-02-12 Thread GitBox
tqchen opened a new pull request #4871: [JVM] Update the runtime PackedFunc for module URL: https://github.com/apache/incubator-tvm/pull/4871 for changes in https://github.com/apache/incubator-tvm/pull/4837/ cc @yzhliu @kparzysz-quic

[GitHub] [incubator-tvm] tqchen commented on issue #4837: [REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update

2020-02-12 Thread GitBox
tqchen commented on issue #4837: [REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update URL: https://github.com/apache/incubator-tvm/pull/4837#issuecomment-585373355 @kparzysz-quic Thanks for the catch, here is a patch https://github.com/apache/incubator-tvm/pull/4871

[GitHub] [incubator-tvm] comaniac commented on issue #4870: [AutoTVM] Support range in index based tuners

2020-02-12 Thread GitBox
comaniac commented on issue #4870: [AutoTVM] Support range in index based tuners URL: https://github.com/apache/incubator-tvm/pull/4870#issuecomment-585357287 @merrymercy could you help to review this PR? Thanks!

[GitHub] [incubator-tvm] alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585355953 > hmm it's weird. After I reboot my machine, alexnet and vgg test both passed on cuda. Do you have accuracy issues with alexnet and vgg loca

[GitHub] [incubator-tvm] wpan11nv commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type

2020-02-12 Thread GitBox
wpan11nv commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-585355592 Kindly ping. Thanks!

[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-12 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r378438728 ## File path: tests/python/frontend/pytorch/test_forward.py ## @@ -0,0 +1,766 @@ +# Licensed to the Apac

[GitHub] [incubator-tvm] comaniac opened a new pull request #4870: [AutoTVM] Support range in index based tuners

2020-02-12 Thread GitBox
comaniac opened a new pull request #4870: [AutoTVM] Support range in index based tuners URL: https://github.com/apache/incubator-tvm/pull/4870 This PR includes the following changes in order to let the grid search and random tuners accept an index range. This is beneficial for distributed tunin
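A hedged sketch of restricting an index-based tuner to part of its config space; the `range_idx` argument name is an assumption based on this PR's description ("accept an index range"), not a confirmed API.

```python
import tvm
from tvm import te, autotvm

# A tiny tunable template with a single knob, just to create a config space.
@autotvm.template("demo/vecadd")
def vecadd(n):
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute((n,), lambda i: A[i] + B[i], name="C")
    s = te.create_schedule(C.op)
    cfg = autotvm.get_config()
    cfg.define_knob("factor", [1, 2, 4, 8, 16, 32])
    xo, xi = s[C].split(C.op.axis[0], factor=cfg["factor"].val)
    return s, [A, B, C]

task = autotvm.task.create("demo/vecadd", args=(4096,), target="llvm")
# range_idx is hypothetical here: restrict the tuner to config indices 0..2.
tuner = autotvm.tuner.GridSearchTuner(task, range_idx=(0, 2))
tuner.tune(
    n_trial=3,
    measure_option=autotvm.measure_option(
        builder=autotvm.LocalBuilder(), runner=autotvm.LocalRunner(number=3)
    ),
)
```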

[GitHub] [incubator-tvm] kparzysz-quic commented on issue #4837: [REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update

2020-02-12 Thread GitBox
kparzysz-quic commented on issue #4837: [REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update URL: https://github.com/apache/incubator-tvm/pull/4837#issuecomment-585348595 The file `jvm/core/src/main/java/org/apache/tvm/Module.java` still uses old names. This broke

[GitHub] [incubator-tvm] kevinthesun commented on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach.

2020-02-12 Thread GitBox
kevinthesun commented on issue #4866: Optimize x86 conv3d_ndhwc using data packing approach. URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585332792 Thank you for this work! It would be great if you can provide benchmarking data comparing tvm conv3d performance VS ex

[GitHub] [incubator-tvm] tqchen commented on issue #2885: [SGX] Use Fortanix EDP

2020-02-12 Thread GitBox
tqchen commented on issue #2885: [SGX] Use Fortanix EDP URL: https://github.com/apache/incubator-tvm/pull/2885#issuecomment-585305001 ping @nhynes

[GitHub] [incubator-tvm] tqchen commented on issue #4845: [DEV] TVM v0.7 Roadmap

2020-02-12 Thread GitBox
tqchen commented on issue #4845: [DEV] TVM v0.7 Roadmap URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-585290701 @yangjunpro Perhaps it's worth starting a new thread in the discuss forum to discuss MLIR-related topics. We certainly would love some proposals about inter

[GitHub] [incubator-tvm] yangjunpro commented on issue #4845: [DEV] TVM v0.7 Roadmap

2020-02-12 Thread GitBox
yangjunpro commented on issue #4845: [DEV] TVM v0.7 Roadmap URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-585288898 Is there any plan to integrate TVM as a dialect into MLIR, so that other components based on MLIR can leverage the capabilities of TVM, such as high perfo

[GitHub] [incubator-tvm] tqchen commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-02-12 Thread GitBox
tqchen commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-585287746 @FrozenGene please https://docs.tvm.ai/contribute/code_review.html#approve-and-request-changes-explicitly

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4868: [doc][VM] Update the vm doc

2020-02-12 Thread GitBox
zhiics commented on a change in pull request #4868: [doc][VM] Update the vm doc URL: https://github.com/apache/incubator-tvm/pull/4868#discussion_r378356522 ## File path: docs/dev/virtual_machine.rst ## @@ -284,40 +297,76 @@ Dispatch Loop ~ A critical piece of

[incubator-tvm] branch master updated (a566161 -> aaf62e4)

2020-02-12 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from a566161 [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862) add 176ffe5 [

[GitHub] [incubator-tvm] tqchen merged pull request #4869: [DOCS][PY] Add docs for tvm.ir

2020-02-12 Thread GitBox
tqchen merged pull request #4869: [DOCS][PY] Add docs for tvm.ir URL: https://github.com/apache/incubator-tvm/pull/4869

[GitHub] [incubator-tvm] kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
kumasento commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585243629 @mbarrett97 Thanks, I just noticed that the base of my PR is not the latest commit. I will update it soon.

[GitHub] [incubator-tvm] u99127 commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-02-12 Thread GitBox
u99127 commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-585199370 All changes have been done? Anything left to merge this in?

[GitHub] [incubator-tvm] kumasento commented on issue #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
kumasento commented on issue #4847: Use dummy func when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585197212 > I don't think the empty CSourceModule method works. There's a check in source_module.cc that fails when you try and c

[GitHub] [incubator-tvm] mbarrett97 commented on issue #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-12 Thread GitBox
mbarrett97 commented on issue #4847: Use dummy func when no lowered_funcs exists in Relay mod URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585180105 I don't think the empty CSourceModule method works. There's a check in source_module.cc that fails when you try and cr

[GitHub] [incubator-tvm] masahi edited a comment on issue #4816: [TFLite] Using real image for QNN testing.

2020-02-12 Thread GitBox
masahi edited a comment on issue #4816: [TFLite] Using real image for QNN testing. URL: https://github.com/apache/incubator-tvm/pull/4816#issuecomment-585153972 What happens if the zero point is a vector, as in per-channel quantization? What should the pad value be?

[GitHub] [incubator-tvm] masahi commented on issue #4816: [TFLite] Using real image for QNN testing.

2020-02-12 Thread GitBox
masahi commented on issue #4816: [TFLite] Using real image for QNN testing. URL: https://github.com/apache/incubator-tvm/pull/4816#issuecomment-585153972 What happens if the zero point is a vector, as in per-channel quantization? What should the pad value be?
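A small numpy illustration of the question above: with per-channel quantization the zero point is a vector (one value per channel), so a single scalar pad value cannot represent "zero" for every channel; the pad constant would have to vary along the channel axis.

```python
import numpy as np

# Hypothetical per-channel zero points for a 3-channel uint8 tensor.
zero_points = np.array([3, 128, 250], dtype=np.uint8)
x = np.random.randint(0, 255, size=(2, 2, 3), dtype=np.uint8)  # HWC layout

# Pad each channel with its own zero point, not one shared scalar.
padded = np.stack(
    [np.pad(x[..., c], 1, constant_values=zero_points[c]) for c in range(3)],
    axis=-1,
)
print(padded.shape)  # (4, 4, 3)
```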