kevinthesun commented on issue #4866: Optimize x86 conv3d_ndhwc using data
packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585575763
Thanks @alexgl-github @anijain2305
kevinthesun merged pull request #4866: Optimize x86 conv3d_ndhwc using data
packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go
to the specific comment.
This is an automated email from the ASF dual-hosted git repository.
kevinthesun pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 70c6382 [FRONTEND][TFLITE] Add support for
TFLite_Detection_PostProcess (#4543)
add 8d94587 Optimize x86 conv3d_ndhwc using data packing approach. (#4866)
FrozenGene commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r378637381
##
File path: topi/python/topi/math.py
##
@@ -449,3 +449,19 @@ def reinterpret(x, dtype):
The result.
"""
re
FrozenGene commented on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585527191
> I did try just commenting out the assert and it seems to work. However, I
then ran into a new problem...
FrozenGene commented on issue #4543: [FRONTEND][TFLITE] Add support for
TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-585513267
Thanks everyone, merged now
FrozenGene merged pull request #4543: [FRONTEND][TFLITE] Add support for
TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543
zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 51a265a [REFACTOR][PY][API-CHANGE] Establish tvm.target
add 70c6382 [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess (#4543)
alexgl-github commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585506768
> Right. I think `fast_exp` fits better with current naming style.
@anijain2305
I've changed fastexp to fast_exp
---
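For context on the "Fast exponent" thread: PR #4790 adds a `fast_exp` intrinsic to TOPI. The standard technique it is based on, range reduction plus a short polynomial, can be sketched in plain Python (this is an illustrative approximation, not the PR's actual code or coefficients):

```python
import math

def fast_exp(x, clamp=88.0):
    """Approximate exp(x) via range reduction and a degree-5 polynomial."""
    # Clamp to avoid overflow in the scaling step.
    x = max(-clamp, min(clamp, x))
    # Range-reduce: exp(x) = 2**k * exp(r), with k = round(x / ln 2), |r| <= ln2/2.
    ln2 = math.log(2.0)
    k = round(x / ln2)
    r = x - k * ln2
    # Truncated Taylor series for exp(r) on the small reduced range.
    p = 1.0 + r * (1.0 + r * (0.5 + r * (1.0 / 6 + r * (1.0 / 24 + r / 120))))
    return math.ldexp(p, k)  # p * 2**k
```

On the reduced range the polynomial error is well below 1e-5 relative, which is why such approximations trade a libm call for a handful of fused multiply-adds.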
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585505734
I don't know what the plan is for updating topi, but that is out of scope
for this PR anyway. I think it's fine to disable the mobilenet v2 test
alexgl-github commented on issue #4866: Optimize x86 conv3d_ndhwc using data
packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585502820
> Thank you for this work! It would be great if you can provide benchmarking
data comparing tvm conv3d performance V
wpan11nv commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16
type
URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-585497119
Thanks all for the suggestions! Tests updated.
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay
pass to use fast exp/tanh
URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378594559
##
File path: src/relay/backend/build_module.cc
##
@@ -307,6 +307,10 @@ class RelayBuildModule
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 79cfab0 [JVM] Update the runtime PackedFunc for module
add 51a265a [REFACTOR][PY][API-CHANGE] Establish tvm.target (#4872)
tqchen merged pull request #4872: [REFACTOR][PY][API-CHANGE] Establish
tvm.target
URL: https://github.com/apache/incubator-tvm/pull/4872
anijain2305 commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585489041
Right. I think `fast_exp` fits better with current naming style.
anijain2305 commented on a change in pull request #4873: [Relay][FastMath]
Relay pass to use fast exp/tanh
URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378589877
##
File path: src/relay/backend/build_module.cc
##
@@ -307,6 +307,10 @@ class RelayBuildModule
tqchen commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585487279
Overall looks OK, it would be great if we can decide a consistent naming
convention. In this case, we can have `fastexp` vs `fast_exp`
-
anijain2305 commented on issue #4873: [Relay][FastMath] Relay pass to use fast
exp/tanh
URL: https://github.com/apache/incubator-tvm/pull/4873#issuecomment-585487055
Thanks @zhiics for quick review. I will incorporate your comments once the
parent PR gets merged in.
--
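The pass in #4873 rewrites `exp`/`tanh` calls to their fast variants at the Relay level. The general shape of such an op-substitution pass can be sketched with a toy expression tree (a hypothetical mini-IR for illustration, not Relay's actual classes or mutator API):

```python
# Hypothetical mini-IR: a call node holding an op name and argument exprs.
class Call:
    def __init__(self, op, args):
        self.op, self.args = op, args
    def __repr__(self):
        return f"{self.op}({', '.join(map(repr, self.args))})"

# Map of ops that have fast approximations.
FAST_OPS = {"exp": "fast_exp", "tanh": "fast_tanh"}

def fast_math(expr):
    """Rewrite known ops to their fast counterparts, bottom-up."""
    if isinstance(expr, Call):
        args = [fast_math(a) for a in expr.args]
        return Call(FAST_OPS.get(expr.op, expr.op), args)
    return expr  # leaves (variables / constants) pass through unchanged
```

In the real pass this substitution is only legal under a relaxed-precision flag, since the fast variants trade accuracy for speed.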
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay
pass to use fast exp/tanh
URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378587946
##
File path: src/relay/pass/fast_math.cc
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache S
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay
pass to use fast exp/tanh
URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378588075
##
File path: src/relay/pass/fast_math.cc
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache S
zhiics commented on a change in pull request #4873: [Relay][FastMath] Relay
pass to use fast exp/tanh
URL: https://github.com/apache/incubator-tvm/pull/4873#discussion_r378587789
##
File path: src/relay/backend/build_module.cc
##
@@ -307,6 +307,10 @@ class RelayBuildModule
anijain2305 commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585485172
@tqchen @FrozenGene Can you please check if the changes you requested are
addressed?
anijain2305 opened a new pull request #4873: [Relay][FastMath] Relay pass to
use fast exp/tanh
URL: https://github.com/apache/incubator-tvm/pull/4873
As Title, dependent on the following PR -
https://github.com/apache/incubator-tvm/pull/4790
@alexgl-github @zhiics @masahi
zhiics edited a comment on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585464161
For Q2, a simpler way would be feeding the first argument of
CSourceModuleCreate with ";" instead of an empty string.
tqchen commented on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585462552
The reason that LLVMModule simplifies compilation is because it remembers
the correct target triple. We can tr
kumasento commented on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585454809
@mbarrett97 Thank you for your detailed explanation! I kind of understand
what's going on here.
For Q1,
mbarrett97 commented on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585449674
I should probably explain my 'test case' :) Apologies if you know this all
already. I'm using the external
anijain2305 commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-585443948
Can this get in? I will work on Relay changes.
anijain2305 commented on a change in pull request #4866: Optimize x86
conv3d_ndhwc using data packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866#discussion_r378538482
##
File path: topi/python/topi/x86/conv3d.py
##
@@ -17,66 +17,359 @@
# pylint: dis
anijain2305 commented on a change in pull request #4866: Optimize x86
conv3d_ndhwc using data packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866#discussion_r378488343
##
File path: topi/python/topi/nn/util.py
##
@@ -47,6 +47,36 @@ def infer_pad(data,
kumasento commented on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585438528
Hi @mbarrett97
Sorry, I have some questions about the case you've mentioned.
1. As you sugg
tqchen commented on issue #4872: [REFACTOR][PY][API-CHANGE] Establish tvm.target
URL: https://github.com/apache/incubator-tvm/pull/4872#issuecomment-585433502
cc @ZihengJiang @icemelon9 @yzhliu @merrymercy @gussmith23
tqchen opened a new pull request #4872: [REFACTOR][PY][API-CHANGE] Establish
tvm.target
URL: https://github.com/apache/incubator-tvm/pull/4872
Move the related target modules into tvm.target.
API change:
- tvm.target.current_target -> tvm.target.Target.current
- tvm.datatype ->
tqchen merged pull request #4871: [JVM] Update the runtime PackedFunc for module
URL: https://github.com/apache/incubator-tvm/pull/4871
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from aaf62e4 Fix optimize
add 79cfab0 [JVM] Update the runtime PackedFunc for module
No new revisions were added by this update.
Laurawly commented on a change in pull request #4867: [TOPI][CUDA] Enable
vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378516859
##
File path: topi/tests/python/test_topi_tensor.py
##
@@ -84,18 +84,41 @@ def check_device(dev
Laurawly commented on a change in pull request #4867: [TOPI][CUDA] Enable
vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378516074
##
File path: topi/tests/python/test_topi_relu.py
##
@@ -87,12 +87,12 @@ def _prelu_numpy(x, W)
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585414848
> Do you think you can fix the padding issue? I haven't looked into what is
going on, we may need to update cuda topi schedule.
>
> I
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585410718
Do you think you can fix the padding issue? I haven't looked into what is
going on, we may need to update cuda topi schedule.
It is ok
vinx13 commented on a change in pull request #4867: [TOPI][CUDA] Enable
vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378503237
##
File path: topi/tests/python/test_topi_tensor.py
##
@@ -84,18 +84,41 @@ def check_device(devic
tqchen commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-585395086
@vinx13 @Laurawly please help to review the PR
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585393595
> @alexwong Reverting the commit #4787 fixed the mobilenet issue for me.
Yes, that does seem to be the issue. Commenting out the call
paddyhoran commented on issue #2885: [SGX] Use Fortanix EDP
URL: https://github.com/apache/incubator-tvm/pull/2885#issuecomment-585389205
I just thought about this yesterday. I'm not sure I have time to contribute
much but development on Rust can't continue without this getting merged as i
tqchen opened a new pull request #4871: [JVM] Update the runtime PackedFunc for
module
URL: https://github.com/apache/incubator-tvm/pull/4871
for changes in https://github.com/apache/incubator-tvm/pull/4837/
cc @yzhliu @kparzysz-quic
tqchen commented on issue #4837: [REFACTOR][PY][API-Change] Polish tvm.runtime,
tvm.runtime.module API update
URL: https://github.com/apache/incubator-tvm/pull/4837#issuecomment-585373355
@kparzysz-quic Thanks for the catch, here is a patch
https://github.com/apache/incubator-tvm/pull/4871
comaniac commented on issue #4870: [AutoTVM] Support range in index based tuners
URL: https://github.com/apache/incubator-tvm/pull/4870#issuecomment-585357287
@merrymercy could you help to review this PR? Thanks!
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-585355953
> hmm it's weird. After I reboot my machine, alexnet and vgg tests both
passed on cuda. Do you have accuracy issues with alexnet and vgg locally?
wpan11nv commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16
type
URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-585355592
Kindly ping. Thanks!
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r378438728
##
File path: tests/python/frontend/pytorch/test_forward.py
##
@@ -0,0 +1,766 @@
+# Licensed to the Apac
comaniac opened a new pull request #4870: [AutoTVM] Support range in index
based tuners
URL: https://github.com/apache/incubator-tvm/pull/4870
This PR includes the following changes in order to let grid search and
random tuner accept an index range. This is beneficial for distributed tuning
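The idea behind #4870, letting an index-based tuner walk only a sub-range of the configuration space so several machines can split the work, can be sketched outside AutoTVM (a hypothetical class for illustration, not AutoTVM's actual tuner API):

```python
import itertools

class RangeGridTuner:
    """Toy index-based grid tuner restricted to config indices [begin, end)."""

    def __init__(self, space_size, begin=0, end=None):
        end = space_size if end is None else min(end, space_size)
        self._it = iter(range(begin, end))

    def next_batch(self, batch_size):
        # Return up to batch_size not-yet-visited config indices.
        return list(itertools.islice(self._it, batch_size))
```

Two workers tuning the same 1000-entry space could then be handed [0, 500) and [500, 1000) and proceed independently.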
kparzysz-quic commented on issue #4837: [REFACTOR][PY][API-Change] Polish
tvm.runtime, tvm.runtime.module API update
URL: https://github.com/apache/incubator-tvm/pull/4837#issuecomment-585348595
The file `jvm/core/src/main/java/org/apache/tvm/Module.java` still uses old
names. This broke
kevinthesun commented on issue #4866: Optimize x86 conv3d_ndhwc using data
packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-585332792
Thank you for this work! It would be great if you can provide benchmarking
data comparing tvm conv3d performance VS ex
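For readers outside the thread, "data packing" here means splitting the channel axis of an NDHWC tensor into fixed-size blocks so the innermost loop vectorizes cleanly on x86. A toy pure-Python illustration of the layout transform (nested lists standing in for a real tensor; not the PR's TOPI implementation):

```python
def pack_channels(data, c_block):
    """Split the innermost (channel) axis of a nested-list NDHWC tensor into
    blocks of c_block, yielding an NDHWCc-style packed layout."""
    if isinstance(data[0], list):
        # Recurse through the outer N/D/H/W axes.
        return [pack_channels(sub, c_block) for sub in data]
    assert len(data) % c_block == 0, "channel count must divide evenly"
    return [data[i:i + c_block] for i in range(0, len(data), c_block)]
```

After packing, consecutive elements of one block sit contiguously in memory, which is what makes the inner reduction amenable to AVX vectorization.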
tqchen commented on issue #2885: [SGX] Use Fortanix EDP
URL: https://github.com/apache/incubator-tvm/pull/2885#issuecomment-585305001
ping @nhynes
tqchen commented on issue #4845: [DEV] TVM v0.7 Roadmap
URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-585290701
@yangjunpro Perhaps it is worth starting a new thread in the discuss forum to
discuss MLIR-related topics. We certainly would love some proposals about
inter
yangjunpro commented on issue #4845: [DEV] TVM v0.7 Roadmap
URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-585288898
Is there any plan to integrate TVM as a dialect into MLIR?
So other components based on MLIR can leverage the capability of TVM, such
as high perfo
tqchen commented on issue #4543: [FRONTEND][TFLITE] Add support for
TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-585287746
@FrozenGene please
https://docs.tvm.ai/contribute/code_review.html#approve-and-request-changes-explicitly
zhiics commented on a change in pull request #4868: [doc][VM] Update the vm doc
URL: https://github.com/apache/incubator-tvm/pull/4868#discussion_r378356522
##
File path: docs/dev/virtual_machine.rst
##
@@ -284,40 +297,76 @@ Dispatch Loop
~
A critical piece of
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from a566161 [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate
corresponding files (#4862)
add 176ffe5 [DOCS][PY] Add docs for tvm.ir (#4869)
tqchen merged pull request #4869: [DOCS][PY] Add docs for tvm.ir
URL: https://github.com/apache/incubator-tvm/pull/4869
kumasento commented on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585243629
@mbarrett97 Thanks, I just noticed that the base of my PR is not the latest
commit. I will update it soon.
u99127 commented on issue #4543: [FRONTEND][TFLITE] Add support for
TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-585199370
All changes have been done? Anything left to merge this in?
---
kumasento commented on issue #4847: Use dummy func when no lowered_funcs exists
in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585197212
> I don't think the empty CSourceModule method works. There's a check in
source_module.cc that fails when you try and c
mbarrett97 commented on issue #4847: Use dummy func when no lowered_funcs
exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-585180105
I don't think the empty CSourceModule method works. There's a check in
source_module.cc that fails when you try and cr
masahi edited a comment on issue #4816: [TFLite] Using real image for QNN
testing.
URL: https://github.com/apache/incubator-tvm/pull/4816#issuecomment-585153972
What happens if the zero point is a vector, as in per channel quantization?
What should the pad value be?
--