This is an automated email from the ASF dual-hosted git repository.
lmzheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 902e21b [TFLite] Using real image for QNN testing. (#4816)
add 31c2f97 [RUNTIME] Fix memory leakage of TVMByteArray (#4856)
merrymercy merged pull request #4856: [RUNTIME] Fix memory leakage of
TVMByteArray
URL: https://github.com/apache/incubator-tvm/pull/4856
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377545174
##
File path: tests/python/relay/test_pass_merge_composite.py
##
@@ -0,0 +1,158 @@
+# Licensed to the Apa
mbarrett97 opened a new pull request #4864: [Relay] Ignore Primitive functions
in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864
Primitive functions are not to be modified by passes and as such should be
ignored. This commit makes Visitor patterns ignore Primitive function
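The PR body is truncated here, so as a hedged sketch of the visitor change it describes, with toy stand-in types rather than the real Relay `ExprVisitor`:

```cpp
#include <memory>
#include <vector>

// Toy stand-ins for Relay IR nodes (hypothetical; the real classes in
// tvm/relay carry many more fields and node kinds).
struct Expr {
  bool is_primitive_func = false;               // models the Primitive attribute
  std::vector<std::shared_ptr<Expr>> children;  // body, call args, ...
};

// A visitor that refuses to descend into primitive functions, mirroring
// the behaviour the PR proposes for the Visitor patterns.
struct SkippingVisitor {
  int visited = 0;
  void Visit(const Expr& e) {
    ++visited;
    if (e.is_primitive_func) return;  // leave primitive function bodies untouched
    for (const auto& c : e.children) Visit(*c);
  }
};
```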
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584690004
the mobilenet issue is a new one since torch update?
masahi edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584690004
the mobilenet issue is a new one since torch update?
`CUDA_ERROR_INVALID_PTX` seems a codegen issue (too many threads per block, too
m
masahi edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584690004
the mobilenet issue is a new one since torch update?
`CUDA_ERROR_INVALID_PTX` seems a codegen issue (too many threads/shared mem per
b
yongfeng-nv commented on issue #4651: Tensor Expression Debug Display (TEDD)
URL: https://github.com/apache/incubator-tvm/pull/4651#issuecomment-584702130
The PR is clear of building/testing failures, after qualifying the tutorial
with @Hzfengsy's help.
I am attaching the three static i
tqchen merged pull request #4859: [LLVM] Explicit llvm::StringRef to
std::string conversion
URL: https://github.com/apache/incubator-tvm/pull/4859
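For readers outside the PR: newer LLVM makes `llvm::StringRef`'s conversion to `std::string` explicit, so implicit conversions at call sites stop compiling. `std::string_view` behaves the same way, which lets the pattern be sketched without LLVM headers (`ToStdString` is an illustrative helper, not TVM code; `llvm::StringRef` also offers `.str()`):

```cpp
#include <string>
#include <string_view>

// std::string_view standing in for llvm::StringRef: both convert to
// std::string only explicitly, so the conversion must be spelled out.
std::string ToStdString(std::string_view ref) {
  // std::string s = ref;   // would not compile: the conversion is explicit
  return std::string(ref);  // explicit conversion, the form the PR adopts
}
```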
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 31c2f97 [RUNTIME] Fix memory leakage of TVMByteArray (#4856)
add 91d2f5a [LLVM] Explicit llvm::StringRef to std::string conversion (#4859)
tqchen edited a comment on issue #4863: vta+nvdla integration
URL: https://github.com/apache/incubator-tvm/pull/4863#issuecomment-584732108
Thanks for the contribution, please create an RFC to describe what you are
proposing, features and implementation plans. I also noticed that there is a
tqchen commented on issue #4863: vta+nvdla integration
URL: https://github.com/apache/incubator-tvm/pull/4863#issuecomment-584732108
Thanks for the contribution, please create an RFC to describe what you are
proposing, features and implementation plans. I also noticed that there is a
refer
tqchen commented on issue #4858: Error in Build NNPACK @mbp
URL: https://github.com/apache/incubator-tvm/issues/4858#issuecomment-584732891
Please open a new troubleshooting thread on https://discuss.tvm.ai/. Note
that NNPACK is completely optional, and you do not have to build with NNPACK
tqchen edited a comment on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584733547
I feel that this is a change that should be brought to a sub-class of a
visitor, instead of the visitor itself. Given that it
tqchen commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584733547
I feel this is a change that should be brought to a sub-class of a visitor,
instead of the visitor itself. Given that it is quite spec
tqchen merged pull request #4861: [TVM] const auto p -> const auto& p
URL: https://github.com/apache/incubator-tvm/pull/4861
tqchen commented on issue #4861: [TVM] const auto p -> const auto& p
URL: https://github.com/apache/incubator-tvm/pull/4861#issuecomment-584734150
Thanks @hlu1 !
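The one-character change in #4861 matters because `const auto p` in a range-for copies each element, while `const auto& p` binds to the element in place. A small self-contained check (`count_aliased` is an illustrative helper, not TVM code):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Counts loop iterations whose variable aliases the container's own
// element. With `const auto& p` every iteration aliases (no copies);
// with `const auto p` none would, since p is a fresh copy each time.
int count_aliased(const std::vector<std::string>& names) {
  int aliased = 0;
  std::size_t i = 0;
  for (const auto& p : names) {
    if (&p == &names[i++]) ++aliased;  // same address => bound by reference
  }
  return aliased;
}
```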
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 91d2f5a [LLVM] Explicit llvm::StringRef to std::string conversion
(#4859)
add c42bb6c [TVM] const auto p -> const auto &p (#4861)
tqchen commented on issue #4628: [Object] Add String container
URL: https://github.com/apache/incubator-tvm/pull/4628#issuecomment-584734563
@wweic please update as per comments :)
tqchen edited a comment on issue #4628: [Object] Add String container
URL: https://github.com/apache/incubator-tvm/pull/4628#issuecomment-584734563
@wweic please update per comments :)
tqchen closed issue #4858: Error in Build NNPACK @mbp
URL: https://github.com/apache/incubator-tvm/issues/4858
mbarrett97 commented on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584737964
I agree it's not ideal to modify the behaviour of the Visitors directly.
However, if I sub-class to produce a 'PrimitiveSkipping
mbarrett97 commented on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584738329
@comaniac @zhiics Can you review please? I'm wondering whether I've
interpreted the intended behaviour of primitive functions co
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377780629
##
File path: python/tvm/runtime/vm.py
##
@@ -0,0 +1,357 @@
+# Licensed to t
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377780840
##
File path: python/tvm/runtime/vm.py
##
@@ -0,0 +1,357 @@
+# Licensed to t
tqchen commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584751569
A potentially better approach here would be to lift the primitive functions
into the module-level, so that the function pass can safe
tqchen commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584753379
It would also be great to understand how things would break the primitive
function(e.g. was that due to inlining), and whether liftin
tqchen edited a comment on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584753379
It would also be great to understand how things would break in the case of
external functions(e.g. was that due to inlining),
alexgl-github commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-584757028
@masahi @anijain2305 @FrozenGene Would you mind reviewing again?
mbarrett97 commented on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584758674
An example to illustrate why this breaks is that we have external codegens
that want to act directly on the relay 'qnn dialect'.
comaniac commented on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584759811
Specifically for removing all functions from main, would this help?
https://github.com/apache/incubator-tvm/pull/4847
This PR s
masahi commented on issue #4838: [Frontend, ONNX] Add Resize op converter
URL: https://github.com/apache/incubator-tvm/pull/4838#issuecomment-584765362
Can somebody merge this? @tqchen @icemelon9 @yzhliu
icemelon9 merged pull request #4838: [Frontend, ONNX] Add Resize op converter
URL: https://github.com/apache/incubator-tvm/pull/4838
icemelon9 commented on issue #4838: [Frontend, ONNX] Add Resize op converter
URL: https://github.com/apache/incubator-tvm/pull/4838#issuecomment-584767853
Thanks @masahi @jwfromm
haichen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from c42bb6c [TVM] const auto p -> const auto &p (#4861)
add 4fce513 add resize op converter (#4838)
No new revisions were added by this update.
zhiics commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584769286
I think there are at least two solutions to prevent Relay passes from
touching the external functions:
- S0: lifting the external
zhiics edited a comment on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584769286
I think there are at least two solutions to prevent Relay passes from
touching the external functions:
- S0: lifting the e
tqchen commented on issue #4862: [REFACTOR][PY] establish tvm.ir, migrate base,
expr, type, adt
URL: https://github.com/apache/incubator-tvm/pull/4862#issuecomment-584776027
cc @zhiics @wweic @yzhliu @icemelon9 @jroesch @MarisaKirisame @ZihengJiang
zhiics commented on a change in pull request #4862: [REFACTOR][PY] establish
tvm.ir, migrate base, expr, type, adt
URL: https://github.com/apache/incubator-tvm/pull/4862#discussion_r377813960
##
File path: python/tvm/ir/base.py
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache
zhiics commented on a change in pull request #4862: [REFACTOR][PY] establish
tvm.ir, migrate base, expr, type, adt
URL: https://github.com/apache/incubator-tvm/pull/4862#discussion_r377817042
##
File path: python/tvm/ir/type.py
##
@@ -0,0 +1,205 @@
+# Licensed to the Apach
zhiics commented on a change in pull request #4862: [REFACTOR][PY] establish
tvm.ir, migrate base, expr, type, adt
URL: https://github.com/apache/incubator-tvm/pull/4862#discussion_r377813574
##
File path: python/tvm/ir/base.py
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache
zhiics commented on a change in pull request #4862: [REFACTOR][PY] establish
tvm.ir, migrate base, expr, type, adt
URL: https://github.com/apache/incubator-tvm/pull/4862#discussion_r377810691
##
File path: python/tvm/ir/adt.py
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache
tqchen commented on a change in pull request #4862: [REFACTOR][PY] establish
tvm.ir, migrate base, expr, type, adt
URL: https://github.com/apache/incubator-tvm/pull/4862#discussion_r377831779
##
File path: python/tvm/ir/base.py
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache
zhiics commented on a change in pull request #4862: [REFACTOR][PY] establish
tvm.ir, migrate base, expr, type, adt, transform
URL: https://github.com/apache/incubator-tvm/pull/4862#discussion_r377835396
##
File path: python/tvm/ir/base.py
##
@@ -0,0 +1,86 @@
+# Licensed to
jwfromm commented on a change in pull request #4771: [Relay] Added Merge
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377835990
##
File path: tests/python/relay/test_pass_merge_composite.py
##
@@ -0,0 +1,158 @@
+# Licensed to the Apache
tqchen commented on a change in pull request #4862: [REFACTOR][PY] establish
tvm.ir, migrate base, expr, type, adt, transform
URL: https://github.com/apache/incubator-tvm/pull/4862#discussion_r377837198
##
File path: python/tvm/ir/base.py
##
@@ -0,0 +1,86 @@
+# Licensed to
yzhliu commented on issue #3670: [RFC] AlterOpLayout Pass Refactoring
URL: https://github.com/apache/incubator-tvm/issues/3670#issuecomment-584813744
@tqchen I have limited progress at this moment. Might be able to revisit in
a month or so.
tqchen merged pull request #4855: [Refactor] move vm.py under runtime and adt
to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855
tqchen commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584825131
I think S0 is a better approach here
tqchen commented on issue #4855: [Refactor] move vm.py under runtime and adt to
runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#issuecomment-584825270
Thanks @zhiics !
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 4fce513 add resize op converter (#4838)
add 502cf26 [Refactor] move vm.py under runtime and adt to
runtime.container.py (#4855)
tqchen commented on issue #3670: [RFC] AlterOpLayout Pass Refactoring
URL: https://github.com/apache/incubator-tvm/issues/3670#issuecomment-584825626
no problem, just to check to see if the issue can be closed
hlu1 opened a new pull request #4865: [Topi] Missing header
URL: https://github.com/apache/incubator-tvm/pull/4865
Fix compilation error with clang:
```
tvm/tvm/topi/include/topi/x86/default.h:58:14: error: no type named
'AutoInlineInjective' in namespace 'tvm::te'
tvm::te::Aut
```
zhiics closed issue #4854: [REFACTOR] Move python vm runtime into runtime/vm.py
URL: https://github.com/apache/incubator-tvm/issues/4854
tqchen commented on issue #4862: [REFACTOR][PY][API-CHANGE] establish tvm.ir,
migrate corresponding relay files
URL: https://github.com/apache/incubator-tvm/pull/4862#issuecomment-584832133
Updated the PR and comment, @zhiics please help to take another look
tqchen commented on issue #4690: WIP: Flake8: Add undefined names test
URL: https://github.com/apache/incubator-tvm/pull/4690#issuecomment-584835748
close for now per the reasons stated above
tqchen closed pull request #4690: WIP: Flake8: Add undefined names test
URL: https://github.com/apache/incubator-tvm/pull/4690
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584855049
> the mobilenet issue is a new one since torch update?
`CUDA_ERROR_INVALID_PTX` seems a codegen issue (too many threads/shared mem per
bloc
kparzysz-quic commented on a change in pull request #4859: [LLVM] Explicit
llvm::StringRef to std::string conversion
URL: https://github.com/apache/incubator-tvm/pull/4859#discussion_r377915813
##
File path: src/target/llvm/codegen_llvm.cc
##
@@ -88,7 +88,7 @@ void CodeGen
alexgl-github opened a new pull request #4866: Optimize x86 conv3d_ndhwc using
data packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866
Add tuneable conv3d_ndhwc schedule
alexgl-github commented on issue #4866: Optimize x86 conv3d_ndhwc using data
packing approach.
URL: https://github.com/apache/incubator-tvm/pull/4866#issuecomment-584868090
@anijain2305 Please take a look
tqchen merged pull request #4750: Fix onnx import bugs
URL: https://github.com/apache/incubator-tvm/pull/4750
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 502cf26 [Refactor] move vm.py under runtime and adt to
runtime.container.py (#4855)
add 54349ec Fix onnx import bugs (#4750)
tqchen commented on issue #4750: Fix onnx import bugs
URL: https://github.com/apache/incubator-tvm/pull/4750#issuecomment-584868960
Merging it for now to avoid staleness, given that the code is properly
reviewed. @kice @jwfromm it would be great to add a followup PR to cover the case
kumasento commented on a change in pull request #4847: Use dummy func when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377919071
##
File path: src/relay/backend/build_module.cc
##
@@ -438,13 +442,19 @@ class Relay
kumasento commented on issue #4847: Use dummy func when no lowered_funcs exists
in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-584869309
Thank you guys for all your kind reviews! @mbarrett97 @FrozenGene @tqchen
@zhiics
I've updated this PR to fulfi
kumasento commented on a change in pull request #4847: Use dummy func when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377919235
##
File path: src/relay/backend/build_module.cc
##
@@ -438,13 +442,19 @@ class Relay
kumasento commented on issue #4748: [RELAY] Support RelayBuild with Only
Constants
URL: https://github.com/apache/incubator-tvm/issues/4748#issuecomment-584871414
Thank you @zhiics @FrozenGene @tqchen I've changed the current
implementation to use `CSourceModule`. It does not cost much and
zhiics commented on a change in pull request #4847: Use dummy func when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377923131
##
File path: src/relay/backend/build_module.cc
##
@@ -438,13 +439,14 @@ class RelayBui
zhiics commented on a change in pull request #4847: Use dummy func when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377921458
##
File path: src/relay/backend/build_module.cc
##
@@ -28,15 +28,20 @@
#include
#inc
zhiics commented on a change in pull request #4847: Use dummy func when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377926067
##
File path: src/relay/backend/build_module.cc
##
@@ -438,13 +439,14 @@ class RelayBui
zhiics commented on a change in pull request #4847: Use dummy func when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377926439
##
File path: src/relay/backend/build_module.cc
##
@@ -438,13 +439,14 @@ class RelayBui
hlu1 commented on a change in pull request #4859: [LLVM] Explicit
llvm::StringRef to std::string conversion
URL: https://github.com/apache/incubator-tvm/pull/4859#discussion_r377926869
##
File path: src/target/llvm/codegen_llvm.cc
##
@@ -88,7 +88,7 @@ void CodeGenLLVM::Ini
kumasento commented on a change in pull request #4847: Use dummy func when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377927901
##
File path: src/relay/backend/build_module.cc
##
@@ -438,13 +439,14 @@ class Relay
mbarrett97 commented on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584880777
I'll investigate the global function approach in the context of the graph
runtime which is what I'm using at the moment. In thin
masahi commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584883789
Primitive functions are also created during the op fusion pass.
https://github.com/apache/incubator-tvm/blob/e4d817d4c63b1f9881e5085e
mbarrett97 commented on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584886942
It would prevent the contents of the fused functions from being further
modified (except by passes which specifically exempt the
zhiics commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-58480
hmm, it should be okay for the primitive functions created in fusion as we
actually annotate external functions with `kCompiler` and
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584855049
> the mobilenet issue is a new one since torch update?
`CUDA_ERROR_INVALID_PTX` seems a codegen issue (too many threads/shared mem pe
masahi commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584891015
I don't know what exactly a "Primitive" function is supposed to mean either,
but functions created during op fusion and partitioning have
tqchen merged pull request #4865: [Topi] Missing header
URL: https://github.com/apache/incubator-tvm/pull/4865
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 54349ec Fix onnx import bugs (#4750)
add 15df204 [Topi] Missing header (#4865)
No new revisions were added by this update.
tqchen commented on issue #4864: [Relay] Ignore Primitive functions in Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584897668
Given the current discussion, I still think it is better to close this PR
for now and take the lift to global approach.
While
tqchen edited a comment on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584897668
Given the current discussion, I still think it is better to close this PR
for now and take the lift to global approach.
tqchen edited a comment on issue #4864: [Relay] Ignore Primitive functions in
Visitors
URL: https://github.com/apache/incubator-tvm/pull/4864#issuecomment-584897668
Given the current discussion, I still think it is better to close this PR
for now and take the lift to global approach(sorry
wpan11nv opened a new pull request #4867: [TOPI][CUDA] Enable vectorization on
fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867
- This allows better utilization of the memory bandwidth
- Note that not all cases are vectorized for fp16 datatype. For
instance, when the
anijain2305 commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r377957864
##
File path: topi/include/topi/elemwise.h
##
@@ -360,5 +360,85 @@ inline Tensor full_like(const Tensor& x,
}, name, tag);
anijain2305 commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r377958380
##
File path: topi/include/topi/elemwise.h
##
@@ -360,5 +360,85 @@ inline Tensor full_like(const Tensor& x,
}, name, tag);
anijain2305 commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r377959246
##
File path: topi/include/topi/elemwise.h
##
@@ -360,5 +360,85 @@ inline Tensor full_like(const Tensor& x,
}, name, tag);
zhiics opened a new pull request #4868: [doc][VM] Update the vm doc
URL: https://github.com/apache/incubator-tvm/pull/4868
https://discuss.tvm.ai/t/relay-vm-from-c/5623
The VM doc is stale, this PR updates it.
@icemelon9 we need a followup to update the dynamic shape handling s
zhiics commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-584925474
I have some silly questions: when should we switch to the fast_exp since it
is in topi? Do we expect users to select it? Does this mean that this op is
only availabl
tqchen commented on issue #4862: [REFACTOR][PY][API-CHANGE] establish tvm.ir,
migrate corresponding relay files
URL: https://github.com/apache/incubator-tvm/pull/4862#issuecomment-584946965
The Base class has been moved to tvm.ir.RelayExpr to reflect the changes in
the C++ side, so now Exp
tqchen edited a comment on issue #4862: [REFACTOR][PY][API-CHANGE] establish
tvm.ir, migrate corresponding relay files
URL: https://github.com/apache/incubator-tvm/pull/4862#issuecomment-584946965
@MarisaKirisame The Base class has been moved to tvm.ir.RelayExpr to reflect
the changes in t
alexgl-github commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-584949689
> I have some silly questions: when should we switch to the fast_exp since
it is in topi? Do we expect users to select it? Does this mean that this op is
only
tqchen commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-584951383
Please request reviews from reviewers
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584981315
hmm it's weird. After I rebooted my machine, the alexnet and vgg tests both passed
on cuda. Do you have accuracy issues with alexnet and vgg locally?
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584985662
@alexwong I have another finding. I get `CUDA_ERROR_INVALID_PTX` even with
my `torchscript-to-tvm` repo if I use the latest TVM. Previously I
masahi edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584985662
@alexwong I have another finding. I get `CUDA_ERROR_INVALID_PTX` even with
my `torchscript-to-tvm` repo if I use the latest TVM. Previo
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584991294
@alexwong Reverting the commit
https://github.com/apache/incubator-tvm/pull/4787 fixed the mobilenet issue for
me.
masahi commented on a change in pull request #4497: [Relay] Add a PyTorch to
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r378019344
##
File path: tests/python/frontend/pytorch/test_forward.py
##
@@ -0,0 +1,766 @@
+# Licensed to the Apache