FinnWeng commented on issue #4262: [RELAY][Bug] 'name_hint' AttributeError issue when converting TensorFlow to TVM
URL: https://github.com/apache/incubator-tvm/issues/4262#issuecomment-550188080
Great!
Thanks for the prompt response. May I ask when the fixed version will be released?
yongwww commented on issue #4262: [RELAY][Bug] 'name_hint' AttributeError issue when converting TensorFlow to TVM
URL: https://github.com/apache/incubator-tvm/issues/4262#issuecomment-550185668
We have seen this issue recently. @kevinthesun has a local fix for a similar issue.
KimBioInfoStudio commented on issue #4235: [CMake] Improve parameter 'ANTLR4' to avoid CMake ignoring the 'space'
URL: https://github.com/apache/incubator-tvm/pull/4235#issuecomment-550168689
> Hi
>
> This PR cannot pass the CI build process due to the following issue, but I am not
FrozenGene edited a comment on issue #1756: [RUNTIME][RPC]C++ RPC Server
URL: https://github.com/apache/incubator-tvm/pull/1756#issuecomment-550119753
> We should keep cpp_rpc's dependencies in a limited region, then fix these cpp files using
FrozenGene commented on issue #1756: [RUNTIME][RPC]C++ RPC Server
URL: https://github.com/apache/incubator-tvm/pull/1756#issuecomment-550119753
> We should keep cpp_rpc's dependencies in a limited region, then fix these cpp files using
snowolfhawk commented on issue #1756: [RUNTIME][RPC]C++ RPC Server
URL: https://github.com/apache/incubator-tvm/pull/1756#issuecomment-550116962
@yagnasrinath
Currently I have applied this PR on ARMv8 and have not found any problems.
But as in my reply above, this cpp_rpc depends on
cchung100m edited a comment on issue #4235: [CMake] Improve parameter 'ANTLR4' to avoid CMake ignoring the 'space'
URL: https://github.com/apache/incubator-tvm/pull/4235#issuecomment-550097774
Hi
This PR cannot pass the CI build process due to the following issue, but I am not sure
jroesch commented on a change in pull request #4258: [WIP][TVM] Bring Your Own
Codegen to TVM
URL: https://github.com/apache/incubator-tvm/pull/4258#discussion_r342893692
##
File path: include/tvm/runtime/vm.h
##
@@ -286,6 +298,15 @@ struct Instruction {
*/
static
tqchen commented on issue #4261: DepthWise Transposed Conv error
URL: https://github.com/apache/incubator-tvm/issues/4261#issuecomment-550113671
Thanks for reporting the problem. It would be great if you could open a new troubleshooting thread on https://discuss.tvm.ai/ where the community
tqchen closed issue #4261: DepthWise Transposed Conv error
URL: https://github.com/apache/incubator-tvm/issues/4261
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub
CoinCheung opened a new issue #4261: DepthWise Transposed Conv error
URL: https://github.com/apache/incubator-tvm/issues/4261
A short version of my code is:
```python
import onnx
import torch
import torch.nn as nn
import tvm
import tvm.relay as relay
class
```
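The snippet above is cut off before the module definition. For context, the output shape of a depthwise transposed convolution follows the standard ConvTranspose2d size formula; a minimal pure-Python sketch (illustrative only, not reconstructed from the issue):

```python
# Hypothetical sketch: output-size arithmetic for a depthwise
# (groups == channels) transposed convolution, following the
# standard ConvTranspose2d formula. Names are illustrative only.

def conv_transpose2d_out_size(in_size, kernel, stride=1, padding=0,
                              output_padding=0, dilation=1):
    """Output spatial size of a 2D transposed convolution (one axis)."""
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel - 1) + output_padding + 1)

# Depthwise case: groups equals the channel count, so each input
# channel is deconvolved independently and channels are preserved.
channels, h, w = 8, 16, 16
out_h = conv_transpose2d_out_size(h, kernel=4, stride=2, padding=1)
out_w = conv_transpose2d_out_size(w, kernel=4, stride=2, padding=1)
print((channels, out_h, out_w))  # (8, 32, 32)
```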
optima2005 commented on issue #4165: [FRONTEND] Operator BlockGrad is not
supported in frontend MXNet.
URL: https://github.com/apache/incubator-tvm/issues/4165#issuecomment-550104426
Hi, @Edwardmark,
I found that the operator BlockGrad has been implemented in the current MXNet frontend. It is
cchung100m commented on issue #4235: [CMake] Improve parameter 'ANTLR4' to avoid CMake ignoring the 'space'
URL: https://github.com/apache/incubator-tvm/pull/4235#issuecomment-550097774
Hi
This PR cannot pass the CI build process due to the following issue, but I am not sure
jwfromm commented on a change in pull request #4242: [AutoTVM] Add batch_matmul
to tunable operations
URL: https://github.com/apache/incubator-tvm/pull/4242#discussion_r342877940
##
File path: topi/python/topi/x86/batch_matmul.py
##
@@ -18,43 +18,70 @@
"""x86
comaniac commented on a change in pull request #4242: [AutoTVM] Add
batch_matmul to tunable operations
URL: https://github.com/apache/incubator-tvm/pull/4242#discussion_r342869421
##
File path: topi/python/topi/x86/batch_matmul.py
##
@@ -18,43 +18,70 @@
"""x86
jwfromm edited a comment on issue #4242: [AutoTVM] Add batch_matmul to tunable
operations
URL: https://github.com/apache/incubator-tvm/pull/4242#issuecomment-550086640
@icemelon9, autotuning seems to yield speedups around 20% faster than the
base configuration on a 32 core CPU. Again,
jwfromm commented on issue #4242: [AutoTVM] Add batch_matmul to tunable
operations
URL: https://github.com/apache/incubator-tvm/pull/4242#issuecomment-550086640
@icemelon9, autotuning seems to yield a speedup of around 20% over the base configuration on a 32-core CPU. Again,
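For context, batch_matmul is a batched matrix product; a minimal pure-Python reference of its semantics (an illustrative sketch under the common (batch, M, K) x (batch, K, N) layout, not TOPI's implementation):

```python
# Illustrative pure-Python reference for batch matrix multiplication,
# the operation the PR makes tunable. Not TOPI code; shapes follow
# the common (batch, M, K) x (batch, K, N) -> (batch, M, N) layout.

def batch_matmul(a, b):
    """a: list of MxK matrices, b: list of KxN matrices (same batch)."""
    assert len(a) == len(b)
    out = []
    for x, y in zip(a, b):
        m, k, n = len(x), len(y), len(y[0])
        assert len(x[0]) == k
        out.append([[sum(x[i][t] * y[t][j] for t in range(k))
                     for j in range(n)] for i in range(m)])
    return out

a = [[[1, 2], [3, 4]]]          # batch = 1, one 2x2 matrix
b = [[[5, 6], [7, 8]]]
print(batch_matmul(a, b))       # [[[19, 22], [43, 50]]]
```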
jwfromm commented on a change in pull request #4242: [AutoTVM] Add batch_matmul
to tunable operations
URL: https://github.com/apache/incubator-tvm/pull/4242#discussion_r342866103
##
File path: topi/python/topi/x86/batch_matmul.py
##
@@ -18,43 +18,70 @@
"""x86
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from aae5cde workaround typing.Deque import error for Python 3.5 (#4254)
add 86b844b [DOCS] Update link loc
tqchen merged pull request #4257: [DOCS] Link to new repo loc
URL: https://github.com/apache/incubator-tvm/pull/4257
tqchen closed issue #4256: Install from Source doc still says to clone dmlc/tvm
URL: https://github.com/apache/incubator-tvm/issues/4256
kevinthesun commented on issue #4260: [TOPI] Fix bug in Winograd on CUDA
URL: https://github.com/apache/incubator-tvm/pull/4260#issuecomment-550068734
@cbalint13
zhiics commented on issue #4258: [WIP][TVM] Bring Your Own Codegen to TVM
URL: https://github.com/apache/incubator-tvm/pull/4258#issuecomment-550065063
@tqchen Yes, we are doing something similar. We generate the C APIs directly
and compile them into a .so file so that the runtime module
comaniac opened a new pull request #4260: [TOPI] Fix bug in Winograd on CUDA
URL: https://github.com/apache/incubator-tvm/pull/4260
Several topics [1, 2, 3] on the discuss forum mention that conv2d fails to pass shape checking in the runtime after it has been tuned by AutoTVM.
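For reference, the runtime shape check in question compares the spatial size the graph expects against what the compiled kernel produces; the expected size follows the standard conv2d formula (a hedged sketch of that arithmetic, not the fix itself):

```python
# Illustrative sketch: the standard conv2d output-size formula that a
# runtime shape check effectively enforces. Names are examples only.

def conv2d_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    """Standard conv2d output spatial size for one axis."""
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# A runtime shape check amounts to comparing the size the graph
# expects against the size the compiled kernel actually produces.
expected = conv2d_out_size(56, kernel=3, stride=1, padding=1)
print(expected)  # 56; a mis-scheduled kernel would fail a check like this
```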
tqchen commented on issue #4258: [WIP][TVM] Bring Your Own Codegen to TVM
URL: https://github.com/apache/incubator-tvm/pull/4258#issuecomment-550046151
To be specific, if we want to make use of the shared library, what I would do is generate the redirection code (like the current
comaniac commented on issue #4258: [WIP][TVM] Bring Your Own Codegen to TVM
URL: https://github.com/apache/incubator-tvm/pull/4258#issuecomment-550044429
I agree that our base module has many similarities to the DSOModule. Maybe we can consider basing it directly on the DSOModule and keeping
tqchen opened a new issue #4259: [DEV][DRAFT] TVM v0.6 Release candidate
URL: https://github.com/apache/incubator-tvm/issues/4259
Dear Community, thanks to everyone's effort in the past few months. This is
a proposal to do a v0.6 release.
This release will be managed by the TVM
zhiics commented on issue #4258: [WIP][TVM] Bring Your Own Codegen to TVM
URL: https://github.com/apache/incubator-tvm/pull/4258#issuecomment-550039110
@tqchen Thanks for the comment :) We actually tried something similar to the DSOModule you mentioned here. @comaniac could you share a
tqchen commented on issue #4258: [WIP][TVM] Bring Your Own Codegen to TVM
URL: https://github.com/apache/incubator-tvm/pull/4258#issuecomment-550030642
Thanks for the PR.
It would be great if you can propose a separate PR for the
```runtime/contrib``` support. Given that the
yzhliu commented on a change in pull request #4257: [DOCS] Link to new repo loc
URL: https://github.com/apache/incubator-tvm/pull/4257#discussion_r342800939
##
File path: jvm/pom.xml
##
@@ -22,7 +22,7 @@
scm:git:g...@github.com:dmlc/tvm.git
yzhliu commented on a change in pull request #4257: [DOCS] Link to new repo loc
URL: https://github.com/apache/incubator-tvm/pull/4257#discussion_r342800282
##
File path: docs/deploy/android.md
##
@@ -38,5 +38,5 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will
yzhliu commented on a change in pull request #4257: [DOCS] Link to new repo loc
URL: https://github.com/apache/incubator-tvm/pull/4257#discussion_r342800731
##
File path: jvm/native/src/main/native/ml_dmlc_tvm_native_c_api.cc
##
@@ -249,9 +249,9 @@ extern "C" int
yzhliu commented on a change in pull request #4257: [DOCS] Link to new repo loc
URL: https://github.com/apache/incubator-tvm/pull/4257#discussion_r342800638
##
File path: jvm/native/src/main/native/jni_helper_func.h
##
@@ -206,7 +206,7 @@ jobject tvmRetValueToJava(JNIEnv
zhiics opened a new pull request #4258: [WIP][TVM] Bring Your Own Codegen to TVM
URL: https://github.com/apache/incubator-tvm/pull/4258
This is a WIP that enables different backends and/or hardware vendors to
bring their own codegen tools to TVM. This is the collaboration between
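The high-level idea can be sketched as a registry mapping annotated subgraphs to external compilers. This toy illustration uses entirely hypothetical names and only stands in for the Relay-level mechanism in the PR:

```python
# Toy sketch of the "bring your own codegen" idea: a registry maps a
# backend name to a compile function, and annotated subgraphs are
# dispatched to it. All names here are hypothetical illustrations.

CODEGEN_REGISTRY = {}

def register_codegen(name):
    def deco(fn):
        CODEGEN_REGISTRY[name] = fn
        return fn
    return deco

@register_codegen("my_accelerator")
def compile_subgraph(subgraph):
    # A real backend would emit C source or a serialized engine here.
    return "compiled({})".format(subgraph)

def build(graph, annotations):
    # Subgraphs annotated for an external backend go to its codegen;
    # everything else would fall back to the default TVM flow.
    return {sg: CODEGEN_REGISTRY[backend](sg)
            for sg, backend in annotations.items()}

artifacts = build(graph=None,  # graph unused in this toy sketch
                  annotations={"conv_block": "my_accelerator"})
print(artifacts)  # {'conv_block': 'compiled(conv_block)'}
```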
tqchen commented on issue #4256: Install from Source doc still says to clone
dmlc/tvm
URL: https://github.com/apache/incubator-tvm/issues/4256#issuecomment-550006455
Should be fixed by https://github.com/apache/incubator-tvm/pull/4257
yongwww commented on issue #4256: Install from Source doc still says to clone
dmlc/tvm
URL: https://github.com/apache/incubator-tvm/issues/4256#issuecomment-549980886
Besides this, we can see this issue (`github.com/dmlc/tvm`) in many other files. @tqchen
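Finding those remaining references is easy to script; a small stdlib sketch (illustrative only, with hypothetical paths):

```python
# Illustrative sketch: walk a checkout and list files still pointing
# at the old dmlc/tvm location. Paths and patterns are examples only.
import os

OLD = "github.com/dmlc/tvm"

def files_with_old_url(root):
    hits = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if OLD in f.read():
                        hits.append(path)
            except OSError:
                continue  # skip unreadable files
    return hits
```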
apivovarov opened a new issue #4256: Install from Source doc still says to
clone dmlc/tvm
URL: https://github.com/apache/incubator-tvm/issues/4256
TVM moved to https://github.com/apache/incubator-tvm, but the Install from Source doc still says to clone dmlc/tvm.
comaniac commented on issue #4188: [RFC][AutoTVM] Selective Tuning
URL: https://github.com/apache/incubator-tvm/issues/4188#issuecomment-549974311
Updated.
soiferj commented on a change in pull request #4242: [AutoTVM] Add batch_matmul
to tunable operations
URL: https://github.com/apache/incubator-tvm/pull/4242#discussion_r342734128
##
File path: topi/python/topi/x86/batch_matmul.py
##
@@ -18,43 +18,70 @@
"""x86
kevinthesun commented on issue #4188: [RFC][AutoTVM] Selective Tuning
URL: https://github.com/apache/incubator-tvm/issues/4188#issuecomment-549959110
@comaniac Thank you for these data. Can you update them in the main thread?
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 635831c Require LLVM >= 9 for AMDGPU backend (#4253)
add aae5cde workaround typing.Deque import error
tqchen merged pull request #4254: Work around typing.Deque import error for
Python 3.5
URL: https://github.com/apache/incubator-tvm/pull/4254
tqchen commented on issue #4254: Work around typing.Deque import error for
Python 3.5
URL: https://github.com/apache/incubator-tvm/pull/4254#issuecomment-549937971
Thanks @zhuochenKIDD
yzhliu commented on a change in pull request #4242: [AutoTVM] Add batch_matmul
to tunable operations
URL: https://github.com/apache/incubator-tvm/pull/4242#discussion_r342653180
##
File path: topi/python/topi/x86/batch_matmul.py
##
@@ -18,43 +18,70 @@
"""x86 batch_matmul
CoinCheung opened a new issue #4255: Feature request of pytorch interpolation
and pixelshuffle operator
URL: https://github.com/apache/incubator-tvm/issues/4255
Hi,
I can export the PyTorch operators `torch.nn.functional.interpolate` and `torch.nn.PixelShuffle` to ONNX (opset 11).
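For reference, PixelShuffle rearranges a (C·r², H, W) tensor into (C, H·r, W·r). A minimal pure-Python sketch of that semantics (illustrative only, not a frontend implementation):

```python
# Pure-Python sketch of PixelShuffle semantics: rearrange
# (c * r * r, h, w) -> (c, h * r, w * r). Nested lists stand in
# for tensors; the channel index encodes (c, dy, dx).

def pixel_shuffle(x, r):
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    assert cr2 % (r * r) == 0
    c = cr2 // (r * r)
    out = [[[0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(cr2):
        base, rem = divmod(ch, r * r)
        dy, dx = divmod(rem, r)
        for i in range(h):
            for j in range(w):
                out[base][i * r + dy][j * r + dx] = x[ch][i][j]
    return out

x = [[[1]], [[2]], [[3]], [[4]]]      # shape (4, 1, 1), r = 2
print(pixel_shuffle(x, 2))            # [[[1, 2], [3, 4]]]
```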
This is an automated email from the ASF dual-hosted git repository.
masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 411fe27 CI trigger after repo move (#4252)
add 635831c Require LLVM >= 9 for AMDGPU backend (#4253)
masahi merged pull request #4253: Require LLVM >= 9 for AMDGPU backend
URL: https://github.com/apache/incubator-tvm/pull/4253
masahi closed issue #4087: rocm backend: crash with LLVM 8
URL: https://github.com/apache/incubator-tvm/issues/4087
zhuochenKIDD opened a new pull request #4254: Work around typing.Deque import
error for Python 3.5
URL: https://github.com/apache/incubator-tvm/pull/4254
Issue is:
https://discuss.tvm.ai/t/what-is-minimal-python3-version-requirements/4594
After
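The usual shape of such a workaround is a guarded import with a fallback; an illustrative sketch (the PR's actual change may differ):

```python
# Illustrative sketch of a workaround for typing.Deque being
# missing or broken on some Python 3.5.x releases.
import collections

try:
    from typing import Deque  # absent/broken on some 3.5.x releases
except ImportError:
    # Degraded but import-safe fallback: alias the runtime type.
    Deque = collections.deque

def make_queue():
    return collections.deque()

d = make_queue()
d.extend([1, 2, 3])
d.appendleft(0)
print(list(d))  # [0, 1, 2, 3]
```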
t-vi commented on issue #4253: Require LLVM >= 9 for AMDGPU backend
URL: https://github.com/apache/incubator-tvm/pull/4253#issuecomment-549715587
@masahi @petrex @tqchen, for interest in the issue and as potential reviewers.
t-vi commented on issue #4087: rocm backend: crash with LLVM 8
URL: https://github.com/apache/incubator-tvm/issues/4087#issuecomment-549710626
So initially I thought to make this a compile-time check (using `#error`), but it seems that `codegen_amdgpu.c` is compiled in even when