antinucleon opened a new pull request #5377: [Blocksparse] Pipeline for
lowering dense model to sparse-dense
URL: https://github.com/apache/incubator-tvm/pull/5377
Useful pass and helper function to lower a dense model to a block-sparse model.
This PR brings two new optimization i
JishinMaster commented on issue #5376: [Relay][Strategy] Add cuda target check
to dense tensorcore schedule.
URL: https://github.com/apache/incubator-tvm/pull/5376#issuecomment-616048946
With your PR it worked for me with the OpenCL and Vulkan backends.
It still fails with ROCm though, but i
anijain2305 commented on a change in pull request #5362: [Tutorial - QNN]
Prequantized MXNet model compilation.
URL: https://github.com/apache/incubator-tvm/pull/5362#discussion_r410820269
##
File path: tutorials/frontend/deploy_prequantized_mxnet.py
##
@@ -0,0 +1,232 @@
+
anijain2305 edited a comment on issue #5362: [Tutorial - QNN] Prequantized
MXNet model compilation.
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-616037285
@tqchen This tutorial requires `mxnet-mkl` package.
Currently, the CI failure is
~~~
File "/us
anijain2305 commented on issue #5362: [Tutorial - QNN] Prequantized MXNet model
compilation.
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-616037285
@tqchen This tutorial requires `mxnet-mkl` package.
Currently, the CI failure is
~~~
File "/usr/local
yongfeng-nv commented on issue #5367: WIP: Improve floormod (Don't merge)
URL: https://github.com/apache/incubator-tvm/pull/5367#issuecomment-616035227
The current behavior of IntervalSet floormod(a, b) is rough -- it returns
[0, b-1], [-b+1, b-1], or everything. It causes extra iterations
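The rough interval bound described in this entry can be sketched in Python; this is a hypothetical model of the behavior being discussed, not TVM's actual C++ `IntervalSet` code (the function and argument names here are assumptions):

```python
def floormod_bound(a_min, a_max, b):
    """Interval bound for floormod(x, b) with x in [a_min, a_max], b > 0.

    The rough behavior criticized above always falls back to [0, b-1].
    A tighter analysis can return [a_min % b, a_max % b] whenever the
    input interval fits inside one period of b without wrapping.
    """
    assert b > 0
    if a_max - a_min < b and a_min % b <= a_max % b:
        # Interval does not cross a multiple of b: the bound is exact.
        return (a_min % b, a_max % b)
    # Fallback: the conservative bound, which can cause extra iterations
    # when used to simplify loop bounds.
    return (0, b - 1)

print(floormod_bound(4, 6, 8))   # tight: (4, 6)
print(floormod_bound(6, 10, 8))  # wraps past a multiple of 8: (0, 7)
```

The tight case is what the prototype in this PR aims for; the fallback is the current rough behavior.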
tqchen edited a comment on issue #5374: [TE] Warp memory in InferBound
URL: https://github.com/apache/incubator-tvm/issues/5374#issuecomment-616012671
thanks @roastduck given that this topic is more like a development
discussion, please open a new thread on https://discuss.tvm.ai/
tqchen merged pull request #5375: Remove developer facing api from frontend
exports.
URL: https://github.com/apache/incubator-tvm/pull/5375
This is an automated message from the Apache Git Service.
To respond to the message,
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new a4902e0 Remove developer facing api from
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from a2d6fe6 [TIR] Fix lower_warp_memory when there are >1 warp buffers
(#5368)
add 4672dc7 Add cuda target
tqchen closed issue #5342: [PYTHON] Remove developer facing api from frontend
exports
URL: https://github.com/apache/incubator-tvm/issues/5342
tqchen merged pull request #5376: [Relay][Strategy] Add cuda target check to
dense tensorcore schedule.
URL: https://github.com/apache/incubator-tvm/pull/5376
tqchen commented on issue #5368: [TIR] Fix lower_warp_memory when there are >1
warp buffers
URL: https://github.com/apache/incubator-tvm/pull/5368#issuecomment-616016298
thanks @roastduck !
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new a2d6fe6 [TIR] Fix lower_warp_memory when
tqchen merged pull request #5368: [TIR] Fix lower_warp_memory when there are >1
warp buffers
URL: https://github.com/apache/incubator-tvm/pull/5368
tqchen closed issue #5366: [TIR] lower_warp_memory cannot handle >1 warp buffers
URL: https://github.com/apache/incubator-tvm/issues/5366
tqchen commented on issue #5374: [TE] Warp memory in InferBound
URL: https://github.com/apache/incubator-tvm/issues/5374#issuecomment-616012671
given that this topic is more like a development discussion, please open a
new thread on https://discuss.tvm.ai/
-
tqchen closed issue #5374: [TE] Warp memory in InferBound
URL: https://github.com/apache/incubator-tvm/issues/5374
jwfromm commented on issue #5370: Remove Interference of TensorCore Strategy to
OpenCL/Metal
URL: https://github.com/apache/incubator-tvm/issues/5370#issuecomment-616006511
It looks like this is probably happening because we forgot to check that the
current target is `cuda` before calling
jwfromm commented on issue #5376: [Relay][Strategy] Add cuda target check to
dense tensorcore schedule.
URL: https://github.com/apache/incubator-tvm/pull/5376#issuecomment-616006345
@Shawn-Inspur, @Hzfengsy , @JishinMaster, can you take a look at this?
-
jwfromm opened a new pull request #5376: [Relay][Strategy] Add cuda target
check to dense tensorcore schedule.
URL: https://github.com/apache/incubator-tvm/pull/5376
Calling `nvcc.have_tensorcore` will cause errors when using a non-NVIDIA
GPU. This one-line PR adds a check that the current
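The guard this PR describes can be sketched as follows; `Target`, `pick_dense_schedule`, and `probe_nvcc` are simplified stand-ins invented for illustration, not TVM's actual strategy code:

```python
from dataclasses import dataclass

@dataclass
class Target:
    # Minimal stand-in for tvm.target.Target; only the backend name matters here.
    kind: str

def pick_dense_schedule(target, have_tensorcore):
    """Only probe for tensor cores when the target is CUDA.

    have_tensorcore stands in for nvcc.have_tensorcore, which queries the
    CUDA toolchain and can error out on non-NVIDIA backends
    (OpenCL/Vulkan/ROCm).
    """
    if target.kind == "cuda" and have_tensorcore():
        return "dense_tensorcore"
    return "dense_default"

def probe_nvcc():
    # On a machine without CUDA this probe would raise; the short-circuit
    # check above ensures it is never reached for non-CUDA targets.
    raise RuntimeError("nvcc not available")

print(pick_dense_schedule(Target("vulkan"), probe_nvcc))  # dense_default
```

Because the target check comes first, the toolchain probe is skipped entirely on OpenCL/Vulkan/ROCm targets.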
shoubhik opened a new pull request #5375: Remove developer facing api from
frontend exports.
URL: https://github.com/apache/incubator-tvm/pull/5375
Fix for #5342. Remove the imports in __init__ in the front end module for
mxnet dev apis. They were used in tests. In the tests now we use the
roastduck opened a new issue #5374: [TE] Warp memory in InferBound
URL: https://github.com/apache/incubator-tvm/issues/5374
I'm working with a buffer that is bound to warp scope. In
`src/te/schedule/message_passing.cc:208`:
```c++
PrimExpr outer = state.at(s->outer);
PrimExpr inne
siju-samuel closed pull request #5371: [DOCS]Update Readme
URL: https://github.com/apache/incubator-tvm/pull/5371
siju-samuel commented on issue #5371: [DOCS]Update Readme
URL: https://github.com/apache/incubator-tvm/pull/5371#issuecomment-615997471
Ok
roastduck commented on issue #5368: [TIR] Fix lower_warp_memory when there are
>1 warp buffers
URL: https://github.com/apache/incubator-tvm/pull/5368#issuecomment-615997082
Fixed.
tqchen opened a new issue #5373: [REFACTOR] Migrate HoistIfThenElse to the
unified pass manager
URL: https://github.com/apache/incubator-tvm/issues/5373
See background https://github.com/apache/incubator-tvm/pull/5364
cc @kevinthesun can you work on this item?
--
tqchen edited a comment on issue #5371: [DOCS]Update Readme
URL: https://github.com/apache/incubator-tvm/pull/5371#issuecomment-615954249
Let us keep the original statement as it encapsulates the expanded items.
The goal of the project also goes beyond the expanded list of items
--
tqchen commented on issue #5372: [TIR][REFACTOR] Remove te::Tensor dependencies
from TIR passes.
URL: https://github.com/apache/incubator-tvm/pull/5372#issuecomment-615954871
cc @merrymercy @zhiics @yzhliu @hzfan @ZihengJiang @masahi @were @Hzfengsy
@spectrometerHBH @FrozenGene @junrushao1
tqchen commented on issue #5236: [WIP][TVM][.NET] Introduce TVM.NET project
URL: https://github.com/apache/incubator-tvm/pull/5236#issuecomment-615954623
We can start code reviews and gather feedback from the community. Since we
do want to hold a high standard for new language bindings,
tqchen commented on issue #5371: [DOCS]Update Readme
URL: https://github.com/apache/incubator-tvm/pull/5371#issuecomment-615954249
Let us keep the original statement as it encapsulates the expanded items, and
the expanded items may not be the only features that TVM offers
--
tqchen opened a new pull request #5372: [TIR][REFACTOR] Remove te::Tensor
dependencies from TIR passes.
URL: https://github.com/apache/incubator-tvm/pull/5372
This is needed to migrate passes before StorageFlatten to the PassManager.
te::Tensor is a useful object for tensor expressions.
kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP]
Dynamic NMS and strided_slice
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r410751616
##
File path: src/relay/op/tensor/transform.cc
##
@@ -1775,105 +1776,165 @@ Array GetIntArr
kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP]
Dynamic NMS and strided_slice
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r410751616
##
File path: src/relay/op/tensor/transform.cc
##
@@ -1775,105 +1776,165 @@ Array GetIntArr
tqchen merged pull request #5364: [TIR][REFACTOR] Migrate low-level passes in
tvm.lower to the Pass Manager
URL: https://github.com/apache/incubator-tvm/pull/5364
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new 3264895 [TIR][REFACTOR] Migrate low-leve
ANSHUMAN87 edited a comment on issue #5236: [WIP][TVM][.NET] Introduce TVM.NET
project
URL: https://github.com/apache/incubator-tvm/pull/5236#issuecomment-615921445
> both sounds fine
@tqchen: As discussed earlier, I have not uploaded any changes yet to this
PR. I think it will be
ANSHUMAN87 commented on issue #5236: [WIP][TVM][.NET] Introduce TVM.NET project
URL: https://github.com/apache/incubator-tvm/pull/5236#issuecomment-615921445
> both sounds fine
@tqchen: As discussed earlier, I have not uploaded any changes yet to this
PR. I think it will be better t
PrashantDandriyal edited a comment on issue #2802: [BUILD]can't recognize llvm
whem building
URL: https://github.com/apache/incubator-tvm/issues/2802#issuecomment-615913037
Hey @mnboos, can you help me figure out the same problem on Ubuntu?
```
-- Build with RPC support...
-- Build
PrashantDandriyal commented on issue #2802: [BUILD]can't recognize llvm whem
building
URL: https://github.com/apache/incubator-tvm/issues/2802#issuecomment-615913037
Hey @mnboos, can you help me figure out the same problem on Ubuntu?
```
-- Build with RPC support...
-- Build with G
siju-samuel opened a new pull request #5371: [DOCS]Update Readme
URL: https://github.com/apache/incubator-tvm/pull/5371
Minor updates to the Readme.
Thanks for contributing to TVM! Please refer to the guidelines at
https://tvm.apache.org/docs/contribute/ for useful information and tips. After
tqchen commented on a change in pull request #5368: [TIR] Fix lower_warp_memory
when there are >1 warp buffers
URL: https://github.com/apache/incubator-tvm/pull/5368#discussion_r410718199
##
File path: src/tir/transforms/lower_warp_memory.cc
##
@@ -379,7 +379,7 @@ class Wa
tqchen commented on issue #5370: Make TensorCore Strategy Specific to CUDA
URL: https://github.com/apache/incubator-tvm/issues/5370#issuecomment-615897637
cc @icemelon9 @jwfromm @Shawn-Inspur @Hzfengsy
tqchen commented on issue #5370: Make TensorCore Strategy Specific to CUDA
URL: https://github.com/apache/incubator-tvm/issues/5370#issuecomment-615897684
This is related to the strategy introduced by the tensorcore PR
tqchen commented on issue #5363: [Topi, ARM] Disbale Winograd for quantized
tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-615897077
We might want to do the winograd transformation explicitly in Relay (as opposed
to implicitly in TOPI); that would enable quantizati
masahi commented on issue #5363: [Topi, ARM] Disbale Winograd for quantized
tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-615892062
A related discussion at https://github.com/andravin/wincnn/issues/16
It might be better to do weight transform before wei
JishinMaster opened a new issue #5370: Enable OpenCL/Vulkan/RoCM without
enabling CUDA
URL: https://github.com/apache/incubator-tvm/issues/5370
Dear all,
Is it possible to use the OpenCL/Vulkan/RoCM backends without enabling the
CUDA one?
I want to use TVM on AMD and Intel GPU pl
adobay opened a new issue #5369: [AutoTVM]Tune graph throws exception
URL: https://github.com/apache/incubator-tvm/issues/5369
I tuned my model for x86 cpu according to [this
tutorial](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_x86.html?highlight=tune_graph),
and an exception
cbalint13 edited a comment on issue #5363: [Topi, ARM] Disbale Winograd for
quantized tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-615853505
> Output is all zero. It is not doing the right compute anymore, so it 0%
accuracy.
I think operations overflo
cbalint13 commented on issue #5363: [Topi, ARM] Disbale Winograd for quantized
tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-615853505
> Output is all zero. It is not doing the right compute anymore, so it 0%
accuracy.
I think operations overflows badl
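The overflow suspicion in this thread can be illustrated with a toy example; this is not the Winograd kernel itself, just a sketch of why narrow int8 intermediates wrap while a widened accumulator does not:

```python
import numpy as np

# Winograd's input/weight transforms multiply by constant matrices whose
# entries exceed 1, so intermediate products can leave the int8 range.
x = np.array([100, 90, 80], dtype=np.int8)

widened = x.astype(np.int32) * 4   # correct: widen before multiplying
wrapped = x * np.int8(4)           # int8 arithmetic wraps modulo 256

print(widened.tolist())  # [400, 360, 320]
print(wrapped.tolist())  # wrapped values, nothing like the true products
```

Results like the wrapped row are consistent with the all-zero/garbage outputs reported above when quantized tensors go through the Winograd path.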
zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 3db8880 fix fuse over functions that are handled by external codegen
(#5365)
add fbcf61a [RUNTIME] Fas
FrozenGene commented on issue #5353: [RUNTIME] FastRPC interface for Hexagon
runtime
URL: https://github.com/apache/incubator-tvm/pull/5353#issuecomment-615852007
Thanks @kparzysz-quic @abhikran-quic ! It is merged now.
cbalint13 commented on issue #5363: [Topi, ARM] Disbale Winograd for quantized
tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-615851928
> Thanks for sharing the pointers @masahi
> That makes sense. It also explains why accuracy is bad in master. We do
not c
cbalint13 edited a comment on issue #5363: [Topi, ARM] Disbale Winograd for
quantized tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-615851928
> Thanks for sharing the pointers @masahi
> That makes sense. It also explains why accuracy is bad in master. We do
FrozenGene merged pull request #5353: [RUNTIME] FastRPC interface for Hexagon
runtime
URL: https://github.com/apache/incubator-tvm/pull/5353
cbalint13 commented on issue #5363: [Topi, ARM] Disbale Winograd for quantized
tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-615850737
> It might be because of this
>
https://github.com/apache/incubator-tvm/blob/c936a81dab2b4b4b595d02153a6654b9d4e09cd5/top
cbalint13 commented on a change in pull request #5363: [Topi, ARM] Disbale
Winograd for quantized tensors.
URL: https://github.com/apache/incubator-tvm/pull/5363#discussion_r410686770
##
File path: python/tvm/relay/op/strategy/arm_cpu.py
##
@@ -59,16 +59,21 @@ def conv2d_s
roastduck opened a new pull request #5368: Fix lower_warp_memory when there are
>1 warp buffers
URL: https://github.com/apache/incubator-tvm/pull/5368
Fix #5366.
Changes:
1. Fixed the recursion. (Just look at the code)
2. Added a test.
Requesting a review. @tqche
yongfeng-nv opened a new pull request #5367: WIP: Improve floormod (Don't merge)
URL: https://github.com/apache/incubator-tvm/pull/5367
Prototype to improve floormod for int_set.
roastduck opened a new issue #5366: [TIR] lower_warp_memory cannot handle >1
warp buffers
URL: https://github.com/apache/incubator-tvm/issues/5366
Pass `lower_warp_memory` cannot handle more than one warp buffer. Buffers
except the first one cannot be correctly transformed to warp shuffl