masahi opened a new issue #7312:
URL: https://github.com/apache/tvm/issues/7312
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-7303/12/pipeline
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
FrozenGene commented on a change in pull request #6998:
URL: https://github.com/apache/tvm/pull/6998#discussion_r560684155
##
File path: tests/python/integration/test_reduce.py
##
@@ -191,11 +191,11 @@ def test_rfactor_factor_axis():
n = tvm.runtime.convert(1027)
A =
anijain2305 commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763297967
Thanks :) LGTM :)
masahi edited a comment on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763291062
@anijain2305 I added an empty tensor test in
https://github.com/apache/tvm/pull/7303/commits/a6c740348282c2d13f22883e62c7c910b73ad8c2
OpenCL seems to have a problem
masahi commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763291062
@anijain2305 I added an empty tensor test in
https://github.com/apache/tvm/pull/7303/commits/20afc3243a17f48084204855f498c7f9af1cad7a
OpenCL seems to have a problem with 0
icemelon9 opened a new pull request #7311:
URL: https://github.com/apache/tvm/pull/7311
Please join us to welcome Tristan Konolige (@tkonolige) as a new reviewer.
Tristan has been actively contributing to the TVM core compiler and TVM script.
- [Commits
masahi commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763264832
> Once it is merged, I can try on my end with TF models as well.
Perf improvement is not expected, since it only improves `get_valid_count`
slightly if you use thrust scan
anijain2305 commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763263064
Yes, I eyeballed the changes with respect to empty tensors and they looked good, so I
am happy to approve this PR. Once it is merged, I can try it on my end with TF
models as well.
masahi commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763257824
Hmm, interesting. I've never created a test case with an empty tensor; is that
possible?
Note that the IR is copied straight from
https://github.com/apache/tvm/pull/7303, so the
comaniac commented on a change in pull request #7297:
URL: https://github.com/apache/tvm/pull/7297#discussion_r560599941
##
File path: python/tvm/topi/cuda/dense.py
##
@@ -44,6 +44,8 @@ def dense_cublas(cfg, data, weight, bias=None,
out_dtype=None):
matmul =
mbrookhart commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763245797
Yeah, scanning on the non-inner axis will have a cache locality performance
hit, but I'm honestly not sure if that would be better or worse than the
overhead from doing a pair
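To make the locality trade-off concrete, here is a small NumPy sketch (an illustration only; the truncated comment's "pair" is assumed here to mean a transpose pair, and NumPy's `cumsum` stands in for the scan kernel). Scanning along the outer axis strides across rows, while transposing first makes the scanned axis contiguous at the cost of two copies:

```python
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(3, 4)

# scanning along the inner (contiguous) axis reads memory sequentially
inner = np.cumsum(x, axis=1)

# scanning along the outer axis strides across rows; a transpose pair
# makes the scanned axis contiguous at the cost of two extra copies
outer = np.cumsum(x, axis=0)
via_transpose = np.cumsum(x.T, axis=1).T

# both strategies compute the same result
assert np.array_equal(outer, via_transpose)
```

Which strategy wins depends on whether the transpose copies cost more than the strided reads, which is exactly the uncertainty the comment raises.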
anijain2305 commented on pull request #7293:
URL: https://github.com/apache/tvm/pull/7293#issuecomment-763244935
Thanks @d-smirnov @jwfromm
This is an automated email from the ASF dual-hosted git repository.
anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.
from 2290cc0 [TOPI] Minor perf improvement for GPU scatter (#7233)
add f8c55db [TFLite] Added ability to infer
anijain2305 merged pull request #7293:
URL: https://github.com/apache/tvm/pull/7293
ZihengJiang commented on pull request #6370:
URL: https://github.com/apache/tvm/pull/6370#issuecomment-763230104
@hanke580 Nice work! I did not realize that we already have an
implementation for `einsum`. I will take a closer look at it!
tqchen commented on pull request #6370:
URL: https://github.com/apache/tvm/pull/6370#issuecomment-763226229
also cc @ZihengJiang
gromero commented on a change in pull request #7266:
URL: https://github.com/apache/tvm/pull/7266#discussion_r560563123
##
File path: tests/micro/qemu/zephyr-runtime/src/main.c
##
@@ -161,6 +162,27 @@ tvm_crt_error_t TVMPlatformTimerStop(double*
elapsed_time_seconds) {
gromero commented on a change in pull request #7266:
URL: https://github.com/apache/tvm/pull/7266#discussion_r560564611
##
File path: tests/micro/qemu/zephyr-runtime/src/main.c
##
@@ -161,6 +162,27 @@ tvm_crt_error_t TVMPlatformTimerStop(double*
elapsed_time_seconds) {
dlexplorer opened a new issue #7310:
URL: https://github.com/apache/tvm/issues/7310
commit [[AutoScheduler] Add layout rewrite support for dense and batch
matmul…](https://github.com/apache/tvm/commit/7dcafb017a05ac0d5ecd7cfe8d8741d33a24bbad)
breaks compilation of ONNX BERT with autotune
masahi edited a comment on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763106769
1. Right now, inclusive scan can be supported by `exclusive_scan(data) +
data`. I think that is fine for now, given that our scan IR is far from stable
and we don't want to
masahi commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-763106769
1. Right now, inclusive scan can be supported by `exclusive_scan(data) +
data`. I think that is fine for now, given that our scan IR is far from stable
and we don't want to maintain
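The identity masahi describes can be checked directly in NumPy (a sketch only; NumPy's `cumsum` is used here to emulate the scan primitives, which is not TVM's actual implementation):

```python
import numpy as np

data = np.array([3, 1, 4, 1, 5])

# exclusive scan: sum of all elements strictly before each position
exclusive = np.concatenate(([0], np.cumsum(data)[:-1]))

# inclusive scan recovered as exclusive_scan(data) + data
inclusive = exclusive + data

assert np.array_equal(inclusive, np.cumsum(data))
```

Deriving the inclusive scan this way costs one extra elementwise add but avoids maintaining a second scan IR while the design is still in flux.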
mbrookhart commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-763065131
Thanks @masahi @tkonolige @FrozenGene
mbrookhart pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new 2290cc0 [TOPI] Minor perf improvement for GPU
mbrookhart merged pull request #7233:
URL: https://github.com/apache/tvm/pull/7233
tqchen edited a comment on pull request #7309:
URL: https://github.com/apache/tvm/pull/7309#issuecomment-763036932
Deleting through a non-virtual destructor is fine as long as the objects are held via
ObjectPtr (the deleter is stored inside the object itself).
Unfortunately directly printing
tqchen commented on pull request #7309:
URL: https://github.com/apache/tvm/pull/7309#issuecomment-763036932
Deleting through a non-virtual destructor is fine as long as the objects are held via
ObjectPtr (the deleter is stored inside the object itself).
Unfortunately directly printing out the
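A minimal Python analogue of the pattern tqchen describes (illustrative only; TVM's real mechanism is a C++ deleter function pointer stored in the object header and invoked by `ObjectPtr`): because the cleanup routine is captured at construction time, when the concrete type is known, releasing through a base-typed handle needs no virtual destructor.

```python
deleted = []

class Base:
    # Base has no "virtual destructor": it knows nothing about
    # subclass cleanup. Instead, each instance stores its own deleter,
    # captured while the concrete type is still known.
    def __init__(self):
        self._deleter = type(self)._cleanup

    def release(self):
        # dispatch through the stored deleter, not through Base
        self._deleter(self)

    @staticmethod
    def _cleanup(obj):
        deleted.append("Base")

class Derived(Base):
    @staticmethod
    def _cleanup(obj):
        deleted.append("Derived")

obj: Base = Derived()  # held through the base type
obj.release()          # still runs Derived's cleanup
```

The key property is that the deletion site never consults the static type of the handle, so holding `Derived` as `Base` stays safe.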
altanh commented on a change in pull request #7287:
URL: https://github.com/apache/tvm/pull/7287#discussion_r560391942
##
File path: python/tvm/topi/random/kernel.py
##
@@ -234,6 +242,18 @@ def gen_ir(gen_ptr, out_gen_ptr, out_array_ptr):
out_gen =
tkonolige opened a new pull request #7309:
URL: https://github.com/apache/tvm/pull/7309
I've added a new function `GetFullType` on all objects that prints the type
of the object along with the types of the objects it contains. I've only updated one
assert to use it so far; I'll update the rest
areusch commented on a change in pull request #7266:
URL: https://github.com/apache/tvm/pull/7266#discussion_r560359820
##
File path: src/runtime/crt/host/main.cc
##
@@ -93,6 +94,20 @@ tvm_crt_error_t TVMPlatformTimerStop(double*
elapsed_time_seconds) {
tkonolige opened a new pull request #7308:
URL: https://github.com/apache/tvm/pull/7308
If there are multiple alter_ops in a model, the first alteration does not
run type inference for the subsequent ones. In this case, we don't have the
shape information, so we run the inferencer
mbrookhart commented on pull request #7303:
URL: https://github.com/apache/tvm/pull/7303#issuecomment-762969945
Scan is probably the most hand-optimized kernel in Thrust; I'm thrilled to
be within 10x for a cross-GPU kernel. Overall I'm happy with this, but I have two
thoughts.
1.
tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git
The following commit(s) were added to refs/heads/main by this push:
new f91b51d [Relay][Frontend][Onnx] Compare against
tqchen merged pull request #7300:
URL: https://github.com/apache/tvm/pull/7300
tqchen commented on pull request #7108:
URL: https://github.com/apache/tvm/pull/7108#issuecomment-762871262
closing in favor of the alternative capturing approach
tqchen closed pull request #7108:
URL: https://github.com/apache/tvm/pull/7108
tqchen closed issue #7302:
URL: https://github.com/apache/tvm/issues/7302
tqchen commented on issue #7302:
URL: https://github.com/apache/tvm/issues/7302#issuecomment-762863140
https://github.com/apache/tvm/pull/7306
juannzou removed a comment on issue #7182:
URL: https://github.com/apache/tvm/issues/7182#issuecomment-753602648
Hi @junrushao1994
I thought it was a bug, since I tried different settings and still got
errors.
But I just posted it on the discuss forum, according to your
masahi commented on pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#issuecomment-762707056
@tkonolige @mbrookhart I separated the two scatter implementations; things
should look clean now. The sequential one is chosen by default, and I confirmed
that by tuning the scatter
masahi commented on a change in pull request #7233:
URL: https://github.com/apache/tvm/pull/7233#discussion_r560015060
##
File path: python/tvm/topi/cuda/scatter.py
##
@@ -312,19 +313,18 @@ def gen_ir_4d(data, indices, updates, axis, out,
update_func):
out_ptr =