masahi commented on pull request #5729:
URL: https://github.com/apache/incubator-tvm/pull/5729#issuecomment-639273379
Thanks @siju-samuel
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
masahi merged pull request #5729:
URL: https://github.com/apache/incubator-tvm/pull/5729
This is an automated email from the ASF dual-hosted git repository.
masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new fbc2b87 [ONNX]MaxRoiPool, Mod & Xor op s
handar423 commented on pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695#issuecomment-639246598
Sorry for replying so late! I'd be grateful if you are still here.
Thank you for the guidance; I had misunderstood the definition of batching and
units earlier, and everything actu
handar423 closed pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695
leonwanghui opened a new pull request #5733:
URL: https://github.com/apache/incubator-tvm/pull/5733
Signed-off-by: leonwanghui
Hi all, this PR proposes to make some private members inside the `Tensor`
struct public so that it can be embedded into other crates.
lixiaoquan commented on a change in pull request #5699:
URL: https://github.com/apache/incubator-tvm/pull/5699#discussion_r435668087
##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -3194,6 +3191,55 @@ def _convert_operator(self, op_name, inputs, attrs,
masahi commented on a change in pull request #5619:
URL: https://github.com/apache/incubator-tvm/pull/5619#discussion_r435627742
##
File path: src/relay/op/tensor/transform.cc
##
@@ -781,6 +781,53 @@ non-zero)doc" TVM_ADD_FILELINE)
.set_attr("TOpPattern", kOpaque)
.se
comaniac opened a new pull request #5732:
URL: https://github.com/apache/incubator-tvm/pull/5732
After #5429, the new shape of `relay.reshape` can only be either
`int`, `Tuple[int, ...]`, `List[int]`, or `Expr`. However, a use case in ARM
conv2d alter layout passes `Tuple[int,
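The accepted forms can be made concrete with a small sketch. This is a hypothetical normalization helper (not TVM's actual implementation), showing how a mixed tuple of ints and nested tuples, as in the alter-layout case above, could be flattened into a plain list of ints:

```python
def normalize_newshape(newshape):
    """Hypothetical helper: flatten the shape forms described above."""
    # A bare int becomes a one-element shape.
    if isinstance(newshape, int):
        return [newshape]
    # Tuples/lists may mix ints with nested tuples/lists (the ARM
    # conv2d alter-layout case), so flatten one level of nesting.
    if isinstance(newshape, (tuple, list)):
        flat = []
        for dim in newshape:
            if isinstance(dim, (tuple, list)):
                flat.extend(int(d) for d in dim)
            else:
                flat.append(int(dim))
        return flat
    raise TypeError("unsupported newshape type: %s" % type(newshape))
```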
zhiics commented on issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639156952
ah, I see. We should have used `delete[]`. You are welcome to send a PR.
Thanks.
akosik-anyvision commented on issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639144034
Sorry, the title is backwards. The issue is that `new[]` is used to allocate
objects that are deleted with `delete`.
tqchen commented on pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#issuecomment-639143314
https://github.com/apache/incubator-tvm/pull/5730 splits the two type codes;
we only need to add BFloat16 to the DataTypeCode. cc @Menooker
tqchen commented on pull request #5730:
URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639143025
@zhiics Yes the DLPack changes are needed for the followup PRs to use
bfloat16
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 34c95a8 [Frontend][TFLite] Add parser support for shape and range
(#5329)
add 8a98782 [REFACTOR] Separ
tqchen merged pull request #5730:
URL: https://github.com/apache/incubator-tvm/pull/5730
masahi commented on a change in pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435576994
##
File path: src/tir/transforms/lower_thread_allreduce.cc
##
@@ -478,9 +478,20 @@ class ThreadAllreduceBuilder final : public
StmtExprMutator {
masahi commented on a change in pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435576740
##
File path: src/tir/transforms/lower_thread_allreduce.cc
##
@@ -478,9 +478,20 @@ class ThreadAllreduceBuilder final : public
StmtExprMutator {
zhiics commented on issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639138972
They are allocated when the instruction is constructed and deleted when it
is destructed:
https://github.com/apache/incubator-tvm/blob/34c95a89eae13043aa3a269ad7d27d288
tqchen commented on issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639133231
cc @jroesch @zhiics
t-vi edited a comment on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-639109441
That's the idea, yes. In my microbenchmark of the imagenet softmax on the
Radeon VII, I'm going from ~140µs to ~14µs. The baseline from PyTorch
(handcrafted but somewh
t-vi commented on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-639109441
That's the idea, yes. In my microbenchmark of the imagenet softmax on the
Radeon VII, I'm going from ~140µs to ~14µs. The baseline from PyTorch
(handcrafted but somewhat gene
trevor-m opened a new pull request #5731:
URL: https://github.com/apache/incubator-tvm/pull/5731
TensorFlow batch norm op has two type attributes:
`U` which is the type of the parameters scale, offset, mean, and variance.
`T` which is the type of the input and output.
The TF imp
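The mismatch between the two attributes can be sketched in plain NumPy (illustrative names only, not the actual TVM frontend code): when the parameter dtype `U` differs from the input dtype `T`, one option is to cast the parameters to `T` before emitting the op:

```python
import numpy as np

def unify_batch_norm_dtypes(data, scale, offset, mean, variance):
    # T is the input/output dtype; U is whatever dtype the params carry.
    t = data.dtype
    # Cast each parameter from U to T when the two disagree.
    return tuple(p.astype(t) if p.dtype != t else p
                 for p in (scale, offset, mean, variance))
```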
masahi commented on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-639099200
@t-vi This is great! Does this mean rocm backend would use shuffle
instruction to compute reduction, such as in softmax?
tqchen edited a comment on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639094174
This is similar to what we faced in cases like ceil/trunc/round. The
idea is to detect if the data is too close to the boundary, then we try to add
a number to
tqchen edited a comment on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639094174
This is similar to what we faced in cases like ceil/trunc
See an example here:
https://github.com/apache/incubator-tvm/blob/master/topi/tests/python/test_t
tqchen commented on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639094174
This is similar to what we faced in cases like ceil/trunc
See an example here:
https://github.com/apache/incubator-tvm/blob/master/topi/tests/python/test_topi_math.py#L
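The boundary trick described above can be illustrated with a small NumPy sketch (a hedged example, not the actual test code linked): inputs that land within some epsilon of an integer are nudged away before comparing ceil/trunc/round against the reference, so float rounding cannot flip the result:

```python
import numpy as np

def avoid_boundary(x, eps=1e-3):
    # Distance from each value to its nearest integer.
    frac = np.abs(x - np.round(x))
    # Nudge values sitting within eps of an integer away from it, so
    # tiny float errors cannot flip ceil/trunc/round between the
    # reference and the compiled result.
    return np.where(frac < eps, x + 2 * eps, x)

data = np.array([2.0000001, 2.5, -3.9999999], dtype="float32")
safe = avoid_boundary(data)
```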
anijain2305 commented on pull request #4805:
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-639071565
It seems you still have the old `3rdparty/dmlc-core`. You can check that by
clicking on "Files changed" tab.
notoraptor commented on pull request #5716:
URL: https://github.com/apache/incubator-tvm/pull/5716#issuecomment-639035230
@mbrookhart Hi ! There is a difference at least in the output shape for
take:
https://github.com/mbrookhart/incubator-tvm/blob/062a244d66262353cdef0792a54d05cc99d7fa74/
t-vi commented on a change in pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435466266
##
File path: tests/python/unittest/test_tir_transform_lower_warp_memory.py
##
@@ -249,9 +245,13 @@ def check(m):
B_np = A_np + 1
t-vi commented on a change in pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435423642
##
File path: src/target/llvm/intrin_rule_rocm.cc
##
@@ -40,8 +41,59 @@ inline void DispatchExternOCML(const TVMArgs& args,
TVMRetValue* rv) {
*rv
junrushao1994 commented on pull request #5730:
URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639018766
Will take a look tonight! Thank you!
abergeron commented on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639011278
From what I can see, the problem comes when the result of the fmod() is very
small.
It would happen for something like `20.0001 % 10`.
I'm not sure how to condi
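The case mentioned can be reproduced directly (plain Python, just to show the magnitude involved): neither operand is small, yet the remainder is tiny, so a relative-tolerance comparison against a near-zero reference becomes unreliable:

```python
import math

# Neither operand is close to zero, but the remainder is ~1e-4, so tiny
# float errors in a compiled fmod can dominate the expected result.
r = math.fmod(20.0001, 10.0)
```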
anijain2305 commented on pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#issuecomment-639010961
Thanks @dhruvaray @siju-samuel This is merged!
anijain2305 merged pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329
This is an automated email from the ASF dual-hosted git repository.
anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 490510d codegen llvm: move nvptx-specific intrinsic handling into
codegen_nvptx (#5726)
add c2e248
This is an automated email from the ASF dual-hosted git repository.
anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from c2e248f [TOPI,RELAY][TFLITE] Sparse to dense operator (#5447)
add 34c95a8 [Frontend][TFLite] Add p
anijain2305 commented on pull request #5447:
URL: https://github.com/apache/incubator-tvm/pull/5447#issuecomment-639010813
Thanks @dhruvaray @siju-samuel This is merged!
anijain2305 merged pull request #5447:
URL: https://github.com/apache/incubator-tvm/pull/5447
tqchen opened a new pull request #5730:
URL: https://github.com/apache/incubator-tvm/pull/5730
We use a single enum (TypeCode) to represent both ArgTypeCode and DLDataTypeCode.
However, as we start to expand more data types, it is clear that the argument
type code (in the FFI convention) and data
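The separation can be sketched with two Python enums (names follow the PR's description; the member values here are illustrative, not the exact codes):

```python
from enum import IntEnum

class ArgTypeCode(IntEnum):
    # Codes used in the FFI calling convention for packed-function args.
    INT = 0
    FLOAT = 2
    HANDLE = 3

class DLDataTypeCode(IntEnum):
    # Codes describing tensor element types, following DLPack.
    INT = 0
    UINT = 1
    FLOAT = 2
    BFLOAT = 4  # new element types extend this enum only

# With the two concerns split, adding a data type such as bfloat16
# does not perturb the argument-passing codes.
```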
tqchen commented on pull request #5730:
URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639000274
cc @zhiics @yzhliu @jroesch @junrushao1994
t-vi commented on a change in pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435423642
##
File path: src/target/llvm/intrin_rule_rocm.cc
##
@@ -40,8 +41,59 @@ inline void DispatchExternOCML(const TVMArgs& args,
TVMRetValue* rv) {
*rv
t-vi commented on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638992030
@mei-ye Cool! Yes. I ran into trouble when the target info erroneously
specified 1 thread per warp for ROCm, which would look similar but not for the
same reason. I'm glad you fou
anijain2305 commented on pull request #5495:
URL: https://github.com/apache/incubator-tvm/pull/5495#issuecomment-638988738
@siju-samuel Pinging for review if interested :)
anijain2305 commented on pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#issuecomment-638987607
@siju-samuel Can you please approve the PR if you are ok with the changes?
https://tvm.apache.org/docs/contribute/code_review.html#approve-and-request-changes-explic
anijain2305 commented on pull request #5447:
URL: https://github.com/apache/incubator-tvm/pull/5447#issuecomment-638985828
@siju-samuel Please approve the PR if you are ok with the changes
anijain2305 commented on pull request #4805:
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-638984870
@inadob Your changes look better now. Can you please rebase? (`git submodule
update --init --recursive`)
tqchen commented on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638974554
@t-vi yes please rebase to master while addressing the comments.
wpan11nv commented on a change in pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435391687
##
File path: tests/python/unittest/test_tir_transform_lower_warp_memory.py
##
@@ -249,9 +245,13 @@ def check(m):
B_np = A_np + 1
wpan11nv commented on a change in pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435386022
##
File path: src/target/llvm/intrin_rule_rocm.cc
##
@@ -40,8 +41,59 @@ inline void DispatchExternOCML(const TVMArgs& args,
TVMRetValue* rv) {
wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r435383834
##
File path: src/target/llvm/codegen_llvm.cc
##
@@ -736,7 +736,40 @@ llvm::Function*
CodeGenLLVM::GetIntrinsicDecl(llvm::Intrinsic::ID id, llvm::
ANSHUMAN87 commented on pull request #5725:
URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638958232
@junrushao1994 : Thanks a lot for clarifying! :+1:
majiang31312 commented on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638956688
@t-vi Thanks for the advice.
In my opinion this is not a backend problem; we can trigger it in the cuda
backend (my test case above is using cuda).
majiang31312 edited a comment on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638953820
The fix seems quite simple, but I'm not sure whether it's complete.
Please take a look at the Discussion section. Thanks! @tqchen @wpan11nv
Problem:
wh
majiang31312 commented on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638953820
The fix seems quite simple, but I'm not sure whether it's complete.
Please take a look at the Discussion section. Thanks! @tqchen @wpan11nv
Problem:
when num_
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new beecf83 Build at Thu Jun 4 09:
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git
The following commit(s) were added to refs/heads/master by this push:
new 179bcb4 Add microtvm blog post (#9)
junrushao1994 commented on pull request #5725:
URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638947410
@ANSHUMAN87 yeah you are right. This is all about the type name.
It is a bit tricky, because the JSON string was generated after 0.6.0, where
the type name is chang
t-vi commented on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638944272
Just a quick note: Of the two commits, only the second is new to this PR. If
you want me to rebase this on master, let me know.
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new 490510d codegen llvm: move nvptx-specifi
tqchen commented on pull request #5726:
URL: https://github.com/apache/incubator-tvm/pull/5726#issuecomment-638939983
Thanks @t-vi !
tqchen merged pull request #5726:
URL: https://github.com/apache/incubator-tvm/pull/5726
tqchen commented on pull request #5725:
URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638925609
Thanks @junrushao1994
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git
The following commit(s) were added to refs/heads/master by this push:
new 8935990 Fix runtime::String backward com
tqchen merged pull request #5725:
URL: https://github.com/apache/incubator-tvm/pull/5725
tqchen commented on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638924385
cc @wpan11nv @masahi
yongwww commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r435344564
##
File path: include/tvm/relay/attrs/transform.h
##
@@ -210,14 +210,22 @@ struct SplitAttrs : public tvm::AttrsNode {
/*! \brief Attributes for
siju-samuel opened a new pull request #5729:
URL: https://github.com/apache/incubator-tvm/pull/5729
- MaxRoiPool
- Mod
- Xor
@masahi @FrozenGene please help me to review this PR.
TIA
GalMoore commented on issue #4412:
URL: https://github.com/apache/incubator-tvm/issues/4412#issuecomment-638884194
For me the issue was that the tvm I built was not in PYTHONPATH.
The solution from [here](https://docs.tvm.ai/install/from_source.html):
```
export TVM_HOME=/path/to/tvm
```
akosik-anyvision opened a new issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728
There seem to be mismatched new / delete[] statements in the following lines:
https://github.com/apache/incubator-tvm/blob/a64208910b22b60da5eb312d3a63cffb6e11f747/src/runtime/vm/vm.cc
t-vi commented on pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#issuecomment-638823562
@wpan11nv Thanks for your offer to help. I submitted the clean-up #5726 and
then in #5727 I added ROCm warp reductions. One of the things I did was to avoid
assuming a fixed w
t-vi commented on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638817130
@masahi Could I interest you in this?
@tqchen This is the followup to the discussion of #5600
This
t-vi opened a new pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727
This adds warp shuffle intrinsics to ROCm and enables reductions.
- There was at least one hardcoded 32-threads-per-warp assumption in
`lower_thread_allreduce`.
- I have tentatively hijack
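A plain-Python model of the shuffle-down reduction involved (illustrative only — the real lowering emits GPU shuffle intrinsics) also shows why a hardcoded warp size of 32 breaks on AMD GPUs, whose wavefronts are 64 lanes wide:

```python
def warp_reduce(lanes):
    # Model of a shuffle-down tree reduction: each round, lane i reads
    # lane i + offset (as a shuffle-down would) and accumulates, halving
    # the active width until lane 0 holds the full sum.
    vals = list(lanes)
    offset = len(vals) // 2  # warp width: 32 on NVIDIA, 64 on AMD GCN
    while offset > 0:
        for i in range(offset):
            vals[i] += vals[i + offset]
        offset //= 2
    return vals[0]
```

Running it with the wrong assumed width (32 lanes on 64-lane hardware) would sum only half the warp, which is the kind of bug a hardcoded 32 invites.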
t-vi edited a comment on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638753031
This sounds similar to me to the symptoms discussed in the recent posts in
#5600.
So quite likely, making the cuda softmax schedule specific to cuda would fix
this (did
t-vi commented on pull request #5726:
URL: https://github.com/apache/incubator-tvm/pull/5726#issuecomment-638808130
That was a bit too much moving... Fixed.
I also changed the softmax schedule to only activate warp shuffle reductions
on cuda/nvptx.
This should fix #5686.
t-vi commented on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638753031
This sounds similar to me to the symptoms discussed in the recent posts in
#5600.
So quite likely, making the cuda softmax schedule specific to cuda would fix
this. Of course,
t-vi commented on pull request #5726:
URL: https://github.com/apache/incubator-tvm/pull/5726#issuecomment-638743199
@tqchen @wpan11nv
t-vi opened a new pull request #5726:
URL: https://github.com/apache/incubator-tvm/pull/5726
See discussion in #5600.
I'm also throwing in a pointer lifetime fix for the context held by NVPTX
because otherwise `topi/tests/python/test_topi_softmax.py` would segfault for
me. With the t
ANSHUMAN87 edited a comment on pull request #5725:
URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638734857
Thanks @junrushao1994 !
As I remember, I had handled the version upgrade for "GlobalVar" in PR (#5547).
So maybe we need to discuss how it got bypassed?
I saw
ANSHUMAN87 commented on pull request #5725:
URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638734857
Thanks @junrushao1994 !
As I remember, I had handled the version upgrade for "GlobalVar" in PR (#5547).
So maybe we need to discuss how it got bypassed?
I saw the ch
t-vi commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r435088925
##
File path: topi/python/topi/cuda/softmax.py
##
@@ -39,6 +39,7 @@ def schedule_softmax(outs):
outs = [outs] if isinstance(outs, te.tensor.Tensor
t-vi commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r435088614
##
File path: src/target/llvm/codegen_llvm.cc
##
@@ -736,7 +736,40 @@ llvm::Function*
CodeGenLLVM::GetIntrinsicDecl(llvm::Intrinsic::ID id, llvm::Type
t-vi edited a comment on pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#issuecomment-638622419
I'm adding shfl intrinsics to the rocm bits (using
`tvm.intrin.rule.rocm.tvm_warp_shuffle /-up/-down` definitions).
I'll probably run into the nvptx bits in the llvm