tqchen commented on PR #104:
URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1757910627
I think assuming a single vector width (vscale) and using
`kScalableVectorMark=-1` to mark it would be a good tradeoff, given it may not
be that useful to create vectors with multiple vector
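The sentinel idea described above can be illustrated with a small, purely hypothetical sketch (TVM's actual `DataType` lives in C++; the class and names below are illustrative, not TVM's API — only the `-1` sentinel comes from the comment):

```python
# Hypothetical sketch: a scalable vector's lane count is unknown at
# compile time (lanes = vscale * base_lanes), so a sentinel value such
# as kScalableVectorMark = -1 stands in for it.
K_SCALABLE_VECTOR_MARK = -1

class DType:
    def __init__(self, code: str, bits: int, lanes: int):
        self.code, self.bits, self.lanes = code, bits, lanes

    @property
    def is_scalable_vector(self) -> bool:
        # A single sentinel suffices if we assume one vscale per target.
        return self.lanes == K_SCALABLE_VECTOR_MARK

fixed = DType("int", 32, 8)                          # ordinary int32x8
scalable = DType("int", 32, K_SCALABLE_VECTOR_MARK)  # "int32 x vscale"
```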
Lunderberg opened a new pull request, #15916:
URL: https://github.com/apache/tvm/pull/15916
Prior to this commit, several transforms assumed that the arguments passed
to a `call_tir` builtin were provided as in-line `relax::Tuple` objects.
Because it would be equally valid for the
MasterJH5574 commented on PR #15910:
URL: https://github.com/apache/tvm/pull/15910#issuecomment-1757929983
Marking as draft for an update of the docstring and APIs. Sorry that it is not
mature enough.
--
This is an automated message from the Apache Git Service.
To respond to the message, please
ekalda merged PR #15780:
URL: https://github.com/apache/tvm/pull/15780
MrGouYou commented on issue #15151:
URL: https://github.com/apache/tvm/issues/15151#issuecomment-1761458134
> > @Johnson9009 Change the session_timeout of rpc.connect(), right? In
which file does rpc.connect() need to be modified? Thanks!
>
> Yes, just try increasing the session_timeout of
dmilosevic252 opened a new pull request, #15924:
URL: https://github.com/apache/tvm/pull/15924
Sets the default value of 0 for the `k` variable if it has not been
previously set.
This patch fixes the following issue:
https://github.com/apache/tvm/issues/15729
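A minimal sketch of the idea (not the exact TVM patch; the helper name is hypothetical, and the `inputs` layout follows the ONNX TopK convention where `x = inputs[0]` and `k = inputs[1]`):

```python
# Hedged sketch of the fix: default the optional TopK input `k` to 0
# when the ONNX graph does not supply it.
def get_topk_k(inputs):
    # inputs[1] is the optional `k` value; it may be absent or None.
    k = inputs[1] if len(inputs) > 1 else None
    return 0 if k is None else k
```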
starrkk opened a new pull request, #15926:
URL: https://github.com/apache/tvm/pull/15926
i
MrGouYou opened a new issue, #15927:
URL: https://github.com/apache/tvm/issues/15927
When I compile an NN model with AutoTVM, I get a quite strange problem.
Compile device: CUDA
Code snippet: (Note: using LocalRunner)
Hzfengsy commented on code in PR #15924:
URL: https://github.com/apache/tvm/pull/15924#discussion_r1358136474
##
python/tvm/relax/frontend/onnx/onnx_frontend.py:
##
@@ -637,6 +637,11 @@ def _impl_v14(cls, bb, inputs, attr, params):
x = inputs[0]
k = inputs[1]
masahi merged PR #15904:
URL: https://github.com/apache/tvm/pull/15904
cbalint13 commented on PR #15761:
URL: https://github.com/apache/tvm/pull/15761#issuecomment-1765115891
> Hey @cbalint13, I got a compiler warning from this commit saying:
>
> ```
> /Users/jshao/Projects/tvm-dev/src/target/llvm/llvm_instance.cc:76:81:
warning: reference to stack
cbalint13 commented on PR #15918:
URL: https://github.com/apache/tvm/pull/15918#issuecomment-1764839468
> ```
> idx_vec = T.allocate_const([0, 1, 4, 5, 2, 3, 6, 7], "int32")
> tir.vectorpermute("int32x8", whatever_vector, idx_vec)
> ```
>
> is not more complex in my
cbalint13 commented on PR #15761:
URL: https://github.com/apache/tvm/pull/15761#issuecomment-1765120519
> > /Users/jshao/Projects/tvm-dev/src/target/llvm/llvm_instance.cc:763:49:
warning: cast from 'const llvm::MCSubtargetInfo *' to 'llvm::MCSubtargetInfo *'
drops const qualifier
lhutton1 merged PR #15929:
URL: https://github.com/apache/tvm/pull/15929
cbalint13 commented on PR #15918:
URL: https://github.com/apache/tvm/pull/15918#issuecomment-1765261587
> > ```
> > idx_vec = T.allocate_const([0, 1, 4, 5, 2, 3, 6, 7], "int32")
> > tir.vectorpermute("int32x8", whatever_vector, idx_vec)
> > ```
> >
> >
> > is not more
junrushao commented on PR #15761:
URL: https://github.com/apache/tvm/pull/15761#issuecomment-1765093175
Hey @cbalint13, I got a compiler warning from this commit saying:
```
/Users/jshao/Projects/tvm-dev/src/target/llvm/llvm_instance.cc:76:81:
warning: reference to stack memory
Hzfengsy merged PR #15937:
URL: https://github.com/apache/tvm/pull/15937
quic-sanirudh merged PR #15928:
URL: https://github.com/apache/tvm/pull/15928
sunggg commented on code in PR #15842:
URL: https://github.com/apache/tvm/pull/15842#discussion_r1361351524
##
tests/python/relax/test_transform_legalize_ops.py:
##
@@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) ->
R.Tensor([16, 8]):
assert
Hzfengsy merged PR #15935:
URL: https://github.com/apache/tvm/pull/15935
sunggg commented on code in PR #15842:
URL: https://github.com/apache/tvm/pull/15842#discussion_r1361318051
##
tests/python/relax/test_transform_legalize_ops.py:
##
@@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) ->
R.Tensor([16, 8]):
assert
Archermmt opened a new pull request, #15938:
URL: https://github.com/apache/tvm/pull/15938
The trilu test from test_translate_relay.py results in failures (e.g.
https://github.com/apache/tvm/pull/15783). This PR fixes the bugs.
MasterJH5574 commented on code in PR #15910:
URL: https://github.com/apache/tvm/pull/15910#discussion_r1358698766
##
tests/python/relax/test_runtime_builtin_paged_attention_kv_cache.py:
##
@@ -0,0 +1,420 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or
sunggg commented on code in PR #15842:
URL: https://github.com/apache/tvm/pull/15842#discussion_r1358446279
##
tests/python/relax/test_transform_legalize_ops.py:
##
@@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) ->
R.Tensor([16, 8]):
assert
Lunderberg commented on code in PR #15898:
URL: https://github.com/apache/tvm/pull/15898#discussion_r1358767925
##
python/tvm/relax/ir/instrument.py:
##
@@ -25,13 +25,19 @@ class WellFormedInstrument:
is well formed. It will skip specific passes, like Normalize.
"""
ry3s opened a new issue, #15931:
URL: https://github.com/apache/tvm/issues/15931
Thanks for participating in the TVM community! We use https://discuss.tvm.ai
for any general usage questions and discussions. The issue tracker is used for
actionable items such as feature proposals
dmilosevic252 commented on code in PR #15924:
URL: https://github.com/apache/tvm/pull/15924#discussion_r1361756821
##
python/tvm/relax/frontend/onnx/onnx_frontend.py:
##
@@ -637,6 +637,11 @@ def _impl_v14(cls, bb, inputs, attr, params):
x = inputs[0]
k =
LeiWang1999 commented on PR #15939:
URL: https://github.com/apache/tvm/pull/15939#issuecomment-1766044785
@malixian Awesome! I think the community should have a thorough discussion
about the infrastructure for HIP source codegen support, as it contains a
significant amount of code duplicated from CUDA,
malixian opened a new pull request, #15939:
URL: https://github.com/apache/tvm/pull/15939
This Pull Request adds support for AMD Matrix Core in MetaScheduler.
## Changes Made
- **Add code generation for HIP**. Although adding HIP support seems
redundant, the advantages of
starrkk commented on code in PR #15932:
URL: https://github.com/apache/tvm/pull/15932#discussion_r1359794835
##
python/tvm/relax/frontend/onnx/onnx_frontend.py:
##
@@ -1290,7 +1290,7 @@ def _impl_v18(cls, bb, inputs, attr, params):
), "Only one of scales and sizes can
quic-sanirudh commented on PR #15928:
URL: https://github.com/apache/tvm/pull/15928#issuecomment-1763425961
cc @ibsidorenko
ysh329 commented on issue #15812:
URL: https://github.com/apache/tvm/issues/15812#issuecomment-1763418982
Hi all, due to a new idea about the release flow before and after the branch
cut, this release may be delayed. More information can be found in the PRs below:
- https://github.com/apache/tvm/pull/15847
Thrsu opened a new pull request, #15933:
URL: https://github.com/apache/tvm/pull/15933
This PR fixes a bug in the interpolate operator of the PyTorch frontend in
TVM. The bug was caused by incorrectly using the `method` keyword instead of
the `mode` keyword when retrieving the default
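The bug pattern can be sketched as follows (the dict-based helper is illustrative, not the actual frontend code; only the `mode`/`method` key names come from the PR description):

```python
# Hedged sketch: the frontend looked up the wrong attribute key when
# retrieving the interpolation mode.
def get_interp_mode_buggy(attrs: dict) -> str:
    return attrs.get("method", "nearest")   # wrong key: never set

def get_interp_mode_fixed(attrs: dict) -> str:
    return attrs.get("mode", "nearest")     # correct key

attrs = {"mode": "bilinear"}
```

With `attrs = {"mode": "bilinear"}`, the buggy lookup silently falls back to the default while the fixed lookup returns the requested mode.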
tiandi111 commented on PR #15740:
URL: https://github.com/apache/tvm/pull/15740#issuecomment-1766400704
Current status: TODO-1 is almost done.
I've finished integrating vanilla GEMM kernels (no fusion capability for now)
and their profiler. The code still needs to be improved, especially in
cbalint13 opened a new pull request, #15940:
URL: https://github.com/apache/tvm/pull/15940
Fixes compilation warnings for clang.
---
The warning was:
```
BUILD/tvm/src/target/llvm/llvm_instance.cc:84:81: warning: reference to
stack memory associated with parameter 'Obj'
Lunderberg commented on code in PR #15842:
URL: https://github.com/apache/tvm/pull/15842#discussion_r1362092455
##
tests/python/relax/test_transform_legalize_ops.py:
##
@@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) ->
R.Tensor([16, 8]):
assert
ehion commented on PR #6395:
URL: https://github.com/apache/tvm/pull/6395#issuecomment-1766615940
I am curious how to run AutoTVM with TensorRT. When I run
```
mod = partition_for_tensorrt(mod, params)
tasks = autotvm.task.extract_from_program(
mod['main'],
Lunderberg commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779395939
Current CI failures were present in unity head, but should be resolved after
PR#15941. (See [this
comment](https://github.com/apache/tvm/pull/15941#issuecomment-1779377445) for
Lunderberg commented on PR #15971:
URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779402991
This may be a duplicate of https://github.com/apache/tvm/pull/15916, which
also resolves the analogous problem for `FuseOps`, `RewriteDataflowReshape`,
and `FoldConstant`.
Lunderberg opened a new pull request, #15980:
URL: https://github.com/apache/tvm/pull/15980
Prior to this commit, all symbolic variables needed to be defined either
by tensor shapes or by an explicit `tvm.runtime.ShapeTuple` argument. This
commit allows arguments `arg:
Lunderberg commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1779338988
One unrelated question, though. In your pseudocode, you have the signature
`def test(x: Object, callback)`, but I wasn't able to pass a callback directly
into a relax function. I
Lunderberg commented on PR #15978:
URL: https://github.com/apache/tvm/pull/15978#issuecomment-1779386185
Rebased onto head to re-run CI. Previous failures were present in unity
head, and have been resolved. (See [this
quic-sanirudh commented on PR #15971:
URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779451578
> This may be a duplicate of #15916, which also resolves the analogous
problem for `FuseOps`, `RewriteDataflowReshape`, and `FoldConstant`.
Oh nice, this is great, thanks for
Lunderberg opened a new pull request, #15981:
URL: https://github.com/apache/tvm/pull/15981
- Removed unused `import os`
- Used `tvm.testing.main()` inside `if __name__=="__main__"`
- Added parametrized fixture `exec_mode` instead of marking all tests.
- Replace
JackWeiw commented on code in PR #15961:
URL: https://github.com/apache/tvm/pull/15961#discussion_r1372017992
##
src/tir/transforms/inject_ptx_async_copy.cc:
##
@@ -113,9 +116,11 @@ class PTXAsyncCopyInjector : public StmtMutator {
return PrimExpr();
Lunderberg commented on PR #15941:
URL: https://github.com/apache/tvm/pull/15941#issuecomment-1779377445
No problem!
I'm going to merge this in, as it resolves a few test failures in unity head
that resulted from a conflict between
Lunderberg merged PR #15941:
URL: https://github.com/apache/tvm/pull/15941
Lunderberg merged PR #15970:
URL: https://github.com/apache/tvm/pull/15970
lhutton1 commented on PR #104:
URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1779468180
Regarding the changes required to support scalability in the data type, I've
been prototyping adding a new `scalable_` attribute to `DataType` that wraps
`DLDataType`.
However,
Lunderberg commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1779329489
Thank you, @tqchen, and I really like that unit test to validate the desired
behavior in the VM, and not just how that behavior impacts a use case. I've
updated the PR to include the
Lunderberg commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779394478
@tvm-bot re-run
Lunderberg commented on PR #15968:
URL: https://github.com/apache/tvm/pull/15968#issuecomment-1779442186
Looks like the CI failures are due to a check that the PR body is non-empty.
Can you add a description to the PR?
Lunderberg commented on PR #15961:
URL: https://github.com/apache/tvm/pull/15961#issuecomment-1779454823
It looks like this PR isn't unity-specific. Can the PR be applied to the
`main` branch instead, so we get the bugfix on both branches?
Lunderberg commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779387049
@ci-bot re-run
Lunderberg commented on PR #15864:
URL: https://github.com/apache/tvm/pull/15864#issuecomment-1779381464
Sounds good, and thank you!
I'm rebasing the commit on top of unity head, as the current CI failures are
due to a breakage on `unity` head, and are resolved with
github-actions[bot] commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779395462
Failed to re-run CI in https://github.com/apache/tvm/actions/runs/6641806894
```
Traceback (most recent call last):
File
cbalint13 commented on PR #15953:
URL: https://github.com/apache/tvm/pull/15953#issuecomment-1771566305
@tvm-bot rerun
vinx13 merged PR #15951:
URL: https://github.com/apache/tvm/pull/15951
Lunderberg opened a new pull request, #15954:
URL: https://github.com/apache/tvm/pull/15954
Prior to this commit, the `DecomposeOpsForTraining` transform directly
replaced `relax.nn.batch_norm` with more primitive relax operations. This
required the decomposed form of `relax.nn.batch_norm`
Lunderberg merged PR #15842:
URL: https://github.com/apache/tvm/pull/15842
Lunderberg commented on PR #15954:
URL: https://github.com/apache/tvm/pull/15954#issuecomment-1771588343
@sunggg This is what I was referring to by a different decomposition being
applied for training and for inference in
https://github.com/apache/tvm/pull/15842. This PR extracts that
jinhongyii commented on issue #15880:
URL: https://github.com/apache/tvm/issues/15880#issuecomment-1771529875
Hi @Lunderberg just want to check if there's any progress on this issue
cbalint13 commented on PR #15953:
URL: https://github.com/apache/tvm/pull/15953#issuecomment-1771707055
@tvm-bot rerun ci
cbalint13 commented on PR #15953:
URL: https://github.com/apache/tvm/pull/15953#issuecomment-1771634992
@tvm-bot rerun
yelite opened a new pull request, #15996:
URL: https://github.com/apache/tvm/pull/15996
This PR changes how the `Session` is obtained from `DPackedFunc` or `DModule`.
Without this, there will be data corruption in the message channel if Python
GC happens in a different thread than the thread
JackWeiw closed pull request #15961: [BugFix][TIR] fix error in symbolic
floormod
URL: https://github.com/apache/tvm/pull/15961
JackWeiw commented on code in PR #15986:
URL: https://github.com/apache/tvm/pull/15986#discussion_r1374209716
##
tests/python/unittest/test_tir_transform_lower_opaque_block.py:
##
@@ -250,6 +250,34 @@ def transformed_strided_buffer_func(
C[i0 * 4 + i1, j] = B[i1,
JackWeiw commented on code in PR #15986:
URL: https://github.com/apache/tvm/pull/15986#discussion_r1374212520
##
src/tir/transforms/inject_ptx_async_copy.cc:
##
@@ -79,7 +80,7 @@ class PTXAsyncCopyInjector : public StmtMutator {
if (indices_lanes == 1) {
Hzfengsy commented on issue #15974:
URL: https://github.com/apache/tvm/issues/15974#issuecomment-1782380337
+1
JackWeiw closed pull request #15986: [Fix][TIR]fix symbolic strides lower
URL: https://github.com/apache/tvm/pull/15986
JackWeiw commented on PR #16000:
URL: https://github.com/apache/tvm/pull/16000#issuecomment-1782470884
CC @Lunderberg @wrongtest-intellif
padreofthegame opened a new pull request, #16001:
URL: https://github.com/apache/tvm/pull/16001
Fixes the interpretation of the auto_pad parameters SAME_UPPER and SAME_LOWER
in the ConvTranspose version 11 operator, according to the documentation at
https://onnx.ai/onnx/operators/onnx__ConvTranspose.html.
JackWeiw opened a new pull request, #16000:
URL: https://github.com/apache/tvm/pull/16000
The compact_buffer_region pass modifies the shared buffer stride[0] to
T.int64(72) * T.min((n + T.int64(63)) // T.int64(64) * T.int64(64),
T.int64(96)), and stride[1] is T.int64(72),
but in
slyubomirsky commented on PR #15916:
URL: https://github.com/apache/tvm/pull/15916#issuecomment-1782267071
If we intend to have special cases like `call_tir` where one argument _must_
be a tuple literal (i.e., not following the normal rule of the type system that
any member of the type
ysh329 commented on issue #15974:
URL: https://github.com/apache/tvm/issues/15974#issuecomment-1782363896
cc @leandron @Hzfengsy @vinx13 @areusch @Mousius @tqchen @AndrewZhaoLuo
tqchen commented on PR #15916:
URL: https://github.com/apache/tvm/pull/15916#issuecomment-1781923886
The principle here is that we make common cases (and their optimizations)
easy, while placing the burden on less common cases.
As for pass-writing patterns, most of our current relax
Archermmt opened a new pull request, #15997:
URL: https://github.com/apache/tvm/pull/15997
This is a pull request for MSC(Multi-System Compile)
RFC:
https://discuss.tvm.apache.org/t/rfc-unity-msc-introduction-to-multi-system-compiler/15251/5
Tracking issue:
vinx13 opened a new pull request, #15998:
URL: https://github.com/apache/tvm/pull/15998
cc @MasterJH5574 @tqchen
yelite opened a new pull request, #15999:
URL: https://github.com/apache/tvm/pull/15999
Found this problem when investigating for #15996
cc @Lunderberg
slyubomirsky commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372273991
##
src/relax/transform/call_tir_rewrite.cc:
##
@@ -111,41 +111,69 @@ class CallTIRMutator : public ExprMutator {
<< expr->struct_info_;
}
Lunderberg opened a new pull request, #15983:
URL: https://github.com/apache/tvm/pull/15983
TVM containers, such as tvm::runtime::Array, require the contained objects
to inherit from `ObjectRef`. As a result, the wrapper types `IntImm`,
`FloatImm`, and `StringImm` are often used to allow
slyubomirsky commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372276835
##
python/tvm/relax/op/base.py:
##
@@ -97,7 +97,11 @@ def call_tir(
ret: Call
A call node for the call_tir operator.
"""
-if isinstance(args,
tqchen commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1780006889
Ah I see, one way to get around it is to define the callback as a global test
function and call that with call_packed, e.g. `test.vm.assert_notnull`.
masahi merged PR #15980:
URL: https://github.com/apache/tvm/pull/15980
Lunderberg commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372341781
##
src/relax/transform/call_tir_rewrite.cc:
##
@@ -111,41 +111,69 @@ class CallTIRMutator : public ExprMutator {
<< expr->struct_info_;
}
dependabot[bot] opened a new pull request, #15982:
URL: https://github.com/apache/tvm/pull/15982
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.2.3 to 3.0.1.
Release notes
Sourced from werkzeug's releases: https://github.com/pallets/werkzeug/releases
3.0.1
Lunderberg commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1780074018
> ah i see, one way to get around is define the callback as a global test
function and call that with call_packed.
That's what I ended up doing, with a global definition which can
slyubomirsky commented on PR #15971:
URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779980410
Didn't see it, yep, they're duplicates. There is one case that the other PR
misses so hopefully that can be updated.
slyubomirsky closed pull request #15971: [Unity][Op] Allow the argument to
`call_tir` to be a var bound to a tuple, not a tuple literal
URL: https://github.com/apache/tvm/pull/15971
vinx13 merged PR #15556:
URL: https://github.com/apache/tvm/pull/15556
tqchen commented on PR #15916:
URL: https://github.com/apache/tvm/pull/15916#issuecomment-1780084913
Thanks for the PR, I know this is indeed a generalization and there are some
tradeoffs to be considered here. Specifically, we should consider the following
alternative:
- C0: We
slyubomirsky commented on PR #15916:
URL: https://github.com/apache/tvm/pull/15916#issuecomment-1779975180
I think you may need to update the StructInfo inference for
`call_tir_inplace` like in #15971, since (without modification) that assumes
the argument is a tuple literal. The test
tqchen commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372345395
##
include/tvm/relax/expr_functor.h:
##
@@ -278,6 +278,37 @@ class ExprVisitor : public ExprFunctor {
virtual void VisitSpan(const Span& span);
virtual void
yelite commented on code in PR #15995:
URL: https://github.com/apache/tvm/pull/15995#discussion_r1374552066
##
cmake/modules/CUDA.cmake:
##
@@ -64,6 +64,7 @@ if(USE_CUDA)
message(STATUS "Build with Thrust support")
cmake_minimum_required(VERSION 3.13) # to compile
leandron commented on PR #15990:
URL: https://github.com/apache/tvm/pull/15990#issuecomment-1783145283
> Thanks @leandron! The [change
log](https://arm-software.github.io/ComputeLibrary/v23.08/versions_changelogs.xhtml#S2_2_changelog)
states `libarm_compute_core` was deprecated and that we
Lunderberg opened a new pull request, #16002:
URL: https://github.com/apache/tvm/pull/16002
This PR updates the type of `TupleGetItem::index` from `int` to `Expr`, to
allow access of a tuple at a location specified by a symbolic variable. The
lack of this functionality was run into
Aleksei-grovety opened a new pull request, #16003:
URL: https://github.com/apache/tvm/pull/16003
In ConcatRewriter, the case where the concatenation argument is a
TupleGetItem was not considered.
cc @lhutton1, @ekalda, @leandron
vinx13 merged PR #15977:
URL: https://github.com/apache/tvm/pull/15977
JackWeiw commented on PR #15961:
URL: https://github.com/apache/tvm/pull/15961#issuecomment-1780341103
> It looks like this PR isn't unity-specific. Can the PR be applied to the
`main` branch instead, so we get the bugfix on both branches?
JackWeiw closed pull request #15984: [Fix][TIR] Symbolic strides lower
URL: https://github.com/apache/tvm/pull/15984