cbalint13 commented on PR #15685:
URL: https://github.com/apache/tvm/pull/15685#issuecomment-1720011129
> I am very excited about this feature and cannot wait to try it out myself!
Thank you @cbalint13 for this super well-documented and well-tested PR; it's
going to be super useful for
junrushao commented on PR #15706:
URL: https://github.com/apache/tvm/pull/15706#issuecomment-1721978241
Thanks @Lunderberg! 100% agreed with your comments, and particularly,
feeling the same as you that the previous generation of TVMScript printer comes
with many subtle issues making it
MasterJH5574 merged PR #15752:
URL: https://github.com/apache/tvm/pull/15752
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail:
junrushao merged PR #15741:
URL: https://github.com/apache/tvm/pull/15741
junrushao commented on PR #15762:
URL: https://github.com/apache/tvm/pull/15762#issuecomment-1721978647
Love this PR. Moving my comments from the [previous
one](https://github.com/apache/tvm/pull/15706#issuecomment-1721978241).
> Thanks @Lunderberg! 100% agreed with your comments,
junrushao commented on PR #15762:
URL: https://github.com/apache/tvm/pull/15762#issuecomment-1721979373
BTW, `line_length` is something we could configure in black:
https://github.com/psf/black/blob/19.3b0/black.py#L168
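For context, black's line length is set with its `line-length` option; a minimal sketch of such a configuration in `pyproject.toml` (the value 100 is only an illustrative choice, not anything decided in this thread):

```
[tool.black]
line-length = 100
```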
junrushao merged PR #15743:
URL: https://github.com/apache/tvm/pull/15743
junrushao commented on PR #15764:
URL: https://github.com/apache/tvm/pull/15764#issuecomment-1722072728
CC @LeshengJin
Hzfengsy commented on PR #15762:
URL: https://github.com/apache/tvm/pull/15762#issuecomment-1722143981
Great PR! It's an elegant solution that makes people with different tastes happy!
Thanks @Lunderberg
junrushao commented on PR #15748:
URL: https://github.com/apache/tvm/pull/15748#issuecomment-1722156796
CC: @yelite
MasterJH5574 commented on PR #15517:
URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722362012
Hi @vinx13, we noticed that this PR breaks the WebGPU codegen, which does not
currently support `tir::ShuffleNode`. Therefore, exceptions are
thrown in pass
tlopex opened a new pull request, #15769:
URL: https://github.com/apache/tvm/pull/15769
Support NOT_EQUAL quantization operation as part of #15148
tlopex closed pull request #15767: [TFLite][Frontend] Support quantized
NOT_EQUAL
URL: https://github.com/apache/tvm/pull/15767
p3achyjr opened a new pull request, #15768:
URL: https://github.com/apache/tvm/pull/15768
As per https://github.com/apache/tvm/issues/15148, adding support for div.
slyubomirsky commented on PR #15627:
URL: https://github.com/apache/tvm/pull/15627#issuecomment-1721793381
I like the idea about having control over the amount of simplification, for
what it's worth. That said, I would be surprised if the arithmetic analyzer
turns out to be a performance
tlopex opened a new pull request, #15767:
URL: https://github.com/apache/tvm/pull/15767
(no comment)
junrushao closed pull request #15765: [CI] Remove Unused GitHub Workflow
URL: https://github.com/apache/tvm/pull/15765
junrushao commented on PR #15765:
URL: https://github.com/apache/tvm/pull/15765#issuecomment-1722423304
Sorry, wrong repo
yzh119 commented on code in PR #15760:
URL: https://github.com/apache/tvm/pull/15760#discussion_r1328057772
##
cmake/modules/CUDA.cmake:
##
@@ -81,6 +81,14 @@ if(USE_CUDA)
list(APPEND RUNTIME_SRCS ${CONTRIB_CURAND_SRC_CU})
endif(USE_CURAND)
+ if(USE_NVTX)
+
vinx13 commented on PR #15517:
URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722417049
Is it possible to support it in codegen? Usually this can be supported via
element extraction, e.g. `vec.x/y/z`. Alternatively, we can set
`rewrite_scalar_access_to_vector_shuffle` to false
junrushao opened a new pull request, #15765:
URL: https://github.com/apache/tvm/pull/15765
(no comment)
tqchen commented on PR #15517:
URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722455791
I think we should support via codegen
junrushao opened a new pull request, #15766:
URL: https://github.com/apache/tvm/pull/15766
This PR integrates NCCL with CUDAGraph by using the default stream that
CUDAGraph uses for NCCL communication.
junrushao commented on PR #15766:
URL: https://github.com/apache/tvm/pull/15766#issuecomment-1722424384
CC: @vinx13 @jinhongyii
MasterJH5574 merged PR #15763:
URL: https://github.com/apache/tvm/pull/15763
junrushao commented on PR #15760:
URL: https://github.com/apache/tvm/pull/15760#issuecomment-1722605355
This PR is ready for review! CC: @yzh119 @vinx13
CharlieFRuan commented on PR #15517:
URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722601651
Will look into adding Shuffle support for WebGPU!
MasterJH5574 merged PR #15748:
URL: https://github.com/apache/tvm/pull/15748
gmeeker opened a new issue, #15771:
URL: https://github.com/apache/tvm/issues/15771
tvmc tune appears to have broken between 0.12.0 and 0.13.0.
### Expected behavior
Produce an autotuner json file (which worked in 0.12.0).
### Actual behavior
```
[Task 1/25]
junrushao closed pull request #15755: [Disco] Use Non-Null Stream for Compute
URL: https://github.com/apache/tvm/pull/15755
quic-sanirudh merged PR #15537:
URL: https://github.com/apache/tvm/pull/15537
adstraw closed pull request #15613: CUDA async copy with barrier synchronization
URL: https://github.com/apache/tvm/pull/15613
adstraw opened a new pull request, #15616:
URL: https://github.com/apache/tvm/pull/15616
Adds codegen support only for CUDA async copy with barrier synchronization.
TVM currently uses group-based synchronization for async copies through the
InjectSoftwarePipeline pass, but new "bulk" async
tvm-bot commented on PR #15616:
URL: https://github.com/apache/tvm/pull/15616#issuecomment-1691644889
Thanks for contributing to TVM! Please refer to the contributing guidelines
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please
request code reviews
ekalda commented on PR #104:
URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1692114269
Tagging some people who have been involved in related discussions before:
@tqchen @kparzysz-quic @masahi
jwfromm opened a new pull request, #15650:
URL: https://github.com/apache/tvm/pull/15650
This PR adds the `debug` argument to `export_tvm`. When `debug` is `False`,
effects are not included in the output graph. This can make deploying models
less cumbersome, since it's not often they'll use
junrushao merged PR #15647:
URL: https://github.com/apache/tvm/pull/15647
jwfromm commented on code in PR #15650:
URL: https://github.com/apache/tvm/pull/15650#discussion_r1312212862
##
python/tvm/relax/frontend/nn/spec.py:
##
@@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]:
# pylint: enable=protected-access
junrushao commented on code in PR #15650:
URL: https://github.com/apache/tvm/pull/15650#discussion_r1311911458
##
python/tvm/relax/frontend/nn/spec.py:
##
@@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]:
# pylint: enable=protected-access
junrushao commented on code in PR #15650:
URL: https://github.com/apache/tvm/pull/15650#discussion_r1311913637
##
python/tvm/relax/frontend/nn/spec.py:
##
@@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]:
# pylint: enable=protected-access
junrushao commented on code in PR #15633:
URL: https://github.com/apache/tvm/pull/15633#discussion_r1307575087
##
src/relax/op/ccl/ccl.h:
##
@@ -35,6 +35,9 @@ namespace relax {
/*! \brief AllReduce. */
Expr allreduce(Expr data, String op_type);
+/*! \brief Broadcast data
junrushao opened a new pull request, #15635:
URL: https://github.com/apache/tvm/pull/15635
This PR fixes an error reporting from Clang on the line below:
```
/.../include/tvm/runtime/packed_func.h:1706:3: error: no matching function
for call to 'F'
yongwww opened a new pull request, #15636:
URL: https://github.com/apache/tvm/pull/15636
This pr adds RealizeVDevice pass as mentioned in #15101
tqchen commented on code in PR #15633:
URL: https://github.com/apache/tvm/pull/15633#discussion_r1307738990
##
src/relax/op/ccl/ccl.h:
##
@@ -35,6 +35,9 @@ namespace relax {
/*! \brief AllReduce. */
Expr allreduce(Expr data, String op_type);
+/*! \brief Broadcast data from
jinhongyii merged PR #15571:
URL: https://github.com/apache/tvm/pull/15571
junrushao opened a new pull request, #15637:
URL: https://github.com/apache/tvm/pull/15637
Backported from #15635. This template is never instantiated on `apache:main`
yet, but in case of a future ICE, I backported the related fix from the Unity branch.
junrushao merged PR #15634:
URL: https://github.com/apache/tvm/pull/15634
junrushao commented on code in PR #15634:
URL: https://github.com/apache/tvm/pull/15634#discussion_r1307612173
##
src/runtime/disco/threaded_session.cc:
##
@@ -146,13 +152,17 @@ class DiscoThreadedMessageQueue : public dmlc::Stream {
this->Read(&reg_id);
LeshengJin commented on code in PR #15633:
URL: https://github.com/apache/tvm/pull/15633#discussion_r1307796184
##
src/relax/op/ccl/ccl.h:
##
@@ -35,6 +35,9 @@ namespace relax {
/*! \brief AllReduce. */
Expr allreduce(Expr data, String op_type);
+/*! \brief Broadcast data
junrushao commented on PR #15635:
URL: https://github.com/apache/tvm/pull/15635#issuecomment-1695998415
CC: @spectrometerHBH
jinhongyii merged PR #15632:
URL: https://github.com/apache/tvm/pull/15632
kparzysz-quic commented on code in PR #15595:
URL: https://github.com/apache/tvm/pull/15595#discussion_r1307790075
##
include/tvm/runtime/packed_func.h:
##
@@ -1622,6 +1641,20 @@ inline TVMRetValue PackedFunc::operator()(Args&&...
args) const {
return rv;
}
+template
Biubiubiu12 commented on PR #15274:
URL: https://github.com/apache/tvm/pull/15274#issuecomment-1702014777
> Maybe offtopic question: Are/Will OpenCV operators (tscript) that you work
on be public somewhere ?
Yes, TVMScript is part of the implementation of some CV operator, but I
junrushao commented on code in PR #15650:
URL: https://github.com/apache/tvm/pull/15650#discussion_r1312517589
##
python/tvm/relax/frontend/nn/core.py:
##
@@ -360,12 +360,31 @@ def to(self, dtype: Optional[str] = None) -> None: #
pylint: disable=invalid-na
def
lhutton1 commented on PR #15649:
URL: https://github.com/apache/tvm/pull/15649#issuecomment-1702398281
Thanks for creating the PR @toyowata. cc @ekalda @ashutosh-arm could you
take a look? Perhaps we can do it in a follow-up PR, but I think the
`task_demo_microtvm.sh` script could check
junrushao commented on PR #15653:
URL: https://github.com/apache/tvm/pull/15653#issuecomment-1702566750
CC: @tqchen
junrushao commented on PR #15654:
URL: https://github.com/apache/tvm/pull/15654#issuecomment-1702567048
CC: @tqchen
tqchen commented on code in PR #15652:
URL: https://github.com/apache/tvm/pull/15652#discussion_r1312977545
##
include/tvm/runtime/container/shape_tuple.h:
##
@@ -42,6 +43,9 @@ class ShapeTupleObj : public Object {
/*! \brief The size of the shape tuple object. */
junrushao commented on code in PR #15650:
URL: https://github.com/apache/tvm/pull/15650#discussion_r1311906278
##
python/tvm/relax/frontend/nn/spec.py:
##
@@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]:
# pylint: enable=protected-access
junrushao opened a new pull request, #15653:
URL: https://github.com/apache/tvm/pull/15653
This PR adds two internal APIs:
```C++
// Scatter `n - 1` buffers to each worker from worker 0, where `n` is
the total number of workers
void ScatterFromWorker0(Array buffers);
//
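As a rough sketch of the scatter semantics described above (a hypothetical helper, not Disco's actual C++ implementation): worker 0 holds `n - 1` buffers and sends the i-th one to worker i + 1.

```python
def scatter_from_worker0(buffers, num_workers):
    """Hypothetical sketch: worker 0 sends buffers[i] to worker i + 1,
    so each of the other num_workers - 1 workers receives one buffer."""
    assert len(buffers) == num_workers - 1, "one buffer per non-zero worker"
    return {worker: buffers[worker - 1] for worker in range(1, num_workers)}

# With 4 workers, worker 0 distributes 3 buffers:
routing = scatter_from_worker0(["b1", "b2", "b3"], num_workers=4)
# routing == {1: "b1", 2: "b2", 3: "b3"}
```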
junrushao commented on code in PR #15650:
URL: https://github.com/apache/tvm/pull/15650#discussion_r1312854922
##
python/tvm/relax/frontend/nn/core.py:
##
@@ -360,12 +360,31 @@ def to(self, dtype: Optional[str] = None) -> None: #
pylint: disable=invalid-na
def
junrushao opened a new pull request, #15652:
URL: https://github.com/apache/tvm/pull/15652
This commit adds two convenient methods for `ShapeTuple`.
```C++
// Returns the number of elements in the shape,
// i.e. the product of all dimensions
ShapeTupleObj::index_type
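The snippet above is truncated, but it describes a method returning the number of elements in a shape, i.e. the product of all dimensions. A minimal Python sketch of that semantics (hypothetical helper name, not TVM's actual C++ implementation):

```python
import math

def num_elements(shape):
    """Number of elements in a shape: the product of all dimensions.
    An empty (scalar) shape has exactly one element."""
    return math.prod(shape)

print(num_elements((2, 3, 4)))  # 24
print(num_elements(()))         # 1
```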
junrushao commented on PR #15652:
URL: https://github.com/apache/tvm/pull/15652#issuecomment-1702561924
CC: @tqchen
ashutosh-arm commented on PR #15649:
URL: https://github.com/apache/tvm/pull/15649#issuecomment-1702561718
I don't know much about the new flag. It would be good to get it reviewed
offline by someone who knows more about linker scripts.
tqchen commented on code in PR #15653:
URL: https://github.com/apache/tvm/pull/15653#discussion_r1312983067
##
src/runtime/disco/builtin.cc:
##
@@ -72,38 +70,48 @@ Module LoadVMModule(std::string path, Device device) {
return mod;
}
haoyang9804 commented on PR #15386:
URL: https://github.com/apache/tvm/pull/15386#issuecomment-1702692212
> From one hand LGTM, from another one it confuses me due to fix for
specific op was done on very high level (and CI test was constructed for
specific op not for all cases). It looks
Thrsu opened a new issue, #15651:
URL: https://github.com/apache/tvm/issues/15651
The ONNX
[model](https://drive.google.com/file/d/1OyCdBhPRFo3MXvZSLsth4DK4cb7Ui5Z9/view?usp=share_link)
produced inconsistent inference results with ONNX when using relax to load
model and obtain the
Archermmt opened a new pull request, #15645:
URL: https://github.com/apache/tvm/pull/15645
This is a pull request for MSC(Multi-System Compile)
RFC:
https://discuss.tvm.apache.org/t/rfc-unity-msc-introduction-to-multi-system-compiler/15251/5
Tracking issue:
tqchen commented on PR #104:
URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1699183708
BTW, after writing it down, we can find that perhaps it is not necessary
(for S1) to explicitly introduce a special vscale. Another approach is that we
can mark an SVE scope, and use a
junrushao commented on code in PR #15652:
URL: https://github.com/apache/tvm/pull/15652#discussion_r1313190374
##
include/tvm/runtime/container/shape_tuple.h:
##
@@ -42,6 +43,9 @@ class ShapeTupleObj : public Object {
/*! \brief The size of the shape tuple object. */
junrushao merged PR #15654:
URL: https://github.com/apache/tvm/pull/15654
ekalda commented on PR #104:
URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1703019039
@tqchen Thanks for elaborating on the GPU programming model, I see the
parallels between programming for variable number of threads and vectors with
unknown lengths. The S1 option looks quite
tqchen commented on code in PR #15652:
URL: https://github.com/apache/tvm/pull/15652#discussion_r1313197538
##
include/tvm/runtime/container/shape_tuple.h:
##
@@ -42,6 +43,9 @@ class ShapeTupleObj : public Object {
/*! \brief The size of the shape tuple object. */
Lunderberg opened a new pull request, #15657:
URL: https://github.com/apache/tvm/pull/15657
Implemented `relax.transform.BundleModelParams`, which groups parameters
into user-provided runtime parameters, and a tuple of compile-time model
weights. This functionality was previously part of
adstraw opened a new pull request, #15656:
URL: https://github.com/apache/tvm/pull/15656
Adds CUDA codegen support for bulk asynchronous copy which are new
instructions for Hopper. Also includes some cleanup of PR #15616 in the form
of comments and tests. Notably this PR does not include
gessha commented on issue #12567:
URL: https://github.com/apache/tvm/issues/12567#issuecomment-1703033730
I was trying to reproduce the bug but I got stuck. Do you know what I did
wrong?
I built the tvm.ci_cpu container using
`~/projects/tvm$ ./docker/build.sh ci_cpu`
As
jwfromm commented on code in PR #15650:
URL: https://github.com/apache/tvm/pull/15650#discussion_r1313273646
##
python/tvm/relax/frontend/nn/core.py:
##
@@ -360,12 +360,31 @@ def to(self, dtype: Optional[str] = None) -> None: #
pylint: disable=invalid-na
def export_tvm(
junrushao opened a new pull request, #15654:
URL: https://github.com/apache/tvm/pull/15654
Prior to this PR, `ndarray-cache.json` is loaded and parsed along with the
concrete weights. This PR adds support for parsing this JSON file into a
structured C++ class instead, for later use.
junrushao commented on PR #15640:
URL: https://github.com/apache/tvm/pull/15640#issuecomment-1702560479
Continued in: #15652, #15653, #15654, #15655
junrushao opened a new pull request, #15655:
URL: https://github.com/apache/tvm/pull/15655
This PR introduces `ShardLoader`, an object that allows convenient
sharding of each parameter, assuming there is a single shard dimension
and the sharding strategy is even. The sharding can be
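To illustrate what "a single shard dimension with an even sharding strategy" means, here is a hypothetical numpy sketch (the name and signature are illustrative, not the actual ShardLoader API):

```python
import numpy as np

def shard_param(param, num_shards, shard_dim):
    """Split a parameter evenly along one dimension, one slice per worker.
    Assumes the sharded dimension is divisible by the number of shards."""
    assert param.shape[shard_dim] % num_shards == 0, "even sharding assumed"
    return np.split(param, num_shards, axis=shard_dim)

weights = np.arange(24).reshape(4, 6)
shards = shard_param(weights, num_shards=2, shard_dim=1)
# Each of the 2 workers receives a (4, 3) slice of the weight matrix.
```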
junrushao closed pull request #15640: [Disco] Support Sharding via Scatter
URL: https://github.com/apache/tvm/pull/15640
junrushao commented on PR #15654:
URL: https://github.com/apache/tvm/pull/15654#issuecomment-1703610989
@kparzysz-quic OK, actually I've done another round of refactoring on the
flight, which gets rid of the filesystem dependency. Will submit a patch later
today when I get back home!
junrushao commented on code in PR #15653:
URL: https://github.com/apache/tvm/pull/15653#discussion_r1313691599
##
src/runtime/disco/builtin.cc:
##
@@ -72,38 +70,48 @@ Module LoadVMModule(std::string path, Device device) {
return mod;
}
junrushao merged PR #15650:
URL: https://github.com/apache/tvm/pull/15650
jikechao commented on PR #15386:
URL: https://github.com/apache/tvm/pull/15386#issuecomment-1703651509
cc @vvchernov
junrushao commented on code in PR #15652:
URL: https://github.com/apache/tvm/pull/15652#discussion_r1313731714
##
include/tvm/runtime/container/shape_tuple.h:
##
@@ -42,6 +43,9 @@ class ShapeTupleObj : public Object {
/*! \brief The size of the shape tuple object. */
junrushao commented on code in PR #15652:
URL: https://github.com/apache/tvm/pull/15652#discussion_r1313514113
##
include/tvm/runtime/container/shape_tuple.h:
##
@@ -42,6 +43,9 @@ class ShapeTupleObj : public Object {
/*! \brief The size of the shape tuple object. */
kparzysz-quic commented on PR #15654:
URL: https://github.com/apache/tvm/pull/15654#issuecomment-1703282818
Hi. Unfortunately the Hexagon toolchain does not support `std::filesystem`
and this does not compile for us. Would it be possible to use `std::string`
for paths? Sorry about the
kparzysz-quic opened a new pull request, #15658:
URL: https://github.com/apache/tvm/pull/15658
This makes the code a bit more readable at a little cost.
masahi commented on PR #15708:
URL: https://github.com/apache/tvm/pull/15708#issuecomment-1716237070
Have you tried `FoldScaleAxis`? Combined with `SimplifyExpr` I think it
already does such optimization.
vvchernov commented on PR #15723:
URL: https://github.com/apache/tvm/pull/15723#issuecomment-1716250601
Hello @echuraev! It looks like it is a problem from Jenkins:
```[2023-09-12T17:54:08.315Z] + ./ci/scripts/jenkins/s3.py --action upload
--bucket tvm-jenkins-artifacts-prod --prefix
tqchen commented on PR #89:
URL: https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1715756599
sending another reminder for everyone to chime in on the related unity
discussion threads https://discuss.tvm.apache.org/c/development/unity/14,
love to see your participation in all the
Lunderberg commented on code in PR #15694:
URL: https://github.com/apache/tvm/pull/15694#discussion_r1323329466
##
src/relax/ir/dataflow_matcher.cc:
##
@@ -443,6 +444,92 @@ bool DFPatternMatcher::VisitDFPattern_(const
ShapePatternNode* op, const Expr& e
return false;
}
masahi commented on code in PR #15679:
URL: https://github.com/apache/tvm/pull/15679#discussion_r1323414157
##
python/tvm/relax/transform/legalize_ops/manipulate.py:
##
@@ -205,3 +213,16 @@ def te_layout_transform(data, name):
output_dtype = call_args[0].struct_info.dtype
p3achyjr opened a new pull request, #15733:
URL: https://github.com/apache/tvm/pull/15733
As part of https://github.com/apache/tvm/issues/15148, this PR adds support
for floor_div.
Lunderberg commented on code in PR #15672:
URL: https://github.com/apache/tvm/pull/15672#discussion_r1323342788
##
tests/cpp/container_test.cc:
##
@@ -853,3 +854,23 @@ TEST(Optional, PackedCall) {
test_ffi(s, static_cast(kTVMObjectHandle));
test_ffi(String(s),
masahi commented on code in PR #15678:
URL: https://github.com/apache/tvm/pull/15678#discussion_r1322614653
##
python/tvm/relax/transform/optimize_layout_transform.py:
##
@@ -0,0 +1,75 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor