Hzfengsy merged PR #15639:
URL: https://github.com/apache/tvm/pull/15639
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail:
Civitasv commented on issue #15505:
URL: https://github.com/apache/tvm/issues/15505#issuecomment-1698951555
Can confirm this fixes my problem, much appreciated!
ekalda commented on PR #104:
URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1699105121
Thanks for your comments @tqchen, much appreciated! I want to ask for some
clarifications and expand on some of the points you made, based on my
understanding.
TL;DR:
- We need to
Hzfengsy merged PR #15615:
URL: https://github.com/apache/tvm/pull/15615
toyowata opened a new issue, #15643:
URL: https://github.com/apache/tvm/issues/15643
The run_demo.sh script in apps/microtvm/ethosu builds a microTVM
executable binary for Ethos-U and launches it on the FVP to run the demo, but
the inference result is wrong.
i.e. A cat image is
Anndrey24 commented on code in PR #15648:
URL: https://github.com/apache/tvm/pull/15648#discussion_r1315722536
##
python/tvm/topi/arm_cpu/conv2d_gemm.py:
##
@@ -326,7 +328,12 @@ def schedule_conv2d_gemm_interleaved(cfg, s, out,
final_out):
b, m, n = data_im2col.op.axis
Anndrey24 commented on PR #15648:
URL: https://github.com/apache/tvm/pull/15648#issuecomment-1706406306
Regarding the tests, it seems that all of the padding cases are already
visited at least once in:
Thrsu commented on issue #15669:
URL: https://github.com/apache/tvm/issues/15669#issuecomment-1706694799
> Hi @Thrsu perhaps you can try the suggestion here: [#9362
(comment)](https://github.com/apache/tvm/issues/9362#issuecomment-955263494)
Thank you for your prompt response!
Lunderberg commented on PR #15627:
URL: https://github.com/apache/tvm/pull/15627#issuecomment-1706744358
> Thanks @Lunderberg . One thing that we should consider is the overall
overhead of simplifier and code simplicity. Ideally the deduction should be a
fast path
Thank you for the
adstraw commented on code in PR #15656:
URL: https://github.com/apache/tvm/pull/15656#discussion_r1315918041
##
include/tvm/tir/builtin.h:
##
@@ -645,14 +645,29 @@ TVM_DLL const Op& ptx_mma_sp();
TVM_DLL const Op& ptx_ldmatrix();
/*!
- * \brief tvm intrinsics for ptx async
Lunderberg opened a new pull request, #15672:
URL: https://github.com/apache/tvm/pull/15672
This commit introduces a new container, `Variant`, which is analogous to the
`std::variant` introduced in C++17, the `enum` in Rust, or a tagged union in C.
The `Variant` class is templated over
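Although the PR itself is C++, the tagged-union idea behind `Variant` can be sketched in Python (all names below are illustrative, not the TVM `Variant` API): a value holds exactly one of several alternatives, and consumers dispatch on the active one, much like visiting a `std::variant`.

```python
from dataclasses import dataclass
from typing import Union

# Two alternative payloads; a Length is exactly one of them at a time.
@dataclass
class Inches:
    value: float

@dataclass
class Centimeters:
    value: float

Length = Union[Inches, Centimeters]

def to_cm(x: Length) -> float:
    # Dispatch on the active alternative, like visiting a variant.
    if isinstance(x, Inches):
        return x.value * 2.54
    return x.value

assert to_cm(Inches(2.0)) == 5.08
assert to_cm(Centimeters(3.0)) == 3.0
```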
lhutton1 commented on PR #15648:
URL: https://github.com/apache/tvm/pull/15648#issuecomment-1706540666
Awesome, thanks for checking!
bkmgit closed pull request #13491: [Docs][Rust] Clarify Rust dependencies
URL: https://github.com/apache/tvm/pull/13491
bkmgit closed pull request #13249: [Docs][Rust] Add link to information on
stable Rust release
URL: https://github.com/apache/tvm/pull/13249
bkmgit closed pull request #13493: [Bug][Rust] Fix variable type mismatch
URL: https://github.com/apache/tvm/pull/13493
Civitasv closed issue #15505: [Bug] [Relay] [MetaSchedule] ValueError: The
block no longer exists in the IRModule
URL: https://github.com/apache/tvm/issues/15505
jinhongyii opened a new pull request, #15660:
URL: https://github.com/apache/tvm/pull/15660
(no comment)
Lunderberg commented on code in PR #15577:
URL: https://github.com/apache/tvm/pull/15577#discussion_r1313927771
##
tests/python/relax/test_bind_symbolic_vars.py:
##
@@ -183,6 +183,91 @@ def expected(A: R.Tensor(["M", 16])):
tvm.ir.assert_structural_equal(expected, after)
junrushao merged PR #15655:
URL: https://github.com/apache/tvm/pull/15655
vvchernov commented on code in PR #15386:
URL: https://github.com/apache/tvm/pull/15386#discussion_r1314148994
##
python/tvm/relay/frontend/pytorch.py:
##
@@ -4291,7 +4291,11 @@ def _handel_nested_input(inputs):
self.current_op.pop()
-return
vvchernov commented on code in PR #15386:
URL: https://github.com/apache/tvm/pull/15386#discussion_r1314142490
##
python/tvm/relay/frontend/pytorch.py:
##
@@ -4291,7 +4291,11 @@ def _handel_nested_input(inputs):
self.current_op.pop()
-return
haoyang9804 commented on code in PR #15386:
URL: https://github.com/apache/tvm/pull/15386#discussion_r1314145709
##
python/tvm/relay/frontend/pytorch.py:
##
@@ -4291,7 +4291,11 @@ def _handel_nested_input(inputs):
self.current_op.pop()
-return
junrushao merged PR #15660:
URL: https://github.com/apache/tvm/pull/15660
jinhongyii commented on PR #15660:
URL: https://github.com/apache/tvm/pull/15660#issuecomment-1703875786
cc: @junrushao
csullivan commented on issue #15101:
URL: https://github.com/apache/tvm/issues/15101#issuecomment-1707069136
Thank you for bringing support for heterogeneous graphs @yongwww! Shall we
close this as completed?
jinhongyii commented on PR #15673:
URL: https://github.com/apache/tvm/pull/15673#issuecomment-1707323862
cc: @junrushao @tqchen
junrushao commented on code in PR #15673:
URL: https://github.com/apache/tvm/pull/15673#discussion_r1316421990
##
src/runtime/disco/loader.cc:
##
@@ -93,9 +93,18 @@ inline std::vector ShardShape(const ShapeTuple& shape, i
return result;
}
+class ShardInfoCompare {
+
jinhongyii opened a new pull request, #15674:
URL: https://github.com/apache/tvm/pull/15674
1. Add a wrapper for copytoworker0
2. Change DRef and Session to nullable for easy initialization
3. Add a "func_exists" function in executable
jinhongyii commented on PR #15674:
URL: https://github.com/apache/tvm/pull/15674#issuecomment-1707327306
cc: @junrushao @tqchen
kparzysz-quic merged PR #15664:
URL: https://github.com/apache/tvm/pull/15664
csullivan merged PR #15636:
URL: https://github.com/apache/tvm/pull/15636
junrushao commented on code in PR #15673:
URL: https://github.com/apache/tvm/pull/15673#discussion_r1316423495
##
src/runtime/disco/loader.cc:
##
@@ -43,7 +43,7 @@ class ShardLoaderObj : public Object {
/*! \brief Create a shard loader. */
static ObjectRef Create(const
ashutosh-arm commented on code in PR #15667:
URL: https://github.com/apache/tvm/pull/15667#discussion_r1316163715
##
tests/scripts/task_demo_microtvm.sh:
##
@@ -20,6 +20,12 @@ set -euxo pipefail
source tests/scripts/setup-pytest-env.sh
+cleanup()
+{
+rm -f out.log
+}
junrushao commented on PR #15636:
URL: https://github.com/apache/tvm/pull/15636#issuecomment-1707119861
This is awesome work! Thanks @yongwww!
cyx-6 opened a new pull request, #15675:
URL: https://github.com/apache/tvm/pull/15675
This is a cherry-pick commit from https://github.com/mlc-ai/relax/pull/282,
to enable OpenCL on Google Pixel devices.
cc: @alex4o
yongwww commented on PR #15636:
URL: https://github.com/apache/tvm/pull/15636#issuecomment-1706957074
I will land it if there are no further comments. cc: @psrivas2 @junrushao @csullivan
@Lunderberg
masahi merged PR #15656:
URL: https://github.com/apache/tvm/pull/15656
jinhongyii opened a new pull request, #15673:
URL: https://github.com/apache/tvm/pull/15673
1. support default f_shard
2. add API ShardLoaderLoadAll
3. broadcast when no sharding dim is specified
junrushao commented on code in PR #15670:
URL: https://github.com/apache/tvm/pull/15670#discussion_r1316271852
##
python/tvm/relax/frontend/nn/core.py:
##
@@ -39,6 +39,7 @@
import numpy as np
+import tvm
Review Comment:
don't import tvm directly inside the tvm package
jinhongyii commented on code in PR #15673:
URL: https://github.com/apache/tvm/pull/15673#discussion_r1316425796
##
src/runtime/disco/loader.cc:
##
@@ -43,7 +43,7 @@ class ShardLoaderObj : public Object {
/*! \brief Create a shard loader. */
static ObjectRef Create(const
junrushao opened a new pull request, #15662:
URL: https://github.com/apache/tvm/pull/15662
This PR introduces `nn.MultiLinear` as an extension to `nn.Linear` which is
arithmetically equivalent to applying multiple `nn.Linear` to the same input.
For example, QKV projection in
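The arithmetic equivalence described above can be sketched with NumPy (an illustrative sketch only, not the `nn.MultiLinear` API): concatenating the weight matrices of several linear layers yields a single matmul whose split outputs match the separate projections.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))  # a batch of inputs
# Three weight matrices, e.g. standing in for Q, K, and V projections.
w_q, w_k, w_v = (rng.standard_normal((8, 4)) for _ in range(3))

# Three separate linear applications to the same input...
separate = [x @ w for w in (w_q, w_k, w_v)]

# ...versus one fused projection over concatenated weights, then a split.
fused = x @ np.concatenate([w_q, w_k, w_v], axis=1)
q, k, v = np.split(fused, 3, axis=1)

for a, b in zip(separate, (q, k, v)):
    assert np.allclose(a, b)
```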
junrushao commented on PR #15662:
URL: https://github.com/apache/tvm/pull/15662#issuecomment-1704429272
CC: @MasterJH5574 @jwfromm
echuraev merged PR #15386:
URL: https://github.com/apache/tvm/pull/15386
junrushao commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316532945
##
include/tvm/runtime/disco/session.h:
##
@@ -236,7 +248,7 @@ class Session : public ObjectRef {
public:
/*! \brief Create a session backed by a thread pool of
spectrometerHBH merged PR #15675:
URL: https://github.com/apache/tvm/pull/15675
junrushao commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316514867
##
include/tvm/runtime/disco/session.h:
##
@@ -155,7 +155,7 @@ class DRefObj : public Object {
*/
class DRef : public ObjectRef {
public:
-
jinhongyii commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316538803
##
include/tvm/runtime/disco/session.h:
##
@@ -236,7 +248,7 @@ class Session : public ObjectRef {
public:
/*! \brief Create a session backed by a thread pool of
MasterJH5574 opened a new pull request, #15677:
URL: https://github.com/apache/tvm/pull/15677
This PR fixes an issue in the IterMapRewriter. Prior to this PR, the
mutation function of the rewriter class returned the mutation result even when
an invalid PrimExpr pattern was detected.
LeshengJin opened a new pull request, #15680:
URL: https://github.com/apache/tvm/pull/15680
This PR introduces the op scatter_from_worker0, which performs a scatter
operation from worker-0, chunking the given buffer into equal parts.
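The scatter semantics described above can be sketched with NumPy (an assumed illustration, not the actual disco API): worker-0 holds the full buffer, and each of N workers receives one equal chunk.

```python
import numpy as np

num_workers = 4
buffer_on_worker0 = np.arange(16)  # full buffer, resident on worker-0
# np.split requires the buffer to divide evenly into equal parts.
chunks = np.split(buffer_on_worker0, num_workers)

# Worker i would receive chunks[i].
assert [c.tolist() for c in chunks] == [
    [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]
]
```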
jikechao closed issue #15004: [Bug][Relay] func ` _expr.const(c)` get an
unexpected crash: 'NoneType' object has no attribute 'dtype'
URL: https://github.com/apache/tvm/issues/15004
rutkoor opened a new pull request, #15679:
URL: https://github.com/apache/tvm/pull/15679
This patch adds support for non-bijective transformations in the alter-op-impl
pass. The idea is to introduce a remove_pad operator, which copies non-padded
buffer entries from the padded buffer. This additional
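The remove_pad idea can be sketched with NumPy (illustrative names only, not the TVM operator): a non-bijective layout transform may pad a buffer, and "remove_pad" copies only the original, non-padded entries back out.

```python
import numpy as np

orig = np.arange(6)                     # logical buffer of 6 elements
padded = np.zeros(8, dtype=orig.dtype)  # padded to a multiple of 4
padded[:orig.size] = orig

def remove_pad(buf, logical_size):
    # Copy the non-padded prefix out of the padded buffer.
    return buf[:logical_size].copy()

assert np.array_equal(remove_pad(padded, 6), orig)
```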
junrushao commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316514183
##
include/tvm/runtime/disco/session.h:
##
@@ -195,6 +195,18 @@ class SessionObj : public Object {
* \param host_array The array to be copied to worker-0
*
junrushao commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316513189
##
include/tvm/runtime/disco/session.h:
##
@@ -236,7 +248,7 @@ class Session : public ObjectRef {
public:
/*! \brief Create a session backed by a thread pool of
junrushao commented on PR #14460:
URL: https://github.com/apache/tvm/pull/14460#issuecomment-1707430345
Note on unit tests: in the future, we want to migrate
`check_correctness`-based numerical tests to IR structural equality tests, which
are much more accurate and to-the-point
abhikran-quic opened a new pull request, #15678:
URL: https://github.com/apache/tvm/pull/15678
The Relax `AlterOpImpl` pass introduces `layout_transform` operations. If the
layouts match for consecutive `layout_transform` operations, they can be
cancelled out. This pass tries to optimize
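Why consecutive inverse `layout_transform` ops cancel can be sketched with NumPy (illustrative only): transforming NCHW to NHWC and back is the identity on the data, so the pair can be elided.

```python
import numpy as np

x = np.arange(24).reshape(1, 2, 3, 4)  # NCHW layout

def to_nhwc(t):
    # layout_transform: NCHW -> NHWC
    return t.transpose(0, 2, 3, 1)

def to_nchw(t):
    # its inverse: NHWC -> NCHW
    return t.transpose(0, 3, 1, 2)

# The composed pair is the identity, so both transforms can be removed.
assert np.array_equal(to_nchw(to_nhwc(x)), x)
```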
junrushao merged PR #15668:
URL: https://github.com/apache/tvm/pull/15668
tqchen commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316512081
##
include/tvm/runtime/disco/session.h:
##
@@ -195,6 +195,18 @@ class SessionObj : public Object {
* \param host_array The array to be copied to worker-0
* \param
csullivan opened a new pull request, #15676:
URL: https://github.com/apache/tvm/pull/15676
(no comment)
jinhongyii commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316518795
##
include/tvm/runtime/disco/session.h:
##
@@ -236,7 +248,7 @@ class Session : public ObjectRef {
public:
/*! \brief Create a session backed by a thread pool of
jinhongyii commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316539990
##
include/tvm/runtime/disco/session.h:
##
@@ -195,6 +195,18 @@ class SessionObj : public Object {
* \param host_array The array to be copied to worker-0
*
jinhongyii commented on code in PR #15674:
URL: https://github.com/apache/tvm/pull/15674#discussion_r1316518795
##
include/tvm/runtime/disco/session.h:
##
@@ -236,7 +248,7 @@ class Session : public ObjectRef {
public:
/*! \brief Create a session backed by a thread pool of
lhutton1 commented on PR #15649:
URL: https://github.com/apache/tvm/pull/15649#issuecomment-1705457161
Thanks @toyowata @ashutosh-arm!
lhutton1 merged PR #15649:
URL: https://github.com/apache/tvm/pull/15649
lhutton1 commented on issue #15643:
URL: https://github.com/apache/tvm/issues/15643#issuecomment-1705457832
Closing as fixed by #15649
lhutton1 closed issue #15643: [Bug] [Ethos] apps/microtvm/ethosu/run_demo.sh
gets wrong inference result
URL: https://github.com/apache/tvm/issues/15643
adstraw opened a new pull request, #15684:
URL: https://github.com/apache/tvm/pull/15684
This PR adds an intrinsic to create barriers that can be used with existing
barrier intrinsics for synchronization. The prior method of barrier allocation
was to use `alloc_buffer` e.g. as follows
lhutton1 merged PR #15667:
URL: https://github.com/apache/tvm/pull/15667
lhutton1 commented on PR #15667:
URL: https://github.com/apache/tvm/pull/15667#issuecomment-1708449510
Thanks @ashutosh-arm!
tlopex opened a new pull request, #15681:
URL: https://github.com/apache/tvm/pull/15681
This commit enables the convert_square function for quantized tensors. The
SQUARE op is covered by the existing tests.
cc @leandron
tqchen merged PR #15666:
URL: https://github.com/apache/tvm/pull/15666
Hzfengsy commented on PR #15645:
URL: https://github.com/apache/tvm/pull/15645#issuecomment-1708398787
Thanks @Archermmt!
Hzfengsy merged PR #15645:
URL: https://github.com/apache/tvm/pull/15645
haoyang9804 commented on issue #15282:
URL: https://github.com/apache/tvm/issues/15282#issuecomment-1708161759
Here is my analysis after carefully reading the source code.
Function `InstanceNormRel` in `relay/op/nn/nn.cc` defines `instance_norm`'s
type constraints.
In
tqchen merged PR #15677:
URL: https://github.com/apache/tvm/pull/15677
tqchen commented on PR #15627:
URL: https://github.com/apache/tvm/pull/15627#issuecomment-1708210343
Thanks @Lunderberg, let me elaborate a bit since this is indeed a design
tradeoff. There are a few factors that go into this:
- (a) overall readability and predictability of the
youxiudeshouyeren opened a new pull request, #15682:
URL: https://github.com/apache/tvm/pull/15682
When I specify the ReLU6 activation function in yolov5, this redundancy
occurs.
For example:
![image](https://github.com/apache/tvm/assets/41790911/add17bfa-2f3c-4553-b8fe-7c140e50d847)
youxiudeshouyeren commented on PR #15682:
URL: https://github.com/apache/tvm/pull/15682#issuecomment-1708277499
Code Review: @Hzfengsy @junrushao @Lunderberg Thanks!
tqchen commented on PR #15577:
URL: https://github.com/apache/tvm/pull/15577#issuecomment-1708294750
cc @junrushao @Hzfengsy
haoyang9804 commented on issue #15282:
URL: https://github.com/apache/tvm/issues/15282#issuecomment-1708279654
Update:
In the above analysis, I concluded that `float64` in `InstanceNormRel`
should be `Tensor[(1), float64]`. It seems this scalar type comes from
`_create_typed_const`
lhutton1 commented on code in PR #15667:
URL: https://github.com/apache/tvm/pull/15667#discussion_r1316903804
##
tests/scripts/task_demo_microtvm.sh:
##
@@ -34,6 +40,18 @@ FVP_PATH="/opt/arm/FVP_Corstone_SSE-300_Ethos-U55"
CMAKE_PATH="/opt/arm/cmake/bin/cmake"
chunit-quic commented on PR #13728:
URL: https://github.com/apache/tvm/pull/13728#issuecomment-1707891633
Hi @xiaolong18
Thank you for your information. Currently we are working on some urgent
projects, and we may only be able to get back to this at the end of October.
But we might
LeshengJin opened a new pull request, #15670:
URL: https://github.com/apache/tvm/pull/15670
- Enable tuple/list input for models, with nesting support.
- Add GPU schedule Pass when `device` is not cpu.
- Correct the order of inputs for JIT execution.
xiaolong18 commented on PR #13728:
URL: https://github.com/apache/tvm/pull/13728#issuecomment-1706144376
@chunit-quic Thank you for your work. I want to ask you a question: I
tested the
tests/python/frontend/tensorflow/test_forward.py::test_forward_dynmaic_rnn_lstmblockcell
and it reported
masahi commented on code in PR #15656:
URL: https://github.com/apache/tvm/pull/15656#discussion_r1315592599
##
include/tvm/tir/builtin.h:
##
@@ -645,14 +645,29 @@ TVM_DLL const Op& ptx_mma_sp();
TVM_DLL const Op& ptx_ldmatrix();
/*!
- * \brief tvm intrinsics for ptx async
lhutton1 commented on code in PR #15648:
URL: https://github.com/apache/tvm/pull/15648#discussion_r1315667771
##
python/tvm/topi/arm_cpu/conv2d_gemm.py:
##
@@ -428,12 +435,29 @@ def schedule_conv2d_gemm_native(cfg, s, out, final_out):
b, m, n = data_im2col.op.axis
lhutton1 commented on issue #15669:
URL: https://github.com/apache/tvm/issues/15669#issuecomment-1706123357
Hi @Thrsu perhaps you can try the suggestion here:
https://github.com/apache/tvm/issues/9362#issuecomment-955263494
echuraev opened a new pull request, #15671:
URL: https://github.com/apache/tvm/pull/15671
In the VM, `fn->attrs` doesn't contain information about `kernel_layout`, so we
can get this value from `expr_attrib`. In this PR, the function `CanUseBuffers`
was modified to work with the VM.
A new test
A new test
masahi commented on code in PR #15656:
URL: https://github.com/apache/tvm/pull/15656#discussion_r1315572243
##
python/tvm/tir/op.py:
##
@@ -1458,16 +1512,42 @@ def ptx_arrive_barrier(barrier_arr, barrier_id):
return call_intrin("", "tir.ptx_arrive_barrier", barrier_arr,
jinhongyii commented on PR #15674:
URL: https://github.com/apache/tvm/pull/15674#issuecomment-1709407700
please review @junrushao @tqchen
slyubomirsky commented on code in PR #15577:
URL: https://github.com/apache/tvm/pull/15577#discussion_r1317951627
##
tests/python/relax/test_bind_symbolic_vars.py:
##
@@ -183,6 +183,91 @@ def expected(A: R.Tensor(["M", 16])):
tvm.ir.assert_structural_equal(expected, after)
LeshengJin commented on code in PR #15670:
URL: https://github.com/apache/tvm/pull/15670#discussion_r1318044932
##
python/tvm/relax/frontend/nn/core.py:
##
@@ -411,6 +412,10 @@ def jit( # pylint: disable=too-many-arguments
# Compile mod and feed it to VM
echuraev commented on code in PR #15683:
URL: https://github.com/apache/tvm/pull/15683#discussion_r1318118522
##
python/tvm/relay/frontend/pytorch.py:
##
@@ -4424,7 +4424,10 @@ def _create_typed_const(data, dtype):
dtype should be a TVM dtype"""
if dtype ==
LeshengJin commented on code in PR #15670:
URL: https://github.com/apache/tvm/pull/15670#discussion_r1318045404
##
python/tvm/relax/frontend/nn/spec.py:
##
@@ -59,6 +59,32 @@ def __repr__(self) -> str:
return f"Tensor([{shape}], '{self.dtype}')"
+class TupleList:
+
LeshengJin commented on code in PR #15670:
URL: https://github.com/apache/tvm/pull/15670#discussion_r1318045540
##
python/tvm/relax/frontend/nn/spec.py:
##
@@ -59,6 +59,32 @@ def __repr__(self) -> str:
return f"Tensor([{shape}], '{self.dtype}')"
+class TupleList:
+
LeshengJin commented on code in PR #15670:
URL: https://github.com/apache/tvm/pull/15670#discussion_r1318047379
##
python/tvm/relax/frontend/nn/spec.py:
##
@@ -59,6 +61,25 @@ def __repr__(self) -> str:
return f"Tensor([{shape}], '{self.dtype}')"
+class Tuple:
+
Thrsu opened a new issue, #15722:
URL: https://github.com/apache/tvm/issues/15722
I encountered this TVMError when attempting to load an [ONNX
model](https://github.com/Thrsu/onnx_model/blob/main/Add.onnx) containing the
Add operator using TVM's relax module.
It seems that relax.add
Hzfengsy commented on PR #15692:
URL: https://github.com/apache/tvm/pull/15692#issuecomment-1712797037
Please make the CI green :)
cbalint13 commented on PR #15685:
URL: https://github.com/apache/tvm/pull/15685#issuecomment-1712846790
> I am strongly opting for a backward compatibility for this case.
> Allow a little time (1-2 day) to investigate a way going down `llvm<11`,
then I'll be back with the results.
Thrsu opened a new issue, #15721:
URL: https://github.com/apache/tvm/issues/15721
I encountered an InternalError while attempting to run an [ONNX
model](https://github.com/Thrsu/onnx_model/blob/main/Split.onnx) with the split
operator using TVM's relax module. The error message I received
junrushao opened a new pull request, #15720:
URL: https://github.com/apache/tvm/pull/15720
This PR contains necessary tweaks to run Llama2 in Tensor Parallelism.