[GitHub] [tvm] cbalint13 commented on pull request #15685: [Target][TOPI] Use LLVM for x86 CPU feature lookup

2023-09-14 Thread via GitHub
cbalint13 commented on PR #15685: URL: https://github.com/apache/tvm/pull/15685#issuecomment-1720011129 > I am very excited about this feature and cannot wait to try it out myself! Thank you @cbalint13 for this super well-documented and well-tested PR, and it's going to be super useful for

[GitHub] [tvm] junrushao commented on pull request #15706: [TVMScript] Disable `black_format` by default

2023-09-15 Thread via GitHub
junrushao commented on PR #15706: URL: https://github.com/apache/tvm/pull/15706#issuecomment-1721978241 Thanks @Lunderberg! 100% agreed with your comments, and particularly, feeling the same as you that the previous generation of TVMScript printer comes with many subtle issues making it

[GitHub] [tvm] MasterJH5574 merged pull request #15752: [Relay][Bugfix] fix the wrong calculate logic of operator flip in PyTorch frontend

2023-09-15 Thread via GitHub
MasterJH5574 merged PR #15752: URL: https://github.com/apache/tvm/pull/15752

[GitHub] [tvm] junrushao merged pull request #15741: [NN][Op]ConvTranspose1D

2023-09-15 Thread via GitHub
junrushao merged PR #15741: URL: https://github.com/apache/tvm/pull/15741

[GitHub] [tvm] junrushao commented on pull request #15762: [TVMScript] Use environment variable TVM_BLACK_FORMAT for .show()

2023-09-15 Thread via GitHub
junrushao commented on PR #15762: URL: https://github.com/apache/tvm/pull/15762#issuecomment-1721978647 Love this PR. Moving my comments from the [previous one](https://github.com/apache/tvm/pull/15706#issuecomment-1721978241). > Thanks @Lunderberg! 100% agreed with your comments,
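
A quick sketch of the workflow this PR enables. The variable name `TVM_BLACK_FORMAT` and the `.show()` hook come from the PR title; the accepted values ("1"/"0") are an assumption here, and the example PrimFunc is purely illustrative.
```python
import os

# opt in to black-formatted TVMScript output (flag value assumed)
os.environ["TVM_BLACK_FORMAT"] = "1"

from tvm.script import tir as T

@T.prim_func
def add_one(A: T.Buffer((8,), "float32"), B: T.Buffer((8,), "float32")):
    for i in range(8):
        B[i] = A[i] + T.float32(1)

add_one.show()  # prints the TVMScript, run through black when the flag is set
```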

[GitHub] [tvm] junrushao commented on pull request #15762: [TVMScript] Use environment variable TVM_BLACK_FORMAT for .show()

2023-09-15 Thread via GitHub
junrushao commented on PR #15762: URL: https://github.com/apache/tvm/pull/15762#issuecomment-1721979373 BTW, `line_length` is something we could configure in black: https://github.com/psf/black/blob/19.3b0/black.py#L168
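
For reference, a minimal sketch of driving black programmatically with a custom line length; recent black releases call the config object `black.Mode` (the 19.3b0 source linked above named it `FileMode`).
```python
import black

src = "def f(a,b,c): return a+b+c\n"
# reformat with a 100-character limit instead of black's default of 88
formatted = black.format_str(src, mode=black.Mode(line_length=100))
print(formatted)
```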

[GitHub] [tvm] junrushao merged pull request #15743: [Unity][DLight] Fix outer_reduction for dynamic shape workloads

2023-09-15 Thread via GitHub
junrushao merged PR #15743: URL: https://github.com/apache/tvm/pull/15743

[GitHub] [tvm] junrushao commented on pull request #15764: [Disco] Add AllGather

2023-09-15 Thread via GitHub
junrushao commented on PR #15764: URL: https://github.com/apache/tvm/pull/15764#issuecomment-1722072728 CC @LeshengJin

[GitHub] [tvm] Hzfengsy commented on pull request #15762: [TVMScript] Use environment variable TVM_BLACK_FORMAT for .show()

2023-09-15 Thread via GitHub
Hzfengsy commented on PR #15762: URL: https://github.com/apache/tvm/pull/15762#issuecomment-1722143981 Great PR! It's a great solution to make people with different tastes happy! Thanks @Lunderberg

[GitHub] [tvm] junrushao commented on pull request #15748: [BugFix] Move symbols that are relevant to the runtime from libtvm to…

2023-09-16 Thread via GitHub
junrushao commented on PR #15748: URL: https://github.com/apache/tvm/pull/15748#issuecomment-1722156796 CC: @yelite

[GitHub] [tvm] MasterJH5574 commented on pull request #15517: [TIR] Shuffle in PointerValueTypeRewrite for scalar reads

2023-09-16 Thread via GitHub
MasterJH5574 commented on PR #15517: URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722362012 Hi @vinx13, we noticed that this PR breaks the WebGPU codegen as the WebGPU codegen right now does not support tir::ShuffleNode. Therefore, exceptions are thrown in pass

[GitHub] [tvm] tlopex opened a new pull request, #15769: [TFLite][Frontend] Support quantized NOT_EQUAL

2023-09-17 Thread via GitHub
tlopex opened a new pull request, #15769: URL: https://github.com/apache/tvm/pull/15769 Support NOT_EQUAL quantization operation as part of #15148

[GitHub] [tvm] tlopex closed pull request #15767: [TFLite][Frontend] Support quantized NOT_EQUAL

2023-09-17 Thread via GitHub
tlopex closed pull request #15767: [TFLite][Frontend] Support quantized NOT_EQUAL URL: https://github.com/apache/tvm/pull/15767

[GitHub] [tvm] p3achyjr opened a new pull request, #15768: [TFLite][Frontend] Support quantized div

2023-09-17 Thread via GitHub
p3achyjr opened a new pull request, #15768: URL: https://github.com/apache/tvm/pull/15768 As per https://github.com/apache/tvm/issues/15148, adding support for div.

[GitHub] [tvm] slyubomirsky commented on pull request #15627: [Unity][Analysis] Improve handling of symbolic variables

2023-09-15 Thread via GitHub
slyubomirsky commented on PR #15627: URL: https://github.com/apache/tvm/pull/15627#issuecomment-1721793381 I like the idea about having control over the amount of simplification, for what it's worth. That said, I would be surprised if the arithmetic analyzer turns out to be a performance

[GitHub] [tvm] tlopex opened a new pull request, #15767: Support NOT_EQUAL quantization operation as part of #15148

2023-09-17 Thread via GitHub
tlopex opened a new pull request, #15767: URL: https://github.com/apache/tvm/pull/15767 (no comment)

[GitHub] [tvm] junrushao closed pull request #15765: [CI] Remove Unused GitHub Workflow

2023-09-17 Thread via GitHub
junrushao closed pull request #15765: [CI] Remove Unused GitHub Workflow URL: https://github.com/apache/tvm/pull/15765

[GitHub] [tvm] junrushao commented on pull request #15765: [CI] Remove Unused GitHub Workflow

2023-09-17 Thread via GitHub
junrushao commented on PR #15765: URL: https://github.com/apache/tvm/pull/15765#issuecomment-1722423304 Sorry, wrong repo

[GitHub] [tvm] yzh119 commented on a diff in pull request #15760: [Runtime][CUDA] NVTX Integration

2023-09-17 Thread via GitHub
yzh119 commented on code in PR #15760: URL: https://github.com/apache/tvm/pull/15760#discussion_r1328057772 ## cmake/modules/CUDA.cmake: ## @@ -81,6 +81,14 @@ if(USE_CUDA) list(APPEND RUNTIME_SRCS ${CONTRIB_CURAND_SRC_CU}) endif(USE_CURAND) + if(USE_NVTX) +

[GitHub] [tvm] vinx13 commented on pull request #15517: [TIR] Shuffle in PointerValueTypeRewrite for scalar reads

2023-09-17 Thread via GitHub
vinx13 commented on PR #15517: URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722417049 Is it possible to support it in codegen? Usually this can be supported via element extraction, e.g. `vec.x/y/z`. Alternatively we can set `rewrite_scalar_access_to_vector_shuffle` to false
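
For context, this is roughly the construct the pass now emits: a scalar read of a vectorized buffer expressed as a single-lane shuffle. A minimal sketch built with the Python TIR API; the buffer name and shapes are illustrative only.
```python
import tvm
from tvm import tir

A = tir.decl_buffer((16,), "float32", name="A")
# a 4-lane vectorized load of A[0:4]
vec = tir.BufferLoad(A, [tir.Ramp(tir.const(0, "int32"), tir.const(1, "int32"), 4)])
# the scalar read A[0], expressed as lane extraction via shuffle
scalar = tir.Shuffle([vec], [tir.const(0, "int32")])
print(scalar)
```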

[GitHub] [tvm] junrushao opened a new pull request, #15765: [CI] Remove Unused GitHub Workflow

2023-09-17 Thread via GitHub
junrushao opened a new pull request, #15765: URL: https://github.com/apache/tvm/pull/15765 (no comment)

[GitHub] [tvm] tqchen commented on pull request #15517: [TIR] Shuffle in PointerValueTypeRewrite for scalar reads

2023-09-17 Thread via GitHub
tqchen commented on PR #15517: URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722455791 I think we should support it via codegen

[GitHub] [tvm] junrushao opened a new pull request, #15766: [Disco] Integrate NCCL with CUDAGraph

2023-09-17 Thread via GitHub
junrushao opened a new pull request, #15766: URL: https://github.com/apache/tvm/pull/15766 This PR integrates NCCL with CUDAGraph by using the default stream that CUDAGraph uses for NCCL communication.

[GitHub] [tvm] junrushao commented on pull request #15766: [Disco] Integrate NCCL with CUDAGraph

2023-09-17 Thread via GitHub
junrushao commented on PR #15766: URL: https://github.com/apache/tvm/pull/15766#issuecomment-1722424384 CC: @vinx13 @jinhongyii

[GitHub] [tvm] MasterJH5574 merged pull request #15763: [TIR] Do not drop 4th argument to tir.max

2023-09-17 Thread via GitHub
MasterJH5574 merged PR #15763: URL: https://github.com/apache/tvm/pull/15763

[GitHub] [tvm] junrushao commented on pull request #15760: [Runtime][CUDA] NVTX Integration

2023-09-17 Thread via GitHub
junrushao commented on PR #15760: URL: https://github.com/apache/tvm/pull/15760#issuecomment-1722605355 This PR is ready for review! CC: @yzh119 @vinx13

[GitHub] [tvm] CharlieFRuan commented on pull request #15517: [TIR] Shuffle in PointerValueTypeRewrite for scalar reads

2023-09-17 Thread via GitHub
CharlieFRuan commented on PR #15517: URL: https://github.com/apache/tvm/pull/15517#issuecomment-1722601651 Will look into adding Shuffle support for WebGPU!

[GitHub] [tvm] MasterJH5574 merged pull request #15748: [BugFix] Move symbols that are relevant to the runtime from libtvm to…

2023-09-17 Thread via GitHub
MasterJH5574 merged PR #15748: URL: https://github.com/apache/tvm/pull/15748

[GitHub] [tvm] gmeeker opened a new issue, #15771: [Bug] Tune fails on macOS

2023-09-17 Thread via GitHub
gmeeker opened a new issue, #15771: URL: https://github.com/apache/tvm/issues/15771 tvmc tune appears to have broken between 0.12.0 and 0.13.0.
### Expected behavior
Produce an autotuner json file (which worked in 0.12.0).
### Actual behavior
```
[Task 1/25]
```
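
For anyone trying to reproduce: the documented TVMC Python driver exercises the same tuning path; the model path below is a placeholder.
```python
from tvm.driver import tvmc

model = tvmc.load("my_model.onnx")  # placeholder model
# on 0.12.0 this produces autotuner records; the report is that 0.13.0 fails here
tuning_records = tvmc.tune(model, target="llvm")
```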

[GitHub] [tvm] junrushao closed pull request #15755: [Disco] Use Non-Null Stream for Compute

2023-09-17 Thread via GitHub
junrushao closed pull request #15755: [Disco] Use Non-Null Stream for Compute URL: https://github.com/apache/tvm/pull/15755

[GitHub] [tvm] quic-sanirudh merged pull request #15537: [CPP_RPC] export listdir for RPC

2023-08-24 Thread via GitHub
quic-sanirudh merged PR #15537: URL: https://github.com/apache/tvm/pull/15537

[GitHub] [tvm] adstraw closed pull request #15613: CUDA async copy with barrier synchronization

2023-08-24 Thread via GitHub
adstraw closed pull request #15613: CUDA async copy with barrier synchronization URL: https://github.com/apache/tvm/pull/15613

[GitHub] [tvm] adstraw opened a new pull request, #15616: CUDA codegen for async copy with barrier synchronization

2023-08-24 Thread via GitHub
adstraw opened a new pull request, #15616: URL: https://github.com/apache/tvm/pull/15616 Adds codegen support only for CUDA async copy with barrier synchronization. TVM currently uses group based synchronization for async copies through the InjectSoftwarePipeline pass but new "bulk" async

[GitHub] [tvm] tvm-bot commented on pull request #15616: CUDA codegen for async copy with barrier synchronization

2023-08-24 Thread via GitHub
tvm-bot commented on PR #15616: URL: https://github.com/apache/tvm/pull/15616#issuecomment-1691644889 Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews

[GitHub] [tvm-rfcs] ekalda commented on pull request #104: [RFC] Scalable vectors in TIR

2023-08-24 Thread via GitHub
ekalda commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1692114269 Tagging some people who have been involved in related discussions before: @tqchen @kparzysz-quic @masahi

[GitHub] [tvm] jwfromm opened a new pull request, #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-08-31 Thread via GitHub
jwfromm opened a new pull request, #15650: URL: https://github.com/apache/tvm/pull/15650 This PR adds the `debug` argument to `export_tvm`. When `debug` is `False`, effects are not included in the output graph. This can make deploying models less cumbersome since it's not often they'll use
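
A sketch of what the new flag looks like at a call site. Only the `debug` argument comes from this PR; the module definition and spec below are hypothetical illustrations of the existing `nn` frontend API.
```python
from tvm.relax.frontend import nn

class Linear(nn.Module):  # hypothetical module for illustration
    def __init__(self):
        self.weight = nn.Parameter((4, 4), dtype="float32")

    def forward(self, x: nn.Tensor):
        return nn.op.matmul(x, self.weight)

mod, params = Linear().export_tvm(
    spec={"forward": {"x": nn.spec.Tensor((1, 4), "float32")}},
    debug=False,  # leave effects (e.g. print/IO state) out of the exported graph
)
```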

[GitHub] [tvm] junrushao merged pull request #15647: [Unity][Dlight] GeMV rule max_num_threads awareness

2023-08-31 Thread via GitHub
junrushao merged PR #15647: URL: https://github.com/apache/tvm/pull/15647

[GitHub] [tvm] jwfromm commented on a diff in pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-08-31 Thread via GitHub
jwfromm commented on code in PR #15650: URL: https://github.com/apache/tvm/pull/15650#discussion_r1312212862 ## python/tvm/relax/frontend/nn/spec.py: ## @@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]: # pylint: enable=protected-access

[GitHub] [tvm] junrushao commented on a diff in pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-08-31 Thread via GitHub
junrushao commented on code in PR #15650: URL: https://github.com/apache/tvm/pull/15650#discussion_r1311911458 ## python/tvm/relax/frontend/nn/spec.py: ## @@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]: # pylint: enable=protected-access

[GitHub] [tvm] junrushao commented on a diff in pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-08-31 Thread via GitHub
junrushao commented on code in PR #15650: URL: https://github.com/apache/tvm/pull/15650#discussion_r1311913637 ## python/tvm/relax/frontend/nn/spec.py: ## @@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]: # pylint: enable=protected-access

[GitHub] [tvm] junrushao commented on a diff in pull request #15633: [Disco][Op] broadcast_from_zero

2023-08-28 Thread via GitHub
junrushao commented on code in PR #15633: URL: https://github.com/apache/tvm/pull/15633#discussion_r1307575087 ## src/relax/op/ccl/ccl.h: ## @@ -35,6 +35,9 @@ namespace relax { /*! \brief AllReduce. */ Expr allreduce(Expr data, String op_type); +/*! \brief Broadcast data

[GitHub] [tvm] junrushao opened a new pull request, #15635: [Disco] Fix ICE from Clang

2023-08-28 Thread via GitHub
junrushao opened a new pull request, #15635: URL: https://github.com/apache/tvm/pull/15635 This PR fixes an error reported by Clang on the line below:
```
/.../include/tvm/runtime/packed_func.h:1706:3: error: no matching function for call to 'F'
```

[GitHub] [tvm] yongwww opened a new pull request, #15636: [Unity] RealizeVDevice pass

2023-08-28 Thread via GitHub
yongwww opened a new pull request, #15636: URL: https://github.com/apache/tvm/pull/15636 This PR adds the RealizeVDevice pass as mentioned in #15101

[GitHub] [tvm] tqchen commented on a diff in pull request #15633: [Disco][Op] broadcast_from_zero

2023-08-28 Thread via GitHub
tqchen commented on code in PR #15633: URL: https://github.com/apache/tvm/pull/15633#discussion_r1307738990 ## src/relax/op/ccl/ccl.h: ## @@ -35,6 +35,9 @@ namespace relax { /*! \brief AllReduce. */ Expr allreduce(Expr data, String op_type); +/*! \brief Broadcast data from

[GitHub] [tvm] jinhongyii merged pull request #15571: [Unity][Dlight] GeMV rule skip cases of "outer dim being grouped"

2023-08-28 Thread via GitHub
jinhongyii merged PR #15571: URL: https://github.com/apache/tvm/pull/15571

[GitHub] [tvm] junrushao opened a new pull request, #15637: [Runtime] Fix ICE from Clang

2023-08-28 Thread via GitHub
junrushao opened a new pull request, #15637: URL: https://github.com/apache/tvm/pull/15637 Backported from #15635. This template is never instantiated on `apache:main` yet, but in case of a future ICE, I backported the related fix from the Unity branch.

[GitHub] [tvm] junrushao merged pull request #15634: [Disco] Support ShapeTuple in Disco Protocol

2023-08-28 Thread via GitHub
junrushao merged PR #15634: URL: https://github.com/apache/tvm/pull/15634

[GitHub] [tvm] junrushao commented on a diff in pull request #15634: [Disco] Support ShapeTuple in Disco Protocol

2023-08-28 Thread via GitHub
junrushao commented on code in PR #15634: URL: https://github.com/apache/tvm/pull/15634#discussion_r1307612173 ## src/runtime/disco/threaded_session.cc: ## @@ -146,13 +152,17 @@ class DiscoThreadedMessageQueue : public dmlc::Stream { this->Read(&reg_id);

[GitHub] [tvm] LeshengJin commented on a diff in pull request #15633: [Disco][Op] broadcast_from_worker0

2023-08-28 Thread via GitHub
LeshengJin commented on code in PR #15633: URL: https://github.com/apache/tvm/pull/15633#discussion_r1307796184 ## src/relax/op/ccl/ccl.h: ## @@ -35,6 +35,9 @@ namespace relax { /*! \brief AllReduce. */ Expr allreduce(Expr data, String op_type); +/*! \brief Broadcast data

[GitHub] [tvm] junrushao commented on pull request #15635: [Disco] Fix ICE from Clang

2023-08-28 Thread via GitHub
junrushao commented on PR #15635: URL: https://github.com/apache/tvm/pull/15635#issuecomment-1695998415 CC: @spectrometerHBH

[GitHub] [tvm] jinhongyii merged pull request #15632: [Disco][Test] MultiHeadAttention Testcase

2023-08-28 Thread via GitHub
jinhongyii merged PR #15632: URL: https://github.com/apache/tvm/pull/15632

[GitHub] [tvm] kparzysz-quic commented on a diff in pull request #15595: [Runtime] Enhance PackedFunc Metaprogramming with `PackArgs`

2023-08-28 Thread via GitHub
kparzysz-quic commented on code in PR #15595: URL: https://github.com/apache/tvm/pull/15595#discussion_r1307790075 ## include/tvm/runtime/packed_func.h: ## @@ -1622,6 +1641,20 @@ inline TVMRetValue PackedFunc::operator()(Args&&... args) const { return rv; } +template

[GitHub] [tvm] Biubiubiu12 commented on pull request #15274: [TIR][Schedule] Refactor blockize implementation logic by pass

2023-08-31 Thread via GitHub
Biubiubiu12 commented on PR #15274: URL: https://github.com/apache/tvm/pull/15274#issuecomment-1702014777 > Maybe offtopic question: Are/Will OpenCV operators (tscript) that you work on be public somewhere ? Yes, TVMScript is part of the implementation of some CV operator, but I

[GitHub] [tvm] junrushao commented on a diff in pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-08-31 Thread via GitHub
junrushao commented on code in PR #15650: URL: https://github.com/apache/tvm/pull/15650#discussion_r1312517589 ## python/tvm/relax/frontend/nn/core.py: ## @@ -360,12 +360,31 @@ def to(self, dtype: Optional[str] = None) -> None: # pylint: disable=invalid-na def

[GitHub] [tvm] lhutton1 commented on pull request #15649: Add output_data_sec section in corstone300.ld

2023-09-01 Thread via GitHub
lhutton1 commented on PR #15649: URL: https://github.com/apache/tvm/pull/15649#issuecomment-1702398281 Thanks for creating the PR @toyowata. cc @ekalda @ashutosh-arm could you take a look? Perhaps we can do it in a follow-up PR, but I think the `task_demo_microtvm.sh` script could check

[GitHub] [tvm] junrushao commented on pull request #15653: [Disco] Add Scatter and Recv

2023-09-01 Thread via GitHub
junrushao commented on PR #15653: URL: https://github.com/apache/tvm/pull/15653#issuecomment-1702566750 CC: @tqchen

[GitHub] [tvm] junrushao commented on pull request #15654: [Runtime] Support Loading Standalone `ndarray-cache.json`

2023-09-01 Thread via GitHub
junrushao commented on PR #15654: URL: https://github.com/apache/tvm/pull/15654#issuecomment-1702567048 CC: @tqchen

[GitHub] [tvm] tqchen commented on a diff in pull request #15652: [Runtime] Add ShapeTupleObj::Numel and ShapeTuple Printing

2023-09-01 Thread via GitHub
tqchen commented on code in PR #15652: URL: https://github.com/apache/tvm/pull/15652#discussion_r1312977545 ## include/tvm/runtime/container/shape_tuple.h: ## @@ -42,6 +43,9 @@ class ShapeTupleObj : public Object { /*! \brief The size of the shape tuple object. */

[GitHub] [tvm] junrushao commented on a diff in pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-08-31 Thread via GitHub
junrushao commented on code in PR #15650: URL: https://github.com/apache/tvm/pull/15650#discussion_r1311906278 ## python/tvm/relax/frontend/nn/spec.py: ## @@ -322,15 +324,18 @@ def _extern_modules() -> List[Tuple[str, List[str]]]: # pylint: enable=protected-access

[GitHub] [tvm] junrushao opened a new pull request, #15653: [Disco] Add Scatter and Recv

2023-09-01 Thread via GitHub
junrushao opened a new pull request, #15653: URL: https://github.com/apache/tvm/pull/15653 This PR adds two internal APIs:
```C++
// Scatter `n - 1` buffers to each worker from worker 0, where `n` is the total number of workers
void ScatterFromWorker0(Array buffers);
//
```

[GitHub] [tvm] junrushao commented on a diff in pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-09-01 Thread via GitHub
junrushao commented on code in PR #15650: URL: https://github.com/apache/tvm/pull/15650#discussion_r1312854922 ## python/tvm/relax/frontend/nn/core.py: ## @@ -360,12 +360,31 @@ def to(self, dtype: Optional[str] = None) -> None: # pylint: disable=invalid-na def

[GitHub] [tvm] junrushao opened a new pull request, #15652: [Runtime] Add ShapeTupleObj::Numel and ShapeTuple Printing

2023-09-01 Thread via GitHub
junrushao opened a new pull request, #15652: URL: https://github.com/apache/tvm/pull/15652 This commit adds two convenient methods for `ShapeTuple`.
```C++
// Returns the number of elements in the shape,
// i.e. the product of all dimensions
ShapeTupleObj::index_type
```
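
The same quantity is easy to sanity-check from Python, assuming `ShapeTuple`'s usual sequence behavior on the Python side:
```python
import math
import tvm

shape = tvm.runtime.ShapeTuple([2, 3, 4])
print(math.prod(shape))  # 24 -- the value ShapeTupleObj::Numel returns in C++
```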

[GitHub] [tvm] junrushao commented on pull request #15652: [Runtime] Add ShapeTupleObj::Numel and ShapeTuple Printing

2023-09-01 Thread via GitHub
junrushao commented on PR #15652: URL: https://github.com/apache/tvm/pull/15652#issuecomment-1702561924 CC: @tqchen

[GitHub] [tvm] ashutosh-arm commented on pull request #15649: Add output_data_sec section in corstone300.ld

2023-09-01 Thread via GitHub
ashutosh-arm commented on PR #15649: URL: https://github.com/apache/tvm/pull/15649#issuecomment-1702561718 I don't have much idea of the new flag. It will be good to get it reviewed offline by someone who knows more about linker scripts.

[GitHub] [tvm] tqchen commented on a diff in pull request #15653: [Disco] Add Scatter and Recv

2023-09-01 Thread via GitHub
tqchen commented on code in PR #15653: URL: https://github.com/apache/tvm/pull/15653#discussion_r1312983067 ## src/runtime/disco/builtin.cc: ## @@ -72,38 +70,48 @@ Module LoadVMModule(std::string path, Device device) { return mod; }

[GitHub] [tvm] haoyang9804 commented on pull request #15386: [Relay] Fix an adaptive_max_pool1d operator conversion bug

2023-09-01 Thread via GitHub
haoyang9804 commented on PR #15386: URL: https://github.com/apache/tvm/pull/15386#issuecomment-1702692212 > On the one hand LGTM; on the other, it confuses me because the fix for a specific op was done at a very high level (and the CI test was constructed for that specific op, not for all cases). It looks

[GitHub] [tvm] Thrsu opened a new issue, #15651: [Bug] [Unity] [Frontend] [ONNX] Gather produced inconsistent inference results with ONNX

2023-09-01 Thread via GitHub
Thrsu opened a new issue, #15651: URL: https://github.com/apache/tvm/issues/15651 The ONNX [model](https://drive.google.com/file/d/1OyCdBhPRFo3MXvZSLsth4DK4cb7Ui5Z9/view?usp=share_link) produced inference results inconsistent with ONNX when using Relax to load the model and obtain the

[GitHub] [tvm] Archermmt opened a new pull request, #15645: [Unity][MSC][M0.4 && M0.5] Codegen && Test

2023-08-30 Thread via GitHub
Archermmt opened a new pull request, #15645: URL: https://github.com/apache/tvm/pull/15645 This is a pull request for the MSC (Multi-System Compile) RFC: https://discuss.tvm.apache.org/t/rfc-unity-msc-introduction-to-multi-system-compiler/15251/5 Tracking issue:

[GitHub] [tvm-rfcs] tqchen commented on pull request #104: [RFC] Scalable vectors in TIR

2023-08-30 Thread via GitHub
tqchen commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1699183708 BTW, after writing it down, we can find that perhaps it is not necessary (for S1) to explicitly introduce a special vscale. Another approach is that we can mark an SVE scope, and use a

[GitHub] [tvm] junrushao commented on a diff in pull request #15652: [Runtime] Add ShapeTupleObj::Numel and ShapeTuple Printing

2023-09-01 Thread via GitHub
junrushao commented on code in PR #15652: URL: https://github.com/apache/tvm/pull/15652#discussion_r1313190374 ## include/tvm/runtime/container/shape_tuple.h: ## @@ -42,6 +43,9 @@ class ShapeTupleObj : public Object { /*! \brief The size of the shape tuple object. */

[GitHub] [tvm] junrushao merged pull request #15654: [Runtime] Support Loading Standalone `ndarray-cache.json`

2023-09-01 Thread via GitHub
junrushao merged PR #15654: URL: https://github.com/apache/tvm/pull/15654

[GitHub] [tvm-rfcs] ekalda commented on pull request #104: [RFC] Scalable vectors in TIR

2023-09-01 Thread via GitHub
ekalda commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1703019039 @tqchen Thanks for elaborating on the GPU programming model, I see the parallels between programming for a variable number of threads and vectors with unknown lengths. S1 option looks quite

[GitHub] [tvm] tqchen commented on a diff in pull request #15652: [Runtime] Add ShapeTupleObj::Numel and ShapeTuple Printing

2023-09-01 Thread via GitHub
tqchen commented on code in PR #15652: URL: https://github.com/apache/tvm/pull/15652#discussion_r1313197538 ## include/tvm/runtime/container/shape_tuple.h: ## @@ -42,6 +43,9 @@ class ShapeTupleObj : public Object { /*! \brief The size of the shape tuple object. */

[GitHub] [tvm] Lunderberg opened a new pull request, #15657: [Unity] Implemented BundleModelParams transform

2023-09-01 Thread via GitHub
Lunderberg opened a new pull request, #15657: URL: https://github.com/apache/tvm/pull/15657 Implemented `relax.transform.BundleModelParams`, which groups parameters into user-provided runtime parameters, and a tuple of compile-time model weights. This functionality was previously part of
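
A minimal sketch of applying the transform, assuming a Unity build where it is registered under `relax.transform`; `mod` stands for a relax IRModule whose functions take individual weight parameters.
```python
from tvm import relax

# bundle per-weight parameters into a single compile-time tuple
mod = relax.transform.BundleModelParams()(mod)
```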

[GitHub] [tvm] adstraw opened a new pull request, #15656: [Hopper TMA] Add CUDA codegen support for bulk asynchronous copy

2023-09-01 Thread via GitHub
adstraw opened a new pull request, #15656: URL: https://github.com/apache/tvm/pull/15656 Adds CUDA codegen support for bulk asynchronous copies, which are new instructions on Hopper. Also includes some cleanup of PR #15616 in the form of comments and tests. Notably this PR does not include

[GitHub] [tvm] gessha commented on issue #12567: [Bug] python.contrib.test_onnx.test_resize numerical accuracy issue

2023-09-01 Thread via GitHub
gessha commented on issue #12567: URL: https://github.com/apache/tvm/issues/12567#issuecomment-1703033730 I was trying to reproduce the bug but I got stuck. Do you know what I did wrong? I built the tvm.ci_cpu container using `~/projects/tvm$ ./docker/build.sh ci_cpu` As

[GitHub] [tvm] jwfromm commented on a diff in pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-09-01 Thread via GitHub
jwfromm commented on code in PR #15650: URL: https://github.com/apache/tvm/pull/15650#discussion_r1313273646 ## python/tvm/relax/frontend/nn/core.py: ## @@ -360,12 +360,31 @@ def to(self, dtype: Optional[str] = None) -> None: # pylint: disable=invalid-na def export_tvm(

[GitHub] [tvm] junrushao opened a new pull request, #15654: [Runtime] Support Loading Standalone `ndarray-cache.json`

2023-09-01 Thread via GitHub
junrushao opened a new pull request, #15654: URL: https://github.com/apache/tvm/pull/15654 Prior to this PR, `ndarray-cache.json` was loaded and parsed along with the concrete weights. This PR adds support for parsing this JSON file into a structured C++ class for later use.
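
Since the file is plain JSON, its metadata can also be inspected directly from Python before relying on any particular layout; the path below is illustrative.
```python
import json

with open("ndarray-cache.json") as f:  # illustrative path
    cache = json.load(f)
print(list(cache))  # inspect the top-level fields first
```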

[GitHub] [tvm] junrushao commented on pull request #15640: [Disco] Support Sharding via Scatter

2023-09-01 Thread via GitHub
junrushao commented on PR #15640: URL: https://github.com/apache/tvm/pull/15640#issuecomment-1702560479 Continued in: #15652, #15653, #15654, #15655

[GitHub] [tvm] junrushao opened a new pull request, #15655: [Disco] Introduce ShardLoader

2023-09-01 Thread via GitHub
junrushao opened a new pull request, #15655: URL: https://github.com/apache/tvm/pull/15655 This PR introduces `ShardLoader`, an object that allows convenient sharding of each parameter, assuming there is a single shard dimension and the sharding strategy is even. The sharding can be

[GitHub] [tvm] junrushao closed pull request #15640: [Disco] Support Sharding via Scatter

2023-09-01 Thread via GitHub
junrushao closed pull request #15640: [Disco] Support Sharding via Scatter URL: https://github.com/apache/tvm/pull/15640

[GitHub] [tvm] junrushao commented on pull request #15654: [Runtime] Support Loading Standalone `ndarray-cache.json`

2023-09-01 Thread via GitHub
junrushao commented on PR #15654: URL: https://github.com/apache/tvm/pull/15654#issuecomment-1703610989 @kparzysz-quic OK, actually I've done another round of refactoring on the flight, which gets rid of the filesystem dependency. Will submit a patch later today when I get back home!

[GitHub] [tvm] junrushao commented on a diff in pull request #15653: [Disco] Add Scatter and Recv

2023-09-01 Thread via GitHub
junrushao commented on code in PR #15653: URL: https://github.com/apache/tvm/pull/15653#discussion_r1313691599 ## src/runtime/disco/builtin.cc: ## @@ -72,38 +70,48 @@ Module LoadVMModule(std::string path, Device device) { return mod; }

[GitHub] [tvm] junrushao merged pull request #15650: [Unity][Frontend][NN] Make effects optional in nn module.

2023-09-02 Thread via GitHub
junrushao merged PR #15650: URL: https://github.com/apache/tvm/pull/15650

[GitHub] [tvm] jikechao commented on pull request #15386: [Relay] Fix an adaptive_max_pool1d operator conversion bug

2023-09-01 Thread via GitHub
jikechao commented on PR #15386: URL: https://github.com/apache/tvm/pull/15386#issuecomment-1703651509 cc @vvchernov

[GitHub] [tvm] junrushao commented on a diff in pull request #15652: [Runtime] Add ShapeTupleObj::Numel and ShapeTuple Printing

2023-09-02 Thread via GitHub
junrushao commented on code in PR #15652: URL: https://github.com/apache/tvm/pull/15652#discussion_r1313731714 ## include/tvm/runtime/container/shape_tuple.h: ## @@ -42,6 +43,9 @@ class ShapeTupleObj : public Object { /*! \brief The size of the shape tuple object. */

[GitHub] [tvm] junrushao commented on a diff in pull request #15652: [Runtime] Add ShapeTupleObj::Numel and ShapeTuple Printing

2023-09-01 Thread via GitHub
junrushao commented on code in PR #15652: URL: https://github.com/apache/tvm/pull/15652#discussion_r1313514113 ## include/tvm/runtime/container/shape_tuple.h: ## @@ -42,6 +43,9 @@ class ShapeTupleObj : public Object { /*! \brief The size of the shape tuple object. */

[GitHub] [tvm] kparzysz-quic commented on pull request #15654: [Runtime] Support Loading Standalone `ndarray-cache.json`

2023-09-01 Thread via GitHub
kparzysz-quic commented on PR #15654: URL: https://github.com/apache/tvm/pull/15654#issuecomment-1703282818 Hi. Unfortunately the Hexagon toolchain does not support `std::filesystem` and this does not compile for us. Would it be possible to use `std::string` for paths? Sorry about the

[GitHub] [tvm] kparzysz-quic opened a new pull request, #15658: [Runtime] Make `export_library` parameters after `file_name` keyword-only

2023-09-01 Thread via GitHub
kparzysz-quic opened a new pull request, #15658: URL: https://github.com/apache/tvm/pull/15658 This makes the code a bit more readable at a little cost.
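
Concretely, call sites now have to name anything after `file_name`. A sketch, where `lib` stands for an already-built `tvm.runtime.Module` and `fcompile` is one of the existing optional parameters affected:
```python
from tvm.contrib import cc

# the positional form export_library("model.so", cc.create_shared) stops working
lib.export_library("model.so", fcompile=cc.create_shared)
```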

[GitHub] [tvm] masahi commented on pull request #15708: [Relay] Simplify Conv->bias_add->mul->add to Conv->bias_add

2023-09-12 Thread via GitHub
masahi commented on PR #15708: URL: https://github.com/apache/tvm/pull/15708#issuecomment-1716237070 Have you tried `FoldScaleAxis`? Combined with `SimplifyExpr` I think it already does such optimization.
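
A sketch of the suggested pipeline; `mod` is assumed to be a Relay IRModule containing the Conv -> bias_add -> mul -> add chain:
```python
import tvm
from tvm import relay

seq = tvm.transform.Sequential(
    [relay.transform.SimplifyExpr(), relay.transform.FoldScaleAxis()]
)
# FoldScaleAxis is registered at opt_level 3
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```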

[GitHub] [tvm] vvchernov commented on pull request #15723: fix _convert_simple_rnn

2023-09-12 Thread via GitHub
vvchernov commented on PR #15723: URL: https://github.com/apache/tvm/pull/15723#issuecomment-1716250601 Hello @echuraev! Looks like it is a problem from Jenkins:
```
[2023-09-12T17:54:08.315Z] + ./ci/scripts/jenkins/s3.py --action upload --bucket tvm-jenkins-artifacts-prod --prefix
```

[GitHub] [tvm-rfcs] tqchen commented on pull request #89: [RFC] Relax Upstreaming

2023-09-12 Thread via GitHub
tqchen commented on PR #89: URL: https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1715756599 sending another reminder for everyone to chime in on the related unity discussion threads https://discuss.tvm.apache.org/c/development/unity/14, love to see your participation on all the

[GitHub] [tvm] Lunderberg commented on a diff in pull request #15694: [Unity] Implemented SameShapeConstraint for dataflow pattern matches

2023-09-12 Thread via GitHub
Lunderberg commented on code in PR #15694: URL: https://github.com/apache/tvm/pull/15694#discussion_r1323329466 ## src/relax/ir/dataflow_matcher.cc: ## @@ -443,6 +444,92 @@ bool DFPatternMatcher::VisitDFPattern_(const ShapePatternNode* op, const Expr& e return false; }

[GitHub] [tvm] masahi commented on a diff in pull request #15679: [Unity] Support Padding Reversal in Alter-Op pass

2023-09-12 Thread via GitHub
masahi commented on code in PR #15679: URL: https://github.com/apache/tvm/pull/15679#discussion_r1323414157 ## python/tvm/relax/transform/legalize_ops/manipulate.py: ## @@ -205,3 +213,16 @@ def te_layout_transform(data, name): output_dtype = call_args[0].struct_info.dtype

[GitHub] [tvm] p3achyjr opened a new pull request, #15733: [TFLite][Frontend] Support quantized floor_mod

2023-09-12 Thread via GitHub
p3achyjr opened a new pull request, #15733: URL: https://github.com/apache/tvm/pull/15733 As part of https://github.com/apache/tvm/issues/15148, this PR adds support for floor_mod.

[GitHub] [tvm] Lunderberg commented on a diff in pull request #15672: [IR] Implemented Variant<...> container

2023-09-12 Thread via GitHub
Lunderberg commented on code in PR #15672: URL: https://github.com/apache/tvm/pull/15672#discussion_r1323342788 ## tests/cpp/container_test.cc: ## @@ -853,3 +854,23 @@ TEST(Optional, PackedCall) { test_ffi(s, static_cast(kTVMObjectHandle)); test_ffi(String(s),

[GitHub] [tvm] masahi commented on a diff in pull request #15678: [UNITY][Pass] Optimize redundant layout transform ops

2023-09-12 Thread via GitHub
masahi commented on code in PR #15678: URL: https://github.com/apache/tvm/pull/15678#discussion_r1322614653 ## python/tvm/relax/transform/optimize_layout_transform.py: ## @@ -0,0 +1,75 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor
