[GitHub] [tvm] jinhongyii commented on a diff in pull request #15742: [Unity][Disco] separate computation and communication into 2 stream

2023-09-14 Thread via GitHub
jinhongyii commented on code in PR #15742: URL: https://github.com/apache/tvm/pull/15742#discussion_r1326594671 ## src/runtime/disco/nccl/nccl.cc: ## @@ -91,26 +90,34 @@ void AllReduce(NDArray send, ReduceKind reduce_kind, NDArray recv) { NCCLThreadLocalContext* ctx =

[GitHub] [tvm] jinhongyii commented on pull request #15742: [Unity][Disco] separate computation and communication into 2 stream

2023-09-14 Thread via GitHub
jinhongyii commented on PR #15742: URL: https://github.com/apache/tvm/pull/15742#issuecomment-1719661144 The null stream (or, equivalently, the default stream) is used for compute. The stream created in init_ccl is for communication.

[GitHub] [tvm] Thrsu commented on a diff in pull request #15734: [Unity] [Bugfix] Fix KeyError: 'padding' in _avg_pool2d implementation

2023-09-14 Thread via GitHub
Thrsu commented on code in PR #15734: URL: https://github.com/apache/tvm/pull/15734#discussion_r1326228756 ## tests/python/unittest/test_tir_transform_lower_thread_all_reduce.py: ## Review Comment: I have rebased it, please recheck.

[GitHub] [tvm] Lunderberg opened a new pull request, #15749: [UnitTest][Metal] Parametrize allreduce GPU tests

2023-09-14 Thread via GitHub
Lunderberg opened a new pull request, #15749: URL: https://github.com/apache/tvm/pull/15749 As a first step toward addressing the Metal codegen errors that required the reversion in https://github.com/apache/tvm/pull/15725, this PR parametrizes the unit tests for `allreduce`. While these tests are

[GitHub] [tvm] Lunderberg commented on pull request #15749: [UnitTest][Metal] Parametrize allreduce GPU tests

2023-09-14 Thread via GitHub
Lunderberg commented on PR #15749: URL: https://github.com/apache/tvm/pull/15749#issuecomment-1719764142 @junrushao @MasterJH5574

[GitHub] [tvm] tlopex commented on pull request #15746: [TFLite][Frontend] Support quantized less

2023-09-14 Thread via GitHub
tlopex commented on PR #15746: URL: https://github.com/apache/tvm/pull/15746#issuecomment-1720409267 I wonder why my new commits get added to the original, successful PR. Does that mean I have to create a new PR after it is merged?

[GitHub] [tvm] Hzfengsy merged pull request #15749: [UnitTest][Metal] Parametrize allreduce GPU tests

2023-09-15 Thread via GitHub
Hzfengsy merged PR #15749: URL: https://github.com/apache/tvm/pull/15749

[GitHub] [tvm] tlopex closed pull request #15746: [TFLite][Frontend] Support quantized less

2023-09-14 Thread via GitHub
tlopex closed pull request #15746: [TFLite][Frontend] Support quantized less URL: https://github.com/apache/tvm/pull/15746

[GitHub] [tvm] jikechao opened a new pull request, #15752: [Relay][Bugfix] fix the calculate logic of operator Flip in PyTorch frontend

2023-09-14 Thread via GitHub
jikechao opened a new pull request, #15752: URL: https://github.com/apache/tvm/pull/15752 The original implementation of Flip in the PyTorch converter mistook the type of the attribute `axis` in the Flip operator as an integer. Thus, it only parses the first element of `axis` and will give a
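A quick illustration of the difference (a hedged sketch, not the patch itself): in PyTorch, `torch.flip` takes `dims` as a list of axes, so reading only the first element flips along a single axis and produces a different result.
```
import torch

x = torch.arange(8).reshape(2, 2, 2)

full = torch.flip(x, dims=[0, 1])   # what the model requests: flip along both axes
partial = torch.flip(x, dims=[0])   # what an int-typed `axis` attribute would yield

print(torch.equal(full, partial))   # False: the converted graph diverges from PyTorch
```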

[GitHub] [tvm] junrushao merged pull request #15742: [Unity][Disco] separate computation and communication into 2 stream

2023-09-14 Thread via GitHub
junrushao merged PR #15742: URL: https://github.com/apache/tvm/pull/15742

[GitHub] [tvm] Cydia2018 opened a new pull request, #15753: [WIP][Unity][BYOC]: Add RocBLAS Support

2023-09-14 Thread via GitHub
Cydia2018 opened a new pull request, #15753: URL: https://github.com/apache/tvm/pull/15753 Purpose: This PR introduces RocBLAS and RocBLAS tuning support in Unity. Motivation: The industry is asking for diversity of computing power. We at Kwai AIPCG noticed that, as an alternative to

[GitHub] [tvm-rfcs] DongBaiYue opened a new pull request, #105: [RFC][SYCL] add sycl backend RFC

2023-09-15 Thread via GitHub
DongBaiYue opened a new pull request, #105: URL: https://github.com/apache/tvm-rfcs/pull/105 This RFC is to add a new backend language, SYCL. Previously I also created an RFC topic in [the forum](https://discuss.tvm.apache.org/t/rfc-sycl-sycl-backend-for-tvm/15678).

[GitHub] [tvm] echuraev commented on a diff in pull request #15752: [Relay][Bugfix] fix the wrong calculate logic of operator flip in PyTorch frontend

2023-09-15 Thread via GitHub
echuraev commented on code in PR #15752: URL: https://github.com/apache/tvm/pull/15752#discussion_r1327008892 ## python/tvm/relay/frontend/pytorch.py: ## @@ -2977,7 +2977,12 @@ def nll_loss(self, inputs, input_types): def flip(self, inputs, input_types): data =

[GitHub] [tvm] junrushao merged pull request #15720: [Disco] Make Session.CallWithPacked public

2023-09-11 Thread via GitHub
junrushao merged PR #15720: URL: https://github.com/apache/tvm/pull/15720

[GitHub] [tvm] abhikran-quic commented on a diff in pull request #15678: [UNITY][Pass] Optimize redundant layout transform ops

2023-09-11 Thread via GitHub
abhikran-quic commented on code in PR #15678: URL: https://github.com/apache/tvm/pull/15678#discussion_r1322336789 ## python/tvm/relax/transform/optimize_layout_transform.py: ## @@ -0,0 +1,75 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more

[GitHub] [tvm] csullivan commented on pull request #15725: Revert "[CodeGenC] Handle GlobalVar callee as internal function call"

2023-09-11 Thread via GitHub
csullivan commented on PR #15725: URL: https://github.com/apache/tvm/pull/15725#issuecomment-1714981323 Prior to merging I would request that we please include a test, or otherwise link to a failing test, so that there is something semi-minimal that can be used to support the desired

[GitHub] [tvm] junrushao commented on pull request #15725: Revert "[CodeGenC] Handle GlobalVar callee as internal function call"

2023-09-11 Thread via GitHub
junrushao commented on PR #15725: URL: https://github.com/apache/tvm/pull/15725#issuecomment-1714983668 Agreed @csullivan. Given it's a codegen test, it's a bit challenging though to have a minimal test if we don't have unittest infra that could potentially access Metal-capable devices. As

[GitHub] [tvm] Lucien0 opened a new issue, #15726: [Bug] [Arith] Expressions of the same form cannot be simplified correctly

2023-09-11 Thread via GitHub
Lucien0 opened a new issue, #15726: URL: https://github.com/apache/tvm/issues/15726 Hi, I constructed the following case:
```
a = tir.Var("a", "int32")
b = tir.Var("b", "int32")
c = tir.Var("c", "int32")
d = tir.Var("d", "int32")
ana =
```
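The snippet above is truncated; the usual setup for such reports (a hedged reconstruction of the pattern only, not the reporter's exact expressions) is to build TIR variables and feed two expressions of the same algebraic form to `tvm.arith.Analyzer.simplify`:
```
import tvm
from tvm import tir

a = tir.Var("a", "int32")
b = tir.Var("b", "int32")
ana = tvm.arith.Analyzer()

# Two expressions of the same form; ideally both simplify to the same result.
print(ana.simplify((a + b) - b))
print(ana.simplify((b + a) - b))
```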

[GitHub] [tvm] cbalint13 commented on a diff in pull request #15685: [Target][TOPI] Use LLVM for x86 CPU feature lookup

2023-09-11 Thread via GitHub
cbalint13 commented on code in PR #15685: URL: https://github.com/apache/tvm/pull/15685#discussion_r1322321004 ## python/tvm/target/x86.py: ## @@ -16,127 +16,19 @@ # under the License. """Common x86 related utilities""" from .._ffi import register_func -from .target import

[GitHub] [tvm] csullivan commented on pull request #15694: [Unity] Implemented SameShapeConstraint for dataflow pattern matches

2023-09-13 Thread via GitHub
csullivan commented on PR #15694: URL: https://github.com/apache/tvm/pull/15694#issuecomment-1718370549 Thanks @Lunderberg @ganler @masahi, this is merged.

[GitHub] [tvm] csullivan merged pull request #15694: [Unity] Implemented SameShapeConstraint for dataflow pattern matches

2023-09-13 Thread via GitHub
csullivan merged PR #15694: URL: https://github.com/apache/tvm/pull/15694

[GitHub] [tvm] junrushao commented on issue #15716: [Bug] StringImm Object Can't Be Pass to C++ Side from Python Side

2023-09-13 Thread via GitHub
junrushao commented on issue #15716: URL: https://github.com/apache/tvm/issues/15716#issuecomment-1718314358 I'm able to confirm on my end that this bug exists in both the main and unity branches. @ysh329 if it doesn't bother you too much, would you mind doing a `git bisect` to find out which

[GitHub] [tvm] csullivan merged pull request #15672: [IR] Implemented Variant<...> container

2023-09-13 Thread via GitHub
csullivan merged PR #15672: URL: https://github.com/apache/tvm/pull/15672

[GitHub] [tvm] junrushao merged pull request #15662: [Unity][Frontend][NN] Add nn.MultiLinear

2023-09-04 Thread via GitHub
junrushao merged PR #15662: URL: https://github.com/apache/tvm/pull/15662

[GitHub] [tvm] quic-sanirudh opened a new pull request, #15664: [IR] Use structural equal for Range equality

2023-09-04 Thread via GitHub
quic-sanirudh opened a new pull request, #15664: URL: https://github.com/apache/tvm/pull/15664 This PR adds a small change to verify equality of `tvm.ir.Range` structurally. This assumes that in most cases, comparing two `Range`s means comparing their `min` and `extent` as opposed
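For illustration, a minimal sketch of the comparison this enables (assuming the Python-side `tvm.ir.structural_equal` entry point; the PR itself changes the C++ equality behavior):
```
import tvm

# Two distinct Range objects with the same min and extent.
r1 = tvm.ir.Range(0, 10)
r2 = tvm.ir.Range(0, 10)

# Structural comparison treats them as equal even though they are different objects.
assert tvm.ir.structural_equal(r1, r2)
```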

[GitHub] [tvm] wrongtest-intellif opened a new pull request, #15665: [Arith] Fix detect non-divisible iteration form like (x % 255) // 16

2023-09-04 Thread via GitHub
wrongtest-intellif opened a new pull request, #15665: URL: https://github.com/apache/tvm/pull/15665 Previously, the optimization from `floordiv(floormod(..))` to `floormod(floordiv(..))` did not check divisibility, which may create a wrong iteration form. The change adds a default
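The arithmetic behind the fix can be checked with plain integers (an illustrative sketch; the exact rewrite variant in the pass may differ in form): the rewrite `floordiv(floormod(x, c), f) -> floormod(floordiv(x, f), c // f)` only holds when `f` divides `c`.
```
def lhs(x, c, f):
    return (x % c) // f          # floordiv(floormod(x, c), f)

def rhs(x, c, f):
    return (x // f) % (c // f)   # floormod(floordiv(x, f), c // f)

# Divisible modulus (16 divides 256): both forms agree on all inputs checked.
assert all(lhs(x, 256, 16) == rhs(x, 256, 16) for x in range(1024))

# Non-divisible modulus (16 does not divide 255): the rewrite breaks, e.g. x = 240.
print(lhs(240, 255, 16), rhs(240, 255, 16))   # 15 vs 0
```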

[GitHub] [tvm] toyowata commented on issue #15643: [Bug] [Ethos] apps/microtvm/ethosu/run_demo.sh gets wrong inference result

2023-08-31 Thread via GitHub
toyowata commented on issue #15643: URL: https://github.com/apache/tvm/issues/15643#issuecomment-1701257673 Hi @lhutton1 Yes, I will make a PR.

[GitHub] [tvm] toyowata opened a new pull request, #15649: Add output_data_sec section in corstone300.ld

2023-08-31 Thread via GitHub
toyowata opened a new pull request, #15649: URL: https://github.com/apache/tvm/pull/15649 Fixed the missing output_data_sec section in corstone300.ld for Ethos-U. See more detail: https://github.com/apache/tvm/issues/15643

[GitHub] [tvm] tqchen merged pull request #15652: [Runtime] ShapeTuple.Product and ShapeTuple Printing

2023-09-02 Thread via GitHub
tqchen merged PR #15652: URL: https://github.com/apache/tvm/pull/15652

[GitHub] [tvm] tqchen merged pull request #15659: [Runtime] Refactor NDArrayCache Support

2023-09-02 Thread via GitHub
tqchen merged PR #15659: URL: https://github.com/apache/tvm/pull/15659

[GitHub] [tvm] tqchen merged pull request #15653: [Disco] Add `Scatter-From-Worker0`

2023-09-02 Thread via GitHub
tqchen merged PR #15653: URL: https://github.com/apache/tvm/pull/15653

[GitHub] [tvm] junrushao opened a new pull request, #15659: [Runtime] Refactor NDArrayCache Support

2023-09-02 Thread via GitHub
junrushao opened a new pull request, #15659: URL: https://github.com/apache/tvm/pull/15659 This PR refactors the NDArrayCache support with the following changes:
- Support loading metadata from a string rather than a concrete JSON file on disk;
- Remove dependency to

[GitHub] [tvm] junrushao merged pull request #15658: [Runtime] Make `export_library` parameters after `file_name` keyword-only

2023-09-02 Thread via GitHub
junrushao merged PR #15658: URL: https://github.com/apache/tvm/pull/15658

[GitHub] [tvm] junrushao commented on pull request #15659: [Runtime] Refactor NDArrayCache Support

2023-09-02 Thread via GitHub
junrushao commented on PR #15659: URL: https://github.com/apache/tvm/pull/15659#issuecomment-1703734529 CC: @kparzysz-quic @tqchen

[GitHub] [tvm] junrushao merged pull request #15455: [Unity] Implement R.macro for Relax macros

2023-09-02 Thread via GitHub
junrushao merged PR #15455: URL: https://github.com/apache/tvm/pull/15455

[GitHub] [tvm] multiverstack-intellif commented on pull request #15638: [Arith] MLIR PresburgerSet compile fix mlir >= 160

2023-08-30 Thread via GitHub
multiverstack-intellif commented on PR #15638: URL: https://github.com/apache/tvm/pull/15638#issuecomment-1698588609
> > Thanks for the bugfixes! Definitely not an expert here, and do you think we need to add a test for changes introduced in presburger_set.cc?
>
> @junrushao ,
>

[GitHub] [tvm] junrushao commented on pull request #15638: [Arith] MLIR PresburgerSet compile fix mlir >= 160

2023-08-30 Thread via GitHub
junrushao commented on PR #15638: URL: https://github.com/apache/tvm/pull/15638#issuecomment-1698593649 Sounds great, and it seems that we all agree with the changes being made! Will merge it in as soon as the CI is green.

[GitHub] [tvm] quic-sanirudh commented on pull request #15599: F2qi avgpool bug fix

2023-08-30 Thread via GitHub
quic-sanirudh commented on PR #15599: URL: https://github.com/apache/tvm/pull/15599#issuecomment-1698598177 @tvm-bot rerun

[GitHub] [tvm] kparzysz-quic opened a new pull request, #15666: [Module] Implement custom imported modules serialization

2023-09-04 Thread via GitHub
kparzysz-quic opened a new pull request, #15666: URL: https://github.com/apache/tvm/pull/15666 When a module with imported modules is exported into a shared library, the imported modules are serialized and embedded inside of that library. This is done by generating a raw binary from the

[GitHub] [tvm] junrushao opened a new pull request, #15668: [CI] Allow Limit CPUs in Docker

2023-09-04 Thread via GitHub
junrushao opened a new pull request, #15668: URL: https://github.com/apache/tvm/pull/15668 This PR adds a new flag `--cpus` in `./docker/bash.sh`, which is passed to the docker command and allows limiting the number of CPU cores available to a docker container. Related materials:

[GitHub] [tvm] junrushao commented on pull request #15668: [CI] Allow Limit CPUs in Docker

2023-09-04 Thread via GitHub
junrushao commented on PR #15668: URL: https://github.com/apache/tvm/pull/15668#issuecomment-1705762320 CC: @tqchen

[GitHub] [tvm] haoyang9804 commented on pull request #15683: Fix a bug caused by PyTorch instance_norm when the input shape is [1,1,1,2]

2023-09-06 Thread via GitHub
haoyang9804 commented on PR #15683: URL: https://github.com/apache/tvm/pull/15683#issuecomment-1708513363 cc @vvchernov @echuraev

[GitHub] [tvm] ashutosh-arm closed pull request #15641: [DO_NOT_MERGE][Flaky][CI] Disable flaky autotvm test test_multi_filter

2023-09-06 Thread via GitHub
ashutosh-arm closed pull request #15641: [DO_NOT_MERGE][Flaky][CI] Disable flaky autotvm test test_multi_filter URL: https://github.com/apache/tvm/pull/15641

[GitHub] [tvm] ashutosh-arm closed issue #15611: [Flaky Test] `tests/python/unittest/test_autotvm_droplet_tuner.py::test_multi_filter`

2023-09-06 Thread via GitHub
ashutosh-arm closed issue #15611: [Flaky Test] `tests/python/unittest/test_autotvm_droplet_tuner.py::test_multi_filter` URL: https://github.com/apache/tvm/issues/15611

[GitHub] [tvm] ibsidorenko opened a new pull request, #15686: [Unity] Add new Relax annotation ops: smooth and absmax

2023-09-06 Thread via GitHub
ibsidorenko opened a new pull request, #15686: URL: https://github.com/apache/tvm/pull/15686 This commit adds 2 new Relax ops:
1. "relax.op.smooth" (R.smooth): Multiplies elements of a tensor by a scale (if it operates like "multiply") or passes the input through as is (if it operates like

[GitHub] [tvm] Lunderberg commented on pull request #15646: [TIR] Output DeclBuffer in LowerThreadAllreduce

2023-09-06 Thread via GitHub
Lunderberg commented on PR #15646: URL: https://github.com/apache/tvm/pull/15646#issuecomment-1708824478 @ci-bot rerun

[GitHub] [tvm] junrushao commented on a diff in pull request #15685: [Target][TOPI] Use LLVM for x86 CPU feature lookup

2023-09-06 Thread via GitHub
junrushao commented on code in PR #15685: URL: https://github.com/apache/tvm/pull/15685#discussion_r1317632043 ## src/target/llvm/llvm_module.cc: ## @@ -45,6 +45,7 @@ #include #include #include +#include "llvm/TargetParser/X86TargetParser.h" Review Comment: nit: style

[GitHub] [tvm] cbalint13 commented on a diff in pull request #15685: [Target][TOPI] Use LLVM for x86 CPU feature lookup

2023-09-06 Thread via GitHub
cbalint13 commented on code in PR #15685: URL: https://github.com/apache/tvm/pull/15685#discussion_r1317642603 ## src/target/llvm/llvm_module.cc: ## @@ -45,6 +45,7 @@ #include #include #include +#include "llvm/TargetParser/X86TargetParser.h" Review Comment: done.

[GitHub] [tvm] MasterJH5574 opened a new pull request, #15687: [Unity][Dlight] Fallback rule supporting more spatial workloads

2023-09-06 Thread via GitHub
MasterJH5574 opened a new pull request, #15687: URL: https://github.com/apache/tvm/pull/15687 This PR enhances the fallback dlight GPU rule so that it can support more non-trivial spatial workloads. Particularly, to this end, this PR makes the following changes:
* for function

[GitHub] [tvm] lhutton1 commented on issue #12567: [Bug] python.contrib.test_onnx.test_resize numerical accuracy issue

2023-09-06 Thread via GitHub
lhutton1 commented on issue #12567: URL: https://github.com/apache/tvm/issues/12567#issuecomment-1708592870 Hi @gessha, I suspect you need to build tvm with LLVM support. Can you try adding something like the following to config.cmake:
```
set(USE_LLVM llvm-config-16)
```
and

[GitHub] [tvm] junrushao commented on a diff in pull request #15673: [Unity][Disco] Refactor shard loader

2023-09-06 Thread via GitHub
junrushao commented on code in PR #15673: URL: https://github.com/apache/tvm/pull/15673#discussion_r1317585783 ## src/runtime/disco/loader.cc: ## @@ -187,5 +231,16 @@ TVM_REGISTER_GLOBAL("runtime.disco.ShardLoaderLoad") return

[GitHub] [tvm] cbalint13 opened a new pull request, #15685: [Target][TOPI] Use LLVM for x86 CPU feature lookup

2023-09-06 Thread via GitHub
cbalint13 opened a new pull request, #15685: URL: https://github.com/apache/tvm/pull/15685 Hi folks, This PR leverages **LLVM** itself for CPU feature lookup, **replacing hard-coded** lists. To keep maintainability across x86 families & features, we can rely on LLVM itself.

[GitHub] [tvm] masahi opened a new pull request, #15707: [Unity] Fix CUTLASS tests following LiftTransformParams signature change

2023-09-08 Thread via GitHub
masahi opened a new pull request, #15707: URL: https://github.com/apache/tvm/pull/15707 https://github.com/apache/tvm/pull/15657 modified the signature of a mod generated by the `LiftTransformParams` pass to take unpacked params as input rather than tuple params. Some cutlass tests need

[GitHub] [tvm] rutkoor commented on pull request #15679: [Unity] Support Padding Reversal in Alter-Op pass

2023-09-08 Thread via GitHub
rutkoor commented on PR #15679: URL: https://github.com/apache/tvm/pull/15679#issuecomment-1711288315 @tvm-bot rerun

[GitHub] [tvm] masahi commented on a diff in pull request #15678: [UNITY][Pass] Optimize redundant layout transform ops

2023-09-08 Thread via GitHub
masahi commented on code in PR #15678: URL: https://github.com/apache/tvm/pull/15678#discussion_r1319438923 ## python/tvm/relax/transform/optimize_layout_transform.py: ## @@ -0,0 +1,147 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor

[GitHub] [tvm] junrushao merged pull request #15673: [Disco] Add LoadAll method to Disco Shard Loader

2023-09-08 Thread via GitHub
junrushao merged PR #15673: URL: https://github.com/apache/tvm/pull/15673

[GitHub] [tvm] echuraev merged pull request #15683: Fix a bug caused by PyTorch instance_norm when the input shape is [1,1,1,2]

2023-09-08 Thread via GitHub
echuraev merged PR #15683: URL: https://github.com/apache/tvm/pull/15683

[GitHub] [tvm] sjain58 opened a new pull request, #15708: [Relay] Simplify Conv->bias_add->mul->add to Conv->bias_add

2023-09-08 Thread via GitHub
sjain58 opened a new pull request, #15708: URL: https://github.com/apache/tvm/pull/15708 Simplify a Conv->bias_add->mul->add sequence to Conv->bias_add if one of the inputs to Conv, bias_add, mul, and add is a constant scalar. def @main(%q1: Tensor[(1, 3, 224, 224), float32]) { %0 =
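The algebra behind the simplification can be sanity-checked numerically (an illustrative sketch using a matmul as a stand-in for the conv; not the Relay pass itself): with scalar constants `m` and `a`, `(conv(x, W) + b) * m + a` equals `conv(x, W * m) + (b * m + a)`.
```
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # stand-in for the conv input
W = rng.standard_normal((8, 3))    # stand-in for the conv weights
b, m, a = 0.5, 2.0, -1.0           # bias_add, mul, and add scalar constants

original = (x @ W + b) * m + a         # Conv -> bias_add -> mul -> add
folded = x @ (W * m) + (b * m + a)     # Conv -> bias_add with folded constants

np.testing.assert_allclose(original, folded, rtol=1e-6)
```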

[GitHub] [tvm] rutkoor commented on pull request #15679: [Unity] Support Padding Reversal in Alter-Op pass

2023-09-08 Thread via GitHub
rutkoor commented on PR #15679: URL: https://github.com/apache/tvm/pull/15679#issuecomment-1711281848 @tvm-bot rerun

[GitHub] [tvm] rutkoor commented on pull request #15679: [Unity] Support Padding Reversal in Alter-Op pass

2023-09-08 Thread via GitHub
rutkoor commented on PR #15679: URL: https://github.com/apache/tvm/pull/15679#issuecomment-1711285581 @tvm-bot re-build

[GitHub] [tvm] rutkoor commented on pull request #15679: [Unity] Support Padding Reversal in Alter-Op pass

2023-09-08 Thread via GitHub
rutkoor commented on PR #15679: URL: https://github.com/apache/tvm/pull/15679#issuecomment-1711284797 @tvm-bot re-run

[GitHub] [tvm] Anndrey24 opened a new pull request, #15711: [Bugfix][Strategy] Fix `arm_cpu` int8 conv2d strategy for dotprod and i8mm targets

2023-09-08 Thread via GitHub
Anndrey24 opened a new pull request, #15711: URL: https://github.com/apache/tvm/pull/15711 Whenever both dotprod and i8mm were available together on a target (e.g. `"llvm --device=arm_cpu --mtriple=aarch64-linux-gnu -mattr=+v8.2a,+dotprod,+i8mm"`), the native int8 conv2d implementation

[GitHub] [tvm] Lunderberg commented on pull request #15702: [Unity][Utility] Implement operator<< for tvm::runtime::ShapeTuple

2023-09-08 Thread via GitHub
Lunderberg commented on PR #15702: URL: https://github.com/apache/tvm/pull/15702#issuecomment-1711689431 Whoops! Thank you, and I'm glad we have the functionality!

[GitHub] [tvm] Lunderberg closed pull request #15702: [Unity][Utility] Implement operator<< for tvm::runtime::ShapeTuple

2023-09-08 Thread via GitHub
Lunderberg closed pull request #15702: [Unity][Utility] Implement operator<< for tvm::runtime::ShapeTuple URL: https://github.com/apache/tvm/pull/15702

[GitHub] [tvm] Lunderberg commented on pull request #15700: [Contrib] Save/load ShapeTuple in tvm.contrib.tvmjs

2023-09-08 Thread via GitHub
Lunderberg commented on PR #15700: URL: https://github.com/apache/tvm/pull/15700#issuecomment-1711718670 @tqchen For the background PR, the slice index is only included in the saved parameters if two conditions are met.
1. The symbolic variable is required for a computation that

[GitHub] [tvm] junrushao merged pull request #15486: [Unity][Dlight] Matmul rule on int32 workloads

2023-09-06 Thread via GitHub
junrushao merged PR #15486: URL: https://github.com/apache/tvm/pull/15486

[GitHub] [tvm] junrushao merged pull request #15687: [Unity][Dlight] Fallback rule supporting more spatial workloads

2023-09-06 Thread via GitHub
junrushao merged PR #15687: URL: https://github.com/apache/tvm/pull/15687

[GitHub] [tvm] Lunderberg opened a new pull request, #15688: [Unity] Delegate DataflowVar visitor to Var by default

2023-09-06 Thread via GitHub
Lunderberg opened a new pull request, #15688: URL: https://github.com/apache/tvm/pull/15688 Prior to this commit, writing a subclass of `relax::ExprVisitor` or `relax::ExprMutator` required separate overrides for visiting a `relax::DataflowVar` and a `relax::Var`. In the majority of

[GitHub] [tvm] Lunderberg commented on pull request #15672: [IR] Implemented Variant<...> container

2023-09-06 Thread via GitHub
Lunderberg commented on PR #15672: URL: https://github.com/apache/tvm/pull/15672#issuecomment-1709137698 @ci-bot rerun

[GitHub] [tvm] slyubomirsky opened a new pull request, #15689: [Unity][Analysis] Dataflow analysis framework and liveness analysis

2023-09-06 Thread via GitHub
slyubomirsky opened a new pull request, #15689: URL: https://github.com/apache/tvm/pull/15689 As part of #15319, this PR implements liveness analysis, which is implemented using a dataflow analysis framework similar to that described by Adrian Sampson in these lecture notes:
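For readers unfamiliar with the approach, a generic backward liveness pass in the worklist style those notes describe looks roughly like the sketch below (names and data layout are illustrative, not the PR's actual API):
```
def liveness(blocks, succ):
    """blocks: name -> {"use": set, "def": set}; succ: name -> list of successor names."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    worklist = list(blocks)
    while worklist:
        b = worklist.pop()
        # live_out[b] = union of live_in over successors; live_in[b] = use | (out - def)
        out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
        new_in = blocks[b]["use"] | (out - blocks[b]["def"])
        if new_in != live_in[b] or out != live_out[b]:
            live_in[b], live_out[b] = new_in, out
            worklist.extend(p for p in blocks if b in succ[p])  # revisit predecessors
    return live_in, live_out

# Tiny straight-line CFG: b0 defines x, b1 uses x and defines y, b2 uses y.
blocks = {
    "b0": {"use": set(), "def": {"x"}},
    "b1": {"use": {"x"}, "def": {"y"}},
    "b2": {"use": {"y"}, "def": set()},
}
succ = {"b0": ["b1"], "b1": ["b2"], "b2": []}
print(liveness(blocks, succ))  # x is live into b1, y is live into b2
```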

[GitHub] [tvm] cbalint13 commented on pull request #15685: [Target][TOPI] Use LLVM for x86 CPU feature lookup

2023-09-08 Thread via GitHub
cbalint13 commented on PR #15685: URL: https://github.com/apache/tvm/pull/15685#issuecomment-1711439202
> The CI fails because the LLVM version on CI is pretty low (==10). I'm curious if there's any variant of this API on LLVM 10? If not, we should bump LLVM to 15 or 16

Folks,

[GitHub] [tvm] Lunderberg commented on pull request #15577: [Unity] Added known tir.Expr to relax.PrimValue

2023-09-06 Thread via GitHub
Lunderberg commented on PR #15577: URL: https://github.com/apache/tvm/pull/15577#issuecomment-1708887108 @Hzfengsy Thank you, and I do think the value should be a `tir.PrimExpr` and not just a `tir.Var` for a few reasons.
* API consistency when applying `BindSymbolicVars`. When a

[GitHub] [tvm] masahi merged pull request #15671: [VM][Adreno] Fix using buffers for weights in VM

2023-09-06 Thread via GitHub
masahi merged PR #15671: URL: https://github.com/apache/tvm/pull/15671

[GitHub] [tvm] Lunderberg commented on pull request #15627: [Unity][Analysis] Improve handling of symbolic variables

2023-09-06 Thread via GitHub
Lunderberg commented on PR #15627: URL: https://github.com/apache/tvm/pull/15627#issuecomment-1708956357 Thank you for the detailed response, @tqchen. I think my main concern is that there are several types of predictability, depending on the specific audience.
1. As a

[GitHub] [tvm] csullivan merged pull request #15657: [Unity] Implemented BundleModelParams transform

2023-09-06 Thread via GitHub
csullivan merged PR #15657: URL: https://github.com/apache/tvm/pull/15657

[GitHub] [tvm] csullivan merged pull request #15626: [Unity] Implement relax.Function.bind_params

2023-09-06 Thread via GitHub
csullivan merged PR #15626: URL: https://github.com/apache/tvm/pull/15626

[GitHub] [tvm] csullivan commented on pull request #15626: [Unity] Implement relax.Function.bind_params

2023-09-06 Thread via GitHub
csullivan commented on PR #15626: URL: https://github.com/apache/tvm/pull/15626#issuecomment-1708960891 Thanks @Lunderberg and @sunggg! This is merged.

[GitHub] [tvm] tqchen commented on pull request #15700: [Contrib] Save/load ShapeTuple in tvm.contrib.tvmjs

2023-09-08 Thread via GitHub
tqchen commented on PR #15700: URL: https://github.com/apache/tvm/pull/15700#issuecomment-1711550038 looking at the background PR, I am not too sure if we want to hardcode slice index in the parameters. This is because in many cases they can be dynamically given by the runtime. I

[GitHub] [tvm] wrongtest-intellif opened a new pull request, #15709: [TIR] Support make shape and array at root scope

2023-09-08 Thread via GitHub
wrongtest-intellif opened a new pull request, #15709: URL: https://github.com/apache/tvm/pull/15709 (no comment)

[GitHub] [tvm] Anndrey24 opened a new pull request, #15710: [TOPI] Ensure vectorization of input padding in `arm_cpu` int8 conv2d interleaved schedule

2023-09-08 Thread via GitHub
Anndrey24 opened a new pull request, #15710: URL: https://github.com/apache/tvm/pull/15710 When padding the input data, the int8 conv2d interleaved schedule tries to split the `data_im2col` cols axis by a factor of 16 in order to then vectorize over those splits. However, the size of the

[GitHub] [tvm] tqchen commented on pull request #15700: [Contrib] Save/load ShapeTuple in tvm.contrib.tvmjs

2023-09-08 Thread via GitHub
tqchen commented on PR #15700: URL: https://github.com/apache/tvm/pull/15700#issuecomment-1711774724 Definitely agree that parameters like `lora_scaling` and `temperature` need to be passed in. These are usually parameters that need to be decoupled from the weights themselves.

[GitHub] [tvm] tqchen commented on pull request #15700: [Contrib] Save/load ShapeTuple in tvm.contrib.tvmjs

2023-09-08 Thread via GitHub
tqchen commented on PR #15700: URL: https://github.com/apache/tvm/pull/15700#issuecomment-1711783558 To expand a bit, since I think the examples are great for illustrating the overall case, there are usually two categories of parameter inputs to a function - C0: weights, usually fixed for

Re: [PR] [Release] [Dont Squash] Modify version number to 0.14.0 and 0.15.0.dev on main branch [tvm]

2023-10-16 Thread via GitHub
Hzfengsy merged PR #15934: URL: https://github.com/apache/tvm/pull/15934

Re: [PR] [TOPI][TIR][TE][x86] Extend x86 SIMD (u)int8 coverage for dense & conv2d [tvm]

2023-10-16 Thread via GitHub
ekalda commented on PR #15918: URL: https://github.com/apache/tvm/pull/15918#issuecomment-1764716961
> * So, I had to look into adding zextend, sextend, truncate plus the vectorpermute, vectorshuffle instead.
> The good point is that these are lowered to exactly what is needed

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-16 Thread via GitHub
Lunderberg commented on PR #15916: URL: https://github.com/apache/tvm/pull/15916#issuecomment-1764836058 Rebased onto `unity` head to ensure the CI tests haven't broken in the meantime.

Re: [PR] [Do not merge] Update ci-arm image to use tag 20231013-060139-71caa19f9 [tvm]

2023-10-16 Thread via GitHub
leandron commented on PR #15930: URL: https://github.com/apache/tvm/pull/15930#issuecomment-1764457271 Cc @Liam-Sturge, I will close this so that we can submit a change coming from the tlcpack repo.

Re: [PR] [Do not merge] Update ci-arm image to use tag 20231013-060139-71caa19f9 [tvm]

2023-10-16 Thread via GitHub
leandron closed pull request #15930: [Do not merge] Update ci-arm image to use tag 20231013-060139-71caa19f9 URL: https://github.com/apache/tvm/pull/15930

Re: [I] [Release] v0.14.0 release schedule [tvm]

2023-10-16 Thread via GitHub
ysh329 commented on issue #15812: URL: https://github.com/apache/tvm/issues/15812#issuecomment-1764657318 Hi all, with the help of @Hzfengsy, branch v0.14.0 was created. However, because the version-modification PR was merged without squash, the related commit is unverified; more concretely

Re: [PR] [Unity] [Bugfix] Fix bug in interpolate operator's default mode parameter in PyTorch frontend [tvm]

2023-10-16 Thread via GitHub
Hzfengsy merged PR #15933: URL: https://github.com/apache/tvm/pull/15933

Re: [PR] [Unity][Relax] Support Dynamic Tensor as Index, torch frontend [tvm]

2023-10-11 Thread via GitHub
guoyaol commented on PR #15884: URL: https://github.com/apache/tvm/pull/15884#issuecomment-1757864541 @junrushao @MasterJH5574

Re: [PR] [VM] modify the date pointer of tensor to point to the buffer [tvm]

2023-10-11 Thread via GitHub
tqchen commented on PR #15912: URL: https://github.com/apache/tvm/pull/15912#issuecomment-1757926404 Would be great to cross-check kv to see if we can work with the offset parameter in DLTensor. The main reason is that the data ptr may not point to an addressable location (it is not an

[PR] [TFLite][Frontend] Support quantized Reverse sequence [tvm]

2023-10-11 Thread via GitHub
tlopex opened a new pull request, #15915: URL: https://github.com/apache/tvm/pull/15915 Support Reverse sequence quantization operation as part of #15148

Re: [PR] [Unity][Schedule] Prototype of peeling tail iterations [tvm]

2023-10-11 Thread via GitHub
kparzysz-quic commented on code in PR #15901: URL: https://github.com/apache/tvm/pull/15901#discussion_r1355671093 ## src/tir/schedule/primitive/loop_transformation.cc: ## @@ -454,6 +454,56 @@ Array Split(ScheduleState self, const StmtSRef& loop_sref, const Array return

Re: [I] [Bug] [Unity] TypeError encountered when converting ONNX model with MaxPool operator [tvm]

2023-10-11 Thread via GitHub
Thrsu closed issue #15895: [Bug] [Unity] TypeError encountered when converting ONNX model with MaxPool operator URL: https://github.com/apache/tvm/issues/15895

Re: [PR] [Unity][Relax] Support Dynamic Tensor as Index, torch frontend [tvm]

2023-10-11 Thread via GitHub
Hzfengsy merged PR #15884: URL: https://github.com/apache/tvm/pull/15884

Re: [PR] [Unity] Fix wrong variable name in test_optimize_layout_transform [tvm]

2023-10-11 Thread via GitHub
Hzfengsy merged PR #15893: URL: https://github.com/apache/tvm/pull/15893

Re: [I] [Bug] StringImm Object Can't Be Pass to C++ Side from Python Side [tvm]

2023-10-11 Thread via GitHub
junrushao commented on issue #15716: URL: https://github.com/apache/tvm/issues/15716#issuecomment-1758184178 Yeah it doesn't do anything concrete to the object system...

Re: [PR] [TFLite][Frontend] Support quantized ELU [tvm]

2023-10-11 Thread via GitHub
leandron merged PR #15821: URL: https://github.com/apache/tvm/pull/15821
