Re: [PR] [IR] Provide well-formed intermediate in ApplyPassToFunction [tvm]

2024-04-04 Thread via GitHub
slyubomirsky commented on PR #16843: URL: https://github.com/apache/tvm/pull/16843#issuecomment-2038627542 Wait, a global var can be mapped to an `ExternFunc`? That's wild. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

Re: [PR] [TIR] Fix segfaults from ordering of Let/Assert in MakePackedAPI [tvm]

2024-04-04 Thread via GitHub
Lunderberg merged PR #16543: URL: https://github.com/apache/tvm/pull/16543

(tvm) branch main updated: [TIR] Fix segfaults from ordering of Let/Assert in MakePackedAPI (#16543)

2024-04-04 Thread lunderberg
This is an automated email from the ASF dual-hosted git repository. lunderberg pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new cd08356e66 [TIR] Fix segfaults from ordering of Let/Assert in MakePackedAPI (#16543)

Re: [PR] [Runtime] Allow inspection of function names from a compiled .so [tvm]

2024-04-04 Thread via GitHub
Lunderberg commented on PR #16836: URL: https://github.com/apache/tvm/pull/16836#issuecomment-2038492848 I've reverted the implementation of this PR, but kept the same unit tests. The platform-dependent implementation is entirely removed, replaced with a TIR lowering pass.
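The approach described above can be sketched in plain Python. This is a hypothetical analogue, not the actual TVM implementation or API: the idea is that a lowering step records the function names inside the compiled artifact itself, so inspection needs no platform-specific symbol-table code.

```python
# Hypothetical sketch: instead of asking the OS loader which symbols a
# compiled .so exports (platform-dependent dlopen/dlsym-style code),
# bake the name list into the artifact when it is built, then expose a
# query method. The class and method names here are illustrative only.
class CompiledArtifact:
    def __init__(self, functions):
        self._functions = dict(functions)
        # Name list recorded at "compile" time, not discovered at runtime.
        self._names = sorted(self._functions)

    def list_function_names(self):
        return list(self._names)

    def get_function(self, name):
        return self._functions[name]

art = CompiledArtifact({"main": lambda: 0, "helper": lambda: 1})
print(art.list_function_names())  # ['helper', 'main']
```

Because the list is part of the artifact, the same query works identically on every platform, which is the property the PR's TIR-lowering-pass approach is after.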

(tvm) branch nightly updated (6f74762743 -> cd08356e66)

2024-04-04 Thread github-bot
This is an automated email from the ASF dual-hosted git repository. github-bot pushed a change to branch nightly in repository https://gitbox.apache.org/repos/asf/tvm.git from 6f74762743 [Relax] Provide well-formed output in `transform.LazyGetInput` (#16841) add c84f6bb4fd Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/ethosu (#16838)

(tvm) branch main updated (6f74762743 -> c84f6bb4fd)

2024-04-04 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git from 6f74762743 [Relax] Provide well-formed output in `transform.LazyGetInput` (#16841) add c84f6bb4fd Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/ethosu (#16838)

(tvm) branch dependabot/pip/apps/microtvm/ethosu/pillow-10.3.0 deleted (was 61558f1e91)

2024-04-04 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a change to branch dependabot/pip/apps/microtvm/ethosu/pillow-10.3.0 in repository https://gitbox.apache.org/repos/asf/tvm.git was 61558f1e91 Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/ethosu

Re: [PR] Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/ethosu [tvm]

2024-04-04 Thread via GitHub
lhutton1 merged PR #16838: URL: https://github.com/apache/tvm/pull/16838

Re: [PR] Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/cmsisnn [tvm]

2024-04-04 Thread via GitHub
lhutton1 merged PR #16839: URL: https://github.com/apache/tvm/pull/16839

(tvm) branch dependabot/pip/apps/microtvm/cmsisnn/pillow-10.3.0 deleted (was 78f4bc6434)

2024-04-04 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a change to branch dependabot/pip/apps/microtvm/cmsisnn/pillow-10.3.0 in repository https://gitbox.apache.org/repos/asf/tvm.git was 78f4bc6434 Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/cmsisnn

[PR] [Meta-Schedule][OpenCL] Enable MS tuning for Android OpenCL [tvm]

2024-04-04 Thread via GitHub
echuraev opened a new pull request, #16846: URL: https://github.com/apache/tvm/pull/16846 Added OpenCL as a GPU target for Meta-Scheduler. Implemented export function for Android which can be used when MS builder is configured. Added an integration test which checks that MS tuning on

(tvm) branch main updated (c84f6bb4fd -> dd384906e3)

2024-04-04 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git from c84f6bb4fd Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/ethosu (#16838) add dd384906e3 Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm/cmsisnn (#16839)

(tvm) branch dependabot/pip/apps/microtvm/pillow-10.3.0 updated (c0de939622 -> 5b7db0cdec)

2024-04-04 Thread github-bot
This is an automated email from the ASF dual-hosted git repository. github-bot pushed a change to branch dependabot/pip/apps/microtvm/pillow-10.3.0 in repository https://gitbox.apache.org/repos/asf/tvm.git discard c0de939622 Bump pillow from 10.2.0 to 10.3.0 in /apps/microtvm

Re: [PR] [SVE] Support scalable vectors in LoopVectorizer [tvm]

2024-04-04 Thread via GitHub
ekalda commented on code in PR #16782: URL: https://github.com/apache/tvm/pull/16782#discussion_r1551560120 ## tests/python/tir-transform/test_tir_transform_vectorize.py ##

Re: [PR] [SVE] Support scalable vectors in LoopVectorizer [tvm]

2024-04-04 Thread via GitHub
ekalda commented on code in PR #16782: URL: https://github.com/apache/tvm/pull/16782#discussion_r1551559396 ## src/tir/ir/expr.cc ##

(tvm) branch main updated: [Debug][Disco] Check if a PackedFunc exists before calling it (#16845)

2024-04-04 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new 53f05d8ded [Debug][Disco] Check if a PackedFunc exists before calling it (#16845)

Re: [PR] [Debug][Disco] Check if a PackedFunc exists before calling it [tvm]

2024-04-04 Thread via GitHub
tqchen merged PR #16845: URL: https://github.com/apache/tvm/pull/16845
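The pattern behind this PR, looking a function up by name before invoking it, can be sketched in plain Python. This is an illustrative analogue, not the actual TVM Disco API; the registry name and `call_packed` helper are hypothetical.

```python
# Illustrative analogue (hypothetical names, not TVM's real API): check
# that a function is registered before calling it, so a missing
# PackedFunc produces a descriptive error at the call site instead of a
# crash deep inside the callee.
REGISTRY = {"runtime.ping": lambda: "pong"}

def call_packed(name, *args):
    func = REGISTRY.get(name)
    if func is None:
        # Fail fast with the offending name, rather than dereferencing
        # a null function handle.
        raise KeyError(f"PackedFunc '{name}' is not registered")
    return func(*args)

print(call_packed("runtime.ping"))  # pong
```

The design choice is the usual look-before-call trade: one extra lookup buys an actionable error message, which matters most in debug-oriented code paths like Disco's.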

Re: [PR] [SVE] Support scalable vectors in LoopVectorizer [tvm]

2024-04-04 Thread via GitHub
ekalda commented on code in PR #16782: URL: https://github.com/apache/tvm/pull/16782#discussion_r1551562114 ## src/tir/transforms/vectorize_loop.cc ##

Re: [PR] [SVE] Support scalable vectors in LoopVectorizer [tvm]

2024-04-04 Thread via GitHub
ekalda commented on code in PR #16782: URL: https://github.com/apache/tvm/pull/16782#discussion_r1551562489 ## src/tir/transforms/vectorize_loop.cc ##

Re: [PR] [SVE] Support scalable vectors in LoopVectorizer [tvm]

2024-04-04 Thread via GitHub
ekalda commented on code in PR #16782: URL: https://github.com/apache/tvm/pull/16782#discussion_r1551562798 ## src/tir/transforms/vectorize_loop.cc ##

Re: [PR] [SVE] Support scalable vectors in LoopVectorizer [tvm]

2024-04-04 Thread via GitHub
ekalda commented on PR #16782: URL: https://github.com/apache/tvm/pull/16782#issuecomment-2037036256 Thanks for all the reviews and discussion! The latest version now includes changes as a response to @lhutton1 review. I also looked into using the target info in the LoopVectorizer and

[PR] [relay][feature] save relay IR as onnx for visualize [tvm]

2024-04-04 Thread via GitHub
ShawnZhuang opened a new pull request, #16847: URL: https://github.com/apache/tvm/pull/16847 Prior to this commit, Relay IR was not easy to view in Netron, and some existing solutions require awareness of distinct Attr types. This commit provides a method for serializing Relay IR to ONNX for visualization.
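The core of such an exporter is walking the IR expression tree and flattening it into the node-list form an ONNX graph uses, which is what lets a viewer like Netron render it. The toy sketch below is a hypothetical, stdlib-only analogue of that idea (not the PR's code): real Relay IR and real ONNX protobufs are far richer, and the tuple-based expression encoding here is invented for illustration.

```python
# Toy sketch (hypothetical, not the PR's implementation): flatten a small
# expression tree into a list of op nodes wired together by tensor names,
# mirroring how an ONNX GraphProto chains NodeProtos. Leaves are input
# tensor names (strings); interior nodes are ("op_name", [child, ...]).
def to_graph(expr, nodes=None):
    """Post-order walk: emit children first, then the op node itself."""
    if nodes is None:
        nodes = []
    if isinstance(expr, str):           # leaf: an input tensor name
        return expr, nodes
    op, args = expr                     # op node, e.g. ("relu", ["x"])
    input_names = [to_graph(a, nodes)[0] for a in args]
    out_name = f"{op}_{len(nodes)}"     # a fresh name for this node's output
    nodes.append({"op_type": op, "inputs": input_names, "outputs": [out_name]})
    return out_name, nodes

out, nodes = to_graph(("add", [("relu", ["x"]), "y"]))
# nodes lists relu first, then add, wired by the generated tensor names.
```

A real exporter would additionally map Relay attributes onto ONNX node attributes, which is exactly where the "awareness of distinct Attr types" problem mentioned above shows up.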

[PR] [DLight] Fix a corner case for reduction rule [tvm]

2024-04-04 Thread via GitHub
Hzfengsy opened a new pull request, #16848: URL: https://github.com/apache/tvm/pull/16848 The current rule fails when the output shape has only one element, because `preserve_unit_loops` is missing. This PR fixes it and adds a test case.
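The failure mode can be illustrated with a simplified, hypothetical analogue (not DLight's real scheduling code): loops of extent 1 are commonly folded away, so when the output has a single element a rule that expects at least one loop to bind threads to finds none. Preserving unit loops keeps a degenerate loop around for the rule to work on.

```python
# Simplified analogue (illustrative only, not DLight's implementation):
# return the loops a schedule rule would see for a given output shape.
def loops_for(shape, preserve_unit_loops):
    # Extent-1 loops are normally elided during blocking.
    loops = [extent for extent in shape if extent > 1]
    if not loops and preserve_unit_loops:
        # Keep one unit loop so the rule still has a loop to bind.
        loops = [1]
    return loops

assert loops_for((1,), preserve_unit_loops=False) == []   # rule has nothing to bind
assert loops_for((1,), preserve_unit_loops=True) == [1]   # rule can proceed
```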

Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]

2024-04-04 Thread via GitHub
ShawnZhuang commented on PR #16847: URL: https://github.com/apache/tvm/pull/16847#issuecomment-203717 Discussion at https://discuss.tvm.apache.org/t/relay-onnx-support-saveing-relay-ir-to-onnx-for-visualization/17020

Re: [PR] [relay][feature] save relay IR as onnx for visualize [tvm]

2024-04-04 Thread via GitHub
tqchen commented on PR #16847: URL: https://github.com/apache/tvm/pull/16847#issuecomment-2037249162 Thank you @ShawnZhuang, one thing worth clarifying is whether it serves as an ONNX exporter (which I believe there was some work on