Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-11 Thread via GitHub
tqchen commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1757910627 I think assuming a single vector width (vscale) and using `kScalableVectorMark=-1` to mark it would be a good tradeoff, given it may not be that useful to create vectors with multiple vector
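The single-sentinel encoding being discussed can be sketched in plain Python. The constant name comes from the thread; the helper below is purely illustrative, not TVM code:

```python
# Sketch of the encoding discussed: one sentinel lane count marks a
# scalable (vscale-multiplied) vector, instead of supporting several
# distinct scalable widths.
K_SCALABLE_VECTOR_MARK = -1  # sentinel from the RFC discussion

def lanes_repr(lanes, base=4):
    """Render a lane count, expanding the sentinel to 'vscale * base'."""
    if lanes == K_SCALABLE_VECTOR_MARK:
        return f"vscale * {base}"
    return str(lanes)

print(lanes_repr(8))    # a fixed-width 8-lane vector
print(lanes_repr(-1))   # a scalable vector
```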

[PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-11 Thread via GitHub
Lunderberg opened a new pull request, #15916: URL: https://github.com/apache/tvm/pull/15916 Prior to this commit, several transforms assumed that the arguments passed to a `call_tir` builtin were provided as in-line `relax::Tuple` objects. Because it would be equally valid for the

Re: [PR] [Unity] Paged KV Cache as LM Support [tvm]

2023-10-11 Thread via GitHub
MasterJH5574 commented on PR #15910: URL: https://github.com/apache/tvm/pull/15910#issuecomment-1757929983 Mark as draft for an update of docstring and APIs. Sorry that it is not mature enough. -- This is an automated message from the Apache Git Service. To respond to the message, please

Re: [PR] [microNPU][ETHOSU] MatMul legalization support [tvm]

2023-10-13 Thread via GitHub
ekalda merged PR #15780: URL: https://github.com/apache/tvm/pull/15780

Re: [I] [Bug] autotvm_matmul Check failed:(code == RPCCode::kReturn)is false: code =kShutdown [tvm]

2023-10-13 Thread via GitHub
MrGouYou commented on issue #15151: URL: https://github.com/apache/tvm/issues/15151#issuecomment-1761458134 > > @Johnson9009 Change the session_timeout of rpc.connect(), right? In which file does rpc.connect() need to be modified? Thanks! > > Yes, just try increasing the session_timeout of

[PR] [Fix][Unity] Fix TVMError when using relax to load model with Trilu operator [tvm]

2023-10-13 Thread via GitHub
dmilosevic252 opened a new pull request, #15924: URL: https://github.com/apache/tvm/pull/15924 Sets the default value 0 for the k variable if the value has not been previously set. This is a code patch fixing this issue: https://github.com/apache/tvm/issues/15729
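The kind of fix described (default the Trilu diagonal offset `k` to 0 when it was never set) can be sketched as follows; `trilu_get_k` is a hypothetical helper for illustration, not the actual frontend code:

```python
def trilu_get_k(inputs):
    """Return the Trilu diagonal offset k, defaulting to 0 when the
    optional second input is absent or unset."""
    if len(inputs) > 1 and inputs[1] is not None:
        return inputs[1]
    return 0

print(trilu_get_k(["x"]))      # no k provided -> falls back to 0
print(trilu_get_k(["x", 2]))   # explicit k -> 2
```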

[PR] fixup resize2d dynamic input support [tvm]

2023-10-13 Thread via GitHub
starrkk opened a new pull request, #15926: URL: https://github.com/apache/tvm/pull/15926

[I] [Bug] Autotvm LocalRunner at gpu, Check failed: (code == RPCCode::kReturn) [tvm]

2023-10-13 Thread via GitHub
MrGouYou opened a new issue, #15927: URL: https://github.com/apache/tvm/issues/15927 When I compile a NN model with AutoTVM, I get a quite strange problem. Compile device: CUDA. Code snippet: (Note: using LocalRunner)

Re: [PR] [Fix][Unity] Fix TVMError when using relax to load model with Trilu operator [tvm]

2023-10-13 Thread via GitHub
Hzfengsy commented on code in PR #15924: URL: https://github.com/apache/tvm/pull/15924#discussion_r1358136474 ## python/tvm/relax/frontend/onnx/onnx_frontend.py: ## @@ -637,6 +637,11 @@ def _impl_v14(cls, bb, inputs, attr, params): x = inputs[0] k = inputs[1]

Re: [PR] [Unity][Transform] Canonicalize and use CSE between pattern matches [tvm]

2023-10-13 Thread via GitHub
masahi merged PR #15904: URL: https://github.com/apache/tvm/pull/15904

Re: [PR] [Target] LLVM helper functions for any target info [tvm]

2023-10-16 Thread via GitHub
cbalint13 commented on PR #15761: URL: https://github.com/apache/tvm/pull/15761#issuecomment-1765115891 > Hey @cbalint13, I got a compiler warning from this commit saying: > > ``` > /Users/jshao/Projects/tvm-dev/src/target/llvm/llvm_instance.cc:76:81: warning: reference to stack

Re: [PR] [TOPI][TIR][TE][x86] Extend x86 SIMD (u)int8 coverage for dense & conv2d [tvm]

2023-10-16 Thread via GitHub
cbalint13 commented on PR #15918: URL: https://github.com/apache/tvm/pull/15918#issuecomment-1764839468 > ``` > idx_vec = T.allocate_const([0, 1, 4, 5, 2, 3, 6, 7], "int32") > tir.vectorpermute("int32x8", whatever_vector, idx_vec) > ``` > > is not more complex in my
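The permutation in the snippet above can be illustrated with plain NumPy (`tir.vectorpermute` is an intrinsic proposed in the thread, not an existing TVM API):

```python
import numpy as np

# The index vector [0, 1, 4, 5, 2, 3, 6, 7] interleaves the two halves
# of an 8-lane vector pairwise: lanes (0,1) then (4,5), then (2,3), (6,7).
vec = np.arange(10, 18, dtype=np.int32)   # lanes hold 10..17
idx = np.array([0, 1, 4, 5, 2, 3, 6, 7])
permuted = vec[idx]
print(permuted.tolist())  # [10, 11, 14, 15, 12, 13, 16, 17]
```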

Re: [PR] [Target] LLVM helper functions for any target info [tvm]

2023-10-16 Thread via GitHub
cbalint13 commented on PR #15761: URL: https://github.com/apache/tvm/pull/15761#issuecomment-1765120519 > > /Users/jshao/Projects/tvm-dev/src/target/llvm/llvm_instance.cc:763:49: warning: cast from 'const llvm::MCSubtargetInfo *' to 'llvm::MCSubtargetInfo *' drops const qualifier

Re: [PR] [microNPU][ETHOSU] Fix rounding mode in requantize operation [tvm]

2023-10-16 Thread via GitHub
lhutton1 merged PR #15929: URL: https://github.com/apache/tvm/pull/15929

Re: [PR] [TOPI][TIR][TE][x86] Extend x86 SIMD (u)int8 coverage for dense & conv2d [tvm]

2023-10-16 Thread via GitHub
cbalint13 commented on PR #15918: URL: https://github.com/apache/tvm/pull/15918#issuecomment-1765261587 > > ``` > > idx_vec = T.allocate_const([0, 1, 4, 5, 2, 3, 6, 7], "int32") > > tir.vectorpermute("int32x8", whatever_vector, idx_vec) > > ``` > > > > > > is not more

Re: [PR] [Target] LLVM helper functions for any target info [tvm]

2023-10-16 Thread via GitHub
junrushao commented on PR #15761: URL: https://github.com/apache/tvm/pull/15761#issuecomment-1765093175 Hey @cbalint13, I got a compiler warning from this commit saying: ``` /Users/jshao/Projects/tvm-dev/src/target/llvm/llvm_instance.cc:76:81: warning: reference to stack memory

Re: [PR] [Unity][Frontend][Onnx] Add support for Elu operator [tvm]

2023-10-16 Thread via GitHub
Hzfengsy merged PR #15937: URL: https://github.com/apache/tvm/pull/15937

Re: [PR] [Relay] Expose qnn ops directly from relay.qnn module [tvm]

2023-10-17 Thread via GitHub
quic-sanirudh merged PR #15928: URL: https://github.com/apache/tvm/pull/15928

Re: [PR] [Unity] Allow FLegalize to produce Relax operations [tvm]

2023-10-16 Thread via GitHub
sunggg commented on code in PR #15842: URL: https://github.com/apache/tvm/pull/15842#discussion_r1361351524 ## tests/python/relax/test_transform_legalize_ops.py: ## @@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) -> R.Tensor([16, 8]): assert

Re: [PR] [Unity] [Bugfix] Fix TypeError in interpolate caused by scale_factor as tuple [tvm]

2023-10-16 Thread via GitHub
Hzfengsy merged PR #15935: URL: https://github.com/apache/tvm/pull/15935

Re: [PR] [Unity] Allow FLegalize to produce Relax operations [tvm]

2023-10-16 Thread via GitHub
sunggg commented on code in PR #15842: URL: https://github.com/apache/tvm/pull/15842#discussion_r1361318051 ## tests/python/relax/test_transform_legalize_ops.py: ## @@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) -> R.Tensor([16, 8]): assert

[PR] [Unity][MSC][Bugfix] Trilu bugfix && special ops support [tvm]

2023-10-16 Thread via GitHub
Archermmt opened a new pull request, #15938: URL: https://github.com/apache/tvm/pull/15938 The trilu test from test_translate_relay.py results in failures (e.g. https://github.com/apache/tvm/pull/15783). This PR fixes the bugs.

Re: [PR] [Unity] Paged KV Cache as LM Support [tvm]

2023-10-13 Thread via GitHub
MasterJH5574 commented on code in PR #15910: URL: https://github.com/apache/tvm/pull/15910#discussion_r1358698766 ## tests/python/relax/test_runtime_builtin_paged_attention_kv_cache.py: ## @@ -0,0 +1,420 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or

Re: [PR] [Unity] Allow FLegalize to produce Relax operations [tvm]

2023-10-13 Thread via GitHub
sunggg commented on code in PR #15842: URL: https://github.com/apache/tvm/pull/15842#discussion_r1358446279 ## tests/python/relax/test_transform_legalize_ops.py: ## @@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) -> R.Tensor([16, 8]): assert

Re: [PR] [Unity][Testing] Show failing module in WellFormedInstrument [tvm]

2023-10-13 Thread via GitHub
Lunderberg commented on code in PR #15898: URL: https://github.com/apache/tvm/pull/15898#discussion_r1358767925 ## python/tvm/relax/ir/instrument.py: ## @@ -25,13 +25,19 @@ class WellFormedInstrument: is well formed. It will skip specific passes, like Normalize. """

[I] [Bug] Segmentation fault in FQ2I with per-channel quantized ONNX MatMul [tvm]

2023-10-13 Thread via GitHub
ry3s opened a new issue, #15931: URL: https://github.com/apache/tvm/issues/15931

Re: [PR] [Fix][Unity] Fix TVMError when using relax to load model with Trilu operator [tvm]

2023-10-17 Thread via GitHub
dmilosevic252 commented on code in PR #15924: URL: https://github.com/apache/tvm/pull/15924#discussion_r1361756821 ## python/tvm/relax/frontend/onnx/onnx_frontend.py: ## @@ -637,6 +637,11 @@ def _impl_v14(cls, bb, inputs, attr, params): x = inputs[0] k =

Re: [PR] [MetaScheduler][ROCm] Add MultiLevelTilingMatrixCore rule for auto-tensorization on ROCm [tvm]

2023-10-17 Thread via GitHub
LeiWang1999 commented on PR #15939: URL: https://github.com/apache/tvm/pull/15939#issuecomment-1766044785 @malixian Awesome! I think the community should have a thorough discussion about the infra for HIP source codegen support, as it contains a significant amount of code duplicated from CUDA,

[PR] [MetaScheduler][ROCm] Add MultiLevelTilingMatrixCore rule for auto-tensorization on ROCm [tvm]

2023-10-17 Thread via GitHub
malixian opened a new pull request, #15939: URL: https://github.com/apache/tvm/pull/15939 This Pull Request adds support for AMD Matrix Core in MetaScheduler. ## Changes Made - **Add code generation for HIP**. Although adding HIP support seems redundant, the advantages of

Re: [PR] [Unity] fixup resize2d dynamic input support [tvm]

2023-10-15 Thread via GitHub
starrkk commented on code in PR #15932: URL: https://github.com/apache/tvm/pull/15932#discussion_r1359794835 ## python/tvm/relax/frontend/onnx/onnx_frontend.py: ## @@ -1290,7 +1290,7 @@ def _impl_v18(cls, bb, inputs, attr, params): ), "Only one of scales and sizes can

Re: [PR] [Relay] Expose qnn ops directly from relay.qnn module [tvm]

2023-10-15 Thread via GitHub
quic-sanirudh commented on PR #15928: URL: https://github.com/apache/tvm/pull/15928#issuecomment-1763425961 cc @ibsidorenko

Re: [I] [Release] v0.14.0 release schedule [tvm]

2023-10-15 Thread via GitHub
ysh329 commented on issue #15812: URL: https://github.com/apache/tvm/issues/15812#issuecomment-1763418982 Hi all, due to a new idea about the release flow before and after the branch cut, this release may be delayed. More information can be found in the PRs below: - https://github.com/apache/tvm/pull/15847

[PR] [Unity] [Bugfix] Fix bug in interpolate operator's default mode parameter in PyTorch frontend [tvm]

2023-10-15 Thread via GitHub
Thrsu opened a new pull request, #15933: URL: https://github.com/apache/tvm/pull/15933 This PR fixes a bug in the interpolate operator of the PyTorch frontend in TVM. The bug was caused by incorrectly using the `method` keyword instead of the `mode` keyword when retrieving the default
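The bug class described can be illustrated with a minimal attribute-lookup sketch (not the actual TVM frontend code): reading under the wrong key silently yields the default instead of the user's setting.

```python
# What the exporter actually recorded for the interpolate operator.
attrs = {"mode": "bilinear"}

buggy = attrs.get("method", "nearest")   # typo'd key -> always the default
fixed = attrs.get("mode", "nearest")     # correct key -> the user's value

print(buggy, fixed)  # nearest bilinear
```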

Re: [PR] [WIP][Unity][BYOC]: Add ComposableKernel Support [tvm]

2023-10-17 Thread via GitHub
tiandi111 commented on PR #15740: URL: https://github.com/apache/tvm/pull/15740#issuecomment-1766400704 Current status: TODO-1 is almost done. I've finished integrating the vanilla GEMM kernels (no fusion capability for now) and their profiler. The code still needs to be improved, especially in

[PR] [Minor] Fix compilation warnings for clang [tvm]

2023-10-17 Thread via GitHub
cbalint13 opened a new pull request, #15940: URL: https://github.com/apache/tvm/pull/15940 Fix compilation warnings for clang. --- The warnings were: ``` BUILD/tvm/src/target/llvm/llvm_instance.cc:84:81: warning: reference to stack memory associated with parameter 'Obj'

Re: [PR] [Unity] Allow FLegalize to produce Relax operations [tvm]

2023-10-17 Thread via GitHub
Lunderberg commented on code in PR #15842: URL: https://github.com/apache/tvm/pull/15842#discussion_r1362092455 ## tests/python/relax/test_transform_legalize_ops.py: ## @@ -282,5 +284,77 @@ def main(A: R.Tensor([16, 32]), B: R.Tensor([32, 8])) -> R.Tensor([16, 8]): assert

Re: [PR] [BYOC][TensorRT] TensorRT BYOC integration [tvm]

2023-10-17 Thread via GitHub
ehion commented on PR #6395: URL: https://github.com/apache/tvm/pull/6395#issuecomment-1766615940 I am curious how to run AutoTVM with TensorRT. When I run ``` mod = partition_for_tensorrt(mod, params) * tasks = autotvm.task.extract_from_program( mod['main'],

Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15977: URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779395939 Current CI failures were present in unity head, but should be resolved after PR#15941. (See [this comment](https://github.com/apache/tvm/pull/15941#issuecomment-1779377445) for

Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15971: URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779402991 This may be a duplicate of https://github.com/apache/tvm/pull/15916, which also resolves the analogous problem for `FuseOps`, `RewriteDataflowReshape`, and `FoldConstant`.

[PR] [Unity] Support symbolic PrimValue arguments [tvm]

2023-10-25 Thread via GitHub
Lunderberg opened a new pull request, #15980: URL: https://github.com/apache/tvm/pull/15980 Prior to this commit, all symbolic variables needed to be defined either by tensor shapes, or by an explicit `tvm.runtime.ShapeTuple` argument. This commit allows arguments `arg:

Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15855: URL: https://github.com/apache/tvm/pull/15855#issuecomment-1779338988 One unrelated question, though. In your pseudocode, you have the signature `def test(x: Object, callback)`, but I wasn't able to pass a callback directly into a relax function. I

Re: [PR] [Unity][UnitTest] Enable BindParams test for R.Prim [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15978: URL: https://github.com/apache/tvm/pull/15978#issuecomment-1779386185 Rebased onto head to re-run CI. Previous failures were present in unity head, and have been resolved. (See [this

Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub
quic-sanirudh commented on PR #15971: URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779451578 > This may be a duplicate of #15916, which also resolves the analogous problem for `FuseOps`, `RewriteDataflowReshape`, and `FoldConstant`. Oh nice, this is great, thanks for

[PR] [Unity][UnitTest] Cleanup test_vm_build.py [tvm]

2023-10-25 Thread via GitHub
Lunderberg opened a new pull request, #15981: URL: https://github.com/apache/tvm/pull/15981 - Removed unused `import os` - Used `tvm.testing.main()` inside `if __name__=="__main__"` - Added parametrized fixture `exec_mode` instead of marking all tests. - Replace

Re: [PR] [BugFix][TIR] fix error in symbolic floormod [tvm]

2023-10-25 Thread via GitHub
JackWeiw commented on code in PR #15961: URL: https://github.com/apache/tvm/pull/15961#discussion_r1372017992 ## src/tir/transforms/inject_ptx_async_copy.cc: ## @@ -113,9 +116,11 @@ class PTXAsyncCopyInjector : public StmtMutator { return PrimExpr();

Re: [PR] [Unity][Transform] Improved canonicalization of non-dataflow Var [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15941: URL: https://github.com/apache/tvm/pull/15941#issuecomment-1779377445 No problem! I'm going to merge this in, as it resolves a few test failures in unity head that resulted from a conflict between

Re: [PR] [Unity][Transform] Improved canonicalization of non-dataflow Var [tvm]

2023-10-25 Thread via GitHub
Lunderberg merged PR #15941: URL: https://github.com/apache/tvm/pull/15941

Re: [PR] [Hotfix] Mark python-FFI handling with TVM_DLL [tvm]

2023-10-25 Thread via GitHub
Lunderberg merged PR #15970: URL: https://github.com/apache/tvm/pull/15970

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-25 Thread via GitHub
lhutton1 commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1779468180 Regarding the changes required to support scalability in the data type, I've been prototyping adding a new `scalable_` attribute to `DataType` that wraps `DLDataType`. However,

Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15855: URL: https://github.com/apache/tvm/pull/15855#issuecomment-1779329489 Thank you, @tqchen, and I really like that unit test to validate the desired behavior in the VM, and not just how that behavior impacts a use case. I've updated the PR to include the

Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15977: URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779394478 @tvm-bot re-run

Re: [PR] Add missing backtick [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15968: URL: https://github.com/apache/tvm/pull/15968#issuecomment-1779442186 Looks like the CI failures are due to a check that the PR body is non-empty. Can you add a description to the PR?

Re: [PR] [BugFix][TIR] fix error in symbolic floormod [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15961: URL: https://github.com/apache/tvm/pull/15961#issuecomment-1779454823 It looks like this PR isn't unity-specific. Can the PR be applied to the `main` branch instead, so we get the bugfix on both branches?

Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15977: URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779387049 @ci-bot re-run

Re: [PR] [Unity] Include LegalizeOps in the default relax.build lowering flow [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15864: URL: https://github.com/apache/tvm/pull/15864#issuecomment-1779381464 Sounds good, and thank you! I'm rebasing the commit on top of unity head, as the current CI failures are due to a breakage on `unity` head, and are resolved with

Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub
github-actions[bot] commented on PR #15977: URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779395462 Failed to re-run CI in https://github.com/apache/tvm/actions/runs/6641806894 ``` Traceback (most recent call last): File

Re: [PR] [TVMScript][TIR] Pretty print TIR LLVM function name [tvm]

2023-10-19 Thread via GitHub
cbalint13 commented on PR #15953: URL: https://github.com/apache/tvm/pull/15953#issuecomment-1771566305 @tvm-bot rerun

Re: [PR] [Unity][BYOC] Add support for sliding window in attention op [tvm]

2023-10-19 Thread via GitHub
vinx13 merged PR #15951: URL: https://github.com/apache/tvm/pull/15951

[PR] [Unity] Split DecomposeOpsForTraining into two steps [tvm]

2023-10-19 Thread via GitHub
Lunderberg opened a new pull request, #15954: URL: https://github.com/apache/tvm/pull/15954 Prior to this commit, the `DecomposeOpsForTraining` transform directly replaced `relax.nn.batch_norm` into more primitive relax operations. This required the decomposed form of `relax.nn.batch_norm`

Re: [PR] [Unity] Allow FLegalize to produce Relax operations [tvm]

2023-10-19 Thread via GitHub
Lunderberg merged PR #15842: URL: https://github.com/apache/tvm/pull/15842

Re: [PR] [Unity] Split DecomposeOpsForTraining into two steps [tvm]

2023-10-19 Thread via GitHub
Lunderberg commented on PR #15954: URL: https://github.com/apache/tvm/pull/15954#issuecomment-1771588343 @sunggg This is what I was referring to by a different decomposition being applied for training and for inference in https://github.com/apache/tvm/pull/15842. This PR extracts that

Re: [I] [Bug] [Unity] Fail to print out call trace under relax::Normalizer [tvm]

2023-10-19 Thread via GitHub
jinhongyii commented on issue #15880: URL: https://github.com/apache/tvm/issues/15880#issuecomment-1771529875 Hi @Lunderberg, just want to check if there's any progress on this issue.

Re: [PR] [TVMScript][TIR] Pretty print TIR LLVM function name [tvm]

2023-10-19 Thread via GitHub
cbalint13 commented on PR #15953: URL: https://github.com/apache/tvm/pull/15953#issuecomment-1771707055 @tvm-bot rerun ci

Re: [PR] [TVMScript][TIR] Pretty print TIR LLVM function name [tvm]

2023-10-19 Thread via GitHub
cbalint13 commented on PR #15953: URL: https://github.com/apache/tvm/pull/15953#issuecomment-1771634992 @tvm-bot rerun

[PR] [Disco] Explicitly set the session on DPackedFunc and DModule [tvm]

2023-10-26 Thread via GitHub
yelite opened a new pull request, #15996: URL: https://github.com/apache/tvm/pull/15996 This PR changes how to get the `Session` from `DPackedFunc` or `DModule`. Without this, there will be data corruption in the message channel if Python GC happens in a different thread than the thread

Re: [PR] [BugFix][TIR] fix error in symbolic floormod [tvm]

2023-10-27 Thread via GitHub
JackWeiw closed pull request #15961: [BugFix][TIR] fix error in symbolic floormod URL: https://github.com/apache/tvm/pull/15961

Re: [PR] [Fix][TIR]fix symbolic strides lower [tvm]

2023-10-27 Thread via GitHub
JackWeiw commented on code in PR #15986: URL: https://github.com/apache/tvm/pull/15986#discussion_r1374209716 ## tests/python/unittest/test_tir_transform_lower_opaque_block.py: ## @@ -250,6 +250,34 @@ def transformed_strided_buffer_func( C[i0 * 4 + i1, j] = B[i1,

Re: [PR] [Fix][TIR]fix symbolic strides lower [tvm]

2023-10-27 Thread via GitHub
JackWeiw commented on code in PR #15986: URL: https://github.com/apache/tvm/pull/15986#discussion_r1374212520 ## src/tir/transforms/inject_ptx_async_copy.cc: ## @@ -79,7 +80,7 @@ class PTXAsyncCopyInjector : public StmtMutator { if (indices_lanes == 1) {

Re: [I] [VOTE] Release Apache TVM v0.14.0.rc0 [tvm]

2023-10-27 Thread via GitHub
Hzfengsy commented on issue #15974: URL: https://github.com/apache/tvm/issues/15974#issuecomment-1782380337 +1

Re: [PR] [Fix][TIR]fix symbolic strides lower [tvm]

2023-10-27 Thread via GitHub
JackWeiw closed pull request #15986: [Fix][TIR]fix symbolic strides lower URL: https://github.com/apache/tvm/pull/15986

Re: [PR] [Fix][TIR]fix symbolic strides lower [tvm]

2023-10-27 Thread via GitHub
JackWeiw commented on PR #16000: URL: https://github.com/apache/tvm/pull/16000#issuecomment-1782470884 CC @Lunderberg @wrongtest-intellif

[PR] [WIP][ONNX] Fix interpreting auto_pad parameters in ConvTranspose operator [tvm]

2023-10-27 Thread via GitHub
padreofthegame opened a new pull request, #16001: URL: https://github.com/apache/tvm/pull/16001 Fixes the interpretation of the auto_pad parameters SAME_UPPER and SAME_LOWER in the ConvTranspose version 11 operator, according to the documentation at https://onnx.ai/onnx/operators/onnx__ConvTranspose.html.
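The ONNX auto_pad convention at stake can be sketched with a simplified helper (not the PR's code): the total padding along an axis is split between begin and end, and when the total is odd the extra element goes to the end for SAME_UPPER and to the beginning for SAME_LOWER.

```python
def split_padding(total, auto_pad):
    """Split a total padding amount into (begin, end) per the ONNX
    SAME_UPPER / SAME_LOWER convention."""
    half = total // 2
    extra = total - half  # the larger part when total is odd
    if auto_pad == "SAME_UPPER":
        return half, extra
    if auto_pad == "SAME_LOWER":
        return extra, half
    raise ValueError(f"unexpected auto_pad: {auto_pad}")

print(split_padding(5, "SAME_UPPER"))  # (2, 3): extra element at the end
print(split_padding(5, "SAME_LOWER"))  # (3, 2): extra element at the start
```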

[PR] [Fix][TIR]fix symbolic strides lower [tvm]

2023-10-27 Thread via GitHub
JackWeiw opened a new pull request, #16000: URL: https://github.com/apache/tvm/pull/16000 The compact_buffer_region pass modifies the shared buffer stride[0] to T.int64(72) * T.min((n + T.int64(63)) // T.int64(64) * T.int64(64), T.int64(96)), while stride[1] is T.int64(72), but in

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-26 Thread via GitHub
slyubomirsky commented on PR #15916: URL: https://github.com/apache/tvm/pull/15916#issuecomment-1782267071 If we intend to have special cases like `call_tir` where one argument _must_ be a tuple literal (i.e., not following the normal rule of the type system that any member of the type

Re: [I] [VOTE] Release Apache TVM v0.14.0.rc0 [tvm]

2023-10-27 Thread via GitHub
ysh329 commented on issue #15974: URL: https://github.com/apache/tvm/issues/15974#issuecomment-1782363896 cc @leandron @Hzfengsy @vinx13 @areusch @Mousius @tqchen @AndrewZhaoLuo

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-26 Thread via GitHub
tqchen commented on PR #15916: URL: https://github.com/apache/tvm/pull/15916#issuecomment-1781923886 The principle here is that we make common cases (and their optimizations) easy, while placing the burden on less common cases. As for the pass writing patterns, most of our current relax

[PR] [Unity][MSC][M1.4] Add Runner and test with relax [tvm]

2023-10-26 Thread via GitHub
Archermmt opened a new pull request, #15997: URL: https://github.com/apache/tvm/pull/15997 This is a pull request for the MSC (Multi-System Compile) RFC: https://discuss.tvm.apache.org/t/rfc-unity-msc-introduction-to-multi-system-compiler/15251/5 Tracking issue:

[PR] [Codegen] Add shuffle for webgpu and metal [tvm]

2023-10-26 Thread via GitHub
vinx13 opened a new pull request, #15998: URL: https://github.com/apache/tvm/pull/15998 cc @MasterJH5574 @tqchen

[PR] [FFI] Use the correct function in DecRef [tvm]

2023-10-26 Thread via GitHub
yelite opened a new pull request, #15999: URL: https://github.com/apache/tvm/pull/15999 Found this problem when investigating for #15996 cc @Lunderberg

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub
slyubomirsky commented on code in PR #15916: URL: https://github.com/apache/tvm/pull/15916#discussion_r1372273991 ## src/relax/transform/call_tir_rewrite.cc: ## @@ -111,41 +111,69 @@ class CallTIRMutator : public ExprMutator { << expr->struct_info_; }

[PR] [FFI] Allow IntImm arguments to PackedFunc with int parameter [tvm]

2023-10-25 Thread via GitHub
Lunderberg opened a new pull request, #15983: URL: https://github.com/apache/tvm/pull/15983 TVM containers, such as tvm::runtime::Array, require the contained objects to inherit from `ObjectRef`. As a result, the wrapper types `IntImm`, `FloatImm`, and `StringImm` are often used to allow
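The boxing pattern this PR description refers to can be shown with a language-agnostic analogy (these are NOT TVM's real classes): a container that only accepts one base class forces primitive values into wrapper objects, the way `IntImm`/`FloatImm`/`StringImm` wrap ints/floats/strings so they can be stored in `tvm::runtime::Array`.

```python
class ObjectRef:
    """Stand-in for the common base class the container requires."""

class IntImmLike(ObjectRef):
    """Analog of IntImm: boxes a plain int into an ObjectRef subclass."""
    def __init__(self, value):
        self.value = value

class ArrayLike:
    """Analog of tvm::runtime::Array: rejects unboxed primitives."""
    def __init__(self, *items):
        for item in items:
            if not isinstance(item, ObjectRef):
                raise TypeError("elements must inherit from ObjectRef")
        self.items = list(items)

arr = ArrayLike(IntImmLike(3), IntImmLike(7))
print([x.value for x in arr.items])  # [3, 7]
```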

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub
slyubomirsky commented on code in PR #15916: URL: https://github.com/apache/tvm/pull/15916#discussion_r1372276835 ## python/tvm/relax/op/base.py: ## @@ -97,7 +97,11 @@ def call_tir( ret: Call A call node for the call_tir operator. """ -if isinstance(args,

Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub
tqchen commented on PR #15855: URL: https://github.com/apache/tvm/pull/15855#issuecomment-1780006889 Ah I see, one way to get around this is to define the callback as a global test function and call that with call_packed, e.g. `test.vm.assert_notnull`

Re: [PR] [Unity] Support symbolic PrimValue arguments [tvm]

2023-10-25 Thread via GitHub
masahi merged PR #15980: URL: https://github.com/apache/tvm/pull/15980

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on code in PR #15916: URL: https://github.com/apache/tvm/pull/15916#discussion_r1372341781 ## src/relax/transform/call_tir_rewrite.cc: ## @@ -111,41 +111,69 @@ class CallTIRMutator : public ExprMutator { << expr->struct_info_; }

[PR] Bump werkzeug from 2.2.3 to 3.0.1 in /apps/microtvm [tvm]

2023-10-25 Thread via GitHub
dependabot[bot] opened a new pull request, #15982: URL: https://github.com/apache/tvm/pull/15982 Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.2.3 to 3.0.1. Release notes sourced from [werkzeug's releases](https://github.com/pallets/werkzeug/releases): 3.0.1

Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub
Lunderberg commented on PR #15855: URL: https://github.com/apache/tvm/pull/15855#issuecomment-1780074018 > ah i see, one way to get around is define the callback as a global test function and call that with call_packed. That's what I ended up doing, with a global definition which can

Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub
slyubomirsky commented on PR #15971: URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779980410 Didn't see it, yep, they're duplicates. There is one case that the other PR misses so hopefully that can be updated.

Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub
slyubomirsky closed pull request #15971: [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal URL: https://github.com/apache/tvm/pull/15971

Re: [PR] [Unity][BYOC] CoreML Scaffolding [tvm]

2023-10-25 Thread via GitHub
vinx13 merged PR #15556: URL: https://github.com/apache/tvm/pull/15556

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub
tqchen commented on PR #15916: URL: https://github.com/apache/tvm/pull/15916#issuecomment-1780084913 Thanks for the PR, I know this is indeed a generalization and there are some tradeoffs to be considered here. Specifically, we should consider the following alternative: - C0: We

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub
slyubomirsky commented on PR #15916: URL: https://github.com/apache/tvm/pull/15916#issuecomment-1779975180 I think you may need to update the StructInfo inference for `call_tir_inplace` like in #15971, since (without modification) that assumes the argument is a tuple literal. The test

Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub
tqchen commented on code in PR #15916: URL: https://github.com/apache/tvm/pull/15916#discussion_r1372345395 ## include/tvm/relax/expr_functor.h: ## @@ -278,6 +278,37 @@ class ExprVisitor : public ExprFunctor { virtual void VisitSpan(const Span& span); virtual void

Re: [PR] [Unity][Contrib] Add vLLM paged attention kernel [tvm]

2023-10-27 Thread via GitHub
yelite commented on code in PR #15995: URL: https://github.com/apache/tvm/pull/15995#discussion_r1374552066 ## cmake/modules/CUDA.cmake: ## @@ -64,6 +64,7 @@ if(USE_CUDA) message(STATUS "Build with Thrust support") cmake_minimum_required(VERSION 3.13) # to compile

Re: [PR] [ACL] Update Compute Library to v23.08 [tvm]

2023-10-27 Thread via GitHub
leandron commented on PR #15990: URL: https://github.com/apache/tvm/pull/15990#issuecomment-1783145283 > Thanks @leandron! The [change log](https://arm-software.github.io/ComputeLibrary/v23.08/versions_changelogs.xhtml#S2_2_changelog) states `libarm_compute_core` was deprecated and that we

[PR] [Draft][Unity] Allow dynamic indices to TupleGetItem [tvm]

2023-10-27 Thread via GitHub
Lunderberg opened a new pull request, #16002: URL: https://github.com/apache/tvm/pull/16002 This PR updates the type of `TupleGetItem::index` from `int` to `Expr`, to allow access of a tuple at a location specified by a symbolic variable. The lack of this functionality was run into
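The type change this draft PR describes, generalizing the index from `int` to `Expr`, can be sketched in miniature. The tiny `Expr`/`IntImm`/`Var`/`TupleGetItem` classes below are illustrative stand-ins for the Relax IR nodes, not the real ones: a plain-int index is auto-wrapped into a constant node, while a symbolic `Var` index is now also accepted.

```python
class Expr:
    """Base class for expression nodes (stand-in for the IR's Expr)."""


class IntImm(Expr):
    """Constant integer expression."""

    def __init__(self, value: int):
        self.value = value


class Var(Expr):
    """Symbolic variable expression."""

    def __init__(self, name: str):
        self.name = name


class TupleGetItem(Expr):
    """Tuple access whose index is an Expr rather than a plain int."""

    def __init__(self, tup, index):
        # Before the change, index had to be a plain int; accepting any Expr
        # makes both constant and symbolic indices representable.
        if isinstance(index, int):
            index = IntImm(index)
        assert isinstance(index, Expr)
        self.tup = tup
        self.index = index


static = TupleGetItem(("a", "b"), 1)          # constant index, auto-wrapped
dynamic = TupleGetItem(("a", "b"), Var("i"))  # symbolic index, now allowed
```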

[PR] [microNPU][ETHOSU] Fix ConcatRewriter args processing [tvm]

2023-10-27 Thread via GitHub
Aleksei-grovety opened a new pull request, #16003: URL: https://github.com/apache/tvm/pull/16003 In ConcatRewriter, the case where the concatenation argument is a TupleGetItem was not handled. cc @lhutton1, @ekalda, @leandron

Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub
vinx13 merged PR #15977: URL: https://github.com/apache/tvm/pull/15977

Re: [PR] [BugFix][TIR] fix error in symbolic floormod [tvm]

2023-10-25 Thread via GitHub
JackWeiw commented on PR #15961: URL: https://github.com/apache/tvm/pull/15961#issuecomment-1780341103 > It looks like this PR isn't unity-specific. Can the PR be applied to the `main` branch instead, so we get the bugfix on both branches?

Re: [PR] [Fix][TIR] Symbolic strides lower [tvm]

2023-10-25 Thread via GitHub
JackWeiw closed pull request #15984: [Fix][TIR] Symbolic strides lower URL: https://github.com/apache/tvm/pull/15984
