(tvm) branch main updated: [CLML] Fix in clml pattern check condition (#16933)

2024-04-26 Thread srk
This is an automated email from the ASF dual-hosted git repository.

srk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 1453893be0 [CLML] Fix in clml pattern check condition (#16933)
1453893be0 is described below

commit 1453893be08f34dbde2950a179028d11daf48936
Author: krishnaraj36 
AuthorDate: Sat Apr 27 11:06:31 2024 +0530

[CLML] Fix in clml pattern check condition (#16933)

* [CLML] Fix in clml pattern check condition

Added more check conditions to make the CLML path more robust:
1. depth_to_space - the CLML path is supported only for mode="DCR" and NCHW layout.
2. Default checks - CLML supports tensors with at most 4 dimensions and batch size = 1 (see the sketch below).

* Update clml.py
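
A hedged sketch of the kind of checks described above (the function names, attribute accesses, and exact rank bound are illustrative assumptions; the authoritative change is the diff below):

```python
# Illustrative sketch only -- not the merged code; see the diff below.
def check_depth_to_space(call):
    attrs = call.attrs
    # Assumption: CLML offload is limited to DCR mode with NCHW layout.
    return attrs.mode == "DCR" and attrs.layout == "NCHW"


def check_default(call):
    for arg in call.args:
        shape = arg.checked_type.shape
        # Assumptions: tensor rank is capped at 4 and the batch dimension must be 1.
        if len(shape) > 4:
            return False
        if len(shape) > 0 and shape[0] > 1:
            return False
    return True
```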
---
 python/tvm/relay/op/contrib/clml.py| 118 +
 tests/python/contrib/test_clml/test_ops.py |  30 ++--
 2 files changed, 109 insertions(+), 39 deletions(-)

diff --git a/python/tvm/relay/op/contrib/clml.py b/python/tvm/relay/op/contrib/clml.py
index 53b022c347..22a7aae2b1 100644
--- a/python/tvm/relay/op/contrib/clml.py
+++ b/python/tvm/relay/op/contrib/clml.py
@@ -93,6 +93,7 @@ class OptimizeBatchnorm(ExprMutator):
 if (
 not isinstance(arg, (Var, Constant))
 and isinstance(arg, tvm.relay.TupleGetItem)
+and isinstance(arg.tuple_value.op, tvm.ir.op.Op)
 and arg.tuple_value.op.name == "nn.batch_norm"
 and (not isinstance(arg.tuple_value.args[0], (Var, Constant)))
 and arg.tuple_value.args[0].op.name == "nn.conv2d"
@@ -260,7 +261,8 @@ def clml_pattern_table():
 )
 )
 pattern = pattern.optional(is_op("nn.relu"))
-pattern = pattern.optional(is_op("clip"))
+# Fusion pattern to support with relu6 layer.
+pattern = pattern.optional(is_op("clip").has_attr({"a_min": 0.0, "a_max": 6.0}))
 return pattern
 
 def conv_transpose_pattern():
@@ -276,7 +278,8 @@ def clml_pattern_table():
 )
 )
 pattern = pattern.optional(is_op("nn.relu"))
-pattern = pattern.optional(is_op("clip"))
+# Fusion pattern to support with relu6 layer.
+pattern = pattern.optional(is_op("clip").has_attr({"a_min": 0.0, "a_max": 6.0}))
 return pattern
 
 def pad_conv_pattern():
@@ -293,7 +296,8 @@ def clml_pattern_table():
 )
 )
 pattern = pattern.optional(is_op("nn.relu"))
-pattern = pattern.optional(is_op("clip"))
+# Fusion pattern to support with relu6 layer.
+pattern = pattern.optional(is_op("clip").has_attr({"a_min": 0.0, "a_max": 6.0}))
 return pattern
 
 def batch_norm_pattern():
@@ -359,6 +363,9 @@ def clml_pattern_table():
 if attrs.data_layout != "NCHW":
 return False
 
+if call.checked_type.shape[0] > 1:
+return False
+
 if (
 (not clip_found)
 and (attrs.kernel_size[0] == 3)
@@ -411,19 +418,13 @@ def clml_pattern_table():
 # Scalars are not supported
 if len(call.args[1].checked_type.shape) == 0:
 return False
+if call.args[0] == call.args[1]:
+return False
 
if tuple(call.args[0].checked_type.shape) != tuple(call.args[1].checked_type.shape):
 return False
 
-for arg in call.args:
-# Avoid any operators with dtype Int64
-if arg.checked_type.dtype == "int64":
-return False
-# No support for batch> 1
-if arg.checked_type.shape[0] > 1:
-return False
-
-return True
+return check_default_op(call)
 
 def check_pad_op(extract):
 call = extract
@@ -433,60 +434,117 @@ def clml_pattern_table():
 # Pad layers before any convolution are not guarenteed to be NCHW.
 if isinstance(call.args[0], tvm.relay.expr.Var):
 return False
-return True
+return check_default_op(call)
 
 def check_softmax_op(extract):
 call = extract
-# supports 2D and 4D tensors
+# supports 2D and 4D tensors.
 if len(call.args[0].checked_type.shape) not in [2, 4]:
 return False
-return True
+return check_default_op(call)
 
 def check_upsampling_op(extract):
 call = extract
 if call.attrs["method"] != "bilinear":
 return False
-return True
+return check_default_op(call)
 
 def check_concat_op(extract):
 call = extract
 if call.attrs["axis"] != 1:
 return False
-return True
+return check_default_op(call)
 
 def check_default_op(extract):
 call = extract
 
 if isinstance(call, tvm.relay.expr.TupleGetItem):
 call = call.tuple_value
+

Re: [PR] [CLML] Fix in clml pattern check condition [tvm]

2024-04-26 Thread via GitHub


srkreddy1238 merged PR #16933:
URL: https://github.com/apache/tvm/pull/16933


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



(tvm) branch nightly updated (51cfb70f86 -> 97ff7cc4f1)

2024-04-26 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 51cfb70f86 [Fix][Dlight] Fix GeneralReduction for log-sum-exp (#16923)
 add 5bd10472e9 [SCRIPT][ADRENO] Fix in build config for adreno (#16927)
 add 278a6af085 [Relax][TIR] Introduce new `cumsum` op for gpu (#16934)
 add 97ff7cc4f1 [VM][OPENCL] Take advantage of OpenCL host ptr for improved copy (#16929)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relax/backend/dispatch_sort_scan.py |  41 +
 python/tvm/relax/backend_tir/__init__.py   |   1 +
 python/tvm/relax/backend_tir/cumsum.py | 193 +
 src/runtime/relax_vm/paged_kv_cache.cc |  19 ++
 .../relax/test_backend_dispatch_sort_scan.py   |  38 +++-
 tests/scripts/setup-adreno-env.sh  |   3 +-
 tests/scripts/task_build_adreno_bins.sh|   3 +
 tests/scripts/task_config_build_adreno.sh  |   3 +-
 8 files changed, 293 insertions(+), 8 deletions(-)
 create mode 100644 python/tvm/relax/backend_tir/cumsum.py



[PR] [Relax] Allow PrimValue as index in relax.op.take [tvm]

2024-04-26 Thread via GitHub


Lunderberg opened a new pull request, #16940:
URL: https://github.com/apache/tvm/pull/16940

   Prior to this commit, `relax.op.take` only allowed tensors as the `indices` argument. This commit extends `R.take` to also allow the index to be a `relax::PrimValue`.
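
   A hedged illustration of what this enables (the function name, shapes, and dtype below are example choices, not taken from the PR):

```python
# Sketch only: relies on the extension described above; names, shapes, and the
# int64 dtype are illustrative assumptions.
from tvm.script import relax as R


@R.function
def take_row(x: R.Tensor((4, 8), "float32"), i: R.Prim("int64")):
    # `i` is supplied as a PrimValue instead of a tensor of indices.
    gv = R.take(x, i, axis=0)
    return gv
```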





Re: [I] [Release] v0.16.0 release schedule [tvm]

2024-04-26 Thread via GitHub


ysh329 commented on issue #16857:
URL: https://github.com/apache/tvm/issues/16857#issuecomment-2080296797

   > Hi all, vote starts (#16912). Everyone is welcomed to vote. Please vote by 
replying to this thread (#16912) explicitly. Vote will close Apr. 25th at 
23:59M GMT.
   
   Hi all, the release is delayed due to a lack of voting members.





Re: [PR] [3rdparty] Bump FlashInfer for sampling functions [tvm]

2024-04-26 Thread via GitHub


tqchen commented on PR #16935:
URL: https://github.com/apache/tvm/pull/16935#issuecomment-2080278387

   @tvm-bot rerun





Re: [PR] [Thrust] Increase static workspace size [tvm]

2024-04-26 Thread via GitHub


tqchen commented on PR #16937:
URL: https://github.com/apache/tvm/pull/16937#issuecomment-2080278272

   @tvm-bot rerun





Re: [PR] [Runtime] Allow offset to be specified in NDArray::CreateView [tvm]

2024-04-26 Thread via GitHub


tqchen commented on PR #16938:
URL: https://github.com/apache/tvm/pull/16938#issuecomment-2080274374

   One note: while such a view is OK, the additional byte_offset would require special kernels to interact with it (by allowing elem_offset in the arguments), and it may not be the best-performing option because there is a tradeoff between contiguity/alignment and zero copy.
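
   As a rough illustration of the elem_offset point (a sketch under assumptions -- the kernel name, buffer shape, and use of a symbolic offset are made up, and this is not code from the PR):

```python
# Sketch: a TIR kernel that tolerates a non-zero offset by matching its input
# buffer with a symbolic elem_offset. Illustrative only.
from tvm.script import tir as T


@T.prim_func
def add_one(a: T.handle, b: T.handle):
    off = T.int64()
    A = T.match_buffer(a, (16,), "float32", elem_offset=off)
    B = T.match_buffer(b, (16,), "float32")
    for i in range(16):
        with T.block("compute"):
            vi = T.axis.spatial(16, i)
            B[vi] = A[vi] + T.float32(1)
```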





Re: [PR] [CI] Upgrade CUDA to 12.4 [tvm]

2024-04-26 Thread via GitHub


tqchen commented on PR #16939:
URL: https://github.com/apache/tvm/pull/16939#issuecomment-2080273868

   @tvm-bot rerun





(tvm) branch main updated: [VM][OPENCL] Take advantage of OpenCL host ptr for improved copy (#16929)

2024-04-26 Thread ruihangl
This is an automated email from the ASF dual-hosted git repository.

ruihangl pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 97ff7cc4f1 [VM][OPENCL] Take advantage of OpenCL host ptr for improved copy (#16929)
97ff7cc4f1 is described below

commit 97ff7cc4f197ef0fa21093448dd3e45e6f1fd2bc
Author: Siva 
AuthorDate: Sat Apr 27 02:07:44 2024 +0530

[VM][OPENCL] Take advantage of OpenCL host ptr for improved copy (#16929)

We can use the OpenCL mapped pointer for these copies for improved performance.
---
 src/runtime/relax_vm/paged_kv_cache.cc | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/src/runtime/relax_vm/paged_kv_cache.cc b/src/runtime/relax_vm/paged_kv_cache.cc
index 64759d465b..efedac235b 100644
--- a/src/runtime/relax_vm/paged_kv_cache.cc
+++ b/src/runtime/relax_vm/paged_kv_cache.cc
@@ -31,6 +31,9 @@
 #include 
 
 #include "kv_state.h"
+#if defined(OPENCL_ENABLE_HOST_PTR)
+#include "../opencl/opencl_common.h"
+#endif
 
 namespace tvm {
 namespace runtime {
@@ -384,6 +387,22 @@ class PlainPagedKVCacheAuxDataManager : public PagedKVCacheAuxDataManager {
   return;
 }
 DLTensor copy_dst = *array.operator->();
+#if defined(OPENCL_ENABLE_HOST_PTR)
+tvm::runtime::cl::OpenCLWorkspace* workspace = tvm::runtime::cl::OpenCLWorkspace::Global();
+if (workspace->IsOpenCLDevice(copy_dst.device)) {
+  void* nptr = workspace->GetNativePtr(array);
+  uint64_t copy_size;
+  if (shape.defined()) {
+ICHECK_EQ(shape.value().size(), 1);
+copy_size = shape.value()->data[0] * sizeof(int32_t);
+  } else {
copy_size = DeviceAPI::Get(array->device)->GetDataSize(*array.operator->());
+  }
+  memcpy(static_cast(nptr) + dst_elem_offset * sizeof(int32_t), vec_data, copy_size);
+  return;
+}
+#endif
+
 if (shape.defined()) {
   ICHECK_EQ(shape.value().size(), 1);
   copy_dst.ndim = 1;



Re: [PR] [VM][OPENCL] Take advantage of OpenCL host ptr for improved copy [tvm]

2024-04-26 Thread via GitHub


MasterJH5574 merged PR #16929:
URL: https://github.com/apache/tvm/pull/16929





Re: [PR] Overriding the StructuralEqual() for easy usage [tvm]

2024-04-26 Thread via GitHub


sdalvi-quic commented on PR #16908:
URL: https://github.com/apache/tvm/pull/16908#issuecomment-2079666750

   @tvm-bot rerun





[PR] [TFLite] Add support for GELU conversion [tvm]

2024-04-26 Thread via GitHub


lhutton1 opened a new pull request, #16936:
URL: https://github.com/apache/tvm/pull/16936

   This commit adds support for converting a TFLite fp32 GELU operation to 
Relay.
   
   Also includes some neighbouring cleanup of version checks to silence 
warnings.
   
   cc @leandron @ekalda 
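
   For reference, the erf-based decomposition that fp32 GELU conversions typically lower to (a hedged sketch; the helper name and the exact lowering used in this PR are assumptions):

```python
# Sketch of an erf-based GELU lowering in Relay:
#   GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
# The helper name and constant handling are illustrative, not the PR's converter.
import math

from tvm import relay


def gelu_erf(x: relay.Expr, dtype: str = "float32") -> relay.Expr:
    half = relay.const(0.5, dtype)
    one = relay.const(1.0, dtype)
    inv_sqrt2 = relay.const(1.0 / math.sqrt(2.0), dtype)
    return half * x * (one + relay.erf(x * inv_sqrt2))
```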





Re: [PR] [Relax][TIR] Introduce new `cumsum` op for gpu [tvm]

2024-04-26 Thread via GitHub


Lunderberg commented on code in PR #16934:
URL: https://github.com/apache/tvm/pull/16934#discussion_r1581185569


##
python/tvm/relax/backend/dispatch_sort_scan.py:
##
@@ -154,7 +154,48 @@ def visit_call_(self, call: relax.Call) -> relax.Expr:
 if call.op.name in ("relax.cumprod", "relax.cumsum"):
 tgt = self._get_target(call.struct_info)
axis = int(call.attrs.axis) if call.attrs.axis is not None else call.attrs.axis
+shape = call.struct_info.shape
 kwargs = {}
+if (
+(axis == -1 or axis == len(shape) - 1)

Review Comment:
   For tensors of unknown shape, the `shape` field is `None`. Instead of `len(call.struct_info.shape)`, can we use `call.struct_info.ndim`? (Alternatively, since it looks like the implementation requires an explicit shape in order to apply a reshape, we could add `shape is not None` to this condition.)
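
   A small sketch of the suggested guard (the reviewer's proposal, not the merged code; variable names mirror the snippet above):

```python
# Suggested condition only -- adds a `shape is not None` guard and uses
# struct_info.ndim for the axis check; surrounding context as in the diff above.
shape = call.struct_info.shape
if (
    shape is not None
    and (axis == -1 or axis == call.struct_info.ndim - 1)
    and is_gpu_target(tgt)
    and not can_use_thrust(tgt, "tvm.contrib.thrust.sum_scan")
    and call.op.name == "relax.cumsum"
    and call.attrs.exclusive == 0
):
    ...
```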



##
tests/python/relax/test_backend_dispatch_sort_scan.py:
##
@@ -399,5 +400,32 @@ def foo(x: R.Tensor((2, 3), "float32", "vulkan")):
 assert_structural_equal(mod, expected_mod)
 
 
+@tvm.testing.requires_cuda
+def test_dispatch_cumsum_gpu():
+"""Test cumsum kernel dispatch and numerical correctness"""
+
+@I.ir_module
+class Module:
+@R.function
+def main(x: R.Tensor(("m", "n"), "int32")):
+with R.dataflow():
+gv = R.cumsum(x, axis=-1, exclusive=False)
+R.output(gv)
+return gv
+
+size = (8, 2000)
+np_data = np.random.randint(0, 10, size).astype("int32")
+np_cumsum = np.cumsum(np_data, axis=-1)
+for target in ["cuda", "vulkan -supports_int64=1"]:

Review Comment:
   Nitpick: Use `@tvm.testing.parametrize_targets("cuda", "vulkan -supports_int64=1")` instead of looping over each target. This runs each test case in a separate pytest environment:
   
   * Exercises each test in a separate pytest case, so a failure on one specific backend can be distinguished from a failure on every backend.
   * Applies the appropriate `@tvm.testing.requires_*` marks for each target. Currently, this test would fail if a developer runs it with `set(USE_CUDA ON)` and `set(USE_VULKAN OFF)`.
   
   ```python
   @tvm.testing.parametrize_targets("cuda", "vulkan -supports_int64=1")
   def test_dispatch_cumsum_gpu(target, dev):
  ...
   ```






Re: [PR] [Relax][TIR] Introduce new `cumsum` op for gpu [tvm]

2024-04-26 Thread via GitHub


Lunderberg commented on PR #16934:
URL: https://github.com/apache/tvm/pull/16934#issuecomment-2079609663

   Whoops, looks like I took too long to review.  I think the changes requested 
should probably be made in a follow-up PR.





Re: [PR] [Relax][TIR] Introduce new `cumsum` op for gpu [tvm]

2024-04-26 Thread via GitHub


tqchen merged PR #16934:
URL: https://github.com/apache/tvm/pull/16934





(tvm) branch main updated: [Relax][TIR] Introduce new `cumsum` op for gpu (#16934)

2024-04-26 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 278a6af085 [Relax][TIR] Introduce new `cumsum` op for gpu (#16934)
278a6af085 is described below

commit 278a6af085d1a149bc9ae4ff4a7ac4b33fc6b6bb
Author: Siyuan Feng 
AuthorDate: Fri Apr 26 23:15:38 2024 +0800

[Relax][TIR] Introduce new `cumsum` op for gpu (#16934)
---
 python/tvm/relax/backend/dispatch_sort_scan.py |  41 +
 python/tvm/relax/backend_tir/__init__.py   |   1 +
 python/tvm/relax/backend_tir/cumsum.py | 193 +
 .../relax/test_backend_dispatch_sort_scan.py   |  38 +++-
 4 files changed, 268 insertions(+), 5 deletions(-)

diff --git a/python/tvm/relax/backend/dispatch_sort_scan.py b/python/tvm/relax/backend/dispatch_sort_scan.py
index eb82e49d9a..870e6138d7 100644
--- a/python/tvm/relax/backend/dispatch_sort_scan.py
+++ b/python/tvm/relax/backend/dispatch_sort_scan.py
@@ -154,7 +154,48 @@ class SortScanDispatcher(PyExprMutator):
 if call.op.name in ("relax.cumprod", "relax.cumsum"):
 tgt = self._get_target(call.struct_info)
axis = int(call.attrs.axis) if call.attrs.axis is not None else call.attrs.axis
+shape = call.struct_info.shape
 kwargs = {}
+if (
+(axis == -1 or axis == len(shape) - 1)
+and is_gpu_target(tgt)
+and not can_use_thrust(tgt, "tvm.contrib.thrust.sum_scan")
+and call.op.name == "relax.cumsum"
+and call.attrs.exclusive == 0
+):
+from tvm.relax.backend_tir import (  # pylint: disable=import-outside-toplevel
+gpu_2d_continuous_cumsum,
+)
+
+dim = 1
+for i in range(len(shape) - 1):
+dim *= shape[i]
+in_dtype = call.args[0].struct_info.dtype
+out_dtype = call.attrs.dtype
+out_dtype = out_dtype or in_dtype
+cumsum_2d_shape = relax.ShapeExpr([dim, shape[-1]])
+reshape = relax.call_pure_packed(
+"vm.builtin.reshape",
+call.args[0],
+cumsum_2d_shape,
+sinfo_args=relax.TensorStructInfo(cumsum_2d_shape, out_dtype),
+)
+gv = self.builder_.add_func(
+gpu_2d_continuous_cumsum(in_dtype=in_dtype, out_dtype=out_dtype),
+"gpu_2d_continuous_cumsum",
+)
+cumsum = relax.call_tir(
+gv,
+reshape,
+out_sinfo=relax.TensorStructInfo(cumsum_2d_shape, out_dtype),
+)
+return relax.call_pure_packed(
+"vm.builtin.reshape",
+cumsum,
+shape,
+sinfo_args=call.struct_info,
+)
+
 with tgt:
 if call.op.name == "relax.cumsum":
te_func = topi.cuda.cumsum if is_gpu_target(tgt) else topi.cumsum
diff --git a/python/tvm/relax/backend_tir/__init__.py b/python/tvm/relax/backend_tir/__init__.py
index eeb8fe438f..10def47b8d 100644
--- a/python/tvm/relax/backend_tir/__init__.py
+++ b/python/tvm/relax/backend_tir/__init__.py
@@ -18,3 +18,4 @@
 
 from . import contrib
 from .pattern import get_tir_pattern
+from .cumsum import gpu_2d_continuous_cumsum
diff --git a/python/tvm/relax/backend_tir/cumsum.py b/python/tvm/relax/backend_tir/cumsum.py
new file mode 100644
index 00..ade961ecf1
--- /dev/null
+++ b/python/tvm/relax/backend_tir/cumsum.py
@@ -0,0 +1,193 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, too-many-nested-blocks
+"""Backend kernels for cumsum operator."""
+
+import math
+from typing import Optional
+
+from tvm.script import tir as T
+from tvm.tir import PrimFunc
+
+
+def _is_power_of_two(n: int):
+"""Check if n is a power of 2."""
+return n > 0 and (n & (n - 1)) == 0
+
+

Re: [PR] [CI] Use LLVM17 for tests on `ci_cpu` [tvm]

2024-04-26 Thread via GitHub


lhutton1 commented on PR #16931:
URL: https://github.com/apache/tvm/pull/16931#issuecomment-2079535939

   cc @leandron @ekalda @junrushao @yongwww





Re: [I] [VOTE] Release Apache TVM v0.16.0.rc0 [tvm]

2024-04-26 Thread via GitHub


tqchen commented on issue #16912:
URL: https://github.com/apache/tvm/issues/16912#issuecomment-2079501330

   +1. I checked
   
   - signatures
   - code compiles
   





Re: [I] [Bug] Init block not discoverable after sch.blockize [tvm]

2024-04-26 Thread via GitHub


nautasolva commented on issue #16889:
URL: https://github.com/apache/tvm/issues/16889#issuecomment-2079420712

   For my usage scenario I need to keep the `T.init()` statement, so `decompose_reduction` is not an option. Also, the fact that the `A_init` block is present in the associated module but not discoverable through the schedule accessors clearly indicates a bug, IMO.
   





Re: [PR] [target] Use native architecture for `llvm` target [tvm]

2024-04-26 Thread via GitHub


lhutton1 commented on PR #14981:
URL: https://github.com/apache/tvm/pull/14981#issuecomment-2079018660

   Closing as superseded by: https://github.com/apache/tvm/pull/16513





Re: [PR] [target] Use native architecture for `llvm` target [tvm]

2024-04-26 Thread via GitHub


lhutton1 closed pull request #14981: [target] Use native architecture for 
`llvm` target
URL: https://github.com/apache/tvm/pull/14981





(tvm) branch main updated: [SCRIPT][ADRENO] Fix in build config for adreno (#16927)

2024-04-26 Thread srk
This is an automated email from the ASF dual-hosted git repository.

srk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 5bd10472e9 [SCRIPT][ADRENO] Fix in build config for adreno (#16927)
5bd10472e9 is described below

commit 5bd10472e9a1b81a25e355824e84587a6988255c
Author: krishnaraj36 
AuthorDate: Fri Apr 26 15:06:10 2024 +0530

[SCRIPT][ADRENO] Fix in build config for adreno (#16927)

1. Enable the CXX environment setting for empty TVM subgraphs.
2. Enable CLML profiling and tuning in the RPC environment.
3. Enable OpenCL when building with CLML.
---
 tests/scripts/setup-adreno-env.sh | 3 ++-
 tests/scripts/task_build_adreno_bins.sh   | 3 +++
 tests/scripts/task_config_build_adreno.sh | 3 +--
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/tests/scripts/setup-adreno-env.sh b/tests/scripts/setup-adreno-env.sh
index 15c124a0f0..d2c776412e 100755
--- a/tests/scripts/setup-adreno-env.sh
+++ b/tests/scripts/setup-adreno-env.sh
@@ -80,6 +80,7 @@ function def_environment() {
 export RPC_DEVICE_KEY="android"
 export RPC_TARGET="adreno"
export TVM_NDK_CC="${ANDROID_NDK_HOME}/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android28-clang"
+export CXX="${ANDROID_NDK_HOME}/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android28-clang"
 }
 
 def_environment
@@ -111,7 +112,7 @@ case ${ENVIRONMENT} in
 adb forward tcp:$((LISTEN_PORT + 1)) tcp:$((LISTEN_PORT + 1))
 adb forward tcp:$((LISTEN_PORT + 2)) tcp:$((LISTEN_PORT + 2))
 adb forward tcp:$((LISTEN_PORT + 3)) tcp:$((LISTEN_PORT + 3))
-adb shell "cd ${TARGET_FOLDER}; killall -9 tvm_rpc-${USER}; sleep 2; LD_LIBRARY_PATH=${TARGET_FOLDER}/ ./tvm_rpc-${USER} server --host=0.0.0.0 --port=${LISTEN_PORT} --port-end=$((LISTEN_PORT + 10)) --tracker=127.0.0.1:${TVM_TRACKER_PORT} --key=${RPC_DEVICE_KEY}"
+adb shell "cd ${TARGET_FOLDER}; killall -9 tvm_rpc-${USER}; sleep 2; export CLML_PROFILING=1; export CLML_IS_TUNING_RUN=1; export CLML_TUNING_CACHE=clml.bin; LD_LIBRARY_PATH=${TARGET_FOLDER}/ ./tvm_rpc-${USER} server --host=0.0.0.0 --port=${LISTEN_PORT} --port-end=$((LISTEN_PORT + 10)) --tracker=127.0.0.1:${TVM_TRACKER_PORT} --key=${RPC_DEVICE_KEY}"
 ;;
 
   "query")
diff --git a/tests/scripts/task_build_adreno_bins.sh b/tests/scripts/task_build_adreno_bins.sh
index 80ac461c4e..38eefd93a6 100755
--- a/tests/scripts/task_build_adreno_bins.sh
+++ b/tests/scripts/task_build_adreno_bins.sh
@@ -31,6 +31,9 @@ cp ../cmake/config.cmake .
 if [ -f "${ADRENO_OPENCL}/CL/cl_qcom_ml_ops.h" ] ; then
 echo set\(USE_CLML "${ADRENO_OPENCL}"\) >> config.cmake
 echo set\(USE_CLML_GRAPH_EXECUTOR "${ADRENO_OPENCL}"\) >> config.cmake
+fi
+if [ -f "${ADRENO_OPENCL}/CL/cl.h" ] ; then
+echo set\(USE_OPENCL "${ADRENO_OPENCL}"\) >> config.cmake
 else
 echo set\(USE_OPENCL ON\) >> config.cmake
 fi
diff --git a/tests/scripts/task_config_build_adreno.sh b/tests/scripts/task_config_build_adreno.sh
index afe6407cba..cf8917c9a5 100755
--- a/tests/scripts/task_config_build_adreno.sh
+++ b/tests/scripts/task_config_build_adreno.sh
@@ -26,9 +26,8 @@ cp ../cmake/config.cmake .
 echo set\(USE_OPENCL_GTEST /googletest\) >> config.cmake
 if [ -f "${ADRENO_OPENCL}/CL/cl_qcom_ml_ops.h" ] ; then
 echo set\(USE_CLML ${ADRENO_OPENCL}\) >> config.cmake
-else
-echo set\(USE_OPENCL ON\) >> config.cmake
 fi
+echo set\(USE_OPENCL ON\) >> config.cmake
 echo set\(USE_RPC ON\) >> config.cmake
 echo set\(USE_GRAPH_EXECUTOR ON\) >> config.cmake
 echo set\(USE_LIBBACKTRACE AUTO\) >> config.cmake



Re: [PR] [SCRIPT][ADRENO] Fix in build config for adreno [tvm]

2024-04-26 Thread via GitHub


srkreddy1238 merged PR #16927:
URL: https://github.com/apache/tvm/pull/16927





Re: [PR] [CLML] Fix in clml pattern check condition [tvm]

2024-04-26 Thread via GitHub


krishnaraj36 commented on PR #16933:
URL: https://github.com/apache/tvm/pull/16933#issuecomment-2078764585

   @srkreddy1238: Can you please take a look at this PR?





[PR] [CLML] Fix in clml pattern check condition [tvm]

2024-04-26 Thread via GitHub


krishnaraj36 opened a new pull request, #16933:
URL: https://github.com/apache/tvm/pull/16933

   Added more check conditions to make the CLML path more robust:
   1. depth_to_space - the CLML path is supported only for mode="DCR" and NCHW layout.
   2. Default checks - CLML supports tensors with at most 4 dimensions and batch size = 1.





Re: [I] [VOTE] Release Apache TVM v0.16.0.rc0 [tvm]

2024-04-26 Thread via GitHub


ysh329 commented on issue #16912:
URL: https://github.com/apache/tvm/issues/16912#issuecomment-2078718265

   Hi all, please vote. cc @Lunderberg @Hzfengsy @vinx13 @junrushao @tqchen 

