(tvm) branch nightly updated (7e269dcfc8 -> b3fa6cb873)

2024-02-26 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 7e269dcfc8 [RUNTIME][RPC] Enable RPCObjectRef over multi-hop RPC (#16635)
 add 99e22328bf [Disco] Implement `Session.import_python_module` method (#16617)
 add 3ec0ca5b0b [Disco] Expose functions to query the per-worker device/rank (#16639)
 add b3fa6cb873 [AOT][Testing] Print output values on test failure (#16611)

No new revisions were added by this update.

Summary of changes:
 python/tvm/exec/disco_worker.py                 |  56 --
 python/tvm/runtime/__init__.py                  |   1 +
 python/tvm/runtime/disco/session.py             |  26 -
 python/tvm/testing/aot.py                       |  76 +++---
 python/tvm/testing/utils.py                     |   3 +
 src/runtime/disco/builtin.cc                    |   6 ++
 tests/python/disco/test_callback.py             | 130
 tests/python/relay/aot/test_aot_test_harness.py |  61 +++
 tests/python/relay/aot/test_crt_aot.py          |   1 +
 9 files changed, 335 insertions(+), 25 deletions(-)
 create mode 100644 tests/python/disco/test_callback.py
 create mode 100644 tests/python/relay/aot/test_aot_test_harness.py



Re: [PR] [Relax][Frontend][Onnx]fix name supply bug [tvm]

2024-02-26 Thread via GitHub


chengven027-intellif closed pull request #16644: [Relax][Frontend][Onnx]fix 
name supply bug
URL: https://github.com/apache/tvm/pull/16644


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax][Frontend][Onnx]fix name supply bug [tvm]

2024-02-26 Thread via GitHub


chengven027-intellif commented on PR #16644:
URL: https://github.com/apache/tvm/pull/16644#issuecomment-1965784035

   Sorry, it's my fault. This is correct.





Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]

2024-02-26 Thread via GitHub


Lunderberg commented on PR #16641:
URL: https://github.com/apache/tvm/pull/16641#issuecomment-1965751186

   And after poking at it, it looks like there is an ambiguity in the relax type
system.  In most cases, a zero-field `TupleStructInfo` is used to represent a
void type
([example](https://github.com/apache/tvm/blob/main/include/tvm/relax/struct_info.h#L457)).
However, in some cases, a zero-field `TupleStructInfo` is used to represent,
well, a zero-field tuple
([example](https://github.com/apache/tvm/blob/main/tests/python/relax/test_tvmscript_parser.py#L1369)).
Eliding the variable binding for an actual void type makes sense, as there are
no valid uses of a void type.  However, a zero-field tuple can be treated as an
object, and so removing its variable binding may result in undefined usage.
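   
   A minimal sketch of the ambiguity, using the plain `tvm.relax` Python API
(not code from this PR): both cases carry exactly the same struct info, so
nothing distinguishes them by inspection.
   
   ```python
   from tvm import relax
   
   # "Void" results and genuine empty tuples use the same representation.
   void_sinfo = relax.TupleStructInfo([])
   empty_tuple_sinfo = relax.TupleStructInfo([])
   assert len(void_sinfo.fields) == 0
   assert len(empty_tuple_sinfo.fields) == 0
   ```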





Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]

2024-02-26 Thread via GitHub


Lunderberg commented on PR #16641:
URL: https://github.com/apache/tvm/pull/16641#issuecomment-1965741548

   > Does this intersect with the quirky parsing for if-else? For example, if 
the value returned in an if-else is of void type. Would it be safe not to write 
out the return var? Would it still roundtrip?
   
   Good call on a test to add.  Looks like this change does cause an issue with 
round-trips when the elided binding is the last binding in an if/else block.  
I've added a currently-failing unit test for the round-trip.
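   
   A round-trip check of the kind described can be sketched as follows (an
illustrative helper, not the actual unit test):
   
   ```python
   import tvm
   from tvm.script import from_source
   
   def assert_roundtrips(mod: tvm.IRModule) -> None:
       # Print to TVMScript, re-parse, and require structural equality.
       text = mod.script()
       reparsed = from_source(text)
       tvm.ir.assert_structural_equal(mod, reparsed)
   ```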





Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]

2024-02-26 Thread via GitHub


Lunderberg commented on code in PR #16641:
URL: https://github.com/apache/tvm/pull/16641#discussion_r1503572139


##########
python/tvm/script/parser/relax/parser.py:
##########
@@ -274,7 +274,21 @@ def post_visit_local_function(self: Parser, node: doc.Expr) -> None:
 @dispatch.register(token="relax", type_name="Expr")
 def visit_expr_stmt(self: Parser, node: doc.Expr) -> None:
     value = self.eval_expr(node.value)
-    if value is not None:
+    if isinstance(value, relax.Expr):
+        var = R.emit(value)
+        IRBuilder.name("_", var)
+        is_void_value = (
+            isinstance(var.struct_info, relax.TupleStructInfo)
+            and len(var.struct_info.fields) == 0
+        )
+
+        if not is_void_value:
+            self.report_error(
+                node,
+                f"Non-void relax expressions must be bound to a variable, "
+                f"but expression of type {var.struct_info} was used as a statement.",
+            )

Review Comment:
   At the moment, because I wanted to make the minimal change that would
support common cases.  I think it would be good to remove the restriction
altogether, but for the first step, I wanted to make the restriction explicit.
   
   There are a couple of concerns I could see with allowing a non-void return
value to be implicitly ignored.
   
   * Preventing accidentally unused values.  If `cls.add1(a,b)` does an in-place
update of `a`, but `cls.add2(a,b)` returns a new value, using `cls.add2(a,b)`
without assigning the result to a variable would likely be an error.
   * Round-trip TVMScript -> Relax -> TVMScript without a pre-processing pass.
Checking whether a value has void type can be done while printing the IR (see
the sketch below).  Checking whether a non-void variable could be omitted would
require a pre-processing step to find any downstream users.
   
   I don't think either of those are definitive arguments, but I figured I'd
handle the unambiguously beneficial cases first, with a follow-up PR to relax
the restriction.
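   
   The void check referenced in the second bullet is purely local, e.g. (names
per the diff in this thread; a sketch, not the PR's code):
   
   ```python
   from tvm import relax
   
   def is_void(sinfo: relax.StructInfo) -> bool:
       # Decidable from the struct info alone; no downstream use-analysis.
       return isinstance(sinfo, relax.TupleStructInfo) and len(sinfo.fields) == 0
   ```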








[I] [Bug][QNN][ONNX-Frontend] Error reading zero_point parameter in per-channel quantization. [tvm]

2024-02-26 Thread via GitHub


MPolaris opened a new issue, #16646:
URL: https://github.com/apache/tvm/issues/16646

   In the QNN frontend, the zero_point parameter appears to be read
incorrectly. In the `_qnn_conv2d_legalize_cuda` function, data of type uint8 is
shifted, but only when zero_point is a scalar, i.e. only per-tensor
quantization is handled; in per-channel quantization, zero_point is a 1-D
array.
   I have submitted a [PR](https://github.com/apache/tvm/pull/16479) to fix
this issue. The bug can be reproduced with the following code:
   ```python
   import onnx
   import numpy as np

   input_tensor = onnx.helper.make_tensor_value_info('input', onnx.TensorProto.FLOAT, [1, 3, 224, 224])
   output_tensor = onnx.helper.make_tensor_value_info('output', onnx.TensorProto.FLOAT, [1, 3, 112, 112])
   input_q_info = onnx.helper.make_tensor_value_info('input_q', onnx.TensorProto.UINT8, [1, 3, 224, 224])
   conv_q_info = onnx.helper.make_tensor_value_info('conv_q', onnx.TensorProto.UINT8, [1, 3, 112, 112])

   q1_scale = onnx.helper.make_tensor('q1_scale', onnx.TensorProto.FLOAT, [], [1])
   q1_zero_point = onnx.helper.make_tensor('q1_zero_point', onnx.TensorProto.UINT8, [], [0])
   q2_scale = onnx.helper.make_tensor('q2_scale', onnx.TensorProto.FLOAT, [], [1])
   q2_zero_point = onnx.helper.make_tensor('q2_zero_point', onnx.TensorProto.UINT8, [], [0])
   weight = onnx.helper.make_tensor('weight', onnx.TensorProto.UINT8, [3, 3, 3, 3],
                                    np.random.randint(0, 255, (3, 3, 3, 3)).astype(np.uint8))
   bias = onnx.helper.make_tensor('bias', onnx.TensorProto.INT32, [3], np.random.randn(3).astype(np.int32))
   w_scale = onnx.helper.make_tensor('w_scale', onnx.TensorProto.FLOAT, [3], [1, 2, 3])
   w_zero_point = onnx.helper.make_tensor('w_zero_point', onnx.TensorProto.UINT8, [3], [1, 2, 3])

   input_q = onnx.helper.make_node('QuantizeLinear', ['input', 'q1_scale', 'q1_zero_point'],
                                   ['input_q'], name='input_quantize')
   attrs = {
       "dilations": [1, 1],
       "group": 1,
       "kernel_shape": [3, 3],
       "pads": [1, 1, 1, 1],
       "strides": [2, 2],
   }
   conv = onnx.helper.make_node('QLinearConv',
                                ['input_q', 'q1_scale', 'q1_zero_point',
                                 'weight', 'w_scale', 'w_zero_point',
                                 'q2_scale', 'q2_zero_point', 'bias'],
                                ['conv_q'], name='conv', **attrs)
   output = onnx.helper.make_node('DequantizeLinear', ['conv_q', 'q2_scale', 'q2_zero_point'],
                                  ['output'], name='output_dequantize')

   graph = onnx.helper.make_graph(
       [input_q, conv, output],
       'quantized_graph',
       [input_tensor],
       [output_tensor],
       initializer=[q1_scale, q1_zero_point, q2_scale, q2_zero_point, weight, bias, w_scale, w_zero_point],
       value_info=[input_q_info, conv_q_info],
   )

   model = onnx.helper.make_model(
       graph,
       opset_imports=[onnx.helper.make_opsetid("com.microsoft", 1), onnx.helper.make_opsetid("", 11)],
   )

   model_name = "./quantized.onnx"
   onnx.save_model(model, model_name)

   import tvm
   from tvm import relay

   onnx_model = onnx.load("./quantized.onnx")
   mod, params = relay.frontend.from_onnx(onnx_model)
   target = "cuda"
   with tvm.transform.PassContext(opt_level=3):
       executor = relay.build_module.create_executor(
           "graph", mod, tvm.cuda(0), target, params
       ).evaluate()
   ```




Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]

2024-02-26 Thread via GitHub


Lunderberg commented on PR #16588:
URL: https://github.com/apache/tvm/pull/16588#issuecomment-1965686995

   This PR is now updated to perform the checks for `(A+B)*C < (A*B)*D`
patterns in `RewriteSimplifier`, gated behind the
`Extension::kComparisonOfProductAndSum` extension flag.  This flag is currently
disabled by default, with unit tests explicitly enabling the extension.
   
   The updated behavior is sufficient to unblock 
https://github.com/apache/tvm/pull/16589.





Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]

2024-02-26 Thread via GitHub


slyubomirsky commented on PR #16569:
URL: https://github.com/apache/tvm/pull/16569#issuecomment-1965666918

   No clue why
`tests/python/tir-transform/test_tir_transform_force_narrow_index_to_i32.py::test_thread_axis2`
is failing. There is no well-formedness error there, but I get a complaint
about dtypes not matching (for the loop iterator `i0_i1_i2_i3_fused_2`). Not
sure why it wouldn't have failed before.





Re: [PR] [TVMScript] Allow use of relax.Expr with void type as a statement [tvm]

2024-02-26 Thread via GitHub


slyubomirsky commented on code in PR #16641:
URL: https://github.com/apache/tvm/pull/16641#discussion_r1503451237


##########
python/tvm/script/parser/relax/parser.py:
##########
@@ -274,7 +274,21 @@ def post_visit_local_function(self: Parser, node: doc.Expr) -> None:
 @dispatch.register(token="relax", type_name="Expr")
 def visit_expr_stmt(self: Parser, node: doc.Expr) -> None:
     value = self.eval_expr(node.value)
-    if value is not None:
+    if isinstance(value, relax.Expr):
+        var = R.emit(value)
+        IRBuilder.name("_", var)
+        is_void_value = (
+            isinstance(var.struct_info, relax.TupleStructInfo)
+            and len(var.struct_info.fields) == 0
+        )
+
+        if not is_void_value:
+            self.report_error(
+                node,
+                f"Non-void relax expressions must be bound to a variable, "
+                f"but expression of type {var.struct_info} was used as a statement.",
+            )

Review Comment:
   I wonder if we should even have this as a rule. Why not let users evaluate 
expressions without binding them regardless of their return type?






Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]

2024-02-26 Thread via GitHub


slyubomirsky commented on code in PR #16642:
URL: https://github.com/apache/tvm/pull/16642#discussion_r1503443020


##########
src/relax/transform/compute_prim_value.cc:
##########
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relax {
+
+namespace {
+
+class PrimValueComputeInjector : public ExprMutator {
+ public:
+  IRModule Finalize() const { return builder_->Finalize(); }
+
+  using ExprMutator::VisitExpr_;
+
+  Expr VisitExpr_(const PrimValueNode* op) override {
+    auto node = Downcast<PrimValue>(ExprMutator::VisitExpr_(op));
+
+    if (node->value->IsInstance<IntImmNode>() || node->value->IsInstance<FloatImmNode>()) {
+      return node;
+    }
+
+    auto ret_dtype = node->value->dtype;
+    auto param_vars = tir::UndefinedVars(node->value);

Review Comment:
   Would this call know which TIR vars are in scope per the Relax scoping rules?






Re: [PR] [Relax] Allow R.Prim('bool') in relax::If and assert_op [tvm]

2024-02-26 Thread via GitHub


slyubomirsky commented on code in PR #16642:
URL: https://github.com/apache/tvm/pull/16642#discussion_r1503442355


##########
python/tvm/relax/transform/transform.py:
##########
@@ -463,6 +463,16 @@ def KillAfterLastUse() -> tvm.ir.transform.Pass:
     return _ffi_api.KillAfterLastUse()  # type: ignore
 
 
+def ComputePrimValue() -> tvm.ir.transform.Pass:
+    """Compute all R.prim_value instances

Review Comment:
   I think this description should be more precise. I assume it's supposed to 
come late in the phase ordering since it inserts direct calls to PrimFuncs? 
(And so should probably come after we end purity checking?)






Re: [PR] [Relax][Frontend][Onnx]fix name supply bug [tvm]

2024-02-26 Thread via GitHub


yongwww commented on code in PR #16644:
URL: https://github.com/apache/tvm/pull/16644#discussion_r1503424843


##########
python/tvm/relax/frontend/onnx/onnx_frontend.py:
##########
@@ -2059,12 +2059,10 @@ def _sanitize_name(self, name: str) -> str:
         if name == "":
             return self._name_supply.fresh_name("empty_")
 
-        new_name = name.replace(".", "_")

Review Comment:
   it would be good to have a test case to cover this change. If this change is 
going to align with the description, one option is to refine the description. 
It would be good to avoid modifications to src/ir/supply.cc, given that it 
serves as the foundation for numerous cases.






(tvm) branch main updated: [AOT][Testing] Print output values on test failure (#16611)

2024-02-26 Thread ekalda
This is an automated email from the ASF dual-hosted git repository.

ekalda pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new b3fa6cb873 [AOT][Testing] Print output values on test failure (#16611)
b3fa6cb873 is described below

commit b3fa6cb873c71bfc15054bc9abbcc111c8413c9b
Author: Luke Hutton 
AuthorDate: Mon Feb 26 15:58:16 2024 +

[AOT][Testing] Print output values on test failure (#16611)

This commit enhances the AOT test harness to print the "actual" and
"reference" values when there is a mismatch. This helps when
debugging a failing test. Sample output:
```
Actual, Reference
8.502946, 8.887751
9.810405, 9.108611
8.563767, 9.041000
10.019511, 9.190888

```
---
 python/tvm/testing/aot.py                       | 76 -
 tests/python/relay/aot/test_aot_test_harness.py | 61
 tests/python/relay/aot/test_crt_aot.py          |  1 +
 3 files changed, 123 insertions(+), 15 deletions(-)

diff --git a/python/tvm/testing/aot.py b/python/tvm/testing/aot.py
index 9ee3a84c8a..8d74f545a3 100644
--- a/python/tvm/testing/aot.py
+++ b/python/tvm/testing/aot.py
@@ -425,7 +425,14 @@ def _emit_main_packed_call(main_file, input_map, output_list, mod_name):
     main_file.write("\n")
 
 
-def _emit_main_compare(main_file, outputs, output_tolerance, mod_name, use_interface_c=False):
+def _emit_main_compare(
+    main_file,
+    outputs,
+    output_tolerance,
+    mod_name,
+    use_interface_c=False,
+    print_output_on_mismatch=False,
+):
     for key in outputs:
         sanitized_tensor_name = re.sub(r"\W", "_", key)
         expected_data_name = _mangle_name(mod_name, f"expected_output_data_{sanitized_tensor_name}")
@@ -433,9 +440,11 @@ def _emit_main_compare(main_file, outputs, output_tolerance, mod_name, use_inter
 
         comparison_function = "abs"
         tolerance = output_tolerance or 0
+        value_format_specifier = "%d"
         if is_float_dtype:
             comparison_function = "fabs"
             tolerance = output_tolerance or 0.001
+            value_format_specifier = "%f"
 
         data_length_var_name = (
             _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}") + "_len"
@@ -447,15 +456,34 @@ def _emit_main_compare(main_file, outputs, output_tolerance, mod_name, use_inter
             )
         else:
             actual_data_name = _mangle_name(mod_name, f"output_data_{sanitized_tensor_name}")
-        main_file.write(
-            f"for (int i = 0; i<{data_length_var_name}; i++) {{\n"
-            f"\tif ({comparison_function}({actual_data_name}[i]-"
-            f"{expected_data_name}[i]) > {tolerance}) {{\n"
-            f'\t\tprintf("{AOT_FAILURE_TOKEN}\\n");\n'
-            f"\t\treturn -1;\n"
-            f"\t}}\n"
-            f"}}"
-        )
+
+        if print_output_on_mismatch:
+            main_file.write(
+                f"int mismatch = 0;"
+                f'printf("Actual, Reference\\n");\n'
+                f"for (int i = 0; i<{data_length_var_name}; i++) {{\n"
+                f"\tif ({comparison_function}({actual_data_name}[i]-"
+                f"{expected_data_name}[i]) > {tolerance}) {{\n"
+                f'\t\tprintf("{value_format_specifier}, {value_format_specifier}\\n"'
+                f", {actual_data_name}[i], {expected_data_name}[i]);\n"
+                f"\t\tmismatch = 1;\n"
+                f"\t}}\n"
+                f"}}"
+                f"if (mismatch == 1) {{\n"
+                f'\tprintf("{AOT_FAILURE_TOKEN}\\n");\n'
+                f"\treturn -1;\n"
+                f"}}"
+            )
+        else:
+            main_file.write(
+                f"for (int i = 0; i<{data_length_var_name}; i++) {{\n"
+                f"\tif ({comparison_function}({actual_data_name}[i]-"
+                f"{expected_data_name}[i]) > {tolerance}) {{\n"
+                f'\t\tprintf("{AOT_FAILURE_TOKEN}\\n");\n'
+                f"\t\treturn -1;\n"
+                f"\t}}\n"
+                f"}}"
+            )
 
 
 def _emit_main_init_memory_manager(main_file):
@@ -500,6 +528,7 @@ def _create_main(
     use_stack_allocator=True,
     use_workspace_io=False,
     debug_last_error=False,
+    print_output_on_mismatch=False,
 ):
     file_path = pathlib.Path(f"{output_path}/" + test_name).resolve()
     # create header file
@@ -568,7 +597,12 @@ def _create_main(
     for compiled_model in compiled_models:
         model = compiled_model.model
         _emit_main_compare(
-            main_file, model.outputs, model.output_tolerance, model.name, interface_api == "c"
+            main_file,
+            model.outputs,
+            model.output_tolerance,
+            model.name,
+            interface_api == "c",
+            print_output_on_mismatch,
        )
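
A hedged usage sketch of the new option from a test, assuming the
`compile_and_run` entry point in `tvm.testing.aot` forwards it (as the test
changes in this commit suggest); `model` stands for an `AOTTestModel` built
elsewhere in the test:

```python
from tvm.testing.aot import AOT_DEFAULT_RUNNER, compile_and_run

# On a mismatch the generated runner now prints an "Actual, Reference"
# table before emitting the failure token.
compile_and_run(
    model,
    runner=AOT_DEFAULT_RUNNER,
    interface_api="c",
    use_unpacked_api=True,
    print_output_on_mismatch=True,
)
```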
 

Re: [PR] [AOT][Testing] Print output values on test failure [tvm]

2024-02-26 Thread via GitHub


ekalda merged PR #16611:
URL: https://github.com/apache/tvm/pull/16611





Re: [PR] [SVE] Add support for scalable data type strings [tvm]

2024-02-26 Thread via GitHub


ekalda commented on code in PR #16612:
URL: https://github.com/apache/tvm/pull/16612#discussion_r1502820316


##########
include/tvm/runtime/data_type.h:
##########
@@ -110,7 +111,7 @@ class DataType {
     return -lanes_as_int;
   }
   /*! \return whether type is a scalar type. */
-  bool is_scalar() const { return lanes() == 1; }
+  bool is_scalar() const { return !is_scalable_vector() && lanes() == 1; }

Review Comment:
   Ok yes, thinking about it, this probably is the most solid way to do it,
even though it reads a bit awkwardly.






[I] [Docs][Datatypes] Minimal example for supporting custom datatypes [tvm]

2024-02-26 Thread via GitHub


RaulMurillo opened a new issue, #16645:
URL: https://github.com/apache/tvm/issues/16645

   The documentation about [Bring Your Own
Datatypes](https://tvm.apache.org/2020/09/26/bring-your-own-datatypes) is
partially incomplete, and the link to further documentation at the end of the
page is currently broken.
   
   May I suggest providing a minimal working example with the latest version of
TVM, including, for example, the mentioned vector addition using the posit
datatype? Not just the few required instructions, but the whole working code,
as in other documentation pages.
   





[PR] [Relax][Frontend][Onnx]fix name supply bug [tvm]

2024-02-26 Thread via GitHub


chengven027-intellif opened a new pull request, #16644:
URL: https://github.com/apache/tvm/pull/16644

   Hi, tvm:
   From the description of the function `_sanitize_name`:
   ```
   If the name is None, returns a string input_0, input_1, etc.
   If the input is an empty string, returns empty_0, empty_1, etc.
   If the input is a string that does not start with a letter or underscore,
   returns input_. Otherwise, returns an unique input name.
   ```
   The input to the current model is "input.1", which this function converts
into "input_1".
   This conversion is wrong and does not conform to the three situations
described above.
   So I reworked the relevant conversion conditions according to the
description above. I am not sure whether it will affect other cases. @jwfromm
@gigiblender @tqchen
  
   





(tvm) branch main updated: [Disco] Expose functions to query the per-worker device/rank (#16639)

2024-02-26 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 3ec0ca5b0b [Disco] Expose functions to query the per-worker 
device/rank (#16639)
3ec0ca5b0b is described below

commit 3ec0ca5b0b3941d9314cfada23dac3101cc163f7
Author: Eric Lunderberg 
AuthorDate: Mon Feb 26 04:06:15 2024 -0600

[Disco] Expose functions to query the per-worker device/rank (#16639)

In addition to the PackedFunc `"runtime.disco.worker_id"`, which
returns the worker ID wrapped in a `ShapeTuple`, this commit adds
`"runtime.disco.worker_rank"`, which returns the worker ID without
wrapping, and `"runtime.disco.device"`, which returns the device for
each worker.

The unit test added in this commit simulates loading of model weights
through a parameter transformation function.
---
 python/tvm/exec/disco_worker.py     |  56 +---
 python/tvm/runtime/disco/session.py |   2 +-
 python/tvm/testing/utils.py         |   3 +
 src/runtime/disco/builtin.cc        |   6 ++
 tests/python/disco/test_callback.py | 130
 5 files changed, 188 insertions(+), 9 deletions(-)

diff --git a/python/tvm/exec/disco_worker.py b/python/tvm/exec/disco_worker.py
index b5eea6328d..76ce0ff993 100644
--- a/python/tvm/exec/disco_worker.py
+++ b/python/tvm/exec/disco_worker.py
@@ -19,44 +19,84 @@
 import os
 import sys
 
-from tvm import runtime as _  # pylint: disable=unused-import
+from typing import Callable
+
+import tvm
 from tvm._ffi import get_global_func, register_func
 from tvm.runtime import NDArray, ShapeTuple, String
 from tvm.runtime.ndarray import array
 
 
-@register_func("tests.disco.add_one")
-def _add_one(x: int) -> int:  # pylint: disable=invalid-name
+@register_func("tests.disco.add_one", override=True)
+def _add_one(x: int) -> int:
     return x + 1
 
 
 @register_func("tests.disco.add_one_float", override=True)
-def _add_one_float(x: float):  # pylint: disable=invalid-name
+def _add_one_float(x: float):
     return x + 0.5
 
 
 @register_func("tests.disco.add_one_ndarray", override=True)
-def _add_one_ndarray(x: NDArray) -> NDArray:  # pylint: disable=invalid-name
+def _add_one_ndarray(x: NDArray) -> NDArray:
     return array(x.numpy() + 1)
 
 
 @register_func("tests.disco.str", override=True)
-def _str_func(x: str):  # pylint: disable=invalid-name
+def _str_func(x: str):
     return x + "_suffix"
 
 
 @register_func("tests.disco.str_obj", override=True)
-def _str_obj_func(x: String):  # pylint: disable=invalid-name
+def _str_obj_func(x: String):
     assert isinstance(x, String)
     return String(x + "_suffix")
 
 
 @register_func("tests.disco.shape_tuple", override=True)
-def _shape_tuple_func(x: ShapeTuple):  # pylint: disable=invalid-name
+def _shape_tuple_func(x: ShapeTuple):
     assert isinstance(x, ShapeTuple)
     return ShapeTuple(list(x) + [4, 5])
 
 
+@register_func("tests.disco.test_callback", override=True)
+def _make_callback(device: tvm.runtime.Device) -> Callable[[str, int], NDArray]:
+    """For use in tests/python/disco/test_callback.py
+
+    This function simulates a callback to be used for lazy parameter
+    loading.
+
+    Parameters
+    ----------
+    device: tvm.runtime.Device
+
+        The device on which parameters should be located, when
+        returned by the callback function.
+
+    Returns
+    -------
+    fget_item: Callable[[str,int], NDArray]
+
+        A callback function that accepts a parameter's name and index,
+        and returns the specified parameter.
+
+    """
+    import numpy as np  # pylint: disable=import-outside-toplevel
+
+    def fget_item(param_name: str, param_index: int) -> NDArray:
+        if param_index == 0:
+            assert param_name == "A"
+            arr = np.arange(16).reshape([4, 4]).astype("int32")
+        elif param_index == 1:
+            assert param_name == "B"
+            arr = np.arange(4).reshape([2, 2]).astype("float32")
+        else:
+            raise ValueError(f"Unexpected index {param_index}")
+        return tvm.nd.array(arr, device=device)
+
+    return fget_item
+
+
 def main():
     """Main worker function"""
     if len(sys.argv) != 5:
diff --git a/python/tvm/runtime/disco/session.py b/python/tvm/runtime/disco/session.py
index c54f646e17..1013d14a89 100644
--- a/python/tvm/runtime/disco/session.py
+++ b/python/tvm/runtime/disco/session.py
@@ -377,7 +377,7 @@ class ThreadedSession(Session):
 class ProcessSession(Session):
     """A Disco session backed by pipe-based multi-processing."""
 
-    def __init__(self, num_workers: int, entrypoint: str) -> None:
+    def __init__(self, num_workers: int, entrypoint: str = "tvm.exec.disco_worker") -> None:
         self.__init_handle_by_constructor__(
             _ffi_api.SessionProcess,  # type: ignore # pylint: disable=no-member
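
A hedged sketch of reaching the new builtins from a Disco session (the
PackedFuncs are registered in `src/runtime/disco/builtin.cc` and execute on
each worker):

```python
from tvm.runtime.disco import ProcessSession

sess = ProcessSession(num_workers=2)
# Each worker resolves and runs the builtin locally, so each reports its
# own rank.
f_rank = sess.get_global_func("runtime.disco.worker_rank")
rank_ref = sess.call_packed(f_rank)
print(rank_ref.debug_get_from_remote(0))  # rank of worker 0
```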

Re: [PR] [Disco] Expose functions to query the per-worker device/rank [tvm]

2024-02-26 Thread via GitHub


masahi merged PR #16639:
URL: https://github.com/apache/tvm/pull/16639





(tvm) branch main updated: [Disco] Implement `Session.import_python_module` method (#16617)

2024-02-26 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 99e22328bf [Disco] Implement `Session.import_python_module` method 
(#16617)
99e22328bf is described below

commit 99e22328bf5c33d3c7f350ec41cb5aac9cfc69c4
Author: Eric Lunderberg 
AuthorDate: Mon Feb 26 04:05:33 2024 -0600

[Disco] Implement `Session.import_python_module` method (#16617)

Import a module into the workers.  If a python module has not yet been
loaded, `Session.get_global_func` cannot load a packed func from it.
---
 python/tvm/runtime/__init__.py  |  1 +
 python/tvm/runtime/disco/session.py | 24 +++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/python/tvm/runtime/__init__.py b/python/tvm/runtime/__init__.py
index eccdcbad95..3a68c567ee 100644
--- a/python/tvm/runtime/__init__.py
+++ b/python/tvm/runtime/__init__.py
@@ -40,3 +40,4 @@ from .params import (
 )
 
 from . import executor
+from . import disco
diff --git a/python/tvm/runtime/disco/session.py b/python/tvm/runtime/disco/session.py
index b166bd82e9..c54f646e17 100644
--- a/python/tvm/runtime/disco/session.py
+++ b/python/tvm/runtime/disco/session.py
@@ -21,7 +21,7 @@ from typing import Any, Callable, Optional, Sequence, Union
 
 import numpy as np
 
-from ..._ffi import register_object
+from ..._ffi import register_object, register_func
 from ..._ffi.runtime_ctypes import Device
 from ..container import ShapeTuple
 from ..ndarray import NDArray
@@ -153,6 +153,23 @@ class Session(Object):
         """
         return DPackedFunc(_ffi_api.SessionGetGlobalFunc(self, name), self)  # type: ignore # pylint: disable=no-member
 
+    def import_python_module(self, module_name: str) -> None:
+        """Import a python module in each worker
+
+        This may be required before calling `Session.get_global_func`
+        for functions defined in that module.
+
+        Parameters
+        ----------
+        module_name: str
+
+            The python module name, as it would be used in a python
+            `import` statement.
+        """
+        if not hasattr(self, "_import_python_module"):
+            self._import_python_module = self.get_global_func("runtime.disco._import_python_module")
+
+        self._import_python_module(module_name)
+
     def call_packed(self, func: DRef, *args) -> DRef:
         """Call a PackedFunc on workers providing variadic arguments.
 
@@ -369,6 +386,11 @@ class ProcessSession(Session):
     )
 
 
+@register_func("runtime.disco._import_python_module")
+def _import_python_module(module_name: str) -> None:
+    __import__(module_name)
+
+
 REDUCE_OPS = {
     "sum": 0,
     "prod": 1,



Re: [PR] [Disco] Implement `Session.import_python_module` method [tvm]

2024-02-26 Thread via GitHub


masahi merged PR #16617:
URL: https://github.com/apache/tvm/pull/16617





Re: [I] [Bug] Unable to build TVM with LLVM 12.0.0 [tvm]

2024-02-26 Thread via GitHub


HLearning commented on issue #16013:
URL: https://github.com/apache/tvm/issues/16013#issuecomment-1963714094

   #if TVM_LLVM_VERSION >= 121  
   





Re: [PR] [SVE] Add support for scalable data type strings [tvm]

2024-02-26 Thread via GitHub


lhutton1 commented on code in PR #16612:
URL: https://github.com/apache/tvm/pull/16612#discussion_r1502309587


##########
include/tvm/runtime/data_type.h:
##########
@@ -110,7 +111,7 @@ class DataType {
     return -lanes_as_int;
   }
   /*! \return whether type is a scalar type. */
-  bool is_scalar() const { return lanes() == 1; }
+  bool is_scalar() const { return !is_scalable_vector() && lanes() == 1; }

Review Comment:
   Due to placing the call to `is_scalable_vector()` before `lanes()`,
`is_scalar()` should work with scalable vectors in its current form; I'll add
some tests for this. I don't think we can simply `return
!is_scalable_or_fixed_length_vector()`, as it wouldn't account for the case
when lanes == 0.
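   
   A small illustration of the invariant under discussion, for fixed-length
types (a hedged sketch; scalable dtypes use the new string syntax from this
PR):
   
   ```python
   import tvm
   
   scalar = tvm.DataType("float32")
   vector = tvm.DataType("float32x4")
   # is_scalar() must short-circuit on scalable vectors before consulting
   # lanes(); for fixed-length types lanes is well-defined:
   assert scalar.lanes == 1
   assert vector.lanes == 4
   ```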






Re: [PR] [AOT][Testing] Print output values on test failure [tvm]

2024-02-26 Thread via GitHub


lhutton1 commented on code in PR #16611:
URL: https://github.com/apache/tvm/pull/16611#discussion_r1502284692


##########
tests/python/relay/aot/test_crt_aot.py:
##########
@@ -93,6 +93,7 @@ def test_conv_with_params(interface_api, use_unpacked_api, test_runner):
         test_runner,
         interface_api,
         use_unpacked_api,
+        print_output_on_mismatch=True,

Review Comment:
   I added this just to exercise the case when the option is used, but there is 
no output mismatch


