[GitHub] [tvm] areusch commented on issue #13586: [Release] v0.11.0 release schedule

2023-02-02 Thread via GitHub


areusch commented on issue #13586:
URL: https://github.com/apache/tvm/issues/13586#issuecomment-1415218832

   I have 
[asked](https://the-asf.slack.com/archives/CBX4TSBQ8/p1675409181932449) 
asf-infra whether perhaps gitbox has misconfigured our branch protection. I'm 
not able to view that natively in the GitHub UI, I believe, so I'm left to 
deduce what I can from the asf.yaml and associated docs.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] echuraev commented on a diff in pull request #13867: [DOCS][ADRENO] Improved Adreno documentation

2023-02-02 Thread via GitHub


echuraev commented on code in PR #13867:
URL: https://github.com/apache/tvm/pull/13867#discussion_r1095428805


##
docs/how_to/deploy/adreno.rst:
##
@@ -15,41 +15,60 @@
 specific language governing permissions and limitations
 under the License.
 
-Deploy to Adreno GPU
-====================
+Deploy to Adreno™ GPU
+=====================
 
-**Authors**: Daniil Barinov, Egor Churaev, Andrey Malyshev
+**Authors**: Daniil Barinov, Egor Churaev, Andrey Malyshev, Siva Rama Krishna
 
 Introduction
 
 
-Adreno is a series of graphics processing unit (GPU) semiconductor
+Adreno™ is a series of graphics processing unit (GPU) semiconductor
 intellectual property cores developed by Qualcomm and used in many of
 their SoCs.
 
-The Adreno GPU accelerates the rendering of complex geometries to
+The Adreno™ GPU accelerates the rendering of complex geometries to
 deliver high-performance graphics and a rich user experience with low
 power consumption.
 
-This guide will demonstrate :ref:`the benefits of using textures with Adreno`,
-how to :ref:`build TVM with OpenCL` (needed by Adreno devices) and TVM RPC
-enabled. It will also provide :ref:`example code` to better understand the differences in compiling and deploying models
-for Adreno devices.
+TVM supports deep learning acceleration on Adreno™ GPU by native OpenCL backend of TVM and
+also through OpenCLML backend. Native OpenCL backend of TVM is enhanced to make it
+Adreno™ friendly by incorporating texture memory usage and Adreno™ friendly layouts.
+OpenCLML is an SDK release by Qualcomm that provides kernel acceleration library
+for most of the deep learning operators.
 
-.. _advantages_of_the_textures:
+This guide is organized to demonstrate various design aspects of
 
-Advantages of the Textures
---
+- :ref:`OpenCL Backend Enhancements`
+- :ref:`About OpenCLML`
+- :ref:`Build and Deploy`
 
-One of the Adreno's advantages is the clever handling of textures. At
+
+
+.. how to :ref:`build TVM with OpenCL` (needed by Adreno™ devices) and TVM RPC
+.. enabled. It will also provide :ref:`example code` to better understand the differences in compiling and deploying models
+.. for Adreno™ devices.

Review Comment:
   If you are speaking about how the references look, then yes. But as far as I 
understand, it is just a problem of the GitHub representation. I took a look 
into other rst doc files and all references look the same way there. 
Originally, my comment was about the text:
   ```
   .. how to :ref:`build TVM with OpenCL` (needed by Adreno™ devices) and TVM RPC
   .. enabled. It will also provide :ref:`example code` to better understand the differences in compiling and deploying models
   .. for Adreno™ devices.
   ```
   But for now the document has been significantly reworked, and this comment is 
no longer relevant.






[GitHub] [tvm] echuraev commented on a diff in pull request #13867: [DOCS][ADRENO] Improved Adreno documentation

2023-02-02 Thread via GitHub


echuraev commented on code in PR #13867:
URL: https://github.com/apache/tvm/pull/13867#discussion_r1095394301


##
gallery/how_to/deploy_models/deploy_model_on_adreno.py:
##
@@ -146,85 +207,24 @@
 img = np.expand_dims(img, 0)
 
 #
-# Load pretrained Pytorch model
-# -
-# Create a Relay graph from a Pytorch ResNet-18 model
-import os
-import torch
-import torchvision
-import tvm
-from tvm import te
-from tvm import relay, rpc
-from tvm.contrib import utils, ndk
-from tvm.contrib import graph_executor
-
-model_name = "resnet18"
-model = getattr(torchvision.models, model_name)(pretrained=True)
-model = model.eval()
-
-# We grab the TorchScripted model via tracing
-input_shape = [1, 3, 224, 224]
-input_data = torch.randn(input_shape)
-scripted_model = torch.jit.trace(model, input_data).eval()
-
+# Convert PyTorch model to Relay module
+# -
+# TVM has frontend APIs for various frameworks under relay.frontend, and for
+# PyTorch model import we have the relay.frontend.from_pytorch API.
 # Input name can be arbitrary
 input_name = "input0"
 shape_list = [(input_name, img.shape)]
+
 mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
 
 #
 # Precisions
 # --
-# Since TVM support Mixed Precision, we need to register mixed_precision_conversion:
-from tvm.relay.op import register_mixed_precision_conversion
-
-conv2d_acc = "float32"
-
-
-@register_mixed_precision_conversion("nn.conv2d", level=11)
-def conv2d_mixed_precision_rule(call_node: "relay.Call", mixed_precision_type: str):
-    global conv2d_acc
-    return [
-        relay.transform.mixed_precision.MIXED_PRECISION_ALWAYS,
-        conv2d_acc,
-        mixed_precision_type,
-    ]
-
-
-@register_mixed_precision_conversion("nn.dense", level=11)
-def conv2d_mixed_precision_rule(call_node: "relay.Call", mixed_precision_type: str):
-    global conv2d_acc
-    return [
-        relay.transform.mixed_precision.MIXED_PRECISION_ALWAYS,
-        conv2d_acc,
-        mixed_precision_type,
-    ]
+from tvm.relay.op.contrib import adreno
 
+adreno.convert_to_dtype(mod["main"], dtype)

Review Comment:
   Could you please add an explanatory comment about this function before this 
call?
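As background for the `relay.frontend` API mentioned in the diff above, here is a minimal, self-contained sketch of the frontend-registry idea it describes: one importer function per framework name. All names here (`FRONTENDS`, `register_frontend`, the importer body) are hypothetical illustrations, not TVM's actual internals.

```python
# Hypothetical sketch of a per-framework frontend registry, in the spirit of
# relay.frontend.from_pytorch; this is NOT TVM code.
FRONTENDS = {}


def register_frontend(name):
    """Register an importer function under a framework name."""
    def deco(fn):
        FRONTENDS[name] = fn
        return fn
    return deco


@register_frontend("pytorch")
def from_pytorch(model, shape_list):
    # A real importer would translate the traced graph into a Relay module;
    # here we just record the declared input shapes.
    return {"inputs": dict(shape_list)}


# Input name can be arbitrary, as the tutorial notes.
mod = FRONTENDS["pytorch"]("dummy-model", [("input0", (1, 3, 224, 224))])
```

The real `relay.frontend.from_pytorch(scripted_model, shape_list)` call in the diff follows the same shape: a TorchScript-traced model plus a list of `(input_name, shape)` pairs.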



##
gallery/how_to/deploy_models/deploy_model_on_adreno_tvmc.py:
##
@@ -0,0 +1,184 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+.. _tutorial-deploy-model-on-adreno-tvmc:
+
+Deploy the Pretrained Model on Adreno™ with tvmc Interface
+==========================================================
+**Author**: Siva Rama Krishna
+
+This article is a step-by-step tutorial to deploy a pretrained Keras ResNet50 model on Adreno™.
+
+Besides that, you should have TVM built for Android.
+See the following instructions on how to build it and set up the RPC environment.
+
+`Deploy to Adreno GPU `_
+
+"""
+
+import os
+import tvm
+import numpy as np
+from tvm import relay
+from tvm.driver import tvmc
+from tvm.driver.tvmc.model import TVMCPackage
+from tvm.contrib import utils
+
+#
+# Configuration
+# -
+# Specify the Adreno target before compiling to generate texture-
+# leveraging kernels and get all the benefits of textures.
+# Note: This generated example runs on our x86 server for demonstration.
+# If running it on an Android device, we need to
+# specify its instruction set. Set :code:`local_demo` to False if you want
+# to run this tutorial with a real device over RPC.
+local_demo = True
+
+# By default the CPU target will execute.
+# Select among 'llvm', 'opencl' and 'opencl -device=adreno'.
+target = "llvm"
+
+# Change target configuration.
+# Run `adb shell cat /proc/cpuinfo` to find the arch.
+arch = "arm64"
+target_host = "llvm -mtriple=%s-linux-android" % arch
+
+# Auto tuning is a compute- and time-intensive task, hence it is disabled for the default 
run. Please enable it if required.
+is_tuning = False
+tune_log = "adreno-resnet50.log"
+
+# To enable OpenCLML accelerated operator library.
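The target-selection logic described in the quoted configuration can be sketched in plain Python. The variable names and target strings follow the tutorial snippet above; treat this as an illustrative sketch, not the tutorial's exact code.

```python
# Sketch of the tutorial's target selection: run locally on the host by
# default, or cross-compile for an Android device over RPC.
local_demo = True          # set False to target a real device over RPC
arch = "arm64"             # from `adb shell cat /proc/cpuinfo`

if local_demo:
    target = "llvm"                      # plain CPU execution on the host
    target_host = None
else:
    # Adreno-specific OpenCL target generates texture-leveraging kernels.
    target = "opencl -device=adreno"
    target_host = "llvm -mtriple=%s-linux-android" % arch
```

With `local_demo = True` the model compiles for and runs on the x86 host; flipping it to `False` switches both the device target and the host triple for Android cross-compilation.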

[GitHub] [tvm] masahi closed issue #13908: [Bug]

2023-02-02 Thread via GitHub


masahi closed issue #13908: [Bug]
URL: https://github.com/apache/tvm/issues/13908





[tvm] branch main updated: [CLML][CODEGEN] CLML native codegen utility (#13837)

2023-02-02 Thread echuraev
This is an automated email from the ASF dual-hosted git repository.

echuraev pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new d35a8ab135 [CLML][CODEGEN] CLML native codegen utility (#13837)
d35a8ab135 is described below

commit d35a8ab1353afc40317396b2ddfda8f35a99ba8a
Author: Siva 
AuthorDate: Fri Feb 3 11:35:55 2023 +0530

[CLML][CODEGEN] CLML native codegen utility (#13837)

* [CLML][CODEGEN] CLML native codegen utility

This util generates native CLML code for a given DNN model.
It does the import via tvmc, extracts clml_modules, gets the json source and
finally generates clml_models.cc that holds the source for various sub graphs.
The cpp_clml tool has additional infrastructure to compile it as a standalone
binary that runs these models.

This PR adds the symbol name to the generated json graph.
Also, it extends the const_loader interface to get constant params.

* * review comments

* * review

* * review
---
 apps/cpp_clml/CMakeLists.txt   |  61 ++
 apps/cpp_clml/README.md| 145 
 apps/cpp_clml/clml_runner.cc   | 818 +
 apps/cpp_clml/clml_runner.h| 262 +++
 apps/cpp_clml/main.cc  | 243 ++
 apps/cpp_clml/scripts/clml_codegen.py  |  64 ++
 cmake/modules/contrib/CLML.cmake   |   2 +-
 docker/Dockerfile.ci_adreno|   3 +
 python/tvm/relay/op/contrib/clml.py| 772 +++
 .../backend/contrib/codegen_json/codegen_json.h|   1 +
 src/runtime/const_loader_module.cc |  10 +
 src/runtime/contrib/json/json_runtime.h|   3 +
 12 files changed, 2383 insertions(+), 1 deletion(-)

diff --git a/apps/cpp_clml/CMakeLists.txt b/apps/cpp_clml/CMakeLists.txt
new file mode 100644
index 00..8c0fd53bf9
--- /dev/null
+++ b/apps/cpp_clml/CMakeLists.txt
@@ -0,0 +1,61 @@
+cmake_minimum_required(VERSION 3.13)
+
+project(clml_run VERSION 2.0)
+
+if(NOT DEFINED CMAKE_TOOLCHAIN_FILE)
+  message( FATAL_ERROR "CMAKE_TOOLCHAIN_FILE Not set, forcing exit. Suggested value: {ANDROID_NDK_PATH}/build/cmake/android.toolchain.cmake." )
+endif(NOT DEFINED CMAKE_TOOLCHAIN_FILE)
+
+if(NOT DEFINED ANDROID_ABI)
+  message( FATAL_ERROR "ANDROID_ABI Not set, forcing exit. Suggested value(s): arm64-v8a (64), armeabi-v7a (32)" )
+endif(NOT DEFINED ANDROID_ABI)
+
+if(NOT DEFINED CLML_SDK)
+  message( FATAL_ERROR "CLML_SDK Not set, forcing exit." )
+endif(NOT DEFINED CLML_SDK)
+
+if (CMAKE_FIND_ROOT_PATH_MODE_LIBRARY STREQUAL "ONLY")
+  set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY BOTH)
+endif()
+
+find_library(CLML_LIBRARIES NAMES libOpenCL.so NO_DEFAULT_PATH PATHS ${CLML_SDK}/lib ${CLML_SDK}/lib64)
+
+# CMake/Android variables
+set( ANDROID_STL  c++_static CACHE STRING "Target Android STL") # default
+
+# Source variables
+set( OPENCL_INCLUDE_DIRS  ${CLML_SDK} CACHE PATH "filepath to OpenCL headers")
+
+set(CMAKE_CXX_STANDARD 17)
+set(CMAKE_CXX_STANDARD_REQUIRED True)
+
+#we do not want to pass -fno-exceptions
+if(${CMAKE_CXX_FLAGS} MATCHES "-fno-exceptions")
+  message ( WARNING "Disabling -fno-exceptions")
+  string(REGEX REPLACE "-fno-exceptions" "" CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS})
+endif()
+
+#we do not want to pass -fno-rtti
+if(${CMAKE_CXX_FLAGS} MATCHES "-fno-rtti")
+  message ( WARNING "Disabling -fno-rtti")
+  string(REGEX REPLACE "-fno-rtti" "" CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS})
+endif()
+
+set(COMMON_SOURCE_FILES
+clml_models.cc
+clml_runner.cc
+clml_runner.h
+main.cc
+../../3rdparty/cnpy/cnpy.cpp
+)
+
+include_directories(
+src
+${OPENCL_INCLUDE_DIRS}
+"../../3rdparty/dmlc-core/include"
+"../../3rdparty/cnpy/"
+)
+
+add_executable(clml_run ${COMMON_SOURCE_FILES})
+target_link_options(clml_run PRIVATE -Wl,--unresolved-symbols=ignore-in-shared-libs)
+target_link_libraries(clml_run ${CLML_LIBRARIES} z)
diff --git a/apps/cpp_clml/README.md b/apps/cpp_clml/README.md
new file mode 100644
index 00..3200492122
--- /dev/null
+++ b/apps/cpp_clml/README.md
@@ -0,0 +1,145 @@
+# OpenCLML Debug Tool
+
+Tool to generate an OpenCLML source file given a model from any framework and compile it as a native application that runs on an Android target.
+This tool helps to debug or triage OpenCLML offloaded sub graphs as a standalone application.
+
+### Codegen
+
+Models can be downloaded from well-known frameworks like TensorFlow, PyTorch, TFLite, ONNX, etc.
+Assuming ```resnet50.h5``` is a Keras ResNet50 model file, use the below command to generate an OpenCLML source for the model.
+
+```bash
+python3 scripts/clml_codegen.py resnet50.h5
+```
+
+Above command generates ```clml_models.cc``` and 

[GitHub] [tvm] echuraev merged pull request #13837: [CLML][CODEGEN] CLML native codegen utility

2023-02-02 Thread via GitHub


echuraev merged PR #13837:
URL: https://github.com/apache/tvm/pull/13837





[tvm] branch nightly updated (49849c8c3e -> 666006e926)

2023-02-02 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 49849c8c3e Extend the USE_LIBBACKTRACE option (#13816)
 add f0ea9e461a [RUNTIME] Fix the manual determination of cores in FillDataForMeasure (#13849)
 add 9008ec21ba [VM][DMLC] Lower memory usage when loading and dumping weights (#13877)
 add 7aecc1a44d [Torch] Fix advanced indexing with NoneType index arguments (#13826)
 add 37e1a6862c [QNN][Relay][Topi] Add qnn.dense with weight layout (#13854)
 add ea34e6eb0b [TOPHUB] use keys as a keyword for searching of existing statistics (#13874)
 add 099ed94951 [OpenCL] Implement save/load pre-compiled programs (#13868)
 add a89ff3e62f [tir] fix buffer_decl buffer allocation (#13906)
 add 666006e926 [Doc] fix doc for tvm.te.const() (#13904)

No new revisions were added by this update.

Summary of changes:
 apps/cpp_rtvm/README.md|  14 ++
 apps/cpp_rtvm/main.cc  |   9 +
 apps/cpp_rtvm/tvm_runner.cc|  29 ++-
 apps/cpp_rtvm/tvm_runner.h |   4 +
 python/tvm/autotvm/tophub.py   |  10 +
 python/tvm/relay/frontend/pytorch.py   |  40 +++-
 python/tvm/relay/qnn/op/_qnn.py|  11 +-
 python/tvm/relay/qnn/op/legalizations.py   | 134 -
 python/tvm/relay/qnn/op/qnn.py |  64 ++
 python/tvm/relay/qnn/strategy/generic.py   |   6 +
 python/tvm/relay/qnn/strategy/hexagon.py   |  18 ++
 python/tvm/te/operation.py |  14 +-
 python/tvm/topi/hexagon/qnn/__init__.py|   1 +
 .../tvm/topi/hexagon/qnn/dense_alter_op.py |  24 +--
 python/tvm/topi/hexagon/qnn/nn.py  | 216 +
 python/tvm/topi/nn/qnn.py  |  19 ++
 src/relay/backend/te_compiler_cache.cc |  20 +-
 src/relay/op/nn/nn.h   |   5 +
 src/relay/qnn/op/dense.cc  | 105 +-
 src/runtime/contrib/random/mt_random_engine.cc |  10 +-
 src/runtime/file_utils.h   |  37 
 src/runtime/opencl/opencl_common.h |   2 +
 src/runtime/opencl/opencl_device_api.cc|   4 +-
 src/runtime/opencl/opencl_module.cc|  77 
 .../opencl/opencl_wrapper/opencl_wrapper.cc|  12 ++
 src/runtime/vm/executable.cc   |  22 +--
 .../plan_update_buffer_allocation_location.cc  |   7 +
 tests/cpp-runtime/opencl/opencl_compile_to_bin.cc  | 208 
 .../contrib/test_arm_compute_lib/test_dense.py |   6 +-
 .../test_hexagon/test_wo_qnn_canonicalization.py   | 172 +++-
 tests/python/contrib/test_random.py|  28 +++
 tests/python/frontend/pytorch/test_forward.py  |  35 
 tests/python/relay/test_pass_qnn_legalize.py   |  92 +
 .../unittest/test_autotvm_dispatch_context.py  |  16 ++
 ...sform_plan_update_buffer_allocation_location.py |  18 ++
 35 files changed, 1336 insertions(+), 153 deletions(-)
 copy tests/python/relay/test_change_batch.py => python/tvm/topi/hexagon/qnn/dense_alter_op.py (60%)
 create mode 100644 tests/cpp-runtime/opencl/opencl_compile_to_bin.cc



[tvm] branch main updated (a89ff3e62f -> 666006e926)

2023-02-02 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


from a89ff3e62f [tir] fix buffer_decl buffer allocation (#13906)
 add 666006e926 [Doc] fix doc for tvm.te.const() (#13904)

No new revisions were added by this update.

Summary of changes:
 python/tvm/te/operation.py | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)



[GitHub] [tvm] junrushao merged pull request #13904: [Doc] fix doc for tvm.te.const()

2023-02-02 Thread via GitHub


junrushao merged PR #13904:
URL: https://github.com/apache/tvm/pull/13904





[tvm] branch unity updated: [Unity][CI] Unity specific jenkins setup (do not upstream to main) (#13910)

2023-02-02 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new b1573d20ad [Unity][CI] Unity specific jenkins setup (do not upstream to main) (#13910)
b1573d20ad is described below

commit b1573d20adaaf711e19673259b8fedd6766d0cc9
Author: Tianqi Chen 
AuthorDate: Thu Feb 2 23:33:15 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main) (#13910)

This PR sets up a unity-specific jenkins with a minimal jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests for the unity branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/docker_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   5 +
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   5 +
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   5 +
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   5 +
 ci/jenkins/unity_jenkinsfile.groovy| 337 +
 tests/scripts/task_lint.sh |   4 +-
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_extra_lint.sh |  23 ++
 tests/scripts/unity/task_python_relax.sh   |  37 +++
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 ++
 18 files changed, 486 insertions(+), 2 deletions(-)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..f9239e7728 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
// This file is generated by 'jenkins/generate.py'. Do not edit this file directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..a5b86089f0 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
// This file is generated by 'jenkins/generate.py'. Do not edit this file directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..fb14f68e9a 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
// This file is generated by 'jenkins/generate.py'. Do not edit this file directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..a312ee3ab5 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
// This file is generated by 'jenkins/generate.py'. Do not edit this file directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..8623630525 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local 

[GitHub] [tvm] junrushao merged pull request #13910: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread via GitHub


junrushao merged PR #13910:
URL: https://github.com/apache/tvm/pull/13910





[tvm] branch unity updated: [Unity][IR] First-class StructInfo (#13907)

2023-02-02 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new 83e1f07d82 [Unity][IR] First-class StructInfo (#13907)
83e1f07d82 is described below

commit 83e1f07d82acd721df515e27ef12693074d79997
Author: Yuchen Jin 
AuthorDate: Thu Feb 2 23:32:15 2023 -0500

[Unity][IR] First-class StructInfo (#13907)

* [Unity][IR] First-class StructInfo

Relax tracks structural information (such as tensor shape) via `StructInfo` about the values in Relax.

* Fix rust build

-

Co-authored-by: Junru Shao 
---
 CMakeLists.txt  |   1 +
 include/tvm/relax/struct_info.h | 430 
 rust/tvm/src/ir/relay/mod.rs|   2 +
 src/relax/ir/struct_info.cc | 238 ++
 4 files changed, 671 insertions(+)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index e55b7174dc..88bf6472ce 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -289,6 +289,7 @@ tvm_file_glob(GLOB_RECURSE COMPILER_SRCS
 src/driver/*.cc
 src/support/*.cc
 src/script/*.cc
+src/relax/ir/*.cc
 src/relax/backend/vm/*.cc
 )
 
diff --git a/include/tvm/relax/struct_info.h b/include/tvm/relax/struct_info.h
new file mode 100644
index 00..d21c8db86b
--- /dev/null
+++ b/include/tvm/relax/struct_info.h
@@ -0,0 +1,430 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#ifndef TVM_RELAX_STRUCT_INFO_H_
+#define TVM_RELAX_STRUCT_INFO_H_
+
+#include 
+#include 
+#include 
+// #include 
+#include 
+#include 
+
+namespace tvm {
+namespace relax {
+
+/*!
+ * \brief Opaque object.
+ */
+class ObjectStructInfoNode : public StructInfoNode {
+ public:
+  void VisitAttrs(AttrVisitor* v) { v->Visit("span", ); }
+
+  bool SEqualReduce(const ObjectStructInfoNode* other, SEqualReducer equal) 
const { return true; }
+
+  void SHashReduce(SHashReducer hash_reduce) const { hash_reduce(0); }
+
+  static constexpr const char* _type_key = "relax.ObjectStructInfo";
+  TVM_DECLARE_FINAL_OBJECT_INFO(ObjectStructInfoNode, StructInfoNode);
+};
+
+/*!
+ * \brief Managed reference to ObjectStructInfoNode.
+ * \sa ObjectStructInfoNode
+ */
+class ObjectStructInfo : public StructInfo {
+ public:
+  TVM_DLL ObjectStructInfo(Span span = Span());
+
+  TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(ObjectStructInfo, StructInfo, 
ObjectStructInfoNode);
+};
+
+/*!
+ * \brief Primitive value.
+ */
+class PrimStructInfoNode : public StructInfoNode {
+ public:
+  /*! \brief Underlying data type of the primitive value */
+  DataType dtype;
+
+  void VisitAttrs(AttrVisitor* v) {
+v->Visit("dtype", );
+v->Visit("span", );
+  }
+
+  bool SEqualReduce(const PrimStructInfoNode* other, SEqualReducer equal) 
const {
+return equal(dtype, other->dtype);
+  }
+
+  void SHashReduce(SHashReducer hash_reduce) const { hash_reduce(dtype); }
+
+  static constexpr const char* _type_key = "relax.PrimStructInfo";
+  TVM_DECLARE_FINAL_OBJECT_INFO(PrimStructInfoNode, StructInfoNode);
+};
+
+/*!
+ * \brief Managed reference to PrimStructInfoNode.
+ * \sa PrimStructInfoNode
+ */
+class PrimStructInfo : public StructInfo {
+ public:
+  TVM_DLL PrimStructInfo(DataType dtype, Span span = Span());
+
+  TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(PrimStructInfo, StructInfo, 
PrimStructInfoNode);
+};
+
+/*!
+ * \brief StructInfo of shape value.
+ */
+class ShapeStructInfoNode : public StructInfoNode {
+ public:
+  /*! \brief optionally stores the symbolic value patterns of the shape */
+  Optional> values;
+  /*!
+   * \brief The number of dimension of the shape, can be unknown.
+   * \sa kUnknownNDim
+   */
+  int ndim;
+
+  /*! \return Whether the struct info contains unknown ndim. */
+  bool IsUnknownNdim() const { return ndim == kUnknownNDim; }
+
+  void VisitAttrs(AttrVisitor* v) {
+v->Visit("values", );
+v->Visit("ndim", );
+v->Visit("span", );
+  }
+
+  bool SEqualReduce(const ShapeStructInfoNode* other, SEqualReducer equal) 
const {
+return equal(values, other->values) && equal(ndim, other->ndim);
+  }
+
+  void 

[GitHub] [tvm] junrushao merged pull request #13907: [Unity][IR] First-class StructInfo

2023-02-02 Thread via GitHub


junrushao merged PR #13907:
URL: https://github.com/apache/tvm/pull/13907





[GitHub] [tvm] cyx-6 commented on pull request #13900: [Fix][TVMScript] Fix `LetStmt` printing logic

2023-02-02 Thread via GitHub


cyx-6 commented on PR #13900:
URL: https://github.com/apache/tvm/pull/13900#issuecomment-1414913697

   @wrongtest-intellif Oh, that is just an example of a case where a 
`LetStmt->var` - `y` - is used before its new value is bound. I picked the outer 
`LetStmt->value` field for the usage of `y` for convenience. So never mind its 
logic; it makes no sense except for its format.





[GitHub] [tvm] junrushao commented on pull request #13900: [Fix][TVMScript] Fix `LetStmt` printing logic

2023-02-02 Thread via GitHub


junrushao commented on PR #13900:
URL: https://github.com/apache/tvm/pull/13900#issuecomment-1414902367

   Off the topic, in the future, we might want to change the syntax to:
   
   ```python
   with T.LetStmt(a) as b:
   ```





[GitHub] [tvm] Hzfengsy merged pull request #13906: [tir] fix buffer_decl buffer allocation

2023-02-02 Thread via GitHub


Hzfengsy merged PR #13906:
URL: https://github.com/apache/tvm/pull/13906





[GitHub] [tvm] Hzfengsy commented on pull request #13906: [tir] fix buffer_decl buffer allocation

2023-02-02 Thread via GitHub


Hzfengsy commented on PR #13906:
URL: https://github.com/apache/tvm/pull/13906#issuecomment-1414892052

   Thanks @cconvey for the fix and the continuous improvements!





[GitHub] [tvm] wrongtest-intellif commented on pull request #13900: [Fix][TVMScript] Fix `LetStmt` printing logic

2023-02-02 Thread via GitHub


wrongtest-intellif commented on PR #13900:
URL: https://github.com/apache/tvm/pull/13900#issuecomment-1414892147

   > ```python
   > x = T.var("int32")
   > y = T.var("int32")
   > with T.let(x, y):
   >   with T.let(y, 0):
   > ```
   
   Thank you for the fix! A small question about the example: should it be
   ```
   with T.let(y, 0):
 with T.let(x, y):
   ```
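For intuition on the ordering question above: the inner binding can only reference names the outer scope has already bound, so `T.let(x, y)` needs `y` bound first. A toy Python model of nested let scopes (illustrative only, not actual TVMScript semantics):

```python
from contextlib import contextmanager

env = {}  # current let bindings: name -> value


@contextmanager
def let(name, value):
    # Resolve a reference to an already-bound name, then bind `name`
    # for the duration of the block (KeyError if the name is unbound).
    resolved = env[value] if isinstance(value, str) else value
    env[name] = resolved
    try:
        yield resolved
    finally:
        del env[name]


# y must be bound in the outer scope for x = y to resolve:
with let("y", 0):
    with let("x", "y"):
        print(env["x"])  # 0
```

With the order swapped, `let("x", "y")` fails immediately because `y` has no binding yet, which matches the suggested correction.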





[tvm] branch main updated (099ed94951 -> a89ff3e62f)

2023-02-02 Thread syfeng
This is an automated email from the ASF dual-hosted git repository.

syfeng pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 099ed94951 [OpenCL] Implement save/load pre-compiled programs (#13868)
 add a89ff3e62f [tir] fix buffer_decl buffer allocation (#13906)

No new revisions were added by this update.

Summary of changes:
 .../plan_update_buffer_allocation_location.cc  |  7 +++
 ...transform_plan_update_buffer_allocation_location.py | 18 ++
 2 files changed, 25 insertions(+)



[GitHub] [tvm] YuchenJin closed pull request #13902: [Unity][CI] setup CI (do not upstream to main)

2023-02-02 Thread via GitHub


YuchenJin closed pull request #13902: [Unity][CI] setup CI (do not upstream to 
main)
URL: https://github.com/apache/tvm/pull/13902





[GitHub] [tvm] YuchenJin commented on pull request #13902: [Unity][CI] setup CI (do not upstream to main)

2023-02-02 Thread via GitHub


YuchenJin commented on PR #13902:
URL: https://github.com/apache/tvm/pull/13902#issuecomment-1414808218

   > #13910 should achieve the asks that leverages a customized jenkins setup 
and skip most of the existing pipelines without starting a stage
   
   Thanks!





[GitHub] [tvm] srkreddy1238 commented on a diff in pull request #13867: [DOCS][ADRENO] Improved Adreno documentation

2023-02-02 Thread via GitHub


srkreddy1238 commented on code in PR #13867:
URL: https://github.com/apache/tvm/pull/13867#discussion_r1095321344


##
docs/how_to/deploy/adreno.rst:
##
@@ -15,41 +15,60 @@
 specific language governing permissions and limitations
 under the License.
 
-Deploy to Adreno GPU
-===
+Deploy to Adreno™ GPU
+
 
-**Authors**: Daniil Barinov, Egor Churaev, Andrey Malyshev
+**Authors**: Daniil Barinov, Egor Churaev, Andrey Malyshev, Siva Rama Krishna
 
 Introduction
 
 
-Adreno is a series of graphics processing unit (GPU) semiconductor
+Adreno™ is a series of graphics processing unit (GPU) semiconductor
 intellectual property cores developed by Qualcomm and used in many of
 their SoCs.
 
-The Adreno GPU accelerates the rendering of complex geometries to
+The Adreno™ GPU accelerates the rendering of complex geometries to
 deliver high-performance graphics and a rich user experience with low
 power consumption.
 
-This guide will demonstrate :ref:`the benefits of using textures with 
Adreno`,
-how to :ref:`build TVM with OpenCL` (needed by Adreno 
devices) and TVM RPC
-enabled. It will also provide :ref:`example 
code` to better understand the differences 
in compiling and deploying models
-for Adreno devices.
+TVM supports deep learning acceleration on Adreno™ GPU by native OpenCL 
backend of TVM and
+also through OpenCLML backend. Native OpenCL backend of TVM is enhanced to 
make it
+Adreno™ friendly by incorporating texture memory usage and Adreno™ friendly 
layouts.
+OpenCLML is an SDK release by Qualcomm that provides kernel acceleration 
library
+for most of the deep learning operators.
 
-.. _advantages_of_the_textures:
+This guide is organized to demonstrate various design aspects of
 
-Advantages of the Textures
---
+- :ref:`OpenCL Backend Enhancements`
+- :ref:`About OpenCLML`
+- :ref:`Build and Deploy`
 
-One of the Adreno's advantages is the clever handling of textures. At
+
+
+.. how to :ref:`build TVM with OpenCL` (needed by 
Adreno™ devices) and TVM RPC
+.. enabled. It will also provide :ref:`example 
code` to better understand the differences 
in compiling and deploying models
+.. for Adreno™ devices.

Review Comment:
   Do you still see this ? It looks fine in local .






[GitHub] [tvm] mikeseven opened a new issue, #13911: [Bug & fix] gen_requirements issue with latest setup tools

2023-02-02 Thread via GitHub


mikeseven opened a new issue, #13911:
URL: https://github.com/apache/tvm/issues/13911

   The latest setuptools crashes when doing `make cython`.
   The issue is in gen_requirements.py:
   `("numpy", "<=1.23.*"),`
   
   must be changed to:
   `("numpy", "<=1.23"),`
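The reason `"<=1.23.*"` breaks newer setuptools is that PEP 440 only allows a trailing `.*` wildcard with the `==` and `!=` operators. A rough sketch of that rule (hypothetical helper, not TVM code):

```python
import re


def is_valid_specifier(spec: str) -> bool:
    """Rough PEP 440 check: a trailing '.*' wildcard is only legal
    with the == and != operators (illustrative helper, not TVM code)."""
    m = re.match(r"^(==|!=|<=|>=|~=|<|>)\s*(\S+)$", spec.strip())
    if not m:
        return False
    op, version = m.groups()
    if version.endswith(".*"):
        return op in ("==", "!=")
    return True


print(is_valid_specifier("<=1.23.*"))  # False: wildcard with <= is rejected
print(is_valid_specifier("<=1.23"))    # True
print(is_valid_specifier("==1.23.*"))  # True: wildcard is fine with ==
```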





[tvm] branch main updated: [OpenCL] Implement save/load pre-compiled programs (#13868)

2023-02-02 Thread srk
This is an automated email from the ASF dual-hosted git repository.

srk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 099ed94951 [OpenCL] Implement save/load pre-compiled programs (#13868)
099ed94951 is described below

commit 099ed949519f3b6ae182c31ce69496f18a1f60ad
Author: Egor Churaev 
AuthorDate: Fri Feb 3 05:28:35 2023 +0300

[OpenCL] Implement save/load pre-compiled programs (#13868)

* [OpenCL] Implement save/load pre-compiled programs

Using pre-compiled programs might significantly improve inference time
of the first run.

- Added methods `SupportPreCompiledPrograms` which reports if the module
  supports using pre-compiled programs.
- Method `GetPreCompiledPrograms` returns string with bytes of
  pre-compiled programs.
- Method `SetPreCompiledPrograms` allows user to pass pre-compiled
  programs to the module.

* Fix lint

* Apply comment: PackedFunc is used

* Fix build

* Fix CI and rename functions

* Apply comments
---
 apps/cpp_rtvm/README.md|  14 ++
 apps/cpp_rtvm/main.cc  |   9 +
 apps/cpp_rtvm/tvm_runner.cc|  29 ++-
 apps/cpp_rtvm/tvm_runner.h |   4 +
 src/runtime/opencl/opencl_common.h |   2 +
 src/runtime/opencl/opencl_device_api.cc|   4 +-
 src/runtime/opencl/opencl_module.cc|  77 
 .../opencl/opencl_wrapper/opencl_wrapper.cc|  12 ++
 tests/cpp-runtime/opencl/opencl_compile_to_bin.cc  | 208 +
 9 files changed, 356 insertions(+), 3 deletions(-)

diff --git a/apps/cpp_rtvm/README.md b/apps/cpp_rtvm/README.md
index e696153282..c60a7b0e12 100644
--- a/apps/cpp_rtvm/README.md
+++ b/apps/cpp_rtvm/README.md
@@ -352,3 +352,17 @@ python3 -m tvm.driver.tvmc compile --cross-compiler 
${ANDROID_NDK_HOME}/toolchai
 python3 -m tvm.driver.tvmc run --device="cl" keras-resnet50.tar --rpc-key 
${TVM_RPC_KEY} --rpc-tracker {TVM_TRACKER_HOST}:{TVM_TRACKER_PORT} --print-time
 
 ```
+
+# Use pre-compiled OpenCL kernels
+Using pre-compiled programs might significantly improve inference time of the
+first run. E.g. for topology with ~300 kernels compilation time on Adreno was
+about 26 seconds. But after dumping compiled programs to binary files and reuse
+them on the next runs, the compilation time was significantly decreased (more
+than 1000 times) and starts to be around 25 ms.
+
+To use this functionality, the developer has to pass the parameter
+`--pre-compiled` to `rtvm` and specify the file name where pre-compiled
+programs will be stored. If the pre-compiled file name was passed to `rtvm`,
+then after the `Load` method, `UsePreCompiledPrograms` is called. This method
+loads pre-compiled programs if the file exists; otherwise the file will be
+created and pre-compiled programs will be saved to it.
diff --git a/apps/cpp_rtvm/main.cc b/apps/cpp_rtvm/main.cc
index 31019ee0c9..c38a5f62bd 100644
--- a/apps/cpp_rtvm/main.cc
+++ b/apps/cpp_rtvm/main.cc
@@ -54,6 +54,7 @@ static const string kUsage =
 "--input- Numpy file for the model input (optional and we use 
random of not given)\n"
 "--output   - Numpy file name to dump the model output as numpy\n"
 "--dump-meta- Dump model meta information\n"
+"--pre-compiled - The file name of a file where pre-compiled programs 
should be stored"
 "\n"
 "  Example\n"
 "  ./rtvm --model=keras-resnet50 --device=\"opencl\" --dump-meta\n"
@@ -66,12 +67,14 @@ static const string kUsage =
  * \arg device The target device to use {llvm, cl, ...etc.}
  * \arg input Numpy file for the model input
  * \arg output Numpy file name to dump the model output as numpy
+ * \arg pre_compiled File name where pre-compiled programs should be stored
  */
 struct ToolArgs {
   string model;
   string device;
   string input;
   string output;
+  string pre_compiled;
   bool dump_meta = false;
 };
 
@@ -84,6 +87,7 @@ void PrintArgs(const ToolArgs& args) {
   LOG(INFO) << "Device= " << args.device;
   LOG(INFO) << "Input = " << args.input;
   LOG(INFO) << "Output= " << args.output;
+  LOG(INFO) << "Pre-compiled  = " << args.pre_compiled;
   LOG(INFO) << "Dump Metadata = " << ((args.dump_meta) ? ("True") : ("False"));
 }
 
@@ -172,6 +176,8 @@ void ParseCmdArgs(int argc, char* argv[], struct ToolArgs& 
args) {
   if (!pmeta.empty()) {
 args.dump_meta = true;
   }
+
+  args.pre_compiled = GetCmdOption(argc, argv, "--pre-compiled=");
 }
 
 /*!
@@ -190,6 +196,9 @@ int ExecuteModel(ToolArgs& args) {
 
   // Load the model
   runner.Load();
+  if (!args.pre_compiled.empty()) {
+runner.UsePreCompiledPrograms(args.pre_compiled);
+  }
 
   // Query Model meta Information
   TVMMetaInfo mInfo = runner.GetMetaInfo();

[GitHub] [tvm] srkreddy1238 merged pull request #13868: [OpenCL] Implement save/load pre-compiled programs

2023-02-02 Thread via GitHub


srkreddy1238 merged PR #13868:
URL: https://github.com/apache/tvm/pull/13868





[GitHub] [tvm] tqchen commented on pull request #13902: [Unity][CI] setup CI (do not upstream to main)

2023-02-02 Thread via GitHub


tqchen commented on PR #13902:
URL: https://github.com/apache/tvm/pull/13902#issuecomment-1414695503

   Thanks for the suggestions. https://github.com/apache/tvm/pull/13910 should 
achieve the asks that leverages a customized jenkins setup and skip most of the 
existing pipelines without starting a stage





[GitHub] [tvm] tqchen commented on pull request #13910: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread via GitHub


tqchen commented on PR #13910:
URL: https://github.com/apache/tvm/pull/13910#issuecomment-1414694055

   already tested on the unity-staging branch, should be ready to go





[GitHub] [tvm] tvm-bot commented on pull request #13910: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread via GitHub


tvm-bot commented on PR #13910:
URL: https://github.com/apache/tvm/pull/13910#issuecomment-1414692858

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines 
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please 
request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-ing them in a comment.
   
   
* cc @Mousius, @areusch, @driazati, @gigiblender, @leandron See 
[#10317](https://github.com/apache/tvm/issues/10317) for 
details
   
   Generated by 
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] tqchen opened a new pull request, #13910: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread via GitHub


tqchen opened a new pull request, #13910:
URL: https://github.com/apache/tvm/pull/13910

   This PR sets up a unity-specific Jenkins with a minimal Jenkinsfile, without 
sharding, and disables most of the tests to reduce overall cost. We can add 
tests for the unity branch by configuring the specific groovy file.





[GitHub] [tvm] tqchen commented on pull request #13910: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread via GitHub


tqchen commented on PR #13910:
URL: https://github.com/apache/tvm/pull/13910#issuecomment-1414693259

   cc @YuchenJin 





[GitHub] [tvm] tvm-bot commented on pull request #13909: [microTVM] unify windows build cmake for standalone crt

2023-02-02 Thread via GitHub


tvm-bot commented on PR #13909:
URL: https://github.com/apache/tvm/pull/13909#issuecomment-1414659212

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines 
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please 
request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-ing them in a comment.
   
   
   
   Generated by 
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] alanmacd opened a new pull request, #13909: [microTVM] unify windows build cmake for standalone crt

2023-02-02 Thread via GitHub


alanmacd opened a new pull request, #13909:
URL: https://github.com/apache/tvm/pull/13909

   unify windows build cmake for standalone crt





[GitHub] [tvm] mazen200222 opened a new issue, #13908: [Bug]

2023-02-02 Thread via GitHub


mazen200222 opened a new issue, #13908:
URL: https://github.com/apache/tvm/issues/13908

   Thanks for participating in the TVM community! We use https://discuss.tvm.ai 
for any general usage questions and discussions. The issue tracker is used for 
actionable items such as feature proposals discussion, roadmaps, and bug 
tracking.  You are always welcomed to post on the forum first :smile_cat:
   
   Issues that are inactive for a period of time may get closed. We adopt this 
policy so that we won't lose track of actionable issues that may fall at the 
bottom of the pile. Feel free to reopen a new one if you feel there is an 
additional problem that needs attention when an old one gets closed.
   
   ### Expected behavior
   
   What you were expecting
   
   ### Actual behavior
   
   What actually happened
   
   ### Environment
   
   Any environment details, such as: Operating System, TVM version, etc
   
   ### Steps to reproduce
   
   Preferably a minimal script to cause the issue to occur.
   
   ### Triage
   
   Please refer to the list of label tags 
[here](https://github.com/apache/tvm/wiki/Issue-Triage-Labels) to find the 
relevant tags and add them below in a bullet format (example below).
   
   * needs-triage





[tvm] branch unity-staging updated (e597236a03 -> 052cf726bf)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard e597236a03 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)
 new 052cf726bf [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (e597236a03)
\
 N -- N -- N   refs/heads/unity-staging (052cf726bf)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ci/jenkins/unity_jenkinsfile.groovy | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 052cf726bf47dcb2f98715cac7c8799d206a6691
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR setup a unity specific jenkins with minimum jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests of unty branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/docker_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   5 +
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   5 +
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   5 +
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   5 +
 ci/jenkins/unity_jenkinsfile.groovy| 337 +
 tests/scripts/task_lint.sh |   4 +-
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_extra_lint.sh |  23 ++
 tests/scripts/unity/task_python_relax.sh   |  37 +++
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 ++
 18 files changed, 486 insertions(+), 2 deletions(-)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..f9239e7728 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..a5b86089f0 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..fb14f68e9a 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..a312ee3ab5 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..8623630525 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // 

[GitHub] [tvm] gromero commented on issue #13856: [Bug] CMSIS-NN BYOC fails with Zephyr 3.2

2023-02-02 Thread via GitHub


gromero commented on issue #13856:
URL: https://github.com/apache/tvm/issues/13856#issuecomment-1414515680

   btw, the ICE is fixed in gcc HEAD by now (checked commit 605d1297b91). So I 
guess it's just a matter of a new gcc cut and the Zephyr SDK picking it up.





[tvm] branch unity-staging updated (b3a437a49b -> e597236a03)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard b3a437a49b [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)
 new e597236a03 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (b3a437a49b)
\
 N -- N -- N   refs/heads/unity-staging (e597236a03)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 tests/scripts/unity/task_extra_lint.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)



[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit e597236a0312e096e37e7553e8121bd0a757210a
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR setup a unity specific jenkins with minimum jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests of unty branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/docker_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   5 +
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   5 +
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   5 +
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   5 +
 ci/jenkins/unity_jenkinsfile.groovy| 337 +
 tests/scripts/task_lint.sh |   4 +-
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_extra_lint.sh |  23 ++
 tests/scripts/unity/task_python_relax.sh   |  37 +++
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 ++
 18 files changed, 486 insertions(+), 2 deletions(-)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..f9239e7728 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..a5b86089f0 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..fb14f68e9a 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..a312ee3ab5 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..8623630525 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // 

[GitHub] [tvm] tqchen commented on pull request #13907: [Unity][IR] First-class StructInfo

2023-02-02 Thread via GitHub


tqchen commented on PR #13907:
URL: https://github.com/apache/tvm/pull/13907#issuecomment-1414460931

   NOTE: cpu/pr-head rust error is not relevant to the pr





[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit b3a437a49bad27f41af7fb053887fc53505ae43a
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR sets up a unity-specific Jenkins with a minimal Jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests for the unity branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/docker_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   5 +
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   5 +
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   5 +
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   5 +
 ci/jenkins/unity_jenkinsfile.groovy| 337 +
 tests/scripts/task_lint.sh |   4 +-
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_extra_lint.sh |  24 ++
 tests/scripts/unity/task_python_relax.sh   |  37 +++
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 ++
 18 files changed, 487 insertions(+), 2 deletions(-)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..f9239e7728 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..a5b86089f0 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..fb14f68e9a 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..a312ee3ab5 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..8623630525 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // 

[tvm] branch unity-staging updated (fa52900ec0 -> b3a437a49b)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard fa52900ec0 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)
 new b3a437a49b [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (fa52900ec0)
\
 N -- N -- N   refs/heads/unity-staging (b3a437a49b)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
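The history divergence described above can be illustrated with a small, hypothetical Python sketch. The commit labels B, O, and N follow the diagram; this is only an analogue of the classification the notice describes, not how gitbox actually computes it:

```python
def classify_force_push(old_history, new_history):
    """Given two root-first commit lists for a branch before and after a
    --force push, find the common base B, the O revisions that were
    discarded, and the N revisions that are new."""
    base = None
    for a, b in zip(old_history, new_history):
        if a != b:
            break
        base = a  # last commit shared by both histories
    discarded = old_history[old_history.index(base) + 1:] if base else old_history
    added = new_history[new_history.index(base) + 1:] if base else new_history
    return base, discarded, added

# Mirrors the diagram: * -- * -- B -- O -- O -- O  vs  B -- N -- N -- N
old = ["*1", "*2", "B", "O1", "O2", "O3"]
new = ["*1", "*2", "B", "N1", "N2", "N3"]
base, discarded, added = classify_force_push(old, new)
# base == "B"; discarded == ["O1", "O2", "O3"]; added == ["N1", "N2", "N3"]
```

Only the N commits past the common base B generate fresh notification emails, which matches the wording of the notice.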


Summary of changes:
 ci/jenkins/unity_jenkinsfile.groovy | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[tvm] branch unity-staging updated (edc3f4e21d -> fa52900ec0)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard edc3f4e21d [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)
 new fa52900ec0 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)



Summary of changes:
 ci/jenkins/generated/arm_jenkinsfile.groovy|  12 +--
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|  12 +--
 ci/jenkins/generated/cpu_jenkinsfile.groovy|  12 +--
 ci/jenkins/generated/docker_jenkinsfile.groovy |  12 +--
 ci/jenkins/generated/gpu_jenkinsfile.groovy|  12 +--
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|  14 +--
 ci/jenkins/generated/i386_jenkinsfile.groovy   |  12 +--
 ci/jenkins/generated/lint_jenkinsfile.groovy   |  11 +--
 .../generated/minimal_cross_isa_jenkinsfile.groovy |  12 +--
 ci/jenkins/generated/minimal_jenkinsfile.groovy|  11 +--
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |  12 +--
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |  12 +--
 ci/jenkins/unity_jenkinsfile.groovy| 100 -
 .../task_extra_lint.sh}|   7 +-
 14 files changed, 174 insertions(+), 77 deletions(-)
 copy tests/scripts/{task_python_integration_i386only.sh => 
unity/task_extra_lint.sh} (85%)



[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit fa52900ec02ef5ac200d8f83662f8276fbcda820
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR sets up a unity-specific Jenkins with a minimal Jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests for the unity branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/docker_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   5 +
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   5 +
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   5 +
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   5 +
 ci/jenkins/unity_jenkinsfile.groovy| 337 +
 tests/scripts/task_lint.sh |   4 +-
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_extra_lint.sh |  24 ++
 tests/scripts/unity/task_python_relax.sh   |  37 +++
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 ++
 18 files changed, 487 insertions(+), 2 deletions(-)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..f9239e7728 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..a5b86089f0 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..fb14f68e9a 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..a312ee3ab5 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // This file is generated by 'jenkins/generate.py'. Do not edit this file 
directly!
 // Make edits to 'jenkins/Jenkinsfile.j2' and regenerate this with
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..8623630525 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -54,6 +54,11 @@
 // - Periodically cleanup the old versions on local workers
 //
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+return
+
 // = IMPORTANT NOTE =
 // 
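The approach in the diffs above — force `skip_ci` on before any stage runs so every downstream stage is bypassed — can be sketched with a hypothetical Python analogue of the Groovy pipeline logic. The `prepare`/`build` names come from the Jenkinsfiles; the gating shown is an illustration under those assumptions, not the actual CI code:

```python
def make_pipeline():
    """Minimal analogue of the patched Jenkinsfiles: prepare() normally
    inspects the change, but the unity patch hard-codes skip_ci = True."""
    state = {"skip_ci": False}

    def prepare():
        # In the unity branch the generated files set this unconditionally
        # (the "(DO NOT UPSTREAM TO MAIN)" hunks), skipping all stages.
        state["skip_ci"] = True

    def build():
        # Mirrors: stage('Build') { if (!skip_ci) { ... } }
        if not state["skip_ci"]:
            return "built"
        return "skipped"

    return prepare, build

prepare, build = make_pipeline()
prepare()
result = build()
# result == "skipped": with skip_ci forced on, the Build stage is bypassed
```

The `return` inserted at the top of some generated files achieves the same effect even more bluntly, by exiting the script before any stage is declared.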

[GitHub] [tvm] Joetibz commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2023-02-02 Thread via GitHub


Joetibz commented on issue #4272:
URL: https://github.com/apache/tvm/issues/4272#issuecomment-1414447359

   Hello, I am also new to TVM. I am trying to run inference on the Zynq 
board without RPC, but I keep getting the error below; I have not even 
gotten to making the changes discussed in this thread. Could I get some 
guidance?
   OSError: libmkl_intel_lp64.so.1: cannot open shared object file: No such 
file or directory
   





[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit edc3f4e21da3eb93d63733a9652802ec8241ef37
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR sets up a unity-specific Jenkins with a minimal Jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests for the unity branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/docker_jenkinsfile.groovy |   7 +-
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   9 +-
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   7 +-
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   6 +-
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   7 +-
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   6 +-
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   7 +-
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   7 +-
 ci/jenkins/unity_jenkinsfile.groovy| 243 +
 tests/scripts/task_lint.sh |   4 +-
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_python_relax.sh   |  37 
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 +++
 17 files changed, 380 insertions(+), 15 deletions(-)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..4c8c8734c1 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci && is_docs_only_build != 1) {
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..f1c93400c6 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci && is_docs_only_build != 1) {
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..7b8621852e 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci && is_docs_only_build != 1) {
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..8610d465e9 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def ecr_push(full_name) {
   aws_account_id = sh(
 returnStdout: true,
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..7348269061 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci) {
diff --git a/ci/jenkins/generated/hexagon_jenkinsfile.groovy 
b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
index c2f39a0d08..3109f6db05 100644
--- a/ci/jenkins/generated/hexagon_jenkinsfile.groovy
+++ b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
@@ -456,7 +456,7 @@ def prepare() {
   returnStatus: true,
   script: "./${jenkins_scripts_root}/git_change_docker.sh",
   label: 'Check for any docker changes',
-)
+   )
 
 if (skip_ci) {
   // Don't rebuild when skipping CI

[tvm] branch unity-staging updated (d510470e64 -> edc3f4e21d)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard d510470e64 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)
 add 5c38bef497 [Unity] Relax expressions and types (#13901)
 new edc3f4e21d [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)



Summary of changes:
 include/tvm/ir/expr.h|8 +
 include/tvm/relax/expr.h | 1003 ++
 include/tvm/relax/type.h |  166 
 3 files changed, 1177 insertions(+)
 create mode 100644 include/tvm/relax/expr.h
 create mode 100644 include/tvm/relax/type.h



[tvm] branch unity-staging updated (987c962e40 -> d510470e64)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard 987c962e40 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)
 new d510470e64 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)



Summary of changes:
 ci/jenkins/generated/arm_jenkinsfile.groovy   | 12 ++--
 ci/jenkins/generated/cortexm_jenkinsfile.groovy   | 12 ++--
 ci/jenkins/generated/cpu_jenkinsfile.groovy   | 12 ++--
 ci/jenkins/generated/docker_jenkinsfile.groovy| 12 ++--
 ci/jenkins/generated/gpu_jenkinsfile.groovy   | 12 ++--
 ci/jenkins/generated/hexagon_jenkinsfile.groovy   | 14 +++---
 ci/jenkins/generated/i386_jenkinsfile.groovy  | 12 ++--
 ci/jenkins/generated/lint_jenkinsfile.groovy  | 11 +--
 ci/jenkins/generated/minimal_cross_isa_jenkinsfile.groovy | 12 ++--
 ci/jenkins/generated/minimal_jenkinsfile.groovy   | 11 +--
 ci/jenkins/generated/riscv_jenkinsfile.groovy | 12 ++--
 ci/jenkins/generated/wasm_jenkinsfile.groovy  | 12 ++--
 tests/scripts/task_lint.sh|  4 ++--
 13 files changed, 73 insertions(+), 75 deletions(-)



[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit d510470e64164e3e8d64a0f735de6c163f8c7fa0
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR sets up a unity-specific Jenkins with a minimal Jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests for the unity branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/docker_jenkinsfile.groovy |   7 +-
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   7 +-
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   9 +-
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   7 +-
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   6 +-
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   7 +-
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   6 +-
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   7 +-
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   7 +-
 ci/jenkins/unity_jenkinsfile.groovy| 243 +
 tests/scripts/task_lint.sh |   4 +-
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_python_relax.sh   |  37 
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 +++
 17 files changed, 380 insertions(+), 15 deletions(-)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..4c8c8734c1 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci && is_docs_only_build != 1) {
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..f1c93400c6 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci && is_docs_only_build != 1) {
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..7b8621852e 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci && is_docs_only_build != 1) {
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..8610d465e9 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def ecr_push(full_name) {
   aws_account_id = sh(
 returnStdout: true,
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..7348269061 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -539,7 +539,12 @@ def micro_cpp_unittest(image) {
 
 cancel_previous_build()
 
-prepare()
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+// prepare()
+
 def build() {
   stage('Build') {
 if (!skip_ci) {
diff --git a/ci/jenkins/generated/hexagon_jenkinsfile.groovy 
b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
index c2f39a0d08..3109f6db05 100644
--- a/ci/jenkins/generated/hexagon_jenkinsfile.groovy
+++ b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
@@ -456,7 +456,7 @@ def prepare() {
   returnStatus: true,
   script: "./${jenkins_scripts_root}/git_change_docker.sh",
   label: 'Check for any docker changes',
-)
+   )
 
 if (skip_ci) {
   // Don't rebuild when skipping CI

[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 987c962e40ff6ba97cd12e2c94112613e485b39d
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR sets up a unity-specific Jenkins with a minimal Jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests for the unity branch by configuring the specific
groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/docker_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   5 +
 ci/jenkins/generated/lint_jenkinsfile.groovy   |   5 +
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   5 +
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   5 +
 ci/jenkins/unity_jenkinsfile.groovy| 243 +
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_python_relax.sh   |  37 
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 +++
 16 files changed, 367 insertions(+)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..80eabf10c2 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..f063622d1f 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..72f887e2df 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..eb74a41b39 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..f32d2f9d9a 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/hexagon_jenkinsfile.groovy 
b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
index c2f39a0d08..1303d305c4 100644
--- a/ci/jenkins/generated/hexagon_jenkinsfile.groovy
+++ b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any 

[tvm] branch unity-staging updated (0f7c9b45fb -> 987c962e40)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard 0f7c9b45fb [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)
 new 987c962e40 [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (0f7c9b45fb)
\
 N -- N -- N   refs/heads/unity-staging (987c962e40)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ci/jenkins/generated/lint_jenkinsfile.groovy |  5 +++
 ci/jenkins/unity_jenkinsfile.groovy  | 51 
 2 files changed, 56 insertions(+)



[tvm] 01/01: [Unity][CI] Unity specific jenkins setup (do not upstream to main)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 0f7c9b45fbc7738241a611f80c857ed18da2e287
Author: tqchen 
AuthorDate: Wed Feb 1 15:50:41 2023 -0500

[Unity][CI] Unity specific jenkins setup (do not upstream to main)

This PR sets up a unity-specific Jenkins with a minimal jenkinsfile
without sharding and disables most of the tests to reduce overall
cost. We can add tests for the unity branch by configuring the
specific groovy file.
---
 ci/jenkins/generated/arm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cortexm_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/cpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/docker_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/gpu_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/hexagon_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/i386_jenkinsfile.groovy   |   5 +
 .../generated/minimal_cross_isa_jenkinsfile.groovy |   5 +
 ci/jenkins/generated/minimal_jenkinsfile.groovy|   5 +
 ci/jenkins/generated/riscv_jenkinsfile.groovy  |   5 +
 ci/jenkins/generated/wasm_jenkinsfile.groovy   |   5 +
 ci/jenkins/unity_jenkinsfile.groovy| 192 +
 tests/scripts/unity/README |   2 +
 tests/scripts/unity/task_python_relax.sh   |  37 
 tests/scripts/unity/task_python_relax_gpuonly.sh   |  25 +++
 15 files changed, 311 insertions(+)

diff --git a/ci/jenkins/generated/arm_jenkinsfile.groovy 
b/ci/jenkins/generated/arm_jenkinsfile.groovy
index 2c64e9ab24..80eabf10c2 100644
--- a/ci/jenkins/generated/arm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/arm_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/cortexm_jenkinsfile.groovy 
b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
index 25846f5b4b..f063622d1f 100644
--- a/ci/jenkins/generated/cortexm_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cortexm_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/cpu_jenkinsfile.groovy 
b/ci/jenkins/generated/cpu_jenkinsfile.groovy
index c9e02ba287..72f887e2df 100644
--- a/ci/jenkins/generated/cpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/cpu_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/docker_jenkinsfile.groovy 
b/ci/jenkins/generated/docker_jenkinsfile.groovy
index 6735bd2321..eb74a41b39 100644
--- a/ci/jenkins/generated/docker_jenkinsfile.groovy
+++ b/ci/jenkins/generated/docker_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/gpu_jenkinsfile.groovy 
b/ci/jenkins/generated/gpu_jenkinsfile.groovy
index a5609697af..f32d2f9d9a 100644
--- a/ci/jenkins/generated/gpu_jenkinsfile.groovy
+++ b/ci/jenkins/generated/gpu_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests
+// to reduce ci time and resource cost
+// (DO NOT UPSTREAM TO MAIN)
+skip_ci = true
+
 if (skip_ci) {
   // Don't rebuild when skipping CI
   rebuild_docker_images = false
diff --git a/ci/jenkins/generated/hexagon_jenkinsfile.groovy 
b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
index c2f39a0d08..1303d305c4 100644
--- a/ci/jenkins/generated/hexagon_jenkinsfile.groovy
+++ b/ci/jenkins/generated/hexagon_jenkinsfile.groovy
@@ -458,6 +458,11 @@ def prepare() {
   label: 'Check for any docker changes',
 )
 
+// unity: Skip less relevant tests

[tvm] branch unity-staging created (now 0f7c9b45fb)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch unity-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git


  at 0f7c9b45fb [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)

This branch includes the following new commits:

 new 0f7c9b45fb [Unity][CI] Unity specific jenkins setup (do not upstream 
to main)

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[GitHub] [tvm] github-actions[bot] commented on pull request #13751: [microTVM] Mitigate security vulnerability CVE-2007-4559

2023-02-02 Thread via GitHub


github-actions[bot] commented on PR #13751:
URL: https://github.com/apache/tvm/pull/13751#issuecomment-1414347901

   Failed to re-run CI in https://github.com/apache/tvm/actions/runs/4078496473
   
   
   
   ```
   Traceback (most recent call last):
 File "ci/scripts/github/github_tvmbot.py", line 594, in comment_failure
   raise item
 File "ci/scripts/github/github_tvmbot.py", line 700, in run
   pr.rerun_jenkins_ci()
 File "ci/scripts/github/github_tvmbot.py", line 553, in rerun_jenkins_ci
   post(url, auth=("tvm-bot", TVM_BOT_JENKINS_TOKEN))
 File "/home/runner/work/tvm/tvm/ci/scripts/jenkins/git_utils.py", line 53, 
in post
   with request.urlopen(req, data) as response:
 File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
   return opener.open(url, data, timeout)
 File "/usr/lib/python3.8/urllib/request.py", line 531, in open
   response = meth(req, response)
 File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
   response = self.parent.error(
 File "/usr/lib/python3.8/urllib/request.py", line 569, in error
   return self._call_chain(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
   result = func(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 649, in 
http_error_default
   raise HTTPError(req.full_url, code, msg, hdrs, fp)
   urllib.error.HTTPError: HTTP Error 500: Server Error
   
   ```
   
   with response
   
   ```
   
 
 
   
   
   
   [garbled Jenkins HTML error page; only the readable text is kept]
   Jenkins | log in | Dashboard
   Oops! A problem occurred while processing the request.
   Logging ID=dac9674d-64f1-4693-ae93-946ccbb29959
   REST API | Jenkins 2.361.2
   ```
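
   For context, the failing call chain above ends in the `post` helper from
   `ci/scripts/jenkins/git_utils.py`, which wraps `urllib.request` with basic
   auth. A rough sketch of such a helper, extended with a retry on transient
   5xx responses, is below. This is a hypothetical illustration, not the
   repository's actual code: the real helper does not retry, and the
   `opener` parameter exists here only to make the retry logic testable.

   ```python
   import base64
   from urllib import error, request


   def basic_auth_header(user: str, token: str) -> str:
       # HTTP Basic auth: base64 of "user:token".
       cred = base64.b64encode(f"{user}:{token}".encode()).decode()
       return f"Basic {cred}"


   def post(url, auth=None, data=None, retries=3, opener=request.urlopen):
       """POST `data` to `url`, retrying transient HTTP 5xx errors."""
       req = request.Request(url, method="POST")
       if auth is not None:
           req.add_header("Authorization", basic_auth_header(*auth))
       last_err = None
       for _ in range(retries):
           try:
               with opener(req, data) as response:
                   return response.read()
           except error.HTTPError as err:
               if err.code < 500:
                   raise  # client errors are not transient; fail fast
               last_err = err  # 5xx: retry
       raise last_err
   ```

   With a helper like this, the HTTP 500 from Jenkins shown above would be
   retried a few times before the bot gives up and reports the failure.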
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] gromero commented on pull request #13751: [microTVM] Mitigate security vulnerability CVE-2007-4559

2023-02-02 Thread via GitHub


gromero commented on PR #13751:
URL: https://github.com/apache/tvm/pull/13751#issuecomment-1414347548

   @tvm-bot rerun





[GitHub] [tvm] mkatanbaf commented on a diff in pull request #13885: [microTVM] Refactor required external functions in CRT to platform-template.c

2023-02-02 Thread via GitHub


mkatanbaf commented on code in PR #13885:
URL: https://github.com/apache/tvm/pull/13885#discussion_r1094911794


##
apps/microtvm/zephyr/template_project/microtvm_api_server.py:
##
@@ -413,6 +418,9 @@ def _create_prj_conf(
 "CONFIG_UART_INTERRUPT_DRIVEN=y\n"
 "\n"
 )
+if config_led and not self._is_qemu(board, False):

Review Comment:
   I believe that, similar to QEMU, the FVP should be excluded too.



##
apps/microtvm/zephyr/template_project/src/host_driven/platform.h:
##
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_APPS_MICROTVM_ZEPHYR_HOST_DRIVEN_PLATFORM_H_
+#define TVM_APPS_MICROTVM_ZEPHYR_HOST_DRIVEN_PLATFORM_H_
+
+#include 
+
+#ifdef CONFIG_LED

Review Comment:
   do we need to place the prototypes of the functions defined in platform.c 
here?



##
apps/microtvm/zephyr/template_project/src/mlperftiny/platform.cc:
##
@@ -162,3 +172,33 @@ int8_t QuantizeFloatToInt8(float value, float scale, int 
zero_point) {
   }
   return (int8_t)(result);
 }
+
+// UART read.
+char TVMPlatformUartRxRead() {

Review Comment:
   Why do some functions in this file have `__attribute__((weak))` and some
don't? What is the logic behind applying it?



##
src/runtime/crt/host/platform-template.h:
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/runtime/crt/host/platform-template.h
+ * \brief Template for CRT platform configuration.
+ */
+#ifndef TVM_RUNTIME_CRT_HOST_PLATFORM_TEMPLATE_H_
+#define TVM_RUNTIME_CRT_HOST_PLATFORM_TEMPLATE_H_
+
+#include 
+
+MemoryManagerInterface* memory_manager;
+

Review Comment:
   same comment about the function prototypes



##
apps/microtvm/zephyr/template_project/src/aot_standalone_demo/platform.h:
##
@@ -17,12 +17,13 @@
  * under the License.
  */
 
-#ifndef TVM_APPS_MICROTVM_ZEPHYR_AOT_STANDALONE_DEMO_ZEPHYR_UART_H_
-#define TVM_APPS_MICROTVM_ZEPHYR_AOT_STANDALONE_DEMO_ZEPHYR_UART_H_
+#ifndef TVM_APPS_MICROTVM_ZEPHYR_AOT_STANDALONE_DEMO_PLATFORM_H_
+#define TVM_APPS_MICROTVM_ZEPHYR_AOT_STANDALONE_DEMO_PLATFORM_H_
 
 #include 
+#include 
 
-// Used to read data from the UART.
+extern tvm_workspace_t app_workspace;

Review Comment:
   do we need to place the prototypes of the functions defined in platform.c 
here?



##
apps/microtvm/zephyr/template_project/src/host_driven/platform.c:
##
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "tvm/platform.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 

[tvm] branch unity updated (6c5793124f -> 5c38bef497)

2023-02-02 Thread driazati
This is an automated email from the ASF dual-hosted git repository.

driazati pushed a change to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


omit 6c5793124f [Unity] Relax expressions and types (#13901)
 new 5c38bef497 [Unity] Relax expressions and types (#13901)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (6c5793124f)
\
 N -- N -- N   refs/heads/unity (5c38bef497)



Summary of changes:



[tvm] 01/01: [Unity] Relax expressions and types (#13901)

2023-02-02 Thread driazati
This is an automated email from the ASF dual-hosted git repository.

driazati pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 5c38bef4972556092cfceb248a0faac0548e8b37
Author: Yuchen Jin 
AuthorDate: Thu Feb 2 09:35:41 2023 -0500

[Unity] Relax expressions and types (#13901)
---
 include/tvm/ir/expr.h|8 +
 include/tvm/relax/expr.h | 1003 ++
 include/tvm/relax/type.h |  166 
 3 files changed, 1177 insertions(+)

diff --git a/include/tvm/ir/expr.h b/include/tvm/ir/expr.h
index c8531c8846..d4ba628d36 100644
--- a/include/tvm/ir/expr.h
+++ b/include/tvm/ir/expr.h
@@ -367,6 +367,14 @@ class RelayExprNode : public BaseExprNode {
*   This value is discarded during serialization.
*/
   mutable Type checked_type_ = Type(nullptr);
+
+  /*!
+   * \brief Stores the result of structure information of the
+   *expression that encapsulate both static shape and
+   *runtime information such as shape.
+   */
+  mutable Optional struct_info_ = Optional();
+
   /*!
* \return The checked_type
*/
diff --git a/include/tvm/relax/expr.h b/include/tvm/relax/expr.h
new file mode 100644
index 00..8154b1dd86
--- /dev/null
+++ b/include/tvm/relax/expr.h
@@ -0,0 +1,1003 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#ifndef TVM_RELAX_EXPR_H_
+#define TVM_RELAX_EXPR_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relax {
+
+using Expr = RelayExpr;
+using ExprNode = RelayExprNode;
+using relay::Id;
+
+/*!
+ * \brief Base type of all structure information.
+ *
+ * StructInfo stores possible structure information
+ * deduced during compile-time. It encapsulates
+ * both static type and runtime information such
+ * as shape.
+ *
+ * StructInfo of each non-primitive Expr can be
+ * deduced during compilation in a "best-effort" manner.
+ *
+ * When struct_info appears in function parameter and return
+ * signatures. They will imply a runtime check that matches
+ * the structure information with the value.
+ *
+ * When it appears in Expr, they follow "assume-semantics",
+ * which means the compiler will take the deduced information as it is
+ * and only do best effort prove and checks.
+ *
+ * Each struct info can be uniquely erased to a static-type.
+ * The compiler will still compile the code(with less information)
+ * when we erase to the static type.
+ *
+ * If an StructInfo contains an Expr field, then that field
+ * must be normalized already through NormalizeArg.
+ * This invariant will be checked in constructors
+ * and help us to simplify our assumption
+ * during struct info deduction.
+ */
+class StructInfoNode : public Object {
+ public:
+  /*!
+   * \brief Span that points to the original source code.
+   *Reserved debug information.
+   */
+  mutable Span span;
+
+  static constexpr const char* _type_key = "StructInfo";
+  static constexpr const bool _type_has_method_sequal_reduce = true;
+  static constexpr const bool _type_has_method_shash_reduce = true;
+  static constexpr const uint32_t _type_child_slots = 5;
+  TVM_DECLARE_BASE_OBJECT_INFO(StructInfoNode, Object);
+};
+
+/*!
+ * \brief Managed reference to StructInfoNode.
+ * \sa StructInfoNode
+ */
+class StructInfo : public ObjectRef {
+ public:
+  TVM_DEFINE_OBJECT_REF_METHODS(StructInfo, ObjectRef, StructInfoNode);
+};
+
+/*!
+ * \brief Call corresponds to callable invocation.
+ *  Corresponds to operation in computational graph terminology.
+ */
+class CallNode : public ExprNode {
+ public:
+  /*!
+   * \brief The operator(function) being invoked
+   *
+   *  - It can be tvm::Op which corresponds to the primitive operators.
+   *  - It can also be user defined functions (Function, GlobalVar, Var).
+   */
+  Expr op;
+
+  /*! \brief The arguments(inputs) of the call */
+  tvm::Array args;
+
+  /*! \brief The additional attributes */
+  Attrs attrs;
+
+  /*!
+   * \brief The structure info arguments of a CallNode.
+   * sinfo_args is designed to be non-empty only for intrinsic op (e.g.,
+   * call_tir, call_builtin_with_ctx, etc.) and calls to 

[tvm] branch unity updated (34d29e32ce -> 6c5793124f)

2023-02-02 Thread driazati
This is an automated email from the ASF dual-hosted git repository.

driazati pushed a change to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


omit 34d29e32ce [Unity] Relax expressions and types (#13901)
 new 6c5793124f [Unity] Relax expressions and types (#13901)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (34d29e32ce)
\
 N -- N -- N   refs/heads/unity (6c5793124f)



Summary of changes:



[tvm] 01/01: [Unity] Relax expressions and types (#13901)

2023-02-02 Thread driazati
This is an automated email from the ASF dual-hosted git repository.

driazati pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 6c5793124f836128fab716787e2831dba883241d
Author: Yuchen Jin 
AuthorDate: Thu Feb 2 09:35:41 2023 -0500

[Unity] Relax expressions and types (#13901)
---
 include/tvm/ir/expr.h|8 +
 include/tvm/relax/expr.h | 1003 ++
 include/tvm/relax/type.h |  166 
 3 files changed, 1177 insertions(+)


[GitHub] [tvm] gromero commented on pull request #13751: [microTVM] Mitigate security vulnerability CVE-2007-4559

2023-02-02 Thread via GitHub


gromero commented on PR #13751:
URL: https://github.com/apache/tvm/pull/13751#issuecomment-1414281660

   @tvm-bot retrigger
   





[GitHub] [tvm] driazati commented on a diff in pull request #13902: [Unity][CI] setup CI (do not upstream to main)

2023-02-02 Thread via GitHub


driazati commented on code in PR #13902:
URL: https://github.com/apache/tvm/pull/13902#discussion_r1094983292


##
tests/scripts/task_convert_scripts_to_python.sh:
##
@@ -17,6 +17,9 @@
 # under the License.
 set -euxo pipefail
 
+# SKIP: DO NOT UPSTREAM TO MAIN
+exit 0

Review Comment:
   Can you move this logic into the Jenkins scripts themselves before any
`node(...)` calls are made? Doing it here still allocates machines, runs the
Docker setup, etc., which is not a huge cost, but it is wasteful to do all of
that only to end up doing nothing.






[GitHub] [tvm] manupak commented on issue #13586: [Release] v0.11.0 release schedule

2023-02-02 Thread via GitHub


manupak commented on issue #13586:
URL: https://github.com/apache/tvm/issues/13586#issuecomment-1414229473

   Looks like the GH branch protection is kicking in, kind of like: 
https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-pull-request-reviews-before-merging
   
   Alternatively, you could do a PR to the branch (and merge it)?





[GitHub] [tvm] tvm-bot commented on pull request #13907: [Unity][IR] First-class StructInfo

2023-02-02 Thread via GitHub


tvm-bot commented on PR #13907:
URL: https://github.com/apache/tvm/pull/13907#issuecomment-1414197257

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines 
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please 
request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-ing them in a comment.
   
   
* No users to tag found in teams: `unity`, `ir`. See 
[#10317](https://github.com/apache/tvm/issues/10317) for 
details.
   
   Generated by 
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] YuchenJin opened a new pull request, #13907: [Unity][IR] First-class StructInfo

2023-02-02 Thread via GitHub


YuchenJin opened a new pull request, #13907:
URL: https://github.com/apache/tvm/pull/13907

   Relax tracks structural information (such as tensor shape) about the 
values in Relax via `StructInfo`.
   
   Original authors are:
   Co-Authored-by: Siyuan Feng 
[hzfen...@sjtu.edu.cn](mailto:hzfen...@sjtu.edu.cn)
   Co-Authored-by: Ruihang Lai [ruiha...@cs.cmu.edu](mailto:ruiha...@cs.cmu.edu)
   Co-Authored-by: Tianqi Chen 
[tianqi.tc...@gmail.com](mailto:tianqi.tc...@gmail.com)





[GitHub] [tvm] driazati commented on pull request #13903: [ci] Disable GPU unit tests

2023-02-02 Thread via GitHub


driazati commented on PR #13903:
URL: https://github.com/apache/tvm/pull/13903#issuecomment-1414164474

   > @driazati are there any plans to allow other contributors to provide 
nodes? I remember I asked about this a long time ago but it wasn't a priority, 
now it might be worth re-visiting?
   
   On the technical side this isn't too complicated; it's mostly just a matter 
of making sure the Jenkins head can communicate with the machines and 
installing the client daemon. I'd be open to trying it if the machines are 
there, but we'd probably also need some level of commitment to support those 
machines, to make sure they're at least as reliable as our current runners.
   
   
   > Just a quick question. Can you clarify what are the tests being disabled? 
I imagine it is the diff between `task_python_frontend_lite.sh` and 
`task_python_frontend.sh`, but can you clarify what in practice was running 
before, and it is not running now?
   
   There's that and some more broad disabling of stuff in 
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm-gpu/detail/main/244/pipeline
 vs 
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm-gpu/detail/PR-13903/1/pipeline/101





[GitHub] [tvm] leandron commented on issue #13586: [Release] v0.11.0 release schedule

2023-02-02 Thread via GitHub


leandron commented on issue #13586:
URL: https://github.com/apache/tvm/issues/13586#issuecomment-1414158464

   > The release branches are protected from force pushes / deletes by GitHub, 
but cherry-picking shouldn't require rewriting history right? Are you doing 
something like this?
   > 
   > ```shell
   > git fetch origin
   > git checkout v0.11.0
   > git cherry-pick 0eabbac2160a8a630e1994969f664ccf6233fc7e
   > git push origin
   > ```
   
   I was just trying to produce the tarball from the actual `v0.11.0` branch with 
that cherry-picked commit, but I'm probably missing something:
   
   ```
   % git remote -v
   origin   g...@github.com:leandron/tvm.git (fetch)
   origin   g...@github.com:leandron/tvm.git (push)
   upstream g...@github.com:apache/tvm.git (fetch)
   upstream g...@github.com:apache/tvm.git (push)
   
   % git fetch upstream
   % git checkout upstream/v0.11.0
   
   % git reset --hard upstream/v0.11.0   
   HEAD is now at 1d9863470 [TIR] Fix PlanAndUpdateBufferAllocationLocation not 
visiting constant buffer (#13605)
   
   % git cherry-pick 0eabbac2160a8a630e1994969f664ccf6233fc7e
   [v0.11.0 2f999d62e] [BugFix][UMA] Protect target registration (#13624)
Author: Balint Cristian 
Date: Fri Dec 16 15:23:26 2022 +0200
4 files changed, 42 insertions(+), 16 deletions(-)
   
   % git push upstream v0.11.0
   Enumerating objects: 42, done.
   Counting objects: 100% (42/42), done.
   Delta compression using up to 8 threads
   Compressing objects: 100% (19/19), done.
   Writing objects: 100% (23/23), 3.93 KiB | 3.93 MiB/s, done.
   Total 23 (delta 18), reused 5 (delta 4), pack-reused 0
   remote: Resolving deltas: 100% (18/18), completed with 18 local objects.
   remote: error: GH006: Protected branch update failed for refs/heads/v0.11.0.
   remote: error: At least 1 approving review is required by reviewers with 
write access. Commits must have valid signatures.
   To github.com:apache/tvm.git
! [remote rejected] v0.11.0 -> v0.11.0 (protected branch hook declined)
   error: failed to push some refs to 'github.com:apache/tvm.git'
   % 
   ```





[GitHub] [tvm] YuchenJin commented on pull request #13902: [Unity][CI] setup CI (do not upstream to main)

2023-02-02 Thread via GitHub


YuchenJin commented on PR #13902:
URL: https://github.com/apache/tvm/pull/13902#issuecomment-1414150624

   > @YuchenJin because you're always launching the instances and then exiting, 
you'd incur the minimum charge of 60 seconds for the instances you're running 
even if you don't use any of the compute - might be better to turn off the 
builds in github somewhere?
   
   Thanks for the suggestion! The main reason for adding `exit 0` to a bunch of 
scripts is to avoid complications when rebasing against the main branch; it also 
means we can reuse the current TVM Jenkins CI setup without modifying Jenkins.
   Would love to hear others' thoughts: @driazati @areusch





[GitHub] [tvm] cconvey commented on a diff in pull request #13906: [tir] fix buffer_decl buffer allocation

2023-02-02 Thread via GitHub


cconvey commented on code in PR #13906:
URL: https://github.com/apache/tvm/pull/13906#discussion_r1094866037


##
tests/python/unittest/test_tir_transform_plan_update_buffer_allocation_location.py:
##
@@ -416,5 +416,26 @@ def test_allocate_const_after_tensorize():
 _ = seq(sch.mod)
 
 
+def test_buffer_decl_allocation():
+"""
+Confirm a fix to
+`src/tir/transforms/plan_update_buffer_allocation_location.cc`
+in which a declared buffer was erroneously duplicated, resulting in a
+TIR-compilation failure.
+"""
+
+@tvm.script.ir_module
+class IRMod:
+@T.prim_func
+def func(a: T.Ptr[T.float32]):
+T.func_attr({"global_symbol": "main", "tir.noalias": True})
+A = T.buffer_decl(1, "float32", data=a)
+for i in range(1):
+A[i] = 0
+
+ir_mod = IRMod
+built_mod = tvm.build(ir_mod)

Review Comment:
   Thanks, that's an excellent suggestion.  Fixed (IMHO) in the newest version 
of the PR.






[GitHub] [tvm] github-actions[bot] commented on pull request #13883: [Hexagon] Improve cache management strategy for HexagonBuffer

2023-02-02 Thread via GitHub


github-actions[bot] commented on PR #13883:
URL: https://github.com/apache/tvm/pull/13883#issuecomment-1414114865

   Failed to re-run CI in https://github.com/apache/tvm/actions/runs/4077070226
   
   
   
   ```
   Traceback (most recent call last):
 File "ci/scripts/github/github_tvmbot.py", line 594, in comment_failure
   raise item
 File "ci/scripts/github/github_tvmbot.py", line 700, in run
   pr.rerun_jenkins_ci()
 File "ci/scripts/github/github_tvmbot.py", line 553, in rerun_jenkins_ci
   post(url, auth=("tvm-bot", TVM_BOT_JENKINS_TOKEN))
 File "/home/runner/work/tvm/tvm/ci/scripts/jenkins/git_utils.py", line 53, 
in post
   with request.urlopen(req, data) as response:
 File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
   return opener.open(url, data, timeout)
 File "/usr/lib/python3.8/urllib/request.py", line 531, in open
   response = meth(req, response)
 File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
   response = self.parent.error(
 File "/usr/lib/python3.8/urllib/request.py", line 569, in error
   return self._call_chain(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
   result = func(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 649, in 
http_error_default
   raise HTTPError(req.full_url, code, msg, hdrs, fp)
   urllib.error.HTTPError: HTTP Error 404: Not Found
   
   ```
   
   with response
   
   ```
   
   
   
   Error 404 Not Found
   
   HTTP ERROR 404 Not Found
   
   URI:     /job/tvm-arm/job/PR-13883/buildWithParameters
   STATUS:  404
   MESSAGE: Not Found
   SERVLET: Stapler
   
   Powered by Jetty:// 10.0.11
   
   
   
   
   ```
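The 404 above surfaces as a raw urllib traceback. A small stdlib-only sketch of collapsing such an `HTTPError` into a one-line diagnostic; the helper name and shortened URL are illustrative, not part of tvm-bot:

```python
from urllib.error import HTTPError

def describe_http_failure(err: HTTPError) -> str:
    # Collapse an HTTPError into a one-line diagnostic instead of a raw traceback.
    # HTTPError exposes the status via .code, the message via .reason,
    # and the request URL via .filename.
    return f"HTTP {err.code} {err.reason} for {err.filename}"

# Reconstruct the failure seen above (URL shortened for the example).
err = HTTPError("https://ci.example/buildWithParameters", 404, "Not Found", None, None)
print(describe_http_failure(err))
# HTTP 404 Not Found for https://ci.example/buildWithParameters
```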
   
   





[GitHub] [tvm] adstraw commented on pull request #13883: [Hexagon] Improve cache management strategy for HexagonBuffer

2023-02-02 Thread via GitHub


adstraw commented on PR #13883:
URL: https://github.com/apache/tvm/pull/13883#issuecomment-1414114445

   @tvm-bot rerun





[GitHub] [tvm] Hzfengsy commented on a diff in pull request #13906: [tir] fix buffer_decl buffer allocation

2023-02-02 Thread via GitHub


Hzfengsy commented on code in PR #13906:
URL: https://github.com/apache/tvm/pull/13906#discussion_r1094699143


##
tests/python/unittest/test_tir_transform_plan_update_buffer_allocation_location.py:
##
@@ -416,5 +416,26 @@ def test_allocate_const_after_tensorize():
 _ = seq(sch.mod)
 
 
+def test_buffer_decl_allocation():
+"""
+Confirm a fix to
+`src/tir/transforms/plan_update_buffer_allocation_location.cc`
+in which a declared buffer was erroneously duplicated, resulting in a
+TIR-compilation failure.
+"""
+
+@tvm.script.ir_module
+class IRMod:
+@T.prim_func
+def func(a: T.Ptr[T.float32]):
+T.func_attr({"global_symbol": "main", "tir.noalias": True})
+A = T.buffer_decl(1, "float32", data=a)
+for i in range(1):
+A[i] = 0
+
+ir_mod = IRMod
+built_mod = tvm.build(ir_mod)

Review Comment:
   It would be great to make the test "unit" enough, i.e., only test the 
related pass instead of a whole build workflow.
   
   One of the common practices is to test the IRModule before and after the 
pass. Please see the test case `test_1D_cascade_op_rolling_buffer()` in the 
same file. 






[GitHub] [tvm] Hzfengsy commented on a diff in pull request #13906: [tir] fix buffer_decl buffer allocation

2023-02-02 Thread via GitHub


Hzfengsy commented on code in PR #13906:
URL: https://github.com/apache/tvm/pull/13906#discussion_r1094665278


##
src/tir/transforms/plan_update_buffer_allocation_location.cc:
##
@@ -111,6 +112,12 @@ class BufferAllocationLocator : public StmtExprMutator {
 collector(func->body);
 unmanaged_allocations_ = collector.unmanaged_allocations;
 
+for (Var param : func->params) {

Review Comment:
   ```suggestion
   for (const Var& param : func->params) {
   ```



##
tests/python/unittest/test_buffer_decl.py:
##
@@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import tvm

Review Comment:
   Could you please add a testcase in 
`tests/python/unittest/test_tir_transform_plan_update_buffer_allocation_location.py`
 as the code changes are in the related file?






[GitHub] [tvm] wangzy0327 commented on issue #13666: [Bug] rocm platform result are not correct

2023-02-02 Thread via GitHub


wangzy0327 commented on issue #13666:
URL: https://github.com/apache/tvm/issues/13666#issuecomment-1413906080

   > > I get the error on an AMD gfx908 device. The error is: ValueError: Cannot
   > > find global function tvm.contrib.miopen.conv2d.setup.
   > > How to fix it?
   >
   > What is your setting for the USE_MIOPEN configuration variable?
   
   @mvermeulen `set(USE_MIOPEN ON)`





[GitHub] [tvm] tvm-bot commented on pull request #13906: [tir] fix buffer_decl buffer allocation

2023-02-02 Thread via GitHub


tvm-bot commented on PR #13906:
URL: https://github.com/apache/tvm/pull/13906#issuecomment-1413902851

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines 
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please 
request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-ing them in a comment.
   
   
* cc @Hzfengsy, @junrushao, @quic-sanirudh, @shingjan See 
[#10317](https://github.com/apache/tvm/issues/10317) for 
details
   
   Generated by 
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] cconvey opened a new pull request, #13906: [tir] fix buffer_decl buffer allocation

2023-02-02 Thread via GitHub


cconvey opened a new pull request, #13906:
URL: https://github.com/apache/tvm/pull/13906

   - Fix a bug where `buffer_decl`, combined with certain usage patterns of the 
resulting buffer, causes a TVM-internal assert failure during TIR compilation.





[GitHub] [tvm] mvermeulen commented on issue #13666: [Bug] rocm platform result are not correct

2023-02-02 Thread via GitHub


mvermeulen commented on issue #13666:
URL: https://github.com/apache/tvm/issues/13666#issuecomment-1413902505

   > I get the error on an AMD gfx908 device. The error is: ValueError: Cannot find
   > global function tvm.contrib.miopen.conv2d.setup.
   > How to fix it?
   
   What is your setting for the USE_MIOPEN configuration variable?





[GitHub] [tvm] gromero commented on issue #13856: [Bug] CMSIS-NN BYOC fails with Zephyr 3.2

2023-02-02 Thread via GitHub


gromero commented on issue #13856:
URL: https://github.com/apache/tvm/issues/13856#issuecomment-1413860335

   > > @Mousius haha ok, that explains all :-) well, not all exactly, I confess 
I'm still intrigued by the "impossible constraint" error. Of course the define 
`-DARM_MATH_AUTOVECTORIZE` avoids the inline asm in question, but I could not 
make sense why the constraint is impossible here. I'm wondering if it's due to 
a register allocation issue in this particular function where the inline asm 
is...
   > 
   > Update on this, it's the use of `-mfloat-abi=hard`; this leads me to think 
that the compiler is using up more of the registers which the `asm` block would 
be using, rather than passing in soft-float mode.
   
   @Mousius yeah, interesting. Also the `Te` constraint will force using just 
the even registers from `r0` to `r14`, according to this 
[doc](https://developer.arm.com/documentation/101754/0617/armclang-Reference/armclang-Inline-Assembler/Inline-assembly-constraint-strings/Constraint-codes-for-AArch32-state?lang=en).
 But I have neither looked at the generated asm code for that function nor 
checked exactly why that constraint is in place for those machine instructions. 
I realized also that if `r14` is removed from the clobber list, at least one 
more `Te` constraint is allowed (it does not throw the impossible-constraint 
error), which also points to a lack of general-purpose registers in the inline 
asm scope. I'm not sure if gcc autovectorization is kicking in and doing better 
than this inline asm.





[tvm] branch main updated: [TOPHUB] use keys as a keyword for searching of existing statistics (#13874)

2023-02-02 Thread echuraev
This is an automated email from the ASF dual-hosted git repository.

echuraev pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new ea34e6eb0b [TOPHUB] use keys as a keyword for searching of existing 
statistics (#13874)
ea34e6eb0b is described below

commit ea34e6eb0bd47b397a6c29b18b5ff23ef88f4998
Author: Andrey Malyshev 
AuthorDate: Thu Feb 2 16:43:05 2023 +0200

[TOPHUB] use keys as a keyword for searching of existing statistics (#13874)

* [TOPHUB] use keys as a keyword for searching of existing statistics

In the case of ARM we might not specify -device, and then llvm will be
used; but even in this case we can determine the proper statistics
filename, since the keys have the architecture defined. The same
situation holds for x86.

* Add test on target not having arm_cpu device

* minor fix, add comment

* Fix pylint

* Fix comment
---
 python/tvm/autotvm/tophub.py   | 10 ++
 tests/python/unittest/test_autotvm_dispatch_context.py | 16 
 2 files changed, 26 insertions(+)

diff --git a/python/tvm/autotvm/tophub.py b/python/tvm/autotvm/tophub.py
index f705d591e6..99dd312d87 100644
--- a/python/tvm/autotvm/tophub.py
+++ b/python/tvm/autotvm/tophub.py
@@ -106,10 +106,20 @@ def context(target, extra_files=None):
 if isinstance(tgt, str):
 tgt = Target(tgt)
 
+# The TOPHUB file names rely on Target's device or kind. Both these 
types of
+# information exist in Target.keys, but the rules for filling this field are 
not explicitly
+# defined, so we are afraid to rely only on Target.keys. At the same time 
Target.device
+# is filled only if device was pointed explicitly in target string, 
that is not mandatory
+# and in some cases we need to get information about device from 
Target.keys
+# In priority order we verify:
+# 1) Target.device
+# 2) Target.keys
+# 3) Target.kind
 possible_names = []
 device = tgt.attrs.get("device", "")
 if device != "":
 possible_names.append(_alias(device))
+possible_names.extend(tgt.keys)
 possible_names.append(tgt.kind.name)
 
 all_packages = list(PACKAGE_VERSION.keys())
diff --git a/tests/python/unittest/test_autotvm_dispatch_context.py 
b/tests/python/unittest/test_autotvm_dispatch_context.py
index 6ca062047f..ba75992128 100644
--- a/tests/python/unittest/test_autotvm_dispatch_context.py
+++ b/tests/python/unittest/test_autotvm_dispatch_context.py
@@ -19,6 +19,7 @@ The dispatcher can choose which template to use according
 to the parameters of workload"""
 
 from tvm import autotvm
+import tvm
 
 
 @autotvm.template("testing/dispatch_fallback")
@@ -31,5 +32,20 @@ def test_fallback():
 simple_template(2, 3)
 
 
+def test_tophub_kinds_match():
+def verify_arm_cpu(target):
+best_by_targetkey = autotvm.tophub.context(target).best_by_targetkey
+assert len(best_by_targetkey)
+found_arm_cpu = False
+for a, _ in best_by_targetkey:
+if "arm_cpu" in a:
+found_arm_cpu = True
+break
+assert found_arm_cpu
+
+verify_arm_cpu("llvm -device=arm_cpu -mtriple=aarch64-linux-gnu 
-mattr=+neon,+v8.2a,+dotprod")
+verify_arm_cpu("llvm -model=snapdragon835 -mtriple=arm64-linux-android 
-mattr=+neon")
+
+
 if __name__ == "__main__":
 test_fallback()



[GitHub] [tvm] echuraev merged pull request #13874: [TOPHUB] use keys as a keyword for searching of existing statistics

2023-02-02 Thread via GitHub


echuraev merged PR #13874:
URL: https://github.com/apache/tvm/pull/13874





[tvm] branch unity updated: [Unity] Relax expressions and types (#13901)

2023-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new 34d29e32ce [Unity] Relax expressions and types (#13901)
34d29e32ce is described below

commit 34d29e32ce6408ab4ac32d6a0670a0f8c842663f
Author: Yuchen Jin 
AuthorDate: Thu Feb 2 09:35:41 2023 -0500

[Unity] Relax expressions and types (#13901)
---
 include/tvm/ir/expr.h|8 +
 include/tvm/relax/expr.h | 1003 ++
 include/tvm/relax/type.h |  166 
 3 files changed, 1177 insertions(+)

diff --git a/include/tvm/ir/expr.h b/include/tvm/ir/expr.h
index c8531c8846..d4ba628d36 100644
--- a/include/tvm/ir/expr.h
+++ b/include/tvm/ir/expr.h
@@ -367,6 +367,14 @@ class RelayExprNode : public BaseExprNode {
*   This value is discarded during serialization.
*/
   mutable Type checked_type_ = Type(nullptr);
+
+  /*!
+   * \brief Stores the result of structure information of the
+   *expression that encapsulate both static shape and
+   *runtime information such as shape.
+   */
+  mutable Optional struct_info_ = Optional();
+
   /*!
* \return The checked_type
*/
diff --git a/include/tvm/relax/expr.h b/include/tvm/relax/expr.h
new file mode 100644
index 00..8154b1dd86
--- /dev/null
+++ b/include/tvm/relax/expr.h
@@ -0,0 +1,1003 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#ifndef TVM_RELAX_EXPR_H_
+#define TVM_RELAX_EXPR_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relax {
+
+using Expr = RelayExpr;
+using ExprNode = RelayExprNode;
+using relay::Id;
+
+/*!
+ * \brief Base type of all structure information.
+ *
+ * StructInfo stores possible structure information
+ * deduced during compile-time. It encapsulates
+ * both static type and runtime information such
+ * as shape.
+ *
+ * StructInfo of each non-primitive Expr can be
+ * deduced during compilation in a "best-effort" manner.
+ *
+ * When struct_info appears in function parameter and return
+ * signatures. They will imply a runtime check that matches
+ * the structure information with the value.
+ *
+ * When it appears in Expr, they follow "assume-semantics",
+ * which means the compiler will take the deduced information as it is
+ * and only do best effort prove and checks.
+ *
+ * Each struct info can be uniquely erased to a static-type.
+ * The compiler will still compile the code(with less information)
+ * when we erase to the static type.
+ *
+ * If an StructInfo contains an Expr field, then that field
+ * must be normalized already through NormalizeArg.
+ * This invariant will be checked in constructors
+ * and help us to simplify our assumption
+ * during struct info deduction.
+ */
+class StructInfoNode : public Object {
+ public:
+  /*!
+   * \brief Span that points to the original source code.
+   *Reserved debug information.
+   */
+  mutable Span span;
+
+  static constexpr const char* _type_key = "StructInfo";
+  static constexpr const bool _type_has_method_sequal_reduce = true;
+  static constexpr const bool _type_has_method_shash_reduce = true;
+  static constexpr const uint32_t _type_child_slots = 5;
+  TVM_DECLARE_BASE_OBJECT_INFO(StructInfoNode, Object);
+};
+
+/*!
+ * \brief Managed reference to StructInfoNode.
+ * \sa StructInfoNode
+ */
+class StructInfo : public ObjectRef {
+ public:
+  TVM_DEFINE_OBJECT_REF_METHODS(StructInfo, ObjectRef, StructInfoNode);
+};
+
+/*!
+ * \brief Call corresponds to callable invocation.
+ *  Corresponds to operation in computational graph terminology.
+ */
+class CallNode : public ExprNode {
+ public:
+  /*!
+   * \brief The operator(function) being invoked
+   *
+   *  - It can be tvm::Op which corresponds to the primitive operators.
+   *  - It can also be user defined functions (Function, GlobalVar, Var).
+   */
+  Expr op;
+
+  /*! \brief The arguments(inputs) of the call */
+  tvm::Array args;
+
+  /*! \brief The additional attributes */
+  Attrs attrs;
+
+  /*!
+   * \brief The structure 

[GitHub] [tvm] tqchen merged pull request #13901: [Unity] Relax expressions and types

2023-02-02 Thread via GitHub


tqchen merged PR #13901:
URL: https://github.com/apache/tvm/pull/13901





[tvm] branch main updated: [QNN][Relay][Topi] Add qnn.dense with weight layout (#13854)

2023-02-02 Thread echuraev
This is an automated email from the ASF dual-hosted git repository.

echuraev pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 37e1a6862c [QNN][Relay][Topi] Add qnn.dense with weight layout (#13854)
37e1a6862c is described below

commit 37e1a6862ca1bb77e33ca9c03e1365d50f468bd9
Author: ibsidorenko <98739392+ibsidore...@users.noreply.github.com>
AuthorDate: Thu Feb 2 17:21:00 2023 +0300

[QNN][Relay][Topi] Add qnn.dense with weight layout (#13854)

* [Hexagon][QNN] Improve performance of qnn.mul

This commit improves the performance of the qnn.mul operation without QNN
canonicalization.

* [QNN][Relay][Topi] Add qnn.dense with weight layout

This commit adds a new Relay operation "qnn.contrib_dense_pack" that supports
a different weight layout (nn.dense and qnn.dense do not support this
attribute). The new operation is a full analog of the "nn.contrib_dense_pack"
operation, but in QNN space.
---
 python/tvm/relay/qnn/op/_qnn.py|  11 +-
 python/tvm/relay/qnn/op/legalizations.py   | 134 -
 python/tvm/relay/qnn/op/qnn.py |  64 ++
 python/tvm/relay/qnn/strategy/generic.py   |   6 +
 python/tvm/relay/qnn/strategy/hexagon.py   |  18 ++
 python/tvm/topi/hexagon/qnn/__init__.py|   1 +
 .../hexagon/qnn/{__init__.py => dense_alter_op.py} |  26 +--
 python/tvm/topi/hexagon/qnn/nn.py  | 216 +
 python/tvm/topi/nn/qnn.py  |  19 ++
 src/relay/backend/te_compiler_cache.cc |  20 +-
 src/relay/op/nn/nn.h   |   5 +
 src/relay/qnn/op/dense.cc  | 105 +-
 .../contrib/test_arm_compute_lib/test_dense.py |   6 +-
 .../test_hexagon/test_wo_qnn_canonicalization.py   | 172 +++-
 tests/python/relay/test_pass_qnn_legalize.py   |  92 +
 15 files changed, 779 insertions(+), 116 deletions(-)

diff --git a/python/tvm/relay/qnn/op/_qnn.py b/python/tvm/relay/qnn/op/_qnn.py
index c9c4c86e8b..e2157a051a 100644
--- a/python/tvm/relay/qnn/op/_qnn.py
+++ b/python/tvm/relay/qnn/op/_qnn.py
@@ -93,7 +93,16 @@ def alter_op_layout_qnn_conv2d(attrs, inputs, tinfos, out_type):
 
 # qnn.dense
 register_strategy("qnn.dense", strategy.qnn_dense_strategy)
-register_pattern("qnn.dense", OpPattern.OUT_ELEMWISE_FUSABLE)
+
+
+@register_alter_op_layout("qnn.dense")
+def alter_op_layout_qnn_dense(attrs, inputs, tinfos, out_type):
+    """Alternate the layout of qnn.dense"""
+    return topi.nn.qnn_dense_alter_layout(attrs, inputs, tinfos, out_type)
+
+
+# qnn.contrib_dense_pack
+register_strategy("qnn.contrib_dense_pack", strategy.qnn_dense_pack_strategy)
 
 # qnn.batch_matmul
 register_strategy("qnn.batch_matmul", strategy.qnn_batch_matmul_strategy)
diff --git a/python/tvm/relay/qnn/op/legalizations.py b/python/tvm/relay/qnn/op/legalizations.py
index ef368a016e..53cb41c2fb 100644
--- a/python/tvm/relay/qnn/op/legalizations.py
+++ b/python/tvm/relay/qnn/op/legalizations.py
@@ -340,6 +340,62 @@ def helper_change_dtypes_to_int8(attrs, inputs, types, relay_op):
     )
 
 
+def helper_change_dtypes_to_uint8(attrs, inputs, types, relay_op):
+    """Helper function to change dtypes to uint8 x uint8.
+    Legalizes QNN dense op for Hexagon DSP. It supports fast u8 x u8 vrmpy instruction.
+
+    Converting from int8 to uint8 can be done in following manner:
+
+    Original equation
+      scale * (QA - zp_a)
+      scale * (QA + 128 - 128 - zp_a)
+      scale * ( (QA + 128) - (zp_a + 128))
+
+    Replacing QA + 128 with QA' and (zp_a + 128) with zp_a'
+    We get our new quantized uint8 tensor - scale * (QA' - zp_a')
+
+    Parameters
+    ----------
+    attrs : tvm.ir.Attrs
+        Attributes of current convolution
+    inputs : list of tvm.relay.Expr
+        The args of the Relay expr to be legalized
+    types : list of types
+        List of input and output types
+
+    Returns
+    -------
+    result : tvm.relay.Expr
+        The legalized expr
+    """
+    # Collect the dtypes.
+    data_dtype = types[0].dtype
+    kernel_dtype = types[1].dtype
+
+    # Do nothing since it is already uint8.
+    if data_dtype == "uint8" and kernel_dtype == "uint8":
+        return None
+
+    # Collect the input exprs.
+    data, kernel, input_zero_point, kernel_zero_point, input_scale, kernel_scale = inputs
+
+    # Shift input if necessary.
+    if data_dtype == "int8":
+        # Compute (QA + 128) and (zp_a + 128)
+        data, input_zero_point = _shift(data, input_zero_point, "uint8")
+
+    # Shift kernel if necessary.
+    if kernel_dtype == "int8":
+        # Compute (QA + 128) and (zp_a + 128)
+        kernel, kernel_zero_point = _shift(kernel, kernel_zero_point, "uint8")
+
+    # Call qnn.conv2d/qnn.dense with modified inputs and zero points.
+    new_attrs = 
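The int8-to-uint8 shift described in the docstring above can be checked numerically. The sketch below is plain Python, not TVM: `shift_to_uint8` is a hypothetical stand-in for the `_shift` helper referenced in the diff (which rewrites Relay expressions rather than raw values), used only to show that adding 128 to both the quantized values and the zero point leaves the dequantized result unchanged.

```python
def shift_to_uint8(q_values, zero_point):
    # Hypothetical stand-in for _shift: map int8 quantized values into the
    # uint8 range by adding 128 to both the values and the zero point.
    return [v + 128 for v in q_values], zero_point + 128

scale = 0.05
zp_a = 3
qa = [-128, -5, 0, 7, 127]  # int8 quantized values

qa_u8, zp_u8 = shift_to_uint8(qa, zp_a)

# scale * (QA - zp_a) == scale * ((QA + 128) - (zp_a + 128)):
# the +128 terms cancel, so dequantized values are identical.
dequant_before = [scale * (v - zp_a) for v in qa]
dequant_after = [scale * (v - zp_u8) for v in qa_u8]
assert dequant_before == dequant_after
assert all(0 <= v <= 255 for v in qa_u8)  # shifted values fit in uint8
```

Because the integer difference `QA - zp_a` is computed exactly in both forms, the equality holds exactly, not just approximately.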

[GitHub] [tvm] echuraev merged pull request #13854: [QNN][Relay][Topi] Add qnn.dense with weight layout

2023-02-02 Thread via GitHub


echuraev merged PR #13854:
URL: https://github.com/apache/tvm/pull/13854


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] Mousius commented on issue #13856: [Bug] CMSIS-NN BYOC fails with Zephyr 3.2

2023-02-02 Thread via GitHub


Mousius commented on issue #13856:
URL: https://github.com/apache/tvm/issues/13856#issuecomment-1413702466

   > @Mousius haha ok, that explains all :-) well, not all exactly, I confess I'm still intrigued by the "impossible constraint" error. Of course the define `-DARM_MATH_AUTOVECTORIZE` avoids the inline asm in question, but I could not make sense of why the constraint is impossible here. I'm wondering if it's due to a register allocation issue in this particular function where the inline asm is...
   
   Update on this: it's the use of `-mfloat-abi=hard`. This leads me to think that the compiler is using up more of the registers which the `asm` block would be using, rather than passing in soft-float mode 🤔





[GitHub] [tvm] gromero commented on issue #13856: [Bug] CMSIS-NN BYOC fails with Zephyr 3.2

2023-02-02 Thread via GitHub


gromero commented on issue #13856:
URL: https://github.com/apache/tvm/issues/13856#issuecomment-1413684942

   @Mousius haha ok, that explains all :-) well, not all exactly, I confess I'm still intrigued by the "impossible constraint" error. Of course the define `-DARM_MATH_AUTOVECTORIZE` avoids the inline asm in question, but I could not make sense of why the constraint is impossible here. I'm wondering if it's due to a register allocation issue in this particular function where the inline asm is... :-) 





[GitHub] [tvm] tvm-bot commented on pull request #13905: Pass the 'path' parameter passed to cmake_build to the task_build.py script.

2023-02-02 Thread via GitHub


tvm-bot commented on PR #13905:
URL: https://github.com/apache/tvm/pull/13905#issuecomment-1413663932

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines 
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please 
request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-ing them in a comment.
   
   
* No users to auto-tag found, no teams are specified in PR title. See [#10317](https://github.com/apache/tvm/issues/10317) for details
   
   Generated by 
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] alter-xp opened a new pull request, #13905: Pass the 'path' parameter passed to cmake_build to the task_build.py script.

2023-02-02 Thread via GitHub


alter-xp opened a new pull request, #13905:
URL: https://github.com/apache/tvm/pull/13905

   As the PR title states, the purpose of this PR is to pass the path parameter given to cmake_build to the task_build.py script. With this PR, we will be able to control compilation in different directories.
   
   
   
   @driazati @areusch 





[GitHub] [tvm] echuraev commented on pull request #13868: [OpenCL] Implement save/load pre-compiled programs

2023-02-02 Thread via GitHub


echuraev commented on PR #13868:
URL: https://github.com/apache/tvm/pull/13868#issuecomment-1413616618

   @tqchen could you please review this PR once again?





[GitHub] [tvm] echuraev commented on a diff in pull request #13837: [CLML][CODEGEN] CLML native codegen utility

2023-02-02 Thread via GitHub


echuraev commented on code in PR #13837:
URL: https://github.com/apache/tvm/pull/13837#discussion_r1094418397


##
apps/cpp_clml/CMakeLists.txt:
##
@@ -0,0 +1,59 @@
+cmake_minimum_required(VERSION 3.13)
+
+project(clml_run VERSION 2.0)
+
+if(NOT DEFINED CMAKE_TOOLCHAIN_FILE)
+  message( FATAL_ERROR "CMAKE_TOOLCHAIN_FILE Not set, forcing exit. Suggested 
value: {ANDROID_NDK_PATH}/build/cmake/android.toolchain.cmake." )
+endif(NOT DEFINED CMAKE_TOOLCHAIN_FILE)
+
+if(NOT DEFINED ANDROID_ABI)
+  message( FATAL_ERROR "ANDROID_ABI Not set, forcing exit. Suggested value(s): 
arm64-v8a (64), armeabi-v7a (32)" )
+endif(NOT DEFINED ANDROID_ABI)
+
+if(NOT DEFINED CLML_SDK)
+  message( FATAL_ERROR "CLML_SDK Not set, forcing exit." )
+endif(NOT DEFINED CLML_SDK)
+
+if (CMAKE_FIND_ROOT_PATH_MODE_LIBRARY STREQUAL "ONLY")
+  set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY BOTH)
+endif()
+
+find_library(CLML_LIBRARIES NAMES libOpenCL.so NO_DEFAULT_PATH PATHS 
${CLML_SDK}/lib ${CLML_SDK}/lib64)

Review Comment:
   Thank you for the clarification






[GitHub] [tvm] wrongtest-intellif opened a new pull request, #13904: [Doc] fix doc for tvm.te.const()

2023-02-02 Thread via GitHub


wrongtest-intellif opened a new pull request, #13904:
URL: https://github.com/apache/tvm/pull/13904

   Now `tvm.te.const()` is just an alias for `tvm.tir.const()`.





[GitHub] [tvm] tvm-bot commented on pull request #13904: [Doc] fix doc for tvm.te.const()

2023-02-02 Thread via GitHub


tvm-bot commented on PR #13904:
URL: https://github.com/apache/tvm/pull/13904#issuecomment-1413476581

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines 
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please 
request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-ing them in a comment.
   
   
* No users to tag found in teams: `doc`. See [#10317](https://github.com/apache/tvm/issues/10317) for details
   
   Generated by 
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] Mousius commented on issue #13856: [Bug] CMSIS-NN BYOC fails with Zephyr 3.2

2023-02-02 Thread via GitHub


Mousius commented on issue #13856:
URL: https://github.com/apache/tvm/issues/13856#issuecomment-1413466896

   @gromero the version is the latest one on the repo, and the commit is a workaround for the ICE: https://github.com/ARM-software/CMSIS-NN/commit/245089501eef18e8b638865c5afd6cdf2d03386f





[GitHub] [tvm] Mousius commented on pull request #13903: [ci] Disable GPU unit tests

2023-02-02 Thread via GitHub


Mousius commented on PR #13903:
URL: https://github.com/apache/tvm/pull/13903#issuecomment-1413437597

   @driazati are there any plans to allow other contributors to provide nodes? I remember I asked about this a long time ago but it wasn't a priority; now it might be worth revisiting?





[GitHub] [tvm] leandron commented on pull request #13903: [ci] Disable GPU unit tests

2023-02-02 Thread via GitHub


leandron commented on PR #13903:
URL: https://github.com/apache/tvm/pull/13903#issuecomment-1413435293

   Just a quick question: can you clarify which tests are being disabled? I imagine it is the diff between `task_python_frontend_lite.sh` and `task_python_frontend.sh`, but can you clarify what in practice was running before and is not running now?





[GitHub] [tvm] Mousius commented on pull request #13902: [Unity][CI] setup CI (do not upstream to main)

2023-02-02 Thread via GitHub


Mousius commented on PR #13902:
URL: https://github.com/apache/tvm/pull/13902#issuecomment-1413413979

   @YuchenJin because you're always launching the instances and then exiting, you'd incur the minimum charge of 60 seconds for the instances you're running even if you don't use any of the compute, so it might be better to turn off the builds in GitHub somewhere?





[GitHub] [tvm] srkreddy1238 commented on a diff in pull request #13837: [CLML][CODEGEN] CLML native codegen utility

2023-02-02 Thread via GitHub


srkreddy1238 commented on code in PR #13837:
URL: https://github.com/apache/tvm/pull/13837#discussion_r1094213482


##
apps/cpp_clml/CMakeLists.txt:
##
@@ -0,0 +1,59 @@
+cmake_minimum_required(VERSION 3.13)
+
+project(clml_run VERSION 2.0)
+
+if(NOT DEFINED CMAKE_TOOLCHAIN_FILE)
+  message( FATAL_ERROR "CMAKE_TOOLCHAIN_FILE Not set, forcing exit. Suggested 
value: {ANDROID_NDK_PATH}/build/cmake/android.toolchain.cmake." )
+endif(NOT DEFINED CMAKE_TOOLCHAIN_FILE)
+
+if(NOT DEFINED ANDROID_ABI)
+  message( FATAL_ERROR "ANDROID_ABI Not set, forcing exit. Suggested value(s): 
arm64-v8a (64), armeabi-v7a (32)" )
+endif(NOT DEFINED ANDROID_ABI)
+
+if(NOT DEFINED CLML_SDK)
+  message( FATAL_ERROR "CLML_SDK Not set, forcing exit." )
+endif(NOT DEFINED CLML_SDK)
+
+if (CMAKE_FIND_ROOT_PATH_MODE_LIBRARY STREQUAL "ONLY")
+  set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY BOTH)
+endif()
+
+find_library(CLML_LIBRARIES NAMES libOpenCL.so NO_DEFAULT_PATH PATHS 
${CLML_SDK}/lib ${CLML_SDK}/lib64)

Review Comment:
   CLML doesn't expose any .cmake file, and using TVM's `find_opencl` adds additional dependencies.
   I am trying to keep the build instructions of this tool very similar to the CLML SDK sample build.
   
   Also, given a proper download of the CLML SDK, it's guaranteed to have the OpenCL lib.


