[GitHub] [tvm] mdw-octoml commented on a change in pull request #7728: [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency)

2021-03-23 Thread GitBox


mdw-octoml commented on a change in pull request #7728:
URL: https://github.com/apache/tvm/pull/7728#discussion_r600146495



##
File path: docker/Dockerfile.ci_qemu
##
@@ -64,3 +64,7 @@ RUN bash /install/ubuntu_install_qemu.sh
 COPY install/ubuntu_install_zephyr.sh /install/ubuntu_install_zephyr.sh
 RUN bash /install/ubuntu_install_zephyr.sh
 ENV ZEPHYR_BASE=/opt/zephyrproject/zephyr
+
+# Install ONNX

Review comment:
   I'm surprised this is needed. Why not just install the ONNX pip package?
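
   A hedged aside: the frontend only needs the `onnx` pip package to be importable, since it guards the import itself. A minimal sketch of the pattern used in `python/tvm/relay/frontend/onnx.py`:

   ```python
   # Lazy-import guard used by the ONNX frontend; a plain `pip3 install onnx`
   # in the CI image is enough to satisfy this.
   try:
       import onnx
   except ImportError as e:
       raise ImportError("Unable to import onnx which is required {}".format(e))
   ```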
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [ONNX] Onnx node tests (#7720)

2021-03-23 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 8131364  [ONNX] Onnx node tests (#7720)
8131364 is described below

commit 813136401a11a49d6c15e6013c34dd822a5c4ff6
Author: Matthew Brookhart 
AuthorDate: Tue Mar 23 20:40:32 2021 -0600

[ONNX] Onnx node tests (#7720)

* WIP

* some fixes

* more fixes

* fix some conv_transpose tests

* fix out of bounds slice

* fix flatten import

* fix logsoftmax and softmax tests

* fix Error in Upsample

* fix onehot

* normalize errors

* fix gather with negative indices

* parameterize test

* skip unsupported tests

* clean up

* fix rebase

* fix lint

* add an error message when we find an un-identified tensor
---
 python/tvm/relay/frontend/onnx.py  | 133 +--
 python/tvm/relay/op/transform.py   |   7 +-
 tests/python/frontend/onnx/test_forward.py | 163 +
 3 files changed, 269 insertions(+), 34 deletions(-)

diff --git a/python/tvm/relay/frontend/onnx.py b/python/tvm/relay/frontend/onnx.py
index fab4ae8..d9fc2ff 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -103,10 +103,11 @@ def get_numpy(tensor_proto):
 def get_type(elem_type):
 """Converts onnx integer datatype to numpy datatype"""
 try:
-from onnx import TensorProto
+from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE
 except ImportError as e:
 raise ImportError("Unable to import onnx which is required {}".format(e))
-return TensorProto.DataType.Name(elem_type).lower()
+
+return str(TENSOR_TYPE_TO_NP_TYPE[elem_type])
 
 
 def get_info(info_proto):
@@ -157,7 +158,7 @@ def revert_caffe2_pad(pads):
 return pads
 
 
-def get_pad_pair(input1d, kernel1d, stride1d):
+def get_pad_pair(input1d, kernel1d, stride1d, mode):
 """infer pad size"""
 if input1d % stride1d == 0:
 pad = max(kernel1d - stride1d, 0)
@@ -165,6 +166,8 @@ def get_pad_pair(input1d, kernel1d, stride1d):
 pad = max(kernel1d - (input1d % stride1d), 0)
 pad_before = pad // 2
 pad_after = pad - pad_before
+if "LOWER" in mode:
+return [pad_after, pad_before]
 return [pad_before, pad_after]
 
 
@@ -280,9 +283,9 @@ class Pool(OnnxOpConverter):
 pad_tuple = []
 for axis in range(len(input_shape) - 2):
 axis_shape = input_shape[2 + axis]
-stride = attr["strides"][axis]
+stride = attr.get("strides", [1] * ndim)[axis]
 kernel = attr["kernel_shape"][axis]
-pad = get_pad_pair(axis_shape, kernel, stride)
+pad = get_pad_pair(axis_shape, kernel, stride, attr["auto_pad"])
 pad_tuple.append(pad)
 pad_tuple = tuple([val for pair in zip(*pad_tuple) for val in pair])
 attr["pads"] = pad_tuple
@@ -444,9 +447,15 @@ class ConvTranspose(OnnxOpConverter):
 @classmethod
 def _impl_v1(cls, inputs, attr, params):
 # get number of channels
-channels = infer_channels(inputs[1], True)
+out_type = infer_type(inputs[1])
+out_shapes = [get_const_tuple(out_type.checked_type.shape)]
+channels = out_shapes[0][1]
 attr["channels"] = channels
 groups = attr.get("group", 1)
+
+if "kernel_shape" not in attr:
+attr["kernel_shape"] = out_shapes[0][2:]
+
 attr["groups"] = groups
 # infer pads for auto_pad
 data = inputs[0]
@@ -528,13 +537,11 @@ class Gemm(OnnxOpConverter):
 if not transB:
 inputs[1] = _op.transpose(inputs[1], axes=(1, 0))
 inputs[0] = _op.nn.batch_flatten(inputs[0])
-
 if alpha != 1.0:
 inputs[0] *= _expr.const(alpha)
 out = _op.nn.dense(inputs[0], inputs[1], units=channels)
-
 if len(inputs) == 3:
-return _op.nn.bias_add(out, _expr.const(beta) * inputs[2])
+out = out + _expr.const(beta) * inputs[2]
 return out
 
 
@@ -618,7 +625,7 @@ class Mod(OnnxOpConverter):
 # Note: attr['fmod'] determines whether the operator should behave like np.fmod or np.mod.
 # attr['fmod'] == 0 will behave as np.mod and attr['fmod'] == 1 will force fmod treatment.
 # The relay equivalent of np.fmod is relay.mod and np.mod is relay.floor_mod
-if attr["fmod"] == 0:
+if attr.get("fmod", 0) == 0:
 op_name = "floor_mod"
 else:
 op_name = "mod"
@@ -849,12 +856,18 @@ class Flatten(OnnxOpConverter):
 @classmethod
 def _impl_v1(cls, inputs, attr, params):
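
For illustration, here is a minimal sketch of why the `get_type` hunk above matters (an assumption-laden aside, not part of the commit; it assumes an `onnx` version where `onnx.mapping` is available). The old enum-name path produces strings that numpy interprets as the wrong dtype, while the mapping table yields the exact numpy dtype:

```python
# Hedged illustration of the get_type() change above.
from onnx import TensorProto
from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE

# Old approach: enum name, lowercased. numpy reads "float" as float64.
print(TensorProto.DataType.Name(TensorProto.FLOAT).lower())  # -> "float"

# New approach: exact numpy dtype for the ONNX element type.
print(str(TENSOR_TYPE_TO_NP_TYPE[TensorProto.FLOAT]))  # -> "float32"
```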

[GitHub] [tvm] jroesch merged pull request #7720: [ONNX] Onnx node tests

2021-03-23 Thread GitBox


jroesch merged pull request #7720:
URL: https://github.com/apache/tvm/pull/7720


   






[GitHub] [tvm] masahi commented on issue #7730: [Bug] Missing broadcast_to before batch_matmul for CuBLAS

2021-03-23 Thread GitBox


masahi commented on issue #7730:
URL: https://github.com/apache/tvm/issues/7730#issuecomment-805406456


   I see, this is the same issue raised by @csullivan in 
https://github.com/apache/tvm/pull/6616#pullrequestreview-501380546
   
   What was the solution to this problem? @jwfromm @csullivan 






[GitHub] [tvm] comaniac opened a new issue #7730: [Bug] Missing broadcast_to before batch_matmul for CuBLAS

2021-03-23 Thread GitBox


comaniac opened a new issue #7730:
URL: https://github.com/apache/tvm/issues/7730


   The PR #7348 removed the explicit broadcast_to before batch_matmul because 
batch_matmul already supports implicit broadcasting. However, the CuBLAS 
implementation wasn't changed accordingly, which results in the failure of the 
following case:
   
   ```python
   import numpy as np
   
   import tvm
   from tvm import relay
   from tvm.contrib import graph_runtime
   
   sa = (4, 128, 768)
   sb = (1, 768, 768)
   
   a = relay.var("a", shape=sa)
   b = relay.var("b", shape=sb)
   c = relay.nn.batch_matmul(a, b)
   f = relay.Function([a, b], c)
   mod = tvm.ir.IRModule.from_expr(f)
   mod = relay.transform.InferType()(mod)
   
   with tvm.transform.PassContext(opt_level=3):
       lib = relay.build(mod, target="cuda")  # changing target to "cuda -libs=cublas" will fail
   
   ctx = tvm.gpu(0)
   m = graph_runtime.GraphModule(lib["default"](ctx))
   p = np.random.uniform(0, 1, sa)
   q = np.random.uniform(0, 1, sb)
   m.set_input("a", p)
   m.set_input("b", q)
   
   ftimer = m.module.time_evaluator("run", ctx, number=1, repeat=10)
   prof_res = np.array(ftimer().results) * 1000
   print(np.mean(prof_res))
   ```
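   
   A possible interim workaround (a hedged sketch, untested, not from the issue itself): materialize the broadcast explicitly so the CuBLAS path sees matching batch dimensions:
   
   ```python
   # Hypothetical workaround sketch: broadcast "b" up front instead of relying
   # on batch_matmul's implicit broadcasting; (4, 768, 768) follows sa above.
   b_bc = relay.broadcast_to(b, shape=(4, 768, 768))
   c = relay.nn.batch_matmul(a, b_bc)
   ```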
   
   cc @masahi @jwfromm 






[GitHub] [tvm] junrushao1994 commented on issue #7729: [Bug] Line backtrace results in stack overflow

2021-03-23 Thread GitBox


junrushao1994 commented on issue #7729:
URL: https://github.com/apache/tvm/issues/7729#issuecomment-805398055


   I had a fix for this last summer but it somehow got reverted by the recent 
libbacktrace PR...I will submit a PR to fix it later this week. Thanks for 
bringing it up!






[GitHub] [tvm] masahi commented on issue #7713: [BUG] Compile failed due to redundant cast.

2021-03-23 Thread GitBox


masahi commented on issue #7713:
URL: https://github.com/apache/tvm/issues/7713#issuecomment-805397022


   What LLVM version are you using? The recent PR introduced this change 
https://github.com/apache/tvm/pull/7617






[GitHub] [tvm] areusch commented on pull request #7653: Rename GraphRuntime to GraphExecutor

2021-03-23 Thread GitBox


areusch commented on pull request #7653:
URL: https://github.com/apache/tvm/pull/7653#issuecomment-805394924


   @zhiics I've added Python backwards-compat. Please let me know if you think 
this is adequate. If so, I think we are ready to merge.






[GitHub] [tvm] comaniac commented on a change in pull request #7642: [docs] Getting Started With TVM: Tensor Expressions

2021-03-23 Thread GitBox


comaniac commented on a change in pull request #7642:
URL: https://github.com/apache/tvm/pull/7642#discussion_r600077966



##
File path: tutorials/get_started/tensor_expr_get_started.py
##
@@ -255,41 +340,39 @@
 fadd1(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
+
 # Pack Everything into One Library
-# 
-# In the above example, we store the device and host code separately.
-# TVM also supports export everything as one shared library.
-# Under the hood, we pack the device modules into binary blobs and link
-# them together with the host code.
-# Currently we support packing of Metal, OpenCL and CUDA modules.
-#
+# 
+# In the above example, we store the device and host code separately. TVM also
+# supports exporting everything as one shared library. Under the hood, we pack
+# the device modules into binary blobs and link them together with the host
+# code. Currently we support packing of Metal, OpenCL and CUDA modules.
+
 fadd.export_library(temp.relpath("myadd_pack.so"))
 fadd2 = tvm.runtime.load_module(temp.relpath("myadd_pack.so"))
 fadd2(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
+
 # .. note:: Runtime API and Thread-Safety
 #
-#   The compiled modules of TVM do not depend on the TVM compiler.
-#   Instead, they only depend on a minimum runtime library.
-#   The TVM runtime library wraps the device drivers and provides
-#   thread-safe and device agnostic calls into the compiled functions.
-#
-#   This means that you can call the compiled TVM functions from any thread,
-#   on any GPUs.
+#   The compiled modules of TVM do not depend on the TVM compiler. Instead,
+#   they only depend on a minimum runtime library. The TVM runtime library
+#   wraps the device drivers and provides thread-safe and device agnostic calls
+#   into the compiled functions.
 #
+#   This means that you can call the compiled TVM functions from any thread, on
+#   any GPUs, provided that you have compiled the code for that GPU.
 
-##
+
 # Generate OpenCL Code

Review comment:
   Ok. I'm not strongly against it, but it just felt weird and confusing when I 
read this part.








[GitHub] [tvm] AndrewZhaoLuo commented on pull request #7722: [Topi, Relay] Add cumprod

2021-03-23 Thread GitBox


AndrewZhaoLuo commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-805391908


   @masahi PTAL






[GitHub] [tvm] comaniac opened a new issue #7729: [Bug] Line backtrace results in stack overflow

2021-03-23 Thread GitBox


comaniac opened a new issue #7729:
URL: https://github.com/apache/tvm/issues/7729


   After applying PR #7153, some of my models hit a stack overflow when running 
some passes. IIUC, this is because the containers use LOG, which occupies the 
stack. @tkonolige could you help fix this issue? Thanks.
   
   Also cc @tqchen @junrushao1994 @jroesch  






[GitHub] [tvm] electriclilies edited a comment on pull request #7710: [DATA] DataLoader -- a universal interface for wrapping datasets from other machine learning frameworks

2021-03-23 Thread GitBox


electriclilies edited a comment on pull request #7710:
URL: https://github.com/apache/tvm/pull/7710#issuecomment-805370118


   @tqchen @jroesch @mbrookhart @anijain2305 I put up an RFC, please take a 
look: 
   
https://discuss.tvm.apache.org/t/dataloader-an-api-to-wrap-datasets-from-other-machine-learning-frameworks/9498






[GitHub] [tvm] electriclilies edited a comment on pull request #7710: [DATA] DataLoader -- a universal interface for wrapping datasets from other machine learning frameworks

2021-03-23 Thread GitBox


electriclilies edited a comment on pull request #7710:
URL: https://github.com/apache/tvm/pull/7710#issuecomment-805370118


   @tqchen @jroesch @mbroohart @anijain2305 I put up an RFC, please take a 
look: 
   
https://discuss.tvm.apache.org/t/dataloader-an-api-to-wrap-datasets-from-other-machine-learning-frameworks/9498






[GitHub] [tvm] electriclilies commented on pull request #7710: [DATA] DataLoader -- a universal interface for wrapping datasets from other machine learning frameworks

2021-03-23 Thread GitBox


electriclilies commented on pull request #7710:
URL: https://github.com/apache/tvm/pull/7710#issuecomment-805370118


   @tqchen @jroesch @anijain2305 I put up an RFC, please take a look: 
   
https://discuss.tvm.apache.org/t/dataloader-an-api-to-wrap-datasets-from-other-machine-learning-frameworks/9498






[GitHub] [tvm] zhiics commented on pull request #7653: Rename GraphRuntime to GraphExecutor

2021-03-23 Thread GitBox


zhiics commented on pull request #7653:
URL: https://github.com/apache/tvm/pull/7653#issuecomment-805361926


   > @zhiics it's definitely a breaking change, i'm not opposed to the warning 
or some other way to notify downstream users. I don't know if code or forum or 
other is the best channel for that--i'm inclined to lean towards code in case 
forum or elsewhere also has snippets that may become outdated. i'm happy to do 
whatever you propose--which way do you prefer?
   > 
   > @tqchen @jroesch in case they have thoughts
   
   @areusch Thanks. I also prefer code for the same reason you mentioned.






[GitHub] [tvm] masahi commented on pull request #7722: [Topi, Relay] Add cumprod

2021-03-23 Thread GitBox


masahi commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-805359158


   You've got a merge issue, need to rebase.






[GitHub] [tvm] hogepodge commented on a change in pull request #7642: [docs] Getting Started With TVM: Tensor Expressions

2021-03-23 Thread GitBox


hogepodge commented on a change in pull request #7642:
URL: https://github.com/apache/tvm/pull/7642#discussion_r600034455



##
File path: tutorials/get_started/tensor_expr_get_started.py
##
@@ -302,18 +385,452 @@
 fadd_cl(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
-# Summary
-# ---
-# This tutorial provides a walk through of TVM workflow using
-# a vector add example. The general workflow is
+
+# .. note:: Code Specialization
+#
+#   As you may have noticed, the declarations of A, B and C all take the same
+#   shape argument, n. TVM will take advantage of this to pass only a single
+#   shape argument to the kernel, as you will find in the printed device code.
+#   This is one form of specialization.
+#
+#   On the host side, TVM will automatically generate check code that checks
+#   the constraints in the parameters. So if you pass arrays with different
+#   shapes into fadd, an error will be raised.
+#
+#   We can do more specializations. For example, we can write :code:`n =
+#   tvm.runtime.convert(1024)` instead of :code:`n = te.var("n")`, in the
+#   computation declaration. The generated function will only take vectors with
+#   length 1024.
+
+
+# .. note:: TE Scheduling Primitives
+#
+#   TVM includes a number of different scheduling primitives:
+#
+#   - split: splits a specified axis into two axes by the defined factor.
+#   - tile: splits a computation across two axes by the defined factors.
+#   - fuse: fuses two consecutive axes of one computation.
+#   - reorder: reorders the axes of a computation into a defined order.
+#   - bind: binds a computation to a specific thread, useful in GPU programming.
+#   - compute_at: by default, TVM will compute tensors at the outermost level
+# of the function, or the root. compute_at specifies that one
+# tensor should be computed at the first axis of computation for another
+# operator.
+#   - compute_inline: when marked inline, a computation will be expanded then
+# inserted into the address where the tensor is required.
+#   - compute_root: moves a computation to the outermost layer, or root, of the
+# function. This means the stage of the computation will be fully computed
+# before it moves on to the next stage.
+#
+#   A complete description of these primitives can be found in the
+# [Schedule Primitives](https://tvm.apache.org/docs/tutorials/language/schedule_primitives.html) docs page.
+
+
+# Example 2: Manually Optimizing Matrix Multiplication with TE
+# 
+#
+# Now we will consider a second, more advanced example, demonstrating how with
+# just 18 lines of python code TVM speeds up a common matrix multiplication operation by 18x.
+#
+# **Matrix multiplication is a compute intensive operation. There are two important optimizations for good CPU performance:**
+# 1. Increase the cache hit rate of memory access. Both complex numerical
+#computation and hot-spot memory access can be accelerated by a high cache hit
+#rate. This requires us to transform the original memory access pattern to a pattern that fits the cache policy.
+# 2. SIMD (single instruction, multiple data), also known as the vector processing
+#unit. On each cycle, instead of processing a single value, SIMD can process a small batch of data.
+#This requires us to transform the data access pattern in the loop
+#body into a uniform pattern so that the LLVM backend can lower it to SIMD.
+#
+# The techniques used in this tutorial are a subset of tricks mentioned in this
+# `repository `_. Some of them
+# have been applied by TVM abstraction automatically, but some of them cannot
+# be automatically applied due to TVM constraints.
+#
+# All the experiment results mentioned below were collected on a 2015 15" MacBook
+# equipped with an Intel i7-4770HQ CPU. The cache line size should be 64 bytes for
+# all x86 CPUs.
+
+
+# Preparation and Performance Baseline
+# 
+#
+# We begin by collecting performance data on the `numpy` implementation of
+# matrix multiplication.
+
+import tvm
+import tvm.testing
+from tvm import te
+import numpy
+import timeit
+
+# The size of the matrix
+# (M, K) x (K, N)
+# You are free to try out different shapes; sometimes TVM's optimization outperforms numpy with MKL.
+M = 1024
+K = 1024
+N = 1024
+
+# The default tensor data type in tvm
+dtype = "float32"
+
+# using Intel AVX2 (Advanced Vector 

[GitHub] [tvm] electriclilies closed pull request #7474: [WIP] [Quantization] Quantization in TVM

2021-03-23 Thread GitBox


electriclilies closed pull request #7474:
URL: https://github.com/apache/tvm/pull/7474


   






[GitHub] [tvm] hogepodge commented on a change in pull request #7642: [docs] Getting Started With TVM: Tensor Expressions

2021-03-23 Thread GitBox


hogepodge commented on a change in pull request #7642:
URL: https://github.com/apache/tvm/pull/7642#discussion_r600033842



##
File path: tutorials/get_started/tensor_expr_get_started.py
##
@@ -255,41 +340,39 @@
 fadd1(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
+
 # Pack Everything into One Library
-# 
-# In the above example, we store the device and host code separately.
-# TVM also supports export everything as one shared library.
-# Under the hood, we pack the device modules into binary blobs and link
-# them together with the host code.
-# Currently we support packing of Metal, OpenCL and CUDA modules.
-#
+# 
+# In the above example, we store the device and host code separately. TVM also
+# supports exporting everything as one shared library. Under the hood, we pack
+# the device modules into binary blobs and link them together with the host
+# code. Currently we support packing of Metal, OpenCL and CUDA modules.
+
 fadd.export_library(temp.relpath("myadd_pack.so"))
 fadd2 = tvm.runtime.load_module(temp.relpath("myadd_pack.so"))
 fadd2(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
+
 # .. note:: Runtime API and Thread-Safety
 #
-#   The compiled modules of TVM do not depend on the TVM compiler.
-#   Instead, they only depend on a minimum runtime library.
-#   The TVM runtime library wraps the device drivers and provides
-#   thread-safe and device agnostic calls into the compiled functions.
-#
-#   This means that you can call the compiled TVM functions from any thread,
-#   on any GPUs.
+#   The compiled modules of TVM do not depend on the TVM compiler. Instead,
+#   they only depend on a minimum runtime library. The TVM runtime library
+#   wraps the device drivers and provides thread-safe and device agnostic calls
+#   into the compiled functions.
 #
+#   This means that you can call the compiled TVM functions from any thread, on
+#   any GPUs, provided that you have compiled the code for that GPU.
 
-##
+
 # Generate OpenCL Code

Review comment:
   This came from an existing document, and since this is an incremental 
refactor my plan is to revisit and break out the CUDA and OpenCL sections into 
their own documents. I'm in agreement, but think that change may be out of the 
scope of the this PR.








[GitHub] [tvm] hogepodge commented on a change in pull request #7643: [docs] Getting Started with TVM: AutoTVM and Matrix Multiply

2021-03-23 Thread GitBox


hogepodge commented on a change in pull request #7643:
URL: https://github.com/apache/tvm/pull/7643#discussion_r600031303



##
File path: tutorials/get_started/autotvm_matmul.py
##
@@ -0,0 +1,377 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Optimizing Operators with Templates and AutoTVM
+===
+**Authors**:
+`Lianmin Zheng `_,
+`Chris Hoge `_
+
+In this tutorial, we will now show how the TVM Tensor Expression (TE) language
+can be used to write scheduling templates that can be searched by AutoTVM to
+find optimal configurations of scheduling variables. This process is called
+Auto-Tuning, and builds on TE to help automate the process of optimizing
+operations.
+
+This tutorial builds on the previous `tutorial on how to write a matrix
+multiplication using TE `.
+
+There are two steps in auto-tuning.
+
+- The first step is defining a search space.
+- The second step is running a search algorithm to explore through this space.
+
+In this tutorial, you can learn how to perform these two steps in TVM. The whole
+workflow is illustrated by a matrix multiplication example.
+
+.. note::
+  Note that this tutorial will not run on Windows or recent versions of macOS.
+  To get it to run, you will need to wrap the body of this tutorial in a
+  :code:`if __name__ == "__main__":` block.
+"""
+
+
+# Install dependencies
+# 
+# To use autotvm package in TVM, we need to install some extra dependencies.
+#
+# .. code-block:: bash
+#
+#   pip3 install --user psutil xgboost cloudpickle
+#
+# To make TVM run faster in tuning, it is recommended to use cython as FFI of
+# TVM. In the root directory of TVM, execute (change "3" to "2" if you use
+# python2):
+#
+# .. code-block:: bash
+#
+#   pip3 install --user cython
+#   sudo make cython3
+#
+# Now return to python code. Begin by importing the required packages.
+
+import logging
+import sys
+
+import numpy as np
+import tvm
+from tvm import te
+import tvm.testing
+
+# the module is called `autotvm`
+from tvm import autotvm
+
+
+# Basic Matrix Multiplication with TE
+# ---
+# Recall the basic implementation of matrix multiplication using TE. We write
+# it down here with a few changes. We will wrap the multiplication in a python
+# function definition.  For simplicity, we will focus our attention on a split
+# optimization, using a fixed value that defines the block size of the
+# reordering.
+
+
+def matmul_basic(N, L, M, dtype):
+
+A = te.placeholder((N, L), name="A", dtype=dtype)
+B = te.placeholder((L, M), name="B", dtype=dtype)
+
+k = te.reduce_axis((0, L), name="k")
+C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
+s = te.create_schedule(C.op)
+
+# schedule
+y, x = s[C].op.axis
+k = s[C].op.reduce_axis[0]
+
+yo, yi = s[C].split(y, 8)
+xo, xi = s[C].split(x, 8)
+
+s[C].reorder(yo, xo, k, yi, xi)
+
+return s, [A, B, C]
+
+
+
+# Matrix Multiplication with AutoTVM
+# --
+# In the previous schedule code, we use a constant "8" as the tiling factor.
+# However, it might not be the best one because the best tiling factor depends
+# on real hardware environment and input shape.
+#
+# If you want the schedule code to be portable across a wider range of input
+# shapes and target hardware, it is better to define a set of candidate values
+# and pick the best one according to the measurement results on target
+# hardware.
+#
+# In autotvm, we can define a tunable parameter, or a "knob", for this kind of
+# value.
+
+
+# A Basic Matrix Multiplication Template
+# --
+# We begin with an example of how to create a tunable parameter set for the
+# block size of the `split` scheduling operation.
+
+# 
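
The template itself is truncated above; as a hedged sketch (mirroring the upstream autotvm tutorial, with an illustrative template name that is not from this PR), a knob-based version of `matmul_basic` looks like:

```python
# Sketch of a matmul template whose split factors are tunable knobs.
# The template name "tutorial/matmul_v1" is illustrative only.
@autotvm.template("tutorial/matmul_v1")
def matmul_v1(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)
    y, x = s[C].op.axis

    # Define the search space: candidate tiling factors for each split.
    cfg = autotvm.get_config()
    cfg.define_knob("tile_y", [1, 2, 4, 8, 16])
    cfg.define_knob("tile_x", [1, 2, 4, 8, 16])

    # Apply whichever knob values the tuner selected.
    yo, yi = s[C].split(y, cfg["tile_y"].val)
    xo, xi = s[C].split(x, cfg["tile_x"].val)
    s[C].reorder(yo, xo, k, yi, xi)

    return s, [A, B, C]
```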

[GitHub] [tvm] tqchen commented on pull request #7653: Rename GraphRuntime to GraphExecutor

2021-03-23 Thread GitBox


tqchen commented on pull request #7653:
URL: https://github.com/apache/tvm/pull/7653#issuecomment-805343734


   Given we are pre-1.0, we can just make a best effort when backward compat is 
possible, but we don't necessarily have to go the extra mile. It may not hurt 
to post a notice to the forum as well, e.g. 
https://discuss.tvm.apache.org/t/notice-tvm-runtime-rpc-upgrade-in-pr7488/9237/5






[GitHub] [tvm] areusch opened a new pull request #7728: [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency)

2021-03-23 Thread GitBox


areusch opened a new pull request #7728:
URL: https://github.com/apache/tvm/pull/7728


   This PR unblocks #7557 by adding `onnx` to the python packages usable in 
ci-qemu. 
   
   Also as part of rebuilding the container, change the method by which we 
install kitware apt keys to make it more portable.
   
   @tmoreau89 @mdw-octoml 






[tvm] branch main updated: [FIX] Fix temporary allocation size in threefry (#7709)

2021-03-23 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 6f0a656  [FIX] Fix temporary allocation size in threefry (#7709)
6f0a656 is described below

commit 6f0a6561593898053cde051fbb4687eef3adec39
Author: Tristan Konolige 
AuthorDate: Tue Mar 23 13:47:53 2021 -0700

[FIX] Fix temporary allocation size in threefry (#7709)

* [FIX] Fix temporary allocation size in threefry

* bump sizes
---
 python/tvm/topi/random/kernel.py   |  2 +-
 tests/python/topi/python/test_topi_prng.py | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/python/tvm/topi/random/kernel.py b/python/tvm/topi/random/kernel.py
index 728cd68..a09a5f3 100644
--- a/python/tvm/topi/random/kernel.py
+++ b/python/tvm/topi/random/kernel.py
@@ -141,7 +141,7 @@ def _threefry(
 return [x, y]
 
 # temporary buffer for holding the results of _PERMUTATIONS
-tmp = irb.allocate(out_buf.dtype, out_shape, name="tmp", scope="global")
+tmp = irb.allocate(out_buf.dtype, out_shape * nwords, name="tmp", scope="global")
 tmp_offset = 0
 
 # Initialize entire key. It is composed of the original key with one
diff --git a/tests/python/topi/python/test_topi_prng.py b/tests/python/topi/python/test_topi_prng.py
index 649e541..102e93f 100644
--- a/tests/python/topi/python/test_topi_prng.py
+++ b/tests/python/topi/python/test_topi_prng.py
@@ -87,9 +87,9 @@ def test_threefry_generate(target, ctx):
 gen = tvm.relay.random.threefry_key(0).data.asnumpy()
 
 # check that we can generate some data
-a, rands = threefry_generate(target, ctx, gen, (100,))
+a, rands = threefry_generate(target, ctx, gen, (2048,))
 assert (
-rands.shape[0] == 100 and len(rands.shape) == 1
+rands.shape[0] == 2048 and len(rands.shape) == 1
 ), "Output shape should match requested shape"
 
 # check that gen out does not equal input
@@ -99,13 +99,13 @@ def test_threefry_generate(target, ctx):
 gen = np.array(
 [0, 0, 0, 0, 0, 0, 0, 2 ** 64 - 2, 1 << 63, 0], dtype="uint64"
 )  # make counter large
-a, rands = threefry_generate(target, ctx, gen, (100,))
+a, rands = threefry_generate(target, ctx, gen, (2048,))
 assert gen[4] != a[4], "Overflow of counter should trigger path change"
-assert a[7] == 100, "Overflow of counter should still update counter"
+assert a[7] == 2048, "Overflow of counter should still update counter"
 
 # check generate with path at length limit
 gen = np.array([0, 0, 0, 0, 0, 0, 0, 2 ** 64 - 2, 0, 0], dtype="uint64")  # make counter large
-a, rands = threefry_generate(target, ctx, gen, (100,))
+a, rands = threefry_generate(target, ctx, gen, (2048,))
 assert (
 gen[0:4] != a[0:4]
 ).any(), "Overflowing counter with no space left in path should change state"


[GitHub] [tvm] MarisaKirisame merged pull request #7709: [FIX] Fix temporary allocation size in threefry

2021-03-23 Thread GitBox


MarisaKirisame merged pull request #7709:
URL: https://github.com/apache/tvm/pull/7709


   






[GitHub] [tvm] areusch commented on issue #7590: [CI][FLAKY] Qemu pipeline timeout

2021-03-23 Thread GitBox


areusch commented on issue #7590:
URL: https://github.com/apache/tvm/issues/7590#issuecomment-805175522


   https://ci.tlcpack.ai/job/tvm/job/PR-7653/8/console on `node.aladdin.cuda0` 
   
   looks like it is stuck in xargs rm -f
   
   in both cases:
   `+ echo 'INFO: NODE_NAME=node.aladdin.cudabuild EXECUTOR_NUMBER=0'`
   
   still waiting for more data, but potentially node.aladdin.cudabuild gets 
stuck finding *.pyc to remove.






[GitHub] [tvm] areusch commented on pull request #7723: [microTVM] Update nrfjprog on reference virtual machine

2021-03-23 Thread GitBox


areusch commented on pull request #7723:
URL: https://github.com/apache/tvm/pull/7723#issuecomment-805061932


   thanks @mehrdadh !






[GitHub] [tvm] areusch merged pull request #7723: [microTVM] Update nrfjprog on reference virtual machine

2021-03-23 Thread GitBox


areusch merged pull request #7723:
URL: https://github.com/apache/tvm/pull/7723


   






[tvm] branch main updated: [microTVM] Update nrfjprog on reference virtual machine (#7723)

2021-03-23 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new f88c2be  [microTVM] Update nrfjprog on reference virtual machine 
(#7723)
f88c2be is described below

commit f88c2be21e3c268713f0772274ca206ed35da784
Author: Mehrdad Hessar 
AuthorDate: Tue Mar 23 09:50:37 2021 -0700

[microTVM] Update nrfjprog on reference virtual machine (#7723)

* update nrfjprog and integration test

* merge

* Revert "merge"

This reverts commit 58d5d9187448e6580b6b780821eb2ea42ec34e8e.

* fix comments

* fix clang

* revert format

* new line

* format
---
 apps/microtvm/reference-vm/base-box-tool.py| 28 ++
 .../microtvm/reference-vm/zephyr/base-box/setup.sh | 15 
 .../reference-vm/zephyr/base-box/test-config.json  | 14 ---
 3 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/apps/microtvm/reference-vm/base-box-tool.py b/apps/microtvm/reference-vm/base-box-tool.py
old mode 100755
new mode 100644
index 0e82dc2..dbf05f0
--- a/apps/microtvm/reference-vm/base-box-tool.py
+++ b/apps/microtvm/reference-vm/base-box-tool.py
@@ -42,6 +42,12 @@ ALL_PROVIDERS = (
 "vmware_desktop",
 )
 
+# List of microTVM platforms for testing.
+ALL_MICROTVM_PLATFORMS = (
+"stm32f746xx",
+"nrf5340dk",
+)
+
 
 def parse_virtualbox_devices():
 output = subprocess.check_output(["VBoxManage", "list", "usbhost"], encoding="utf-8")
@@ -109,6 +115,7 @@ def attach_virtualbox(uuid, vid_hex=None, pid_hex=None, serial=None):
 if serial is not None:
 rule_args.extend(["--serialnumber", serial])
 subprocess.check_call(rule_args)
+# TODO(mehrdadh): skip usb attach if it's already attached
 subprocess.check_call(["VBoxManage", "controlvm", uuid, "usbattach", dev["UUID"]])
 return
 
@@ -308,13 +315,17 @@ def test_command(args):
 test_config_file = os.path.join(base_box_dir, "test-config.json")
 with open(test_config_file) as f:
 test_config = json.load(f)
+
+# select microTVM test platform
+microtvm_test_platform = test_config[args.microtvm_platform]
+
 for key, expected_type in REQUIRED_TEST_CONFIG_KEYS.items():
-assert key in test_config and isinstance(
-test_config[key], expected_type
+assert key in microtvm_test_platform and isinstance(
+microtvm_test_platform[key], expected_type
 ), f"Expected key {key} of type {expected_type} in {test_config_file}: {test_config!r}"
 
-test_config["vid_hex"] = test_config["vid_hex"].lower()
-test_config["pid_hex"] = test_config["pid_hex"].lower()
+microtvm_test_platform["vid_hex"] = microtvm_test_platform["vid_hex"].lower()
+microtvm_test_platform["pid_hex"] = microtvm_test_platform["pid_hex"].lower()
 
 providers = args.provider
 provider_passed = {p: False for p in providers}
@@ -331,7 +342,7 @@ def test_command(args):
 release_test_dir, user_box_dir, base_box_dir, provider_name
 )
 do_run_release_test(
-release_test_dir, provider_name, test_config, args.test_device_serial
+release_test_dir, provider_name, microtvm_test_platform, args.test_device_serial
 )
 provider_passed[provider_name] = True
 
@@ -444,6 +455,13 @@ def parse_args():
 ),
 )
 
+parser.add_argument(
+"--microtvm-platform",
+default="stm32f746xx",
+choices=ALL_MICROTVM_PLATFORMS,
+help="For use with 'test' command. MicroTVM platfrom that are used for 
testing.",
+)
+
 return parser.parse_args()
 
 
diff --git a/apps/microtvm/reference-vm/zephyr/base-box/setup.sh b/apps/microtvm/reference-vm/zephyr/base-box/setup.sh
index 52af947..7299cea 100644
--- a/apps/microtvm/reference-vm/zephyr/base-box/setup.sh
+++ b/apps/microtvm/reference-vm/zephyr/base-box/setup.sh
@@ -59,17 +59,22 @@ sudo apt install -y llvm
 sudo apt install -y protobuf-compiler libprotoc-dev
 
 # nrfjprog
+NRF_COMMANDLINE_TOOLS_FILE=nRFCommandLineToolsLinuxamd64.tar.gz
+NRF_COMMANDLINE_TOOLS_URL=https://www.nordicsemi.com/-/media/Software-and-other-downloads/Desktop-software/nRF-command-line-tools/sw/Versions-10-x-x/10-12-1/nRFCommandLineTools10121Linuxamd64.tar.gz
+NRF_COMMANDLINE_TOOLS_INSTALLER=nRF-Command-Line-Tools_10_12_1_Linux-amd64.deb
+JLINK_LINUX_INSTALLER=JLink_Linux_V688a_x86_64.deb
+
 cd ~
 mkdir -p nrfjprog
-wget --no-verbose -O nRFCommandLineTools1090Linuxamd64.tar.gz https://www.nordicsemi.com/-/media/Software-and-other-downloads/Desktop-software/nRF-command-line-tools/sw/Versions-10-x-x/10-9-0/nRFCommandLineTools1090Linuxamd64tar.gz
+wget --no-verbose -O 

[tvm] 02/02: Merge remote-tracking branch 'origin/main' into test_mdw_qemu_changes

2021-03-23 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 8cbc1644f6c47beeba1c32022bb084be670f59f8
Merge: 87be5dd 37e6df1
Author: Andrew Reusch 
AuthorDate: Tue Mar 23 09:46:15 2021 -0700

Merge remote-tracking branch 'origin/main' into test_mdw_qemu_changes

 CMakeLists.txt |  25 +-
 apps/cpp_rpc/main.cc   |  10 +-
 apps/cpp_rpc/rpc_env.cc|  35 +-
 apps/cpp_rpc/rpc_env.h |   2 +-
 apps/cpp_rpc/rpc_server.cc |  21 +-
 apps/cpp_rpc/rpc_server.h  |   3 +-
 cmake/config.cmake |  10 +-
 cmake/{modules => libs}/Libbacktrace.cmake |   0
 cmake/modules/Logging.cmake|  46 ++
 conda/recipe/build.sh  |   1 +
 docker/build.sh|  28 +-
 .../install/ubuntu_install_ethosn_driver_stack.sh  |   2 +-
 include/tvm/runtime/logging.h  |  29 +-
 include/tvm/tir/analysis.h |  15 +
 python/tvm/auto_scheduler/dispatcher.py|  49 ++-
 python/tvm/auto_scheduler/relay_integration.py |   7 +-
 .../autotvm/graph_tuner/utils/traverse_graph.py|   3 +-
 python/tvm/relay/frontend/onnx.py  |   7 +-
 python/tvm/relay/frontend/pytorch.py   | 100 +++--
 python/tvm/relay/frontend/tflite.py|  17 +-
 python/tvm/script/context_maintainer.py| 210 +++--
 python/tvm/script/intrin.py|  20 +-
 python/tvm/script/node.py  | 150 +++
 python/tvm/script/parser.py| 179 +---
 python/tvm/script/registry.py  |  20 +-
 python/tvm/script/scope_handler.py | 473 ++---
 python/tvm/script/special_stmt.py  | 380 +++--
 python/tvm/script/utils.py |  95 -
 python/tvm/tir/analysis/analysis.py|  23 +
 python/tvm/topi/cuda/nms.py|   4 +-
 python/tvm/topi/cuda/unique.py |  15 +-
 .../search_policy/sketch_policy_rules.cc   |   1 +
 src/printer/text_printer.h |   2 +
 src/printer/tir_text_printer.cc| 109 -
 src/printer/tvmscript_printer.cc   | 232 +-
 src/relay/backend/compile_engine.cc|   2 +-
 src/relay/backend/contrib/ethosn/codegen.cc|  30 +-
 .../backend/contrib/ethosn/ethosn_api_version.h|   4 +
 src/runtime/contrib/random/mt_random_engine.cc |   5 +-
 src/runtime/contrib/tensorrt/tensorrt_runtime.cc   |   8 +
 src/runtime/graph/graph_runtime.cc |   4 +-
 src/runtime/logging.cc |  28 +-
 src/runtime/metal/metal_device_api.mm  | 258 +--
 src/runtime/metal/metal_module.mm  |  88 ++--
 src/runtime/vulkan/vulkan.cc   |   4 +
 src/tir/analysis/block_access_region_detector.cc   | 246 +++
 src/tir/ir/script/script_complete.cc   | 122 ++
 tests/python/contrib/test_ethosn/test_networks.py  |  16 +-
 tests/python/frontend/mxnet/test_forward.py|  60 +++
 tests/python/frontend/onnx/test_forward.py |  39 +-
 tests/python/frontend/pytorch/test_forward.py  |  35 +-
 tests/python/frontend/tflite/test_forward.py   |  19 +-
 tests/python/relay/test_external_codegen.py|  59 ++-
 .../unittest/test_autotvm_graph_tuner_utils.py |  10 +
 tests/python/unittest/test_runtime_graph.py|  24 +-
 tests/python/unittest/test_target_codegen_spirv.py |  30 +-
 .../test_tir_analysis_get_block_access_region.py   |  57 +++
 tests/python/unittest/test_tir_nodes.py|  13 +-
 .../python/unittest/test_tvmscript_error_report.py | 205 +
 tests/python/unittest/test_tvmscript_roundtrip.py  | 170 
 tests/scripts/task_ci_python_setup.sh  |   2 +-
 tests/scripts/task_ci_setup.sh |   2 +-
 tests/scripts/task_config_build_cpu.sh |   1 +
 tests/scripts/task_config_build_gpu.sh |   1 +
 tests/scripts/task_config_build_gpu_vulkan.sh  |   1 +
 65 files changed, 3283 insertions(+), 583 deletions(-)


[tvm] branch ci-docker-staging updated (87798bf -> 8cbc164)

2021-03-23 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 discard 87798bf  try bumping synr
 new 87be5dd  fix path
 add e467748  [CPP_RPC] allow user supplied work dir (#7670)
 add 2ee860e  [TFLite] Cast operator adapted for MLIR-based convertor 
(#7639)
 add 570767f  Free TensorRT engine and context (#7702)
 add 35b43e1  Change behavior of onnx importer to throw when user provides 
an input not in the graph. (#7699)
 add 9a29141  [Vulkan] Workaround for zero size allocation (#7691)
 add aa494cf  [AutoScheduler] Add function name in message (#7703)
 add 7605f65  [TOPI][CUDA] Fix 0 valid boxes case for NMS when 
return_indices=False (#7700)
 add 10cd83d  [RUNTIME] Cleanup build for libbacktrace (#7706)
 add 27f1085  [torch] Use try_infer_value for clamp min/max (#7712)
 add fffed0f  [TensorIR] TVMScript Parser/Printer (#7630)
 add 4b528de  [TensorIR] add TIRTextPrinter support for Block and 
BlockRealize (#7716)
 add c4b8934  [ETHOSN] Add support for Ethos-N 21.02 driver stack release. 
(#7628)
 add 21fc3bb  [TOPI] Use fixed thread block size in unique op for Vulkan 
(#7718)
 add 318c650  Fix auto scheduler crash when set with consumers is empty 
(#7708)
 add e4b3e90  [CI] Improve docker/build.sh to accept a docker tag 
parameter. (#7707)
 add 43ec869  Fix graph_tuner ancestor duplication (#7704)
 add 4c66fb2  Fix GraphModule.load_params to allow passing parameters that 
are not an expected input (#7665)
 add f09f02e  [TORCH] Implement avg_pool1d (#7694)
 add 37e6df1  [METAL] Fix memory leaks in Metal runtime (#7714)
 new 8cbc164  Merge remote-tracking branch 'origin/main' into 
test_mdw_qemu_changes

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (87798bf)
\
 N -- N -- N   refs/heads/ci-docker-staging (8cbc164)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CMakeLists.txt |  25 +-
 apps/cpp_rpc/main.cc   |  10 +-
 apps/cpp_rpc/rpc_env.cc|  35 +-
 apps/cpp_rpc/rpc_env.h |   2 +-
 apps/cpp_rpc/rpc_server.cc |  21 +-
 apps/cpp_rpc/rpc_server.h  |   3 +-
 cmake/config.cmake |  10 +-
 cmake/{modules => libs}/Libbacktrace.cmake |   0
 cmake/modules/Logging.cmake|  46 ++
 conda/recipe/build.sh  |   1 +
 docker/build.sh|  28 +-
 .../install/ubuntu_install_ethosn_driver_stack.sh  |   2 +-
 include/tvm/runtime/logging.h  |  29 +-
 include/tvm/tir/analysis.h |  15 +
 python/tvm/auto_scheduler/dispatcher.py|  49 ++-
 python/tvm/auto_scheduler/relay_integration.py |   7 +-
 .../autotvm/graph_tuner/utils/traverse_graph.py|   3 +-
 python/tvm/relay/frontend/onnx.py  |   7 +-
 python/tvm/relay/frontend/pytorch.py   | 100 +++--
 python/tvm/relay/frontend/tflite.py|  17 +-
 python/tvm/script/context_maintainer.py| 210 +++--
 python/tvm/script/intrin.py|  20 +-
 python/tvm/script/node.py  | 150 +++
 python/tvm/script/parser.py| 179 +---
 python/tvm/script/registry.py  |  20 +-
 python/tvm/script/scope_handler.py | 473 ++---
 python/tvm/script/special_stmt.py  | 380 +++--
 python/tvm/script/utils.py |  95 -
 python/tvm/tir/analysis/analysis.py|  23 +
 python/tvm/topi/cuda/nms.py|   4 +-
 python/tvm/topi/cuda/unique.py |  15 +-
 .../search_policy/sketch_policy_rules.cc   |   1 +
 src/printer/text_printer.h |   2 +
 src/printer/tir_text_printer.cc| 109 -
 src/printer/tvmscript_printer.cc   | 232 +-
 

[tvm] 01/02: fix path

2021-03-23 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 87be5dde40efe6743290ec9b26a7b2f2b7333eff
Author: Andrew Reusch 
AuthorDate: Tue Mar 23 09:45:35 2021 -0700

fix path
---
 tests/micro/zephyr/test_zephyr.py | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tests/micro/zephyr/test_zephyr.py 
b/tests/micro/zephyr/test_zephyr.py
index 09eb97e..c4626d3 100644
--- a/tests/micro/zephyr/test_zephyr.py
+++ b/tests/micro/zephyr/test_zephyr.py
@@ -214,16 +214,17 @@ def test_onnx(platform, west_cmd):
 model, zephyr_board = PLATFORMS[platform]
 
 # Load test images.
-digit_2 = Image.open("testdata/digit-2.jpg").resize((28, 28))
+this_dir = os.path.dirname(__file__)
+digit_2 = Image.open(f"{this_dir}/testdata/digit-2.jpg").resize((28, 28))
 digit_2 = np.asarray(digit_2).astype("float32")
 digit_2 = np.expand_dims(digit_2, axis=0)
 
-digit_9 = Image.open("testdata/digit-9.jpg").resize((28, 28))
+digit_9 = Image.open(f"{this_dir}/testdata/digit-9.jpg").resize((28, 28))
 digit_9 = np.asarray(digit_9).astype("float32")
 digit_9 = np.expand_dims(digit_9, axis=0)
 
 # Load ONNX model and convert to Relay.
-onnx_model = onnx.load("testdata/mnist-8.onnx")
+onnx_model = onnx.load(f"{this_dir}/testdata/mnist-8.onnx")
 shape = (1, 1, 28, 28)
 relay_mod, params = relay.frontend.from_onnx(onnx_model, shape=shape, freeze_params=True)
 relay_mod = relay.transform.DynamicToStatic()(relay_mod)


[GitHub] [tvm] mdw-octoml commented on a change in pull request #7723: [microTVM] Update nrfjprog on reference virtual machine

2021-03-23 Thread GitBox


mdw-octoml commented on a change in pull request #7723:
URL: https://github.com/apache/tvm/pull/7723#discussion_r599748630



##
File path: apps/microtvm/reference-vm/zephyr/base-box/test-config.json
##
@@ -1,4 +1,12 @@
-{"vid_hex": "0483",
- "pid_hex": "374b",
- "test_cmd": ["pytest", "tests/micro/qemu/test_zephyr.py", 
"--microtvm-platforms=stm32f746xx"]
+{
+"stm32f746xx": {
+"vid_hex": "0483",
+"pid_hex": "374b",
+"test_cmd": ["pytest", "tests/micro/qemu/test_zephyr.py", 
"--microtvm-platforms=stm32f746xx"]
+},
+"nrf5340dk": {

Review comment:
   Confirmed!








[GitHub] [tvm] areusch commented on a change in pull request #7723: [microTVM] Update nrfjprog on reference virtual machine

2021-03-23 Thread GitBox


areusch commented on a change in pull request #7723:
URL: https://github.com/apache/tvm/pull/7723#discussion_r599737747



##
File path: apps/microtvm/reference-vm/zephyr/base-box/test-config.json
##
@@ -1,4 +1,12 @@
-{"vid_hex": "0483",
- "pid_hex": "374b",
- "test_cmd": ["pytest", "tests/micro/qemu/test_zephyr.py", 
"--microtvm-platforms=stm32f746xx"]
+{
+"stm32f746xx": {
+"vid_hex": "0483",
+"pid_hex": "374b",
+"test_cmd": ["pytest", "tests/micro/qemu/test_zephyr.py", 
"--microtvm-platforms=stm32f746xx"]
+},
+"nrf5340dk": {

Review comment:
   @mdw-octoml could you just confirm you see these USB vid/pid?








[GitHub] [tvm] tkonolige commented on issue #7705: build tvm failed with libbacktrace.a : ld: symbol(s) not found for architecture x86_64

2021-03-23 Thread GitBox


tkonolige commented on issue #7705:
URL: https://github.com/apache/tvm/issues/7705#issuecomment-805048911


   @xiebaiyuan Thanks for the extra information. It looks like the compiler is 
different between libbacktrace and the rest of the cmake project. Can you try 
this branch: 
https://github.com/tkonolige/incubator-tvm/tree/fix_libbacktrace_macos and see 
if that fixes it? (Please make sure you set `USE_LIBBACKTRACE=On`).






[GitHub] [tvm] areusch commented on pull request #7653: Rename GraphRuntime to GraphExecutor

2021-03-23 Thread GitBox


areusch commented on pull request #7653:
URL: https://github.com/apache/tvm/pull/7653#issuecomment-804961316


   @zhiics it's definitely a breaking change, i'm not opposed to the warning or 
some other way to notify downstream users. I don't know if code or forum or 
other is the best channel for that--i'm inclined to lean towards code in case 
forum or elsewhere also has snippets that may become outdated. i'm happy to do 
whatever you propose--which way do you prefer?
   
   @tqchen @jroesch  in case they have thoughts






[GitHub] [tvm] tqchen closed issue #7725: do we have any check point to contine unfinished tuning jobs?

2021-03-23 Thread GitBox


tqchen closed issue #7725:
URL: https://github.com/apache/tvm/issues/7725


   






[GitHub] [tvm] tqchen commented on issue #7725: do we have any check point to contine unfinished tuning jobs?

2021-03-23 Thread GitBox


tqchen commented on issue #7725:
URL: https://github.com/apache/tvm/issues/7725#issuecomment-804933904


   Thanks @xiebaiyuan, please open a new thread on 
https://discuss.tvm.apache.org/. We generally use the discuss forum for 
related questions.






[GitHub] [tvm] tqchen commented on pull request #7721: [Refactor] Rename TVMContext to Device

2021-03-23 Thread GitBox


tqchen commented on pull request #7721:
URL: https://github.com/apache/tvm/pull/7721#issuecomment-804922459


   @icemelon9 please rebase against the main






[GitHub] [tvm] tqchen commented on issue #7727: Use WASM model in browser

2021-03-23 Thread GitBox


tqchen commented on issue #7727:
URL: https://github.com/apache/tvm/issues/7727#issuecomment-804912236


   Hello @majercakdavid, yes it is possible via the emscripten polyfill version 
of WASI. Please see 
https://github.com/apache/tvm/tree/main/web#run-wasm-remotely-through-websocket-rpc
However, things could be a bit outdated. We use the discourse forum 
https://discuss.tvm.apache.org/ for this type of topic. Please open a 
follow-up thread there.






[GitHub] [tvm] tqchen closed issue #7727: Use WASM model in browser

2021-03-23 Thread GitBox


tqchen closed issue #7727:
URL: https://github.com/apache/tvm/issues/7727


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #7714: [METAL] Fix memory leaks in Metal runtime

2021-03-23 Thread GitBox


tqchen commented on pull request #7714:
URL: https://github.com/apache/tvm/pull/7714#issuecomment-804910681


   Thanks @echuraev 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [METAL] Fix memory leaks in Metal runtime (#7714)

2021-03-23 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 37e6df1  [METAL] Fix memory leaks in Metal runtime (#7714)
37e6df1 is described below

commit 37e6df1a2654c3a06f3bdfb36fb107fa7a8265eb
Author: Egor Churaev 
AuthorDate: Tue Mar 23 16:39:10 2021 +0300

[METAL] Fix memory leaks in Metal runtime (#7714)

* [METAL] Fix memory leaks in Metal runtime

1. When we build the runtime without ARC, we can have problems with
   releasing memory. Because some Objective-C methods return
   autoreleased pointers, we should specify `autoreleasepool` blocks to
   determine the life cycle of these pointers.
2. Added a workaround for a problem with the work group size.
   Sometimes the auto scheduler generates parameters where the work
   group size is larger than what is possible, and in this case we got
   an assert from the Metal library. Added a check for this situation,
   which helps to avoid the assert.
3. Fixed a memory leak when filling a tensor with random data.
   DLManagedTensor increases the reference counter of the NDArray, but
   nobody deleted this DLManagedTensor properly, which is why the
   memory allocated by the NDArray was never released.
4. Removed unnecessary retains. Retain is not necessary in some of the
   places where it was used, because we build the Metal runtime
   without ARC.

* Use const_cast instead of creating a DLManagedTensor
---
 src/runtime/contrib/random/mt_random_engine.cc |   5 +-
 src/runtime/metal/metal_device_api.mm  | 258 +
 src/runtime/metal/metal_module.mm  |  88 +
 3 files changed, 189 insertions(+), 162 deletions(-)

diff --git a/src/runtime/contrib/random/mt_random_engine.cc 
b/src/runtime/contrib/random/mt_random_engine.cc
index 699f6bb..81f46b2 100644
--- a/src/runtime/contrib/random/mt_random_engine.cc
+++ b/src/runtime/contrib/random/mt_random_engine.cc
@@ -126,8 +126,9 @@ class RandomEngine {
 } else {
   runtime::NDArray local = runtime::NDArray::Empty(
   std::vector<int64_t>{data->shape, data->shape + data->ndim}, 
data->dtype, {kDLCPU, 0});
-  FillData(&local.ToDLPack()->dl_tensor, size);
-  runtime::NDArray::CopyFromTo(&local.ToDLPack()->dl_tensor, data);
+  DLTensor* tensor = const_cast<DLTensor*>(local.operator->());
+  FillData(tensor, size);
+  runtime::NDArray::CopyFromTo(tensor, data);
 }
   }
 
diff --git a/src/runtime/metal/metal_device_api.mm 
b/src/runtime/metal/metal_device_api.mm
index 0169a4c..3d7abd1 100644
--- a/src/runtime/metal/metal_device_api.mm
+++ b/src/runtime/metal/metal_device_api.mm
@@ -30,50 +30,54 @@ namespace runtime {
 namespace metal {
 
 MetalWorkspace* MetalWorkspace::Global() {
-  // NOTE: explicitly use new to avoid exit-time destruction of global state
-  // Global state will be recycled by OS as the process exits.
-  static MetalWorkspace* inst = new MetalWorkspace();
-  return inst;
+  @autoreleasepool {
+// NOTE: explicitly use new to avoid exit-time destruction of global state
+// Global state will be recycled by OS as the process exits.
+static MetalWorkspace* inst = new MetalWorkspace();
+return inst;
+  }
 }
 
 void MetalWorkspace::GetAttr(TVMContext ctx, DeviceAttrKind kind, TVMRetValue* 
rv) {
-  this->Init();
-  size_t index = static_cast<size_t>(ctx.device_id);
-  if (kind == kExist) {
-*rv = int(index < devices.size());
-return;
-  }
-  ICHECK_LT(index, devices.size()) << "Invalid device id " << index;
-  switch (kind) {
-case kMaxThreadsPerBlock: {
-  *rv = static_cast<int>([devices[ctx.device_id] maxThreadsPerThreadgroup].width);
-  break;
+  @autoreleasepool {
+this->Init();
+size_t index = static_cast<size_t>(ctx.device_id);
+if (kind == kExist) {
+  *rv = int(index < devices.size());
+  return;
 }
-case kWarpSize: {
-  // Set warp size to be 1 for safety reasons.
-  *rv = 1;
-  break;
+ICHECK_LT(index, devices.size()) << "Invalid device id " << index;
+switch (kind) {
+  case kMaxThreadsPerBlock: {
+*rv = static_cast<int>([devices[ctx.device_id] maxThreadsPerThreadgroup].width);
+break;
+  }
+  case kWarpSize: {
+// Set warp size to be 1 for safety reasons.
+*rv = 1;
+break;
+  }
+  case kMaxSharedMemoryPerBlock:
+return;
+  case kComputeVersion:
+return;
+  case kDeviceName:
+return;
+  case kMaxClockRate:
+return;
+  case kMultiProcessorCount:
+return;
+  case kMaxThreadDimensions:
+return;
+  case kExist:
+return;
+  case kMaxRegistersPerBlock:
+return;
+  case kGcnArch:
+return;
+  case kApiVersion:
+return;
 }
-case kMaxSharedMemoryPerBlock:
-  return;
-case kComputeVersion:
-  return;
-   

[GitHub] [tvm] tqchen merged pull request #7714: [METAL] Fix memory leaks in Metal runtime

2021-03-23 Thread GitBox


tqchen merged pull request #7714:
URL: https://github.com/apache/tvm/pull/7714


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] majercakdavid opened a new issue #7727: Use WASM model in browser

2021-03-23 Thread GitBox


majercakdavid opened a new issue #7727:
URL: https://github.com/apache/tvm/issues/7727


   Hello,
   is it possible to use WASM models in the browser on the client side, and 
if so, how? The problem is that the WASI runtime requires the "fs" module to 
work, which is unavailable on the client side.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [TORCH] Implement avg_pool1d (#7694)

2021-03-23 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new f09f02e  [TORCH] Implement avg_pool1d (#7694)
f09f02e is described below

commit f09f02e575b2bd1d9187a4ff2eb178d49fd3dd22
Author: Christoph Gerum 
AuthorDate: Tue Mar 23 09:57:15 2021 +0100

[TORCH] Implement avg_pool1d (#7694)

* [TORCH] Implement avg_pool1d

* [TORCH] Unify creation of avg_pooling operations

* [TORCH] Add tests for avg pooling with padding

* [TORCH] Make format checks happy with unified avg_pool
---
 python/tvm/relay/frontend/pytorch.py  | 84 +++
 tests/python/frontend/pytorch/test_forward.py | 28 -
 2 files changed, 72 insertions(+), 40 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py 
b/python/tvm/relay/frontend/pytorch.py
index 8ae1e86..cb9ea6a 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -1353,47 +1353,54 @@ class PyTorchOpConverter:
 beta = _expr.const(float(inputs[1]), dtype=dtype)
 return _op.log(_op.exp(inputs[0] * beta) + _expr.const(1.0, 
dtype=dtype)) / beta
 
-def avg_pool2d(self, inputs, input_types):
-data = inputs[0]
-
-pool_size = self.convert_const_list(inputs[1])
-strides = self.convert_const_list(inputs[2] if inputs[2] else 
pool_size)
-padding = inputs[3]
-ceil_mode = int(inputs[4])
-count_include_pad = int(inputs[5])
-
-def func(x):
-return _op.nn.avg_pool2d(
-x,
-pool_size=pool_size,
-strides=strides,
-padding=padding,
-ceil_mode=ceil_mode,
-count_include_pad=count_include_pad,
-)
+def make_avg_pool(self, dim):
+def avg_pool(inputs, input_types):
+data = inputs[0]
 
-if self.is_quantized_tensor(data):
-return qnn_torch.apply_with_upcast(data, func)
+pool_size = self.convert_const_list(inputs[1])
+strides = self.convert_const_list(inputs[2] if inputs[2] else 
pool_size)
+padding = inputs[3]
+ceil_mode = int(inputs[4])
+count_include_pad = int(inputs[5])
 
-return func(data)
+def func(x):
+if dim == 1:
+return _op.nn.avg_pool1d(
+x,
+pool_size=pool_size,
+strides=strides,
+padding=padding,
+ceil_mode=ceil_mode,
+count_include_pad=count_include_pad,
+)
+elif dim == 2:
+return _op.nn.avg_pool2d(
+x,
+pool_size=pool_size,
+strides=strides,
+padding=padding,
+ceil_mode=ceil_mode,
+count_include_pad=count_include_pad,
+)
+elif dim == 3:
+return _op.nn.avg_pool3d(
+x,
+pool_size=pool_size,
+strides=strides,
+padding=padding,
+ceil_mode=ceil_mode,
+count_include_pad=count_include_pad,
+)
+else:
+msg = "Average Pooling dimension should be between 1 and 3"
+raise RuntimeError(msg)
 
-def avg_pool3d(self, inputs, input_types):
-data = inputs[0]
+if self.is_quantized_tensor(data):
+return qnn_torch.apply_with_upcast(data, func)
 
-pool_size = inputs[1]
-strides = inputs[2] if inputs[2] else pool_size
-padding = inputs[3]
-ceil_mode = int(inputs[4])
-count_include_pad = int(inputs[5])
+return func(data)
 
-return _op.nn.avg_pool3d(
-data,
-pool_size=pool_size,
-strides=strides,
-padding=padding,
-ceil_mode=ceil_mode,
-count_include_pad=count_include_pad,
-)
+return avg_pool
 
 def linear(self, inputs, input_types):
 # https://pytorch.org/docs/stable/nn.functional.html#linear
@@ -2350,8 +2357,9 @@ class PyTorchOpConverter:
 "aten::log_softmax": self.log_softmax,
 "aten::sigmoid": self.sigmoid,
 "aten::softplus": self.softplus,
-"aten::avg_pool2d": self.avg_pool2d,
-"aten::avg_pool3d": self.avg_pool3d,
+"aten::avg_pool1d": self.make_avg_pool(1),
+"aten::avg_pool2d": self.make_avg_pool(2),
+"aten::avg_pool3d": self.make_avg_pool(3),
 

[GitHub] [tvm] masahi commented on pull request #7694: [TORCH] Implement avg_pool1d

2021-03-23 Thread GitBox


masahi commented on pull request #7694:
URL: https://github.com/apache/tvm/pull/7694#issuecomment-804733311


   Thanks @cgerum 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi merged pull request #7694: [TORCH] Implement avg_pool1d

2021-03-23 Thread GitBox


masahi merged pull request #7694:
URL: https://github.com/apache/tvm/pull/7694


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] hebowen325 commented on issue #7660: bug about tvm/apps/sgx

2021-03-23 Thread GitBox


hebowen325 commented on issue #7660:
URL: https://github.com/apache/tvm/issues/7660#issuecomment-804691495


   @nhynes I see that most of the SGX-related issues in TVM were solved by 
you; could you help me with this one? I would be very grateful for any help 
you could give.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7162: Fix Segmentation Fault For Tensorrt BYOC when TVM_TENSORRT_CACHE_DIR is Set

2021-03-23 Thread GitBox


comaniac commented on pull request #7162:
URL: https://github.com/apache/tvm/pull/7162#issuecomment-804679951


   Gentle ping @lsy643 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7665: [Runtime] Fix GraphRuntime.load_params to allow passing parameters that are not an input

2021-03-23 Thread GitBox


comaniac commented on pull request #7665:
URL: https://github.com/apache/tvm/pull/7665#issuecomment-804677134


   Thanks @jtuyls @tkonolige 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: Fix GraphModule.load_params to allow passing parameters that are not an expected input (#7665)

2021-03-23 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 4c66fb2  Fix GraphModule.load_params to allow passing parameters that 
are not an expected input (#7665)
4c66fb2 is described below

commit 4c66fb2e4b99e376fbaec15d975e4e4d1d8321ab
Author: Jorn Tuyls 
AuthorDate: Tue Mar 23 07:18:04 2021 +

Fix GraphModule.load_params to allow passing parameters that are not an 
expected input (#7665)
---
 src/runtime/graph/graph_runtime.cc  |  4 +-
 tests/python/relay/test_external_codegen.py | 59 +
 tests/python/unittest/test_runtime_graph.py | 24 +++-
 3 files changed, 69 insertions(+), 18 deletions(-)

diff --git a/src/runtime/graph/graph_runtime.cc 
b/src/runtime/graph/graph_runtime.cc
index 5c7b756..b11a573 100644
--- a/src/runtime/graph/graph_runtime.cc
+++ b/src/runtime/graph/graph_runtime.cc
@@ -201,7 +201,9 @@ void GraphRuntime::LoadParams(const std::string& 
param_blob) {
 void GraphRuntime::LoadParams(dmlc::Stream* strm) {
   Map<String, NDArray> params = ::tvm::runtime::LoadParams(strm);
   for (auto& p : params) {
-uint32_t eid = this->entry_id(input_nodes_[GetInputIndex(p.first)], 0);
+int in_idx = GetInputIndex(p.first);
+if (in_idx < 0) continue;
+uint32_t eid = this->entry_id(input_nodes_[in_idx], 0);
 data_entry_[eid].CopyFrom(p.second);
   }
 }
diff --git a/tests/python/relay/test_external_codegen.py 
b/tests/python/relay/test_external_codegen.py
index 0d729b7..ab6695e 100644
--- a/tests/python/relay/test_external_codegen.py
+++ b/tests/python/relay/test_external_codegen.py
@@ -23,9 +23,29 @@ import tvm
 from tvm import te
 import tvm.relay.testing
 import tvm.relay.transform
+
 from tvm import relay
 from tvm import runtime
+from tvm.relay import transform
 from tvm.contrib import utils
+from tvm.relay.build_module import bind_params_by_name
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+
+
+def update_lib(lib):
+test_dir = os.path.dirname(os.path.realpath(os.path.expanduser(__file__)))
+source_dir = os.path.join(test_dir, "..", "..", "..")
+contrib_path = os.path.join(source_dir, "src", "runtime", "contrib")
+
+kwargs = {}
+kwargs["options"] = ["-O2", "-std=c++14", "-I" + contrib_path]
+tmp_path = utils.tempdir()
+lib_name = "lib.so"
+lib_path = tmp_path.relpath(lib_name)
+lib.export_library(lib_path, fcompile=False, **kwargs)
+lib = tvm.runtime.load_module(lib_path)
+
+return lib
 
 
 def check_result(mod, map_inputs, out_shape, result, tol=1e-5, target="llvm", 
ctx=tvm.cpu()):
@@ -33,21 +53,6 @@ def check_result(mod, map_inputs, out_shape, result, 
tol=1e-5, target="llvm", ct
 print("Skip test on Windows for now")
 return
 
-def update_lib(lib):
-test_dir = 
os.path.dirname(os.path.realpath(os.path.expanduser(__file__)))
-source_dir = os.path.join(test_dir, "..", "..", "..")
-contrib_path = os.path.join(source_dir, "src", "runtime", "contrib")
-
-kwargs = {}
-kwargs["options"] = ["-O2", "-std=c++14", "-I" + contrib_path]
-tmp_path = utils.tempdir()
-lib_name = "lib.so"
-lib_path = tmp_path.relpath(lib_name)
-lib.export_library(lib_path, fcompile=False, **kwargs)
-lib = tvm.runtime.load_module(lib_path)
-
-return lib
-
 def check_vm_result():
 with tvm.transform.PassContext(opt_level=3, 
disabled_pass=["AlterOpLayout"]):
 exe = relay.vm.compile(mod, target=target)
@@ -329,6 +334,29 @@ def test_extern_dnnl_const():
 check_result(mod, {"data0": i_data}, (1, 32, 14, 14), ref_res.asnumpy(), 
tol=1e-5)
 
 
+def test_load_params_with_constants_in_ext_codegen():
+# After binding params and partitioning, graph_module.get_params()
+# might contain parameters that are not a graph runtime input but,
+# for example, constants in an external function.
+y_in = np.ones((1,)).astype("float32")
+params = {"y": y_in}
+mod = tvm.IRModule()
+x = relay.var("x", shape=(1, 10))
+y = relay.var("y", shape=(1,))
+xcb = compiler_begin(x, "ccompiler")
+ycb = compiler_begin(y, "ccompiler")
+z = relay.add(xcb, ycb)
+zce = compiler_end(z, "ccompiler")
+mod["main"] = relay.Function([x, y], zce)
+mod["main"] = bind_params_by_name(mod["main"], params)
+mod = transform.PartitionGraph()(mod)
+
+graph_module = relay.build(mod, target="llvm", params=params)
+lib = update_lib(graph_module.get_lib())
+rt_mod = tvm.contrib.graph_runtime.create(graph_module.get_json(), lib, 
tvm.cpu(0))
+rt_mod.load_params(runtime.save_param_dict(graph_module.get_params()))
+
+
 if __name__ == "__main__":
 test_multi_node_subgraph()
 test_extern_gcc_single_op()
@@ -337,3 +365,4 @@ if __name__ == "__main__":
 test_extern_gcc_consts()
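
To illustrate the behavior this fix enables, a minimal sketch, assuming
`graph_module` came from `relay.build` on a partitioned module and `lib` is
the exported-and-reloaded library (names as in the test above, otherwise
hypothetical):

```python
# Illustrative only: load_params now skips param entries with no graph input.
import tvm
from tvm import runtime
from tvm.contrib import graph_runtime

# `graph_module` and `lib` are assumed to exist (see the test above).
rt_mod = graph_runtime.create(graph_module.get_json(), lib, tvm.cpu(0))
# get_params() may include constants bound into external functions; before
# this fix the input-index lookup failed on such keys, now they are skipped.
rt_mod.load_params(runtime.save_param_dict(graph_module.get_params()))
```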
  

[GitHub] [tvm] comaniac merged pull request #7665: [Runtime] Fix GraphRuntime.load_params to allow passing parameters that are not an input

2021-03-23 Thread GitBox


comaniac merged pull request #7665:
URL: https://github.com/apache/tvm/pull/7665


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7698: [TVMC] Python Scripting Init Files

2021-03-23 Thread GitBox


comaniac commented on pull request #7698:
URL: https://github.com/apache/tvm/pull/7698#issuecomment-804674880


   Just a guess: is that due to a circular dependency?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7564: [BYOC] Exclude external params from Graph Runtime

2021-03-23 Thread GitBox


comaniac commented on pull request #7564:
URL: https://github.com/apache/tvm/pull/7564#issuecomment-804670541


   > We did explore using the same fix as for Ethos-N (giving a 'null' 
ConstantUpdater), but this doesn't work for ACL as ACL does load the constants 
from MetadataModule. Ethos-N just directly serializes the constants into the 
binary module so it doesn't care if they're missing in the MetadataModule.
   
   Could we improve ACL's customized runtime so that it knows whether a given 
constant should be loaded from the MetadataModule or from the module itself? 
To me, Ethos-N is just an extreme case that completely gets rid of constants 
in the MetadataModule, but the constant updater should be capable of 
supporting all three cases: all in the MetadataModule, some in the 
MetadataModule, and none in the MetadataModule.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] neoming opened a new issue #7726: VTA make failed

2021-03-23 Thread GitBox


neoming opened a new issue #7726:
URL: https://github.com/apache/tvm/issues/7726


   
   I followed this tutorial:
   
[bitstream-generation-with-xilinx-toolchains](https://tvm.apache.org/docs/vta/install.html#bitstream-generation-with-xilinx-toolchains)
   I changed the `vivado.tcl` file
from `set scripts_vivado_version 2020.1` to `set scripts_vivado_version 
2020.2` and successfully generated the IP, but some errors were reported when 
generating the bitstream. The output is as follows:
   
   ```
   # for {set i 0} {$i < $inp_part} {incr i} {
   #   # Create instance: inp_mem, and set properties
   #   set inp_mem [ create_bd_cell -type ip -vlnv 
xilinx.com:ip:blk_mem_gen:8.4 inp_mem_${i} ]
   #   [ init_bram_property $inp_mem $inp_mem_width $inp_mem_depth ]
   #   # If module has more than 1 mem port, the naming convention changes
   #   if {$inp_part > 1} {
   # set porta [get_bd_intf_pins load_0/inp_mem_${i}_V_PORTA]
   # set portb [get_bd_intf_pins compute_0/inp_mem_${i}_V_PORTA]
   #   } else {
   # set porta [get_bd_intf_pins load_0/inp_mem_V_PORTA]
   # set portb [get_bd_intf_pins compute_0/inp_mem_V_PORTA]
   #   }
   #   # Create interface connections
   #   connect_bd_intf_net -intf_net load_0_inp_mem_V_PORTA \
   # [get_bd_intf_pins $inp_mem/BRAM_PORTA] \
   # $porta
   #   connect_bd_intf_net -intf_net compute_0_inp_mem_V_PORTA \
   # [get_bd_intf_pins $inp_mem/BRAM_PORTB] \
   # $portb
   # }
   WARNING: [BD 5-232] No interface pins matched 'get_bd_intf_pins 
load_0/inp_mem_V_PORTA'
   WARNING: [BD 5-232] No interface pins matched 'get_bd_intf_pins 
compute_0/inp_mem_V_PORTA'
   ERROR: [BD 5-106] Arguments to the connect_bd_intf_net command cannot be 
empty.
   ERROR: [Common 17-39] 'connect_bd_intf_net' failed due to earlier errors.
   
   while executing
   "connect_bd_intf_net -intf_net load_0_inp_mem_V_PORTA  [get_bd_intf_pins 
$inp_mem/BRAM_PORTA]  $porta"
   ("for" body line 14)
   invoked from within
   "for {set i 0} {$i < $inp_part} {incr i} {
 # Create instance: inp_mem, and set properties
 set inp_mem [ create_bd_cell -type ip -vlnv xilinx.com:ip..."
   (file 
"/home/yons/tvm/3rdparty/vta-hw/hardware/xilinx/scripts/vivado.tcl" line 214)
   Vivado% 
   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhiics commented on pull request #7653: Rename GraphRuntime to GraphExecutor

2021-03-23 Thread GitBox


zhiics commented on pull request #7653:
URL: https://github.com/apache/tvm/pull/7653#issuecomment-804657364


   @areusch yeah, a warning may not be needed. I was just trying to make sure 
that we don't break all the downstream deployments, by letting them know 
what is going to happen there. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhiics edited a comment on pull request #7564: [BYOC] Exclude external params from Graph Runtime

2021-03-23 Thread GitBox


zhiics edited a comment on pull request #7564:
URL: https://github.com/apache/tvm/pull/7564#issuecomment-804654019


   The approach I was thinking of is that we can probably use the 
`MetadataModule` as the only place to save the weights. Other modules, 
including `GraphRuntimeModule`, can directly query the needed parameters from 
it (external modules already do that).
   
   The way we filter the redundant parameters in this PR can be used by 
users after the build (i.e. we can document it somewhere), but it should not 
be part of the `build` codebase in my opinion.
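   
   A minimal sketch of such post-build filtering, relying only on the graph 
JSON convention that input nodes have `"op": "null"`; `graph_module` is 
assumed to come from `relay.build`, and this is not code from the PR:
   
   ```python
   # Illustrative only: keep only params that are actual graph runtime inputs.
   import json
   
   graph = json.loads(graph_module.get_json())  # `graph_module` assumed to exist
   input_names = {n["name"] for n in graph["nodes"] if n["op"] == "null"}
   runtime_params = {
       name: value
       for name, value in graph_module.get_params().items()
       if name in input_names
   }
   ```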


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhiics commented on pull request #7564: [BYOC] Exclude external params from Graph Runtime

2021-03-23 Thread GitBox


zhiics commented on pull request #7564:
URL: https://github.com/apache/tvm/pull/7564#issuecomment-804654019


   The approach I was thinking of is that we can probably use the 
`MetadataModule` as the only place to save the weights. Other modules, 
including `GraphRuntimeModule`, can directly query the needed parameters from 
it (external modules already do that).
   
   The way we filter the redundant parameters in this PR can be used by 
users after the build, but it should not be part of the `build` codebase in 
my opinion.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org