[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479616661



##
File path: src/runtime/contrib/vitis_ai/vitis_ai_runtime.cc
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file vitis_ai_runtime.cc
+ */
+#include 
+#include 
+
+#include "vitis_ai_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+TVM_REGISTER_PASS_CONFIG_OPTION("target_", String);
+TVM_REGISTER_PASS_CONFIG_OPTION("vai_build_dir_", String);
+
+std::shared_ptr<pyxir::graph::XGraph> load_xgraph_model(const std::string& model_path) {
+  std::string model_name = model_path + "/" + "dpu_xgraph.json";
+  std::string model_weights = model_path + "/" + "dpu_xgraph.h5";
+  return pyxir::load(model_name, model_weights);
+}
+
+void VitisAIRuntime::Init(const std::string& model_path, const std::string& target) {
+  model_path_ = model_path;
+  target_ = target;
+  xgraph_ = load_xgraph_model(model_path_);
+  in_tensor_names_ = xgraph_->get_input_names();
+  out_tensor_names_ = xgraph_->get_meta_attr("tvm_out_tensors").get_strings();
+  pyxir::partition(xgraph_, std::vector<std::string>{target}, "");

Review comment:
   At this stage, it is really just assigning targets to the subgraph, 
which in this case means assigning the DPU target to every operation because we 
can handle the subgraph completely.
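
   For illustration, the same step looks roughly like this on the Python
   side, using only the pyxir calls that appear elsewhere in this PR
   (`mod` and `params` stand for the Relay module and its parameters):

       import pyxir
       import pyxir.frontend.tvm

       # Convert the Relay (sub)graph into pyxir's XGraph representation.
       xgraph = pyxir.frontend.tvm.from_relay(mod, params, postprocessing=None)
       # Assign the DPU target to every operation pyxir can handle.
       xgraph = pyxir.partition(xgraph, targets=["DPUCADX8G"])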





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479614422



##
File path: docs/deploy/vitis_ai.rst
##
@@ -0,0 +1,617 @@
+Vitis-AI Integration
+====================
+
+`Vitis-AI `__ is Xilinx's
+development stack for hardware-accelerated AI inference on Xilinx
+platforms, including both edge devices and Alveo cards. It consists of
+optimized IP, tools, libraries, models, and example designs. It is
+designed with high efficiency and ease of use in mind, unleashing the
+full potential of AI acceleration on Xilinx FPGA and ACAP.
+
+The current Vitis-AI BYOC flow inside TVM enables acceleration of neural
+network model inference on edge and cloud devices. The identifiers for the
+supported edge and cloud Deep Learning Processor Units (DPUs) are
+DPUCZDX8G and DPUCADX8G, respectively. DPUCZDX8G and DPUCADX8G are hardware
+accelerators for convolutional neural networks (CNNs) on top of the
+Xilinx `Zynq Ultrascale+ MPSoc `__ and
+`Alveo `__ (U200/U250) platforms, respectively. For more information
+about the DPU identifiers, see the section on
+`DPU naming information <#dpu-naming-information>`__.
+
+On this page you will find information on how to
+`build <#build-instructions>`__ TVM with Vitis-AI and on how to `get
+started <#getting-started>`__ with an example.
+
+DPU naming information
+----------------------
+
++---------------------------------+-----------------+-------------------------------+-----------------------+-----------------------+----------------------+
+| DPU                             | Application     | HW Platform                   | Quantization Method   | Quantization Bitwidth | Design Target        |
++=================================+=================+===============================+=======================+=======================+======================+
+| Deep Learning Processing Unit   | C: CNN          | AD: Alveo DDR                 | X: DECENT             | 4: 4-bit              | G: General purpose   |
+|                                 | R: RNN          | AH: Alveo HBM                 | I: Integer threshold  | 8: 8-bit              | H: High throughput   |
+|                                 |                 | VD: Versal DDR with AIE & PL  | F: Float threshold    | 16: 16-bit            | L: Low latency       |
+|                                 |                 | ZD: Zynq DDR                  | R: RNN                | M: Mixed Precision    | C: Cost optimized    |
++---------------------------------+-----------------+-------------------------------+-----------------------+-----------------------+----------------------+
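+
+As an example of reading the table, the cloud identifier DPUCADX8G
+decodes as DPU + C (CNN) + AD (Alveo DDR) + X (DECENT quantization) +
+8 (8-bit) + G (General purpose); the edge identifier DPUCZDX8G differs
+only in its hardware platform, ZD (Zynq DDR).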
+
+Build instructions
+------------------
+
+This section lists the instructions for building TVM with Vitis-AI for
+both `cloud <#cloud-dpucadx8g>`__ and `edge <#edge-dpuczdx8g>`__.
+
+Cloud (DPUCADX8G)
+~~~~~~~~~~~~~~~~~
+
+For Vitis-AI acceleration in the cloud, TVM has to be built on top of the
+Xilinx Alveo platform.
+
+System requirements
+^^^^^^^^^^^^^^^^^^^
+
+The following table lists system requirements for running docker
+containers as well as Alveo cards.
+
++-----------------------+----------------------------------------------------------+
+| **Component**         | **Requirement**                                          |
++=======================+==========================================================+
+| Motherboard           | PCI Express 3.0-compliant with one dual-width x16 slot  |
++-----------------------+----------------------------------------------------------+
+| System Power Supply   | 225W                                                     |
++-----------------------+----------------------------------------------------------+
+| Operating System      | Ubuntu 16.04, 18.04                                      |
++-----------------------+----------------------------------------------------------+
+|                       | CentOS 7.4, 7.5                                          |
++-----------------------+----------------------------------------------------------+
+|                       | RHEL 7.4, 7.5

[GitHub] [incubator-tvm] siju-samuel commented on pull request #6357: [Torch] Add cast to double, fix flatten conversion

2020-08-28 Thread GitBox


siju-samuel commented on pull request #6357:
URL: https://github.com/apache/incubator-tvm/pull/6357#issuecomment-683243985


   Thanks @masahi @leandron. This PR is merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel merged pull request #6357: [Torch] Add cast to double, fix flatten conversion

2020-08-28 Thread GitBox


siju-samuel merged pull request #6357:
URL: https://github.com/apache/incubator-tvm/pull/6357


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (d9450f8 -> 2d752d2)

2020-08-28 Thread sijusamuel
This is an automated email from the ASF dual-hosted git repository.

sijusamuel pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from d9450f8  [Target][Codegen] Use target class in all codegens (#6347)
 add 2d752d2  [Torch] Add cast to double, fix flatten conversion (#6357)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py  | 15 ++-
 tests/python/frontend/pytorch/test_forward.py | 21 +
 2 files changed, 35 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479613590



##
File path: tests/python/contrib/test_vitis_ai_codegen.py
##
@@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, W0611
+"""Vitis-AI codegen tests."""
+
+import numpy as np
+
+import tvm
+from tvm import relay
+from tvm.relay import transform
+from tvm.relay.op.contrib.vitis_ai import annotation
+from tvm.contrib.target import vitis_ai
+
+import pyxir
+import pyxir.contrib.target.DPUCADX8G
+
+def set_func_attr(func, compile_name, symbol_name):
+    func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Compiler", compile_name)
+    func = func.with_attr("global_symbol", symbol_name)
+    return func
+
+def _create_graph():
+    shape = (10, 10)
+    mod = tvm.IRModule()
+    x = relay.var('x', shape=shape)
+    y = relay.var('y', shape=shape)
+    z = x + x
+    p = y * y
+    func = relay.Function([x, y], p - z)
+    mod["main"] = func
+    params = {}
+    params["x"] = np.random.rand(10, 10).astype('float32')
+    params["y"] = np.random.rand(10, 10).astype('float32')
+    return mod, params
+
+
+def _construct_model(func, params=None):

Review comment:
   Yes, that's mainly what we want to test here. We could additionally test 
that the operation is actually assigned for execution on the DPU accelerator. 
What other tests would you add?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


zhiics commented on pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-683242606


   Thanks @junrushao1994 @comaniac 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (b368f9d -> d9450f8)

2020-08-28 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from b368f9d  [CMAKE] Compatible for ROCm before 3.7 (#6359)
 add d9450f8  [Target][Codegen] Use target class in all codegens (#6347)

No new revisions were added by this update.

Summary of changes:
 include/tvm/target/codegen.h  |   2 +-
 include/tvm/target/target.h   |   3 +-
 python/tvm/target/target.py   |   3 +
 src/target/build_common.h |  17 
 src/target/codegen.cc |   6 +-
 src/target/llvm/codegen_amdgpu.cc |  50 
 src/target/llvm/codegen_blob.cc   |   7 +-
 src/target/llvm/codegen_hexagon.cc|  31 ++--
 src/target/llvm/codegen_nvptx.cc  |  17 ++--
 src/target/llvm/llvm_common.cc| 101 ++--
 src/target/llvm/llvm_common.h |  21 +++--
 src/target/llvm/llvm_module.cc| 141 +-
 src/target/opt/build_cuda_on.cc   |   2 +-
 src/target/source/codegen_aocl.cc |  17 ++--
 src/target/source/codegen_c_host.cc   |   7 +-
 src/target/source/codegen_metal.cc|   6 +-
 src/target/source/codegen_opencl.cc   |   2 +-
 src/target/source/codegen_vhls.cc |   3 +-
 src/target/spirv/build_vulkan.cc  |   6 +-
 src/target/stackvm/codegen_stackvm.cc |   2 +-
 src/target/target.cc  |  20 -
 src/target/target_kind.cc |   1 +
 tests/cpp/build_module_test.cc|  11 ++-
 23 files changed, 276 insertions(+), 200 deletions(-)



[GitHub] [incubator-tvm] zhiics merged pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


zhiics merged pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479612734



##
File path: tests/python/contrib/test_vitis_ai_codegen.py
##
@@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, W0611
+"""Vitis-AI codegen tests."""
+
+import numpy as np
+
+import tvm
+from tvm import relay
+from tvm.relay import transform
+from tvm.relay.op.contrib.vitis_ai import annotation
+from tvm.contrib.target import vitis_ai
+
+import pyxir
+import pyxir.contrib.target.DPUCADX8G

Review comment:
   Yes, indeed, we will be adding a pytest importorskip at the top of these 
test files.
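
   A minimal sketch of that guard, using pytest's standard importorskip
   helper (placement at the top of the module is illustrative):

       import pytest

       # Skip the whole test module when pyxir is not installed.
       pyxir = pytest.importorskip("pyxir")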





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479611862



##
File path: tests/python/contrib/test_vitis_ai_codegen.py
##
@@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, W0611
+"""Vitis-AI codegen tests."""
+
+import numpy as np
+
+import tvm
+from tvm import relay
+from tvm.relay import transform
+from tvm.relay.op.contrib.vitis_ai import annotation
+from tvm.contrib.target import vitis_ai
+
+import pyxir
+import pyxir.contrib.target.DPUCADX8G
+
+def set_func_attr(func, compile_name, symbol_name):
+    func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Compiler", compile_name)
+    func = func.with_attr("global_symbol", symbol_name)
+    return func
+
+def _create_graph():
+    shape = (10, 10)
+    mod = tvm.IRModule()
+    x = relay.var('x', shape=shape)
+    y = relay.var('y', shape=shape)
+    z = x + x
+    p = y * y
+    func = relay.Function([x, y], p - z)
+    mod["main"] = func
+    params = {}
+    params["x"] = np.random.rand(10, 10).astype('float32')
+    params["y"] = np.random.rand(10, 10).astype('float32')
+    return mod, params
+
+
+def _construct_model(func, params=None):
+    mod = tvm.IRModule()
+    mod["main"] = func
+    if params is None:
+        params = {}
+    mod = annotation(mod, params, "DPUCADX8G")
+    mod = transform.MergeCompilerRegions()(mod)
+    mod = transform.PartitionGraph()(mod)
+    fcompile = tvm._ffi.get_global_func("relay.ext.vai")
+    subgraph_mod = tvm.IRModule()
+    for _, funcnode in mod.functions.items():
+        if funcnode.attrs and 'Compiler' in funcnode.attrs and \
+           funcnode.attrs['Compiler'] == 'vai':
+            subgraph_mod["main"] = funcnode
+    with tvm.transform.PassContext(opt_level=3, config={'target_':'DPUCADX8G'}):
+        fcompile(subgraph_mod["main"])
+
+
+def test_add():
+    shape = (10, 10)
+    x = relay.var('x', shape=shape)
+    y = x + x
+    func = relay.Function([x], y)
+    _construct_model(func)
+
+def test_relu():
+    shape = (10, 10)
+    x = relay.var('x', shape=shape)
+    y = relay.nn.relu(x)
+    func = relay.Function([x], y)
+    _construct_model(func)
+
+def test_conv2d():
+    x = relay.var('x', shape=(1, 3, 224, 224))
+    w = relay.const(np.zeros((16, 3, 3, 3), dtype='float32'))
+    y = relay.nn.conv2d(x, w, strides=[2, 2], padding=[1, 1, 1, 1],
+                        kernel_size=[3, 3])
+    func = relay.Function([x], y)
+    params = {}
+    params["x"] = np.zeros((16, 3, 3, 3), dtype='float32')
+    _construct_model(func, params)
+
+
+def test_global_avg_pool2d():
+    shape = (10, 10, 10, 10)
+    x = relay.var('x', shape=shape)
+    y = relay.nn.global_avg_pool2d(x)
+    func = relay.Function([x], y)
+    _construct_model(func)
+
+def test_annotate():
+    """Test annotation with Vitis-AI DP (DPUCADX8G)"""
+    def partition():
+        data = relay.var("data", relay.TensorType((1, 3, 224, 224), "float32"))
+        weight = relay.var("weight", relay.TensorType((16, 3, 3, 3), "float32"))
+        bn_gamma = relay.var("bn_gamma", relay.TensorType((16, ), "float32"))
+        bn_beta = relay.var("bn_beta", relay.TensorType((16, ), "float32"))
+        bn_mmean = relay.var("bn_mean", relay.TensorType((16, ), "float32"))
+        bn_mvar = relay.var("bn_var", relay.TensorType((16, ), "float32"))
+
+        conv = relay.nn.conv2d(
+            data=data,
+            weight=weight,
+            kernel_size=(3, 3),
+            channels=16,
+            padding=(1, 1))
+        bn_output = relay.nn.batch_norm(conv, bn_gamma, bn_beta, bn_mmean,
+                                        bn_mvar)
+
+        func = relay.Function([data, weight, bn_gamma, bn_beta, bn_mmean,
+                               bn_mvar], bn_output.astuple())
+        mod = tvm.IRModule()
+        mod["main"] = func
+        params = {}
+        params["weight"] = np.random.rand(16, 3, 3, 3).astype('float32')
+        params["bn_gamma"] = np.random.rand(16).astype('float32')
+        params["bn_beta"] = np.random.r

[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479611732



##
File path: tests/python/contrib/test_vitis_ai_codegen.py
##
@@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, W0611
+"""Vitis-AI codegen tests."""
+
+import numpy as np
+
+import tvm
+from tvm import relay
+from tvm.relay import transform
+from tvm.relay.op.contrib.vitis_ai import annotation
+from tvm.contrib.target import vitis_ai
+
+import pyxir
+import pyxir.contrib.target.DPUCADX8G
+
+def set_func_attr(func, compile_name, symbol_name):
+    func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Compiler", compile_name)
+    func = func.with_attr("global_symbol", symbol_name)
+    return func
+
+def _create_graph():
+    shape = (10, 10)
+    mod = tvm.IRModule()
+    x = relay.var('x', shape=shape)
+    y = relay.var('y', shape=shape)
+    z = x + x
+    p = y * y
+    func = relay.Function([x, y], p - z)
+    mod["main"] = func
+    params = {}
+    params["x"] = np.random.rand(10, 10).astype('float32')
+    params["y"] = np.random.rand(10, 10).astype('float32')
+    return mod, params
+
+
+def _construct_model(func, params=None):
+    mod = tvm.IRModule()
+    mod["main"] = func
+    if params is None:
+        params = {}
+    mod = annotation(mod, params, "DPUCADX8G")
+    mod = transform.MergeCompilerRegions()(mod)
+    mod = transform.PartitionGraph()(mod)
+    fcompile = tvm._ffi.get_global_func("relay.ext.vai")
+    subgraph_mod = tvm.IRModule()
+    for _, funcnode in mod.functions.items():
+        if funcnode.attrs and 'Compiler' in funcnode.attrs and \
+           funcnode.attrs['Compiler'] == 'vai':
+            subgraph_mod["main"] = funcnode
+    with tvm.transform.PassContext(opt_level=3, config={'target_':'DPUCADX8G'}):
+        fcompile(subgraph_mod["main"])
+
+
+def test_add():

Review comment:
   See comment above. The idea here is to just run these operations through
   the TVM-PyXIR interface to keep that interface stable.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-683241292


   The CI is green. Could you guys take another look? Thanks! @zhiics @comaniac 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479610922



##
File path: tests/python/contrib/test_vitis_ai_codegen.py
##
@@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, W0611
+"""Vitis-AI codegen tests."""

Review comment:
   Yes, as we require an FPGA backend for e2e testing, we are doing that
   internally at the moment. The tests in this PR are just meant to keep the
   TVM-PyXIR interface stable. We think that these two parts together should
   keep the TVM Vitis-AI codegen stable.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


jtuyls commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479610650



##
File path: python/tvm/relay/op/contrib/vitis_ai.py
##
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument, no-else-return, E1102
+"""VITISAI codegen supported operators."""
+
+import numpy as np
+
+from tvm import relay
+import tvm._ffi
+from tvm.relay.expr import Tuple, TupleGetItem
+from tvm.relay import transform
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+
+import pyxir
+import pyxir.frontend.tvm
+
+
+@transform.function_pass(opt_level=0)
+class VitisAIAnnotationPass:
+    """The explicit pass wrapper around VitisAIAnnotationPass."""
+    def __init__(self, compiler, relay_ids):
+        self.compiler = compiler
+        self.relay_ids = relay_ids
+    def transform_function(self, func, mod, ctx):
+        """Transform func to annotate."""
+        annotator = self
+        class Annotator(tvm.relay.ExprMutator):
+            """Annotator for VITIS-AI DPU."""
+            def visit_tuple(self, tup):
+                field_list = []
+                cond = int(hash(tup))
+                for field in tup.fields:
+                    if cond in annotator.relay_ids:
+                        field_list.append(compiler_begin(super().visit(field),
+                                                         annotator.compiler))
+                    else:
+                        field_list.append(super().visit(field))
+                if cond in annotator.relay_ids:
+                    return compiler_end(Tuple(field_list), annotator.compiler)
+                else:
+                    return Tuple(field_list)
+
+            def visit_tuple_getitem(self, op):
+                if int(hash(op.tuple_value)) in annotator.relay_ids:
+                    tuple_value = compiler_begin(super().visit(op.tuple_value),
+                                                 annotator.compiler)
+                    return compiler_end(TupleGetItem(tuple_value, op.index),
+                                        annotator.compiler)
+                else:
+                    tuple_value = super().visit(op.tuple_value)
+                    return TupleGetItem(tuple_value, op.index)
+            def visit_call(self, call):
+                if int(hash(call)) in annotator.relay_ids:
+                    new_args = []
+                    for arg in call.args:
+                        ann = compiler_begin(super().visit(arg),
+                                             annotator.compiler)
+                        new_args.append(ann)
+                    new_call = relay.Call(call.op, new_args, call.attrs,
+                                          call.type_args)
+                    return compiler_end(new_call, annotator.compiler)
+
+                else:
+                    return super().visit_call(call)
+        return Annotator().visit(func)
+
+
+
+def annotation(mod, params, target):
+    """
+    An annotator for VITISAI.
+    """
+    xgraph = pyxir.frontend.tvm.from_relay(mod, params, postprocessing=None)
+    xgraph = pyxir.partition(xgraph, targets=[target])
+    layers = xgraph.get_layers()
+    relay_ids = [list(np.array(layer.attrs['relay_id']).flatten())
+                 for layer in layers if layer.target == target]
+    relay_ids_flatten = [item for sublist in relay_ids for item in sublist]
+    mod = VitisAIAnnotationPass("vai", relay_ids_flatten)(mod)

Review comment:
   Yes, we are indeed deferring partitioning to pyxir. At the moment our
   partitioning is more complicated than that of the other BYOC backends, and
   we chose to abstract it away so that we can leverage our logic to integrate
   with multiple frameworks, e.g. TVM and ONNX Runtime. This way, we don't
   have to replicate this functionality for every framework we want to support.
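
   On the TVM side this shows up as a single helper call followed by the
   standard BYOC passes; a sketch matching what the tests in this PR do:

       from tvm.relay import transform
       from tvm.relay.op.contrib.vitis_ai import annotation

       # pyxir decides which Relay ops go to the DPU; TVM then merges and
       # partitions the annotated regions as usual.
       mod = annotation(mod, params, "DPUCADX8G")
       mod = transform.MergeCompilerRegions()(mod)
       mod = transform.PartitionGraph()(mod)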





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-28 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r478615669



##
File path: python/tvm/target/datatype.py
##
@@ -14,73 +14,153 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-"""Custom datatype functionality"""
-import tvm._ffi
+"""Bring Your Own Datatypes custom datatype framework
 
-import tvm.runtime._ffi_api
-from tvm.runtime import DataType
-import tvm.tir
-from tvm.tir.expr import Cast as _Cast, FloatImm as _FloatImm
+TODO(@gussmith23 @hypercubestart) link to BYODT docs when they exist"""
+import tvm
+from tvm.runtime import convert, DataType
+from tvm.tir.expr import (Call as _Call, Cast as _Cast,
+  FloatImm as _FloatImm, BinaryOpExpr as _BinaryOpExpr)
+from tvm.tir.op import call_pure_extern
+from tvm._ffi import register_func as _register_func
+from tvm.tir import call_intrin
 
 
 def register(type_name, type_code):
 """Register a custom datatype with the given type name and type code
-Currently, the type code is manually allocated by the user, and the
-user must ensure that no two custom types share the same code.
-Generally, this should be straightforward, as the user will be
-manually registering all of their custom types.
+
+Currently, the type code is manually allocated by the user, and the user
+must ensure that no two custom types share the same code. Generally, this
+should be straightforward, as the user will be manually registering all of
+their custom types.
+
+Example:
+
+.. code-block:: python
+
+# Register a dtype named 'posites2' under type code 130.
+tvm.datatype.register('posites2', 130)
+
 
 Parameters
 --
 type_name : str
-The name of the custom datatype
+The name of the custom datatype.
 
 type_code : int
-The type's code, which should be >= kCustomBegin
+The type's code, which should be >= kCustomBegin. See
+include/tvm/runtime/data_type.h.
 """
 tvm.runtime._ffi_api._datatype_register(type_name, type_code)
 
 
 def get_type_name(type_code):
-"""Get the type name from the type code
+"""Get the type name of a custom datatype from the type code.
+
+Note that this only works for custom datatypes registered with
+tvm.datatype.register(). It does not work for TVM-native types.
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_name(130) == 'posites2'
 
 Parameters
 --
 type_code : int
-The type code
+The type code of the custom datatype.
+
+Returns
+---
+type_name : String
+The name of the custom datatype.
+
 """
 return tvm.runtime._ffi_api._datatype_get_type_name(type_code)
 
 
 def get_type_code(type_name):
-"""Get the type code from the type name
+"""Get the type code of a custom datatype from its type name
+
+Note that this only works for custom datatypes registered with
+tvm.datatype.register(). It does not work for TVM-native types.
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_code('posites2') == 130
 
 Parameters
 --
 type_name : str
 The type name
+
+Returns
+---
+type_code : int
+The type code of the custom datatype.
 """
 return tvm.runtime._ffi_api._datatype_get_type_code(type_name)
 
 
 def get_type_registered(type_code):
-"""Get a boolean representing whether the type is registered
+"""Returns true if a custom datatype is registered under the given type 
code
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_registered(130)
 
 Parameters
 --
 type_code: int
 The type code
+
+Returns
+---
+type_registered : bool
+True if a custom datatype is registered under this type code, and false
+otherwise.
 """
 return tvm.runtime._ffi_api._datatype_get_type_registered(type_code)
 
 
-def register_op(lower_func, op_name, target, type_name, src_type_name=None):
-"""Register an external function which computes the given op.
+def register_op(lower_func,
+op_name,
+target,
+src_type_name,
+dest_type_name=None,
+intrinsic_name=None):
+"""Register a lowering function for a specific operator of a custom 
datatype
+
+At build time, Relay must lower operators over custom datatypes into
+operators it understands how to compile. For each custom datatype operator
+which Relay finds while lowering custom datatypes, Relay expects to find a
+user-defined lowering fu

[incubator-tvm-site] branch asf-site updated: Build at Fri Aug 28 21:00:04 PDT 2020

2020-08-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 65f8e83  Build at Fri Aug 28 21:00:04 PDT 2020
65f8e83 is described below

commit 65f8e83a33a960149ddfd3ec6ef54cee241ee1cf
Author: tqchen 
AuthorDate: Fri Aug 28 21:00:05 2020 -0700

Build at Fri Aug 28 21:00:04 PDT 2020
---
 atom.xml|   2 +-
 community.html  |   1 +
 images/community/cmuscs.png | Bin 0 -> 86895 bytes
 rss.xml |   4 ++--
 4 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/atom.xml b/atom.xml
index 8a638c4..9a1f6eb 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
  TVM
  https://tvm.apache.org"; rel="self"/>
  https://tvm.apache.org"/>
- 2020-08-27T16:37:12-07:00
+ 2020-08-28T21:00:02-07:00
  https://tvm.apache.org
  

diff --git a/community.html b/community.html
index fac7bb9..cee29e4 100644
--- a/community.html
+++ b/community.html
@@ -213,6 +213,7 @@ in alphabetical order.
   
   
   
+  
   
   
   
diff --git a/images/community/cmuscs.png b/images/community/cmuscs.png
new file mode 100644
index 000..e5b0ea8
Binary files /dev/null and b/images/community/cmuscs.png differ
diff --git a/rss.xml b/rss.xml
index de3f83f..3cb909c 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@
 TVM - 
 https://tvm.apache.org
 https://tvm.apache.org"; rel="self" 
type="application/rss+xml" />
-Thu, 27 Aug 2020 16:37:12 -0700
-Thu, 27 Aug 2020 16:37:12 -0700
+Fri, 28 Aug 2020 21:00:02 -0700
+Fri, 28 Aug 2020 21:00:02 -0700
 60
 
 



[incubator-tvm-site] branch master updated: Add CMU SCS logo

2020-08-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/master by this push:
 new 5950228  Add CMU SCS logo
5950228 is described below

commit 5950228aa5802023052415eafcb874835f01baf9
Author: tqchen 
AuthorDate: Fri Aug 28 20:59:52 2020 -0700

Add CMU SCS logo
---
 community.md|   1 +
 images/community/cmuscs.png | Bin 0 -> 86895 bytes
 2 files changed, 1 insertion(+)

diff --git a/community.md b/community.md
index ed38404..0d47eca 100644
--- a/community.md
+++ b/community.md
@@ -73,6 +73,7 @@ in alphabetical order.
   
   
   
+  
   
   
   
diff --git a/images/community/cmuscs.png b/images/community/cmuscs.png
new file mode 100644
index 000..e5b0ea8
Binary files /dev/null and b/images/community/cmuscs.png differ



[GitHub] [incubator-tvm] lixiaoquan commented on pull request #6289: [Relay] Enhance relay.split(), allow splitted dim to be dynamic

2020-08-28 Thread GitBox


lixiaoquan commented on pull request #6289:
URL: https://github.com/apache/incubator-tvm/pull/6289#issuecomment-683213607


   > @lixiaoquan Let's wait for the CI :)
   
   It seems CI didn't report the status. Let me trigger it again.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


comaniac commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479578714



##
File path: src/runtime/contrib/vitis_ai/vitis_ai_runtime.cc
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file vitis_ai_runtime.cc
+ */
+#include 
+#include 
+
+#include "vitis_ai_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+TVM_REGISTER_PASS_CONFIG_OPTION("target_", String);
+TVM_REGISTER_PASS_CONFIG_OPTION("vai_build_dir_", String);

Review comment:
   ditto.

##
File path: cmake/modules/contrib/VITISAI.cmake
##
@@ -0,0 +1,49 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+if(USE_VITIS_AI)
+  set(PYXIR_SHARED_LIB libpyxir.so)
+  find_package(PythonInterp 3.6 REQUIRED)
+  if(NOT PYTHON)
+    find_program(PYTHON NAMES python3 python3.6)
+  endif()
+  if(PYTHON)
+    execute_process(COMMAND "${PYTHON_EXECUTABLE}" "-c"
+      "import pyxir as px; print(px.get_include_dir()); print(px.get_lib_dir());"
+      RESULT_VARIABLE __result
+      OUTPUT_VARIABLE __output
+      OUTPUT_STRIP_TRAILING_WHITESPACE)
+
+    if(__result MATCHES 0)
+      string(REGEX REPLACE ";" "\\\\;" __values ${__output})
+      string(REGEX REPLACE "\r?\n" ";" __values ${__values})
+      list(GET __values 0 PYXIR_INCLUDE_DIR)
+      list(GET __values 1 PYXIR_LIB_DIR)
+    endif()
+
+  else()
+  message(STATUS "To find Pyxir, Python interpreter is required to be found.")

Review comment:
   indent

##
File path: src/runtime/contrib/vitis_ai/vitis_ai_runtime.cc
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file vitis_ai_runtime.cc
+ */
+#include 
+#include 
+
+#include "vitis_ai_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+TVM_REGISTER_PASS_CONFIG_OPTION("target_", String);
+TVM_REGISTER_PASS_CONFIG_OPTION("vai_build_dir_", String);
+
+std::shared_ptr<pyxir::graph::XGraph> load_xgraph_model(const std::string& model_path) {
+  std::string model_name = model_path + "/" + "dpu_xgraph.json";
+  std::string model_weights = model_path + "/" + "dpu_xgraph.h5";
+  return pyxir::load(model_name, model_weights);
+}
+
+void VitisAIRuntime::Init(const std::string& model_path, const std::string& target) {
+  model_path_ = model_path;
+  target_ = target;
+  xgraph_ = load_xgraph_model(model_path_);
+  in_tensor_names_ = xgraph_->get_input_names();
+  out_tensor_names_ = xgraph_->get_meta_attr("tvm_out_tensors").get_strings();
+  pyxir::partition(xgraph_, std::vector<std::string>{target}, "");

Review comment:
   Out of curiosity, what does this partition do?

##
File path: tests/python/contrib/test_vitis_ai_codegen.p

[GitHub] [incubator-tvm] tqchen opened a new pull request #6360: [DOCKER] Use clear name that is separate from ASF brand for cache

2020-08-28 Thread GitBox


tqchen opened a new pull request #6360:
URL: https://github.com/apache/incubator-tvm/pull/6360


   cc @tmoreau89 @ZihengJiang @yzhliu 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479572929



##
File path: src/target/llvm/codegen_amdgpu.cc
##
@@ -228,23 +233,50 @@ inline int DetectROCMApiVersion() {
   return 305;
 }
 
-runtime::Module BuildAMDGPU(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   Thanks! Updated!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


comaniac commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479572325



##
File path: src/target/llvm/codegen_amdgpu.cc
##
@@ -228,23 +233,50 @@ inline int DetectROCMApiVersion() {
   return 305;
 }
 
-runtime::Module BuildAMDGPU(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   I vote `UpdateTargetConfigKeyValueEntry` in this case.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] ZihengJiang commented on pull request #6289: [Relay] Enhance relay.split(), allow splitted dim to be dynamic

2020-08-28 Thread GitBox


ZihengJiang commented on pull request #6289:
URL: https://github.com/apache/incubator-tvm/pull/6289#issuecomment-683184105


   @lixiaoquan Let's wait for the CI :)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lixiaoquan commented on pull request #6289: [Relay] Enhance relay.split(), allow splitted dim to be dynamic

2020-08-28 Thread GitBox


lixiaoquan commented on pull request #6289:
URL: https://github.com/apache/incubator-tvm/pull/6289#issuecomment-683182527


   @ZihengJiang  Could you help to merge this? Thanks a lot



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-28 Thread GitBox


zhiics commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r479556767



##
File path: src/runtime/vm/vm.cc
##
@@ -146,12 +155,17 @@ PackedFunc VirtualMachine::GetFunction(const std::string& name,
   auto func_index = gvit->second;
   const auto& vm_func = exec_->functions[func_index];
   const auto& param_names = vm_func.params;
-  // TODO(icemelon9): For heterogeneous execution, get input device information
-  TVMContext ctx = ctxs_[0];
   CHECK_EQ(args.size() - 1, param_names.size())
       << "The number of provided parameters doesn't match the number of arguments";
+  CHECK_EQ(param_names.size(), vm_func.params_device_type.size())
+      << "The number of provided parameters doesn't match the number of assigned devices";
   std::vector<ObjectRef> func_args(param_names.size());
   for (int i = 1; i < args.size(); ++i) {
+    TVMContext ctx;
+    int device_type = vm_func.params_device_type[i - 1];
+    ctx.device_type = DLDeviceType(device_type);

Review comment:
   good point





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (c899b3c -> b368f9d)

2020-08-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c899b3c  Improve Rust bindings: Map, Array, String, various IR nodes 
(#6339)
 add b368f9d  [CMAKE] Compatible for ROCm before 3.7 (#6359)

No new revisions were added by this update.

Summary of changes:
 cmake/util/FindROCM.cmake | 5 +
 1 file changed, 5 insertions(+)



[GitHub] [incubator-tvm] tqchen merged pull request #6359: [CMAKE] Compatible for ROCm before 3.7

2020-08-28 Thread GitBox


tqchen merged pull request #6359:
URL: https://github.com/apache/incubator-tvm/pull/6359


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on a change in pull request #6357: [Torch] Add cast to double, fix flatten conversion

2020-08-28 Thread GitBox


masahi commented on a change in pull request #6357:
URL: https://github.com/apache/incubator-tvm/pull/6357#discussion_r479517945



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -996,12 +996,28 @@ def _impl(inputs, input_types):
         return _op.transform.transpose(data, axes)
     return _impl
 
+
 def _flatten():
     def _impl(inputs, input_types):
         data = inputs[0]
-        return _op.nn.batch_flatten(data)
+        start_dim = 0
+        end_dim = -1
+
+        if len(inputs) > 0:
+            start_dim = inputs[1]
+        if len(inputs) > 1:
+            end_dim = inputs[2]

Review comment:
   thanks, it's cleaner.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-28 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r479504222



##
File path: python/tvm/target/datatype.py
##
@@ -14,73 +14,153 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-"""Custom datatype functionality"""
-import tvm._ffi
+"""Bring Your Own Datatypes custom datatype framework
 
-import tvm.runtime._ffi_api
-from tvm.runtime import DataType
-import tvm.tir
-from tvm.tir.expr import Cast as _Cast, FloatImm as _FloatImm
+TODO(@gussmith23 @hypercubestart) link to BYODT docs when they exist"""
+import tvm
+from tvm.runtime import convert, DataType
+from tvm.tir.expr import (Call as _Call, Cast as _Cast,
+  FloatImm as _FloatImm, BinaryOpExpr as _BinaryOpExpr)
+from tvm.tir.op import call_pure_extern
+from tvm._ffi import register_func as _register_func
+from tvm.tir import call_intrin
 
 
 def register(type_name, type_code):
 """Register a custom datatype with the given type name and type code
-Currently, the type code is manually allocated by the user, and the
-user must ensure that no two custom types share the same code.
-Generally, this should be straightforward, as the user will be
-manually registering all of their custom types.
+
+Currently, the type code is manually allocated by the user, and the user
+must ensure that no two custom types share the same code. Generally, this
+should be straightforward, as the user will be manually registering all of
+their custom types.
+
+Example:
+
+.. code-block:: python
+
+# Register a dtype named 'posites2' under type code 130.
+tvm.datatype.register('posites2', 130)
+
 
 Parameters
 --
 type_name : str
-The name of the custom datatype
+The name of the custom datatype.
 
 type_code : int
-The type's code, which should be >= kCustomBegin
+The type's code, which should be >= kCustomBegin. See
+include/tvm/runtime/data_type.h.
 """
 tvm.runtime._ffi_api._datatype_register(type_name, type_code)
 
 
 def get_type_name(type_code):
-"""Get the type name from the type code
+"""Get the type name of a custom datatype from the type code.
+
+Note that this only works for custom datatypes registered with
+tvm.datatype.register(). It does not work for TVM-native types.
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_name(130) == 'posites2'
 
 Parameters
 --
 type_code : int
-The type code
+The type code of the custom datatype.
+
+Returns
+---
+type_name : String
+The name of the custom datatype.
+
 """
 return tvm.runtime._ffi_api._datatype_get_type_name(type_code)
 
 
 def get_type_code(type_name):
-"""Get the type code from the type name
+"""Get the type code of a custom datatype from its type name
+
+Note that this only works for custom datatypes registered with
+tvm.datatype.register(). It does not work for TVM-native types.
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_code('posites2') == 130
 
 Parameters
 --
 type_name : str
 The type name
+
+Returns
+---
+type_code : int
+The type code of the custom datatype.
 """
 return tvm.runtime._ffi_api._datatype_get_type_code(type_name)
 
 
 def get_type_registered(type_code):
-"""Get a boolean representing whether the type is registered
+"""Returns true if a custom datatype is registered under the given type 
code
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_registered(130)
 
 Parameters
 --
 type_code: int
 The type code
+
+Returns
+---
+type_registered : bool
+True if a custom datatype is registered under this type code, and false
+otherwise.
 """
 return tvm.runtime._ffi_api._datatype_get_type_registered(type_code)
 
 
-def register_op(lower_func, op_name, target, type_name, src_type_name=None):
-"""Register an external function which computes the given op.
+def register_op(lower_func,
+op_name,
+target,
+src_type_name,
+dest_type_name=None,
+intrinsic_name=None):
+"""Register a lowering function for a specific operator of a custom 
datatype
+
+At build time, Relay must lower operators over custom datatypes into

Review comment:
   TIR?





This is an automated message from the Apache Git Service.
To respond to the message, please

[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-28 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r478711025



##
File path: python/tvm/relay/frontend/change_datatype.py
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=unused-argument
+"""Change Datatype Pass"""
+from ..function import Function
+from ..expr_functor import ExprMutator
+from ..transform.transform import function_pass
+from ..expr import var, bind
+
+# TODO(@gussmith23) what's the right opt level here?
+@function_pass(opt_level=0)
+class ChangeDatatype(ExprMutator):
+    """Mutator for changing the datatype of Relay programs.
+
+    Example:
+
+    .. code-block:: python
+
+        from tvm.relay.testing.inception_v3 import get_workload
+        expr, params = get_workload()
+
+        def change_dtype(src, dst, expr, params):
+            cdtype = ChangeDatatype(src, dst)
+            expr = cdtype.visit(expr)
+            expr = relay.ir_pass.infer_type(expr)
+            params = dict((p, tvm.nd.array(params[p].asnumpy().astype(dst))) for p in params)
+            return expr, params
+    """
+    def __init__(self, src, dst):
+        self.src = src
+        self.dst = dst
+        super().__init__()
+
+    def transform_function(self, func, mod, ctx):
+        return self.visit(func)
+
+    def visit_constant(self, const):
+        if const.data.dtype == self.src:
+            return const.astype(self.dst)
+        # TODO(hypercubestart): should we raise an error in this case, or return const?
+        return const

Review comment:
   I'm having trouble thinking of a case where const.data.dtype != src. In
   our tests, the only test that uses relay.ConstantNode is test_batch_norm,
   where there is an epsilon constant and a 1f constant, but the type of
   these constants is always the same as the type of src (probably due to
   type inference).
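
   One hypothetical counterexample (illustrative, not from this PR): an
   integer index constant inside an otherwise-float32 model, e.g.

       import numpy as np
       from tvm import relay

       # With src='float32', visit_constant sees const.data.dtype == 'int32'.
       idx = relay.const(np.array([0, 1], dtype='int32'))
       x = relay.var('x', shape=(4,), dtype='float32')
       func = relay.Function([x], relay.take(x, idx))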





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs

2020-08-28 Thread GitBox


kevinthesun commented on a change in pull request #6024:
URL: https://github.com/apache/incubator-tvm/pull/6024#discussion_r479490473



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -1458,6 +1458,15 @@ def _impl(inputs, attr, params, mod):
 
         return ret
 
+        def _dyn():
+            for d in data_shape:
+                if not isinstance(d, int):
+                    return True
+            return False
+
+        if _dyn():

Review comment:
   @lixiaoquan Can you simply raise an error here and add a TODO? We can 
merge this PR so that backend changes can take effect.
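
   A minimal sketch of that suggestion (the error class is assumed from
   tvm.error; the message is illustrative):

       if _dyn():
           # TODO: support strided_slice over fully dynamic input shapes.
           raise tvm.error.OpNotImplemented(
               "strided_slice with dynamic input shape is not supported yet.")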





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (4c9a391 -> c899b3c)

2020-08-28 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 4c9a391  quanitze operation expanded to take const argument (#6127)
 add c899b3c  Improve Rust bindings: Map, Array, String, various IR nodes (#6339)

No new revisions were added by this update.

Summary of changes:
 rust/tvm-macros/src/object.rs  |   6 +-
 rust/tvm-rt/src/array.rs   |  45 +++-
 rust/tvm-rt/src/function.rs|   2 +-
 rust/tvm-rt/src/lib.rs |   1 +
 rust/tvm-rt/src/map.rs | 264 +
 rust/tvm-rt/src/object/mod.rs  |   4 +
 rust/tvm-rt/src/object/object_ptr.rs   |   4 +-
 rust/tvm-rt/src/string.rs  | 118 ++---
 rust/tvm-sys/src/datatype.rs   |   3 +-
 rust/tvm-sys/src/packed_func.rs|  31 +++
 .../bytearray_test.go => rust/tvm/src/ir/arith.rs  |  46 ++--
 rust/tvm/src/ir/mod.rs |  40 +++-
 rust/tvm/src/ir/relay/mod.rs   |  10 +-
 rust/tvm/src/ir/tir.rs |  93 
 rust/tvm/src/transform.rs  |  32 ++-
 15 files changed, 619 insertions(+), 80 deletions(-)
 create mode 100644 rust/tvm-rt/src/map.rs
 copy golang/src/bytearray_test.go => rust/tvm/src/ir/arith.rs (52%)
 create mode 100644 rust/tvm/src/ir/tir.rs



[GitHub] [incubator-tvm] jroesch merged pull request #6339: Improve Rust bindings: Map, Array, String, various IR nodes

2020-08-28 Thread GitBox


jroesch merged pull request #6339:
URL: https://github.com/apache/incubator-tvm/pull/6339


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-28 Thread GitBox


zhiics commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r479481384



##
File path: src/runtime/vm/vm.cc
##
@@ -68,8 +68,17 @@ inline ObjectRef CopyTo(ObjectRef src, const DLContext& ctx) {
 if (nd_array->ctx.device_type != ctx.device_type) {
   return nd_array.CopyTo(ctx);
 }
+return src;
+  } else {
+CHECK(src->IsInstance<ADTObj>())
+<< "VM data must be NDArray or a list of NDArray, but received: " << src->_type_key;
+std::vector<ObjectRef> ret;
+ADT adt = Downcast<ADT>(src);
+for (size_t i = 0; i < adt.size(); i++) {
+  ret.push_back(CopyTo(adt[i], ctx));
+}
+return ADT(0, ret.begin(), ret.end());

Review comment:
   Yeah, for the input we only used Tuple whose tag is 0. But you are 
right, it's better to use tag.
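   
   A hedged Python analogue of the change under discussion, preserving the 
ADT's own tag instead of hard-coding 0 (the helper name is illustrative):
   
   ```python
import tvm
from tvm.runtime.container import ADT

def copy_to(obj, ctx):
    # Recursively copy an NDArray, or an ADT of NDArrays, to ctx.
    if isinstance(obj, tvm.nd.NDArray):
        if obj.ctx.device_type != ctx.device_type:
            return obj.copyto(ctx)
        return obj
    fields = [copy_to(obj[i], ctx) for i in range(len(obj))]
    return ADT(obj.tag, fields)  # keep obj.tag rather than tuple tag 0
   ```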





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479480327



##
File path: src/target/llvm/codegen_amdgpu.cc
##
@@ -228,23 +233,50 @@ inline int DetectROCMApiVersion() {
   return 305;
 }
 
-runtime::Module BuildAMDGPU(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   Or `SetKeyValueEntry`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479479672



##
File path: src/target/llvm/codegen_amdgpu.cc
##
@@ -228,23 +233,50 @@ inline int DetectROCMApiVersion() {
   return 305;
 }
 
-runtime::Module BuildAMDGPU(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   `ProcessTarget` sounds a little bit vague, because it doesn't mention in 
which way we process it...
   
   We are dealing with a dictionary `Map`, which I call "target 
config", because it is a configuration used to generate a target, not the 
target class itself. This method aims to update one of its key-value entry. So 
what about `UpdateTargetConfigKeyValueEntry`? 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (34647ed -> 4c9a391)

2020-08-28 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 34647ed  Add docker/lint.sh, for running dockerized lint scripts locally (#6333)
 add 4c9a391  quanitze operation expanded to take const argument (#6127)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  2 +-
 tests/python/frontend/tflite/test_forward.py | 27 +++
 2 files changed, 28 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] anijain2305 commented on pull request #6127: quanitze operation expanded to take const argument

2020-08-28 Thread GitBox


anijain2305 commented on pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#issuecomment-683072852


   Thanks for the changes @d-smirnov. This is merged!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 merged pull request #6127: quanitze operation expanded to take const argument

2020-08-28 Thread GitBox


anijain2305 merged pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


comaniac commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479476578



##
File path: src/target/llvm/codegen_amdgpu.cc
##
@@ -228,23 +233,50 @@ inline int DetectROCMApiVersion() {
   return 305;
 }
 
-runtime::Module BuildAMDGPU(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   Hmmm...the reason I feel `UpdateTarget` is weird is that it's not clear 
what is being updated, since we didn't pass new values as arguments, as in 
`dict.update(key, new_value)`. Maybe `ProcessTarget` is better? People will at 
least know that we are doing something internally with this target.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479471092



##
File path: src/target/llvm/codegen_amdgpu.cc
##
@@ -228,23 +233,50 @@ inline int DetectROCMApiVersion() {
   return 305;
 }
 
-runtime::Module BuildAMDGPU(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   I feel like the word "canonicalize" is too strong, because it doesn't 
really have a canonical form. Maybe "update" is appropriate here, because what 
it does is to just update the key-value in the config. What do you think?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479471812



##
File path: src/target/llvm/llvm_module.cc
##
@@ -252,18 +251,21 @@ class LLVMModuleNode final : public runtime::ModuleNode {
   LOG(FATAL) << "Fail to load module: " << msg;
 }
 std::string target_;

Review comment:
   I changed it to "target_metadata" because it comes from the metadata 
section of an LLVM module.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479470329



##
File path: src/target/llvm/codegen_nvptx.cc
##
@@ -254,14 +254,35 @@ inline int DetectCUDAComputeVersion() {
   }
 }
 
-runtime::Module BuildNVPTX(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   Good idea. I moved it to `build_common.h`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479468506



##
File path: src/target/llvm/codegen_nvptx.cc
##
@@ -254,14 +254,35 @@ inline int DetectCUDAComputeVersion() {
   }
 }
 
-runtime::Module BuildNVPTX(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,
+   Map<String, ObjectRef>* target_config, bool error_if_inconsistent) {
+  if (target_config->count(key)) {
+const ObjectRef& obj = (*target_config)[key];
+CHECK(obj->IsInstance<StringObj>())
+<< "TypeError: In code generation for AMDGPU, expect key \"" << key

Review comment:
   Oooops





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479468055



##
File path: src/target/llvm/llvm_common.cc
##
@@ -140,16 +132,43 @@ std::unique_ptr<llvm::TargetMachine> GetLLVMTargetMachine(const std::string& tar
   }
 
   std::string err;
-  const llvm::Target* target = llvm::TargetRegistry::lookupTarget(target_triple, err);
-  if (target == nullptr) {
+  const llvm::Target* llvm_target = llvm::TargetRegistry::lookupTarget(target_triple, err);
+  if (llvm_target == nullptr) {
 CHECK(allow_null) << err << " target_triple=" << target_triple;
 return nullptr;
   }
   llvm::TargetMachine* tm =
-  target->createTargetMachine(target_triple, mcpu, mattr, opt, llvm::Reloc::PIC_);
+  llvm_target->createTargetMachine(target_triple, mcpu, mattr, opt, llvm::Reloc::PIC_);
+  return std::unique_ptr<llvm::TargetMachine>(tm);
 }
 
+std::string LLVMTargetToString(const Target& target) {
+  std::ostringstream os;
+  os << "llvm";
+  if (Optional<String> mtriple = target->GetAttr<String>("mtriple")) {
+os << " -mtriple=" << mtriple.value();
+  }
+  if (Optional<String> mcpu = target->GetAttr<String>("mcpu")) {
+os << " -mcpu=" << mcpu.value();
+  }
+  if (Optional<Array<String>> mattr = target->GetAttr<Array<String>>("mattr")) {
+bool is_first;
+os << " -mattr=";
+for (const String& attr : mattr.value()) {
+  if (is_first) {
+is_first = false;
+  } else {
+os << ",";
+  }

Review comment:
   👍





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


junrushao1994 commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479466091



##
File path: src/target/llvm/llvm_module.cc
##
@@ -252,18 +251,21 @@ class LLVMModuleNode final : public runtime::ModuleNode {
   LOG(FATAL) << "Fail to load module: " << msg;
 }
 std::string target_;

Review comment:
   This is something that confused me as well... I didn't understand why there 
is a local variable whose name ends with an underscore...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6359: [CMAKE] Compatible for ROCm before 3.7

2020-08-28 Thread GitBox


tqchen edited a comment on pull request #6359:
URL: https://github.com/apache/incubator-tvm/pull/6359#issuecomment-683028367


   cc @electriclilies @tmoreau89 @junrushao1994  @mvermeulen
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen opened a new pull request #6359: [CMAKE] Compatible for ROCm before 3.7

2020-08-28 Thread GitBox


tqchen opened a new pull request #6359:
URL: https://github.com/apache/incubator-tvm/pull/6359


   This is needed as the CI binary docker still uses ROCm before 3.7, making it 
work for both.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 commented on pull request #6251: [ONNX] Add Clip importer to handle when min/max are provided as inputs.

2020-08-28 Thread GitBox


icemelon9 commented on pull request #6251:
URL: https://github.com/apache/incubator-tvm/pull/6251#issuecomment-683019353


   @csullivan could you fix the CI?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] electriclilies commented on a change in pull request #6351: Dynamic ONNX Importer

2020-08-28 Thread GitBox


electriclilies commented on a change in pull request #6351:
URL: https://github.com/apache/incubator-tvm/pull/6351#discussion_r479459655



##
File path: tests/python/relay/test_op_level10.py
##
@@ -326,6 +326,23 @@ def verify_batch_matmul(x_shape, y_shape, out_shape, dtype="float32"):
 z = intrp.evaluate(func)(x_np, y_np)
 tvm.testing.assert_allclose(z.asnumpy(), z_np, rtol=1e-5)
 
+def verify_dynamic_batch_matmul(x_shape, y_shape, out_shape, dtype="float32"):
+x = relay.var("x", relay.TensorType(x_shape, dtype))
+y = relay.var("y", relay.TensorType((relay.Any(), ) * len(y_shape), dtype))
+z = relay.nn.batch_matmul(x, y)
+
+func = relay.Function([x, y], z)
+x_np = np.random.uniform(size=x_shape).astype(dtype)
+y_np = np.random.uniform(size=y_shape).astype(dtype)
+z_np = tvm.topi.testing.batch_matmul(x_np, y_np)
+
+for target, ctx in ctx_list():
+for kind in ["vm", "debug"]:
+mod = tvm.ir.IRModule.from_expr(func)

Review comment:
   Looks like you forgot to continue if GPU is the target hardware, which 
is causing the current CI failure
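   
   A sketch of the fix being pointed out, reusing `func`, `x_np`, `y_np`, 
`z_np` and `ctx_list` from the quoted test (the exact guard condition is an 
assumption):
   
   ```python
for target, ctx in ctx_list():
    if "cuda" in target or "opencl" in target:
        continue  # skip GPU targets; this dynamic test only runs on CPU
    for kind in ["vm", "debug"]:
        mod = tvm.ir.IRModule.from_expr(func)
        intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target)
        z = intrp.evaluate()(x_np, y_np)
        tvm.testing.assert_allclose(z.asnumpy(), z_np, rtol=1e-5)
   ```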





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-28 Thread GitBox


icemelon9 commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r479448619



##
File path: python/tvm/runtime/vm.py
##
@@ -307,8 +307,14 @@ def __init__(self, exe, ctx, memory_cfg=None):
 
 def _setup_ctx(self, ctx, memory_cfg):
 """Init context and allocators."""
-if isinstance(ctx, tvm.runtime.TVMContext):
-ctx = [ctx]
+ctxs = ctx
+if not isinstance(ctx, (list, tuple)):
+assert isinstance(ctx, tvm.runtime.TVMContext)

Review comment:
   Add an error message here

##
File path: src/runtime/vm/executable.cc
##
@@ -631,9 +653,10 @@ Instruction DeserializeInstruction(const VMInstructionSerializer& instr) {
   dtype.bits = instr.fields[3];
   dtype.lanes = instr.fields[4];
 
-  RegName dst = instr.fields[5];
+  Index device_type = instr.fields[5];
+  RegName dst = instr.fields[6];

Review comment:
   update the number of fields check in line 647

##
File path: src/runtime/vm/vm.cc
##
@@ -68,8 +68,17 @@ inline ObjectRef CopyTo(ObjectRef src, const DLContext& ctx) {
 if (nd_array->ctx.device_type != ctx.device_type) {
   return nd_array.CopyTo(ctx);
 }
+return src;
+  } else {
+CHECK(src->IsInstance<ADTObj>())
+<< "VM data must be NDArray or a list of NDArray, but received: " << src->_type_key;
+std::vector<ObjectRef> ret;
+ADT adt = Downcast<ADT>(src);
+for (size_t i = 0; i < adt.size(); i++) {
+  ret.push_back(CopyTo(adt[i], ctx));
+}
+return ADT(0, ret.begin(), ret.end());

Review comment:
   why not use `adt->tag`?

##
File path: python/tvm/runtime/vm.py
##
@@ -307,8 +307,14 @@ def __init__(self, exe, ctx, memory_cfg=None):
 
 def _setup_ctx(self, ctx, memory_cfg):
 """Init context and allocators."""
-if isinstance(ctx, tvm.runtime.TVMContext):
-ctx = [ctx]
+ctxs = ctx
+if not isinstance(ctx, (list, tuple)):
+assert isinstance(ctx, tvm.runtime.TVMContext)
+ctxs = [ctx]
+# CPU is required for executing shape functions
+if ctx.device_type != tvm.cpu(0).device_type:

Review comment:
   probably check all ctxs to see if there is a cpu ctx.
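   
   For example, a hedged sketch of that check:
   
   ```python
import tvm

def check_has_cpu_ctx(ctxs):
    # Shape functions execute on CPU, so at least one CPU context is needed.
    cpu_type = tvm.cpu(0).device_type
    if not any(c.device_type == cpu_type for c in ctxs):
        raise ValueError("a CPU context is required to execute shape functions")
   ```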

##
File path: src/runtime/vm/vm.cc
##
@@ -164,18 +178,15 @@ PackedFunc VirtualMachine::GetFunction(const std::string& name,
   }
 }
 
-TVMContext VirtualMachine::GetParamsContext() const {
+TVMContext VirtualMachine::GetContext(Index device_type) const {

Review comment:
   similar here for this function

##
File path: src/runtime/vm/vm.cc
##
@@ -146,12 +155,17 @@ PackedFunc VirtualMachine::GetFunction(const std::string& name,
   auto func_index = gvit->second;
   const auto& vm_func = exec_->functions[func_index];
   const auto& param_names = vm_func.params;
-  // TODO(icemelon9): For heterogeneous execution, get input device information
-  TVMContext ctx = ctxs_[0];
   CHECK_EQ(args.size() - 1, param_names.size())
   << "The number of provided parameters doesn't match the number of arguments";
+  CHECK_EQ(param_names.size(), vm_func.params_device_type.size())
+  << "The number of provided parameters doesn't match the number of assigned devices";
   std::vector<ObjectRef> func_args(param_names.size());
   for (int i = 1; i < args.size(); ++i) {
+TVMContext ctx;
+int device_type = vm_func.params_device_type[i - 1];
+ctx.device_type = DLDeviceType(device_type);

Review comment:
   We should create a map from device type to ctx in the `Init`. So here we 
can just look up the corresponding context.
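   
   A Python sketch of that lookup (the map would be built once in `Init`; the 
names are illustrative):
   
   ```python
import tvm

ctxs = [tvm.cpu(0), tvm.gpu(0)]
ctx_map = {c.device_type: c for c in ctxs}  # built once at Init time

def get_context(device_type):
    # Resolve each parameter's context in O(1) instead of scanning ctxs.
    if device_type not in ctx_map:
        raise ValueError("no context initialized for device type %d" % device_type)
    return ctx_map[device_type]
   ```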





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-28 Thread GitBox


comaniac commented on a change in pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#discussion_r479451668



##
File path: src/target/llvm/llvm_common.cc
##
@@ -140,16 +132,43 @@ std::unique_ptr<llvm::TargetMachine> GetLLVMTargetMachine(const std::string& tar
   }
 
   std::string err;
-  const llvm::Target* target = llvm::TargetRegistry::lookupTarget(target_triple, err);
-  if (target == nullptr) {
+  const llvm::Target* llvm_target = llvm::TargetRegistry::lookupTarget(target_triple, err);
+  if (llvm_target == nullptr) {
 CHECK(allow_null) << err << " target_triple=" << target_triple;
 return nullptr;
   }
   llvm::TargetMachine* tm =
-  target->createTargetMachine(target_triple, mcpu, mattr, opt, llvm::Reloc::PIC_);
+  llvm_target->createTargetMachine(target_triple, mcpu, mattr, opt, llvm::Reloc::PIC_);
   return std::unique_ptr<llvm::TargetMachine>(tm);
 }
 
+std::string LLVMTargetToString(const Target& target) {
+  std::ostringstream os;
+  os << "llvm";
+  if (Optional<String> mtriple = target->GetAttr<String>("mtriple")) {
+os << " -mtriple=" << mtriple.value();
+  }
+  if (Optional<String> mcpu = target->GetAttr<String>("mcpu")) {
+os << " -mcpu=" << mcpu.value();
+  }
+  if (Optional<Array<String>> mattr = target->GetAttr<Array<String>>("mattr")) {
+bool is_first;
+os << " -mattr=";
+for (const String& attr : mattr.value()) {
+  if (is_first) {
+is_first = false;
+  } else {
+os << ",";
+  }

Review comment:
   ```suggestion
 if (!is_first) {
   os << ',';
 }
 is_first = false;
   ```

##
File path: src/target/llvm/codegen_amdgpu.cc
##
@@ -228,23 +233,50 @@ inline int DetectROCMApiVersion() {
   return 305;
 }
 
-runtime::Module BuildAMDGPU(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   This is more like "canonicalize" instead of "update". What do you think?

##
File path: src/target/llvm/codegen_nvptx.cc
##
@@ -254,14 +254,35 @@ inline int DetectCUDAComputeVersion() {
   }
 }
 
-runtime::Module BuildNVPTX(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,
+   Map<String, ObjectRef>* target_config, bool error_if_inconsistent) {
+  if (target_config->count(key)) {
+const ObjectRef& obj = (*target_config)[key];
+CHECK(obj->IsInstance<StringObj>())
+<< "TypeError: In code generation for AMDGPU, expect key \"" << key

Review comment:
   AMDGPU?

##
File path: src/target/llvm/llvm_module.cc
##
@@ -252,18 +251,21 @@ class LLVMModuleNode final : public runtime::ModuleNode {
   LOG(FATAL) << "Fail to load module: " << msg;
 }
 std::string target_;

Review comment:
   Better to use other names...

##
File path: src/target/llvm/codegen_nvptx.cc
##
@@ -254,14 +254,35 @@ inline int DetectCUDAComputeVersion() {
   }
 }
 
-runtime::Module BuildNVPTX(IRModule mod, std::string target) {
+static void UpdateTargetConfig(const String& key, const String& value,

Review comment:
   If this function can be used, we should move it to be a target utility 
function.

##
File path: src/target/llvm/llvm_common.cc
##
@@ -58,53 +59,45 @@ void InitializeLLVM() {
   }
 }
 
-void ParseLLVMTargetOptions(const std::string& target_str, std::string* triple, std::string* mcpu,
+void ParseLLVMTargetOptions(const Target& target, std::string* triple, std::string* mcpu,
 std::string* mattr, llvm::TargetOptions* options) {
-  // setup target triple
-  size_t start = 0;
-  if (target_str.length() >= 4 && target_str.substr(0, 4) == "llvm") {
-start = 4;
-  }
   // simple parser
   triple->resize(0);
   mcpu->resize(0);
   mattr->resize(0);
-
   bool soft_float_abi = false;
-  std::string key, value;
-  std::istringstream is(target_str.substr(start, target_str.length() - start));
-  while (is >> key) {
-if (key == "-system-lib" || key == "-system-lib=0" || key == "-system-lib=1") {
-  continue;
-}
-size_t pos = key.find('=');
-if (pos != std::string::npos) {
-  CHECK_GE(key.length(), pos + 1) << "invalid argument " << key;
-  value = key.substr(pos + 1, key.length() - 1);
-  key = key.substr(0, pos);
-} else {
-  CHECK(is >> value) << "Unspecified value for option " << key;
+  if (const Optional<String>& v = target->GetAttr<String>("mtriple")) {
+*triple = v.value();
+  }
+  if (const Optional<String>& v = target->GetAttr<String>("mcpu")) {
+*mcpu = v.value();
+  }
+  if (const Optional<Array<String>>& v = target->GetAttr<Array<String>>("mattr")) {
+std::ostringstream os;
+bool is_first = true;
+for (const String& s : v.value()) {
+  if (is_first) {
+is_first = false;
+  } else {
+os << ',';
+  }

Review comment:
   ```suggestion
 if (!is_first) {
   os << ',';
 }
 is_first = false;
   ```





[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


comaniac commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479420731



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -17,6 +17,74 @@
 """
 Common utility functions shared by TVMC modules.
 """
+import argparse
+import re
+
+from tvm import relay
+from tvm import transform
 
 class TVMCException(Exception):
 """TVMC Exception"""
+
+
+def convert_graph_layout(mod, desired_layout):
+"""Alter the layout of the input graph.
+
+Parameters
+--
+mod : tvm.relay.Module
+The relay module to convert.
+desired_layout : str
+The layout to convert to.
+
+Returns
+---
+mod : tvm.relay.Module
+The converted module.
+"""
+
+# Assume for the time being that graphs only have
+# conv2d as heavily-sensitive operators.
+desired_layouts = {
+"nn.conv2d": [desired_layout, "default"],
+"qnn.conv2d": [desired_layout, "default"],
+}
+
+# Convert the layout of the graph where possible.
+seq = transform.Sequential(
+[
+relay.transform.RemoveUnusedFunctions(),
+relay.transform.ConvertLayout(desired_layouts),
+]
+)
+with transform.PassContext(opt_level=3):
+return seq(mod)
+
+
+def parse_input_shapes(shapes_str):
+""" Parsing function for tensor shape syntax. """
+shapes = []
+# Split up string into comma separated sections ignoring commas in ()s
+match = re.findall(r"(\(.*?\)|.+?),?", shapes_str)
+if match:
+for inp in match:
+# Test for and remove brackets
+shape = re.match(r"\((.*)\)", inp)
+if shape and shape.lastindex == 1:
+# Remove white space and extract numbers
+strshape = shape[1].replace(" ", "").split(",")
+try:
+shapes.append([int(i) for i in strshape])
+except ValueError:
+raise argparse.ArgumentTypeError(
+f"expected numbers in shape '{shape[1]}'"

Review comment:
   Per https://github.com/apache/incubator-tvm/pull/4250, we don't use 
f-strings for now.
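   
   For example, the same message without an f-string (the helper name is 
illustrative):
   
   ```python
import argparse

def bad_shape(shape_str):
    # str.format() instead of an f-string, per the project convention
    raise argparse.ArgumentTypeError(
        "expected numbers in shape '{}'".format(shape_str))
   ```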

##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,305 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import logging
+import os.path
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm.contrib import cc
+from tvm.contrib import util
+
+from . import common, frontends
+from .main import register_parser
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to generate target libraries, e.g. 
'aarch64-linux-gnu-gcc'",
+)
+parser.add_argument(
+"--dump-code",
+metavar="FORMAT",
+default="",
+help="comma separarated list of formats to export, e.g. 'asm,ll,relay' 
"
+)
+parser.add_argument(
+"--model-format",
+choices=frontends.get_frontends(),
+help="specify input model format",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="module.tar",
+help="output the compiled module to an archive",
+)
+parser.add_argument(
+"--target",
+help="compilation target as plain string, inline JSON or path to a 
JSON file",
+required=True
+)
+parser.add_argument(
+"--tuning-records",
+metavar="PATH",
+default="",
+help="path to an auto-tuning log file from AutoTVM"
+)
+parser.add_argument(
+"--desired-layout",
+choices=["NCHW", "NHWC"],
+default=None,
+  

[GitHub] [incubator-tvm] tqchen closed issue #6354: Performance of same op and workload in different model varies differently.

2020-08-28 Thread GitBox


tqchen closed issue #6354:
URL: https://github.com/apache/incubator-tvm/issues/6354


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #6354: Performance of same op and workload in different model varies differently.

2020-08-28 Thread GitBox


tqchen commented on issue #6354:
URL: https://github.com/apache/incubator-tvm/issues/6354#issuecomment-682973939


   Thanks @nolanliou , this is indeed an interesting observation. It might 
relate to other ops as well. It would be great if we can move some of the 
troubleshooting discussion to a https://discuss.tvm.ai/ thread, and print out 
the relay model to check their differences.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #6356: Perf tools to further improve the performance of given schedule.

2020-08-28 Thread GitBox


tqchen commented on issue #6356:
URL: https://github.com/apache/incubator-tvm/issues/6356#issuecomment-682972283


   Thanks @xutianming , let us follow up in 
https://discuss.tvm.ai/t/how-to-further-improve-the-performance-of-given-schedule/7711/2



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen closed issue #6356: Perf tools to further improve the performance of given schedule.

2020-08-28 Thread GitBox


tqchen closed issue #6356:
URL: https://github.com/apache/incubator-tvm/issues/6356


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tmoreau89 commented on pull request #6333: Add docker/lint.sh, for running dockerized lint scripts locally

2020-08-28 Thread GitBox


tmoreau89 commented on pull request #6333:
URL: https://github.com/apache/incubator-tvm/pull/6333#issuecomment-682836102


   Thanks @areusch , @leandron , @tkonolige , @junrushao1994, the PR has been 
merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (02b643b -> 34647ed)

2020-08-28 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 02b643b  typo (#6352)
 add 34647ed  Add docker/lint.sh, for running dockerized lint scripts locally (#6333)

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile|   2 +
 Makefile   |  10 +-
 docker/bash.sh |  25 ++-
 ...nstall_ethosn_driver_stack.sh => dev_common.sh} |  54 +++
 docker/lint.sh |  78 +
 docs/contribute/pull_request.rst   |  16 +-
 .../task_lint.sh => lint/check_asf_header.sh}  |  64 +++-
 .../lint/clang_format.sh   |  11 +-
 .../tvm/target/arm_isa.py => tests/lint/cppdocs.sh |  25 ++-
 .../CODEGENC.cmake => tests/lint/cpplint.sh|   7 +-
 tests/lint/filter_untracked.py |  71 +
 .../ubuntu_install_dgl.sh => tests/lint/jnilint.sh |   7 +-
 conda/tvm/build.sh => tests/lint/pylint.sh |   9 +-
 tests/python/unittest/test_filter_untracked.py | 177 +
 tests/scripts/task_lint.sh |  34 +---
 15 files changed, 443 insertions(+), 147 deletions(-)
 copy docker/{install/ubuntu_install_ethosn_driver_stack.sh => dev_common.sh} (52%)
 mode change 100755 => 100644
 create mode 100755 docker/lint.sh
 copy tests/{scripts/task_lint.sh => lint/check_asf_header.sh} (55%)
 copy conda/conda_build_config.yaml => tests/lint/clang_format.sh (80%)
 mode change 100644 => 100755
 copy python/tvm/target/arm_isa.py => tests/lint/cppdocs.sh (66%)
 mode change 100644 => 100755
 copy cmake/modules/contrib/CODEGENC.cmake => tests/lint/cpplint.sh (79%)
 mode change 100644 => 100755
 create mode 100644 tests/lint/filter_untracked.py
 copy docker/install/ubuntu_install_dgl.sh => tests/lint/jnilint.sh (90%)
 mode change 100644 => 100755
 copy conda/tvm/build.sh => tests/lint/pylint.sh (84%)
 mode change 100644 => 100755
 create mode 100644 tests/python/unittest/test_filter_untracked.py



[GitHub] [incubator-tvm] tmoreau89 merged pull request #6333: Add docker/lint.sh, for running dockerized lint scripts locally

2020-08-28 Thread GitBox


tmoreau89 merged pull request #6333:
URL: https://github.com/apache/incubator-tvm/pull/6333


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Beya2019 opened a new pull request #6358: [Relay] add conv2d_transpose alter layout

2020-08-28 Thread GitBox


Beya2019 opened a new pull request #6358:
URL: https://github.com/apache/incubator-tvm/pull/6358


   RFC: https://github.com/apache/incubator-tvm/pull/4335
 https://discuss.tvm.ai/t/layout-conversion-pass/4009
   
   Add conv2d_transpose convert_op_layout support and a related test case in 
test_pass_convert_op_layout.py.
   
   Would you please have a look at this @yzhliu  @ZihengJiang @anijain2305 
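   
   A minimal sketch of the new conversion, assuming the pass is driven through 
the same desired-layouts dictionary as the existing conv2d handling:
   
   ```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 3, 224, 224))
w = relay.var("w", shape=(3, 16, 3, 3))
y = relay.nn.conv2d_transpose(x, w, channels=16, kernel_size=(3, 3))
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))
# Convert the transposed convolution from NCHW to NHWC where possible.
mod = relay.transform.ConvertLayout(
    {"nn.conv2d_transpose": ["NHWC", "default"]})(mod)
   ```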
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479369876



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="a.tar",
+help="output the compiled module to an archive",
+)
+parser.add_argument(
+"--sanitize-diagnostics",
+action="store_true",
+default=True,

Review comment:
   This is now removed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479369467



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",

Review comment:
   I'm clarifying this and will reply soon.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479369247



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="a.tar",

Review comment:
   I called it `module.tar` now, but I'm not very happy with that name. Do 
you have a suggestion?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479368959



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",

Review comment:
   Example added.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479368782



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="a.tar",
+help="output the compiled module to an archive",
+)
+parser.add_argument(
+"--sanitize-diagnostics",
+action="store_true",
+default=True,
+dest="sanitize_diagnostics",
+help="enable diagnostic sanitization",
+)
+parser.add_argument(
+"--no-sanitize-diagnostics",
+action="store_false",
+dest="sanitize_diagnostics",
+help="disable diagnostic sanitization",
+)
+parser.add_argument(
+"--target",
+type=parse_target,
+action="append",
+metavar="TARGET[,TARGET]...",
+help=f"compilation target(s): {', '.join(VALID_TARGETS)}, default 
llvm",
+)
+parser.add_argument("--tuner-file", default="", help="tuner file")
+parser.add_argument(
+"--alter-layout",

Review comment:
   It is now called `--desired-layout`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479368531



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="a.tar",
+help="output the compiled module to an archive",
+)
+parser.add_argument(
+"--sanitize-diagnostics",
+action="store_true",
+default=True,
+dest="sanitize_diagnostics",
+help="enable diagnostic sanitization",
+)
+parser.add_argument(
+"--no-sanitize-diagnostics",
+action="store_false",
+dest="sanitize_diagnostics",
+help="disable diagnostic sanitization",
+)
+parser.add_argument(
+"--target",

Review comment:
   We now have only one target that connects directly to the Python target 
API from #6315.
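
   For reference, a minimal sketch of how a single free-form target string can
hand off to the target API (illustrative only; the exact wiring from #6315 may
differ):

   ```python
   import argparse

   import tvm

   parser = argparse.ArgumentParser()
   # A single free-form target string; validation is delegated to the target
   # API instead of a hand-maintained VALID_TARGETS list.
   parser.add_argument("--target", required=True, help="compilation target")
   args = parser.parse_args(["--target", "llvm -mcpu=skylake"])

   # tvm.target.Target parses and canonicalizes the string.
   target = tvm.target.Target(args.target)
   print(target)
   ```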





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479367990



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"

Review comment:
   `--target` is a required argument now.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r479367226



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}

Review comment:
   TODO added





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anilmartha commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


anilmartha commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479365062



##
File path: src/runtime/contrib/vitis_ai/vitis_ai_runtime.cc
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file vitis_ai_runtime.cc
+ */
+#include 
+#include 
+
+#include "vitis_ai_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+TVM_REGISTER_PASS_CONFIG_OPTION("target_", String);

Review comment:
   Sure. We will use something similar to relay.ext.ethos-n.options in our pass config.
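
   A hedged sketch of how a namespaced option could then be supplied from
Python (the `relay.ext.vitis_ai.options.*` keys below are hypothetical and
would first have to be registered on the C++ side with
TVM_REGISTER_PASS_CONFIG_OPTION):

   ```python
   import tvm
   from tvm import relay

   # Hypothetical namespaced keys, mirroring the relay.ext.ethos-n.options style.
   vitis_config = {
       "relay.ext.vitis_ai.options.target": "DPUCADX8G",
       "relay.ext.vitis_ai.options.build_dir": "/tmp/vai_build",
   }

   def build_with_vitis_ai(mod, params):
       # PassContext carries the config down to the codegen passes; unknown
       # keys are rejected unless they have been registered.
       with tvm.transform.PassContext(opt_level=3, config=vitis_config):
           return relay.build(mod, target="llvm", params=params)
   ```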





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6357: [Torch] Add cast to double, fix flatten conversion

2020-08-28 Thread GitBox


leandron commented on a change in pull request #6357:
URL: https://github.com/apache/incubator-tvm/pull/6357#discussion_r479344758



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -996,12 +996,28 @@ def _impl(inputs, input_types):
 return _op.transform.transpose(data, axes)
 return _impl
 
+
 def _flatten():
 def _impl(inputs, input_types):
 data = inputs[0]
-return _op.nn.batch_flatten(data)
+start_dim = 0
+end_dim = -1
+
+if len(inputs) > 1:
+start_dim = inputs[1]
+if len(inputs) > 2:
+end_dim = inputs[2]

Review comment:
   minor pythonic suggestion here: 
   ```
   start_dim = inputs[1] if len(inputs) > 1 else 0
   end_dim = inputs[2] if len(inputs) > 2 else -1
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on pull request #6355: [BYOC][ETHOSN] Introduce further operator support

2020-08-28 Thread GitBox


mbaret commented on pull request #6355:
URL: https://github.com/apache/incubator-tvm/pull/6355#issuecomment-682599692


   cc @Leo-arm @masahi @zhiics @comaniac 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


mbaret commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479200937



##
File path: src/runtime/contrib/vitis_ai/vitis_ai_runtime.cc
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file vitis_ai_runtime.cc
+ */
+#include 
+#include 
+
+#include "vitis_ai_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+TVM_REGISTER_PASS_CONFIG_OPTION("target_", String);

Review comment:
   I think this is too non-specific: it's not a generic target parameter but
one very specific to Vitis-AI. We've used 'relay.ext.ethos-n.options' to
namespace our pass config; perhaps you could use something similar?

##
File path: src/runtime/contrib/vitis_ai/vitis_ai_runtime.cc
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file vitis_ai_runtime.cc
+ */
+#include 
+#include 
+
+#include "vitis_ai_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+TVM_REGISTER_PASS_CONFIG_OPTION("target_", String);
+TVM_REGISTER_PASS_CONFIG_OPTION("vai_build_dir_", String);
+
+std::shared_ptr load_xgraph_model(const std::string& model_path) {
+  std::string model_name = model_path + "/" + "dpu_xgraph.json";
+  std::string model_weights = model_path + "/" + "dpu_xgraph.h5";
+  return pyxir::load(model_name, model_weights);
+}

Review comment:
   This seems quite fragile to me. Is there a way you can stream these files
into a binary artifact? That could then be built into the .so, and you wouldn't
need to keep track of model paths.

##
File path: tests/python/contrib/test_vitis_ai_codegen.py
##
@@ -0,0 +1,203 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, W0611
+"""Vitis-AI codegen tests."""
+
+import numpy as np
+
+import tvm
+from tvm import relay
+from tvm.relay import transform
+from tvm.relay.op.contrib.vitis_ai import annotation
+from tvm.contrib.target import vitis_ai
+
+import pyxir
+import pyxir.contrib.target.DPUCADX8G
+
+def set_func_attr(func, compile_name, symbol_name):
+func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Compiler", compile_name)
+func = func.with_attr("global_symbol", symbol_name)
+return func
+
+def _create_graph():
+shape = (10, 10)
+mod = tvm.IRModule()
+x = relay.var('x', shape=shape)
+y = relay.var('y', shape=shape)
+z = x + x
+p = y * y
+func = relay.Function([x, y], p - z)
+mod["main"] = func
+params = {}
+params["x"] = np.

[GitHub] [incubator-tvm] masahi opened a new pull request #6357: [Torch] Add cast to double, fix flatten conversion

2020-08-28 Thread GitBox


masahi opened a new pull request #6357:
URL: https://github.com/apache/incubator-tvm/pull/6357


   This is another fix to support the hummingbird project.
   
   * Cast to double was missing.
   * Conversion of `torch.flatten` was wrong and not tested.
   
   please review @siju-samuel 
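
   For context, `torch.flatten` takes optional `start_dim`/`end_dim` arguments
which the converter has to honor (a quick illustration, not code from this PR):

   ```python
   import torch

   x = torch.randn(2, 3, 4, 5)
   # The default flattens everything into one dimension.
   assert torch.flatten(x).shape == (120,)
   # start_dim/end_dim flatten only a contiguous range of dimensions.
   assert torch.flatten(x, start_dim=1).shape == (2, 60)
   assert torch.flatten(x, start_dim=1, end_dim=2).shape == (2, 12, 5)
   ```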



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] xutianming commented on issue #6354: Performance of same op and workload in different model varies differently.

2020-08-28 Thread GitBox


xutianming commented on issue #6354:
URL: https://github.com/apache/incubator-tvm/issues/6354#issuecomment-682582033


   I have a similar question. The same model written in PyTorch (BERT, for
example) seems to be slower than its TensorFlow counterpart.
   As for the Relay IR, models from different frontends should be the same.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] xutianming opened a new issue #6356: Perf tools to further improve the performance of given schedule.

2020-08-28 Thread GitBox


xutianming opened a new issue #6356:
URL: https://github.com/apache/incubator-tvm/issues/6356


   Dear developers,
   
   I was optimizing a TextCNN model with TVM on Intel x86.
   I wrote my own Conv1D NCWc schedule based on Conv2D.
   
   
![image](https://user-images.githubusercontent.com/4970790/91565965-8fba0600-e975-11ea-85a4-83d66b2cbdf3.png)
   
   The TVM stack only has operator-level performance tools.
   **How can I further locate the hot spot within an operator?**
   
   I tried gdb, and the ZMM registers were utilized.
   
![image](https://user-images.githubusercontent.com/4970790/91566257-f808e780-e975-11ea-9c36-abc16099fb04.png)
   
   I also tried linux-perf, but didn't get much insight.
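
   For example, benchmarking the kernel standalone keeps the process dominated
by a single loop nest, so `perf record -g`/`perf annotate` samples map straight
onto it. A minimal harness with a trivial schedule (illustrative only, not my
actual Conv1D one):

   ```python
   import numpy as np

   import tvm
   from tvm import te

   # Build one kernel in isolation.
   n = 1 << 20
   A = te.placeholder((n,), name="A")
   B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
   s = te.create_schedule(B.op)
   func = tvm.build(s, [A, B], target="llvm")

   ctx = tvm.cpu(0)
   a = tvm.nd.array(np.random.rand(n).astype("float32"), ctx)
   b = tvm.nd.array(np.zeros(n, dtype="float32"), ctx)

   # time_evaluator loops the kernel long enough for a profiler to sample it.
   timer = func.time_evaluator(func.entry_name, ctx, number=100, repeat=10)
   print("mean time (s):", timer(a, b).mean)
   ```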
   
   
   I asked the same question on [TVM discuss](https://discuss.tvm.ai/t/how-to-further-improve-the-performance-of-given-schedule/7711), but got no reply.
   
   Sincerely hope for your reply.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Wheest commented on a change in pull request #6137: Better grouped convolution for CPU targets

2020-08-28 Thread GitBox


Wheest commented on a change in pull request #6137:
URL: https://github.com/apache/incubator-tvm/pull/6137#discussion_r479231672



##
File path: topi/python/topi/arm_cpu/group_conv2d.py
##
@@ -0,0 +1,310 @@
+import tvm
+from tvm import autotvm
+from tvm import te
+from ..util import get_const_tuple
+from ..nn.pad import pad
+from .. import tag
+
+from ..nn.conv2d import group_conv2d_nchw
+from ..nn.util import infer_pad
+from ..nn.conv2d import _get_workload as _get_conv2d_workload
+
+from tvm.autotvm.task.space import SplitEntity, OtherOptionEntity
+
+
+def group_conv2d_nchw(data, kernel, strides, padding, dilation, groups,
+  out_dtype):
+"""Compute group_conv2d with NCHW layout"""
+return group_conv2d_nchw_spatial_pack(data, kernel, strides, padding,
+  dilation, groups, out_dtype)
+
+
+def schedule_group_conv2d_nchw(outs):
+"""Compute group_conv2d with NCHW layout"""
+return schedule_group_conv2d_nchwc(outs)
+
+
+def _get_default_config(cfg, data, kernel, strides, padding, groups, out_dtype,
+layout='NCHW'):
+"""
+Get default schedule config for the workload
+"""
+static_data_shape = []
+for dim in get_const_tuple(data.shape):
+if isinstance(dim, tvm.tir.Var):
+static_data_shape.append(1)
+else:
+static_data_shape.append(dim)
+data = te.placeholder(static_data_shape, dtype=data.dtype)
+
+wkl = _get_conv2d_workload(data, kernel, strides, padding, out_dtype,
+   layout)
+_fallback_schedule(cfg, wkl)
+
+
+def _fallback_schedule(cfg, wkl):
+simd_width = 4 # assume ARM SIMD Width is 4
+HPAD, WPAD = wkl.hpad, wkl.wpad
+HSTR, WSTR = wkl.hstride, wkl.wstride
+out_width = (wkl.width + 2 * WPAD - wkl.wkernel) // WSTR + 1
+G = wkl.groups
+KPG = wkl.out_filter // G
+CPG = wkl.in_filter // G
+oc_bn = 1
+
+for bn in range(simd_width, 0, -1):
+if KPG % bn == 0:
+oc_bn = bn
+break
+
+ic_bn = 1
+for bn in range(oc_bn, 0, -1):
+if CPG % bn == 0:
+ic_bn = bn
+break
+
+reg_n = 1
+for n in range(31, 0, -1):
+if out_width % n == 0:
+reg_n = n
+break
+
+cfg["tile_ic"] = SplitEntity([wkl.in_filter // ic_bn, ic_bn])
+cfg["tile_oc"] = SplitEntity([wkl.out_filter // oc_bn, oc_bn])
+cfg["tile_ow"] = SplitEntity([out_width // reg_n, reg_n])
+cfg["unroll_kw"] = OtherOptionEntity(False)
+
+
+@autotvm.register_topi_compute("group_conv2d_nchw.arm_cpu")
+def group_conv2d_nchw_spatial_pack(cfg, data, kernel, strides, padding,
+   dilation, groups, out_dtype='float32'):
+assert isinstance(dilation, int) or len(dilation) == 2
+if isinstance(dilation, int):
+dilation_h, dilation_w = dilation, dilation
+else:
+dilation_h, dilation_w = dilation
+
+assert isinstance(padding, int) or len(padding) == 2 or len(padding) == 4
+if isinstance(padding, int):
+HPAD, WPAD = padding, padding
+elif len(padding) == 2:
+HPAD, WPAD = padding
+else:
+HPAD, _, WPAD, _ = padding

Review comment:
   I've got a suggestion for extending `_get_workload`. In the new commit
`505c127` I've added a second workload type, `Workload_asym`; `_get_workload`
takes an optional argument `asymmetric_pad` which makes it return this workload
instead. Ideally, the old conv2d `Workload` would be deprecated so that all
conv2d workloads support asymmetric padding.
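
   A rough sketch of the shape of that change (field names are assumed here,
not copied from `505c127`):

   ```python
   from collections import namedtuple

   # Existing symmetric-padding workload, abridged to the relevant fields.
   Workload = namedtuple("Workload", ["hpad", "wpad", "hstride", "wstride"])

   # New workload carrying all four padding values independently.
   Workload_asym = namedtuple(
       "Workload_asym",
       ["hpad_top", "hpad_bottom", "wpad_left", "wpad_right",
        "hstride", "wstride"],
   )

   def _get_workload(padding, strides, asymmetric_pad=False):
       """Return the asymmetric variant only when requested, so existing
       conv2d callers keep receiving the old Workload."""
       hstr, wstr = strides
       if asymmetric_pad:
           top, left, bottom, right = padding
           return Workload_asym(top, bottom, left, right, hstr, wstr)
       hpad, wpad = padding
       return Workload(hpad, wpad, hstr, wstr)

   print(_get_workload((1, 1), (2, 2)))
   print(_get_workload((1, 0, 1, 0), (2, 2), asymmetric_pad=True))
   ```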





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anilmartha commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


anilmartha commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479188637



##
File path: python/tvm/relay/op/contrib/vitis_ai.py
##
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument, no-else-return, E1102
+"""VITISAI codegen supported operators."""
+
+import numpy as np
+
+from tvm import relay
+import tvm._ffi
+from tvm.relay.expr import Tuple, TupleGetItem
+from tvm.relay import transform
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+
+import pyxir
+import pyxir.frontend.tvm
+
+
+@transform.function_pass(opt_level=0)
+class VitisAIAnnotationPass:

Review comment:
   Yes. Once we have support for multiple subgraphs, we can use the standard
annotation pass.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


mbaret commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479182496



##
File path: python/tvm/relay/op/contrib/vitis_ai.py
##
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument, no-else-return, E1102
+"""VITISAI codegen supported operators."""
+
+import numpy as np
+
+from tvm import relay
+import tvm._ffi
+from tvm.relay.expr import Tuple, TupleGetItem
+from tvm.relay import transform
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+
+import pyxir
+import pyxir.frontend.tvm
+
+
+@transform.function_pass(opt_level=0)
+class VitisAIAnnotationPass:

Review comment:
   Ah I see. Would I be right in understanding then that you could switch 
to the standard annotation mechanism once you have support for multiple 
subgraphs?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anilmartha commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


anilmartha commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479179663



##
File path: python/tvm/relay/op/contrib/vitis_ai.py
##
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument, no-else-return, E1102
+"""VITISAI codegen supported operators."""
+
+import numpy as np
+
+from tvm import relay
+import tvm._ffi
+from tvm.relay.expr import Tuple, TupleGetItem
+from tvm.relay import transform
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+
+import pyxir
+import pyxir.frontend.tvm
+
+
+@transform.function_pass(opt_level=0)
+class VitisAIAnnotationPass:

Review comment:
   @mbaret 
   We could use the standard op-based annotation, but our DPU supports only
one subgraph per model at this point in time. If we use op-based annotations
for YOLOv3-style networks, multiple subgraphs are generated. To avoid
generating multiple subgraphs, we use a custom annotation pass.
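
   A rough sketch of what such a whole-function annotator can look like
(illustrative only, not the actual Vitis-AI pass):

   ```python
   import tvm
   from tvm import relay
   from tvm.relay.op.annotation import compiler_begin, compiler_end

   @relay.transform.function_pass(opt_level=0)
   class WholeFuncAnnotator:
       """Wrap an entire function in a single compiler region so that graph
       partitioning yields exactly one external subgraph."""

       def __init__(self, compiler):
           self.compiler = compiler

       def transform_function(self, func, mod, ctx):
           # Annotate every input and the single output, leaving the
           # partitioner no seam to split the region on.
           new_body = relay.bind(
               func.body,
               {p: compiler_begin(p, self.compiler) for p in func.params},
           )
           return relay.Function(func.params,
                                 compiler_end(new_body, self.compiler))
   ```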





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6343: [BYOC][CONTRIB] Vitis-AI codegen integration

2020-08-28 Thread GitBox


mbaret commented on a change in pull request #6343:
URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479148460



##
File path: python/tvm/relay/op/contrib/vitis_ai.py
##
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument, no-else-return, E1102
+"""VITISAI codegen supported operators."""
+
+import numpy as np
+
+from tvm import relay
+import tvm._ffi
+from tvm.relay.expr import Tuple, TupleGetItem
+from tvm.relay import transform
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+
+import pyxir
+import pyxir.frontend.tvm
+
+
+@transform.function_pass(opt_level=0)
+class VitisAIAnnotationPass:

Review comment:
   I'm interested as to why you use a custom annotation pass here. Is there 
something we could do to improve the standard annotation passes to make it work 
for your use case?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret opened a new pull request #6355: [BYOC][ETHOSN] Introduce further operator support

2020-08-28 Thread GitBox


mbaret opened a new pull request #6355:
URL: https://github.com/apache/incubator-tvm/pull/6355


   This PR introduces support for the following operators via the Ethos-N 
codegen:
- Quantized Fully Connected
- Quantized Addition
- Depth-to-space
- Max/Avg Pool 2D
- Quantized Relu (Clip)
- Reshape
- Quantized Sigmoid
   
   Additionally, tests for MobileNet, InceptionV3/V4 and SSD-MobileNet are added.
   
   Co-authored-by: Leo Blonk 
   Co-authored-by: Tristan O'Connor 
   Co-authored-by: Ramana Radhakrishnan 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] nolanliou opened a new issue #6354: Performance of same op and workload in different model varies differently.

2020-08-28 Thread GitBox


nolanliou opened a new issue #6354:
URL: https://github.com/apache/incubator-tvm/issues/6354


   I compared two similar BERT models running on CPU with TVM: one is a PyTorch
model, the other an MXNet model. Due to the large performance difference, I did
some profiling. The result shows that the run time of the same operation
(matmul) with the same workload varies significantly.
   
   ENV:
   1. TVM: build with MKL.
   2. Intel CPU
   3. OpenMP: `KMP_AFFINITY=compact,1,0 OMP_NUM_THREADS=24`
   
   Model inference time:
   ```
   # mxnet model
   TVM Mean inference time: 5.53 ms
   # pytorch model
   TVM Mean inference time: 23.05 ms
   ```
   
   Profiling result:
   ```
   # MXNet model
   Node Name              Ops                   Time(us)  Time(%)  Shape      Inputs  Outputs
   ------------------------------------------------------------------------------------------
   fused_nn_dense_add_15  fused_nn_dense_add_1  308.926   5.58     (32, 768)  3       1
   fused_nn_dense_add_11  fused_nn_dense_add_1  307.277   5.551    (32, 768)  3       1
   
   # PyTorch model
   Node Name              Ops                   Time(us)  Time(%)  Shape      Inputs  Outputs
   ------------------------------------------------------------------------------------------
   fused_nn_dense_add_3   fused_nn_dense_add_3  1783.75   7.631    (32, 768)  3       1
   fused_nn_dense_add_31  fused_nn_dense_add_3  1593.08   6.815    (32, 768)  3       1
   ```
   
   IR code (identical between the PyTorch and MXNet models):
   ```
  attr [0] "compute_scope" = "fused_nn_dense_add_3_compute_";
  attr [C: handle] "storage_scope" = "global";
  allocate(C, float32, [24576]) {
    attr [0] "extern_scope" = 0;
    @tir.tvm_call_packed("tvm.contrib.cblas.matmul",
      @tir.tvm_stack_make_array(placeholder, @tir.tvm_stack_make_shape(32, 3072, dtype=handle), 0, 2, 0f32, 0, dtype=handle),
      @tir.tvm_stack_make_array(placeholder_1, @tir.tvm_stack_make_shape(768, 3072, dtype=handle), 0, 2, 0f32, 0, dtype=handle),
      @tir.tvm_stack_make_array(C, @tir.tvm_stack_make_shape(32, 768, dtype=handle), 0, 2, 0f32, 0, dtype=handle),
      False, True, dtype=int32)
    for (ax0: int32, 0, 32) "parallel" {
      for (ax1: int32, 0, 768) {
        T_add[((ax0*768) + ax1)] = ((float32*)C[((ax0*768) + ax1)] + (float32*)placeholder_2[ax1])
      }
    }
  }
   ```
   
   However, when setting `OMP_NUM_THREADS=1` the model inference times are the
same, so it seems to be a problem with multiple threads.
   
   What may cause the difference? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kongroo commented on pull request #6349: [CODEGEN][CUDA]: fix cuda half math function is undefined: herf

2020-08-28 Thread GitBox


kongroo commented on pull request #6349:
URL: https://github.com/apache/incubator-tvm/pull/6349#issuecomment-682406774


   > Could you add test cases for half math of erf to verify the accuracy?
   
   I have added a test case for half erf, but CI failed. It seems that the
condition `#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 530)` is not met in
the CI environment, and herf is still undefined. Should I just remove the test
case?
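
   If it helps, a hedged sketch of a guard that skips the test below sm_53
(assuming the TVMContext API; the CI GPU may simply be too old):

   ```python
   import pytest

   import tvm

   def skip_if_no_fp16_math():
       """Skip on hosts without a CUDA GPU of compute capability >= 5.3,
       which is what native half-precision math such as herf requires."""
       ctx = tvm.gpu(0)
       if not ctx.exist:
           pytest.skip("no CUDA device available")
       major, minor = (int(v) for v in ctx.compute_version.split("."))
       if (major, minor) < (5, 3):
           pytest.skip("half erf requires compute capability >= 5.3")
   ```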



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6127: quanitze operation expanded to take const argument

2020-08-28 Thread GitBox


d-smirnov commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r478944325



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -1850,7 +1850,7 @@ def _test_quantize_dequantize(data):
 # First TFLite quantize op converts float32 tensor to int8 tensor - Qnn 
quantize.
 # Second TFLite quantize op converts int8 tensor to int8 tensor - Qnn 
requantize.
 data_in = tf.keras.layers.Input(shape=data.shape[1:])
-relu = tf.keras.layers.ReLU()(data_in)
+relu = tf.keras.layers.ReLU()(data)

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6127: quanitze operation expanded to take const argument

2020-08-28 Thread GitBox


d-smirnov commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r478944450



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -2726,7 +2726,13 @@ def convert_quantize(self, op):
 assert len(input_tensors) == 1, "input tensors length should be 1"
 input_tensor = input_tensors[0]
+input_tensor_type_str = self.get_tensor_type_str(input_tensor.tensor.Type())
-in_expr = self.get_expr(input_tensor.tensor_idx)
+
+if self.has_expr(input_tensor.tensor_idx):

Review comment:
   Replaced





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on pull request #6333: Add docker/lint.sh, for running dockerized lint scripts locally

2020-08-28 Thread GitBox


leandron commented on pull request #6333:
URL: https://github.com/apache/incubator-tvm/pull/6333#issuecomment-682387712


   > @zhiics @leandron please take a look when you have a minute and explicitly 
approve if you're good w/ this change
   
   LGTM, thanks @areusch 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org