[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-07 Thread GitBox


junrushao1994 commented on pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227#issuecomment-670827521


   Thanks! I will take a look this weekend :-)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Hzfengsy commented on a change in pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-07 Thread GitBox


Hzfengsy commented on a change in pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227#discussion_r467364184



##
File path: python/tvm/hybrid/parser.py
##
@@ -0,0 +1,754 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Hybrid Script Parser For TIR"""
+# pylint: disable=invalid-name, missing-docstring, inconsistent-return-statements, no-else-return
+# pylint: disable=unnecessary-comprehension, unused-argument, import-outside-toplevel
+# pylint: disable=unused-import
+import json
+import numbers
+import operator
+from typed_ast import ast3 as ast
+
+import tvm._ffi
+from tvm import tir
+from tvm._ffi.base import TVMError
+from tvm.ir import GlobalVar
+from tvm.tir import all as _all
+from tvm.tir import expr as _expr
+
+from . import scope_emitter, special_stmt, scope_handler, intrin
+from .meta_unparser import MetaUnparser
+from .registry import Registry
+
+
+class HybridParserError(RuntimeError):
+    """Hybrid Parser Runtime Error"""
+
+
+class HybridParser(ast.NodeVisitor):
+    """Python AST visitor pass which finally lowers the AST to TIR
+
+    Notes for extension:
+    1. To support a new type of AST node, add a function visit_xxx().
+    2. To support new functions, we divide allowed function calls in hybrid
+       script into 3 categories: scope_handler, intrin and special_stmt.
+       1) scope_handler: scope_handler functions correspond to StmtNodes with
+          body, which can be further classified into 2 categories:
+          with scope handlers and for scope handlers.
+       2) intrin: intrin functions correspond to the remaining IRNodes
+          (StmtNodes without body, PrimExprNodes and more).
+       3) special_stmt: special_stmt functions don't correspond to an IRNode
+          in the AST directly. They are usually used for information that is
+          not suitable to be printed directly.
+    When visiting a With node, we check the with_scope registry.
+    When visiting a For node, we check the for_scope registry.
+    """
+
+    _binop_maker = {
+        ast.Add: tir.Add,
+        ast.Sub: tir.Sub,
+        ast.Mult: tir.Mul,
+        ast.Div: tir.Div,
+        ast.FloorDiv: tir.FloorDiv,
+        ast.Mod: tir.FloorMod,
+        ast.BitOr: operator.or_,
+        ast.BitAnd: operator.and_,
+        ast.BitXor: operator.xor,
+        ast.Gt: tir.GT,
+        ast.GtE: tir.GE,
+        ast.Lt: tir.LT,
+        ast.LtE: tir.LE,
+        ast.Eq: tir.EQ,
+        ast.NotEq: tir.NE,
+        ast.And: tir.And,
+        ast.Or: tir.Or,
+    }
+
+    _unaryop_maker = {
+        ast.USub: operator.neg,
+        ast.Invert: operator.invert,
+        ast.Not: tir.Not
+    }
+
+    def __init__(self, src, base_lineno):
+        self.params = None
+        self.buffer_map = None
+        self.dict_attr = None
+        self.scope_emitter = None
+
+        self.src = src.split('\n')
+        self.base_lineno = base_lineno
+        self.current_lineno = 0
+        self.current_col_offset = 0
+        self.meta = None
+
+        self.functions = {}
+
+        self._in_with_func_arg = False
+        self._assign_target = None
+
+    def init_function_parsing_env(self):
+        """Initialize function parsing environment"""
+        self.params = []  # parameter list
+        self.buffer_map = {}  # buffer map
+        self.dict_attr = {}  # dict attr
+        self.scope_emitter = scope_emitter.ScopeEmitter(self)  # scope emitter
+
+    @staticmethod
+    def is_meta(node):
+        """Judge whether an AST node is META"""
+        return isinstance(node, ast.Assign) and len(node.targets) == 1 \
+            and isinstance(node.targets[0], ast.Name) \
+            and node.targets[0].id == "__tvm_meta__"
+
+    def init_meta(self, meta_dict):
+        if meta_dict is not None:
+            self.meta = tvm.ir.load_json(json.dumps(meta_dict))
+
+    def visit(self, node):
+        """Override method in ast.NodeVisitor"""
+        old_lineno, old_col_offset = self.current_lineno, self.current_col_offset
+
+        if hasattr(node, "lineno"):
+            self.current_lineno = self.base_lineno + node.lineno - 1
+        if hasattr(node, "col_offset"):
+            self.current_col_offset = node.col_offset
+
+        method = 'visit_'
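The visit_XXX dispatch and operator-table pattern used by HybridParser in the diff above can be sketched with the stdlib `ast` module (an illustrative stand-alone sketch, not the TVM code: plain Python arithmetic stands in for the TIR node constructors):

```python
import ast
import operator

# Dispatch table mapping Python AST operator node classes to handlers,
# analogous to HybridParser._binop_maker (arithmetic instead of tir.Add etc.).
_BINOP = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.FloorDiv: operator.floordiv,
}

class MiniEvaluator(ast.NodeVisitor):
    """Evaluate constant arithmetic via the visit_XXX dispatch pattern."""

    def visit_Expression(self, node):
        # ast.parse(..., mode="eval") wraps the tree in an Expression node.
        return self.visit(node.body)

    def visit_BinOp(self, node):
        lhs = self.visit(node.left)
        rhs = self.visit(node.right)
        # Look up the handler by the operator node's class, as _binop_maker does.
        return _BINOP[type(node.op)](lhs, rhs)

    def visit_Constant(self, node):
        return node.value

def evaluate(src):
    return MiniEvaluator().visit(ast.parse(src, mode="eval"))
```

`ast.NodeVisitor.visit` dispatches on the node's class name, which is why adding support for a new node type only requires adding a `visit_xxx()` method.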

[GitHub] [incubator-tvm] electriclilies opened a new issue #6237: Resize produces incorrect results

2020-08-07 Thread GitBox


electriclilies opened a new issue #6237:
URL: https://github.com/apache/incubator-tvm/issues/6237


   The relay resize function produces incorrect results. For scale factors other than 2, the output of relay.resize does not match the testing function.
   
   The test I changed:
   
   --- a/tests/python/relay/test_op_level5.py
   +++ b/tests/python/relay/test_op_level5.py
   @@ -66,7 +66,7 @@ def test_resize():
                tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-4, atol=1e-6)
        for method in ["bilinear", "nearest_neighbor"]:
            for layout in ["NHWC", "NCHW"]:
   -            verify_resize((1, 4, 4, 4), 2, method, layout)
   +            verify_resize((1, 4, 4, 4), 7, method, layout)
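For intuition, the index mapping that a nearest-neighbor resize performs can be sketched in pure Python (a simplified sketch assuming an integer scale factor; TVM's resize also handles bilinear interpolation, layouts, and coordinate-transform modes). Testing only scale factor 2 can hide bugs in this mapping, which is why changing it to 7 exposes the mismatch:

```python
def nearest_neighbor_resize(img, scale):
    """Upsample a 2-D grid by an integer scale factor using the
    nearest-neighbor index mapping src = dst // scale."""
    h, w = len(img), len(img[0])
    return [[img[i // scale][j // scale]
             for j in range(w * scale)]
            for i in range(h * scale)]
```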
   
   
   In this particular case, we get this error:
   
       AssertionError:
       Not equal to tolerance rtol=0.0001, atol=1e-06

       Mismatched elements: 832 / 3136 (26.5%)
       Max absolute difference: 0.8489792
       Max relative difference: 711.5619
        x: array([[0.507647, 0.792792, 0.245963, 0.439271],
                  [0.507647, 0.792792, 0.245963, 0.439271],
                  [0.507647, 0.792792, 0.245963, 0.439271],...
        y: array([[0.507647, 0.792792, 0.245963, 0.439271],
                  [0.507647, 0.792792, 0.245963, 0.439271],
                  [0.507647, 0.792792, 0.245963, 0.439271],...
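The tolerance rule behind this assertion is the usual element-wise check `|actual - desired| <= atol + rtol * |desired|`; a pure-Python sketch of it:

```python
def allclose(actual, desired, rtol=1e-4, atol=1e-6):
    """Element-wise tolerance check matching the standard
    |actual - desired| <= atol + rtol * |desired| rule."""
    return all(abs(a - d) <= atol + rtol * abs(d)
               for a, d in zip(actual, desired))
```

With rtol=1e-4 and values around 0.5, a max absolute difference of 0.85 (as reported above) is far outside tolerance.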
   
   cc @mbrookhart 







[GitHub] [incubator-tvm] windclarion edited a comment on pull request #6221: [TFLite] axis can be a scalar

2020-08-07 Thread GitBox


windclarion edited a comment on pull request #6221:
URL: https://github.com/apache/incubator-tvm/pull/6221#issuecomment-670817189


   @leandron I didn't find a TFLite converter test case in tests/python/frontend/tflite/test_forward.py; that code doesn't use any function in python/tvm/relay/frontend/tflite.py. Where can I find some similar TFLite converter test cases?







[GitHub] [incubator-tvm] windclarion commented on pull request #6221: [TFLite] axis can be a scalar

2020-08-07 Thread GitBox


windclarion commented on pull request #6221:
URL: https://github.com/apache/incubator-tvm/pull/6221#issuecomment-670817189


   I didn't find a TFLite converter test case in tests/python/frontend/tflite/test_forward.py; that code doesn't use any function in python/tvm/relay/frontend/tflite.py. Where can I find some similar TFLite converter test cases?







[GitHub] [incubator-tvm] vinx13 opened a new pull request #6236: [TOPI, Cuda] Fix conv2d_transpose output padding

2020-08-07 Thread GitBox


vinx13 opened a new pull request #6236:
URL: https://github.com/apache/incubator-tvm/pull/6236


   This PR fixes an out-of-bounds access in the implementation of conv2d transpose on CUDA.
   
   Fixes #6179
   
   cc @areusch @abergeron @tqchen 
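For reference, the output size of a transposed convolution follows the standard formula `out = (in - 1) * stride - 2 * pad + kernel + output_padding`; out-of-bounds accesses in such kernels typically involve the extra `output_padding` elements at the border. A small sketch of the generic formula (not TVM-specific code):

```python
def conv2d_transpose_out_size(in_size, kernel, stride, pad, output_padding):
    """Standard output-size formula for a transposed convolution along
    one spatial dimension."""
    return (in_size - 1) * stride - 2 * pad + kernel + output_padding
```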







[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6162: [Parser] Parser 2.0 part 2

2020-08-07 Thread GitBox


jroesch commented on a change in pull request #6162:
URL: https://github.com/apache/incubator-tvm/pull/6162#discussion_r467340470



##
File path: src/parser/parser.cc
##
@@ -1231,14 +1335,38 @@ class Parser {
   }
 }
 default: {
-  std::stringstream msg;
-  msg << "expected an expression found  " << Pretty(next->token_type);
-  diag_ctx.Emit({next->line, next->column, msg.str()});
-  diag_ctx.Render(std::cout);
+  this->diag_ctx->EmitFatal(DiagnosticBuilder(DiagnosticLevel::Error, next->span)
+<< "expected an expression found  "
+<< Pretty(next->token_type));
   return Expr();
 }
   }
 });
+
+if (WhenMatch(TokenType::Period)) {
+  auto index = Match(TokenType::Integer).ToNumber();
+  expr = relay::TupleGetItem(expr, index);
+}
+
+return expr;
+  }
+
+  /*! \brief Parse a hierarchical name. */
+  Array<String> ParseHierName() {

Review comment:
   Yeah, I will document this. I was quickly hacking it up with Jason on the stream in order to unblock; worth coming back to it.
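For readers, a hierarchical name here is a dot-separated path such as `nn.conv2d`; a minimal sketch of the grammar `Name ('.' Name)*` over a token list (illustrative only; the token shapes and function name are assumptions, not the actual parser API):

```python
def parse_hier_name(tokens):
    """Parse a dot-separated hierarchical name from a token list,
    consuming the tokens it matches: Name ('.' Name)* -> segments."""
    segments = [tokens.pop(0)]          # first Name is mandatory
    while tokens and tokens[0] == ".":
        tokens.pop(0)                   # consume '.'
        segments.append(tokens.pop(0))  # next Name segment
    return segments
```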









[GitHub] [incubator-tvm] kevinthesun commented on pull request #6198: [Relay][Dynamic] Add Dynamic Resize Op

2020-08-07 Thread GitBox


kevinthesun commented on pull request #6198:
URL: https://github.com/apache/incubator-tvm/pull/6198#issuecomment-670791631


   Thanks @mbrookhart @electriclilies @zhiics 







[incubator-tvm] branch master updated (bfd46ab -> 9ad33fe)

2020-08-07 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository.

kevinthesun pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from bfd46ab  [runtime][cublas] fix typo (#6230)
 add 9ad33fe  [Relay][Dynamic] Add Dynamic Resize Op (#6198)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/dyn/__init__.py   |   2 +
 python/tvm/relay/op/dyn/{ => image}/__init__.py   |   6 +-
 python/tvm/relay/op/dyn/image/_image.py   |  76 +++
 python/tvm/relay/op/{ => dyn/image}/_make.py  |   2 +-
 python/tvm/relay/op/image/image.py|  17 +++-
 python/tvm/topi/image/resize.py   |  18 ++--
 src/relay/op/dyn/image/resize.cc  | 109 ++
 src/relay/op/image/resize.cc  |   1 +
 src/relay/op/make_op.h|   3 +
 src/relay/transforms/dynamic_to_static.cc |  16 
 tests/python/relay/dyn/test_dynamic_op_level5.py  |  69 ++
 tests/python/relay/test_pass_dynamic_to_static.py |  86 +
 12 files changed, 373 insertions(+), 32 deletions(-)
 copy python/tvm/relay/op/dyn/{ => image}/__init__.py (87%)
 create mode 100644 python/tvm/relay/op/dyn/image/_image.py
 copy python/tvm/relay/op/{ => dyn/image}/_make.py (93%)
 create mode 100644 src/relay/op/dyn/image/resize.cc
 create mode 100644 tests/python/relay/dyn/test_dynamic_op_level5.py



[GitHub] [incubator-tvm] kevinthesun merged pull request #6198: [Relay][Dynamic] Add Dynamic Resize Op

2020-08-07 Thread GitBox


kevinthesun merged pull request #6198:
URL: https://github.com/apache/incubator-tvm/pull/6198


   







[GitHub] [incubator-tvm] masahi commented on pull request #6232: [Relay][Op] Add unbiased variance op and corresponding support in pytorch frontend

2020-08-07 Thread GitBox


masahi commented on pull request #6232:
URL: https://github.com/apache/incubator-tvm/pull/6232#issuecomment-670787826


   Do we need to introduce new Relay ops? Can we just add an `unbiased` argument to existing ops, like PyTorch does?
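For context, the `unbiased` flag toggles Bessel's correction: the sample (unbiased) variance divides by n-1 instead of n. A pure-Python sketch of the difference:

```python
def variance(xs, unbiased=False):
    """Population (biased) or sample (unbiased, Bessel-corrected) variance,
    mirroring the semantics of PyTorch's `unbiased` flag on torch.var."""
    n = len(xs)
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)  # sum of squared deviations
    return ss / (n - 1) if unbiased else ss / n
```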







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-07 Thread GitBox


tqchen commented on a change in pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227#discussion_r467312444



##
File path: python/tvm/hybrid/registry.py
##
@@ -0,0 +1,240 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Hybrid Script Parser Function Registry """
+# pylint: disable=inconsistent-return-statements
+import inspect
+from enum import IntEnum
+from typed_ast import ast3 as ast
+
+
+class Category(IntEnum):
+    """Categories of registered functions"""
+    INTRIN = 0
+    WITH_SCOPE = 1
+    FOR_SCOPE = 2
+    SPECIAL_STMT = 3
+
+
+class Registry(object):
+    """Registration map
+    All these maps are static
+    """
+    intrin = dict()
+    with_scope = dict()
+    for_scope = dict()
+    special_stmt = dict()
+
+    host_dict = {
+        Category.INTRIN: intrin,
+        Category.WITH_SCOPE: with_scope,
+        Category.FOR_SCOPE: for_scope,
+        Category.SPECIAL_STMT: special_stmt
+    }
+
+
+class CallArgumentReader(object):
+    """A helper class which reads required arguments from the passed arguments"""
+
+    def __init__(self, func_name, args, kwargs, parser):
+        self.func_name = func_name
+        self.args = args
+        self.kwargs = kwargs
+        self.parser = parser
+
+    def get_func_compulsory_arg(self, pos, name):
+        """Get the corresponding compulsory function argument from the argument list"""
+
+        if len(self.args) >= pos:
+            arg = self.args[pos - 1]
+        elif name not in self.kwargs.keys():
+            self.parser.report_error(self.func_name + " misses argument " + name)
+        else:
+            arg = self.kwargs[name]
+
+        return arg
+
+    def get_func_optional_arg(self, pos, name, default):
+        """Get the corresponding optional function argument from the argument list.
+        If the user doesn't provide the argument, set it to the default value
+        """
+
+        if len(self.args) >= pos:
+            arg = self.args[pos - 1]
+        elif name in self.kwargs.keys():
+            arg = self.kwargs[name]
+        else:
+            return default
+
+        return arg
+
+
+def func_wrapper(func_name, func_to_register, arg_list, need_parser_and_node, need_body, concise):
+    """Helper function to wrap a function to be registered"""
+
+    def wrap_func(parser, node, args, kwargs):
+        reader = CallArgumentReader(func_name, args, kwargs, parser)
+        internal_args = list()
+
+        if need_body and not isinstance(node, ast.For):
+            # automatically parse body for with scope handlers
+            if isinstance(node, ast.With):
+                # the with scope handler is used inside a with context
+                parser.scope_emitter.new_scope()
+                parser.scope_emitter.node_stack[-1].extend(reversed(node.body))
+                body = parser.get_body()
+                parser.scope_emitter.pop_scope()
+            else:
+                # the with scope handler is used in concise scoping
+                if not concise:
+                    parser.report_error("Concise scoping is not allowed here")
+                body = parser.get_body()
+
+        if need_parser_and_node:
+            internal_args.append(parser)
+            internal_args.append(node)
+
+        for i, arg_info in enumerate(arg_list):
+            if len(arg_info) == 1:
+                arg_name, = arg_info
+                if need_body and arg_name == "body":
+                    internal_args.append(body)
+                else:
+                    internal_args.append(reader.get_func_compulsory_arg(i + 1, arg_name))
+            else:
+                arg_name, default = arg_info
+                internal_args.append(reader.get_func_optional_arg(i + 1, arg_name, default=default))
+
+        return func_to_register(*internal_args)
+
+    return wrap_func
+
+
+def register_func(category, origin_func, need_parser_and_node, need_body, concise):

Review comment:
   Seems that we can refactor the code a bit to remove Category:
   
   - change this function to wrap_function
   - break register_scope_handler into:
     - register_with_scope
     - register_for_scope
   
   Then we don't need to introduce the Category enum.
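One way the suggested refactor might look (a hypothetical sketch; the function and registry names are illustrative, not the actual TVM API). Each scope kind gets its own registry and decorator, so no shared Category enum is needed:

```python
# Separate registries, one per scope kind, instead of one map keyed by an enum.
WITH_SCOPE = {}
FOR_SCOPE = {}

def register_with_scope(func):
    """Register a with-scope handler under its function name."""
    WITH_SCOPE[func.__name__] = func
    return func

def register_for_scope(func):
    """Register a for-scope handler under its function name."""
    FOR_SCOPE[func.__name__] = func
    return func

@register_with_scope
def block(body):
    return ("block", body)

@register_for_scope
def serial(begin, end, body):
    return ("serial", begin, end, body)
```

The parser can then consult `WITH_SCOPE` when visiting a With node and `FOR_SCOPE` when visiting a For node, which is exactly the lookup split the enum was encoding.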
  

[GitHub] [incubator-tvm] tqchen commented on pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-07 Thread GitBox


tqchen commented on pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227#issuecomment-670752886


   cc @Hzfengsy @weberlo @junrushao1994 @were please help to review the PR 







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-07 Thread GitBox


comaniac commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467257422



##
File path: src/relay/backend/contrib/ethosn/codegen.cc
##
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/ethosn/codegen.cc
+ * \brief The Relay -> Ethos-N command stream compiler.
+ */
+#include 
+#include 
+
+#include "codegen_ethosn.h"
+#include "ethosn_api.h"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+sl::TensorInfo GetTensorInfo(std::map> tensor_table, const Call& call) {
+  if (tensor_table.find(call) != tensor_table.end()) return tensor_table[call][0];
+
+  return sl::TensorInfo();
+}
+
+void InferTensorsVisitor::InferCall(const CallNode* cn) {

Review comment:
   Can we inline this function into `InferTensorsVisitor::VisitExpr_(const CallNode* cn)`? I didn't see any other reference to this function.

##
File path: src/relay/backend/contrib/ethosn/codegen_ethosn.h
##
@@ -0,0 +1,331 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/ethosn/codegen_ethosn.h
+ * \brief The Relay -> Ethos-N command stream compiler.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_CODEGEN_ETHOSN_H_
+#define TVM_RELAY_BACKEND_CONTRIB_ETHOSN_CODEGEN_ETHOSN_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../runtime/contrib/ethosn/ethosn_runtime.h"
+#include "../codegen_c/codegen_c.h"
+#include "ethosn_api.h"
+#include "ethosn_support_library/Support.hpp"
+#include "ethosn_support_library/SupportQueries.hpp"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+
+/*!
+ * \brief A struct to hold an uncompiled support library network alongside
+ * the desired order of input and output operation ids.
+ */
+struct NetworkWithIDs {
+  struct hash_pair {
+template 
+size_t operator()(const std::pair& p) const {
+  return std::hash{}(p.first) ^ std::hash{}(p.second);
+}
+  };
+  std::shared_ptr network;
+  std::unordered_map input_ids;
+  std::unordered_map, unsigned int, hash_pair> output_ids;
+};
+
+/*!
+ * \brief A base class for error handling using ErrorReporter.
+ */
+class ErrorReportingPass {
+ public:
+  ErrorReportingPass(const IRModule& mod, const GlobalVar& var) : mod_(mod), var_(var) {}
+
+  /*!
+   * \brief Report fatal errors for an expression.
+   * \param expr The expression to report errors at.
+   * \param err The errors to report.
+   */
+  void ReportFatalError(const ObjectRef& expr, const EthosnError& err) {
+for (const auto& msg : err.msgs) {
+  error_reporter_.ReportAt(this->var_, expr, ErrorBuilder() << msg);
+}
+error_reporter_.RenderErrors(this->mod_);
+  }
+
+ protected:
+  /*! \brief An ErrorReporter object to render the errors.*/
+  ErrorReporter error_reporter_;
+  /*! \brief The module to report errors for. */
+  IRModule mod_;
+  /*! \brief The GlobalVar to report errors for. */
+  GlobalVar var_;
+};
+
+/*!
+ * \brief A custom pass to infer the support library tensor information
+ * for a Relay expression.
+ *
+ * Support Library requires that tensors are explicitly 

[GitHub] [incubator-tvm] tqchen commented on pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-07 Thread GitBox


tqchen commented on pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229#issuecomment-670718670


   cc @FrozenGene 







[GitHub] [incubator-tvm] weberlo commented on a change in pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-07 Thread GitBox


weberlo commented on a change in pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#discussion_r467267386



##
File path: python/tvm/relay/backend/aot/aot.py
##
@@ -0,0 +1,282 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Defines the entry point into the AoT compiler.
+"""
+import ctypes
+import os
+import subprocess
+import tempfile
+import time
+
+import tvm
+from tvm import relay, get_global_func, register_func
+from tvm.relay.function import Function
+from tvm.relay.expr import Expr, Let, GlobalVar
+from tvm.relay.adt import Constructor
+from tvm.relay.expr_functor import ExprFunctor
+from tvm.relay.backend import compile_engine
+from .little_cpp import (PackedCall, CPPFunction, Invoke, Decl, CPPIf,
+ CPPTuple, CPPMatch, CPPConstructor, CPPTupleGetItem,
+ CPPRefCreate, CPPRefRead, CPPRefWrite)
+from . import to_source
+from .convert import convert
+
+TVM_PATH = os.environ['TVM_HOME']
+
+def must_run_process(args):
+    proc = subprocess.run(args, check=True)
+    assert proc.returncode == 0
+
+def compile_cpp(source, lib_name, flags=None, lib_path=None):
+    """
+    Compiles the given source into a C++ library
+    and returns the full path to the compiled library.
+    """
+    if flags is None:
+        flags = []
+
+    if lib_path is None:
+        lib_path = os.curdir
+
+    debug_source_path = os.path.join(lib_path, 'source.cc')

Review comment:
   can you put this functionality behind a debug flag?
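A minimal sketch of what gating the source dump behind a flag could look like (a hypothetical helper and environment-variable name, not the actual TVM API):

```python
import os

def maybe_dump_source(source, path, debug=None):
    """Write generated source to disk only when debugging is enabled,
    either via the `debug` argument or a TVM_AOT_DEBUG env var
    (both names are assumptions for illustration)."""
    if debug is None:
        debug = os.environ.get("TVM_AOT_DEBUG", "0") == "1"
    if debug:
        with open(path, "w") as f:
            f.write(source)
    return debug
```

This keeps the normal compile path free of stray `source.cc` files while preserving the ability to inspect generated code when needed.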

##
File path: python/tvm/relay/backend/aot/aot.py
##
@@ -0,0 +1,282 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Defines the entry point into the AoT compiler.
+"""
+import ctypes
+import os
+import subprocess
+import tempfile
+import time
+
+import tvm
+from tvm import relay, get_global_func, register_func
+from tvm.relay.function import Function
+from tvm.relay.expr import Expr, Let, GlobalVar
+from tvm.relay.adt import Constructor
+from tvm.relay.expr_functor import ExprFunctor
+from tvm.relay.backend import compile_engine
+from .little_cpp import (PackedCall, CPPFunction, Invoke, Decl, CPPIf,
+ CPPTuple, CPPMatch, CPPConstructor, CPPTupleGetItem,
+ CPPRefCreate, CPPRefRead, CPPRefWrite)
+from . import to_source
+from .convert import convert
+
+TVM_PATH = os.environ['TVM_HOME']
+
+def must_run_process(args):
+    proc = subprocess.run(args, check=True)
+    assert proc.returncode == 0
+
+def compile_cpp(source, lib_name, flags=None, lib_path=None):
+    """
+    Compiles the given source into a C++ library
+    and returns the full path to the compiled library.
+    """
+    if flags is None:
+        flags = []
+
+    if lib_path is None:
+        lib_path = os.curdir
+
+    debug_source_path = os.path.join(lib_path, 'source.cc')
+    # Write out the file for debugging.
+    with open(debug_source_path, 'w') as source_file:
+        source_file.write(source)
+
+    # with tempfile.TmporaryDirectory() as tmpdir:

Review comment:
   remove









[GitHub] [incubator-tvm] yzhliu commented on pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-07 Thread GitBox


yzhliu commented on pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#issuecomment-670707260


   @MarisaKirisame @sergei-grechanik @tqchen I addressed most of the comments, 
please take a look again.







[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-07 Thread GitBox


yzhliu commented on a change in pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#discussion_r467258411



##
File path: src/te/autodiff/ad_simplify.cc
##
@@ -0,0 +1,1294 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ad_simplify.cc
+ * \brief Simplify tensor compute generated by tensor-level autodiff.
+ *
+ * The major simplification we do in this file is to eliminate
+ * the Jacobian tensor created by autodiff.
+ *
+ * The Jacobian tensor is sparse because one output element usually relates
+ * to a small portion of the inputs. For example, an element-wise function has
+ * a one-to-one mapping between input tensor and output tensor, thus the
+ * Jacobian is diagonal.
+ *
+ * Generally, we have Out_{\beta} = f( In_{A \alpha} ) in which A is a matrix,
+ * \alpha and \beta are vectors that represent the indices of In and Out respectively,
+ * i.e., the non-zero Jacobian indices are a linear combination of the input indices.
+ * Thereby we solve linear equations of \beta = A \alpha,
+ * as well as linear inequalities of their domain ranges.
+ *
+ * Refer to Urban S, van der Smagt P. Automatic differentiation for tensor algebras[J].
+ * arXiv preprint arXiv:1711.01348, 2017. for more details.
+ *
+ * Implementation-wise, we extract the equations in the compute definition via
+ * NonzeronessCondition, replace the compute expression with solved new axes,
+ * and create a selection node (non-zero-condition ? new_compute_expression : 0).
+ *
+ * Due to TVM's restriction, we also lift the reduction to the top of the compute stage.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "ad_util.h"
+
+namespace tvm {
+namespace te {
+
+using arith::DivMode;
+using arith::kFloorDiv;
+using arith::kTruncDiv;
+using arith::ARITH_SIMPLIFY_REWRITE_CANONICAL_REWRITE;
+
+template <class K, class V>
+Map<K, V> Merge(Map<K, V> original, const Map<K, V>& update) {
+  for (const auto& p : update) {
+    original.Set(p.first, p.second);
+  }
+  return std::move(original);
+}
+
+// Combine all expressions from the container using &&.
+template <class container>
+PrimExpr All(const container& c) {
+  PrimExpr res;
+  for (const auto& e : c) {
+if (res.get()) {
+  res = res && e;
+} else {
+  res = e;
+}
+  }
+  if (res.get()) {
+return res;
+  } else {
+return const_true();
+  }
+}
+
+Map<Var, Range> IterVarsToMap(const Array<IterVar>& itervars) {
+  Map<Var, Range> res;
+  for (const IterVar& v : itervars) {
+res.Set(v->var, v->dom);
+  }
+  return res;
+}
+
+// Given a map from vars to ranges create an array of itervars
+Array<IterVar> IterVarsFromMap(const Array<Var>& vars, const Map<Var, Range>& vranges,
+                               IterVarType iter_type = kDataPar, std::string thread_tag = "") {
+  Array<IterVar> res;
+  for (const Var& v : vars) {
+    CHECK(vranges.count(v)) << "A range for the variable " << v << " was not provided in map "
+                            << vranges;
+    res.push_back(IterVar(vranges[v], v, iter_type, thread_tag));
+  }
+  return res;
+}
+
+Array<Var> IterVarsToVars(const Array<IterVar>& itervars) {
+  Array<Var> res;
+  for (const IterVar& v : itervars) {
+    res.push_back(v->var);
+  }
+  return res;
+}
+
+template <typename ValueType>
+bool is_const_value(const PrimExpr& e, ValueType value) {
+  static_assert(std::is_integral<ValueType>::value,
+                "Comparison to non-integer values is forbidden.");
+  if (const tir::IntImmNode* i = e.as<tir::IntImmNode>()) {
+    return i->value == value;
+  } else if (const tir::FloatImmNode* i = e.as<tir::FloatImmNode>()) {
+    return i->value == value;
+  } else if (const tir::CastNode* c = e.as<tir::CastNode>()) {
+    return is_const_value(c->value, value);
+  } else if (const tir::BroadcastNode* b = e.as<tir::BroadcastNode>()) {
+    return is_const_value(b->value, value);
+  } else {
+    return false;
+  }
+}
+
+// Return true if this combiner is just a sum.
+bool IsSumCombiner(const CommReducer& combiner, const Map<Var, Range>& vranges) {
+  arith::Analyzer analyzer;
+  analyzer.Bind(vranges);
+  if (combiner->result.size() != 1) {
+    return false;
+  }
+
+  if (!is_const_value(analyzer.Simplify(combiner->identity_element[0],
+                                        ARITH_SIMPLIFY_REWRITE_CANONICAL_REWRITE),
+                      0)) {
+    return false;
+  }
+

[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-07 Thread GitBox


yzhliu commented on a change in pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#discussion_r467258142



##
File path: src/te/autodiff/ad_simplify.cc
##
@@ -0,0 +1,1305 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ad_simplify.cc
+ * \brief Simplify tensor compute generated by tensor-level autodiff.
+ *
+ * The major simplification we do in this file is to eliminate
+ * the Jacobian tensor created by autodiff.
+ *
+ * The Jacobian tensor is sparse because one output element usually relates
+ * to only a small portion of the inputs. For example, an element-wise function
+ * has a one-to-one mapping between input tensor and output tensor, thus the
+ * Jacobian is diagonal.
+ *
+ * Generally, we have Out_{\beta} = f( In_{A \alpha} ) in which A is a matrix,
+ * \alpha and \beta are vectors representing the indices of In and Out
+ * respectively, i.e., the non-zero Jacobian indices are a linear combination
+ * of the input indices. Thereby we solve the linear equations \beta = A \alpha,
+ * as well as linear inequalities over their domain ranges.
+ *
+ * Refer to Urban S, van der Smagt P. Automatic differentiation for tensor
+ * algebras[J]. arXiv preprint arXiv:1711.01348, 2017. for more details.
+ *
+ * Implementation-wise, we extract the equations in the compute definition via
+ * NonzeronessCondition, replace the compute expression with solved new axes,
+ * and create a selection node (non-zero-condition ? new_compute_expression : 0).
+ *
+ * Due to TVM's restrictions, we also lift the reduction to the top of the
+ * compute stage.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "ad_util.h"
+
+namespace tvm {
+namespace te {
+
+using arith::DivMode;
+using arith::kFloorDiv;
+using arith::kTruncDiv;
+
+template <class K, class V>
+Map<K, V> Merge(Map<K, V> original, const Map<K, V>& update) {
+  for (const auto& p : update) {
+    original.Set(p.first, p.second);
+  }
+  return std::move(original);
+}
+
+// Concatenate two arrays
+template <class T>
+Array<T> Concat(Array<T> a, const Array<T>& b) {
+  for (const auto& x : b) {
+    a.push_back(x);
+  }
+  return std::move(a);
+}
+
+// Combine all expressions from the container using &&.
+template <class container>
+PrimExpr All(const container& c) {
+  PrimExpr res;
+  for (const auto& e : c) {
+if (res.get()) {
+  res = res && e;
+} else {
+  res = e;
+}
+  }
+  if (res.get()) {
+return res;
+  } else {
+return const_true();
+  }
+}
+
+Map<Var, Range> IterVarsToMap(const Array<IterVar>& itervars) {
+  Map<Var, Range> res;
+  for (const IterVar& v : itervars) {
+res.Set(v->var, v->dom);
+  }
+  return res;
+}
+
+// Given a map from vars to ranges create an array of itervars
+Array<IterVar> IterVarsFromMap(const Array<Var>& vars, const Map<Var, Range>& vranges,
+                               IterVarType iter_type = kDataPar, std::string thread_tag = "") {
+  Array<IterVar> res;
+  for (const Var& v : vars) {
+    CHECK(vranges.count(v)) << "A range for the variable " << v << " was not provided in map "
+                            << vranges;
+    res.push_back(IterVar(vranges[v], v, iter_type, thread_tag));
+  }
+  return res;
+}
+
+Array<Var> IterVarsToVars(const Array<IterVar>& itervars) {
+  Array<Var> res;
+  for (const IterVar& v : itervars) {
+    res.push_back(v->var);
+  }
+  return res;
+}
+
+template <typename ValueType>
+inline bool is_const_value(const PrimExpr& e, ValueType value) {
+  static_assert(std::is_integral<ValueType>::value,
+                "Comparison to non-integer values is forbidden.");
+  if (const tir::IntImmNode* i = e.as<tir::IntImmNode>()) {
+    return i->value == value;
+  } else if (const tir::FloatImmNode* i = e.as<tir::FloatImmNode>()) {
+    return i->value == value;
+  } else if (const tir::CastNode* c = e.as<tir::CastNode>()) {
+    return is_const_value(c->value, value);
+  } else if (const tir::BroadcastNode* b = e.as<tir::BroadcastNode>()) {
+    return is_const_value(b->value, value);
+  } else {
+    return false;
+  }
+}
+
+// Return true if this combiner is just a sum.
+bool IsSumCombiner(const CommReducer& combiner, const Map<Var, Range>& vranges) {
+  arith::Analyzer analyzer;
+  analyzer.Bind(vranges);
+  if (combiner->result.size() != 1) {
+    return false;
+  }
+
+  if (!is_const_value(analyzer.Simplify(combiner->identity_element[0],
+ 
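To make the header comment's sparsity argument concrete, the element-wise case can be worked out in the comment's own notation (this is an editorial sketch, not part of the patch):

```latex
% Element-wise op: Out_i = f(In_i), so the index map A is the identity.
% The Jacobian is diagonal:
J_{ij} \;=\; \frac{\partial \mathrm{Out}_i}{\partial \mathrm{In}_j}
      \;=\; \begin{cases}
              f'(\mathrm{In}_i) & i = j,\\[2pt]
              0                 & i \neq j.
            \end{cases}
% Solving \beta = A\alpha here gives \beta = \alpha, and the generated
% selection node corresponds to (i == j ? f'(In_j) : 0).
```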

[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-07 Thread GitBox


yzhliu commented on a change in pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#discussion_r467257593



##
File path: src/te/autodiff/ad_simplify.cc
##

[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-07 Thread GitBox


yzhliu commented on a change in pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#discussion_r467257282



##
File path: src/te/autodiff/ad_simplify.cc
##

[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-07 Thread GitBox


yzhliu commented on a change in pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#discussion_r467256980



##
File path: src/te/autodiff/ad_simplify.cc
##

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-07 Thread GitBox


comaniac commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467229947



##
File path: cmake/config.cmake
##
@@ -198,6 +198,16 @@ set(USE_DNNL_CODEGEN OFF)
 set(USE_ARM_COMPUTE_LIB OFF)
 set(USE_ARM_COMPUTE_LIB_GRAPH_RUNTIME OFF)
 
+# Whether to build with Arm Ethos-N support
+# Possible values:
+# - OFF: disable Arm Ethos-N support
+# - path/to/arm-ethos-N-stack: use a specific version of the
+#   Ethos-N driver stack
+set(USE_ETHOSN OFF)
+# If USE_ETHOSN is enabled, use Ethos-N hardware (ON) or
+# software test infrastructure (OFF)
+set(USE_ETHOSN_HW ON)

Review comment:
   Ah I see. Then I'd suggest changing the comments for this flag to something like:
   ```
    # Whether Ethos-N is available on this machine. Software test infra will be used when OFF.
   set(USE_ETHOSN_HW OFF)
   ```
   
   And set up a checker for the case that USE_ETHOSN_HW=ON but USE_ETHOSN=OFF.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on pull request #6121: [TOPI] Support int4/int8 conv2d tensor core with HWNC layout

2020-08-07 Thread GitBox


anijain2305 commented on pull request #6121:
URL: https://github.com/apache/incubator-tvm/pull/6121#issuecomment-670660890


   @Shawn-Inspur @Laurawly Can you please review when you get time? This will 
unblock us to connect it to Relay and all the way to QNN. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tkonolige opened a new pull request #6235: [TESTS] Decrease test times by introducing testing model

2020-08-07 Thread GitBox


tkonolige opened a new pull request #6235:
URL: https://github.com/apache/incubator-tvm/pull/6235


   In most unit tests, resnet is used as the default model for testing. As 
these are unit tests, they don't require a full-sized model, and using resnet 
makes them run slowly. This PR introduces a new small synthetic model for use in 
testing. I've replaced most occurrences of resnet in the unit tests with this 
model. Time spent running `tests/scripts/task_python_unittest.sh` is 118.07s 
with this model vs 571.41s with resnet (on my Mac laptop, no GPU).
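For scale, the two quoted timings amount to roughly a 4.8x reduction in wall-clock time:

```python
# Timings quoted above for tests/scripts/task_python_unittest.sh.
resnet_seconds = 571.41     # full resnet model
synthetic_seconds = 118.07  # small synthetic testing model

speedup = resnet_seconds / synthetic_seconds
print(round(speedup, 2))  # 4.84
```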



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] weberlo commented on issue #2563: [RFC][μTVM] Bringing TVM to Bare-Metal Devices

2020-08-07 Thread GitBox


weberlo commented on issue #2563:
URL: https://github.com/apache/incubator-tvm/issues/2563#issuecomment-670653285


   > How do you think the difference between MicroTVM and MCUNet?
   
   Hi @wang-y-z.  I wasn't aware of this work until now.  Thanks for the 
pointer!  It's a bit embarrassing to see them compare against the old runtime 
that was designed purely for AutoTVM purposes (and it only _happened_ to be 
able to run entire models).  Because of that design goal, it makes no use of 
flash memory, so it runs out of memory very quickly.
   
   I'd say TinyNAS isn't comparable to µTVM, since µTVM doesn't currently do 
any architecture search.  You could imagine using only TinyNAS to produce a 
model, then importing the result and running it with µTVM.
   
   TinyEngine is an interesting point of comparison, since it uses a 
codegen-based approach, and this is the approach we want to move towards going 
forward.  For the past few months, we've focused on strengthening support for 
autotuning and deployment with the C graph runtime.  However, as we look at 
smaller devices, there are a lot of mechanisms in the graph runtime that cause 
unnecessarily high memory usage (e.g., runtime overhead and JSON parsing).  
With the prototype Relay AoT compiler being merged soon (#6219), we'll have a 
good starting point for an entirely codegen-based approach.
   
   Though the codegen approach seems to give them the most benefit (Figure 4), 
the model-adaptive/memory-aware optimizations in TinyEngine look compelling as 
well, and it would certainly be interesting to see how they could be 
implemented in TVM.
   
   > By the way, can you tell me what's going on about MicroTVM on RISC-V 
device and if you have plan to support the User defined extensions for RV?
   
   We haven't prioritized RISC-V-specific features, since we're still building 
up all of the device-agnostic infrastructure.  Is there a use case for 
user-defined extensions you have in mind?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] shizukanaskytree commented on issue #5133: [Torch] A list of missing op conversion in need of help

2020-08-07 Thread GitBox


shizukanaskytree commented on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-670644424


   Missing op `aten::copy_`:
   
   error:
   ```
   NotImplementedError: The following operators are not implemented: ['aten::copy_']
   ```
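For context, this error comes from the frontend's conversion table: every graph operator is looked up by name, and anything without a registered converter is reported in one batch. The sketch below is a stdlib-only illustration of that pattern; the names `convert_map` and `check_graph_ops` are illustrative and are not TVM's actual internals.

```python
# Illustrative sketch: a conversion table keyed by operator name, mirroring
# how a frontend detects unsupported ops such as aten::copy_.
convert_map = {
    "aten::relu": lambda inputs: ("relay.nn.relu", inputs),
    "aten::add": lambda inputs: ("relay.add", inputs),
}

def check_graph_ops(op_names):
    # Collect every operator in the graph that has no converter registered.
    missing = sorted({op for op in op_names if op not in convert_map})
    if missing:
        raise NotImplementedError(
            "The following operators are not implemented: {}".format(missing))

# A graph containing aten::copy_ fails the check with the message quoted above:
# check_graph_ops(["aten::relu", "aten::copy_"])
```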



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] NarendraPatwardhan opened a new issue #6234: Conversion from Onnx fails for Efficientnet-b0.

2020-08-07 Thread GitBox


NarendraPatwardhan opened a new issue #6234:
URL: https://github.com/apache/incubator-tvm/issues/6234


   I have a PyTorch model converted to ONNX which I aim to convert to TVM. The 
model has 1 input and 2 outputs. The conversion from PyTorch to ONNX is 
successful (checked with onnxruntime) but from ONNX to TVM I receive the 
following error. The network is similar to EfficientNet-b0.
   
   File "to_tvm.py", line 99, in tune_and_evaluate
   mod, params, input_shape = get_network()
   
 File "to_tvm.py", line 35, in get_network
   mod, params = relay.frontend.from_onnx(model, shape=shape_dict, 
dtype=dtype)
   
 File 
"/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py",
 line 1879, in from_onnx
   mod, params = g.from_onnx(graph, opset)
   
 File 
"/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py",
 line 1707, in from_onnx
   op = self._convert_operator(op_name, inputs, attr, opset)
   
 File 
"/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py",
 line 1807, in _convert_operator
   sym = convert_map[op_name](inputs, attrs, self._params)
   
 File 
"/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/common.py",
 line 417, in __call__
   return get_relay_op(op_name)(*inputs, **new_attrs)
   
 File 
"/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/op/tensor.py",
 line 870, in clip
   return _make.clip(a, a_min, a_max)
   
 File 
"/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/_ffi/_ctypes/packed_func.py",
 line 213, in __call__
   raise get_last_ffi_error()
   
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 [bt] (3) 
/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x65)
 [0x7fcff601fb35]
 [bt] (2) 
/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x950a5b)
 [0x7fcff5d27a5b]
 [bt] (1) 
/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::runtime::TVMPODValue_::operator
 double() const+0x170) [0x7fcff5782030]
 [bt] (0) 
/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x43)
 [0x7fcff5771bb3]
 File "/home/user/libs/tvm/include/tvm/runtime/packed_func.h", line 418
   TVMError: Check failed: type_code_ == kDLFloat (8 vs. 2) : expected float but get Object
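The failure is in the Clip conversion: `_make.clip(a, a_min, a_max)` expects plain Python floats, but from ONNX opset 11 onward Clip's min/max come in as graph inputs (tensors), so the converter receives expression objects instead. The stdlib-only sketch below only illustrates that type mismatch; `Expr` and `clip_floats` are hypothetical stand-ins, not TVM code.

```python
class Expr:
    """Hypothetical stand-in for a relay expression object."""

def clip_floats(a, a_min, a_max):
    # Mirrors the contract that triggers the traceback: bounds must be
    # plain numbers, not expression objects.
    if not isinstance(a_min, (int, float)) or not isinstance(a_max, (int, float)):
        raise TypeError("expected float but get Object")
    return [min(max(x, a_min), a_max) for x in a]

# clip_floats([1.5, -2.0, 0.3], 0.0, 1.0) works; passing Expr() as a bound
# raises the same "expected float but get Object" style of error.
```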



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-07 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467068235



##
File path: cmake/config.cmake
##
@@ -198,6 +198,16 @@ set(USE_DNNL_CODEGEN OFF)
 set(USE_ARM_COMPUTE_LIB OFF)
 set(USE_ARM_COMPUTE_LIB_GRAPH_RUNTIME OFF)
 
+# Whether to build with Arm Ethos-N support
+# Possible values:
+# - OFF: disable Arm Ethos-N support
+# - path/to/arm-ethos-N-stack: use a specific version of the
+#   Ethos-N driver stack
+set(USE_ETHOSN OFF)
+# If USE_ETHOSN is enabled, use Ethos-N hardware (ON) or
+# software test infrastructure (OFF)
+set(USE_ETHOSN_HW ON)

Review comment:
   Good catch, yes it should be off by default. The naming of USE_ETHOSN is 
really because it toggles both the codegen and runtime support. USE_ETHOSN_HW 
determines whether we mock the inference function in the runtime or use real 
hardware.









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6232: [Relay][Op] Add unbiased variance op and corresponding support in pytorch frontend

2020-08-07 Thread GitBox


leandron commented on a change in pull request #6232:
URL: https://github.com/apache/incubator-tvm/pull/6232#discussion_r467041812



##
File path: python/tvm/relay/op/reduce.py
##
@@ -376,6 +408,39 @@ def std(data, axis=None, keepdims=False, exclude=False):
 return sqrt(_make._variance(data, m, axis, keepdims, exclude))
 
 
+def unbiased_std(data, axis=None, keepdims=False, exclude=False):
+"""Computes the unbiased standard deviation of data over given axes.
+
+Parameters
+----------
+data : relay.Expr
+The input data
+
+axis : None or int or tuple of int
+Axis or axes along which a standard deviation operation is performed.
+The default, axis=None, will compute the standard deviation of all 
elements in the
+input array. If axis is negative it counts from the last to the first 
axis.
+
+keepdims : bool
+If this is set to True, the axes which are reduced are left in the 
result as dimensions
+with size one.
+With this option, the result will broadcast correctly against the 
input array.
+
+exclude : bool
+If `exclude` is true, reduction will be performed on the axes that are
+NOT in axis instead.
+
+Returns
+-------
+result : relay.Expr
+The computed result.
+"""
+axis = [axis] if isinstance(axis, int) else axis
+m = mean(data, axis, True, exclude)
+
+return sqrt(_make._unbiased_variance(data, m, axis, keepdims, exclude))

Review comment:
   Understood. Thanks









[GitHub] [incubator-tvm] ShawnZhuang closed pull request #6233: temp

2020-08-07 Thread GitBox


ShawnZhuang closed pull request #6233:
URL: https://github.com/apache/incubator-tvm/pull/6233


   







[GitHub] [incubator-tvm] ShawnZhuang opened a new pull request #6233: temp

2020-08-07 Thread GitBox


ShawnZhuang opened a new pull request #6233:
URL: https://github.com/apache/incubator-tvm/pull/6233


   Signed-off-by: shawn.zhuang 
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [incubator-tvm] shiwenloong commented on a change in pull request #6232: [Relay][Op] Add unbiased variance op and corresponding support in pytorch frontend

2020-08-07 Thread GitBox


shiwenloong commented on a change in pull request #6232:
URL: https://github.com/apache/incubator-tvm/pull/6232#discussion_r467022417



##
File path: python/tvm/relay/op/reduce.py
##
@@ -376,6 +408,39 @@ def std(data, axis=None, keepdims=False, exclude=False):
 return sqrt(_make._variance(data, m, axis, keepdims, exclude))
 
 
+def unbiased_std(data, axis=None, keepdims=False, exclude=False):
+"""Computes the unbiased standard deviation of data over given axes.
+
+Parameters
+----------
+data : relay.Expr
+The input data
+
+axis : None or int or tuple of int
+Axis or axes along which a standard deviation operation is performed.
+The default, axis=None, will compute the standard deviation of all 
elements in the
+input array. If axis is negative it counts from the last to the first 
axis.
+
+keepdims : bool
+If this is set to True, the axes which are reduced are left in the 
result as dimensions
+with size one.
+With this option, the result will broadcast correctly against the 
input array.
+
+exclude : bool
+If `exclude` is true, reduction will be performed on the axes that are
+NOT in axis instead.
+
+Returns
+-------
+result : relay.Expr
+The computed result.
+"""
+axis = [axis] if isinstance(axis, int) else axis
+m = mean(data, axis, True, exclude)
+
+return sqrt(_make._unbiased_variance(data, m, axis, keepdims, exclude))

Review comment:
   There is no problem with this replacement, but the current implementation of 
`unbiased_variance` and `unbiased_std` follows how `variance` and `std` are 
implemented. I think keeping the same implementation style is better.









[GitHub] [incubator-tvm] shiwenloong commented on a change in pull request #6232: [Relay][Op] Add unbiased variance op and corresponding support in pytorch frontend

2020-08-07 Thread GitBox


shiwenloong commented on a change in pull request #6232:
URL: https://github.com/apache/incubator-tvm/pull/6232#discussion_r467018665



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -1263,27 +1263,32 @@ def _impl(inputs, input_types):
 unbiased = bool(inputs[2])
 
 if unbiased:
-msg = "Currently only supports standard-deviation calculated via 
the biased "\
-  "estimator. PyTorch's Bessel's correction is not supported."
-raise NotImplementedError(msg)
+std_op = _op.reduce.unbiased_std
+else:
+std_op = _op.reduce.std

Review comment:
   This change will make the code cleaner. It has been applied. Thanks.









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6232: [Relay][Op] Add unbiased variance op and corresponding support in pytorch frontend

2020-08-07 Thread GitBox


leandron commented on a change in pull request #6232:
URL: https://github.com/apache/incubator-tvm/pull/6232#discussion_r466997354



##
File path: python/tvm/relay/op/reduce.py
##
@@ -376,6 +408,39 @@ def std(data, axis=None, keepdims=False, exclude=False):
 return sqrt(_make._variance(data, m, axis, keepdims, exclude))
 
 
+def unbiased_std(data, axis=None, keepdims=False, exclude=False):
+"""Computes the unbiased standard deviation of data over given axes.
+
+Parameters
+----------
+data : relay.Expr
+The input data
+
+axis : None or int or tuple of int
+Axis or axes along which a standard deviation operation is performed.
+The default, axis=None, will compute the standard deviation of all 
elements in the
+input array. If axis is negative it counts from the last to the first 
axis.
+
+keepdims : bool
+If this is set to True, the axes which are reduced are left in the 
result as dimensions
+with size one.
+With this option, the result will broadcast correctly against the 
input array.
+
+exclude : bool
+If `exclude` is true, reduction will be performed on the axes that are
+NOT in axis instead.
+
+Returns
+-------
+result : relay.Expr
+The computed result.
+"""
+axis = [axis] if isinstance(axis, int) else axis
+m = mean(data, axis, True, exclude)
+
+return sqrt(_make._unbiased_variance(data, m, axis, keepdims, exclude))

Review comment:
   I'm not very familiar with the specifics here, but could this be 
replaced by the example below, to avoid repetition?
   ```
   return sqrt(unbiased_variance(data, axis, keepdims, exclude))
   ```

##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -1263,27 +1263,32 @@ def _impl(inputs, input_types):
 unbiased = bool(inputs[2])
 
 if unbiased:
-msg = "Currently only supports standard-deviation calculated via 
the biased "\
-  "estimator. PyTorch's Bessel's correction is not supported."
-raise NotImplementedError(msg)
+std_op = _op.reduce.unbiased_std
+else:
+std_op = _op.reduce.std

Review comment:
   minor suggestion: you could use the same pattern as you did below with 
`axis` to make this statement shorter.
   ```
   std_op = _op.reduce.unbiased_std if unbiased else _op.reduce.std
   ```









[GitHub] [incubator-tvm] FrozenGene merged pull request #6230: [runtime][cublas] fix typo

2020-08-07 Thread GitBox


FrozenGene merged pull request #6230:
URL: https://github.com/apache/incubator-tvm/pull/6230


   







[incubator-tvm] branch master updated: [runtime][cublas] fix typo (#6230)

2020-08-07 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new bfd46ab  [runtime][cublas] fix typo (#6230)
bfd46ab is described below

commit bfd46abf11b972c451a1d6085a8090a599121743
Author: cloud-mxd 
AuthorDate: Fri Aug 7 19:38:11 2020 +0800

[runtime][cublas] fix typo (#6230)
---
 src/runtime/contrib/cublas/cublas.cc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/runtime/contrib/cublas/cublas.cc 
b/src/runtime/contrib/cublas/cublas.cc
index 467ae5f..24468a7 100644
--- a/src/runtime/contrib/cublas/cublas.cc
+++ b/src/runtime/contrib/cublas/cublas.cc
@@ -174,7 +174,7 @@ inline void CallLtIgemm(TVMArgs args, TVMRetValue* ret, cublasLtHandle_t hdl) {
   cublasLtMatmulDesc_t operationDesc = nullptr;
 #if CUDART_VERSION >= 11000
   CHECK_CUBLAS_ERROR(cublasLtMatmulDescCreate(&operationDesc, CUBLAS_COMPUTE_32I, CUDA_R_32I));
-#elif
+#else
   CHECK_CUBLAS_ERROR(cublasLtMatmulDescCreate(&operationDesc, CUDA_R_32I));
 #endif
   CHECK_CUBLAS_ERROR(cublasLtMatmulDescSetAttribute(operationDesc, CUBLASLT_MATMUL_DESC_TRANSB,



[GitHub] [incubator-tvm] FrozenGene commented on pull request #6230: [runtime][cublas] fix typo

2020-08-07 Thread GitBox


FrozenGene commented on pull request #6230:
URL: https://github.com/apache/incubator-tvm/pull/6230#issuecomment-670474765


   Thanks @cloud-mxd @siju-samuel 







[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6230: [runtime][cublas] fix typo

2020-08-07 Thread GitBox


cloud-mxd commented on pull request #6230:
URL: https://github.com/apache/incubator-tvm/pull/6230#issuecomment-670462979


   @tqchen Can this be merged?







[GitHub] [incubator-tvm] shiwenloong opened a new pull request #6232: [Relay][Op] Add unbiased variance op and corresponding support in pytorch frontend

2020-08-07 Thread GitBox


shiwenloong opened a new pull request #6232:
URL: https://github.com/apache/incubator-tvm/pull/6232


   Unbiased variance uses `N-1` as the divisor in the calculation, where N 
represents the number of elements. `torch.std` and `torch.var` are unbiased by 
default in PyTorch, and these unbiased ops currently can't be converted.
   This PR adds an unbiased variance op and corresponding support in the 
PyTorch frontend.
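For reference, Bessel's correction can be sketched in plain Python; `variance` and `std` below are hypothetical stand-alone helpers, not the Relay ops themselves:

```python
import math

def variance(xs, unbiased=False):
    # The biased estimator divides by N; the unbiased one divides by N - 1
    # (Bessel's correction), matching torch.var(..., unbiased=True).
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1 if unbiased else n)

def std(xs, unbiased=False):
    # Standard deviation is the square root of the matching variance.
    return math.sqrt(variance(xs, unbiased))

data = [1.0, 2.0, 3.0, 4.0]
print(variance(data))                 # 1.25 (divides by N = 4)
print(variance(data, unbiased=True))  # 1.666... (divides by N - 1 = 3)
```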
   
   @masahi @junrushao1994 Please help to review this PR. Thanks.
   







[GitHub] [incubator-tvm] jainris commented on pull request #6223: [TFLite] Implemented ONE_HOT Operator for TFLite.

2020-08-07 Thread GitBox


jainris commented on pull request #6223:
URL: https://github.com/apache/incubator-tvm/pull/6223#issuecomment-670433150


   The "continuous-integration/jenkins/pr-merge" check failed. I don't think it 
is because of my changes.
   Trying a rerun of the checks.







[incubator-tvm] branch master updated (3d8ad7a -> b3c42f9)

2020-08-07 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 3d8ad7a  [C++ RPC] fix typo to keep same with source code (#6220)
 add b3c42f9  [Relay][Pass] Support combine multiple dense op just into 
dense (#6062)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/transform.h  |   4 +-
 python/tvm/relay/transform/transform.py|  31 +++-
 src/relay/transforms/combine_parallel_dense.cc | 166 +-
 .../relay/test_pass_combine_parallel_dense.py  | 189 -
 4 files changed, 379 insertions(+), 11 deletions(-)



[GitHub] [incubator-tvm] MarisaKirisame merged pull request #6062: [Relay][Pass] Support combine multiple dense op just into dense

2020-08-07 Thread GitBox


MarisaKirisame merged pull request #6062:
URL: https://github.com/apache/incubator-tvm/pull/6062


   







[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-07 Thread GitBox


jcf94 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r466767169



##
File path: include/tvm/auto_scheduler/auto_schedule.h
##
@@ -42,19 +42,14 @@ class TuningOptionsNode : public Object {
   int early_stopping;
   /*! \brief The number of programs to be measured at each search round. */
   int num_measures_per_round;
-  /*!
-   * \brief Verbosity level.
-   * 0 for silent, 1 to output information during schedule searching.
-   */
+  /*! \brief Verbosity level. 0 for silent, 1 to output information during 
schedule searching. */
   int verbose;
   /*! \brief ProgramBuilder which builds the program */
   ProgramBuilder builder;
   /*! \brief ProgramRunner which runs the program and measures time costs */
   ProgramRunner runner;
   /*! \brief MeasureCallback functions to be called after each measure batch */
  Optional<Array<MeasureCallback>> measure_callbacks;
-  /*! \brief SearchCallback functions to be called before schedule search */
-  Optional<Array<SearchCallback>> pre_search_callbacks;

Review comment:
   Oh, this was just moved to another position during the code refactoring. 
See `SearchPolicy`.









[GitHub] [incubator-tvm] jcf94 edited a comment on pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-07 Thread GitBox


jcf94 edited a comment on pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#issuecomment-669172637


   ~~This PR shares the base class with cost models, will rebase the code after 
#6187 has been merged.~~
   ~~The other parts of code is ready for review.~~
   Ready for review.
   cc @merrymercy @comaniac @FrozenGene @junrushao1994 @tqchen 







[GitHub] [incubator-tvm] jcf94 edited a comment on pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-07 Thread GitBox


jcf94 edited a comment on pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#issuecomment-669172637


   ~~This PR shares the base class with cost models, will rebase the code after 
#6187 has been merged.~~
   ~~The other parts of code is ready for review.~~
   Rebased with #6187, ready for review.
   cc @merrymercy @comaniac @FrozenGene @junrushao1994 @tqchen 







[GitHub] [incubator-tvm] windclarion opened a new pull request #6231: [uTVM] fix crt building and running error

2020-08-07 Thread GitBox


windclarion opened a new pull request #6231:
URL: https://github.com/apache/incubator-tvm/pull/6231


   1. include\tvm\runtime\crt\module.h: the function TVMSystemLibEntryPoint 
needs extern "C", or else the linker complains that the symbol can't be found.
   
   2. src\target\source\codegen_c_host.cc, function GenerateFuncRegistry: f 
needs a cast, or else the C++ compiler reports a type mismatch.
   
   At L291, the array _tvm_func_array is missing "};", so the build fails.
   
   system_lib_registry and system_lib need to use the new names introduced in 
PR #6145.
   
   3. src\support\str_escape.h, function StrEscape: converting to octal needs 
3 bits per digit, but unsigned char c only uses the 2 LSBs because the mask 
macro is 0x03; it should be 0x07.
   
   '0' + ((c >> 6) & 0x03) needs a cast to unsigned char, because ostringstream 
treats it as an int, not an unsigned char, so the printed value is wrong. For 
example, c = 0x17 means we have 23 functions to register, so ((c >> 6) & 0x03) 
== 0 and '0' + ((c >> 6) & 0x03) is the int value of '0', which is 48; 
ostringstream prints it as an int, so we get the string "485055" when in fact 
it should be "027".
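The octal-escape behaviour described in item 3 can be sketched in plain Python (a hypothetical reimplementation for illustration, not the actual StrEscape code):

```python
def str_escape(data):
    # Each octal digit needs 3 bits, so every group is masked with 0x07;
    # masking with 0x03 would silently drop bits. In the C++ version the
    # '0' + digit sum must also be cast back to a char, otherwise the
    # ostringstream prints the int values ("485055") instead of "027".
    out = []
    for c in data:
        out.append("\\%c%c%c" % (ord("0") + ((c >> 6) & 0x07),
                                 ord("0") + ((c >> 3) & 0x07),
                                 ord("0") + (c & 0x07)))
    return "".join(out)

print(str_escape(bytes([0x17])))  # prints \027 (23 decimal = 027 octal)
```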
   
   
   
   
   







[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #6213: fix compilation error with cuda 11

2020-08-07 Thread GitBox


icemelon9 commented on a change in pull request #6213:
URL: https://github.com/apache/incubator-tvm/pull/6213#discussion_r466838537



##
File path: src/runtime/contrib/cublas/cublas.cc
##
@@ -172,7 +172,11 @@ inline void CallLtIgemm(TVMArgs args, TVMRetValue* ret, cublasLtHandle_t hdl) {
   cublasLtOrder_t order_COL32 = CUBLASLT_ORDER_COL32;
   cublasLtOrder_t order_COL4_4R2_8C = CUBLASLT_ORDER_COL4_4R2_8C;
   cublasLtMatmulDesc_t operationDesc = nullptr;
+#if CUDART_VERSION >= 11000
+  CHECK_CUBLAS_ERROR(cublasLtMatmulDescCreate(&operationDesc, CUBLAS_COMPUTE_32I, CUDA_R_32I));
+#elif

Review comment:
   No condition in the `#elif`; this causes a compilation error. 
@lanchongyizu 




