[GitHub] [incubator-tvm] leonwanghui commented on a change in pull request #5892: Add TVM application extension with WASM runtime

2020-07-10 Thread GitBox


leonwanghui commented on a change in pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#discussion_r453150111



##
File path: apps/wasm-graphcompiler-tvm/README.md
##
@@ -0,0 +1,191 @@
+# WebAssembly GraphCompiler for Deep Learning Framework with TVM Runtime
+

Review comment:
   Sure





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lixiaoquan commented on a change in pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs

2020-07-10 Thread GitBox


lixiaoquan commented on a change in pull request #6024:
URL: https://github.com/apache/incubator-tvm/pull/6024#discussion_r453138358



##
File path: src/relay/op/tensor/transform.cc
##
@@ -2146,7 +2146,18 @@ Array<te::Tensor> StridedSliceCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
   const StridedSliceAttrs* param = attrs.as<StridedSliceAttrs>();
   CHECK(param != nullptr);
-  if (param->begin && param->end && param->strides) {
+
+  bool dyn = false;
+  for (auto& v : out_type.as<TensorTypeNode>()->shape) {
+    if (const tir::VarNode* var_node = v.as<tir::VarNode>()) {
+      if (var_node->name_hint == "any_dim") {
+        dyn = true;
+        break;
+      }
+    }
+  }
+
+  if (param->begin && param->end && param->strides && !dyn) {

Review comment:
   topi::strided_slice requires a static shape because it reads the value of each dim. Should we change this requirement or just use DynamicStrideSlice?
   ```
diff --git a/topi/include/topi/transform.h b/topi/include/topi/transform.h
index b5fc02ae7..329a62ce7 100644
--- a/topi/include/topi/transform.h
+++ b/topi/include/topi/transform.h
@@ -610,6 +610,7 @@ inline Tensor strided_slice(const Tensor& x, const Array<Integer>& begin, const
   for (size_t i = 0; i < src_tensor_dim; ++i) {
     int64_t begin_range = stride_vec[i] < 0 ? -1 : 0;
     int64_t dim_i = GetConstInt(x->shape[i]);
+    LOG(INFO) << dim_i;  // it is -1 for the modified case
     int64_t end_range = stride_vec[i] < 0 ? dim_i - 1 : dim_i;
     // transform negative indices to positive value, clips on the correct range
     auto index_canonicalization = [dim_i, begin_range, end_range](int64_t index) {
@@ -635,6 +636,8 @@ inline Tensor strided_slice(const Tensor& x, const Array<Integer>& begin, const
     out_shape.push_back(slice_size);
   }
 
+  // LOG(INFO) << out_shape;
+
   return compute(
       out_shape,
       [&](const Array<Var>& indices) {
   ```









[GitHub] [incubator-tvm] tqchen commented on pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-10 Thread GitBox


tqchen commented on pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#issuecomment-656955959


   I made a change request about the namespace change to `auto_schedule`











[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-10 Thread GitBox


tqchen edited a comment on pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#issuecomment-656950356


   Folks, please take another look, and approve or request changes explicitly: https://tvm.apache.org/docs/contribute/code_review.html#approve-and-request-changes-explicitly







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-10 Thread GitBox


tqchen commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r453131345



##
File path: src/ansor/auto_schedule.h
##
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ansor/auto_schedule.h
+ * \brief The user interface of the Ansor auto-scheduler. This is the entry 
structure to get
+ * schedule search requirements from upper level (Python API), and returns a 
high performance
+ * schedule after search process.
+ */
+
+#ifndef TVM_ANSOR_AUTO_SCHEDULE_H_
+#define TVM_ANSOR_AUTO_SCHEDULE_H_
+
+#include 
+
+#include "measure.h"
+#include "search_policy/search_policy.h"
+
+namespace tvm {
+namespace ansor {

Review comment:
   Let us change the namespace to `auto_schedule`, so that the module can 
be a generic module of tvm.

















[GitHub] [incubator-tvm] yzhliu closed issue #5972: [RESULT][VOTE] Release Apache TVM (incubating) v0.6.1.rc1

2020-07-10 Thread GitBox


yzhliu closed issue #5972:
URL: https://github.com/apache/incubator-tvm/issues/5972


   







[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs

2020-07-10 Thread GitBox


kevinthesun commented on a change in pull request #6024:
URL: https://github.com/apache/incubator-tvm/pull/6024#discussion_r453119255



##
File path: src/relay/op/tensor/transform.cc
##
@@ -2146,7 +2146,18 @@ Array<te::Tensor> StridedSliceCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
   const StridedSliceAttrs* param = attrs.as<StridedSliceAttrs>();
   CHECK(param != nullptr);
-  if (param->begin && param->end && param->strides) {
+
+  bool dyn = false;
+  for (auto& v : out_type.as<TensorTypeNode>()->shape) {
+    if (const tir::VarNode* var_node = v.as<tir::VarNode>()) {
+      if (var_node->name_hint == "any_dim") {
+        dyn = true;
+        break;
+      }
+    }
+  }
+
+  if (param->begin && param->end && param->strides && !dyn) {

Review comment:
   I printed output_shape in topi::strided_slice and it is (0, 0) for that modified test_any case, which causes a runtime error. Maybe we can fix this bug directly in topi::strided_slice.









[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs

2020-07-10 Thread GitBox


kevinthesun edited a comment on pull request #6024:
URL: https://github.com/apache/incubator-tvm/pull/6024#issuecomment-656929230


   Can't topi.strided_slice support dynamic shape + const attr? What's the 
error? We'd better fix topi.strided_slice if there is any issue since it should 
be able to support dynamic input shape.







[GitHub] [incubator-tvm] tqchen commented on issue #6034: Dispatcher warning for target-keys=tracing,cpu

2020-07-10 Thread GitBox


tqchen commented on issue #6034:
URL: https://github.com/apache/incubator-tvm/issues/6034#issuecomment-656930585


   Ideally we would like to suppress the warning and show it only once per error kind.







[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs

2020-07-10 Thread GitBox


kevinthesun edited a comment on pull request #6024:
URL: https://github.com/apache/incubator-tvm/pull/6024#issuecomment-656929230


   Can topi.strided_slice support dynamic shape + const attr? What's the error?







[GitHub] [incubator-tvm] kevinthesun commented on pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs

2020-07-10 Thread GitBox


kevinthesun commented on pull request #6024:
URL: https://github.com/apache/incubator-tvm/pull/6024#issuecomment-656929230


   I think this case is already supported and we don't need this change, since dynamic input shape + constant attrs can be supported by topi.strided_slice. There is already a test case in test_any.py.







[GitHub] [incubator-tvm] tqchen commented on pull request #6036: [LLVM/CPU] Terminate basic block after "ret" instruction

2020-07-10 Thread GitBox


tqchen commented on pull request #6036:
URL: https://github.com/apache/incubator-tvm/pull/6036#issuecomment-656924042


   Thanks @kparzysz-quic !











[incubator-tvm] branch master updated: [LLVM/CPU] Terminate basic block after "ret" instruction (#6036)

2020-07-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new c9c77c6  [LLVM/CPU] Terminate basic block after "ret" instruction (#6036)
c9c77c6 is described below

commit c9c77c6b76f7cff3bc6afbf9d3ef2200e3fdbb91
Author: Krzysztof Parzyszek 
AuthorDate: Fri Jul 10 17:36:17 2020 -0500

[LLVM/CPU] Terminate basic block after "ret" instruction (#6036)

* [LLVM/CPU] Terminate basic block after "ret" instruction

"Ret" is a terminator in LLVM IR and there should be no instructions
in the basic block following it. When generating a "ret", end the
current block and start a new one.
---
 src/target/llvm/codegen_cpu.cc | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/target/llvm/codegen_cpu.cc b/src/target/llvm/codegen_cpu.cc
index f855dd5..41fa3c5 100644
--- a/src/target/llvm/codegen_cpu.cc
+++ b/src/target/llvm/codegen_cpu.cc
@@ -781,6 +781,9 @@ llvm::Value* CodeGenCPU::CreateIntrinsic(const CallNode* op) {
     return CreateStaticHandle();
   } else if (op->op.same_as(builtin::tvm_throw_last_error())) {
     builder_->CreateRet(ConstInt32(-1));
+    auto next_block = std::next(builder_->GetInsertBlock()->getIterator());
+    llvm::BasicBlock* new_bb = llvm::BasicBlock::Create(*ctx_, "cont", function_, &*next_block);
+    builder_->SetInsertPoint(new_bb);
     return ConstInt32(-1);
   } else if (op->op.same_as(builtin::tvm_struct_get())) {
     CHECK_EQ(op->args.size(), 3U);



[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5921: µTVM CRT modifications for on-device RPC server

2020-07-10 Thread GitBox


tqchen edited a comment on pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#issuecomment-656923788


   I think it is fine to just build for the CRT and not target C++ for now, as long as it works. The app was removed from the CI due to the i386 compatibility issue of the CRT; we can add it back once we have confirmed that the issue has been resolved.







[GitHub] [incubator-tvm] tqchen merged pull request #6036: [LLVM/CPU] Terminate basic block after "ret" instruction

2020-07-10 Thread GitBox


tqchen merged pull request #6036:
URL: https://github.com/apache/incubator-tvm/pull/6036


   







[incubator-tvm-site] branch master updated: fix 0.6.1 sha512 link (#11)

2020-07-10 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/master by this push:
 new 5e457e0  fix 0.6.1 sha512 link (#11)
5e457e0 is described below

commit 5e457e0eb6be8c9a92fd8a5a7643db4d4075faee
Author: Yizhi Liu 
AuthorDate: Fri Jul 10 15:12:06 2020 -0700

fix 0.6.1 sha512 link (#11)
---
 download.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/download.md b/download.md
index 8ec67d3..57f49b7 100644
--- a/download.md
+++ b/download.md
@@ -17,8 +17,8 @@ Choose your flavor of download from the following links:
 
 | Version | Source | PGP | SHA |
 | --- | -- | --- | --- |
+| 0.6.1   | [apache-tvm-src-v0.6.1-incubating.tar.gz](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.1/apache-tvm-src-v0.6.1-incubating.tar.gz) | [.asc](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.1/apache-tvm-src-v0.6.1-incubating.tar.gz.asc) | [.sha512](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.1/apache-tvm-src-v0.6.1-incubating.tar.gz.sha512) |
 | 0.6.0   | [apache-tvm-src-v0.6.0-incubating.tar.gz](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.0/apache-tvm-src-v0.6.0-incubating.tar.gz) | [.asc](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.0/apache-tvm-src-v0.6.0-incubating.tar.gz.asc) | [.sha512](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.0/apache-tvm-src-v0.6.0-incubating.tar.gz.sha512) |
-| 0.6.1   | [apache-tvm-src-v0.6.1-incubating.tar.gz](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.1/apache-tvm-src-v0.6.1-incubating.tar.gz) | [.asc](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.1/apache-tvm-src-v0.6.1-incubating.tar.gz.asc) | [.sha512](https://dist.apache.org/repos/dist/release/incubator/tvm/tvm-v0.6.0/apache-tvm-src-v0.6.0-incubating.tar.gz.sha512) |
 
 
 



 
 
 



[GitHub] [incubator-tvm] areusch edited a comment on pull request #5921: µTVM CRT modifications for on-device RPC server

2020-07-10 Thread GitBox


areusch edited a comment on pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#issuecomment-656878983


   @liangfu thanks for taking a look. I looked more into your error and it 
looks like I overlooked testing bundle_dynamic most recently. that is fixable, 
however...
   
   I did some more investigation and there are a couple of problems:
   1. I believe the performance regression is related to switching to `c` 
compiler, based on some profiling.
   2. with the CRT changes proposed in this PR, we can't use `--system-lib` 
because the global CRT memory manager needs to be initialized before functions 
can be registered, and the current `--system-lib` approach registers functions 
in static initialization. especially for targeting embedded bare-metal systems, 
I think it would be wise to allow a way to do arbitrary initialization before 
requiring use of TVM components.
   3. I tried using just plain `llvm` to avoid/prove the performance 
regression; however, there isn't a way to create a statically-linked 
non-system-lib right now (i.e. see [codegen init 
here](https://github.com/apache/incubator-tvm/blob/master/src/target/llvm/llvm_module.cc#L209)).
 i'd like to fix this, but I don't want to make this PR any larger than it is 
right now.
   4. finally, all of this is conspiring to make the `bundle_deploy` make 
situation more complex. bundle_deploy produces 3 targets: C main 
statically-linked against CRT runtime, c++ main() dynamically loading pure-c 
CRT, and c++ main dynamically loading c++ Runtime. the last one requires 
--system-lib, while the other two can't have it. so we'll need to produce a 
model/test_model that targets the c++ runtime to keep the last target alive.
   
   in a future PR, I plan to add an additional flag to the target string 
--runtime=. this flag would allow us to produce a statically-linked 
llvm module compatible with the C runtime. would it be okay to do the following 
til then:
   1. use `c` module for all CRT bundle_deploy targets
   2. build 2 copies of test_model and model: one for CRT and one for c++ 
runtime
   3. for now, CRT targets will need to take a performance regression until we 
can re-enable the llvm module approach according to my plan above.
   
   maybe @tqchen has more thoughts here?
   
   also, it would be helpful if we could add this app to the CI somehow. any 
thoughts on that?















[incubator-tvm] tag v0.6.1 created (now 0d0d515)

2020-07-10 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to tag v0.6.1
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at 0d0d515  (commit)
No new revisions were added by this update.



[GitHub] [incubator-tvm] tqchen commented on pull request #5753: Support module based interface runtime

2020-07-10 Thread GitBox


tqchen commented on pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#issuecomment-656842251


   Thanks @FrozenGene I left some comments. Wrt the use of Map, let us get 
around it for now by not exposing get_params; instead, we directly store the 
params into the graph runtime py class during relay.build (as it is only needed 
for backward compat reasons).







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5753: Support module based interface runtime

2020-07-10 Thread GitBox


tqchen commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r453029054



##
File path: src/runtime/container.cc
##
@@ -21,13 +21,19 @@
  * \file src/runtime/container.cc
  * \brief Implementations of common containers.
  */
+#include 

Review comment:
   Good catch @FrozenGene can we remove the use of Map and use 
std::unordered_map for now?









[GitHub] [incubator-tvm] trevor-m opened a new pull request #6038: [Frontend][TFLite] Fix fully_connected converter when batch size is not 1

2020-07-10 Thread GitBox


trevor-m opened a new pull request #6038:
URL: https://github.com/apache/incubator-tvm/pull/6038


   The fully_connected converter used the shapes from the TFLite model to 
reshape the data tensor. However, the TFLite model shapes do not reflect those 
provided by the `data_shape` parameter in `from_tflite()`. The TFLite model 
shape `input_tensor.tensor.ShapeAsNumpy()` will give a batch size of `1` 
because the TFLite model obviously doesn't know about the `data_shape` dict the 
user provided to the relay importer.
   
   For this particular op, the reshape can always be set to `(-1, n_units)` 
without needing to calculate a batch size.
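   The `(-1, n_units)` point can be illustrated with a small sketch (shapes 
here are hypothetical and the helper mirrors NumPy reshape semantics rather 
than the actual TFLite converter code):

   ```python
   def fc_reshape_shape(input_shape, n_units):
       """Shape produced by reshape(-1, n_units), mirroring NumPy semantics."""
       total = 1
       for d in input_shape:
           total *= d
       assert total % n_units == 0, "element counts must match"
       # -1 infers the batch dimension from the total element count,
       # so the user-provided data_shape never needs to be consulted.
       return (total // n_units, n_units)

   # Batch 1 (what the TFLite model claims) and batch 4 (what the user fed).
   assert fc_reshape_shape((1, 1, 1, 1536), 1536) == (1, 1536)
   assert fc_reshape_shape((4, 1, 1, 1536), 1536) == (4, 1536)
   ```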
   
   For inceptionv4, without this PR we would incorrectly compute a batch size 
of 1, adding this reshape from %515 `(4, 1, 1, 1536)` to %516 `(1, 1536)`. 
This ultimately causes the model output to become `(1, 1001)` instead of `(4, 
1001)`.
   ```
 ...
 %515 = nn.avg_pool2d(%514, pool_size=[8, 8], padding=[0, 0, 0, 0], 
layout="NHWC") /* ty=Tensor[(4, 1, 1, 1536), float32] */;
 %516 = reshape(%515, meta[relay.Constant][0] /* ty=Tensor[(2), int32] */ 
/* ty=Tensor[(2), int32] */, newshape=[1, 1536]) /* ty=Tensor[(1, 1536), 
float32] */;
 %517 = nn.dense(%516, %v_param_299, units=None) /* ty=Tensor[(1, 1001), 
float32] */;
 %518 = nn.bias_add(%517, %v_param_300) /* ty=Tensor[(1, 1001), float32] 
*/;
 nn.softmax(%518, axis=1) /* ty=Tensor[(1, 1001), float32] */
   }
   ```
   
   Btw, should the reshape op have some validation to prevent reshapes where 
the numbers of elements don't match? `(4, 1, 1, 1536)` -> `(1, 1536)` shouldn't 
be valid.
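   As a sketch of the kind of check being asked about here (illustrative only, 
not the actual Relay type-checker logic), a reshape with no wildcard dims is 
valid only when the element counts agree:

   ```python
   def reshape_is_valid(src_shape, dst_shape):
       """Element-count check for a reshape with no -1/wildcard dims."""
       src = 1
       for d in src_shape:
           src *= d
       dst = 1
       for d in dst_shape:
           dst *= d
       return src == dst

   # The suspicious reshape from the IR above: 4*1*1*1536 != 1*1536.
   assert not reshape_is_valid((4, 1, 1, 1536), (1, 1536))
   assert reshape_is_valid((4, 1, 1, 1536), (4, 1536))
   ```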







[GitHub] [incubator-tvm] zhiics commented on pull request #6008: [Relay][Dyn] Dynamic TopK Op

2020-07-10 Thread GitBox


zhiics commented on pull request #6008:
URL: https://github.com/apache/incubator-tvm/pull/6008#issuecomment-656815967


   Thanks @mbrookhart @lixiaoquan @kevinthesun 







[GitHub] [incubator-tvm] zhiics merged pull request #6008: [Relay][Dyn] Dynamic TopK Op

2020-07-10 Thread GitBox


zhiics merged pull request #6008:
URL: https://github.com/apache/incubator-tvm/pull/6008


   







[incubator-tvm] branch master updated: [Relay][Dyn] Dynamic TopK Op (#6008)

2020-07-10 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 474d472  [Relay][Dyn] Dynamic TopK Op (#6008)
474d472 is described below

commit 474d47234f8a2378f9135fa3200ca7ce75459889
Author: Matthew Brookhart 
AuthorDate: Fri Jul 10 11:18:34 2020 -0700

[Relay][Dyn] Dynamic TopK Op (#6008)

* add dynamic topk op

* add topk to dynamic_to_static pass

* fix TF test

* fix pylint
---
 python/tvm/relay/op/_algorithm.py | 35 ++-
 python/tvm/relay/op/algorithm.py  | 13 ++--
 python/tvm/relay/op/dyn/__init__.py   |  1 +
 python/tvm/relay/op/{ => dyn}/_algorithm.py   | 52 
 python/tvm/relay/op/strategy/generic.py   |  3 +-
 src/relay/analysis/util.cc|  9 +--
 src/relay/op/algorithm/topk.cc| 24 +++
 src/relay/op/{ => dyn}/algorithm/topk.cc  | 43 +++--
 src/relay/transforms/dynamic_to_static.cc | 20 +-
 tests/python/relay/dyn/test_dynamic_op_level6.py  | 76 +++
 tests/python/relay/test_pass_dynamic_to_static.py | 53 
 11 files changed, 211 insertions(+), 118 deletions(-)

diff --git a/python/tvm/relay/op/_algorithm.py 
b/python/tvm/relay/op/_algorithm.py
index 5a20480..cded2e1 100644
--- a/python/tvm/relay/op/_algorithm.py
+++ b/python/tvm/relay/op/_algorithm.py
@@ -35,25 +35,6 @@ register_strategy("topk", strategy.topk_strategy)
 register_pattern("topk", OpPattern.OPAQUE)
 
 @script
-def _topk_shape_func_input_data(data, k, axis):
-ndim = len(data.shape)
-val_out = output_tensor((ndim,), "int64")
-indices_out = output_tensor((ndim,), "int64")
-
-for i in const_range(ndim):
-if i != axis:
-val_out[i] = int64(data.shape[i])
-indices_out[i] = int64(data.shape[i])
-else:
-if k[0] < 1:
-val_out[i] = int64(data.shape[i])
-indices_out[i] = int64(data.shape[i])
-else:
-val_out[i] = int64(k[0])
-indices_out[i] = int64(k[0])
-return val_out, indices_out
-
-@script
 def _topk_shape_func_input_shape(data_shape, k, axis):
 ndim = data_shape.shape[0]
 val_out = output_tensor((ndim,), "int64")
@@ -72,22 +53,16 @@ def _topk_shape_func_input_shape(data_shape, k, axis):
 indices_out[i] = int64(k)
 return val_out, indices_out
 
-@_reg.register_shape_func("topk", True)
+@_reg.register_shape_func("topk", False)
 def topk_shape_func(attrs, inputs, _):
 """
 Shape func for topk.
 """
 axis = attrs.axis
-if attrs.k is not None:
-if axis < 0:
-axis += inputs[0].shape[0]
-val_out, indices_out = \
-_topk_shape_func_input_shape(inputs[0], attrs.k, convert(axis))
-else:
-if axis < 0:
-axis += len(inputs[0].shape)
-val_out, indices_out = \
-_topk_shape_func_input_data(inputs[0], inputs[1], convert(axis))
+if axis < 0:
+axis += inputs[0].shape[0]
+val_out, indices_out = \
+_topk_shape_func_input_shape(inputs[0], attrs.k, convert(axis))
 ret_type = attrs.ret_type
 if ret_type == "both":
 ret = [val_out, indices_out]
diff --git a/python/tvm/relay/op/algorithm.py b/python/tvm/relay/op/algorithm.py
index d31e89a..5aeb7e6 100644
--- a/python/tvm/relay/op/algorithm.py
+++ b/python/tvm/relay/op/algorithm.py
@@ -16,8 +16,10 @@
 # under the License.
 """Classic algorithm operation"""
 from __future__ import absolute_import as _abs
+import numpy as np
 from . import _make
-from ..expr import TupleWrapper, const
+from .dyn import _make as _dyn_make
+from ..expr import TupleWrapper, Expr, Constant
 
 def argsort(data, axis=-1, is_ascend=1, dtype="int32"):
 """Performs sorting along the given axis and returns an array of indicies
@@ -82,9 +84,12 @@ def topk(data, k=1, axis=-1, ret_type="both",
 out : relay.Expr or List[relay.Expr]
 The computed result.
 """
-if isinstance(k, int):
-k = const(k, "int64")
-out = _make.topk(data, k, axis, ret_type, is_ascend, dtype)
+if isinstance(k, Constant):
+k = np.asscalar(k.data.asnumpy())
+if isinstance(k, Expr):
+out = _dyn_make.topk(data, k, axis, ret_type, is_ascend, dtype)
+else:
+out = _make.topk(data, k, axis, ret_type, is_ascend, dtype)
 if ret_type == "both":
 return TupleWrapper(out, 2)
 return out
diff --git a/python/tvm/relay/op/dyn/__init__.py 
b/python/tvm/relay/op/dyn/__init__.py
index d659203..f4d47a6 100644
--- a/python/tvm/relay/op/dyn/__init__.py
+++ b/python/tvm/relay/op/dyn/__init__.py
@@ -17,4 +17,5 @@
 # pylint: disable=wildcard-import, redefined-builtin, invalid-name
 """The Relay n

[GitHub] [incubator-tvm] jroesch merged pull request #5958: [REFACTOR][RELAY] Move invoke_tvm_op and shape_func to vm dialect

2020-07-10 Thread GitBox


jroesch merged pull request #5958:
URL: https://github.com/apache/incubator-tvm/pull/5958


   







[incubator-tvm] branch master updated: [REFACTOR][RELAY] Move invoke_tvm_op and shape_func to vm dialect (#5958)

2020-07-10 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new ba04c6a  [REFACTOR][RELAY] Move invoke_tvm_op and shape_func to vm 
dialect (#5958)
ba04c6a is described below

commit ba04c6a393371d539ea8ee417520c7909521a370
Author: Zhi <5145158+zhi...@users.noreply.github.com>
AuthorDate: Fri Jul 10 11:03:23 2020 -0700

[REFACTOR][RELAY] Move invoke_tvm_op and shape_func to vm dialect (#5958)

* [REFACTOR][RELAY] Move invoke_tvm_op and shape_func to vm dialect

* address comments
---
 include/tvm/relay/attrs/memory.h   |  13 ---
 include/tvm/relay/attrs/vm.h   |  47 +++
 python/tvm/relay/op/__init__.py|   2 +-
 python/tvm/relay/op/memory/memory.py   |  40 -
 python/tvm/relay/op/vm/__init__.py |   2 +-
 python/tvm/relay/op/vm/vm.py   |  48 +++
 python/tvm/relay/transform/memory_alloc.py |   4 +-
 src/relay/backend/vm/compiler.cc   |   4 +-
 src/relay/op/memory/memory.cc  | 124 
 src/relay/op/vm/vm.cc  | 127 +
 src/relay/transforms/fold_constant.cc  |   4 +-
 11 files changed, 230 insertions(+), 185 deletions(-)

diff --git a/include/tvm/relay/attrs/memory.h b/include/tvm/relay/attrs/memory.h
index 7429c39..b737103 100644
--- a/include/tvm/relay/attrs/memory.h
+++ b/include/tvm/relay/attrs/memory.h
@@ -74,19 +74,6 @@ struct AllocTensorAttrs : public 
tvm::AttrsNode {
   }
 };
 
-/*!
- * \brief Options for the shape function operator.
- */
-struct ShapeFuncAttrs : public tvm::AttrsNode {
-  Array is_input;
-
-  TVM_DECLARE_ATTRS(ShapeFuncAttrs, "relay.attrs.ShapeFuncAttrs") {
-TVM_ATTR_FIELD(is_input).describe(
-"A bool indicating whether the shape function should"
-"expect shape or input in each position.");
-  }
-};
-
 }  // namespace relay
 }  // namespace tvm
 #endif  // TVM_RELAY_ATTRS_MEMORY_H_
diff --git a/include/tvm/relay/attrs/vm.h b/include/tvm/relay/attrs/vm.h
new file mode 100644
index 000..9144f47
--- /dev/null
+++ b/include/tvm/relay/attrs/vm.h
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/relay/attrs/vm.h
+ * \brief Attributes for Relay vm operators.
+ */
+#ifndef TVM_RELAY_ATTRS_VM_H_
+#define TVM_RELAY_ATTRS_VM_H_
+
+#include 
+
+namespace tvm {
+namespace relay {
+
+/*!
+ * \brief Options for the shape function operator.
+ */
+struct ShapeFuncAttrs : public tvm::AttrsNode {
+  Array is_input;
+
+  TVM_DECLARE_ATTRS(ShapeFuncAttrs, "relay.attrs.ShapeFuncAttrs") {
+TVM_ATTR_FIELD(is_input).describe(
+"A bool indicating whether the shape function should"
+"expect shape or input in each position.");
+  }
+};
+
+}  // namespace relay
+}  // namespace tvm
+#endif  // TVM_RELAY_ATTRS_VM_H_
diff --git a/python/tvm/relay/op/__init__.py b/python/tvm/relay/op/__init__.py
index a45d466..011042b 100644
--- a/python/tvm/relay/op/__init__.py
+++ b/python/tvm/relay/op/__init__.py
@@ -27,7 +27,7 @@ from .reduce import *
 from .tensor import *
 from .transform import *
 from .algorithm import *
-from .vm import *
+from . import vm
 from . import nn
 from . import annotation
 from . import memory
diff --git a/python/tvm/relay/op/memory/memory.py 
b/python/tvm/relay/op/memory/memory.py
index 4092545..b426a0e 100644
--- a/python/tvm/relay/op/memory/memory.py
+++ b/python/tvm/relay/op/memory/memory.py
@@ -19,27 +19,6 @@
 from __future__ import absolute_import as _abs
 from . import _make
 
-def invoke_tvm_op(func, inputs, outputs):
-"""Call a primitive function with the TVM operator calling convention.
-
-Parameters
---
-func : tvm.relay.Expr
-The input expr.
-
-inputs : tvm.relay.Expr
-A tuple of the inputs to pass to the TVM function.
-
-outputs : tvm.relay.Expr
-A tuple of the outputs to pass to the TVM function.
-
-Returns
----
-result : tvm.relay.Expr
-The invoke_tvm_op call node.
-"""
-r

[GitHub] [incubator-tvm] giuseros commented on pull request #6027: [Bug fix] Fix in arm_cpu/conv2d_alter_op for NHWC quantized

2020-07-10 Thread GitBox


giuseros commented on pull request #6027:
URL: https://github.com/apache/incubator-tvm/pull/6027#issuecomment-656807720


   Thank you @icemelon9 !







[GitHub] [incubator-tvm] icemelon9 commented on pull request #6027: [Bug fix] Fix in arm_cpu/conv2d_alter_op for NHWC quantized

2020-07-10 Thread GitBox


icemelon9 commented on pull request #6027:
URL: https://github.com/apache/incubator-tvm/pull/6027#issuecomment-656806953


   Thanks @giuseros 







[incubator-tvm] branch master updated: [Bug fix] Fix in arm_cpu/conv2d_alter_op for NHWC quantized (#6027)

2020-07-10 Thread haichen
This is an automated email from the ASF dual-hosted git repository.

haichen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new b6ee52b  [Bug fix] Fix in arm_cpu/conv2d_alter_op for NHWC quantized 
(#6027)
b6ee52b is described below

commit b6ee52bff90acb868412b764a93fbdeb483859b9
Author: Giuseppe Rossini 
AuthorDate: Fri Jul 10 18:58:22 2020 +0100

[Bug fix] Fix in arm_cpu/conv2d_alter_op for NHWC quantized (#6027)

* Bug fix] Fix in arm_cpu/conv2d_alter_op for NHWC quantized

Few minor typos to be fixed in topi/arm_cpu/conv2d_alter_op.py for the
NHWC quantized route:
- Kernel shape was misread (CO, IC, KH, KW) -> (KH, KW, IC, OC)
- Pad along the K dimension was misspelled: pad_k -> pad_K
- Workload name was wrong: "conv2d_NHWC_int8_without_tranform.arm_cpu"
  -> "conv2d_NHWC_quantized_without_transform.arm_cpu"

This submission fixes those errors and add a further test for 
conv2d_alter_op.py

Change-Id: I0622df05f1d4d15311946f6e75f1840a34815a5b

* Move -target to -mtriple

Change-Id: Ieff80c774e8ab0fa7f48d83d50a79f3a62e8fe13

* Retrigger tests

Change-Id: I5541bed54eacc5063bf4a4fda725209cc23f621e
---
 python/tvm/relay/op/strategy/arm_cpu.py |  2 +-
 tests/python/relay/test_pass_alter_op_layout.py | 65 +
 topi/python/topi/arm_cpu/conv2d_alter_op.py | 11 +++--
 3 files changed, 72 insertions(+), 6 deletions(-)

diff --git a/python/tvm/relay/op/strategy/arm_cpu.py 
b/python/tvm/relay/op/strategy/arm_cpu.py
index d682aad..e639e22 100644
--- a/python/tvm/relay/op/strategy/arm_cpu.py
+++ b/python/tvm/relay/op/strategy/arm_cpu.py
@@ -284,7 +284,7 @@ def 
conv2d_gemm_without_weight_transform_strategy_arm_cpu(attrs, inputs, out_typ
 name="conv2d_NHWC_quantized_without_transform.arm_cpu")
 else:
 raise RuntimeError(
-"Unsupported conv2d_gemm_without_weight_transform layout {0} with 
datatype {1}".
+"Unsupported conv2d_NHWC_quantized_without_transform layout {0} 
with datatype {1}".
 format(layout, data.dtype))
 return strategy
 
diff --git a/tests/python/relay/test_pass_alter_op_layout.py 
b/tests/python/relay/test_pass_alter_op_layout.py
index bbe10c7..77105f0 100644
--- a/tests/python/relay/test_pass_alter_op_layout.py
+++ b/tests/python/relay/test_pass_alter_op_layout.py
@@ -1053,6 +1053,70 @@ def test_alter_layout_nhwc_arm():
 
 assert tvm.ir.structural_equal(a, b), "Actual = \n" + str(a)
 
+def test_alter_layout_nhwc_int8_aarch64():
+""" Check that AlterOplayout does not alter NHWC data layout. """
+from tvm import autotvm
+expected_workload_shape = (20, 42, 4, 16)
+
+# We use Int8Fallback  to disable the fallback flag
+# and to test the new workload produced during the pass
+class Int8Fallback(autotvm.FallbackContext):
+def _query_inside(self, target, workload):
+key = (target, workload)
+if key in self.memory:
+return self.memory[key]
+cfg = autotvm.task.space.FallbackConfigEntity()
+cfg.is_fallback = False
+cfg.cost = 0
+self.memory[key] = cfg
+return cfg
+def update(self, target, workload, cfg):
+key = (str(target), workload)
+assert workload[2][1] == expected_workload_shape
+assert workload[0] == 
"conv2d_NHWC_quantized_without_transform.arm_cpu"
+self.memory[key] = cfg
+
+def alter_conv2d(attrs, inputs, tinfos, out_type):
+import topi
+with tvm.target.create("llvm -device=arm_cpu 
-mtriple=aarch64-linux-gnu"):
+with Int8Fallback():
+tmp =  topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, 
out_type)
+return tmp
+
+# Check NHWC conversion.
+def before_nhwc_int8():
+x = relay.var("x", shape=(1, 56, 56, 73), dtype='int8')
+weight = relay.var('weight1', shape=(3, 3, 73, 79), dtype='int8')
+y = relay.nn.conv2d(x, weight,
+channels=79,
+kernel_size=(3, 3),
+data_layout='NHWC',
+kernel_layout='HWIO',
+out_dtype='int32')
+y = relay.Function(analysis.free_vars(y), y)
+return y
+
+def expected_nhwc_int8():
+x = relay.var("x", shape=(1, 56, 56, 73), dtype='int8')
+weight = relay.var('weight1', shape=(3, 3, 73, 79), dtype='int8')
+tile_rows = 4
+tile_cols = 16
+weight_transformed = 
relay.nn.contrib_conv2d_gemm_weight_transform(weight, tile_rows, tile_cols)
+y = relay.nn.contrib_conv2d_gemm_without_weight_transform(x, 
weight_transformed,
+channels=79,
+kerne

[GitHub] [incubator-tvm] icemelon9 merged pull request #6027: [Bug fix] Fix in arm_cpu/conv2d_alter_op for NHWC quantized

2020-07-10 Thread GitBox


icemelon9 merged pull request #6027:
URL: https://github.com/apache/incubator-tvm/pull/6027


   







[GitHub] [incubator-tvm] tqchen merged pull request #6035: Add creation of Hexagon device in RPC client

2020-07-10 Thread GitBox


tqchen merged pull request #6035:
URL: https://github.com/apache/incubator-tvm/pull/6035


   







[incubator-tvm] branch master updated: Add creation of Hexagon device in RPC client (#6035)

2020-07-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 3b6c3be  Add creation of Hexagon device in RPC client (#6035)
3b6c3be is described below

commit 3b6c3be63db812eab035684da87a7de8edac2631
Author: Krzysztof Parzyszek 
AuthorDate: Fri Jul 10 12:57:05 2020 -0500

Add creation of Hexagon device in RPC client (#6035)
---
 python/tvm/rpc/client.py | 4 
 1 file changed, 4 insertions(+)

diff --git a/python/tvm/rpc/client.py b/python/tvm/rpc/client.py
index 2f96c9b..60eb08d 100644
--- a/python/tvm/rpc/client.py
+++ b/python/tvm/rpc/client.py
@@ -186,6 +186,10 @@ class RPCSession(object):
 """Construct extension device."""
 return self.context(12, dev_id)
 
+def hexagon(self, dev_id=0):
+"""Construct Hexagon device."""
+return self.context(14, dev_id)
+
 def webgpu(self, dev_id=0):
 """Construct WebGPU device."""
 return self.context(15, dev_id)
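The device type codes visible in this file (12 for the extension device, the 
new 14 for Hexagon, 15 for WebGPU) can be summarized in a small sketch; names 
other than the codes shown in the diff are assumptions:

```python
# Device type codes as they appear in python/tvm/rpc/client.py; the
# name "ext_dev" for code 12 is the conventional TVM name (assumed).
DEVICE_TYPE_CODES = {
    "ext_dev": 12,
    "hexagon": 14,  # added by this patch
    "webgpu": 15,
}

def context_code(device_name):
    """Look up the numeric device type passed to RPCSession.context."""
    return DEVICE_TYPE_CODES[device_name]

assert context_code("hexagon") == 14
```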



[GitHub] [incubator-tvm] giuseros opened a new pull request #6037: Fix conv2_gemm after target structure update

2020-07-10 Thread GitBox


giuseros opened a new pull request #6037:
URL: https://github.com/apache/incubator-tvm/pull/6037


   After target structure changed in this RFC:
   
   https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/42
   
   The conv2d optimizations were broken for the following reasons:
   - "target" is now called mtriple (this changes how we test if the
 architecture is AArch64)
   - when we invoke "clang.create_llvm" we still need to specify the
 "--target" option (set to aarch64-linux-gnu)
   
   This submission reverts those changes
   
   Change-Id: I04c597b91ca5800ddf4471255e2a358c60bc048e
   







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5753: Support module based interface runtime

2020-07-10 Thread GitBox


zhiics commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r452985355



##
File path: src/runtime/graph/graph_runtime_factory.h
##
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/runtime/graph_runtime_factory.h
+ * \brief Graph runtime factory creating graph runtime.
+ */
+
+#ifndef TVM_RUNTIME_GRAPH_GRAPH_RUNTIME_FACTORY_H_
+#define TVM_RUNTIME_GRAPH_GRAPH_RUNTIME_FACTORY_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./graph_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+class TVM_DLL GraphRuntimeFactory : public runtime::ModuleNode {
+ public:
+  /*!
+   * \brief Initialize the GraphRuntimeFactory with graph and context.
+   * \param graph_json The execution graph.
+   * \param params The params of graph.
+   * \param module_name The module name of graph.
+   */
+  void Init(const std::string& graph_json, const tvm::Map& params,
+const std::string& module_name = "default");
+
+  /*!
+   * \brief Get member function to front-end
+   * \param name The name of the function.
+   * \param sptr_to_self The pointer to the module node.
+   * \return The corresponding member function.
+   */
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr& 
sptr_to_self) final;
+
+  /*!
+   * \return The type key of the executor.
+   */
+  const char* type_key() const override { return "GraphRuntimeFactory"; }
+
+  /*!
+   * \brief Save the module to binary stream.
+   * \param stream The binary stream to save to.
+   */
+  void SaveToBinary(dmlc::Stream* stream) override;
+
+  /*!
+   * \brief Create a specific runtime module
+   * \param module The module we will be used for creating runtime
+   * \param ctxs The context of the host and devices where graph nodes will be
+   *  executed on.
+   * \return created runtime module
+   */
+  Module RuntimeCreate(Module module, const std::vector& ctxs);
+
+  /*!
+   * \brief Create a specific debug runtime module
+   * \param module The module we will be used for creating runtime
+   * \param ctxs The context of the host and devices where graph nodes will be
+   *  executed on.
+   * \return created debug runtime module
+   */
+  Module DebugRuntimeCreate(Module module, const std::vector& 
ctxs);
+
+  /*!
+   * \brief Select the specific module
+   * \param name The name of the module
+   * \return selected module
+   */
+  Module SelectModule(const std::string& name);
+
+  /*!
+   * \brief Set params.
+   * \param graph_runtime The graph runtime we want to set the params into.
+   * \param params The graph params value we want to set.
+   */
+  void SetParams(GraphRuntime* graph_runtime,
+ const tvm::Map& params) const {
+tvm::Map value = params;
+// upload big arrays first to avoid memory issue in rpc mode
+std::vector keys;
+for (const auto& p : value) {
+  keys.emplace_back(p.first);
+}
+std::sort(std::begin(keys), std::end(keys),
+  [&](const std::string& lhs, const std::string& rhs) -> bool {
+auto lhs_shape = value[lhs].Shape();
+auto rhs_shape = value[rhs].Shape();
+auto lhs_prod = std::accumulate(std::begin(lhs_shape), 
std::end(lhs_shape), 1,
+std::multiplies());
+auto rhs_prod = std::accumulate(std::begin(rhs_shape), 
std::end(rhs_shape), 1,
+std::multiplies());
+return lhs_prod > rhs_prod;

Review comment:
   can we just compare GetDataSize of lhs and rhs?
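    The intent of the comparator under review (upload the largest parameter 
arrays first) can be sketched in Python; here `data_size` stands in for what 
`GetDataSize` would return, i.e. element count times element width (all names 
and shapes are illustrative):

    ```python
    def data_size(shape, dtype_bytes=4):
        """Byte size of a tensor; the quantity GetDataSize would report."""
        n = 1
        for d in shape:
            n *= d
        return n * dtype_bytes

    # Hypothetical parameter shapes.
    params = {"w1": (64, 64), "w2": (1024, 1024), "b1": (64,)}
    # Sort keys so the biggest arrays come first, matching the
    # "upload big arrays first" comment in the original code.
    keys = sorted(params, key=lambda k: data_size(params[k]), reverse=True)
    assert keys == ["w2", "w1", "b1"]
    ```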









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5753: Support module based interface runtime

2020-07-10 Thread GitBox


zhiics commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r452985355



##
File path: src/runtime/graph/graph_runtime_factory.h
##
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/runtime/graph_runtime_factory.h
+ * \brief Graph runtime factory creating graph runtime.
+ */
+
+#ifndef TVM_RUNTIME_GRAPH_GRAPH_RUNTIME_FACTORY_H_
+#define TVM_RUNTIME_GRAPH_GRAPH_RUNTIME_FACTORY_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./graph_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+class TVM_DLL GraphRuntimeFactory : public runtime::ModuleNode {
+ public:
+  /*!
+   * \brief Initialize the GraphRuntimeFactory with graph and context.
+   * \param graph_json The execution graph.
+   * \param params The params of graph.
+   * \param module_name The module name of graph.
+   */
+  void Init(const std::string& graph_json, const tvm::Map& params,
+const std::string& module_name = "default");
+
+  /*!
+   * \brief Get member function to front-end
+   * \param name The name of the function.
+   * \param sptr_to_self The pointer to the module node.
+   * \return The corresponding member function.
+   */
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr& 
sptr_to_self) final;
+
+  /*!
+   * \return The type key of the executor.
+   */
+  const char* type_key() const override { return "GraphRuntimeFactory"; }
+
+  /*!
+   * \brief Save the module to binary stream.
+   * \param stream The binary stream to save to.
+   */
+  void SaveToBinary(dmlc::Stream* stream) override;
+
+  /*!
+   * \brief Create a specific runtime module
+   * \param module The module we will be used for creating runtime
+   * \param ctxs The context of the host and devices where graph nodes will be
+   *  executed on.
+   * \return created runtime module
+   */
+  Module RuntimeCreate(Module module, const std::vector& ctxs);
+
+  /*!
+   * \brief Create a specific debug runtime module
+   * \param module The module we will be used for creating runtime
+   * \param ctxs The context of the host and devices where graph nodes will be
+   *  executed on.
+   * \return created debug runtime module
+   */
+  Module DebugRuntimeCreate(Module module, const std::vector& 
ctxs);
+
+  /*!
+   * \brief Select the specific module
+   * \param name The name of the module
+   * \return selected module
+   */
+  Module SelectModule(const std::string& name);
+
+  /*!
+   * \brief Set params.
+   * \param graph_runtime The graph runtime we want to set the params into.
+   * \param params The graph params value we want to set.
+   */
+  void SetParams(GraphRuntime* graph_runtime,
+ const tvm::Map& params) const {
+tvm::Map value = params;
+// upload big arrays first to avoid memory issue in rpc mode
+std::vector keys;
+for (const auto& p : value) {
+  keys.emplace_back(p.first);
+}
+std::sort(std::begin(keys), std::end(keys),
+  [&](const std::string& lhs, const std::string& rhs) -> bool {
+auto lhs_shape = value[lhs].Shape();
+auto rhs_shape = value[rhs].Shape();
+auto lhs_prod = std::accumulate(std::begin(lhs_shape), 
std::end(lhs_shape), 1,
+std::multiplies());
+auto rhs_prod = std::accumulate(std::begin(rhs_shape), 
std::end(rhs_shape), 1,
+std::multiplies());
+return lhs_prod > rhs_prod;

Review comment:
   can we just compare GetDataSize of lhs and rhs?
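The comparison suggested here can be sketched in Python (an illustrative model only: TVM's actual implementation is C++, and the `NDArray`/`data_size` names below stand in for the real NDArray and its byte-size query, analogous to GetDataSize):

```python
from math import prod

class NDArray:
    """Minimal stand-in for a TVM NDArray: just a shape and an item size."""
    def __init__(self, shape, itemsize=4):
        self.shape = shape
        self.itemsize = itemsize

    def data_size(self):
        # Analogous to GetDataSize: total bytes held by the array.
        return prod(self.shape) * self.itemsize

def upload_order(params):
    # Largest arrays first, mirroring the sort in SetParams, but comparing
    # one byte size per array instead of re-running std::accumulate over
    # both shapes for every comparison.
    return sorted(params, key=lambda name: params[name].data_size(),
                  reverse=True)

params = {
    "fc_bias": NDArray((1000,)),        # 4,000 bytes
    "fc_weight": NDArray((1000, 512)),  # 2,048,000 bytes
}
print(upload_order(params))  # ['fc_weight', 'fc_bias']
```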

##
File path: src/runtime/container.cc
##
@@ -21,13 +21,19 @@
  * \file src/runtime/container.cc
  * \brief Implementations of common containers.
  */
+#include 

Review comment:
   Shouldn't we avoid including node into runtime?

##
File path: src/runtime/graph/graph_runtime_factory.h
##
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * dist

[GitHub] [incubator-tvm] icemelon9 commented on pull request #5958: [REFACTOR][RELAY] Move invoke_tvm_op and shape_func to vm dialect

2020-07-10 Thread GitBox


icemelon9 commented on pull request #5958:
URL: https://github.com/apache/incubator-tvm/pull/5958#issuecomment-656802303


   @jroesch Could you take a look again?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5753: Support module based interface runtime

2020-07-10 Thread GitBox


tqchen commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r452977944



##
File path: src/runtime/graph/graph_runtime_factory.h
##
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/runtime/graph_runtime_factory.h
+ * \brief Graph runtime factory creating graph runtime.
+ */
+
+#ifndef TVM_RUNTIME_GRAPH_GRAPH_RUNTIME_FACTORY_H_
+#define TVM_RUNTIME_GRAPH_GRAPH_RUNTIME_FACTORY_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./graph_runtime.h"
+
+namespace tvm {
+namespace runtime {
+
+class TVM_DLL GraphRuntimeFactory : public runtime::ModuleNode {
+ public:
+  /*!
+   * \brief Initialize the GraphRuntimeFactory with graph and context.
+   * \param graph_json The execution graph.
+   * \param params The params of graph.
+   * \param module_name The module name of graph.
+   */
+  void Init(const std::string& graph_json, const tvm::Map& params,

Review comment:
   Make them params of the constructor?

##
File path: src/runtime/graph/graph_runtime_factory.cc
##
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file graph_runtime_factory.cc
+ * \brief Graph runtime factory implementations
+ */
+
+#include "./graph_runtime_factory.h"
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+
+void GraphRuntimeFactory::Init(const std::string& graph_json,
+   const tvm::Map& 
params,
+   const std::string& module_name) {
+  graph_json_ = graph_json;
+  params_ = params;
+  module_name_ = module_name;
+}
+
+PackedFunc GraphRuntimeFactory::GetFunction(
+const std::string& name, const 
tvm::runtime::ObjectPtr& sptr_to_self) {
+  if (name == "get_json") {
+return PackedFunc(
+[sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->GetJson(); });
+  } else if (name == "get_lib") {
+return PackedFunc(
+[sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->GetLib(); });
+  } else if (name == "get_params") {
+return PackedFunc(
+[sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->GetParams(); });
+  } else if (name == module_name_) {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  auto module = this->SelectModule(module_name_);
+  std::vector contexts;
+  for (int i = 0; i < args.num_args; ++i) {
+contexts.emplace_back(args[i].operator TVMContext());
+  }
+  *rv = this->RuntimeCreate(module, contexts);
+});
+  } else if (name == "debug_create") {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  CHECK_GE(args.size(), 2);
+  std::string module_name = args[0].operator String();
+  auto module = this->SelectModule(module_name);
+  std::vector contexts;
+  for (int i = 1; i < args.num_args; ++i) {
+contexts.emplace_back(args[i].operator TVMContext());
+  }
+  *rv = this->DebugRuntimeCreate(module, contexts);
+});
+  } else if (name == "remove_params") {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  auto exec = make_object();
+  exec->Init(this->GetJson(), {}, this->GetModuleName());
+  exec->Import(t

[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5753: Support module based interface runtime

2020-07-10 Thread GitBox


tqchen commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r452957910



##
File path: src/node/container.cc
##
@@ -357,7 +357,4 @@ TVM_REGISTER_GLOBAL("node.MapItems").set_body([](TVMArgs 
args, TVMRetValue* ret)
   *ret = std::move(rkvs);
 });
 
-#if (USE_FALLBACK_STL_MAP == 0)
-TVM_DLL constexpr uint64_t DenseMapNode::kNextProbeLocation[];

Review comment:
   cc @junrushao1994 please check 

##
File path: python/tvm/relay/backend/graph_runtime_factory.py
##
@@ -0,0 +1,98 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Graph runtime factory."""
+import warnings
+from tvm._ffi.base import string_types
+from tvm._ffi.registry import get_global_func
+from tvm.runtime.module import Module
+from tvm.runtime import ndarray
+
+
+def create(graph_json_str, libmod, libmod_name, params):
+"""Create a runtime executor module given a graph and module.
+Parameters
+--
+graph_json_str : str or graph class
+The graph to be deployed in json format output by nnvm graph.
+The graph can only contain one operator(tvm_op) that
+points to the name of PackedFunc in the libmod.
+libmod : tvm.Module
+The module of the corresponding function
+libmod_name: str
+The name of module
+params : dict of str to NDArray
+The parameters of module
+
+Returns
+---
+graph_module : GraphRuntimeFactoryModule
+Runtime graph runtime factory module.
+"""
+if not isinstance(graph_json_str, string_types):
+try:
+graph_json_str = graph_json_str._tvm_graph_json()
+except AttributeError:
+raise ValueError("Type %s is not supported" % type(graph_json_str))
+fcreate = get_global_func("tvm.graph_runtime_factory.create")
+args = []
+for k, v in params.items():
+args.append(k)
+args.append(ndarray.array(v))
+return GraphRuntimeFactoryModule(fcreate(graph_json_str, libmod, 
libmod_name, *args))
+
+
+class GraphRuntimeFactoryModule(Module):
+"""Graph runtime factory module.
+This is a module of graph runtime factory
+
+Parameters
+--
+module : Module
+The internal tvm module that holds the actual graph functions.
+"""
+
+def __init__(self, module):

Review comment:
   We don't need to wrap the module here. Instead, we can simply add a 
constructor that takes graph_json, lib_mod, and libmod_name, and keep the 
module as a member.
   
   Then we overload export_library so that it redirects to 
`self.module.export_library`. This helps us reduce the functions we need to 
expose; for example, we don't have to expose `get_lib`.
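The shape of the suggestion can be sketched in Python (names are illustrative of the pattern, not the final TVM API; `_RecordingModule` is a hypothetical stand-in for the wrapped runtime module):

```python
class GraphRuntimeFactoryModule:
    """Sketch of the suggested wrapper: keep graph_json, lib, and the
    module name as plain members, hold the underlying factory module,
    and redirect export_library instead of exposing extra packed
    functions such as get_lib."""

    def __init__(self, graph_json, lib_mod, libmod_name, module):
        self.graph_json = graph_json
        self.lib_mod = lib_mod
        self.libmod_name = libmod_name
        self.module = module  # internal module holding the real functions

    def export_library(self, file_name, **kwargs):
        # Delegate instead of re-implementing; callers never need get_lib.
        return self.module.export_library(file_name, **kwargs)

class _RecordingModule:
    """Stand-in for the wrapped runtime module, used for the demo below."""
    def export_library(self, file_name, **kwargs):
        return "exported:" + file_name

factory = GraphRuntimeFactoryModule("{}", None, "default", _RecordingModule())
print(factory.export_library("net.so"))  # exported:net.so
```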

##
File path: src/runtime/module.cc
##
@@ -66,9 +66,19 @@ PackedFunc ModuleNode::GetFunction(const std::string& name, 
bool query_imports)
   PackedFunc pf = self->GetFunction(name, GetObjectPtr(this));
   if (pf != nullptr) return pf;
   if (query_imports) {
-for (Module& m : self->imports_) {
-  pf = m->GetFunction(name, m.data_);
-  if (pf != nullptr) return pf;

Review comment:
   shouldn't a recursive call of m->GetFunction be enough? Perhaps we should 
add a GetFunction overload that takes a query_imports flag.
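A toy Python model of that suggestion (the real code is C++ on ModuleNode; this only illustrates why recursion, rather than a one-level loop, reaches transitively imported modules):

```python
class Module:
    """Toy model of ModuleNode::GetFunction with a query_imports flag."""

    def __init__(self, functions=None, imports=()):
        self.functions = dict(functions or {})
        self.imports = list(imports)

    def get_function(self, name, query_imports=True):
        if name in self.functions:
            return self.functions[name]
        if query_imports:
            for m in self.imports:
                # Recursing searches the whole import tree, not just the
                # direct imports of this module.
                f = m.get_function(name, query_imports=True)
                if f is not None:
                    return f
        return None

leaf = Module({"fused_add": lambda: "add"})
mid = Module(imports=[leaf])
top = Module(imports=[mid])
print(top.get_function("fused_add")())                     # add
print(top.get_function("fused_add", query_imports=False))  # None
```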









[GitHub] [incubator-tvm] FrozenGene edited a comment on pull request #5913: [ndarray][autotvm] support ndarray.non_empty

2020-07-10 Thread GitBox


FrozenGene edited a comment on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-656787804


   Thanks for reminding. I want to complete the clflush pr and model based 
runtime pr, then handle this pr. Could you help to handle model based runtime 
pr now?







[GitHub] [incubator-tvm] FrozenGene commented on pull request #5913: [ndarray][autotvm] support ndarray.non_empty

2020-07-10 Thread GitBox


FrozenGene commented on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-656787804


   Thanks for reminding. I want to complete the clflush pr and model based 
runtime pr, then handle this pr. Could you help to handle model based runtime 
pr now?







[GitHub] [incubator-tvm] tqchen commented on pull request #5913: [ndarray][autotvm] support ndarray.non_empty

2020-07-10 Thread GitBox


tqchen commented on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-656774112


   @FrozenGene please followup







[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #6036: [LLVM/CPU] Terminate basic block after "ret" instruction

2020-07-10 Thread GitBox


kparzysz-quic opened a new pull request #6036:
URL: https://github.com/apache/incubator-tvm/pull/6036


   `ret` is a terminator in LLVM IR and there should be no instructions in the 
basic block following it. When generating a `ret`, end the current block and 
start a new one.
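The invariant this PR enforces can be modeled with a tiny builder in Python (a conceptual sketch only; the actual fix manipulates LLVM basic blocks through IRBuilder in C++):

```python
class BasicBlock:
    """A block of instructions; valid IR ends each block with one terminator."""
    def __init__(self, name):
        self.name = name
        self.instructions = []

class Builder:
    """Toy code generator illustrating the fix: `ret` is a terminator, so
    emitting it finishes the current block and opens a fresh one,
    guaranteeing nothing is ever appended after the `ret`."""
    def __init__(self):
        self.blocks = [BasicBlock("entry")]

    def emit(self, inst):
        self.blocks[-1].instructions.append(inst)
        if inst.split()[0] == "ret":
            # Start a new block so later emissions never follow a terminator.
            self.blocks.append(BasicBlock("post_ret_%d" % len(self.blocks)))

b = Builder()
b.emit("add r0, r1")
b.emit("ret r0")
b.emit("nop")  # dead code lands in the new block, not after the ret
print([blk.instructions for blk in b.blocks])
# [['add r0, r1', 'ret r0'], ['nop']]
```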







[incubator-tvm] branch master updated: [CI][ACL] Enable ACL installation in ci_cpu docker container (#5916)

2020-07-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new e329069  [CI][ACL] Enable ACL installation in ci_cpu docker container 
(#5916)
e329069 is described below

commit e32906923b8aea70f6aa577db980685b56812743
Author: lhutton1 <35535092+lhutt...@users.noreply.github.com>
AuthorDate: Fri Jul 10 17:43:25 2020 +0100

[CI][ACL] Enable ACL installation in ci_cpu docker container (#5916)

This patch adds a cross-compiled ACL build to the ci_cpu dockerfile used 
for CI.

Change-Id: I66e1521ab553306bc7367b65acc0363e750f0211
---
 docker/Dockerfile.ci_cpu |  4 ++
 docker/install/ubuntu_install_arm_compute_lib.sh | 71 
 2 files changed, 75 insertions(+)

diff --git a/docker/Dockerfile.ci_cpu b/docker/Dockerfile.ci_cpu
index aa77b06..42067b2 100644
--- a/docker/Dockerfile.ci_cpu
+++ b/docker/Dockerfile.ci_cpu
@@ -74,3 +74,7 @@ RUN bash /install/ubuntu_install_tflite.sh
 # TensorFlow deps
 COPY install/ubuntu_install_tensorflow.sh /install/ubuntu_install_tensorflow.sh
 RUN bash /install/ubuntu_install_tensorflow.sh
+
+# Arm(R) Compute Library
+COPY install/ubuntu_install_arm_compute_lib.sh 
/install/ubuntu_install_arm_compute_lib.sh
+RUN bash /install/ubuntu_install_arm_compute_lib.sh
diff --git a/docker/install/ubuntu_install_arm_compute_lib.sh 
b/docker/install/ubuntu_install_arm_compute_lib.sh
new file mode 100644
index 000..73e9279
--- /dev/null
+++ b/docker/install/ubuntu_install_arm_compute_lib.sh
@@ -0,0 +1,71 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+set -e
+set -u
+set -o pipefail
+
+repo_url="https://github.com/ARM-software/ComputeLibrary.git";
+repo_dir="acl"
+install_path="/opt/$repo_dir"
+architecture_type=$(uname -i)
+target_arch="arm64-v8a" # arm64-v8a/armv7a
+build_type="native"
+
+tmpdir=$(mktemp -d)
+
+cleanup()
+{
+  rm -rf "$tmpdir"
+}
+
+trap cleanup 0
+
+apt-get update && \
+apt-get install -y --no-install-recommends \
+git \
+scons \
+bsdmainutils \
+build-essential \
+g++-aarch64-linux-gnu \
+gcc-aarch64-linux-gnu
+
+cd "$tmpdir"
+
+git clone "$repo_url" "$repo_dir"
+
+cd "$repo_dir"
+
+# pin version to v20.05
+git checkout 6a7771e
+
+if [ "$architecture_type" != "aarch64" ]; then
+  build_type="cross_compile"
+fi
+
+scons \
+  install_dir="$install_path" \
+  Werror=1 \
+  -j8 \
+  debug=0 \
+  asserts=0 \
+  neon=1 \
+  opencl=0 \
+  os=linux \
+  arch="$target_arch" \
+  build="$build_type"



[GitHub] [incubator-tvm] tqchen commented on pull request #5916: [CI][Contrib] Add ACL docker installation

2020-07-10 Thread GitBox


tqchen commented on pull request #5916:
URL: https://github.com/apache/incubator-tvm/pull/5916#issuecomment-656773063


   Thanks @lhutton1 !







[GitHub] [incubator-tvm] tqchen merged pull request #5916: [CI][Contrib] Add ACL docker installation

2020-07-10 Thread GitBox


tqchen merged pull request #5916:
URL: https://github.com/apache/incubator-tvm/pull/5916


   







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5892: Add TVM application extension with WASM runtime

2020-07-10 Thread GitBox


tqchen commented on a change in pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#discussion_r452946065



##
File path: apps/wasm-graphcompiler-tvm/README.md
##
@@ -0,0 +1,188 @@
+# WebAssembly GraphCompiler for Deep Learning Framework with TVM Runtime
+
+ Experimental notice: This project is still *experimental* and only serves 
as a proof of concept for running deep learning frameworks (such as 
[MindSpore](https://github.com/mindspore-ai/mindspore)) on [WebAssembly 
runtime](https://github.com/bytecodealliance/wasmtime) with [TVM 
stack](https://tvm.apache.org/).
+
+- [WebAssembly GraphCompiler for Deep Learning Framework with TVM 
Runtime](#webassembly-graphcompiler-for-deep-learning-framework-with-tvm-runtime)
+- [Motivation](#motivation)
+- [Framework Landscape](#framework-landscape)
+- [Project Status](#project-status)
+- [PoC Guidelines](#poc-guidelines)
+- [Pre-installation](#pre-installation)
+- [Build ResNet50 model](#build-resnet50-model)
+- [Build wasm-graphcompiler-tvm 
package](#build-wasm-graphcompiler-tvm-package)
+- [Test](#test)
+- [Future Work](#future-work)
+- [More networks support](#more-networks-support)
+- [Performance benchmark](#performance-benchmark)
+- [Native TVM Rust runtime support](#native-tvm-rust-runtime-support)
+- [Appendix](#appendix)
+- [System packages install](#system-packages-install)
+- [Contribution](#contribution)
+
+## Motivation
+
+<img src="https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png" alt="TVM hardware support" width="600"/>
+
+As demonstrated in TVM runtime 
[tutorials](https://tvm.apache.org/docs/tutorials/relay_quick_start.html), TVM 
already supports WASM as the optional hardware backend, so we can leverage the 
features of WebAssembly (portability, security) and TVM runtime 
(domain-specific, optimization) to build a flexible and auto-optimized graph 
compiler for all deep learning frameworks.
+
+## Framework Landscape
+
+The figures below demonstrate the whole landscape of running deep learning 
frameworks on WASM runtime with TVM compiler stack.
+
+* WASM graph compiler stack
+```
+   _ _ _ _ _ _ _ _ _ __ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ _ _
+  |   |  | |  |   |
+  |  Framework Model  | ---> |  ONNX Model | ---> |  TVM Relay Python API |
+  |_ _ _ _ _ _ _ _ _ _|  |_ _ _ _ _ _ _|  |_ _ _ _ _ _ _ _ _ _ _ _|
+ ||
+ \/
+ _ _ _ _ _ _ _ _ _ _ _  _ _ _ _ _ _ _ _ _ _ _
+| || |
+| WASM Graph Compiler ||  TVM Compiler Stack |
+|(TVM runtime)||_ _ _ _ _ _ _ _ _ _ _|

Review comment:
   I see, perhaps we should rename it as the app code, since we are writing 
the app using the TVM Rust runtime?

##
File path: apps/wasm-graphcompiler-tvm/wasm-graphcompiler/Cargo.toml
##
@@ -0,0 +1,26 @@
+[package]
+name = "wasm-graphcompiler-tvm"
+version = "0.1.0"
+authors = ["leonwanghui "]

Review comment:
   let us rename authors to TVM contributors; the rationale is that the 
same code will be contributed by multiple contributors, and the contribution is 
already recorded by the commit history.

##
File path: apps/wasm-graphcompiler-tvm/README.md
##
@@ -0,0 +1,191 @@
+# WebAssembly GraphCompiler for Deep Learning Framework with TVM Runtime
+
+ Experimental notice: This project is still *experimental* and only serves 
as a proof of concept for running deep learning frameworks (such as 
[MindSpore](https://github.com/mindspore-ai/mindspore)) on [WebAssembly 
runtime](https://github.com/bytecodealliance/wasmtime) with [TVM 
stack](https://tvm.apache.org/).
+
+- [WebAssembly GraphCompiler for Deep Learning Framework with TVM 
Runtime](#webassembly-graphcompiler-for-deep-learning-framework-with-tvm-runtime)
+- [Motivation](#motivation)
+- [Framework Landscape](#framework-landscape)
+- [Project Status](#project-status)
+- [PoC Guidelines](#poc-guidelines)
+- [Pre-installation](#pre-installation)
+- [Build ResNet50 model](#build-resnet50-model)
+- [Build wasm-graphcompiler-tvm 
package](#build-wasm-graphcompiler-tvm-package)
+- [Test](#test)
+- [Future Work](#future-work)
+- [More networks support](#more-networks-support)
+- [Performance benchmark](#performance-benchmark)
+- [Native TVM Rust runtime support](#native-tvm-rust-runtime-support)
+- [Appendix](#appendix)
+- [System packages install](#system-packages-install)
+- [Contribution](#contribution)
+
+## Motivation
+
+https://github.com/dmlc/w

[GitHub] [incubator-tvm] tqchen closed issue #4188: [RFC][AutoTVM] Selective Tuning

2020-07-10 Thread GitBox


tqchen closed issue #4188:
URL: https://github.com/apache/incubator-tvm/issues/4188


   







[GitHub] [incubator-tvm] tqchen commented on issue #4188: [RFC][AutoTVM] Selective Tuning

2020-07-10 Thread GitBox


tqchen commented on issue #4188:
URL: https://github.com/apache/incubator-tvm/issues/4188#issuecomment-656755516


   closing for now as we are moving towards a new autoscheduler







[GitHub] [incubator-tvm] tqchen closed issue #5435: [FFI] Improve Array FFI Type Error Message

2020-07-10 Thread GitBox


tqchen closed issue #5435:
URL: https://github.com/apache/incubator-tvm/issues/5435


   







[GitHub] [incubator-tvm] mbrookhart commented on pull request #6007: [RELAY][DYN] Dynamic broadcast_to, zeros, ones

2020-07-10 Thread GitBox


mbrookhart commented on pull request #6007:
URL: https://github.com/apache/incubator-tvm/pull/6007#issuecomment-656742984


   @zhiics could you take a look?







[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #6035: Add creation of Hexagon device in RPC client

2020-07-10 Thread GitBox


kparzysz-quic opened a new pull request #6035:
URL: https://github.com/apache/incubator-tvm/pull/6035


   







[GitHub] [incubator-tvm] notoraptor commented on pull request #6030: [topi][relay] Add operation scatter_add to relay, based on scatter implementation.

2020-07-10 Thread GitBox


notoraptor commented on pull request #6030:
URL: https://github.com/apache/incubator-tvm/pull/6030#issuecomment-656711022


   All tests passed, this time!







[GitHub] [incubator-tvm] notoraptor commented on pull request #6030: [topi][relay] Add operation scatter_add to relay, based on scatter implementation.

2020-07-10 Thread GitBox


notoraptor commented on pull request #6030:
URL: https://github.com/apache/incubator-tvm/pull/6030#issuecomment-656636533


   I got an unexpected error in jenkins:
   
   ```
   running 3 tests
   
   test graph::tests::test_str_to_type ... ok
   
   test threading::tests::test_max_concurrency ... ok
   
   test threading::tests::test_parallel_launch ... FAILED
   
   
   failures:
   
   
    threading::tests::test_parallel_launch stdout 
   
   thread 'threading::tests::test_parallel_launch' panicked at 'assertion 
failed: `(left == right)`
   
 left: `1`,
   
right: `3`', runtime/src/threading.rs:244:13
   
   note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
   ```
   
   It seems to come from Rust tests. I don't know why.
   
   I just rebased the branch to relaunch tests again.







[GitHub] [incubator-tvm] masahi merged pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-07-10 Thread GitBox


masahi merged pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919


   







[GitHub] [incubator-tvm] masahi commented on pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-07-10 Thread GitBox


masahi commented on pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#issuecomment-656630522


   Thanks @comaniac @zhiics @mbaret @lhutton1 







[incubator-tvm] branch master updated (ba2b75b -> 8a0249c)

2020-07-10 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ba2b75b  [CI] Update ci-cpu to the latest (#6031)
 add 8a0249c  [BYOC] JSON Runtime with DNNL End-to-End Flow (#5919)

No new revisions were added by this update.

Summary of changes:
 cmake/modules/contrib/DNNL.cmake   |  19 +-
 .../backend/contrib/codegen_json/codegen_json.h| 353 
 src/relay/backend/contrib/dnnl/codegen.cc  |  81 +++
 src/relay/backend/graph_runtime_codegen.cc |   3 +-
 src/relay/backend/utils.h  |  17 +
 src/runtime/contrib/dnnl/dnnl_json_runtime.cc  | 455 +++
 src/runtime/contrib/json/json_node.h   | 358 
 src/runtime/contrib/json/json_runtime.h| 267 +
 tests/python/relay/test_external_runtime.py|   2 +-
 tests/python/relay/test_json_runtime.py| 625 +
 tests/python/relay/test_pass_partition_graph.py|  12 +-
 tests/scripts/task_config_build_cpu.sh |   1 +
 12 files changed, 2179 insertions(+), 14 deletions(-)
 create mode 100644 src/relay/backend/contrib/codegen_json/codegen_json.h
 create mode 100644 src/runtime/contrib/dnnl/dnnl_json_runtime.cc
 create mode 100644 src/runtime/contrib/json/json_node.h
 create mode 100644 src/runtime/contrib/json/json_runtime.h
 create mode 100644 tests/python/relay/test_json_runtime.py



[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #5992: Add support for tflite arg_min and arg_max

2020-07-10 Thread GitBox


d-smirnov commented on a change in pull request #5992:
URL: https://github.com/apache/incubator-tvm/pull/5992#discussion_r452734994



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -1755,6 +1755,39 @@ def test_all_reduce():
 if package_version.parse(tf.VERSION) >= package_version.parse('1.15.0'):
 _test_forward_reduce(_test_reduce_any, dtype="bool")
 
+###
+# Arg_min_max
+# ---
+
+def _test_arg_min_max(math_op, data, axis, quantized=False):
+""" One iteration of arg_min_max"""
+
+with tf.Graph().as_default():
+t_name="in"
+in_data = array_ops.placeholder(shape=data.shape, dtype=np.float32, 
name=t_name )
+input_range=None
+qmin, qmax = -100, 102
+if quantized:
+inq_data = tf.quantization.fake_quant_with_min_max_args(in_data, 
min=qmin, max=qmax, name= 'q' + t_name )
+input_range = { inq_data.name.split(':')[0]: (qmin, qmax)}
+out = math_op(input=inq_data, axis=axis)
+compare_tflite_with_tvm([data], [inq_data.name], [inq_data], 
[out], quantized=True, input_range=input_range)
+else:
+out = math_op(input=in_data, axis=axis)
+compare_tflite_with_tvm([data], [in_data.name], [in_data], [out])
+
+def test_forward_arg_min_max():
+data = np.array(np.random.uniform(0, 100, (3, 4)), dtype=np.uint8)
+# test quantized
+# There is no quantized version of ArgMin
+for axis in [None, 0, 1, -1]:
+_test_arg_min_max(math_ops.argmax, data, axis, True)
+
+data = np.array(np.random.uniform(0, 100, (3, 4)), dtype=np.float32)

Review comment:
   Added negative ranges
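For reference, the four axis values exercised by the test above behave as follows in plain NumPy (a sketch of the semantics only; the data here is illustrative, not the PR's test data):

```python
import numpy as np

data = np.array([[3, 50, 7, 2],
                 [9, 1, 80, 4],
                 [6, 6, 6, 90]], dtype=np.float32)

print(int(np.argmax(data, axis=None)))   # 11: index into the flattened array
print(np.argmax(data, axis=0).tolist())  # [1, 0, 1, 2]: row index per column
print(np.argmax(data, axis=1).tolist())  # [1, 2, 3]: column index per row
print(np.argmax(data, axis=-1).tolist()) # same as axis=1 for 2-D input
print(np.argmin(data, axis=1).tolist())  # [3, 1, 0]
```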









[GitHub] [incubator-tvm] leonwanghui commented on pull request #5892: Add TVM application extension with WASM runtime

2020-07-10 Thread GitBox


leonwanghui commented on pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#issuecomment-656543313


   > please remove the csv file as we cannot checkin binary to the codebase
   
   @tqchen `Done`


