[GitHub] [incubator-tvm] spectrometerHBH commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


spectrometerHBH commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-626110351


   clang 6.0 also works, but I will fix it under gcc 5.4.







[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5429: [RELAY][TF] Support symbolic newshape for Reshape

2020-05-08 Thread GitBox


icemelon9 commented on a change in pull request #5429:
URL: https://github.com/apache/incubator-tvm/pull/5429#discussion_r422452792



##
File path: python/tvm/relay/op/_transform.py
##
@@ -189,10 +192,9 @@ def _reshape_shape_func(data_shape, newshape, ndim):
 out[infer_idx] = old_size // new_size
 return out
 
-@_reg.register_shape_func("reshape", False)
+@_reg.register_shape_func("reshape", True)

Review comment:
   I still have a slight concern here: if `newshape` is not a tensor input, we
can still treat `reshape` as a data-independent shape function, which means we
can fuse `reshape` with other ops. With this change, however, if the output of
`reshape` is dynamic, it can no longer be fused at all.
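
   For reference, a minimal pure-Python sketch of what `_reshape_shape_func`
computes (illustrative only, not the hybrid-script code above): with a constant
`newshape` the output shape depends only on input shapes, so the shape function
can stay data independent and `reshape` stays fusable; with a tensor `newshape`
the runtime values are needed, which is what the `True` flag declares.

   ```python
   def reshape_out_shape(data_shape, newshape):
       # Illustrative re-implementation of reshape output-shape inference.
       out = list(newshape)
       old_size = 1
       for dim in data_shape:
           old_size *= dim
       if -1 in out:
           infer_idx = out.index(-1)
           new_size = 1
           for i, dim in enumerate(out):
               if i != infer_idx:
                   new_size *= dim
           out[infer_idx] = old_size // new_size
       return out

   print(reshape_out_shape([2, 3, 4], [6, -1]))  # [6, 4]
   ```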









[GitHub] [incubator-tvm] spectrometerHBH commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


spectrometerHBH commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-626098731


   Both `release` and `debug` builds under gcc 5.4 show the same behavior, so
it is unlikely that O2 causes this problem.







[GitHub] [incubator-tvm] spectrometerHBH edited a comment on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


spectrometerHBH edited a comment on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-626096064


   I found the problem. It is due to different behavior between gcc 5.4 and
gcc 7.4 on this statement:
   ```c++
   doc << "(" << Print(op->a) << OpString << Print(op->b) << ")";
   ```
   
   gcc 5.4 evaluates Print(op->b) first.
   gcc 7.4 evaluates Print(op->a) first.
   
   I have no idea why gcc 5.4 does so, since `<<` should be left-associative.







[GitHub] [incubator-tvm] spectrometerHBH edited a comment on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


spectrometerHBH edited a comment on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-626096064


   I found the problem. It is due to different behavior between gcc 5.4 and
gcc 7.4 on this statement:
   ```c++
   doc << "(" << Print(op->a) << OpString << Print(op->b) << ")";
   ```
   
   gcc 5.4 evaluates Print(op->b) first.
   gcc 7.4 evaluates Print(op->a) first.







[GitHub] [incubator-tvm] spectrometerHBH commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


spectrometerHBH commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-626096064


   I found the problem. It is due to different behavior between gcc 5.4 and
gcc 7.4 on this statement:
   ```c++
   doc << "(" << Print(op->a) << OpString << Print(op->b) << ")";
   ```
   
   gcc 5.4 evaluates Print(op->b) first.
   gcc 7.4 evaluates Print(op->a) first.







[GitHub] [incubator-tvm] tom-gall commented on pull request #5546: Add tests for running micro on native arm hardware

2020-05-08 Thread GitBox


tom-gall commented on pull request #5546:
URL: https://github.com/apache/incubator-tvm/pull/5546#issuecomment-626093626


   I've updated the commit so the new location is: 
tests/micro/test_runtime_micro_on_arm.py
   
   I went ahead and committed the perhaps egregious act of doing a --force as 
part of the amended commit. I figure this is one of the very rare times when 
that's ok since the act of moving a brand new file isn't important to the 
commit history.







[GitHub] [incubator-tvm] trevor-m commented on pull request #5549: [PYTORCH]Fix bug during conversion with using mangled node name instead of actual output name

2020-05-08 Thread GitBox


trevor-m commented on pull request #5549:
URL: https://github.com/apache/incubator-tvm/pull/5549#issuecomment-626088489


   @masahi I see, thank you! I will change this PR to instead modify the 
converter for `aten::max_pool2d_with_indices` to have a second dummy output.
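
   A rough sketch of that direction (hypothetical, plain Python; the real
converter lives in python/tvm/relay/frontend/pytorch.py): return the pooling
result together with a dummy second value, so both PyTorch outputs of
`aten::max_pool2d_with_indices` have something to bind to and no name mangling
is triggered.

   ```python
   def convert_max_pool2d_with_indices(inputs):
       # Hypothetical stand-ins: in the real converter these are Relay expressions.
       pooled = ("relay.nn.max_pool2d", inputs[0])
       dummy_indices = None  # indices output is unsupported, emit a placeholder
       return pooled, dummy_indices

   out, indices = convert_max_pool2d_with_indices(["data"])
   print(out, indices)
   ```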







[GitHub] [incubator-tvm] tqchen commented on pull request #5382: [TE] Fix MakeLoopNest for warp memory

2020-05-08 Thread GitBox


tqchen commented on pull request #5382:
URL: https://github.com/apache/incubator-tvm/pull/5382#issuecomment-626078390


   Thanks @yongfeng-nv @roastduck !







[incubator-tvm] branch master updated (aded92d -> 06efa7f)

2020-05-08 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from aded92d  [Rust] Add first stage of updating and rewriting Rust 
bindings. (#5526)
 add 06efa7f  [TE] Fix MakeLoopNest for warp memory (#5382)

No new revisions were added by this update.

Summary of changes:
 src/te/operation/op_util.cc| 14 +++-
 .../test_tir_transform_lower_warp_memory.py| 37 ++
 2 files changed, 50 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] trevor-m opened a new pull request #5549: [PYTORCH]Fix bug during conversion with using mangled node name instead of actual output name

2020-05-08 Thread GitBox


trevor-m opened a new pull request #5549:
URL: https://github.com/apache/incubator-tvm/pull/5549


   This bug occurs when an op has multiple outputs in PyTorch but only a single
output in Relay. For example, it occurs with `aten::max_pool2d_with_indices`,
which has two outputs while Relay's max_pool has only one:
   
   `Tensor, %34 : Tensor = aten::max_pool2d_with_indices(%input_4.1, %29, %30, 
%31, %32, %3) `
   
   [This 
condition](https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L1923)
 is hit:
   
   ```
   if node.outputsSize() > 1:
       node_name = "_".join(_get_output_names(node))
   ```
   which gives this node the mangled name of `input_5.1_34`, instead of 
`input_5.1`. Since the TVM output is not a tuple, this mangled name is used as 
the key in the outputs dict.
   
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L2238-L2244
   
   
   Later on, when the output of the node is needed, `input_5.1` cannot be found
in the dict, causing the model import to fail.
   
   ```
   File ".../tvm/relay/frontend/pytorch.py", line 1781, in
   return [outputs[name] for name in _get_input_names(op_node)] 

   
   KeyError: 'input_5.1'
   ```
   
   This PR fixes the bug by using the first output name to store the node 
output in the outputs dict.
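
   A stand-alone illustration of the key mismatch (plain Python with
hypothetical values, not the converter code): joining all of the node's output
names produces a dictionary key that later lookups by the first output name
cannot find, while keying on the first output name keeps producer and consumer
in agreement.

   ```python
   torch_output_names = ["input_5.1", "34"]    # the node's two PyTorch outputs

   mangled_key = "_".join(torch_output_names)  # "input_5.1_34" (key before this PR)
   fixed_key = torch_output_names[0]           # "input_5.1"    (key after this PR)

   outputs = {fixed_key: "relay_max_pool2d_expr"}  # stand-in for the Relay expr

   # Downstream ops look nodes up by their first output name, so this succeeds
   # with the fix but raises KeyError: 'input_5.1' if the mangled key is used.
   print(outputs["input_5.1"])
   ```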







[incubator-tvm] branch master updated: [Rust] Add first stage of updating and rewriting Rust bindings. (#5526)

2020-05-08 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new aded92d  [Rust] Add first stage of updating and rewriting Rust 
bindings. (#5526)
aded92d is described below

commit aded92d3ba2d3a097425774d0890d07a9350e1bc
Author: Jared Roesch 
AuthorDate: Fri May 8 16:53:47 2020 -0700

[Rust] Add first stage of updating and rewriting Rust bindings. (#5526)

* Add tvm-sys

* Use as_mut_ptr

* Address CR feedback

* Update rust/tvm-sys/src/datatype.rs

Co-authored-by: Nick Hynes 

* Final CR comments

* Fix find and replace error in frontend

Co-authored-by: Nick Hynes 
---
 rust/.rustfmt.toml  |   1 +
 rust/Cargo.toml |   3 +-
 rust/{ => tvm-sys}/Cargo.toml   |  32 ++--
 rust/tvm-sys/build.rs   |  61 +++
 rust/tvm-sys/src/array.rs   |  62 +++
 rust/tvm-sys/src/byte_array.rs  |  87 +
 rust/tvm-sys/src/context.rs | 284 ++
 rust/tvm-sys/src/datatype.rs| 187 
 rust/tvm-sys/src/errors.rs  |  46 +
 rust/tvm-sys/src/lib.rs |  54 ++
 rust/tvm-sys/src/packed_func.rs | 380 
 rust/tvm-sys/src/value.rs   |  95 ++
 12 files changed, 1277 insertions(+), 15 deletions(-)

diff --git a/rust/.rustfmt.toml b/rust/.rustfmt.toml
index 3c51bb3..5a1f1d2 100644
--- a/rust/.rustfmt.toml
+++ b/rust/.rustfmt.toml
@@ -29,3 +29,4 @@ merge_derives = true
 use_try_shorthand = false
 use_field_init_shorthand = false
 force_explicit_abi = true
+
diff --git a/rust/Cargo.toml b/rust/Cargo.toml
index f08f861..b4a159c 100644
--- a/rust/Cargo.toml
+++ b/rust/Cargo.toml
@@ -27,5 +27,6 @@ members = [
"frontend",
"frontend/tests/basics",
"frontend/tests/callback",
-   "frontend/examples/resnet"
+   "frontend/examples/resnet",
+"tvm-sys"
 ]
diff --git a/rust/Cargo.toml b/rust/tvm-sys/Cargo.toml
similarity index 74%
copy from rust/Cargo.toml
copy to rust/tvm-sys/Cargo.toml
index f08f861..fe4d0bf 100644
--- a/rust/Cargo.toml
+++ b/rust/tvm-sys/Cargo.toml
@@ -15,17 +15,21 @@
 # specific language governing permissions and limitations
 # under the License.
 
-[workspace]
-members = [
-   "common",
-   "macros",
-   "runtime",
-   "runtime/tests/test_tvm_basic",
-   "runtime/tests/test_tvm_dso",
-   "runtime/tests/test_wasm32",
-   "runtime/tests/test_nn",
-   "frontend",
-   "frontend/tests/basics",
-   "frontend/tests/callback",
-   "frontend/examples/resnet"
-]
+[package]
+name = "tvm-sys"
+version = "0.1.0"
+authors = ["TVM Contributors"]
+license = "Apache-2.0"
+edition = "2018"
+
+[features]
+bindings = []
+
+[dependencies]
+thiserror = "^1.0"
+anyhow = "^1.0"
+ndarray = "0.12"
+enumn = "^0.1"
+
+[build-dependencies]
+bindgen = "0.51"
diff --git a/rust/tvm-sys/build.rs b/rust/tvm-sys/build.rs
new file mode 100644
index 000..85e16be
--- /dev/null
+++ b/rust/tvm-sys/build.rs
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+extern crate bindgen;
+
+use std::path::PathBuf;
+
+use std::env;
+
+fn main() {
+let tvm_home = option_env!("TVM_HOME").map(str::to_string).unwrap_or({
+let crate_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
+.canonicalize()
+.unwrap();
+crate_dir
+.parent()
+.unwrap()
+.parent()
+.unwrap()
+.to_str()
+.unwrap()
+.to_string()
+});
+
+if cfg!(feature = "bindings") {
+println!("cargo:rerun-if-env-changed=TVM_HOME");
+println!("cargo:rustc-link-lib=dylib=tvm");
+println!("cargo:rustc-link-search={}/build", tvm_home);
+}
+
+// @see rust-bindgen#550 for `blacklist_type`
+bindgen::Builder::default()
+.header(format!("{}/include/tvm/runtime/c_runtime_api.h", tvm_home))
+.header(format!("{}/include/tvm/runtime/c_backend_api.h", tvm_home))
+ 

[GitHub] [incubator-tvm] tqchen commented on pull request #5498: [Optimization] Warp level reduction support for CUDA

2020-05-08 Thread GitBox


tqchen commented on pull request #5498:
URL: https://github.com/apache/incubator-tvm/pull/5498#issuecomment-626067165


   ping @Hzfengsy 







[GitHub] [incubator-tvm] binarybana commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


binarybana commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422420158



##
File path: rust/tvm-sys/Cargo.toml
##
@@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+[package]
+name = "tvm-sys"
+version = "0.1.0"
+authors = ["TVM Contributors"]
+license = "Apache-2.0"
+edition = "2018"
+
+[features]
+bindings = []
+
+[dependencies]
+thiserror = "^1.0"
+anyhow = "^1.0"

Review comment:
   I'm not sure we should bring in `anyhow` into a library crate (AFAIK 
it's form for libraries to pollute their API surface by exposing opinionated 
error types). But given that we only use it in one place, we can remove it 
later.

##
File path: rust/tvm-sys/src/packed_func.rs
##
@@ -0,0 +1,380 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::{
+convert::TryFrom,
+ffi::{CStr, CString},
+os::raw::c_void,
+};
+
+use crate::{errors::ValueDowncastError, ffi::*};
+
+pub use crate::ffi::TVMValue;
+
+pub trait PackedFunc:
+Fn(&[ArgValue]) -> Result + Send + 
Sync
+{
+}
+
+impl PackedFunc for T where
+T: Fn(&[ArgValue]) -> Result + 
Send + Sync
+{
+}
+
+/// Calls a packed function and returns a `RetValue`.
+///
+/// # Example
+///
+/// `call_packed!(my_tvm_func,  arg1,  arg2)`
+#[macro_export]
+macro_rules! call_packed {
+($fn:expr, $($args:expr),+) => {
+$fn(&[$($args.into(),)+])
+};
+($fn:expr) => {
+$fn(::new())
+};
+}
+
+/// Constructs a derivative of a TVMPodValue.
+macro_rules! TVMPODValue {
+{
+$(#[$m:meta])+
+$name:ident $(<$a:lifetime>)? {
+$($extra_variant:ident ( $variant_type:ty ) ),+ $(,)?
+},
+match $value:ident {
+$($tvm_type:ident => { $from_tvm_type:expr })+
+},
+match  {
+$($self_type:ident ( $val:ident ) => { $from_self_type:expr })+
+}
+$(,)?
+} => {
+$(#[$m])+
+#[derive(Clone, Debug)]
+pub enum $name $(<$a>)? {
+Int(i64),
+UInt(i64),
+Float(f64),
+Null,
+DataType(DLDataType),
+String(CString),
+Context(TVMContext),
+Handle(*mut c_void),
+ArrayHandle(TVMArrayHandle),
+ObjectHandle(*mut c_void),
+ModuleHandle(TVMModuleHandle),
+FuncHandle(TVMFunctionHandle),
+NDArrayHandle(*mut c_void),
+$($extra_variant($variant_type)),+
+}
+
+impl $(<$a>)? $name $(<$a>)? {
+pub fn from_tvm_value($value: TVMValue, type_code: u32) -> Self {
+use $name::*;
+#[allow(non_upper_case_globals)]
+unsafe {
+match type_code as _ {
+DLDataTypeCode_kDLInt => Int($value.v_int64),
+DLDataTypeCode_kDLUInt => UInt($value.v_int64),
+DLDataTypeCode_kDLFloat => Float($value.v_float64),
+TVMTypeCode_kTVMNullptr => Null,
+TVMTypeCode_kTVMDataType => DataType($value.v_type),
+TVMTypeCode_kTVMContext => Context($value.v_ctx),
+TVMTypeCode_kTVMOpaqueHandle => 
Handle($value.v_handle),
+TVMTypeCode_kTVMDLTensorHandle => 
ArrayHandle($value.v_handle as TVMArrayHandle),
+

[incubator-tvm] branch master updated: [RPC] Improve RPCServer AsyncIO support. (#5544)

2020-05-08 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 2175f6b  [RPC] Improve RPCServer AsyncIO support. (#5544)
2175f6b is described below

commit 2175f6be6d1fd414a14b73deb78808b80e2ba032
Author: Tianqi Chen 
AuthorDate: Fri May 8 15:55:22 2020 -0700

[RPC] Improve RPCServer AsyncIO support. (#5544)

* [RPC] Improve RPCServer AsyncIO support.

When the RPCServer is in async IO mode, the server can directly serve async
functions that return their values via a callback in the future.
This mode is particularly useful in the web environment, where blocking is
not an option.

This PR introduces async support to the RPCSession, allowing AsyncIO-driven
servers to serve async functions. These functions are still presented as
synchronous versions on the client side.

A follow-up PR will refactor the web runtime to make use of this feature.

* Address comments
---
 src/runtime/rpc/rpc_endpoint.cc  | 267 ++-
 src/runtime/rpc/rpc_local_session.cc |  40 +++---
 src/runtime/rpc/rpc_local_session.h  |  23 +--
 src/runtime/rpc/rpc_session.cc   |  87 
 src/runtime/rpc/rpc_session.h| 109 +-
 5 files changed, 397 insertions(+), 129 deletions(-)

diff --git a/src/runtime/rpc/rpc_endpoint.cc b/src/runtime/rpc/rpc_endpoint.cc
index 8a7f11c..26f24c9 100644
--- a/src/runtime/rpc/rpc_endpoint.cc
+++ b/src/runtime/rpc/rpc_endpoint.cc
@@ -57,11 +57,13 @@ class RPCEndpoint::EventHandler : public dmlc::Stream {
   EventHandler(support::RingBuffer* reader,
support::RingBuffer* writer,
std::string name,
-   std::string* remote_key)
+   std::string* remote_key,
+   std::function flush_writer)
   : reader_(reader),
 writer_(writer),
 name_(name),
-remote_key_(remote_key) {
+remote_key_(remote_key),
+flush_writer_(flush_writer) {
 this->Clear();
 
 if (*remote_key == "%toinit") {
@@ -109,13 +111,21 @@ class RPCEndpoint::EventHandler : public dmlc::Stream {
   /*!
* \brief Enter the io loop until the next event.
* \param client_mode Whether we are in the client.
+   * \param async_server_mode Whether we are in the async server mode.
* \param setreturn The function to set the return value encoding.
* \return The function to set return values when there is a return event.
*/
-  RPCCode HandleNextEvent(bool client_mode, RPCSession::FEncodeReturn 
setreturn) {
+  RPCCode HandleNextEvent(bool client_mode,
+  bool async_server_mode,
+  RPCSession::FEncodeReturn setreturn) {
 std::swap(client_mode_, client_mode);
+std::swap(async_server_mode_, async_server_mode);
 
-while (this->Ready()) {
+RPCCode status = RPCCode::kNone;
+
+while (status == RPCCode::kNone &&
+   state_ != kWaitForAsyncCallback &&
+   this->Ready()) {
   switch (state_) {
 case kInitHeader: HandleInitHeader(); break;
 case kRecvPacketNumBytes: {
@@ -133,23 +143,27 @@ class RPCEndpoint::EventHandler : public dmlc::Stream {
   this->HandleProcessPacket(setreturn);
   break;
 }
+case kWaitForAsyncCallback: {
+  break;
+}
 case kReturnReceived: {
   this->SwitchToState(kRecvPacketNumBytes);
-  std::swap(client_mode_, client_mode);
-  return RPCCode::kReturn;
+  status = RPCCode::kReturn;
+  break;
 }
 case kCopyAckReceived: {
-  std::swap(client_mode_, client_mode);
-  return RPCCode::kCopyAck;
+  status = RPCCode::kCopyAck;
+  break;
 }
 case kShutdownReceived: {
-  std::swap(client_mode_, client_mode);
-  return RPCCode::kShutdown;
+  status = RPCCode::kShutdown;
 }
   }
 }
+
+std::swap(async_server_mode_, async_server_mode);
 std::swap(client_mode_, client_mode);
-return RPCCode::kNone;
+return status;
   }
 
   /*! \brief Clear all the states in the Handler.*/
@@ -229,6 +243,7 @@ class RPCEndpoint::EventHandler : public dmlc::Stream {
 kInitHeader,
 kRecvPacketNumBytes,
 kProcessPacket,
+kWaitForAsyncCallback,
 kReturnReceived,
 kCopyAckReceived,
 kShutdownReceived
@@ -239,6 +254,8 @@ class RPCEndpoint::EventHandler : public dmlc::Stream {
   bool init_header_step_{0};
   // Whether current handler is client or server mode.
   bool client_mode_{false};
+  // Whether current handler is in the async server mode.
+  bool async_server_mode_{false};
   // Internal arena
   support::Arena arena_;
 
@@ -249,6 +266,11 @@ class 

[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5511: [AutoTVM][TOPI] AutoTVM incorrect measurement

2020-05-08 Thread GitBox


kevinthesun commented on a change in pull request #5511:
URL: https://github.com/apache/incubator-tvm/pull/5511#discussion_r422409992



##
File path: topi/python/topi/mali/conv2d.py
##
@@ -138,20 +138,15 @@ def _schedule_spatial_pack(cfg, s, output, conv, data_vec, kernel_vec):
         s[data_vec].unroll(vw)
 
     if isinstance(kernel_vec.op, tvm.te.ComputeOp) and kernel_vec.name == 'kernel_vec':
-        if autotvm.GLOBAL_SCOPE.in_tuning:
-            # kernel packing will be pre-computed during compilation, so we skip
-            # this part to make tuning records correct
-            s[kernel_vec].pragma(s[kernel_vec].op.axis[0], 'debug_skip_region')
-        else:
-            max_threads = tvm.target.Target.current(allow_none=False).max_num_threads
-            co, ci, kh, kw, vc = s[kernel_vec].op.axis
-            fused = s[kernel_vec].fuse(co, ci, kh, kw, vc)
-            fused, vec = s[kernel_vec].split(fused, VC)
-            bb, tt = s[kernel_vec].split(fused, max_threads)
-            s[kernel_vec].bind(bb, te.thread_axis("blockIdx.x"))
-            s[kernel_vec].bind(tt, te.thread_axis("threadIdx.x"))
-            if VC in vec_size:
-                s[kernel_vec].vectorize(vec)
+        max_threads = tvm.target.Target.current(allow_none=False).max_num_threads

Review comment:
   While doing AutoTVM tuning, all the workloads fetched are in the original
data/kernel layouts, for example NCHW/OIHW, which means we need to do the
layout conversion first. For spatial_pack this is done inside the compute.
However, we need to skip this layout-conversion stage while autotuning.
Previously we used debug_skip_region, which causes inaccurate measurement.
Another way to skip it is to change the input kernel tensor to a new
placeholder that is already in the converted layout.
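
   A minimal sketch of that alternative (hypothetical shapes and names, not the
PR's exact code): during tuning the kernel is declared as a placeholder that is
already in the packed layout, so the measured schedule contains no packing
stage and nothing has to be skipped.

   ```python
   from tvm import te

   # Example dimensions only: output/input channels, kernel size, vector width.
   CO, CI, KH, KW, VC = 64, 32, 3, 3, 4

   # Normal compilation path: the packed kernel is computed from the OIHW kernel.
   kernel = te.placeholder((CO, CI, KH, KW), name="kernel")

   # Tuning path: pretend the packed kernel is itself an input of the workload,
   # so no packing stage appears in the schedule being measured.
   kernel_vec = te.placeholder((CO // VC, CI, KH, KW, VC), name="kernel_vec")
   ```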









[GitHub] [incubator-tvm] kevinthesun commented on issue #5215: [AutoTVM] AutoTVM incorrect measurement

2020-05-08 Thread GitBox


kevinthesun commented on issue #5215:
URL: https://github.com/apache/incubator-tvm/issues/5215#issuecomment-626050953


   @FrozenGene In https://github.com/apache/incubator-tvm/pull/5200 we
discussed another source of inaccurate AutoTVM measurement: empty input
tensors. Do we have a timeline for fixing that?







[GitHub] [incubator-tvm] mbrookhart opened a new pull request #5548: Apparently, ONNX Conv with no 'pads' defaults to zero padding

2020-05-08 Thread GitBox


mbrookhart opened a new pull request #5548:
URL: https://github.com/apache/incubator-tvm/pull/5548


   I can't find this behavior documented anywhere, but it showed up in an ONNX
model from Caffe2.
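
   A small sketch of the defaulting rule this PR implements (hypothetical
attribute dict, not the converter's exact code): when an ONNX Conv node carries
no 'pads' attribute, treat it as zero padding on every spatial dimension.

   ```python
   # Hypothetical ONNX Conv attributes with 'pads' missing.
   attrs = {"kernel_shape": [3, 3], "strides": [1, 1]}

   # Default to zeros: one begin and one end padding value per spatial axis.
   pads = attrs.get("pads", [0] * (2 * len(attrs["kernel_shape"])))
   print(pads)  # [0, 0, 0, 0]
   ```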
   
   @tmoreau89 







[GitHub] [incubator-tvm] areusch commented on pull request #5546: Add tests for running micro on native arm hardware

2020-05-08 Thread GitBox


areusch commented on pull request #5546:
URL: https://github.com/apache/incubator-tvm/pull/5546#issuecomment-626036189


   aside from tianqi's comment, this looks good to me. thanks for adding this!







[GitHub] [incubator-tvm] ANSHUMAN87 opened a new pull request #5547: [Refactor][std::string --> String] IR is updated with String

2020-05-08 Thread GitBox


ANSHUMAN87 opened a new pull request #5547:
URL: https://github.com/apache/incubator-tvm/pull/5547


   Refer #5490
   
   @jroesch , @tqchen , @zhiics : Please help review!
   







[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5144: [Relay][VM] Memory planner (part 1)

2020-05-08 Thread GitBox


icemelon9 commented on a change in pull request #5144:
URL: https://github.com/apache/incubator-tvm/pull/5144#discussion_r422363192



##
File path: src/runtime/vm/vm.cc
##
@@ -535,6 +544,7 @@ void InstructionPrint(std::ostream& os, const Instruction& 
instr) {
 case Opcode::AllocTensorReg: {
   os << "alloc_tensor_reg $" << instr.dst << " $"
  << instr.alloc_tensor_reg.storage << " $"

Review comment:
   ```suggestion
<< instr.alloc_tensor_reg.storage << " "
   ```

##
File path: src/runtime/vm/vm.cc
##
@@ -535,6 +544,7 @@ void InstructionPrint(std::ostream& os, const Instruction& 
instr) {
 case Opcode::AllocTensorReg: {
   os << "alloc_tensor_reg $" << instr.dst << " $"
  << instr.alloc_tensor_reg.storage << " $"
+ << instr.alloc_tensor_reg.offset << " "

Review comment:
   ```suggestion
<< instr.alloc_tensor_reg.offset << " $"
   ```









[GitHub] [incubator-tvm] tqchen commented on pull request #5546: Add tests for running micro on native arm hardware

2020-05-08 Thread GitBox


tqchen commented on pull request #5546:
URL: https://github.com/apache/incubator-tvm/pull/5546#issuecomment-626011102


   Given that all the tests under unittests/python will be automatically
invoked by the CI, please put the tests under a different folder for now,
say `tests/micro`.







[GitHub] [incubator-tvm] robo-corg commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


robo-corg commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422360890



##
File path: rust/tvm-sys/src/context.rs
##
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! Provides [`Context`] and related device queries.
+//!
+//! Create a new context for device type and device id.
+//!
+//! # Example
+//!
+//! ```
+//! # use tvm_sys::{DeviceType, Context};
+//! let cpu = DeviceType::from("cpu");
+//! let ctx = Context::new(cpu , 0);
+//! let cpu0 = Context::cpu(0);
+//! assert_eq!(ctx, cpu0);
+//! ```
+//!
+//! Or from a supported device name.
+//!
+//! ```
+//! use tvm_sys::Context;
+//! let cpu0 = Context::from("cpu");
+//! println!("{}", cpu0);
+//! ```
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+use std::fmt::{self, Display, Formatter};
+
+use crate::ffi::{self, *};
+use crate::packed_func::{ArgValue, RetValue};
+
+use thiserror::Error;
+use anyhow::Result;
+use enumn::N;
+
+/// Device type represents the set of devices supported by
+/// [TVM](https://github.com/apache/incubator-tvm).
+///
+/// ## Example
+///
+/// ```
+/// use tvm_sys::DeviceType;
+/// let cpu = DeviceType::from("cpu");
+/// println!("device is: {}", cpu);
+///```
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, N)]
+#[repr(i64)]
+pub enum DeviceType {
+CPU = 1,
+GPU,
+CPUPinned,
+OpenCL,
+Vulkan,
+Metal,
+VPI,
+ROCM,
+ExtDev,
+}

Review comment:
   Spoiler: I was missing something and you were right!









[GitHub] [incubator-tvm] tom-gall commented on pull request #5546: Add tests for running micro on native arm hardware

2020-05-08 Thread GitBox


tom-gall commented on pull request #5546:
URL: https://github.com/apache/incubator-tvm/pull/5546#issuecomment-626006130


   @weberlo @areusch - I've forked the test_runtime_micro.py unit test and made
one specifically for running natively on ARM hardware, in this case just the
STM32F7 (for now).
   
   Would you review?
   
   I don't think this will ever have any chance of passing the Intel-based CI;
OTOH it's useful to those of us working/testing with uTVM on real hardware.
   
   Thanks.
   







[GitHub] [incubator-tvm] tom-gall opened a new pull request #5546: Add tests for running micro on native arm hardware

2020-05-08 Thread GitBox


tom-gall opened a new pull request #5546:
URL: https://github.com/apache/incubator-tvm/pull/5546


   A clone of test_runtime_micro.py, modified to run specifically on ARM
hardware, currently just the STM32F746 Discovery board.
   
   Signed-off-by: Tom Gall 
   
   







[incubator-tvm] branch master updated: Load platform specific lib for tvmdsoop instead of only so (#5542)

2020-05-08 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 15a4218  Load platform specific lib for tvmdsoop instead of only so 
(#5542)
15a4218 is described below

commit 15a421880f252b18d3636606234d8e20b91047f5
Author: tobe 
AuthorDate: Sat May 9 03:42:22 2020 +0800

Load platform specific lib for tvmdsoop instead of only so (#5542)
---
 python/tvm/contrib/tf_op/module.py | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/python/tvm/contrib/tf_op/module.py 
b/python/tvm/contrib/tf_op/module.py
index f13670e..fd0ed0c 100644
--- a/python/tvm/contrib/tf_op/module.py
+++ b/python/tvm/contrib/tf_op/module.py
@@ -17,6 +17,7 @@
 """Module container of TensorFlow TVMDSO op"""
 import tensorflow as tf
 from tensorflow.python.framework import load_library
+from tensorflow.python import platform
 
 
 class OpModule:
@@ -67,7 +68,7 @@ class TensorFunc:
         elif output_shape is not None:
             self.dynamic_output_shape = self._pack_shape_tensor(output_shape)
 
-        self.module = load_library.load_op_library('tvm_dso_op.so')
+        self.module = self._load_platform_specific_library("tvm_dso_op")
         self.tvm_dso_op = self.module.tvm_dso_op
 
     def apply(self, *params):
@@ -82,6 +83,16 @@ class TensorFunc:
     def __call__(self, *params):
         return self.apply(*params)
 
+    def _load_platform_specific_library(self, lib_name):
+        system = platform.system()
+        if system == "Darwin":
+            lib_file_name = lib_name + ".dylib"
+        elif system == "Windows":
+            lib_file_name = lib_name + ".dll"
+        else:
+            lib_file_name = lib_name + ".so"
+        return load_library.load_op_library(lib_file_name)
+
     def _is_static_shape(self, shape):
         if shape is None or not isinstance(shape, list):
             return False



[GitHub] [incubator-tvm] tqchen commented on pull request #5542: Load platform specific lib for tvmdsoop instead of the hard-coded tvm_dso_op.so

2020-05-08 Thread GitBox


tqchen commented on pull request #5542:
URL: https://github.com/apache/incubator-tvm/pull/5542#issuecomment-625985407


   Thanks @tobegit3hub !







[GitHub] [incubator-tvm] antinucleon commented on pull request #5545: [WEB][RUNTIME] WebGPU support

2020-05-08 Thread GitBox


antinucleon commented on pull request #5545:
URL: https://github.com/apache/incubator-tvm/pull/5545#issuecomment-625962848


   This is really nice to have. We can think about unified WASM/WebGPU
inference as the next step.







[GitHub] [incubator-tvm] robo-corg commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


robo-corg commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422303593



##
File path: rust/tvm-sys/src/datatype.rs
##
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::any::TypeId;
+use std::convert::TryFrom;
+use std::str::FromStr;
+
+use crate::packed_func::RetValue;
+use crate::ffi::DLDataType;
+
+use thiserror::Error;
+
+
+const DL_INT_CODE: u8 = 0;
+const DL_UINT_CODE: u8 = 1;
+const DL_FLOAT_CODE: u8 = 2;
+const DL_HANDLE: u8 = 3;

Review comment:
   Should DataType::code be public then?

##
File path: rust/tvm-sys/src/context.rs
##
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! Provides [`Context`] and related device queries.
+//!
+//! Create a new context for device type and device id.
+//!
+//! # Example
+//!
+//! ```
+//! # use tvm_sys::{DeviceType, Context};
+//! let cpu = DeviceType::from("cpu");
+//! let ctx = Context::new(cpu , 0);
+//! let cpu0 = Context::cpu(0);
+//! assert_eq!(ctx, cpu0);
+//! ```
+//!
+//! Or from a supported device name.
+//!
+//! ```
+//! use tvm_sys::Context;
+//! let cpu0 = Context::from("cpu");
+//! println!("{}", cpu0);
+//! ```
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+use std::fmt::{self, Display, Formatter};
+
+use crate::ffi::{self, *};
+use crate::packed_func::{ArgValue, RetValue};
+
+use thiserror::Error;
+use anyhow::Result;
+use enumn::N;
+
+/// Device type represents the set of devices supported by
+/// [TVM](https://github.com/apache/incubator-tvm).
+///
+/// ## Example
+///
+/// ```
+/// use tvm_sys::DeviceType;
+/// let cpu = DeviceType::from("cpu");
+/// println!("device is: {}", cpu);
+///```
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, N)]
+#[repr(i64)]
+pub enum DeviceType {
+CPU = 1,
+GPU,
+CPUPinned,
+OpenCL,
+Vulkan,
+Metal,
+VPI,
+ROCM,
+ExtDev,
+}

Review comment:
   Will do! I think the changes I suggested still allow this conversion
without the extra crate, unless I am missing something.









[GitHub] [incubator-tvm] tqchen commented on issue #5543: Can not find stride and padding information in graph after relay.build function

2020-05-08 Thread GitBox


tqchen commented on issue #5543:
URL: https://github.com/apache/incubator-tvm/issues/5543#issuecomment-625957768


   Please open a new thread on https://discuss.tvm.ai/







[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422297873



##
File path: rust/tvm-sys/build.rs
##
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+extern crate bindgen;
+
+use std::path::PathBuf;
+
+// extern crate cmake;
+
+use std::env;
+// use std::path::Path;
+// use std::process::Command;
+// use cmake::Config;
+
+// fn main() {
+// if !Path::new("tvm/.git").exists() {
+// let _ = Command::new("git")
+// .args(&["submodule", "update", "--recursive", "--init"])
+// .status();
+// }
+
+// let dst = Config::new("tvm")
+// .very_verbose(true)
+// .build();
+
+// // let dst = dst.join("build");
+
+// let out_dir = env::var("OUT_DIR").unwrap();
+
+// println!("{}", out_dir);
+// // let _ = Command::new("mv")
+// // .args(&[format!("{}/build/libtvm.dylib", dst.display()), 
out_dir])
+// // .status();
+
+// println!("cargo:rustc-link-search=native={}/lib", dst.display());
+// // TODO(@jroesch): hack for dylib behavior
+// for lib in &[/* "tvm", */ "tvm_runtime", /* "tvm_topi" */] {
+// // let src = format!("{}/lib/lib{}.dylib", out_dir, lib);
+// // let dst = format!("{}/../../../deps", out_dir);
+// // let _ = Command::new("mv")
+// // .args(&[src, dst])
+// // .status();
+// println!("cargo:rustc-link-lib=dylib={}", lib);
+// }
+// // "-Wl,-rpath,/scratch/library/"
+// println!("cargo:rustc-env=TVM_HOME={}/build", dst.display());
+// // panic!("");
+// // cc::Build::new()
+// // .cpp(true)
+// // .flag("-std=c++11")
+// // .flag("-Wno-ignored-qualifiers")
+// // .flag("-Wno-unused-parameter")
+// // .include("/Users/jroesch/Git/tvm/include")
+// // .include("/Users/jroesch/Git/tvm/3rdparty/dmlc-core/include")
+// // .include("/Users/jroesch/Git/tvm/3rdparty/dlpack/include")
+// // .include("/Users/jroesch/Git/tvm/3rdparty/HalideIR/src")
+// // .file("tvm_wrapper.cc")
+// // .compile("tvm_ffi");
+// // println!("cargo:rustc-link-lib=dylib=tvm");
+// // println!("cargo:rustc-link-search=/Users/jroesch/Git/tvm/build");
+// }
+
+fn main() {
+let tvm_home = option_env!("TVM_HOME").map(str::to_string).unwrap_or({
+let crate_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
+.canonicalize()
+.unwrap();
+crate_dir
+.parent()
+.unwrap()
+.parent()
+.unwrap()
+.to_str()
+.unwrap()
+.to_string()
+});
+
+if cfg!(feature = "bindings") {
+println!("cargo:rerun-if-env-changed=TVM_HOME");
+// println!("cargo:rustc-link-lib=dylib=tvm_runtime");
+// TODO: move to core
+// println!("cargo:rustc-link-lib=dylib=tvm_runtime");

Review comment:
   I will restore this in a follow-up PR.









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422296419



##
File path: rust/tvm-sys/build.rs
##
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+extern crate bindgen;
+
+use std::path::PathBuf;
+
+// extern crate cmake;
+
+use std::env;
+// use std::path::Path;
+// use std::process::Command;
+// use cmake::Config;
+
+// fn main() {
+// if !Path::new("tvm/.git").exists() {

Review comment:
   I will remove it from this PR; I need to restore it once I land the second PR.









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422295900



##
File path: rust/tvm-sys/src/context.rs
##
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! Provides [`Context`] and related device queries.
+//!
+//! Create a new context for device type and device id.
+//!
+//! # Example
+//!
+//! ```
+//! # use tvm_sys::{DeviceType, Context};
+//! let cpu = DeviceType::from("cpu");
+//! let ctx = Context::new(cpu , 0);
+//! let cpu0 = Context::cpu(0);
+//! assert_eq!(ctx, cpu0);
+//! ```
+//!
+//! Or from a supported device name.
+//!
+//! ```
+//! use tvm_sys::Context;
+//! let cpu0 = Context::from("cpu");
+//! println!("{}", cpu0);
+//! ```
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+use std::fmt::{self, Display, Formatter};
+
+use crate::ffi::{self, *};
+use crate::packed_func::{ArgValue, RetValue};
+
+use thiserror::Error;
+use anyhow::Result;
+use enumn::N;
+
+/// Device type represents the set of devices supported by
+/// [TVM](https://github.com/apache/incubator-tvm).
+///
+/// ## Example
+///
+/// ```
+/// use tvm_sys::DeviceType;
+/// let cpu = DeviceType::from("cpu");
+/// println!("device is: {}", cpu);
+///```
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, N)]
+#[repr(i64)]
+pub enum DeviceType {
+CPU = 1,
+GPU,
+CPUPinned,
+OpenCL,
+Vulkan,
+Metal,
+VPI,
+ROCM,
+ExtDev,
+}

Review comment:
   The `tvm-sys` crate stands alone from all other crates right now. The goal
is to fix the above issues in the higher-level crates by wrapping the standard
low-level constructs with better designs and then incrementally bubbling those
up to the other crates; see my other open PR for more examples.









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422295051



##
File path: rust/tvm-sys/src/context.rs
##
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! Provides [`Context`] and related device queries.
+//!
+//! Create a new context for device type and device id.
+//!
+//! # Example
+//!
+//! ```
+//! # use tvm_sys::{DeviceType, Context};
+//! let cpu = DeviceType::from("cpu");
+//! let ctx = Context::new(cpu , 0);
+//! let cpu0 = Context::cpu(0);
+//! assert_eq!(ctx, cpu0);
+//! ```
+//!
+//! Or from a supported device name.
+//!
+//! ```
+//! use tvm_sys::Context;
+//! let cpu0 = Context::from("cpu");
+//! println!("{}", cpu0);
+//! ```
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+use std::fmt::{self, Display, Formatter};
+
+use crate::ffi::{self, *};
+use crate::packed_func::{ArgValue, RetValue};
+
+use thiserror::Error;
+use anyhow::Result;

Review comment:
   I think we should revisit this when I land the higher-level errors. Before,
these were all using failure, and I tried to just map the failure use cases
1-to-1 onto thiserror/anyhow. One of the challenges is handling dynamic errors
produced by TVM.









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422293153



##
File path: rust/tvm-sys/src/context.rs
##
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! Provides [`Context`] and related device queries.
+//!
+//! Create a new context for device type and device id.
+//!
+//! # Example
+//!
+//! ```
+//! # use tvm_sys::{DeviceType, Context};
+//! let cpu = DeviceType::from("cpu");
+//! let ctx = Context::new(cpu , 0);
+//! let cpu0 = Context::cpu(0);
+//! assert_eq!(ctx, cpu0);
+//! ```
+//!
+//! Or from a supported device name.
+//!
+//! ```
+//! use tvm_sys::Context;
+//! let cpu0 = Context::from("cpu");
+//! println!("{}", cpu0);
+//! ```
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+use std::fmt::{self, Display, Formatter};
+
+use crate::ffi::{self, *};
+use crate::packed_func::{ArgValue, RetValue};
+
+use thiserror::Error;
+use anyhow::Result;
+use enumn::N;
+
+/// Device type represents the set of devices supported by
+/// [TVM](https://github.com/apache/incubator-tvm).
+///
+/// ## Example
+///
+/// ```
+/// use tvm_sys::DeviceType;
+/// let cpu = DeviceType::from("cpu");
+/// println!("device is: {}", cpu);
+///```
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, N)]
+#[repr(i64)]
+pub enum DeviceType {
+CPU = 1,
+GPU,
+CPUPinned,
+OpenCL,
+Vulkan,
+Metal,
+VPI,
+ROCM,
+ExtDev,
+}

Review comment:
   `EnumN` allows me to convert back from the raw integers produced by the 
raw FFI with validation. 









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-08 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r422292581



##
File path: rust/tvm-sys/src/datatype.rs
##
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::any::TypeId;
+use std::convert::TryFrom;
+use std::str::FromStr;
+
+use crate::packed_func::RetValue;
+use crate::ffi::DLDataType;
+
+use thiserror::Error;
+
+
+const DL_INT_CODE: u8 = 0;
+const DL_UINT_CODE: u8 = 1;
+const DL_FLOAT_CODE: u8 = 2;
+const DL_HANDLE: u8 = 3;

Review comment:
   No, which is why they are private.
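
   For reference, the same type codes as seen from the Python runtime API (a 
hedged sketch; it assumes the mainline tvm Python package, not this crate):

       # DLDataType type codes: int = 0, uint = 1, float = 2.
       import tvm

       print(tvm.runtime.DataType("int32").type_code)    # 0
       print(tvm.runtime.DataType("uint8").type_code)    # 1
       print(tvm.runtime.DataType("float32").type_code)  # 2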





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner

2020-05-08 Thread GitBox


mbrookhart commented on pull request #5231:
URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-625942189


   @tqchen @mbaret care to take another look after I updated the code based on 
your comments?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart edited a comment on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner

2020-05-08 Thread GitBox


mbrookhart edited a comment on pull request #5231:
URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-625942189


   @tqchen @mbaret Thanks for the comments! Care to take another look after I 
updated the code based on your comments?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner

2020-05-08 Thread GitBox


mbrookhart commented on pull request #5231:
URL: https://github.com/apache/incubator-tvm/pull/5231#issuecomment-625934854


   @zhiics Thanks for the comments! Completed those refactors.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #5544: [RPC] Improve RPCServer AsyncIO support.

2020-05-08 Thread GitBox


tmoreau89 commented on a change in pull request #5544:
URL: https://github.com/apache/incubator-tvm/pull/5544#discussion_r422274176



##
File path: src/runtime/rpc/rpc_session.h
##
@@ -189,6 +197,98 @@ class RPCSession {
*/
   virtual bool IsLocalSession() const = 0;
 
+  // Asynchronous variants of the API.
+  // These APIs are used by the RPC server to allow sessions that
+  // have special implementations for the async functions.
+  //
+  // In the async APIs, an exception is returned by passing
+  // async_error=true, encode_args=[error_msg].
+
+  /*!
+   * \brief Whether the session is async.
+   *
+   * If the session is not async, its Async implementations
+   * simply call into their synchronous counterparts,
+   * and the callback is guaranteed to be called before the async function finishes.
+   *
+   * \return the async state.
+   *
+   * \note We can only use an async session in an event-driven RPC server.
+   */
+  virtual bool IsAsync() const;
+
+  /*!
+   * \brief Asynchronously call func.
+   * \param func The function handle.
+   * \param arg_values The argument values.
+   * \param arg_type_codes The type codes of the arguments.
+   * \param num_args Number of arguments.
+   *
+   * \param callback The callback to pass the return value or exception.
+   */
+  virtual void AsyncCallFunc(PackedFuncHandle func,
+ const TVMValue* arg_values,
+ const int* arg_type_codes,
+ int num_args,
+ FAsyncCallback callback);
+
+  /*!
+   * \brief Asynchronous version of CopyToRemote.
+   *
+   * \param local_from The source host data.
+   * \param local_from_offset The byte offset in the source.
+   * \param remote_to The target array.
+   * \param remote_to_offset The byte offset in the destination.
+   * \param nbytes The size of the memory in bytes.
+   * \param remote_ctx_to The target context.
+   * \param type_hint Hint of content data type.
+   *
+   * \param on_complete The callback to signal copy complete.
+   * \note All the allocated memory in local_from and remote_to
+   *   must stay alive until on_complete is called.
+   */
+  virtual void AsyncCopyToRemote(void* local_from,
+ size_t local_from_offset,
+ void* remote_to,
+ size_t remote_to_offset,
+ size_t nbytes,
+ TVMContext remote_ctx_to,
+ DLDataType type_hint,
+ FAsyncCallback on_complete);
+
+  /*!
+   * \brief Asynchronous version of CopyFromRemote.
+   *
+   * \param local_from The source host data.

Review comment:
   The order of the comments does not match the parameter order.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen opened a new pull request #5545: [WEB] WebGPU support

2020-05-08 Thread GitBox


tqchen opened a new pull request #5545:
URL: https://github.com/apache/incubator-tvm/pull/5545


   This PR introduces WebGPU support to tvm. The WebGPU runtime is built directly 
in JavaScript (as WebGPU is a first-class JS API) and is exposed back to tvm's 
runtime via PackedFuncs.
   
   One important note is that `ctx.sync` is not synchronous. This is due to the fact 
that WebGPU is a purely async API and we cannot block in the web environment.
   
   So the current best way to use the JS API is to wrap things in an async 
function. When copying a GPU array to the CPU, `await ctx.sync()` needs to be called 
to wait for copy completion.
   
   We use an AsyncIO RPC server to serve the async functions to the clients.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5545: [WEB] WebGPU support

2020-05-08 Thread GitBox


tqchen commented on pull request #5545:
URL: https://github.com/apache/incubator-tvm/pull/5545#issuecomment-625930446


   Depends on https://github.com/apache/incubator-tvm/pull/5544 
   
   cc @kazum @masahi @jroesch @jwfromm 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [CRT]fix to reduce RAM size during loading model (#5507)

2020-05-08 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 95540d2  [CRT]fix to reduce RAM size during loading model (#5507)
95540d2 is described below

commit 95540d2dd8e5d8e7297f6d7c8b5b77cef6137faf
Author: Samuel 
AuthorDate: Fri May 8 22:40:57 2020 +0530

[CRT]fix to reduce RAM size during loading model (#5507)

* [CRT]fix to reduce RAM size during loading model

* Release graph_json memory immediately after reading
---
 src/runtime/crt/graph_runtime.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/runtime/crt/graph_runtime.c b/src/runtime/crt/graph_runtime.c
index a4c07f4..ab96a0c 100644
--- a/src/runtime/crt/graph_runtime.c
+++ b/src/runtime/crt/graph_runtime.c
@@ -815,10 +815,10 @@ void TVMGraphRuntime_Init(TVMGraphRuntime * runtime, const char * graph_json,
  const TVMModule * module, const TVMContext * ctxs) {
  JSONReader reader = JSONReader_Create(graph_json);
  runtime->Load(runtime, &reader);
+  JSONReader_Release(&reader);
  runtime->ctxs[0] = ctxs[0];
  runtime->SetupStorage(runtime);
  runtime->SetupOpExecs(runtime);
-  JSONReader_Release(&reader);
 }
 
 TVMGraphRuntime * TVMGraphRuntimeCreate(const char * sym_json,



[GitHub] [incubator-tvm] wpan11nv commented on pull request #5498: [Optimization] Warp level reduction support for CUDA

2020-05-08 Thread GitBox


wpan11nv commented on pull request #5498:
URL: https://github.com/apache/incubator-tvm/pull/5498#issuecomment-625917497


   Minor fix-up and cleaned comments



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (36d7fd9 -> e54e760)

2020-05-08 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 36d7fd9  Add Onnx Pad v11 (#5539)
 add e54e760  fix restructured text (#5541)

No new revisions were added by this update.

Summary of changes:
 docs/vta/install.rst | 247 ++-
 1 file changed, 128 insertions(+), 119 deletions(-)



[GitHub] [incubator-tvm] spectrometerHBH edited a comment on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


spectrometerHBH edited a comment on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-625889950


   I cannot reproduce the errors in CI now. The errors reported in CI look 
strange. I will try to fix them. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] spectrometerHBH commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


spectrometerHBH commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-625889950


   I cannot reproduce the errors in CI.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu commented on a change in pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-08 Thread GitBox


liangfu commented on a change in pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#discussion_r422219093



##
File path: src/runtime/hexagon/sim/driver/CMakeLists.txt
##
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   sure, thanks.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kparzysz-quic commented on pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-08 Thread GitBox


kparzysz-quic commented on pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#issuecomment-625877725


   Rebased to force a new CI build.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-08 Thread GitBox


kparzysz-quic commented on a change in pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#discussion_r422164839



##
File path: src/runtime/hexagon/sim/driver/CMakeLists.txt
##
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   I've already added a commit that makes this a dependency and now it's 
compiled automatically.  Does that address your concerns about 
user-friendliness?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-08 Thread GitBox


kparzysz-quic commented on a change in pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#discussion_r422164839



##
File path: src/runtime/hexagon/sim/driver/CMakeLists.txt
##
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   I've already added a commit that makes this a dependency and now it's 
compiled automatically.  Doesn't that address your concerns about 
user-friendliness?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Hzfengsy commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


Hzfengsy commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-625818864


   > Binds in some pass functions are not clean for round-trip IR dumping; how do 
we deal with it?
   
   Do you mean that `buffer_bind` is not necessary to print after the pass 
`storage_flatten`?
   Yes, the buffer no longer exists after the flatten pass. All we have is the 
Var (buffer->data) in Load and Store. But we still need a place to define those 
vars, and that's why we still have buffer_bind, where we define the buffer as well 
as buffer->data, in low-level functions.
   
   Furthermore, maybe one day we can use BufferLoad/BufferStore after the 
flatten pass and use buffers from the beginning to the end.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel commented on pull request #5507: [CRT]fix to reduce RAM size during loading model

2020-05-08 Thread GitBox


siju-samuel commented on pull request #5507:
URL: https://github.com/apache/incubator-tvm/pull/5507#issuecomment-625811064


   @tqchen Thanks a lot. The 40KB RAM reduction on my side came from freeing 
graph_json immediately after it is saved into the runtime. 
   The other fix in SetupStorage saved only around 200 bytes in total, 
which is not very significant. Sorry that I overlooked the other change. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-05-08 Thread GitBox


dhruvaray commented on pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#issuecomment-625768588


   @siju-samuel - rebased and incorporated comments. Kindly review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on a change in pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-05-08 Thread GitBox


dhruvaray commented on a change in pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#discussion_r422081874



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -579,6 +582,63 @@ def convert_tanh(self, op):
 
 return out
 
+def convert_range(self, op):
+"""Convert TFLite Range"""
+try:
+from tflite.Operator import Operator
+from tflite.TensorType import TensorType
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFlite quantized RANGE operator is not supported yet.')
+
+assert isinstance(op, Operator)
+input_tensors = self.get_input_tensors(op)
+assert len(input_tensors) == 3, "input tensors length should be 3"
+
+start, limit, delta = input_tensors[0], input_tensors[1], 
input_tensors[2]
+expressions = []
+
+for t in [start, limit, delta]:
+if self.has_expr(t.tensor_idx):
+expressions.append(self.get_expr(t.tensor_idx))
+else:
+tensor_type = self.get_tensor_type_str(t.tensor.Type())
+tensor_value = self.get_tensor_value(t)
+expressions.append(self.exp_tab.new_const(tensor_value, 
dtype=tensor_type))
+
+#out type inference
+if delta.tensor.Type() == TensorType.FLOAT32:
+out_type = self.get_tensor_type_str(delta.tensor.Type())
+else:
+out_type = self.get_tensor_type_str(start.tensor.Type())
+
+#put type here from op
+out = _op.arange(expressions[0], expressions[1], expressions[2], 
out_type)
+
+return out
+
+def convert_shape(self, op):
+"""Convert TFLite Shape"""
+try:
+from tflite.Operator import Operator
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFlite quantized SHAPE operator is not supported yet.')
+

Review comment:
   removed these checks...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on a change in pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-05-08 Thread GitBox


dhruvaray commented on a change in pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#discussion_r422081644



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -650,6 +693,82 @@ def test_all_resize():
 _test_resize(tf.image.resize_nearest_neighbor, data, 
align_corners=False)
 
 
+###
+# Range
+# -
+def _test_range(start, limit, delta):
+# tflite 1.13 convert method does not accept empty shapes
+if package_version.parse(tf.VERSION) >= package_version.parse('1.14.0'):
+tf.reset_default_graph()
+with tf.Graph().as_default():
+start_scalar, limit_scalar, delta_scalar = \
+tf.placeholder(dtype=start.dtype, shape=(), name="start"), \
+tf.placeholder(dtype=limit.dtype, shape=(), name="limit"), \
+tf.placeholder(dtype=delta.dtype, shape=(), name="delta")
+
+out = tf.range(start_scalar, limit_scalar, delta_scalar, 
name="range")
+
+compare_tflite_with_tvm(
+[start, limit, delta],
+["start", "limit", "delta"],
+[start_scalar, limit_scalar, delta_scalar],
+[out],
+mode="vm",
+quantized=False
+)
+
+def _test_range_default():
+# tflite 1.13 convert method does not accept empty shapes
+if package_version.parse(tf.VERSION) >= package_version.parse('1.14.0'):
+tf.reset_default_graph()
+with tf.Graph().as_default():
+
+inputs = [
+tf.placeholder(dtype=tf.int32, shape=(), name="p1"),
+tf.placeholder(dtype=tf.int32, shape=(), name="p2")
+]
+leaves = [
+tf.range(start = inputs[0], limit = inputs[1]), #use default 
delta
+tf.range(start = inputs[1]) #use start as limit with 0 as the 
first item in the range
+]
+
+compare_tflite_with_tvm(
+[np.int32(1), np.int32(18)],
+["p1", "p2"],
+inputs,
+leaves,
+mode="vm",
+quantized=False
+)
+
+def test_forward_range():
+   _test_range(np.int32(1), np.int32(18), np.int32(3))
+   _test_range(np.int32(1), np.int32(18), np.float32(3.1)) # increment is of 
type float
+   _test_range(np.float32(1.0), np.int32(18), np.int32(3.1)) # start is of 
type float
+   _test_range_default()
+
+###
+# Shape
+# -
+def test_forward_shape():
+# tflite 1.13 convert method does not accept empty shapes
+if package_version.parse(tf.VERSION) >= package_version.parse('1.14.0'):
+tf.reset_default_graph()
+with tf.Graph().as_default():
+data = np.array([1, 18, 3], dtype=np.int32)
+start = tf.placeholder(dtype=tf.int32, shape=[], name="start")
+limit = tf.placeholder(dtype=tf.int32, shape=[], name="limit")
+delta = tf.placeholder(dtype=tf.int32, shape=[], name="delta")
+r = tf.range(start, limit, delta, tf.int32, name="range")
+out = tf.shape(r, out_type=tf.dtypes.int32)
+compare_tflite_with_tvm(
+[x for x in np.nditer(data)],
+["start", "limit", "delta"],
+[start, limit, delta],
+[out],
+mode="vm",
+quantized=False
+)
 ###

Review comment:
   added





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on a change in pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-05-08 Thread GitBox


dhruvaray commented on a change in pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#discussion_r422081555



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -579,6 +582,63 @@ def convert_tanh(self, op):
 
 return out
 
+def convert_range(self, op):
+"""Convert TFLite Range"""
+try:
+from tflite.Operator import Operator
+from tflite.TensorType import TensorType
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFlite quantized RANGE operator is not supported yet.')
+
+assert isinstance(op, Operator)
+input_tensors = self.get_input_tensors(op)
+assert len(input_tensors) == 3, "input tensors length should be 3"
+
+start, limit, delta = input_tensors[0], input_tensors[1], 
input_tensors[2]
+expressions = []
+
+for t in [start, limit, delta]:
+if self.has_expr(t.tensor_idx):
+expressions.append(self.get_expr(t.tensor_idx))
+else:
+tensor_type = self.get_tensor_type_str(t.tensor.Type())
+tensor_value = self.get_tensor_value(t)
+expressions.append(self.exp_tab.new_const(tensor_value, 
dtype=tensor_type))
+

Review comment:
   Yes... fixed...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on a change in pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-05-08 Thread GitBox


dhruvaray commented on a change in pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#discussion_r422081372



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -579,6 +582,63 @@ def convert_tanh(self, op):
 
 return out
 
+def convert_range(self, op):
+"""Convert TFLite Range"""
+try:
+from tflite.Operator import Operator

Review comment:
   fixed...

##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -188,7 +231,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
 continue
 
 tvm_output = run_tvm_graph(tflite_model_buffer, in_data, in_node, 
target=device,
-   num_output=len(out_names), 
out_names=out_names)
+   num_output=len(out_names), 
out_names=out_names,mode=mode)

Review comment:
   fixed...

##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -579,6 +582,63 @@ def convert_tanh(self, op):
 
 return out
 
+def convert_range(self, op):
+"""Convert TFLite Range"""
+try:
+from tflite.Operator import Operator
+from tflite.TensorType import TensorType
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFlite quantized RANGE operator is not supported yet.')
+
+assert isinstance(op, Operator)
+input_tensors = self.get_input_tensors(op)
+assert len(input_tensors) == 3, "input tensors length should be 3"
+
+start, limit, delta = input_tensors[0], input_tensors[1], 
input_tensors[2]
+expressions = []
+
+for t in [start, limit, delta]:
+if self.has_expr(t.tensor_idx):
+expressions.append(self.get_expr(t.tensor_idx))
+else:
+tensor_type = self.get_tensor_type_str(t.tensor.Type())
+tensor_value = self.get_tensor_value(t)
+expressions.append(self.exp_tab.new_const(tensor_value, 
dtype=tensor_type))
+
+#out type inference

Review comment:
   fixed...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on pull request #5447: [TOPI,RELAY][TFLITE] Sparse to dense operator

2020-05-08 Thread GitBox


dhruvaray commented on pull request #5447:
URL: https://github.com/apache/incubator-tvm/pull/5447#issuecomment-625745563


   @siju-samuel - rebased and used new method



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] xqdan commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-08 Thread GitBox


xqdan commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-625717683


   Binds in some pass functions are not clean for round-trip IR dumping; how do we 
deal with it?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] wangzhao123456 opened a new issue #5543: Can not find stride and padding information in graph after relay.build function

2020-05-08 Thread GitBox


wangzhao123456 opened a new issue #5543:
URL: https://github.com/apache/incubator-tvm/issues/5543


   Thanks for participating in the TVM community! We use https://discuss.tvm.ai 
for any general usage questions and discussions. The issue tracker is used for 
actionable items such as feature proposals discussion, roadmaps, and bug 
tracking.  You are always welcomed to post on the forum first :)
   
   Issues that are inactive for a period of time may get closed. We adopt this 
policy so that we won't lose track of actionable issues that may fall at the 
bottom of the pile. Feel free to reopen a new one if you feel there is an 
additional problem that needs attention when an old one gets closed.
   
   For bug reports, to help the developer act on the issues, please include a 
description of your environment, preferably a minimum script to reproduce the 
problem.
   
   For feature proposals, list clear, small actionable items so we can track 
the progress of the change.
   
   Hello, I am a beginner with TVM and I ran the tutorials on the official website 
to learn it. 
   I noticed that before the relay.build function, the stride and padding 
information is stored in the model. After relay.build, which performs some 
kinds of graph-level optimizations, there is no stride and padding 
information in the optimized graph. Where could I find the stride and 
padding information after relay.build? 
   Thanks a lot. 
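
   For reference, a minimal sketch (assuming the 0.7-era Python API; not part of 
the original report) showing that conv2d attributes such as strides and padding 
stay visible on the Relay module itself, which can be printed before it is 
compiled by relay.build:

       import tvm
       from tvm import relay

       x = relay.var("x", shape=(1, 3, 224, 224))
       w = relay.var("w", shape=(16, 3, 3, 3))
       y = relay.nn.conv2d(x, w, strides=(2, 2), padding=(1, 1))
       mod = tvm.IRModule.from_expr(relay.Function([x, w], y))
       print(mod)  # the printed IR shows the strides/padding attrs on nn.conv2d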



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kazum commented on a change in pull request #5502: [TOPI][RELAY][TENSORFLOW]Math ops added

2020-05-08 Thread GitBox


kazum commented on a change in pull request #5502:
URL: https://github.com/apache/incubator-tvm/pull/5502#discussion_r421972844



##
File path: tests/python/relay/test_op_grad_level1.py
##
@@ -69,7 +69,12 @@ def check_single_op(opfunc, ref):
 (tvm.relay.log2, lambda x: 1 / (np.log(2) * x)),
 (tvm.relay.log10, lambda x: 1 / (np.log(10) * x)),
 (tvm.relay.cosh, lambda x: -1.0 * np.sinh(x)),

Review comment:
   I think cosh'(x) is not -sinh(x) but sinh(x).
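
   For reference, since cosh x = (e^x + e^{-x}) / 2, a one-line check in LaTeX:

       \frac{d}{dx}\cosh x = \frac{e^{x} - e^{-x}}{2} = \sinh x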





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kazum commented on a change in pull request #5502: [TOPI][RELAY][TENSORFLOW]Math ops added

2020-05-08 Thread GitBox


kazum commented on a change in pull request #5502:
URL: https://github.com/apache/incubator-tvm/pull/5502#discussion_r421965191



##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -116,12 +117,60 @@ def sinh_grad(orig, grad):
 x = orig.args[0]
 return [grad * cosh(x)]
 
+
+@register_gradient("acos")
+def acos_grad(orig, grad):
+"""Returns [grad * -1/((1 - (x ^ 2)) ^ 1/2)]"""
+x = orig.args[0]
+a = const(2.0)
+ones = ones_like(x)
+return [grad * (-ones / sqrt(ones_like(x) - power(x, a)))]

Review comment:
   ones_like(x) => ones

##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -116,12 +117,60 @@ def sinh_grad(orig, grad):
 x = orig.args[0]
 return [grad * cosh(x)]
 
+
+@register_gradient("acos")
+def acos_grad(orig, grad):
+"""Returns [grad * -1/((1 - (x ^ 2)) ^ 1/2)]"""
+x = orig.args[0]
+a = const(2.0)
+ones = ones_like(x)
+return [grad * (-ones / sqrt(ones_like(x) - power(x, a)))]
+
+
+@register_gradient("acosh")
+def acosh_grad(orig, grad):
+"""Returns [grad * 1/((x - 1) ^ 1/2 * (x + 1) ^ 1/2)]"""
+x = orig.args[0]
+a = const(2.0)
+ones = ones_like(x)
+return [grad * ones / sqrt(power(x, a) - ones)]
+
+
+@register_gradient("asin")
+def asin_grad(orig, grad):
+"""Returns [grad * 1/((1 - (x ^ 2)) ^ (1/2))]"""
+x = orig.args[0]
+a = const(2.0)
+ones = ones_like(x)
+return [grad * ones / sqrt(ones - power(x, a))]
+
+
+@register_gradient("asinh")
+def asinh_grad(orig, grad):
+"""Returns [grad * 1/((1 + (x ^ 2)) ^ (1/2))]"""
+x = orig.args[0]
+a = const(2.0)
+ones = ones_like(x)
+return [grad * ones / sqrt(ones + power(x, a))]
+
+
 @register_gradient("atan")
 def atan_grad(orig, grad):
 """Returns [grad * 1 / (1 + x ^ 2)]"""
 x = orig.args[0]
 a = const(2.0)
-return [grad * ones_like(x) / (ones_like(x) + power(x, a))]
+ones = ones_like(x)
+return [grad * ones / (ones + power(x, a))]
+
+
+@register_gradient("atanh")
+def atanh_grad(orig, grad):
+"""Returns xx[grad * 1 / (1 - x ^ 2)]"""

Review comment:
   Remove `xx`.

##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -116,12 +117,60 @@ def sinh_grad(orig, grad):
 x = orig.args[0]
 return [grad * cosh(x)]
 
+
+@register_gradient("acos")
+def acos_grad(orig, grad):
+"""Returns [grad * -1/((1 - (x ^ 2)) ^ 1/2)]"""
+x = orig.args[0]
+a = const(2.0)
+ones = ones_like(x)
+return [grad * (-ones / sqrt(ones_like(x) - power(x, a)))]
+
+
+@register_gradient("acosh")
+def acosh_grad(orig, grad):
+"""Returns [grad * 1/((x - 1) ^ 1/2 * (x + 1) ^ 1/2)]"""
+x = orig.args[0]
+a = const(2.0)
+ones = ones_like(x)
+return [grad * ones / sqrt(power(x, a) - ones)]

Review comment:
   I think all the `power(x, const(2.0))` can be replaced with `x * x`.
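
   A hedged sketch of that suggestion for one of the gradients (written against 
the public relay ops; illustrative only, not the code in this PR):

       # Use x * x in place of power(x, const(2.0)); the helpers come from the
       # public tvm.relay namespace here, from the local modules in _tensor_grad.py.
       from tvm.relay import ones_like, sqrt

       def acos_grad(orig, grad):
           """Returns [grad * -1 / (1 - x * x) ^ 1/2]"""
           x = orig.args[0]
           ones = ones_like(x)
           return [grad * (-ones / sqrt(ones - x * x))]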





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5539: Add Onnx Pad v11

2020-05-08 Thread GitBox


masahi commented on pull request #5539:
URL: https://github.com/apache/incubator-tvm/pull/5539#issuecomment-625660908


   Thanks @mbrookhart 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (1b17b73 -> 36d7fd9)

2020-05-08 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 1b17b73  Changes to cpp_rpc to make it work on Android (+ Hexagon 
offloading) (#5535)
 add 36d7fd9  Add Onnx Pad v11 (#5539)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  | 25 +++
 tests/python/frontend/onnx/test_forward.py | 69 +-
 2 files changed, 93 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] tobegit3hub opened a new pull request #5542: Load platform specific lib for tvmdsoop instead of the hard-coded tvm_dso_op.so

2020-05-08 Thread GitBox


tobegit3hub opened a new pull request #5542:
URL: https://github.com/apache/incubator-tvm/pull/5542


   Since different operating systems build dynamic libraries with different 
suffixes, we check the operating system in the Python API and load the 
platform-specific file.
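
   A minimal sketch of that pattern (the helper and library names below are 
illustrative, not necessarily the ones used in this PR):

       # Choose the platform-specific shared-library name before loading it.
       import platform

       def _lib_name(base="tvm_dso_op"):
           system = platform.system()
           if system == "Darwin":
               return "lib{}.dylib".format(base)
           if system == "Windows":
               return "{}.dll".format(base)
           return "lib{}.so".format(base)

       print(_lib_name())  # e.g. "libtvm_dso_op.so" on Linux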



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org