[GitHub] [incubator-tvm] ANSHUMAN87 opened a new pull request #5523: [Refactor][std::string --> String] IRModule is updated with String

2020-05-06 Thread GitBox


ANSHUMAN87 opened a new pull request #5523:
URL: https://github.com/apache/incubator-tvm/pull/5523


   Refer #5490
   
   @tqchen , @zhiics : Please help review!
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] ANSHUMAN87 commented on issue #5490: [REFACTOR] std::string -> String Migration in IR nodes

2020-05-06 Thread GitBox


ANSHUMAN87 commented on issue #5490:
URL: https://github.com/apache/incubator-tvm/issues/5490#issuecomment-624495276


   I would like to contribute to this effort. Maybe we can share the workload if
@zhiics is already working on it. Please let me know. Thanks!







[incubator-tvm] branch rust-tvm-sys created (now 84e7d53)

2020-05-06 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch rust-tvm-sys
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at 84e7d53  Tests are passing

No new revisions were added by this update.



[GitHub] [incubator-tvm] jroesch opened a new pull request #5524: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch opened a new pull request #5524:
URL: https://github.com/apache/incubator-tvm/pull/5524


   I plan on sending an RFC tomorrow on the greater refactor, but my goal is to
restructure the current Rust code into 4 crates:
   - `tvm-sys`, containing the lowest-level bindings to `libruntimetvm.dylib`.
   - `tvm-rt`, a layer which provides an ergonomic Rust layer over the primitive
concepts exposed by `tvm-sys`.
   - `tvm`, a high-level crate for using the full TVM library, including the
compiler components.
   - `tvm-graph-rt`, previously the `runtime` crate, an ABI-compatible Rust
implementation of the graph runtime.
   
   I was attempting to refactor this in one go, but found that my current
branch is rapidly approaching a few thousand lines.
   
   I will first send it in pieces to be reviewed and merged, and then follow up
with connecting PRs to put the pieces together, turn on CI, and replace the
existing code.
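The sys/rt layering described above is a common FFI pattern: a raw layer that follows C conventions (status codes, out-parameters) and an ergonomic layer that converts those into native error handling. A minimal sketch of the idea in Python, with entirely hypothetical names (the real crates bind the TVM runtime via `extern "C"` declarations in Rust):

```python
# Hypothetical sketch of the tvm-sys / tvm-rt split; names are illustrative.

# "sys"-style layer: raw C conventions, errors reported via return codes
# and results written through out-parameters.
def raw_func_call(x, out):
    out[0] = x + 1
    return 0  # 0 signals success, as in C APIs

# "rt"-style layer: wraps the raw call so errors become exceptions and
# results become plain return values.
def call(x):
    out = [0]
    code = raw_func_call(x, out)
    if code != 0:
        raise RuntimeError("call failed with code %d" % code)
    return out[0]

print(call(41))  # 42
```

Under this layering, the `tvm` and `tvm-graph-rt` crates would build on the ergonomic layer rather than touching the raw bindings directly.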







[incubator-tvm] branch rust-tvm-sys updated (84e7d53 -> 79131d5)

2020-05-06 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch rust-tvm-sys
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


omit 84e7d53  Tests are passing
omit 6e7ef97  New function interface
omit 8ca7efd  Repair tests for tvm-sys
omit 719b805  Format for Max
omit 40a3adb  Hack hack
omit f4a940e  Almost there ...
omit 7fedb66  ToFunction not working
omit a4c6de2  Unify different context types and add ToFunction instances
omit e605234  Work on converting tvm-rt, split to_function into its own file
omit 77a6473  Clean up organization of tvm-sys
omit d74fb91  WIP
omit 37ce767  Add function and get out of tree pass working
omit 35be34b  WIP
omit 6a15349  WIP
omit 2177ffa  Almost got out of tree pass working
omit e38d2f7  Useful v1 of Macro
omit cb093c8  Object derive Macro v1
omit eea1f0f  Finish array support and add call node
omit 82377ba  Almost got arrays working
omit baf91bf  Remove diff tool generated files
omit f8b5f3c  Now with failing array test
omit b866880  Reorganize
omit 8c8bb4b  Runtime crate builds
omit 7701399  Reorganize to new structure
omit ebeb899  Add runtime copy
omit 06b1590  Move frontend
omit f4f4f50  Move runtime to graph-runtime
omit 1283829  Move tvm-common to tvm-sys
omit 2d8a2be  Convert to thiserror and anyhow
omit 62efbe2  WIP
omit 2ab6141  Add a couple more Relay IR nodes
omit c991575  Refactor the Rust libraries.
 add 79131d5  Add the tvm-sys crate as the lowest level bindings.

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (84e7d53)
\
 N -- N -- N   refs/heads/rust-tvm-sys (79131d5)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.
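The omit/discard distinction above is just reachability: a commit survives as long as some reference can still reach it. A toy Python model (commit names are invented) makes this concrete:

```python
# Toy model of git reachability, illustrating "omit" vs "discard".
# Each commit maps to its parent; refs are branch tips.
parents = {'B': None, 'O1': 'B', 'O2': 'O1', 'N1': 'B'}
refs = {'rust-tvm-sys': 'N1', 'other-branch': 'O2'}

def reachable(refs, parents):
    """Return the set of commits reachable from any ref."""
    seen = set()
    for tip in refs.values():
        c = tip
        while c is not None and c not in seen:
            seen.add(c)
            c = parents[c]
    return seen

# While another ref still points at O2, the O commits are "omit":
# dropped from rust-tvm-sys by the force-push, but not gone.
assert 'O2' in reachable(refs, parents)

# Remove the other ref and they become "discard": unreachable,
# and eligible for garbage collection.
del refs['other-branch']
assert 'O2' not in reachable(refs, parents)
```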

No new revisions were added by this update.

Summary of changes:



[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

2020-05-06 Thread GitBox


lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420642814



##
File path: docs/dev/convert_layout.rst
##
@@ -116,16 +117,21 @@ These steps happen for each operator in sequence, where 
ConvertLayout pass keeps
 data_layout = attrs['data_layout']
 kernel_layout = attrs['kernel_layout']
 data, weight = inputs
-assert desired_layout == 'NCHW', \
-"Currently only transformation to NCHW layout is supported."
+
+# Use the first entry in desired layouts which specifies the data 
layout.
+# The expected ordering of layouts for this operator is defined by 
this function.
+desired_data_layout = str(desired_layouts[0])
+
 if desired_layout == 'NCHW':
 new_attrs = dict(attrs)
-new_attrs['data_layout'] = desired_layout
+new_attrs['data_layout'] = desired_data_layout
 new_attrs['kernel_layout'] = 'OIHW'
 # Actual insertion of layout transforms is taken care internally
 # by ConvertLayout pass.
 return relay.nn.conv2d(data, weight, **new_attrs)
-return None
+else:
+assert "Layout %s is not yet supported" % desired_data_layout
+return None

Review comment:
   I'll go with the assert, thanks
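A side note on the assert the thread settles on: in Python, `assert "some message"` never fails, because a non-empty string is truthy and the string ends up being the condition rather than the message. A small sketch of the pitfall and of the form that actually fails:

```python
# A non-empty string is truthy, so a bare-string assert can never fire:
silently_passed = True
try:
    assert "Layout %s is not yet supported" % "NHWC"
except AssertionError:
    silently_passed = False
assert silently_passed  # the "assertion" did nothing

# A failing assert separates the condition from the message:
def check_layout(layout):
    assert layout == 'NCHW', "Layout %s is not yet supported" % layout

raised = False
try:
    check_layout('NHWC')
except AssertionError:
    raised = True
assert raised  # now the unsupported layout is actually reported
```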









[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

2020-05-06 Thread GitBox


lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420644527



##
File path: docs/dev/convert_layout.rst
##
@@ -218,24 +224,49 @@ Second example is for a lightly-layout sensitive operator 
- batch normalization.
 
 ConvertLayout pass is extremely easy to use. The pass is not a part of default 
relay.build pipeline. The intended usage is to call it between the 
framework-to-relay parser and relay.build module call.
 
+In order to specify the layouts to convert to, we create a mapping of 
heavily-layout sensitive operators to a list of the desired layouts for that 
operator.
+
 .. code-block:: python
 
 # TFlite framework to Relay parser - Default layout is NHWC
 mod, params = relay.frontend.from_tflite(tflite_model,
  shape_dict=shape_dict,
  dtype_dict=dtype_dict)
 
+# We assume our model's heavily-layout sensitive operators only consist of 
nn.conv2d
+desired_layouts = {'nn.conv2d': ['NCHW']}

Review comment:
   I'm happy with either, although Tuple might make slightly more sense. I 
think it might be easier to do this as a separate PR though.
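The mapping under discussion can be sketched standalone. This follows the convention quoted from the docs (the first entry in a desired-layouts list is the data layout); the helper below is illustrative only, not TVM's API:

```python
# Map heavily-layout-sensitive operators to their desired layouts.
# By convention here, the first entry is the data layout.
desired_layouts = {'nn.conv2d': ['NCHW', 'OIHW']}

def convert_conv2d_attrs(attrs, desired):
    """Return new conv2d attrs for the desired layout list (illustrative)."""
    desired_data_layout = str(desired[0])
    if desired_data_layout == 'NCHW':
        new_attrs = dict(attrs)
        new_attrs['data_layout'] = desired_data_layout
        # Take the kernel layout from the list when provided.
        new_attrs['kernel_layout'] = desired[1] if len(desired) > 1 else 'OIHW'
        return new_attrs
    raise ValueError("Layout %s is not yet supported" % desired_data_layout)

attrs = {'data_layout': 'NHWC', 'kernel_layout': 'HWIO'}
new_attrs = convert_conv2d_attrs(attrs, desired_layouts['nn.conv2d'])
print(new_attrs['data_layout'])  # NCHW
```

Whether the per-operator value ends up a list or a tuple, the lookup pattern stays the same.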









[incubator-tvm] branch rust-tvm-sys updated (79131d5 -> 5662f8e)

2020-05-06 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch rust-tvm-sys
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 discard 79131d5  Add the tvm-sys crate as the lowest level bindings.
 add 5662f8e  Add tvm-sys

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (79131d5)
\
 N -- N -- N   refs/heads/rust-tvm-sys (5662f8e)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 include/tvm/ir/expr.h  |   5 +-
 include/tvm/node/container.h   |   2 +-
 include/tvm/node/reflection.h  |   2 +-
 include/tvm/relay/base.h   |   2 +-
 include/tvm/relay/expr.h   |   4 +-
 include/tvm/runtime/c_runtime_api.h|  11 -
 include/tvm/runtime/container.h|  16 -
 include/tvm/runtime/object.h   |   2 +-
 python/tvm/runtime/object_generic.py   |   2 +-
 rust/Cargo.toml|  19 +-
 rust/{tvm-sys => common}/Cargo.toml|   5 +-
 rust/common/build.rs   |  58 +++
 rust/{tvm-sys => common}/src/array.rs  |  85 
 rust/{tvm-sys => common}/src/errors.rs |  13 +-
 rust/{tvm-sys => common}/src/lib.rs|  26 +-
 rust/{tvm-sys => common}/src/packed_func.rs| 111 +++--
 rust/common/src/value.rs   | 231 +++
 rust/{tvm => frontend}/.gitignore  |   0
 rust/{tvm => frontend}/.travis.yml |   0
 rust/{tvm => frontend}/Cargo.toml  |  12 +-
 rust/{tvm => frontend}/README.md   |   0
 .../examples/resnet/Cargo.toml |   0
 rust/{tvm => frontend}/examples/resnet/README.md   |   0
 rust/{tvm => frontend}/examples/resnet/build.rs|   0
 .../examples/resnet/src/build_resnet.py|   0
 rust/{tvm => frontend}/examples/resnet/src/main.rs |   0
 rust/frontend/src/context.rs   | 330 +++
 rust/{tvm-rt => frontend}/src/errors.rs|  22 +-
 rust/frontend/src/function.rs  | 462 +
 rust/{tvm-rt => frontend}/src/lib.rs   |  32 +-
 rust/{tvm-rt => frontend}/src/module.rs|  45 +-
 rust/{tvm-rt => frontend}/src/ndarray.rs   |  90 ++--
 rust/frontend/src/value.rs | 166 
 rust/{tvm => frontend}/tests/basics/.gitignore |   0
 rust/{tvm => frontend}/tests/basics/Cargo.toml |   2 +-
 rust/{tvm => frontend}/tests/basics/build.rs   |   0
 rust/{tvm => frontend}/tests/basics/src/main.rs|   0
 rust/{tvm => frontend}/tests/basics/src/tvm_add.py |   0
 rust/{tvm => frontend}/tests/callback/Cargo.toml   |   2 +-
 .../tests/callback/src/bin/array.rs|   0
 .../tests/callback/src/bin/error.rs|   0
 .../tests/callback/src/bin/float.rs|   0
 .../tests/callback/src/bin/int.rs  |   0
 .../tests/callback/src/bin/string.rs   |   0
 rust/graph-runtime/.travis.yml |  22 -
 rust/macros/Cargo.toml |   4 +-
 rust/macros/src/import_module.rs   | 133 --
 rust/macros/src/lib.rs | 124 +-
 rust/macros/src/object.rs  | 171 
 rust/out-of-tree/Cargo.toml|  16 -
 rust/out-of-tree/import_pass.py|  41 --
 rust/out-of-tree/src/lib.rs|  60 ---
 rust/{tvm-rt => runtime}/.travis.yml   |   0
 rust/{graph-runtime => runtime}/Cargo.toml |   8 +-
 rust/{graph-runtime => runtime}/src/allocator.rs   |   0
 rust/{graph-runtime => runtime}/src/array.rs   |   4 +-
 rust/{graph-runtime => runtime}/src/errors.rs  |   0
 rust/{graph-runtime => runtime}/src/graph.rs   |   6 +-
 rust/{graph-runtime => runtime}/src/lib.rs |   5 +-
 rust/{graph-runtime => runtime}/src/module/dso.rs  |   4 +-
 rust/{graph-runtime => runtime}/src/module/mod.rs  |   4 +-
 .../src/module/syslib.rs   |   2 +-
 rust/{graph-runtime => runtime}/src/threading.rs   |   2 +-
 rust/{graph-runtime => runtime}/src/workspace.rs   |   2 +-
 rust/{graph-runtime => runtime}/test


[GitHub] [incubator-tvm] jroesch commented on pull request #5524: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on pull request #5524:
URL: https://github.com/apache/incubator-tvm/pull/5524#issuecomment-624530409


   @binarybana @robo-corg @mwillsey @vegaluisjose @MarisaKirisame @mbrookhart 
@tqchen 







[GitHub] [incubator-tvm] jroesch commented on pull request #5328: [Rust][Runtime] Add basic object system support.

2020-05-06 Thread GitBox


jroesch commented on pull request #5328:
URL: https://github.com/apache/incubator-tvm/pull/5328#issuecomment-624530829


   Closing in favor of finer-grained PRs.







[GitHub] [incubator-tvm] jroesch commented on pull request #5523: [Refactor][std::string --> String] IRModule is updated with String

2020-05-06 Thread GitBox


jroesch commented on pull request #5523:
URL: https://github.com/apache/incubator-tvm/pull/5523#issuecomment-624531258


   Looks good to me, assuming all tests pass. I could use this in my next set 
of PRs.







[GitHub] [incubator-tvm] giuseros commented on issue #5514: RPC error for large arrays

2020-05-06 Thread GitBox


giuseros commented on issue #5514:
URL: https://github.com/apache/incubator-tvm/issues/5514#issuecomment-624534491


   Hi @tqchen,
   Thanks for the prompt fix! It is now working fine (it was also nice to dig
around a bit in the RPC part of the codebase).
   
   I will close the issue now.








[incubator-tvm] branch rust-tvm-sys updated (5662f8e -> dd2bcd0)

2020-05-06 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch rust-tvm-sys
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 discard 5662f8e  Add tvm-sys
 add 12e737f  Make "none" DataType explicit (#5491)
 add 3f33b25  [Hexagon] Change "scalar" and "stack" in IDL from "inrout" to 
"in" (#5487)
 add 967d731  [MXNET]broadcast and logical op support (#5461)
 add 9c1e74c  [REFACTOR][BOYC] Non recursive partitioning (#5493)
 add 360027d  Link necessary libraries when building runtime for Android 
(#5496)
 add 8599f7c  [TFLite] Model importer to be compatible with tflite 2.1.0 
(#5497)
 add c7a16d8  [Rust] Fixes for wasm32 target (#5489)
 add 6347406  [uTVM] Reset target and wait for runtime initialization on 
connect. (#5499)
 add 0abf581  bump tophub rocm version (#5504)
 add 6bbab4c  Fix Canonical Simplifier (#5505)
 add 7e88030  [RUST][RUNTIME] Fix workspace (#5503)
 add 95e06b3  [REFACTOR][RPC][PROCOTOL-CHANGE] Modularize the RPC infra 
(#5484)
 add 70a5902  [RPC] Call sync in remote cpu to gpu copies (#5512)
 add 32a094c  [QNN] Support CallNode inputs in qnn.concatenate (#5360)
 add 4c9724d  [RPC][BUGFIX][BACKPORT-0.6] Fix bug in rpc ring buffer shrink 
(#5516)
 add 7cbc0ca  [PATCH] [ring_buffer.h] Improve commentary for RingBuffer 
(#5518)
 add 16cb571  [TFLITE]Nit: Function names made consitent (#5515)
 add 7eb2451  fix prelu importer and add tests: (#5521)
 add dd2bcd0  Add tvm-sys

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (5662f8e)
\
 N -- N -- N   refs/heads/rust-tvm-sys (dd2bcd0)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .gitignore |3 +-
 3rdparty/dmlc-core |2 +-
 CMakeLists.txt |   26 +-
 apps/cpp_rpc/rpc_server.cc |   20 +-
 apps/cpp_rpc/rpc_tracker_client.h  |2 +-
 cmake/modules/Hexagon.cmake|6 +-
 golang/sample/gen_mobilenet_lib.py |8 +-
 include/tvm/runtime/c_runtime_api.h|   48 +
 include/tvm/runtime/data_type.h|   20 +-
 include/tvm/runtime/device_api.h   |   10 +-
 .../java/org/apache/tvm/contrib/GraphRuntime.java  |   53 +-
 .../src/main/java/org/apache/tvm/rpc/Client.java   |2 +-
 .../java/org/apache/tvm/rpc/NativeServerLoop.java  |2 +-
 .../main/java/org/apache/tvm/rpc/RPCSession.java   |4 +-
 python/tvm/_ffi/_ctypes/packed_func.py |8 +-
 python/tvm/_ffi/_cython/packed_func.pxi|8 +-
 python/tvm/_ffi/base.py|1 -
 python/tvm/autotvm/tophub.py   |2 +-
 python/tvm/contrib/cc.py   |   10 +-
 python/tvm/contrib/graph_runtime.py|9 +-
 python/tvm/error.py|5 +
 python/tvm/relay/frontend/mxnet.py |   37 +-
 python/tvm/relay/frontend/onnx.py  |7 +-
 python/tvm/relay/frontend/tflite.py|   38 +-
 python/tvm/relay/qnn/op/qnn.py |   13 +-
 python/tvm/rpc/__init__.py |4 +-
 python/tvm/{ir => rpc}/_ffi_api.py |4 +-
 python/tvm/rpc/base.py |7 -
 python/tvm/rpc/client.py   |  108 +-
 python/tvm/rpc/minrpc.py   |   86 ++
 python/tvm/rpc/proxy.py|3 +-
 python/tvm/rpc/server.py   |7 +-
 python/tvm/runtime/module.py   |6 +-
 rust/Cargo.toml|1 +
 rust/common/build.rs   |1 +
 rust/common/src/array.rs   |1 +
 rust/common/src/lib.rs |9 +-
 rust/runtime/src/array.rs  |1 +
 rust/runtime/src/graph.rs  |   13 +-
 rust/runtime/src/module/mod.rs |   12 +-
 rust/runtime/src/threading.rs  |9 +-
 rust/runtime/src/workspace.rs  |   10 +-
 rust/runtime/tests/test_wasm32/.cargo/config   |2 +
 .../tests/{test_tvm_dso => t

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5523: [Refactor][std::string --> String] IRModule is updated with String

2020-05-06 Thread GitBox


ANSHUMAN87 commented on pull request #5523:
URL: https://github.com/apache/incubator-tvm/pull/5523#issuecomment-624539441


   @jroesch: Thanks a lot for your feedback! This change breaks the Python
bindings; I am working on it currently and will resolve it ASAP. Thanks!







[incubator-tvm] 01/01: Add tvm-rt

2020-05-06 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch rust-tvm-rt
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

commit d8d3147e304bdf5336acf22fb7616a9e59c2ad71
Author: Jared Roesch 
AuthorDate: Wed May 6 02:22:08 2020 -0700

Add tvm-rt
---
 include/tvm/ir/expr.h  |   5 +-
 python/tvm/runtime/object_generic.py   |   2 +-
 rust/macros/Cargo.toml |   4 +-
 rust/macros/src/{lib.rs => import_module.rs}   |  12 +-
 rust/macros/src/lib.rs | 124 +-
 rust/macros/src/object.rs  | 171 
 rust/tvm-rt/.gitignore |   7 +
 rust/{macros/Cargo.toml => tvm-rt/.travis.yml} |  24 +-
 rust/{macros => tvm-rt}/Cargo.toml |  30 +-
 rust/tvm-rt/README.md  | 235 +++
 rust/{macros => tvm-rt/examples/resnet}/Cargo.toml |  23 +-
 rust/tvm-rt/examples/resnet/README.md  |  45 +++
 rust/tvm-rt/examples/resnet/build.rs   |  42 ++
 rust/tvm-rt/examples/resnet/src/build_resnet.py| 134 +++
 rust/tvm-rt/examples/resnet/src/main.rs| 160 
 rust/tvm-rt/src/context.rs |  76 
 rust/tvm-rt/src/errors.rs  |  45 +++
 rust/tvm-rt/src/function.rs| 340 
 rust/tvm-rt/src/lib.rs | 124 ++
 rust/tvm-rt/src/module.rs  | 130 +++
 rust/tvm-rt/src/ndarray.rs | 431 +
 rust/tvm-rt/src/object/mod.rs  |  99 +
 rust/tvm-rt/src/object/object_ptr.rs   | 283 ++
 rust/tvm-rt/src/string.rs  |  72 
 rust/tvm-rt/src/to_boxed_fn.rs | 222 +++
 rust/tvm-rt/src/to_function.rs | 377 ++
 rust/tvm-rt/src/value.rs   | 166 
 rust/tvm-rt/tests/test_ir.rs   |  36 ++
 src/ir/expr.cc |  11 +-
 src/printer/relay_text_printer.cc  |  15 +-
 src/relay/transforms/to_cps.cc |   2 +-
 src/runtime/object.cc  |  14 +
 src/runtime/object_internal.h  |   9 +
 33 files changed, 3294 insertions(+), 176 deletions(-)

diff --git a/include/tvm/ir/expr.h b/include/tvm/ir/expr.h
index fba35a9..82689bd 100644
--- a/include/tvm/ir/expr.h
+++ b/include/tvm/ir/expr.h
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -36,6 +37,8 @@
 
 namespace tvm {
 
+using tvm::runtime::String;
+
 /*!
  * \brief Base type of all the expressions.
  * \sa Expr
@@ -189,7 +192,7 @@ class GlobalVar;
 class GlobalVarNode : public RelayExprNode {
  public:
   /*! \brief The name of the variable, this only acts as a hint. */
-  std::string name_hint;
+  String name_hint;
 
   void VisitAttrs(AttrVisitor* v) {
 v->Visit("name_hint", &name_hint);
diff --git a/python/tvm/runtime/object_generic.py 
b/python/tvm/runtime/object_generic.py
index cc21450..8f559ae 100644
--- a/python/tvm/runtime/object_generic.py
+++ b/python/tvm/runtime/object_generic.py
@@ -38,7 +38,7 @@ ObjectTypes = (ObjectBase, NDArrayBase, Module, 
ObjectRValueRef, PyNativeObject)
 
 
 def convert_to_object(value):
-"""Convert a python value to corresponding object type.
+"""Convert a Python value to corresponding object type.
 
 Parameters
 --
diff --git a/rust/macros/Cargo.toml b/rust/macros/Cargo.toml
index 784b35e..7abc9ae 100644
--- a/rust/macros/Cargo.toml
+++ b/rust/macros/Cargo.toml
@@ -32,5 +32,5 @@ proc-macro = true
 [dependencies]
 goblin = "0.0.24"
 proc-macro2 = "^1.0"
-quote = "1.0"
-syn = "1.0"
+quote = "^1.0"
+syn = { version = "1.0.17", features = ["full", "extra-traits"] }
diff --git a/rust/macros/src/lib.rs b/rust/macros/src/import_module.rs
similarity index 92%
copy from rust/macros/src/lib.rs
copy to rust/macros/src/import_module.rs
index 9f28c74..6b059ae 100644
--- a/rust/macros/src/lib.rs
+++ b/rust/macros/src/import_module.rs
@@ -16,9 +16,6 @@
  * specific language governing permissions and limitations
  * under the License.
  */
-
-extern crate proc_macro;
-
 use quote::quote;
 use std::{fs::File, io::Read};
 use syn::parse::{Parse, ParseStream, Result};
@@ -37,8 +34,7 @@ impl Parse for ImportModule {
 }
 }
 
-#[proc_macro]
-pub fn import_module(input: proc_macro::TokenStream) -> 
proc_macro::TokenStream {
+pub fn macro_impl(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
 let import_module_args = syn::parse_macro_input!(input as ImportModule);
 
 let manifest =
@@ -109,11 +105,11 @@ pub fn import_module(input: proc_macro::TokenStream) -> 
proc_macro::TokenStream
 };
 
 let fns = quote! {
-  

[incubator-tvm] branch rust-tvm-rt created (now d8d3147)

2020-05-06 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch rust-tvm-rt
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at d8d3147  Add tvm-rt

This branch includes the following new commits:

 new d8d3147  Add tvm-rt

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[GitHub] [incubator-tvm] jroesch opened a new pull request #5525: [Rust] Second stage of Rust Refactor

2020-05-06 Thread GitBox


jroesch opened a new pull request #5525:
URL: https://github.com/apache/incubator-tvm/pull/5525


   See #5524; this PR implements the changes to TVM required for the `tvm-rt`
crate to expose the object system for consumption in the high-level crate.







[GitHub] [incubator-tvm] jroesch commented on pull request #5524: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on pull request #5524:
URL: https://github.com/apache/incubator-tvm/pull/5524#issuecomment-624541301


   I pushed to the wrong remote; closing in favor of a fresh PR.







[GitHub] [incubator-tvm] jroesch opened a new pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch opened a new pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526


   I plan on sending an RFC tomorrow on the greater refactor, but my goal is to
restructure the current Rust code into 4 crates:
   - `tvm-sys`, containing the lowest-level bindings to `libruntimetvm.dylib`.
   - `tvm-rt`, a layer which provides an ergonomic Rust layer over the primitive
concepts exposed by `tvm-sys`.
   - `tvm`, a high-level crate for using the full TVM library, including the
compiler components.
   - `tvm-graph-rt`, previously the `runtime` crate, an ABI-compatible Rust
implementation of the graph runtime.
   
   I was attempting to refactor this in one go, but found that my current
branch is rapidly approaching a few thousand lines.
   
   I will first send it in pieces to be reviewed and merged, and then follow up
with connecting PRs to put the pieces together, turn on CI, and replace the
existing code.
   @binarybana @robo-corg @mwillsey @vegaluisjose @MarisaKirisame @mbrookhart
@tqchen







[GitHub] [incubator-tvm] jroesch commented on pull request #5525: [Rust] Second stage of Rust Refactor

2020-05-06 Thread GitBox


jroesch commented on pull request #5525:
URL: https://github.com/apache/incubator-tvm/pull/5525#issuecomment-624542676


   Confused my remotes with both branches; closing to point to ones on my fork.







[GitHub] [incubator-tvm] jroesch opened a new pull request #5527: [Rust] Second stage of Rust Refactor

2020-05-06 Thread GitBox


jroesch opened a new pull request #5527:
URL: https://github.com/apache/incubator-tvm/pull/5527


   See #5526, this PR implements the object system and basic types. 







[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

2020-05-06 Thread GitBox


lhutton1 commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420688732



##
File path: python/tvm/relay/op/nn/_nn.py
##
@@ -141,20 +142,29 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
 from tvm import relay
 data, weight = inputs
 new_attrs = dict(attrs)
-new_attrs['data_layout'] = desired_layout
-if desired_layout == 'NCHW':
+desired_data_layout = str(desired_layouts[0])
+assert desired_data_layout != "default", "Data layout cannot be default"
+new_attrs['data_layout'] = desired_data_layout
+
+if len(desired_layouts) > 1:
+desired_kernel_layout = str(desired_layouts[1])
+if desired_kernel_layout != "default":
+new_attrs['kernel_layout'] = desired_kernel_layout
+return relay.nn.conv2d(data, weight, **new_attrs)
+
+# Handle default kernel layouts
+if desired_data_layout == 'NCHW':
 new_attrs['kernel_layout'] = 'OIHW'
 return relay.nn.conv2d(data, weight, **new_attrs)
-elif desired_layout == 'NHWC':
+elif desired_data_layout == 'NHWC':
 # Check for depthwise convolution.
 if is_depthwise_conv2d(data.shape, attrs['data_layout'], weight.shape,
attrs['kernel_layout'], attrs['groups']):
 new_attrs['kernel_layout'] = 'HWOI'
 else:
 new_attrs['kernel_layout'] = 'HWIO'
 return relay.nn.conv2d(data, weight, **new_attrs)
-else:
-assert "Layout %s is not yet supported." % (desired_layout)
+assert "Layout %s is not yet supported." % (desired_data_layout)
 return None

Review comment:
   Apologies, I forgot to mention: although the return is unreachable, I
added it to silence the linter. In this case I think it's needed, or should we
disable the check for this function?
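For context, a minimal standalone sketch of the two-element layout convention this diff introduces. The table and function names below are illustrative only (not TVM's API), and the depthwise special case from the real code is omitted: `desired_layouts` is `[data_layout, kernel_layout]`, where `"default"` means the op falls back to its usual kernel layout.

```python
# Illustrative sketch of the [data_layout, kernel_layout] convention;
# names here are hypothetical, not TVM's actual API.
DEFAULT_KERNEL = {'NCHW': 'OIHW', 'NHWC': 'HWIO'}

def resolve_layouts(desired_layouts):
    data_layout = desired_layouts[0]
    assert data_layout != 'default', "Data layout cannot be default"
    kernel_layout = desired_layouts[1] if len(desired_layouts) > 1 else 'default'
    if kernel_layout == 'default':
        # Fall back to the op's usual kernel layout for this data layout.
        kernel_layout = DEFAULT_KERNEL[data_layout]
    return data_layout, kernel_layout

print(resolve_layouts(['NCHW', 'default']))  # -> ('NCHW', 'OIHW')
print(resolve_layouts(['NHWC', 'HWIO']))     # -> ('NHWC', 'HWIO')
```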









[GitHub] [incubator-tvm] roastduck commented on pull request #3865: Add lift_if_then_else pass

2020-05-06 Thread GitBox


roastduck commented on pull request #3865:
URL: https://github.com/apache/incubator-tvm/pull/3865#issuecomment-624575446


   Hi everyone, I'm wondering if we can merge this pass into the default 
lowering procedure? Hoisting `if` statements can be very helpful for sparse 
applications, since `LoopPartition` cannot eliminate their `if` statements with 
dynamic (unknown at compile time) conditions. If there is no problem, I can 
make a PR.
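   A toy sketch of the transformation being discussed, on a made-up dict-based AST (not TVM's IR or the actual pass implementation): a branch whose condition does not mention the loop variable is hoisted outside the loop, so it is evaluated once instead of on every iteration.

```python
# Toy illustration of if-hoisting; the dict "AST" is invented for this
# example and is not TVM's IR.
def hoist_invariant_if(loop):
    """loop = {'kind': 'for', 'var': name, 'body': stmt}"""
    body = loop['body']
    if body['kind'] == 'if' and loop['var'] not in body['cond_vars']:
        # Condition is loop-invariant: swap the branch and the loop.
        return {
            'kind': 'if',
            'cond_vars': body['cond_vars'],
            'then': {'kind': 'for', 'var': loop['var'], 'body': body['then']},
        }
    return loop  # condition depends on the loop var: leave it in place

# for i: if (n > 0): A[i]  becomes  if (n > 0): for i: A[i]
loop = {'kind': 'for', 'var': 'i',
        'body': {'kind': 'if', 'cond_vars': ['n'],
                 'then': {'kind': 'eval', 'expr': 'A[i]'}}}
hoisted = hoist_invariant_if(loop)
assert hoisted['kind'] == 'if' and hoisted['then']['kind'] == 'for'
```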







[GitHub] [incubator-tvm] Robeast commented on issue #5060: [uTVM][Runtime] Deprecate uTVM Standalone Runtime

2020-05-06 Thread GitBox


Robeast commented on issue #5060:
URL: https://github.com/apache/incubator-tvm/issues/5060#issuecomment-624592461


   Hi @liangfu, is there any update on your current implementation efforts? We
are really looking forward to it!







[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5517: [RUNTIME] Improve PackedFunc robustness

2020-05-06 Thread GitBox


siju-samuel commented on a change in pull request #5517:
URL: https://github.com/apache/incubator-tvm/pull/5517#discussion_r420731146



##
File path: include/tvm/runtime/packed_func.h
##
@@ -1280,30 +1303,30 @@ struct unpack_call_by_signature {
 template<typename R, typename ...Args>
 struct unpack_call_by_signature<R(Args...)> {
   template<typename F>
-  static void run(const F& f,
+  TVM_ALWAYS_INLINE static void run(const F& f,
   const TVMArgs& args,
   TVMRetValue* rv) {

Review comment:
   Align 









[GitHub] [incubator-tvm] maheshambule opened a new pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


maheshambule opened a new pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528


   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [incubator-tvm] maheshambule commented on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


maheshambule commented on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624659434


   @FrozenGene, @anijain2305, @siju-samuel, @u99127, @mbaret Could you please 
review and let me know your thoughts?







[GitHub] [incubator-tvm] u99127 commented on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


u99127 commented on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624665048


   Have you seen #5519 ? 
   
   Ramana
   







[GitHub] [incubator-tvm] maheshambule commented on pull request #5519: [RFC] Make tflite frontend more data driven / improve errors.

2020-05-06 Thread GitBox


maheshambule commented on pull request #5519:
URL: https://github.com/apache/incubator-tvm/pull/5519#issuecomment-624668257


   @u99127, I am also working on refactoring the TFLite frontend. This is a
sample implementation: https://github.com/apache/incubator-tvm/pull/5528. I
would love to hear your opinion on it.
   
   I am using a decorator to do all the checks, the common stuff, and dynamic
imports of option APIs.
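   A hypothetical sketch of the decorator idea described above; `tflite_op`, `convert_add`, and the op dict layout are invented for illustration and are not the PR's actual code. The decorator carries per-op metadata and runs the shared checks before dispatching to the op-specific convert function.

```python
import functools

def tflite_op(num_inputs=None):
    """Declare per-op metadata and run shared checks before converting."""
    def wrap(convert_fn):
        @functools.wraps(convert_fn)
        def checked(op):
            inputs = op['inputs']
            # Common check: pass num_inputs=None to skip it and let the
            # specific convert function assert whatever it needs.
            if num_inputs is not None and len(inputs) != num_inputs:
                raise ValueError('%s expects %d inputs, got %d'
                                 % (convert_fn.__name__, num_inputs,
                                    len(inputs)))
            return convert_fn(op, inputs)
        return checked
    return wrap

@tflite_op(num_inputs=2)
def convert_add(op, inputs):
    return ('add', inputs)

print(convert_add({'inputs': ['a', 'b']}))  # -> ('add', ['a', 'b'])
```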







[GitHub] [incubator-tvm] maheshambule commented on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


maheshambule commented on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624668670


   > Have you seen #5519 ?
   > 
   > Ramana
   
   Just now. I replied there.







[GitHub] [incubator-tvm] u99127 commented on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


u99127 commented on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624679105


   > 
   > 
   > > Have you seen #5519 ?
   > > Ramana
   > 
   > Just now. I replied there.
   
   Yours goes further than #5519 and I'll think through it a bit more. My
initial reaction is that this looks ok, but there is one aspect that we could
pull in here from #5519:

   The appeal of keeping the number of inputs and outputs in the table and
passing the input and output tensors to the helper functions seems high. It
also seemed to me that the equality check could be done at the top level, and
anywhere we had a range check we punted to the helper function, as I had
commented. I'll try and push out the approach for something like conv2d.
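   A hedged sketch of the table-driven variant being suggested (the names are hypothetical, drawn from neither PR): the expected input count lives in the table, the equality check runs once at the top level, and range checks or other op-specific logic stay in the per-op helper.

```python
# Hypothetical table-driven dispatch; not code from #5519 or #5528.
OP_TABLE = {
    'ADD': {'num_inputs': 2, 'handler': lambda inputs: ('add', inputs)},
    # None means "variable arity": the helper validates the inputs itself.
    'CONCATENATION': {'num_inputs': None,
                      'handler': lambda inputs: ('concatenate', inputs)},
}

def convert_op(op_name, inputs):
    entry = OP_TABLE[op_name]
    expected = entry['num_inputs']
    # The equality check is done once here, at the top level.
    if expected is not None and len(inputs) != expected:
        raise ValueError('%s expects %d inputs, got %d'
                         % (op_name, expected, len(inputs)))
    return entry['handler'](inputs)

print(convert_op('ADD', ['x', 'y']))  # -> ('add', ['x', 'y'])
```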
   







[GitHub] [incubator-tvm] tqchen edited a comment on issue #5490: [REFACTOR] std::string -> String Migration in IR nodes

2020-05-06 Thread GitBox


tqchen edited a comment on issue #5490:
URL: https://github.com/apache/incubator-tvm/issues/5490#issuecomment-624408382


   ## IMPORTANT NOTE
   
   As shown in the initial example, we will need to update the converter (via
`_update_from_std_str`) to keep backward compatibility of the nodes.
   
   To test the correctness, use `tvm.ir.save_json` to save a node (that is
involved in the change) before the migration, and then use `tvm.ir.load_json`
to load it back after the change.
   







[GitHub] [incubator-tvm] tqchen commented on pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


tqchen commented on pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#issuecomment-624695329


   cc @nhynes 







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5517: [RUNTIME] Improve PackedFunc robustness

2020-05-06 Thread GitBox


tqchen commented on a change in pull request #5517:
URL: https://github.com/apache/incubator-tvm/pull/5517#discussion_r420852646



##
File path: include/tvm/runtime/packed_func.h
##
@@ -45,6 +45,15 @@
 #define TVM_RUNTIME_HEADER_ONLY 0
 #endif
 
+// Always inline macro only use in template
+// expansion cases where we know inline is important.
+#ifdef _MSC_VER
+#define TVM_ALWAYS_INLINE __forceinline inline
+#pragma warning(disable : 4068)

Review comment:
   It seems to be irrelevant here, i just removed it









[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


tqchen edited a comment on pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#issuecomment-624695329


   cc @nhynes @kazum 







[GitHub] [incubator-tvm] tqchen commented on pull request #5485: [TOPI][Winograd] Optimization of Conv2d Winograd algorithm on Tensor …

2020-05-06 Thread GitBox


tqchen commented on pull request #5485:
URL: https://github.com/apache/incubator-tvm/pull/5485#issuecomment-624695979


   cc @FrozenGene 







[incubator-tvm] branch master updated (7eb2451 -> e2bd43b)

2020-05-06 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 7eb2451  fix prelu importer and add tests: (#5521)
 add e2bd43b  [RPC] Fix the multihop cpu case (#5522)

No new revisions were added by this update.

Summary of changes:
 src/runtime/rpc/rpc_endpoint.cc   | 27 ---
 src/runtime/rpc/rpc_local_session.h   |  4 
 src/runtime/rpc/rpc_session.h | 12 
 tests/python/unittest/test_runtime_rpc.py |  9 +++--
 4 files changed, 39 insertions(+), 13 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-06 Thread GitBox


tqchen commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-624696569


   Going to merge it in two days







[GitHub] [incubator-tvm] zhiics commented on issue #5490: [REFACTOR] std::string -> String Migration in IR nodes

2020-05-06 Thread GitBox


zhiics commented on issue #5490:
URL: https://github.com/apache/incubator-tvm/issues/5490#issuecomment-624723623


   @ANSHUMAN87 Welcome to contribute. I've been busy with other things recently.
Thanks.







[GitHub] [incubator-tvm] maheshambule commented on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


maheshambule commented on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624726976


   I thought of adding all the op attributes in a table/map, but a decorator
seemed more pythonic.
   
   A few more points to consider:
   1. Sometimes equality checks cannot be straightforward, like
num_inputs==input_tensors. In that case we can always set the num_input check
to None and handle the assertion in the specific convert function.
   2. Need to check whether, instead of passing input tensors, we can pass
Relay expressions. If input tensors are needed, they can always be accessed
using the 'op' variable.
   3. Need to check if there is scope to add a helper function for the
quantized params calculation.
   







[GitHub] [incubator-tvm] cchung100m commented on pull request #5501: [TIR][REFACTOR] std::string -> String Migration in TIR nodes

2020-05-06 Thread GitBox


cchung100m commented on pull request #5501:
URL: https://github.com/apache/incubator-tvm/pull/5501#issuecomment-624733116


   Hi @tqchen @zhiics 
   
   Thanks for your kind feedback and I will keep working on it.







[GitHub] [incubator-tvm] mikeseven opened a new issue #5529: [BUG] ConvertLayout pass doesn't handle ops attributes

2020-05-06 Thread GitBox


mikeseven opened a new issue #5529:
URL: https://github.com/apache/incubator-tvm/issues/5529


   While converting layout from TensorFlow, from NHWC to NCHW, I noticed the
LRN axis was not converted from axis=3 to axis=1.
   
   Looking at the code, I don't see where the ConvertLayout pass would handle
attributes of operators. So I wonder whether this bug on LRN may extend to
other nodes as well.
   







[GitHub] [incubator-tvm] mbrookhart opened a new pull request #5530: Fix an issue with ONNX Upsample

2020-05-06 Thread GitBox


mbrookhart opened a new pull request #5530:
URL: https://github.com/apache/incubator-tvm/pull/5530


   Update one test to hit the broken use case.
   
   @masahi another small onnx bug, if you don't mind. Thank you!







[GitHub] [incubator-tvm] mikeseven opened a new issue #5531: [BUG] topi/python/x86/conv2d_avx_* is incomplete, won't run

2020-05-06 Thread GitBox


mikeseven opened a new issue #5531:
URL: https://github.com/apache/incubator-tvm/issues/5531


   In conv2d_avx_1x1.py and conv2d_avx_common.py, some methods are missing the
declaration of oc_bn. When the code is called, Python crashes.
   
   I believe oc_bn should be at least 1, and that seems to work, but the author
should fix it to the proper value...







[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5520: LRN only supports 4D tensors, remove it from alter_op_layout

2020-05-06 Thread GitBox


tmoreau89 commented on pull request #5520:
URL: https://github.com/apache/incubator-tvm/pull/5520#issuecomment-624747974


   @merrymercy, if you can review this change by the end of the day, thanks!







[GitHub] [incubator-tvm] maheshambule commented on pull request #5474: [Frontend][TFLite] ADD_N operator

2020-05-06 Thread GitBox


maheshambule commented on pull request #5474:
URL: https://github.com/apache/incubator-tvm/pull/5474#issuecomment-624749216


   @tqchen Could you please help in merging this PR?







[GitHub] [incubator-tvm] nhynes commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


nhynes commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420911319



##
File path: rust/tvm-sys/src/array.rs
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::{
+mem,
+os::raw::{c_int, c_void},
+};
+
+use crate::ffi::{
+DLContext, DLDataType, DLDataTypeCode_kDLFloat, DLDataTypeCode_kDLInt, 
DLDataTypeCode_kDLUInt,
+DLDeviceType_kDLCPU, DLTensor,
+};
+
+/// `From` conversions to `DLTensor` for `ndarray::Array`.
+/// Takes a reference to the `ndarray` since `DLTensor` is not owned.
+macro_rules! impl_dltensor_from_ndarray {
+($type:ty, $typecode:expr) => {
+impl<'a, D: ndarray::Dimension> From<&'a mut ndarray::Array<$type, D>> 
for DLTensor {
+fn from(arr: &'a mut ndarray::Array<$type, D>) -> Self {
+DLTensor {
+data: arr.as_mut_ptr() as *mut c_void,
+ctx: DLContext {
+device_type: DLDeviceType_kDLCPU,
+device_id: 0,
+},
+ndim: arr.ndim() as c_int,
+dtype: DLDataType {
+code: $typecode as u8,
+bits: 8 * mem::size_of::<$type>() as u8,
+lanes: 1,
+},
+shape: arr.shape().as_ptr() as *const i64 as *mut i64,
+strides: arr.strides().as_ptr() as *const isize as *mut 
i64,

Review comment:
   looks like you can use 
[as_mut_ptr](https://docs.rs/ndarray/0.13.1/ndarray/struct.ArrayBase.html#method.as_mut_ptr)
 now

##
File path: rust/tvm-sys/src/context.rs
##
@@ -0,0 +1,293 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! Provides [`Context`] and related device queries.
+//!
+//! Create a new context for device type and device id.
+//!
+//! # Example
+//!
+//! ```
+//! # use tvm_sys::{TVMDeviceType, Context};
+//! let cpu = TVMDeviceType::from("cpu");
+//! let ctx = Context::new(cpu, 0);
+//! let cpu0 = Context::cpu(0);
+//! assert_eq!(ctx, cpu0);
+//! ```
+//!
+//! Or from a supported device name.
+//!
+//! ```
+//! use tvm_sys::Context;
+//! let cpu0 = Context::from("cpu");
+//! println!("{}", cpu0);
+//! ```
+
+use crate::ffi::{self, *};
+use crate::packed_func::{ArgValue, RetValue};
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+use thiserror::Error;
+
+use std::fmt::{self, Display, Formatter};
+
+use anyhow::Result;
+
+/// Device type can be from a supported device name. See the supported devices
+/// in [TVM](https://github.com/apache/incubator-tvm).
+///
+/// ## Example
+///
+/// ```
+/// use tvm_sys::TVMDeviceType;
+/// let cpu = TVMDeviceType::from("cpu");
+/// println!("device is: {}", cpu);
+///```
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
+pub struct TVMDeviceType(pub i64);

Review comment:
   I would probably opt for this not to expose the internal repr. Why not
make it a `repr(i64)` enum, actually? Then you can just `as` it around as
necessary. If you don't mind using rust nightly, the
[proper](https://docs.rs/proper/0.1.5/proper/) crate might be of use.

##
File path: rust/tvm-sys/src/byte_array.rs
##
@@ -0,0 +1,64 @@
+use std::os::raw::c_char;
+
+use crate::ffi::TVMByteArray;

Review comment:
   this is stylistic, but if you truly mean to never newtype this struct

[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5479: [Relay-TFLite] FP32 and Quantized Object Detection Model

2020-05-06 Thread GitBox


mbaret commented on a change in pull request #5479:
URL: https://github.com/apache/incubator-tvm/pull/5479#discussion_r420935075



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -320,6 +321,45 @@ def dequantize(self, expr, tensor):
  
input_zero_point=tensor.qnn_params['zero_point'])
 return dequantized
 
+
+def convert_qnn_fused_activation_function(self, expr, fused_activation_fn,
+  scale, zero_point, dtype):
+"""Convert TFLite fused activation function. The expr is an input 
quantized tensor with
+scale and zero point """

Review comment:
   It's more from a position of having the history clearly reflect when
features were added. Supporting fused qnn functions is needed for this model,
but it's also necessary for a wide range of other models, so I'd view this as a
feature not specific to object detection.









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420934977



##
File path: rust/tvm-sys/src/datatype.rs
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::any::TypeId;
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+
+use crate::packed_func::RetValue;
+
+use thiserror::Error;
+
+use crate::ffi::DLDataType;

Review comment:
   I will turn on the `rustfmt` flag for them. I'll do a cleanup pass; I have
been very quickly hacking and trying to cut the PR up before it's too large :).









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420935232



##
File path: rust/tvm-sys/src/datatype.rs
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::any::TypeId;
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+
+use crate::packed_func::RetValue;
+
+use thiserror::Error;
+
+use crate::ffi::DLDataType;
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub struct DataType {
+pub code: u8,
+pub bits: u8,
+pub lanes: u16,

Review comment:
   Good point, I will try and change this.

##
File path: rust/tvm-sys/src/datatype.rs
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::any::TypeId;
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+
+use crate::packed_func::RetValue;
+
+use thiserror::Error;
+
+use crate::ffi::DLDataType;
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub struct DataType {
+pub code: u8,
+pub bits: u8,
+pub lanes: u16,
+}
+
+impl DataType {
+pub fn new(code: u8, bits: u8, lanes: u16) -> DataType {
+DataType { code, bits, lanes }
+}
+
+/// Returns the number of bytes occupied by an element of this `DataType`.
+pub fn itemsize(&self) -> usize {
+(self.bits as usize * self.lanes as usize) >> 3
+}
+
+/// Returns whether this `DataType` represents primitive type `T`.
+    pub fn is_type<T: 'static>(&self) -> bool {
+        if self.lanes != 1 {
+            return false;
+        }
+        let typ = TypeId::of::<T>();
+        (typ == TypeId::of::<i32>() && self.code == 0 && self.bits == 32)
+            || (typ == TypeId::of::<i64>() && self.code == 0 && self.bits == 64)
+            || (typ == TypeId::of::<u32>() && self.code == 1 && self.bits == 32)
+            || (typ == TypeId::of::<u64>() && self.code == 1 && self.bits == 64)
+            || (typ == TypeId::of::<f32>() && self.code == 2 && self.bits == 32)
+            || (typ == TypeId::of::<f64>() && self.code == 2 && self.bits == 64)
+    }
+
+pub fn code(&self) -> usize {
+self.code as usize
+}
+
+pub fn bits(&self) -> usize {
+self.bits as usize
+}
+
+pub fn lanes(&self) -> usize {
+self.lanes as usize
+}
+}
+
+impl<'a> From<&'a DataType> for DLDataType {
+fn from(dtype: &'a DataType) -> Self {
+Self {
+code: dtype.code as u8,
+bits: dtype.bits as u8,
+lanes: dtype.lanes as u16,
+}
+}
+}
+
+impl From<DLDataType> for DataType {
+fn from(dtype: DLDataType) -> Self {
+Self {
+code: dtype.code,
+bits: dtype.bits,
+lanes: dtype.lanes,
+}
+}
+}
+
+#[derive(Debug, Error)]
+pub enum ParseTvmTypeError {
+#[error("invalid number: {0}")]
+InvalidNumber(std::num::ParseIntError),
+#[error("unknown type: {0}")]
+UnknownType(String),
+}
+
+/// Implements TVMType conversion from `&str` of general format `{dtype}{bits}x{lanes}`
+/// such as "int32", "float32" or with lane "float32x1".
+impl FromStr for DataType {
+    type Err = ParseTvmTypeError;
+    fn from_str(type_str: &str) -> Result<Self, Self::Err> {
+        if type_str == "bool" {
+            return Ok(DataType::new(1, 1, 1));
+        }
+
+        let mut type_lanes = type_str.split('x');
+        let typ = type_lanes.next().expect("Missing dtype");

Review comment:
   👍 
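
   As an aside for readers skimming the Rust hunk above: the `{dtype}{bits}x{lanes}` parsing it implements can be sketched in Python for clarity (a standalone re-implementation for illustration only, not TVM's actual API):

```python
TYPE_CODES = {'int': 0, 'uint': 1, 'float': 2, 'handle': 3}

def parse_dtype(type_str):
    """Parse e.g. 'float32x4' into (type_code, bits, lanes)."""
    if type_str == 'bool':
        return (1, 1, 1)
    head, _, lanes_str = type_str.partition('x')
    lanes = int(lanes_str) if lanes_str else 1
    # Split the alphabetic type name from the trailing bit width.
    idx = next((i for i, c in enumerate(head) if c.isdigit()), None)
    name, bits = (head, 32) if idx is None else (head[:idx], int(head[idx:]))
    if name not in TYPE_CODES:
        raise ValueError("unknown type: %s" % name)
    return (TYPE_CODES[name], bits, lanes)

print(parse_dtype('float32x4'))  # (2, 32, 4)
```

   Like the Rust version, a missing bit width defaults to 32 and a missing lane count defaults to 1.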





[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420935736



##
File path: rust/tvm-sys/src/datatype.rs
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::any::TypeId;
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+
+use crate::packed_func::RetValue;
+
+use thiserror::Error;
+
+use crate::ffi::DLDataType;
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub struct DataType {
+pub code: u8,
+pub bits: u8,
+pub lanes: u16,
+}
+
+impl DataType {
+pub fn new(code: u8, bits: u8, lanes: u16) -> DataType {
+DataType { code, bits, lanes }
+}
+
+/// Returns the number of bytes occupied by an element of this `DataType`.
+pub fn itemsize(&self) -> usize {
+(self.bits as usize * self.lanes as usize) >> 3
+}
+
+/// Returns whether this `DataType` represents primitive type `T`.
+    pub fn is_type<T: 'static>(&self) -> bool {
+        if self.lanes != 1 {
+            return false;
+        }
+        let typ = TypeId::of::<T>();
+        (typ == TypeId::of::<i32>() && self.code == 0 && self.bits == 32)
+            || (typ == TypeId::of::<i64>() && self.code == 0 && self.bits == 64)
+            || (typ == TypeId::of::<u32>() && self.code == 1 && self.bits == 32)
+            || (typ == TypeId::of::<u64>() && self.code == 1 && self.bits == 64)
+            || (typ == TypeId::of::<f32>() && self.code == 2 && self.bits == 32)
+            || (typ == TypeId::of::<f64>() && self.code == 2 && self.bits == 64)
+    }
+
+pub fn code(&self) -> usize {
+self.code as usize
+}
+
+pub fn bits(&self) -> usize {
+self.bits as usize
+}
+
+pub fn lanes(&self) -> usize {
+self.lanes as usize
+}
+}
+
+impl<'a> From<&'a DataType> for DLDataType {
+fn from(dtype: &'a DataType) -> Self {
+Self {
+code: dtype.code as u8,
+bits: dtype.bits as u8,
+lanes: dtype.lanes as u16,
+}
+}
+}
+
+impl From<DLDataType> for DataType {
+fn from(dtype: DLDataType) -> Self {
+Self {
+code: dtype.code,
+bits: dtype.bits,
+lanes: dtype.lanes,
+}
+}
+}
+
+#[derive(Debug, Error)]
+pub enum ParseTvmTypeError {
+#[error("invalid number: {0}")]
+InvalidNumber(std::num::ParseIntError),
+#[error("unknown type: {0}")]
+UnknownType(String),
+}
+
+/// Implements TVMType conversion from `&str` of general format `{dtype}{bits}x{lanes}`
+/// such as "int32", "float32" or with lane "float32x1".
+impl FromStr for DataType {
+    type Err = ParseTvmTypeError;
+    fn from_str(type_str: &str) -> Result<Self, Self::Err> {
+        if type_str == "bool" {
+            return Ok(DataType::new(1, 1, 1));
+        }
+
+        let mut type_lanes = type_str.split('x');
+        let typ = type_lanes.next().expect("Missing dtype");
+        let lanes = type_lanes
+            .next()
+            .map(|l| <u16>::from_str_radix(l, 10))
+            .unwrap_or(Ok(1))
+            .map_err(ParseTvmTypeError::InvalidNumber)?;
+        let (type_name, bits) = match typ.find(char::is_numeric) {
+            Some(idx) => {
+                let (name, bits_str) = typ.split_at(idx);
+                (
+                    name,
+                    u8::from_str_radix(bits_str, 10).map_err(ParseTvmTypeError::InvalidNumber)?,
+                )
+            }
+            None => (typ, 32),
+        };
+
+        let type_code = match type_name {
+            "int" => 0,
+            "uint" => 1,
+            "float" => 2,
+            "handle" => 3,
+            _ => return Err(ParseTvmTypeError::UnknownType(type_name.to_string())),
+        };
+
+        Ok(DataType::new(type_code, bits, lanes))
+    }
+}
+
+impl std::fmt::Display for DataType {
+fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
+if self.bits == 1 && self.lanes == 1 {
+return write!(f, "bool");
+}
+let mut type_str = match self.code {
+0 => "int",

Review comment:
   Sounds good. 






[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5422: [RELAY][Convert Layout] Specify additional layouts in convert layout pass

2020-05-06 Thread GitBox


comaniac commented on a change in pull request #5422:
URL: https://github.com/apache/incubator-tvm/pull/5422#discussion_r420935901



##
File path: python/tvm/relay/op/nn/_nn.py
##
@@ -141,20 +142,29 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
 from tvm import relay
 data, weight = inputs
 new_attrs = dict(attrs)
-new_attrs['data_layout'] = desired_layout
-if desired_layout == 'NCHW':
+desired_data_layout = str(desired_layouts[0])
+assert desired_data_layout != "default", "Data layout cannot be default"
+new_attrs['data_layout'] = desired_data_layout
+
+if len(desired_layouts) > 1:
+desired_kernel_layout = str(desired_layouts[1])
+if desired_kernel_layout != "default":
+new_attrs['kernel_layout'] = desired_kernel_layout
+return relay.nn.conv2d(data, weight, **new_attrs)
+
+# Handle default kernel layouts
+if desired_data_layout == 'NCHW':
 new_attrs['kernel_layout'] = 'OIHW'
 return relay.nn.conv2d(data, weight, **new_attrs)
-elif desired_layout == 'NHWC':
+elif desired_data_layout == 'NHWC':
 # Check for depthwise convolution.
 if is_depthwise_conv2d(data.shape, attrs['data_layout'], weight.shape,
attrs['kernel_layout'], attrs['groups']):
 new_attrs['kernel_layout'] = 'HWOI'
 else:
 new_attrs['kernel_layout'] = 'HWIO'
 return relay.nn.conv2d(data, weight, **new_attrs)
-else:
-assert "Layout %s is not yet supported." % (desired_layout)
+assert "Layout %s is not yet supported." % (desired_data_layout)
 return None

Review comment:
   We should not disable the checker anyway. In this case you have two 
options.
   
   1. Move the assertion out of the branch. This solution makes the last block 
top-level to avoid an inconsistent return statement, but visually I do not 
like it, so I prefer the second one.
   
   ```python
   if desired_data_layout == 'NCHW':
 # do something
 return
   assert desired_data_layout == 'NHWC'
   # do something
   return
   ```
   
   2. Throw exception instead of assertion. In this case, linter can know 
everything after `raise` is unreachable and it won't complain.
   
   ```python
   if desired_data_layout == 'NCHW':
 # do something
 return
   if desired_data_layout == 'NHWC': # PEP8 standard suggests not using else 
when previous block has a return statement. Since TVM linter doesn't enforce 
this point, I'm fine with both.
 # do something
 return
   raise RuntimeError(...)
   ```
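   
   For what it's worth, a quick standalone Python sketch (not TVM code; the layout mapping is simplified from the diff) of why option 2 matters: an `assert` on a formatted message string can never fail, since any non-empty string is truthy, while `raise` actually surfaces the unsupported case.
   
```python
def convert_bad(layout):
    # Bug pattern from the diff above: asserting a non-empty string is
    # always truthy, so this "assertion" never fires.
    assert "Layout %s is not yet supported." % (layout)
    return None

def convert_good(layout):
    # Raising makes the unsupported case a real error, and the linter
    # knows code after `raise` is unreachable.
    if layout == 'NCHW':
        return 'OIHW'
    if layout == 'NHWC':
        return 'HWIO'
    raise RuntimeError("Layout %s is not yet supported." % layout)

# The buggy version silently returns None instead of failing.
assert convert_bad('NDHWC') is None
assert convert_good('NHWC') == 'HWIO'
```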









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420936324



##
File path: rust/tvm-sys/src/packed_func.rs
##
@@ -0,0 +1,380 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::{
+convert::TryFrom,
+ffi::{CStr, CString},
+os::raw::c_void,
+};
+
+pub use crate::ffi::TVMValue;
+use crate::{errors::ValueDowncastError, ffi::*};
+
+pub trait PackedFunc:
+    Fn(&[ArgValue]) -> Result + Send + Sync
+{
+}
+
+impl<T> PackedFunc for T where
+    T: Fn(&[ArgValue]) -> Result + Send + Sync
+{
+}
+
+/// Calls a packed function and returns a `RetValue`.
+///
+/// # Example
+///
+/// `call_packed!(my_tvm_func, &mut arg1, &mut arg2)`
+#[macro_export]
+macro_rules! call_packed {
+($fn:expr, $($args:expr),+) => {
+$fn(&[$($args.into(),)+])
+};
+($fn:expr) => {
+$fn(&Vec::new())
+};
+}
+
+/// Constructs a derivative of a TVMPodValue.
+macro_rules! TVMPODValue {
+{
+$(#[$m:meta])+
+$name:ident $(<$a:lifetime>)? {
+$($extra_variant:ident ( $variant_type:ty ) ),+ $(,)?
+},
+match $value:ident {
+$($tvm_type:ident => { $from_tvm_type:expr })+
+},
+match &self {
+$($self_type:ident ( $val:ident ) => { $from_self_type:expr })+
+}
+$(,)?
+} => {
+$(#[$m])+
+#[derive(Clone, Debug)]
+pub enum $name $(<$a>)? {
+Int(i64),
+UInt(i64),
+Float(f64),
+Null,
+DataType(DLDataType),
+String(CString),
+Context(TVMContext),
+Handle(*mut c_void),
+ArrayHandle(TVMArrayHandle),
+ObjectHandle(*mut c_void),
+ModuleHandle(TVMModuleHandle),
+FuncHandle(TVMFunctionHandle),
+NDArrayHandle(*mut c_void),
+$($extra_variant($variant_type)),+
+}
+
+impl $(<$a>)? $name $(<$a>)? {
+pub fn from_tvm_value($value: TVMValue, type_code: u32) -> Self {
+use $name::*;
+#[allow(non_upper_case_globals)]
+unsafe {
+                match type_code as _ {
+                    DLDataTypeCode_kDLInt => Int($value.v_int64),
+                    DLDataTypeCode_kDLUInt => UInt($value.v_int64),
+                    DLDataTypeCode_kDLFloat => Float($value.v_float64),
+                    TVMTypeCode_kTVMNullptr => Null,
+                    TVMTypeCode_kTVMDataType => DataType($value.v_type),
+                    TVMTypeCode_kTVMContext => Context($value.v_ctx),
+                    TVMTypeCode_kTVMOpaqueHandle => Handle($value.v_handle),
+                    TVMTypeCode_kTVMDLTensorHandle => ArrayHandle($value.v_handle as TVMArrayHandle),
+                    TVMTypeCode_kTVMObjectHandle => ObjectHandle($value.v_handle),
+                    TVMTypeCode_kTVMModuleHandle => ModuleHandle($value.v_handle),
+                    TVMTypeCode_kTVMPackedFuncHandle => FuncHandle($value.v_handle),
+                    TVMTypeCode_kTVMNDArrayHandle => NDArrayHandle($value.v_handle),
+                    $( $tvm_type => { $from_tvm_type } ),+
+                    _ => unimplemented!("{}", type_code),
+                }
+}
+}
+
+pub fn to_tvm_value(&self) -> (TVMValue, TVMTypeCode) {
+use $name::*;
+                match self {
+                    Int(val) => (TVMValue { v_int64: *val }, DLDataTypeCode_kDLInt),
+                    UInt(val) => (TVMValue { v_int64: *val as i64 }, DLDataTypeCode_kDLUInt),
+                    Float(val) => (TVMValue { v_float64: *val }, DLDataTypeCode_kDLFloat),
+                    Null => (TVMValue { v_int64: 0 }, TVMTypeCode_kTVMNullptr),
+                    DataType(val) => (TVMValue { v_type: *val }, TVMTypeCode_kTVMDataType),
+                    Context(val) => (TVMValue { v_ctx: val.clone() }, TVMTypeCode_kTVMContext),
+String(val) => {
+(
+TVMValue { v_handle: val.as_ptr() as 

[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420937306



##
File path: rust/tvm-sys/src/value.rs
##
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use std::str::FromStr;
+
+use thiserror::Error;
+
+use crate::ffi::*;

Review comment:
   I've started to rely on just using `rustfmt` instead of paying attention 
to my coding style. I will flip the flag and organize them. 









[GitHub] [incubator-tvm] jroesch commented on pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#issuecomment-624770902


   @nhynes thanks for the review. I opened this at like 2:30am last night 
before going to bed. Will provide some more context.
   
   The goal of my refactor is to turn on the new C-ABI compatible object 
system in TVM. I started by trying to simplify the split between interfaces and 
better match TVM's linking model. I found some of the code using the low-level 
`ffi::{X, Y}` types and some using higher-level types; this first patch just 
combines the overlap without fully changing things to my liking, so as not to 
break everything. 
   
   What I first did is combine some overlapping code that I found in 
`tvm-frontend` and `tvm-common` and then have been working on rectifying the 
top-level layers. If you want to see end-to-end state you can see my WIP 
version at https://github.com/jroesch/tvm/tree/rust-fe. 
   
   A cool not yet polished thing is the ability to write an out of tree Rust 
pass. See https://github.com/jroesch/tvm/tree/rust-fe/rust/out-of-tree. 
   
   I would like to land the initial layers in tree so I can start to turn on CI 
and incrementally land instead of writing a single 10kloc PR. Hoping to do 
multiple passes on the naming, interface, etc. 
   
   







[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420949949



##
File path: rust/tvm-sys/src/lib.rs
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! This crate contains the minimal interface over TVM's
+//! C runtime API.
+//!
+//! These common bindings are useful to both runtimes
+//! written in Rust, as well as higher level API bindings.
+//!
+//! See the `tvm-rt` or `tvm` crates for full bindings to
+//! the TVM API.
+
+/// The low-level C runtime FFI API for TVM.
+pub mod ffi {
+    #![allow(non_camel_case_types, non_snake_case, non_upper_case_globals, unused)]
+
+use std::os::raw::{c_char, c_int, c_void};
+
+include!(concat!(env!("CARGO_MANIFEST_DIR"), "/src/c_runtime_api.rs"));
+
+    pub type BackendPackedCFunc =
+        extern "C" fn(args: *const TVMValue, type_codes: *const c_int, num_args: c_int) -> c_int;
+}
+
+pub mod array;
+pub mod byte_array;
+pub mod context;
+pub mod datatype;
+pub mod errors;
+#[macro_use]
+pub mod packed_func;
+pub mod value;
+
+pub use byte_array::ByteArray;
+pub use context::{Context, TVMDeviceType};

Review comment:
   I will do that as well, I think we can just rely on Rust namespacing in 
general :) 









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r420950375



##
File path: rust/tvm-sys/src/context.rs
##
@@ -0,0 +1,293 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+//! Provides [`Context`] and related device queries.
+//!
+//! Create a new context for device type and device id.
+//!
+//! # Example
+//!
+//! ```
+//! # use tvm_sys::{TVMDeviceType, Context};
+//! let cpu = TVMDeviceType::from("cpu");
+//! let ctx = Context::new(cpu , 0);
+//! let cpu0 = Context::cpu(0);
+//! assert_eq!(ctx, cpu0);
+//! ```
+//!
+//! Or from a supported device name.
+//!
+//! ```
+//! use tvm_sys::Context;
+//! let cpu0 = Context::from("cpu");
+//! println!("{}", cpu0);
+//! ```
+
+use crate::ffi::{self, *};
+use crate::packed_func::{ArgValue, RetValue};
+
+use std::convert::TryFrom;
+use std::str::FromStr;
+use thiserror::Error;
+
+use std::fmt::{self, Display, Formatter};
+
+use anyhow::Result;
+
+/// Device type can be from a supported device name. See the supported devices
+/// in [TVM](https://github.com/apache/incubator-tvm).
+///
+/// ## Example
+///
+/// ```
+/// use tvm_sys::TVMDeviceType;
+/// let cpu = TVMDeviceType::from("cpu");
+/// println!("device is: {}", cpu);
+///```
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
+pub struct TVMDeviceType(pub i64);
+
+impl Default for TVMDeviceType {
+/// default device is cpu.
+fn default() -> Self {
+TVMDeviceType(1)
+}
+}
+
+impl From<TVMDeviceType> for ffi::DLDeviceType {
+fn from(device_type: TVMDeviceType) -> Self {
+match device_type.0 {
+1 => ffi::DLDeviceType_kDLCPU,
+2 => ffi::DLDeviceType_kDLGPU,
+3 => ffi::DLDeviceType_kDLCPUPinned,
+4 => ffi::DLDeviceType_kDLOpenCL,
+7 => ffi::DLDeviceType_kDLVulkan,
+8 => ffi::DLDeviceType_kDLMetal,
+9 => ffi::DLDeviceType_kDLVPI,
+10 => ffi::DLDeviceType_kDLROCM,
+12 => ffi::DLDeviceType_kDLExtDev,
+_ => panic!("device type not found!"),
+}
+}
+}
+
+impl From<ffi::DLDeviceType> for TVMDeviceType {
+fn from(device_type: ffi::DLDeviceType) -> Self {
+match device_type {
+ffi::DLDeviceType_kDLCPU => TVMDeviceType(1),

Review comment:
   Yeah sounds good, another example of leaving existing code alone and 
rewriting everything in one go :) 









[GitHub] [incubator-tvm] u99127 commented on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


u99127 commented on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624774462


   Thanks for the discussion and prompting this. I agree with you entirely on 
the motivation behind this: we should look to simplify the frontend, the 
amount of copy-pasting to add a new operator isn't ideal, and reducing the 
complexity is a good thing. 
   
   > 
   > 
   > I thought of adding all the op attributes in table/map but decorator 
seemed to be more pythonic.
   
   Ok.
   
   > 
   > Few more points to consider:
   > 
   > 1. Sometimes equality checks can not be straight forward like 
num_inputs==input_tensors. In this case we can always set num_input check as 
None and handle assertion in the specific convert function.
   > 
   
   That seems to be equivalent to -1 in the other scheme. So if we can't 
handle it in the common case, because for instance an operator has a variable 
number of inputs above a particular minimum, we need to make an 
operator-specific check.
   
   I guess my question is whether a single decorator would be able to handle 
the most common cases. At the end of the day my goal is to reduce code 
duplication as much as I can, which is why the table-driven approach 
helps, but as you say it has its limits, especially with the cases you mention. 
   
   From a very old branch of mine, here is a table with the operator, helper 
function, number of inputs , number of outputs : all of which are likely to 
need just an equality check for number of inputs and outputs.
   
   'AVERAGE_POOL_2D': (self.convert_average_pool2d, 1, 1),
   'SOFTMAX': (self.convert_softmax, 1, 1),
   'SQUEEZE': (self.convert_squeeze, 1, 1),
   'MAX_POOL_2D': (self.convert_max_pool2d, 1, 1),
   'ADD': (self.convert_add, 2, 1),
   'SUB': (self.convert_sub, 2, 1),
   'MUL': (self.convert_mul, 2, 1),
   'DIV': (self.convert_div, 2, 1),
   'POW': (self.convert_pow, 2, 1),
   'MAXIMUM': (self.convert_maximum, 2, 1),
   'MINIMUM': (self.convert_minimum, 2, 1),
   'GREATER': (self.convert_greater, 2, 1),
   'LOGISTIC': (self.convert_logistic, 1, 1),
   'TANH':(self.convert_tanh, 1, 1),
   'RELU':(self.convert_relu, 1, 1),
   'CAST': (self.convert_cast, 1, 1),
   'TRANSPOSE_CONV': (self.convert_transpose_conv, 3, 1)
   
   as these have fixed number of input and output tensors.
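   
   To make the comparison concrete, here is a standalone sketch of the 
table-driven dispatch described above (the names `CONVERT_MAP` and 
`convert_op` and the toy converters are hypothetical illustrations, not the 
actual frontend API):
   
```python
# Hypothetical table: op name -> (convert_fn, num_inputs, num_outputs);
# -1 means "variable arity, checked inside the converter itself".
CONVERT_MAP = {
    'ADD': (lambda tensors: tensors[0] + tensors[1], 2, 1),
    'RELU': (lambda tensors: max(tensors[0], 0), 1, 1),
    'CONCATENATION': (lambda tensors: sum(tensors), -1, 1),
}

def convert_op(op_name, input_tensors):
    convert_fn, num_inputs, _num_outputs = CONVERT_MAP[op_name]
    # One shared equality check replaces the per-operator copy-paste.
    if num_inputs != -1:
        assert len(input_tensors) == num_inputs, (
            "%s expects %d inputs, got %d"
            % (op_name, num_inputs, len(input_tensors)))
    return convert_fn(input_tensors)

print(convert_op('ADD', [3, 4]))  # 7
```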
   
   My hunch is that a single decorator should fall out for the sample set 
above, but without working it out fully I'd like to see what you think. 
   
   If on the other hand we end up with multiple decorators per helper for each 
of these, then we end up with duplication of code and copy-pastism again, which 
I think is what we both want to avoid.
   
   > 2. Need to check instead of passing input tensors can we pass Relay 
expressions. If input tensors are needed they can always be accessed using 'op' 
variable.
   > 
   
   What would the Relay expressions represent? 
   
   > 3. Need to check if there is a scope to add helper function for 
quantized params calculation.
   
   I'm not sure I follow this one.







[GitHub] [incubator-tvm] tqchen commented on pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-06 Thread GitBox


tqchen commented on pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#issuecomment-624783943


   ping @kevinthesun @icemelon9 @yongwww please follow up 







[GitHub] [incubator-tvm] u99127 commented on a change in pull request #5508: [TFLITE]GATHER_ND

2020-05-06 Thread GitBox


u99127 commented on a change in pull request #5508:
URL: https://github.com/apache/incubator-tvm/pull/5508#discussion_r420955608



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -343,6 +343,36 @@ def test_forward_gather():
 _test_gather((1, 3, 3), [20], 1, 'float32', quantized, oob=True)
 _test_gather((1, 3, 3), [20, 20], 2, 'float32', quantized, oob=True)
 
+###
+# Gather_ND
+# -
+
+def _test_gather_nd(data, indices):
+""" One iteration of GATHER_ND """
+with tf.Graph().as_default():
+in_data = tf.placeholder(shape=data.shape, dtype=data.dtype, 
name="data")
+indices_data = tf.placeholder(shape=indices.shape, dtype=indices.dtype,
+name="indices")
+out = tf.gather_nd(in_data, indices_data)
+
+compare_tflite_with_tvm([data, indices], ['data:0', 'indices:0'],
+  [in_data, indices_data], [out])
+
+def test_forward_gather_nd():
+""" GATHER_ND """
+_test_gather_nd(
+np.array([[[1.2, 2.0], [3.1, 4.1]], [[5.1, 6.1], [7.1, 
8.1]]]).astype('float32'),
+np.asarray([[0, 1], [1, 0]]).astype('int32')

Review comment:
   Can we put in a test for int8 input tensor types as well ? 









[GitHub] [incubator-tvm] ehsanmok commented on pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


ehsanmok commented on pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#issuecomment-624800522


   These changes make very good sense. I'm glad Rust is receiving more love and 
attention.







[GitHub] [incubator-tvm] ehsanmok commented on a change in pull request #5527: [Rust] Second stage of Rust Refactor

2020-05-06 Thread GitBox


ehsanmok commented on a change in pull request #5527:
URL: https://github.com/apache/incubator-tvm/pull/5527#discussion_r420987364



##
File path: rust/tvm-rt/README.md
##
@@ -0,0 +1,235 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# TVM Runtime Frontend Support
+
+This crate provides an idiomatic Rust API for the [TVM](https://github.com/apache/incubator-tvm) runtime frontend. Currently this requires **Nightly Rust** and is tested on `rustc 1.32.0-nightly`.

Review comment:
   Nudging to update the READMEs when done.









[GitHub] [incubator-tvm] maheshambule commented on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


maheshambule commented on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624809315


   Thanks for the response.
   
   > Few more points to consider:
   
   Let me clarify: the points I mentioned are not for or against either 
approach (table vs decorator). They were for general discussion.
   
   I think as far as code deduplication is concerned, both approaches are 
fine. Both can achieve the same. The difference comes down to the look and 
feel, and hence it is subjective and can be driven by personal choice. Still, 
let me put forward a few pros of the decorator approach.
   - Since it is more Pythonic, new developers can easily relate to it.
   - The decorator places all the attributes/properties of a particular op in 
a single view. For the table-based approach these are divided between two 
places - the table level and the convert function.
   - The decorator intuitively lets you add pre- and post-processing, e.g. 
fusing activation functions into the output.
   
   Let me know your thoughts.
   
   
   > That seems to be equivalent to -1 in the other scheme
   
   Ok. Need to decide -1 or None.
   
   
   > My hunch is that we should be able to get to a single decorator for this 
sample set above falls out but I'd like to see what you think. Without working 
it out
   
   I think a single decorator will suffice.
   
   > What would the relay expressions represent ?
   
   There is a common pattern where we convert a TFLite tensor expression to a 
Relay expression, often using self.get_expr. Should we push this conversion 
back into common code? Also, a more generic implementation of the conversion is 
added in this PR as the get_tensor_expr function: 
https://github.com/apache/incubator-tvm/pull/5528/files#diff-ee9a43edfb6fd0c9121e00c67c59ef43R2414.
   
   > I'm not sure I follow this one.
   
   I was thinking we could somehow find some common code to act as a wrapper 
for code like the below. But on second thought, I think it is not worth the 
effort.
   
https://github.com/apache/incubator-tvm/blob/e2bd43b6f728d4d0ca2659dcf73da74294655133/python/tvm/relay/frontend/tflite.py#L633-L641
   
   







[GitHub] [incubator-tvm] maheshambule edited a comment on pull request #5528: POC refactor tflite frontend

2020-05-06 Thread GitBox


maheshambule edited a comment on pull request #5528:
URL: https://github.com/apache/incubator-tvm/pull/5528#issuecomment-624809315


   Thanks for the response.
   
   > Few more points to consider:
   
   Let me clarify: the points I mentioned are not for or against either 
approach (table vs decorator). They were for general discussion.
   
   I think as far as code deduplication is concerned, both approaches are 
fine. Both can achieve the same. The difference comes down to the look and 
feel, and hence it is subjective and can be driven by personal choice. Still, 
let me put forward a few pros of the decorator approach.
   - Since it is more Pythonic, new developers can easily relate to it.
   - The decorator places all the attributes/properties of a particular op in 
a single view. For the table-based approach these are divided between two 
places - the table level and the convert function.
   - The decorator intuitively lets you add pre- and post-processing, e.g. 
fusing activation functions into the output.
   
   Let me know your thoughts.
   
   
   > That seems to be equivalent to -1 in the other scheme
   
   Ok. Need to decide -1 or None.
   
   
   > My hunch is that we should be able to get to a single decorator for this 
sample set above falls out but I'd like to see what you think. Without working 
it out
   
   I think a single decorator will suffice.
   
   > What would the relay expressions represent ?
   
   There is a common pattern where we convert a TFLite tensor expression to a 
Relay expression, often using self.get_expr. Should we push this conversion 
back into common code? Also, a more generic implementation of the conversion is 
added in this PR as the get_tensor_expr function: 
https://github.com/apache/incubator-tvm/pull/5528/files#diff-ee9a43edfb6fd0c9121e00c67c59ef43R2414.
   
   > I'm not sure I follow this one.
   
   I was thinking we could somehow find some common code to act as a wrapper 
for code like the below. But on second thought, I think it is not worth the 
effort.
   
https://github.com/apache/incubator-tvm/blob/e2bd43b6f728d4d0ca2659dcf73da74294655133/python/tvm/relay/frontend/tflite.py#L633-L641
   
   







[GitHub] [incubator-tvm] yongfeng-nv commented on a change in pull request #5382: [TE] Fix MakeLoopNest for warp memory

2020-05-06 Thread GitBox


yongfeng-nv commented on a change in pull request #5382:
URL: https://github.com/apache/incubator-tvm/pull/5382#discussion_r421003668



##
File path: src/te/operation/op_util.cc
##
@@ -164,9 +164,21 @@ MakeLoopNest(const Stage& stage,
 value_map[iv] = dom->min;
   } else {
  runtime::ThreadScope ts = runtime::ThreadScope::make(bind_iv->thread_tag);
-if (stage->scope == "" || stage->scope == "warp" ||
+if (stage->scope == "" ||
  static_cast<int>(runtime::StorageScope::make(stage->scope).rank) <= ts.rank) {
   value_map[iv] = var;
+} else if (stage->scope == "warp" && ts.rank == 1) {
+  // To determine whether a thread index is inside or outside a warp, we need
+  // to know the thread extent. We leave a warning for now.
+  if (ts.dim_index == 0) {
+value_map[iv] = var;
+  } else {

Review comment:
   > Yes, dom does. But where can we get the warp size? Warp size is not a 
constant value, it is 32 for NVIDIA GPU and 64 for AMD GPU. The warp size 
information is stored in Target class. Since MakeLoopNest is designed to be a 
target-irrelevant function, I don't think it a good idea to add an argument to 
specify the target.
   
   I agree with you about `MakeLoopNest` being target independent.  My first 
thought was to defer the check and the `threadIdx.x` vs. `threadIdx.y/z` 
handling until LowerWarpMemory(), e.g. by adjusting indices in 
WarpAccessRewriter(), since it does the target-specific "warp" lowering.  
Maybe substitute `threadIdx.y/z` with `0` in the group indices for the 
supported cases and give an error otherwise?  However, that seems not to be 
the case, as your 3rd situation ends with incorrect code instead of an error 
from LowerWarpMemory().  But I don't know the reason.
   
   > Actually I did try to run this example, but TVM somehow generated a 
correct code. It only led to a wrong boundary checking bug in a very complex 
program.
   
   I see.  Does your simplified case trigger the warning?  If so, checking for 
the warning can guard your changes from being accidentally deleted or skipped.









[GitHub] [incubator-tvm] icemelon9 opened a new pull request #5532: [Fix] Fix conv2d alter op for arm cpu

2020-05-06 Thread GitBox


icemelon9 opened a new pull request #5532:
URL: https://github.com/apache/incubator-tvm/pull/5532


   https://discuss.tvm.ai/t/autotuner-fails-for-cnn-in-new-dev-branch/6099







[incubator-tvm] branch master updated (e2bd43b -> 4a262ec)

2020-05-06 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e2bd43b  [RPC] Fix the multihop cpu case (#5522)
 add 4a262ec  [RUNTIME] Improve PackedFunc robustness (#5517)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/packed_func.h | 101 +++---
 src/ir/op.cc  |   5 +-
 src/relay/quantize/quantize.cc|   4 +-
 3 files changed, 69 insertions(+), 41 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on pull request #5517: [RUNTIME] Improve PackedFunc robustness

2020-05-06 Thread GitBox


tqchen commented on pull request #5517:
URL: https://github.com/apache/incubator-tvm/pull/5517#issuecomment-624825618


   Thanks @siju-samuel @zhiics 







[incubator-tvm] branch master updated (4a262ec -> 79e29ab)

2020-05-06 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 4a262ec  [RUNTIME] Improve PackedFunc robustness (#5517)
 add 79e29ab  LRN only supports 4D tensors, remove it from alter_op_layout 
(#5520)

No new revisions were added by this update.

Summary of changes:
 src/relay/op/nn/nn.cc   |  1 -
 tests/python/relay/test_pass_alter_op_layout.py | 51 +
 2 files changed, 51 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] kparzysz-quic commented on pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-06 Thread GitBox


kparzysz-quic commented on pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#issuecomment-624830922


   Added `sim_dev` as an external project. It will be built automatically with 
`USE_HEXAGON_DEVICE=sim`.







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5523: [Refactor][std::string --> String] IRModule is updated with String

2020-05-06 Thread GitBox


tqchen commented on a change in pull request #5523:
URL: https://github.com/apache/incubator-tvm/pull/5523#discussion_r421034694



##
File path: src/printer/relay_text_printer.cc
##
@@ -918,13 +918,29 @@ static const char* kSemVer = "v0.0.4";
 //- Implements AsText
 // - relay_text_printer.cc (specific printing logics for relay)
 // - tir_text_printer.cc (specific printing logics for TIR)
-std::string PrettyPrint(const ObjectRef& node) {
+String PrettyPrint(const ObjectRef& node) {
   Doc doc;
   doc << relay::RelayTextPrinter(false, nullptr).PrintFinal(node);
   return doc.str();
 }
 
-std::string AsText(const ObjectRef& node,
+String AsText(const ObjectRef& node,
+   bool show_meta_data,
+              runtime::TypedPackedFunc<String(ObjectRef)> annotate) {
+  Doc doc;
+  doc << kSemVer << Doc::NewLine();
+  runtime::TypedPackedFunc<std::string(ObjectRef)> ftyped = nullptr;
+  if (annotate != nullptr) {
+    ftyped = runtime::TypedPackedFunc<std::string(ObjectRef)>(
+      [&annotate](const ObjectRef& expr) -> std::string {
+        return annotate(expr);
+      });
+  }
+  doc << relay::RelayTextPrinter(show_meta_data, ftyped).PrintFinal(node);
+  return doc.str();
+}
+
+String AsTextByStr(const ObjectRef& node,

Review comment:
   We don't want to introduce two variants of the functions (std::string 
and String). 









[incubator-tvm] branch master updated (79e29ab -> 900254d)

2020-05-06 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 79e29ab  LRN only supports 4D tensors, remove it from alter_op_layout 
(#5520)
 add 900254d  Fix an issue with Upsampling and update one test to hit the 
broken usecase (#5530)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  |  5 -
 tests/python/frontend/onnx/test_forward.py | 13 +++--
 2 files changed, 11 insertions(+), 7 deletions(-)



[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


kparzysz-quic opened a new pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533


   The objects that the raw pointers point to can be deallocated, and new 
objects can be allocated at the same address, all while these pointers are 
still in the cache. This can lead to unexpected behavior, for example a 
calculated bound conflicting with previously cached values.
   
   Internally, we have observed build failures caused by this with a message 
like
   ```
   TVMError: Check failed: (val->second->min_value == res.min_value && val->second->max_value == res.max_value) || (val->second->min_value == everything.min_value && val->second->max_value == everything.max_value): Detected bound for 31conflicts with memorization
   ```
   
   Caching `ObjectPtr` will prevent the objects from being deallocated while 
the cache is active.
   
   This PR has a few parts:
   1. Allowing `ObjectPtr` objects to point to `const` objects.  Since most 
object pointers are const-qualified, this will avoid casting constness away.  
The reference counter was explicitly marked as `mutable` to allow counting 
references for const objects as well.
   2. Allowing `GetObjectPtr` to automatically create `ObjectPtr` with the type 
corresponding to the input argument.  Explicit template arguments still work as 
before, but are no longer needed when the `ObjectPtr` simply wraps the argument 
type.
   3. Finally, caching `ObjectPtr` in the bound analyzer.







[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5511: [AutoTVM][TOPI] AutoTVM incorrect measurement

2020-05-06 Thread GitBox


kevinthesun commented on a change in pull request #5511:
URL: https://github.com/apache/incubator-tvm/pull/5511#discussion_r421056807



##
File path: topi/python/topi/mali/conv2d.py
##
@@ -138,20 +138,15 @@ def _schedule_spatial_pack(cfg, s, output, conv, data_vec, kernel_vec):
 s[data_vec].unroll(vw)
 
 if isinstance(kernel_vec.op, tvm.te.ComputeOp) and kernel_vec.name == 'kernel_vec':
-if autotvm.GLOBAL_SCOPE.in_tuning:
-# kernel packing will be pre-computed during compilation, so we skip
-# this part to make tuning records correct
-s[kernel_vec].pragma(s[kernel_vec].op.axis[0], 'debug_skip_region')
-else:
-max_threads = tvm.target.Target.current(allow_none=False).max_num_threads
-co, ci, kh, kw, vc = s[kernel_vec].op.axis
-fused = s[kernel_vec].fuse(co, ci, kh, kw, vc)
-fused, vec = s[kernel_vec].split(fused, VC)
-bb, tt = s[kernel_vec].split(fused, max_threads)
-s[kernel_vec].bind(bb, te.thread_axis("blockIdx.x"))
-s[kernel_vec].bind(tt, te.thread_axis("threadIdx.x"))
-if VC in vec_size:
-s[kernel_vec].vectorize(vec)
+max_threads = tvm.target.Target.current(allow_none=False).max_num_threads

Review comment:
   Thank you for working on this. You also need to replace kernel with the 
correct layout placeholder in compute when autotvm.GLOBAL_SCOPE.in_tuning is 
True. Look at 
https://github.com/apache/incubator-tvm/pull/5200/files#diff-7d0b52f92ecc0d246c17fa17f90d2a8fR189









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5532: [Fix] Fix conv2d alter op for arm cpu

2020-05-06 Thread GitBox


kevinthesun commented on a change in pull request #5532:
URL: https://github.com/apache/incubator-tvm/pull/5532#discussion_r421057402



##
File path: topi/python/topi/arm_cpu/conv2d_alter_op.py
##
@@ -139,6 +140,7 @@ def _alter_conv2d_layout(attrs, inputs, tinfos, out_type):
 assert data_layout == "NCHW" and kernel_layout == "OIHW"
 N, CI, H, W = get_const_tuple(data.shape)
 CO, _, KH, KW = get_const_tuple(kernel.shape)
+new_attrs['channels'] = CO

Review comment:
   Is it possible to add a test case to cover this?









[GitHub] [incubator-tvm] ehsanmok edited a comment on pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


ehsanmok edited a comment on pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#issuecomment-624800522


   These changes make good sense in general. I'm glad Rust is receiving more 
love and attention. Wish I had time to spend on this line of work.







[GitHub] [incubator-tvm] mbrookhart opened a new pull request #5534: fix a few bugs with shape inference and types in the ONNX importer

2020-05-06 Thread GitBox


mbrookhart opened a new pull request #5534:
URL: https://github.com/apache/incubator-tvm/pull/5534


   @tmoreau89 @jroesch Thanks!







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


tqchen commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421124915



##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map<ObjectPtr<const PrimExprNode>, ConstIntBound, ObjectHash>;

Review comment:
    Let us just do std::unordered_map<PrimExpr, ConstIntBound, ObjectHash, ObjectEqual>;









[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


kparzysz-quic commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421127447



##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map<ObjectPtr<const PrimExprNode>, ConstIntBound, ObjectHash>;

Review comment:
   So, no const?  Then `GetObjectPtr` will need a `const_cast`, e.g.
   ```
    auto op = GetObjectPtr(const_cast<PrimExprNode*>(expr.as<PrimExprNode>()));
   ```
   
   Do you also want to remove the first commit?









[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


kparzysz-quic commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421127973



##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map<ObjectPtr<const PrimExprNode>, ConstIntBound, ObjectHash>;

Review comment:
   Wait, maybe I misunderstood.  `PrimExpr` instead of `PrimExprNode`?









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


tqchen commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421129652



##
File path: include/tvm/runtime/object.h
##
@@ -350,6 +350,39 @@ inline RelayRefType GetRef(const ObjectType* ptr);
template <typename SubRef, typename BaseRef>
 inline SubRef Downcast(BaseRef ref);
 
+template <typename T> class ObjectPtr;
+
+/*!
+ * \brief Get an object ptr type from a raw object ptr.
+ *
+ * \param ptr The object pointer
+ * \tparam BaseType The reference type
+ * \tparam ObjectType The object type
+ * \return The corresponding ObjectPtr type
+ */
+template <typename BaseType, typename ObjectType,
+          typename std::enable_if<std::is_base_of<BaseType, ObjectType>::value, int>::type = 0>
+inline ObjectPtr<BaseType> GetObjectPtr(ObjectType* ptr);
+
+/*!
+ * \brief Get an object ptr type from a raw object ptr.
+ *
+ * \param ptr The object pointer
+ * \tparam ObjectType The object type
+ * \return The corresponding ObjectPtr type
+ */
+template <typename ObjectType>
+inline ObjectPtr<ObjectType> GetObjectPtr(ObjectType* ptr);

Review comment:
   Let us simply unify this function with the above one so that users won't 
be confused about the variants

##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map<ObjectPtr<const PrimExprNode>, ConstIntBound, ObjectHash>;

Review comment:
   yes, PrimExpr itself exposes the ptr as const pointer 









[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


kparzysz-quic commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421127973



##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map<ObjectPtr<const PrimExprNode>, ConstIntBound, ObjectHash>;

Review comment:
   Wait, maybe I misunderstood.  `PrimExpr` instead of `PrimExprNode`?
   
    Edit: `PrimExpr` instead of `ObjectPtr<const PrimExprNode>`.  I think I got 
it now... haha









[GitHub] [incubator-tvm] jroesch commented on pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#issuecomment-624927829


   Yeah @ehsanmok I'm hoping to reduce the effort for us to use/consume. My 
goal is effectively to get to:
   
   ```
   #[external_pass]
   fn pass(func: relay::Function, ...) -> relay::Function { ... }
   ```
   
   At the high level and reduce some duplication and mirror the TVM bindings. 







[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


kparzysz-quic commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421138163



##
File path: include/tvm/runtime/object.h
##
@@ -350,6 +350,39 @@ inline RelayRefType GetRef(const ObjectType* ptr);
template <typename SubRef, typename BaseRef>
 inline SubRef Downcast(BaseRef ref);
 
+template <typename T> class ObjectPtr;
+
+/*!
+ * \brief Get an object ptr type from a raw object ptr.
+ *
+ * \param ptr The object pointer
+ * \tparam BaseType The reference type
+ * \tparam ObjectType The object type
+ * \return The corresponding ObjectPtr type
+ */
+template <typename BaseType, typename ObjectType,
+          typename std::enable_if<std::is_base_of<BaseType, ObjectType>::value, int>::type = 0>
+inline ObjectPtr<BaseType> GetObjectPtr(ObjectType* ptr);
+
+/*!
+ * \brief Get an object ptr type from a raw object ptr.
+ *
+ * \param ptr The object pointer
+ * \tparam ObjectType The object type
+ * \return The corresponding ObjectPtr type
+ */
+template <typename ObjectType>
+inline ObjectPtr<ObjectType> GetObjectPtr(ObjectType* ptr);

Review comment:
   How is that unification possible?  If you say `GetObjectPtr(op)`, 
without the explicit `BaseType`, the compiler won't be able to deduce it if 
`GetObjectPtr` is a template with two independent type parameters.









[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


kparzysz-quic commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421140419



##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map<ObjectPtr<const PrimExprNode>, ConstIntBound, ObjectHash>;

Review comment:
   Using `PrimExpr` worked fine.  I tried `ObjectRef` before, but it failed 
(I guess I missed the ObjectHash/Equal for refs), and I went the way of 
`ObjectPtr`.  This really makes all the remaining changes unnecessary 
(although I think the default return type of `GetObjectPtr` may be useful).
   
   What do you suggest I do with the rest of the changes?  Should I remove the 
`ObjectPtr` completely?









[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5535: Changes to cpp_rpc to make it work on Android (+ Hexagon offloading)

2020-05-06 Thread GitBox


kparzysz-quic opened a new pull request #5535:
URL: https://github.com/apache/incubator-tvm/pull/5535


   - Implement `getNextString` to break up `std::string` into words. 
`stringstream` just doesn't work on Android.
   - `string::find_last_of` doesn't look for the last substring, but the last 
character from a given string.
   - Use `SIGTERM` to terminate processes (this isn't necessary, but using 
`SIGKILL` is not a good practice).
   - Convert `./rpc` to a full path. When a module is uploaded and offloaded to 
Hexagon, the `dlopen` on Hexagon needs an absolute path (or a path without 
directories).







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


tqchen commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421182429



##
File path: include/tvm/runtime/object.h
##
@@ -350,6 +350,39 @@ inline RelayRefType GetRef(const ObjectType* ptr);
template <typename SubRef, typename BaseRef>
 inline SubRef Downcast(BaseRef ref);
 
+template <typename T> class ObjectPtr;
+
+/*!
+ * \brief Get an object ptr type from a raw object ptr.
+ *
+ * \param ptr The object pointer
+ * \tparam BaseType The reference type
+ * \tparam ObjectType The object type
+ * \return The corresponding ObjectPtr type
+ */
+template <typename BaseType, typename ObjectType,
+          typename std::enable_if<std::is_base_of<BaseType, ObjectType>::value, int>::type = 0>
+inline ObjectPtr<BaseType> GetObjectPtr(ObjectType* ptr);
+
+/*!
+ * \brief Get an object ptr type from a raw object ptr.
+ *
+ * \param ptr The object pointer
+ * \tparam ObjectType The object type
+ * \return The corresponding ObjectPtr type
+ */
+template <typename ObjectType>
+inline ObjectPtr<ObjectType> GetObjectPtr(ObjectType* ptr);

Review comment:
    That is what I mean: we should use the explicit GetObjectPtr form, as it is 
not a very frequently used feature









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


tqchen commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421182713



##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map, 
ConstIntBound, ObjectHash>;

Review comment:
   I think it can go both ways; given that most reference types already 
have const properties, perhaps we could remove `ObjectPtr` for simplicity









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5533: Cache ObjectPtr instead of raw pointers in bound analyzer

2020-05-06 Thread GitBox


tqchen commented on a change in pull request #5533:
URL: https://github.com/apache/incubator-tvm/pull/5533#discussion_r421182713



##
File path: include/tvm/arith/analyzer.h
##
@@ -107,6 +107,7 @@ class ConstIntBound : public ObjectRef {
  */
 class ConstIntBoundAnalyzer {
  public:
+  using BoundMapType = std::unordered_map, 
ConstIntBound, ObjectHash>;

Review comment:
   I think it can go both ways; given that most reference types already 
have const properties, perhaps we could remove `ObjectPtr` for simplicity









[GitHub] [incubator-tvm] shoubhik opened a new pull request #5536: [WIP][Do NOT Merge]V0.6.0 quantization backport

2020-05-06 Thread GitBox


shoubhik opened a new pull request #5536:
URL: https://github.com/apache/incubator-tvm/pull/5536


   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [incubator-tvm] roastduck commented on a change in pull request #5382: [TE] Fix MakeLoopNest for warp memory

2020-05-06 Thread GitBox


roastduck commented on a change in pull request #5382:
URL: https://github.com/apache/incubator-tvm/pull/5382#discussion_r421205351



##
File path: src/te/operation/op_util.cc
##
@@ -164,9 +164,21 @@ MakeLoopNest(const Stage& stage,
 value_map[iv] = dom->min;
   } else {
 runtime::ThreadScope ts = 
runtime::ThreadScope::make(bind_iv->thread_tag);
-if (stage->scope == "" || stage->scope == "warp" ||
+if (stage->scope == "" ||
 static_cast<int>(runtime::StorageScope::make(stage->scope).rank) <= ts.rank) {
   value_map[iv] = var;
+} else if (stage->scope == "warp" && ts.rank == 1) {
+  // To determine whether a thread index is inside or outside a warp, 
we need
+  // to know the thread extent. We leave a warning for now.
+  if (ts.dim_index == 0) {
+value_map[iv] = var;
+  } else {

Review comment:
   > However, it seems not the case, as your 3rd situation ends with 
incorrect code instead of an error from LowerWarpMemory(). But I don't know the 
reason.
   
   Actually I mean:
   
   1. The 1st situation has no problem, before and after this PR.
   2. The 2nd situation led to incorrect code before this PR, and correct code 
after this PR. Plus, we will see a warning after this PR.
   3. The 3rd situation is currently not supported by `lower_warp_memory`, 
which will lead to an error. Plus, we will see a warning after this PR.
   
   Either way, I still have no idea why the 2nd situation ended up with 
incorrect code.
   
   > I see. Does your simplified case trigger the warning? If so, checking for 
the warning can guard your changes from being accidentally deleted or skipped.
   
   Actually no. Let's have a look at some details.
   
   Here's my simplified example:
   
   ```python
   import tvm
   import topi
   import numpy as np
   
   from tvm import te
   
   n = 32
   A = te.placeholder((n, n), name='A', dtype="float32")
   C = te.compute((n, n), lambda i, j: A(i, (j + 1) % n), name='C')
   
   s = te.create_schedule(C.op)
   th_y = te.thread_axis("threadIdx.y")
   th_x = te.thread_axis("threadIdx.x")
   B = s.cache_read(A, "warp", [C])
   ci, cj = C.op.axis
   bi, bj = B.op.axis
   s[C].bind(ci, th_y)
   s[C].bind(cj, th_x)
   s[B].compute_at(s[C], ci)
   s[B].bind(bj, th_x)
   
   print(tvm.lower(s, [A, C]))
   ```
   
   And here's the result, which is unexpectedly correct before this PR.
   
   ```
   PrimFunc([A, C]) attrs={"tir.noalias": (bool)1, "global_symbol": "main"} {
 // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 32
 // attr [A.warp] storage_scope = "warp"
 allocate A.warp[float32 * 32]
 // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 32
 A.warp[threadIdx.x] = A[((threadIdx.y*32) + threadIdx.x)]
 // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 32
 C[((threadIdx.y*32) + threadIdx.x)] = A.warp[floormod((threadIdx.x + 1), 
32)]
   }
   ```
   
   The `if (stage->scope == "warp" && ts.rank == 1)` branch in the modified 
code is only triggered once, where `ts.dim_index == 0`. I don't know why the 
`ts.dim_index == 1` `IterVar` is ignored.
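A quick arithmetic check of the lowered indexing above (plain Python; this just verifies the index algebra, it does not run TVM):

```python
# The lowered IR stores C[ty*32 + tx] = A.warp[(tx + 1) % 32], where A.warp
# was filled from the flat element A[ty*32 + tx]. Verify this flat indexing
# matches the intended C[ty][tx] = A[ty][(tx + 1) % 32] for n == 32.
n = 32
for ty in range(n):
    for tx in range(n):
        src_flat = ty * n + (tx + 1) % n   # element the lowered code reads
        assert divmod(src_flat, n) == (ty, (tx + 1) % n)
```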









[GitHub] [incubator-tvm] spectrometerHBH commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-05-06 Thread GitBox


spectrometerHBH commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-624997033


   Relay's type printing requires attr and RelayExpr printing, so it is not 
straightforward to make type printing independent. Meanwhile, Relay's attr 
printing now overlaps with PrimExpr printing in TIR. We'd better combine them in 
later PRs.







[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5526: [Rust] Add first stage of updating and rewriting Rust bindings.

2020-05-06 Thread GitBox


jroesch commented on a change in pull request #5526:
URL: https://github.com/apache/incubator-tvm/pull/5526#discussion_r421208546



##
File path: rust/tvm-sys/src/byte_array.rs
##
@@ -0,0 +1,64 @@
+use std::os::raw::c_char;
+
+use crate::ffi::TVMByteArray;
+
+/// A struct holding TVM byte-array.

Review comment:
   This comment also comes from the original code. 









[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5523: [Refactor][std::string --> String] IRModule is updated with String

2020-05-06 Thread GitBox


ANSHUMAN87 commented on pull request #5523:
URL: https://github.com/apache/incubator-tvm/pull/5523#issuecomment-625007100


   @tqchen : Your comment is handled now, Thanks!
   @jroesch : The issue is resolved, test cases are passing now! 







[GitHub] [incubator-tvm] roastduck commented on a change in pull request #5498: [Optimization] Warp level reduction support for CUDA

2020-05-06 Thread GitBox


roastduck commented on a change in pull request #5498:
URL: https://github.com/apache/incubator-tvm/pull/5498#discussion_r421221319



##
File path: src/tir/transforms/lower_warp_memory.cc
##
@@ -265,10 +265,11 @@ class WarpAccessRewriter : protected StmtExprMutator {
   << op->index << " local_index=" << local_index;
   PrimExpr load_value = LoadNode::make(
   op->dtype, op->buffer_var, local_index, op->predicate);
+  PrimExpr mask = IntImm(DataType::UInt(32), 0x);

Review comment:
   Please have a look on this example (modified from a test for 
`lower_warp_memory`):
   
   ```python
   import tvm
   import topi
   import numpy as np
   
   from tvm import te
   
   A = te.placeholder((128,), name='A', dtype="float32")
   B = te.compute((100,), lambda i: A[i // 32 * 32 + (i + 1) % 32], name='B')
   
   cuda_target = tvm.target.create("cuda")
   assert cuda_target.thread_warp_size == 32
   with cuda_target:
   s = te.create_schedule(B.op)
   AA = s.cache_read(A, "warp", [B])
   xo, xi = s[B].split(B.op.axis[0], 64)
   xi0, xi1 = s[B].split(xi, factor=32)
   tx = te.thread_axis("threadIdx.x")
   s[B].bind(xi1, tx)
   s[B].bind(xo, te.thread_axis("blockIdx.x"))
   s[AA].compute_at(s[B], xo)
   xo, xi = s[AA].split(s[AA].op.axis[0], 32)
   s[AA].bind(xi, tx)
   
   print(tvm.build(s, [A, B], "cuda").imported_modules[0].get_source())
   ```
   
   The generated code is:
   
   ```cuda
   extern "C" __global__ void default_function_kernel0(void* __restrict__ A, 
void* __restrict__ B) {
     float A_warp[2];
     for (int ax0_outer = 0; ax0_outer < 2; ++ax0_outer) {
       A_warp[(ax0_outer)] = ((float*)A)[((((((int)blockIdx.x) * 64) + (ax0_outer * 32)) + ((int)threadIdx.x)))];
     }
     for (int i_inner_outer = 0; i_inner_outer < 2; ++i_inner_outer) {
       if ((((((int)blockIdx.x) * 64) + (i_inner_outer * 32)) + ((int)threadIdx.x)) < 100) {
         ((float*)B)[((((((int)blockIdx.x) * 64) + (i_inner_outer * 32)) + ((int)threadIdx.x)))] = __shfl(A_warp[(i_inner_outer)], ((((int)threadIdx.x) + 1) & 31), 32);
       }
     }
   }
   ```
   
   Here `__shfl` is inside an `if`. Please check whether `__shfl_sync` 
can deal with this example.
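To make the divergence concern concrete, here is a small Python emulation (lane activity only; real CUDA warp semantics are assumed away) of the guarded `__shfl` above for `blockIdx.x == 1`, `i_inner_outer == 1`:

```python
# Emulate which lanes enter the guarded branch and whether an active lane's
# __shfl source lane ((tx + 1) & 31) is itself active.
warp_size = 32
active = [((1 * 64) + (1 * 32) + tx) < 100 for tx in range(warp_size)]
reads_inactive = [active[tx] and not active[(tx + 1) & 31]
                  for tx in range(warp_size)]
assert sum(active) == 4        # only lanes 0..3 pass the bound check
assert reads_inactive[3]       # lane 3 would read from inactive lane 4
assert not any(reads_inactive[:3])
```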









[GitHub] [incubator-tvm] yongfeng-nv commented on a change in pull request #5382: [TE] Fix MakeLoopNest for warp memory

2020-05-06 Thread GitBox


yongfeng-nv commented on a change in pull request #5382:
URL: https://github.com/apache/incubator-tvm/pull/5382#discussion_r421228975



##
File path: src/te/operation/op_util.cc
##
@@ -164,9 +164,21 @@ MakeLoopNest(const Stage& stage,
 value_map[iv] = dom->min;
   } else {
 runtime::ThreadScope ts = 
runtime::ThreadScope::make(bind_iv->thread_tag);
-if (stage->scope == "" || stage->scope == "warp" ||
+if (stage->scope == "" ||
 static_cast<int>(runtime::StorageScope::make(stage->scope).rank) <= ts.rank) {
   value_map[iv] = var;
+} else if (stage->scope == "warp" && ts.rank == 1) {
+  // To determine whether a thread index is inside or outside a warp, 
we need
+  // to know the thread extent. We leave a warning for now.
+  if (ts.dim_index == 0) {
+value_map[iv] = var;
+  } else {

Review comment:
   I modified the simplified example a little to bind threadIdx.y in the 
warp stage to let threadIdx.y pass through the new code.
   
   ```python
   import tvm
   import topi
   import numpy as np
   
   from tvm import te
   
   n = 32
   A = te.placeholder((2, n, n), name='A', dtype="float32")
   C = te.compute((2, n, n), lambda x, i, j: A(x, i, (j + 1) % n), name='C')
   
   s = te.create_schedule(C.op)
   bk_x = te.thread_axis("blockIdx.x")
   th_y = te.thread_axis("threadIdx.y")
   th_x = te.thread_axis("threadIdx.x")
   B = s.cache_read(A, "warp", [C])
   cx, ci, cj = C.op.axis
   bx, bi, bj = B.op.axis
   # s[C].bind(ci, th_y)
   s[C].bind(cj, th_x)
   s[C].bind(cx, bk_x)
   s[B].compute_at(s[C], cx)
   s[B].bind(bi, th_y)
   s[B].bind(bj, th_x)
   
   print(tvm.lower(s, [A, C]))
   func = tvm.build(s, [A, C], target="cuda", name='tid')
   print(func.imported_modules[0].get_source())
   ```
   
   The three situations make a good summary.
   The 1st one already has at least one test in 
tests/python/unittest/test_tir_transform_lower_warp_memory.py.
   I hope the above code can lock down the 2nd situation, and probably the error 
for the 3rd one by reducing threadIdx.x's extent.
   
   Warp has been special-cased in several places, e.g. in bound.cc and here 
before this PR.  I tried to push back against adding more special-case code, 
but I am OK to accept the current change.  Please try to add tests.
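For what it's worth, the warp-extent constraint discussed in this thread can be sketched as a hypothetical helper (the name `fits_in_warp` and the exact rule are illustrative assumptions, not TVM code):

```python
# Hypothetical sketch: a threadIdx.x binding can be mapped onto warp lanes
# only when its extent is a positive divisor of the warp size; a larger
# extent corresponds to the unsupported 3rd situation.
def fits_in_warp(extent, warp_size=32):
    return 0 < extent <= warp_size and warp_size % extent == 0

assert fits_in_warp(32)
assert fits_in_warp(16)      # reduced extent, still a divisor
assert not fits_in_warp(64)  # larger than a warp
```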









[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-06 Thread GitBox


FrozenGene commented on a change in pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#discussion_r421245002



##
File path: src/runtime/hexagon/sim/driver/CMakeLists.txt
##
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+project(SIM_DEV C CXX)
+cmake_minimum_required(VERSION 3.0.2)
+
+set(CMAKE_SYSTEM_NAME "Linux")
+
+if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/config.cmake)
+  include(${CMAKE_CURRENT_BINARY_DIR}/config.cmake)
+endif()
+
+set(EXTRA_CXX_FLAGS
+  "-O2"
+  "-Wno-format"
+  "-mhvx -mhvx-length=128b"
+  "-mv66"

Review comment:
   Do you think it is better to provide an option for users, with the 
default values set to `-mhvx-length=128b` and `-mv66`? For example, some users 
may want to use a `64b` HVX length and `-mv60`.

##
File path: src/runtime/hexagon/sim/driver/pthread.h
##
@@ -0,0 +1,96 @@
+/*

Review comment:
   Overall comment: do we need to use the low-level pthread API? Could we use 
C++11 threads and port our TVM CPU thread pool implementation if we want to 
support parallelism on Hexagon? Do we have some special concern that forces us 
to use the pthread API?

##
File path: src/runtime/hexagon/sim/driver/CMakeLists.txt
##
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+project(SIM_DEV C CXX)
+cmake_minimum_required(VERSION 3.0.2)
+
+set(CMAKE_SYSTEM_NAME "Linux")
+
+if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/config.cmake)
+  include(${CMAKE_CURRENT_BINARY_DIR}/config.cmake)
+endif()
+
+set(EXTRA_CXX_FLAGS
+  "-O2"
+  "-Wno-format"
+  "-mhvx -mhvx-length=128b"
+  "-mv66"
+  "-stdlib=libc++"
+)
+
+set(EXTRA_LINK_FLAGS
+  "-stdlib=libc++"

Review comment:
   I hit a symbol-not-found error (related to a `pthread` symbol, if I 
remember correctly). The fix was to use `-stdlib=libstdc++`. The complete 
command is ` -G0 -ldl 
-stdlib=libstdc++ -Wl,--force-dynamic -Wl,-E -Wl,--whole-archive -lm `, 
which is the same as your original 
pr:https://github.com/kparzysz-quic/tvm/blob/b4e5960bb4f804e42c0abe60045621ae1d075f27/src/runtime/hexagon/sim/driver/sim_device.cc#L38.
 Now we change to `libc++`; could you explain how this is solved?

##
File path: src/runtime/hexagon/sim/driver/sim_device.cc
##
@@ -0,0 +1,573 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*
+  Required options:
+-ldl -G0  For dlinit/dlopen/dlclose.
+-Wl,--force-dynamic   Make this a dynamic executable (with dynamic
+  symbol table).
+-Wl,-EExport all defined symbols as dynamic.
+-Wl,--whole-archive   Link the entire contents of libc.
+-mhvx -mhvx-length=12
