tqchen commented on a change in pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#discussion_r452946065



##########
File path: apps/wasm-graphcompiler-tvm/README.md
##########
@@ -0,0 +1,188 @@
+# WebAssembly GraphCompiler for Deep Learning Framework with TVM Runtime
+
+#### Experimental notice: This project is still *experimental* and only serves 
as a proof of concept for running deep learning frameworks (such like 
[MindSpore](https://github.com/mindspore-ai/mindspore)) on [WebAssembly 
runtime](https://github.com/bytecodealliance/wasmtime) with [TVM 
stack](https://tvm.apache.org/).
+
+- [WebAssembly GraphCompiler for Deep Learning Framework with TVM 
Runtime](#webassembly-graphcompiler-for-deep-learning-framework-with-tvm-runtime)
+    - [Motivation](#motivation)
+    - [Framework Landscape](#framework-landscape)
+    - [Project Status](#project-status)
+    - [PoC Guidelines](#poc-guidelines)
+        - [Pre-installation](#pre-installation)
+        - [Build ResNet50 model](#build-resnet50-model)
+        - [Build wasm-graphcompiler-tvm 
package](#build-wasm-graphcompiler-tvm-package)
+        - [Test](#test)
+    - [Future Work](#future-work)
+        - [More networks support](#more-networks-support)
+        - [Performance benchmark](#performance-benchmark)
+        - [Native TVM Rust runtime support](#native-tvm-rust-runtime-support)
+    - [Appendix](#appendix)
+        - [System packages install](#system-packages-install)
+    - [Contribution](#contribution)
+
+## Motivation
+
+<img src="https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png" alt="TVM hardware support" width="600"/>
+
+As demonstrated in TVM runtime 
[tutorials](https://tvm.apache.org/docs/tutorials/relay_quick_start.html), TVM 
already supports WASM as the optional hardware backend, so we can leverage the 
features of WebAssembly (portability, security) and TVM runtime 
(domain-specific, optimization) to build a flexible and auto-optimized graph 
compiler for all deep learning frameworks.
+
+## Framework Landscape
+
+The figures below demonstrate the whole landscape of running deep learning 
frameworks on WASM runtime with TVM compiler stack.
+
+* WASM graph compiler stack
+    ```
+       _ _ _ _ _ _ _ _ _ _        _ _ _ _ _ _ _        _ _ _ _ _ _ _ _ _ _ _ _
+      |                   |      |             |      |                       |
+      |  Framework Model  | ---> |  ONNX Model | ---> |  TVM Relay Python API |
+      |_ _ _ _ _ _ _ _ _ _|      |_ _ _ _ _ _ _|      |_ _ _ _ _ _ _ _ _ _ _ _|
+                                                                 ||
+                                                                 \/
+                 _ _ _ _ _ _ _ _ _ _ _                  _ _ _ _ _ _ _ _ _ _ _
+                |                     |                |                     |
+                | WASM Graph Compiler |                |  TVM Compiler Stack |
+                |    (TVM runtime)    |                |_ _ _ _ _ _ _ _ _ _ _|

Review comment:
       I see, perhaps we should rename it to the app code, since we are writing an app using the TVM Rust runtime?

##########
File path: apps/wasm-graphcompiler-tvm/wasm-graphcompiler/Cargo.toml
##########
@@ -0,0 +1,26 @@
+[package]
+name = "wasm-graphcompiler-tvm"
+version = "0.1.0"
+authors = ["leonwanghui <[email protected]>"]

Review comment:
       Let us rename `authors` to "TVM contributors". The rationale is that the same code will be contributed to by multiple contributors, and the contributions are already recorded in the commit history.

##########
File path: apps/wasm-graphcompiler-tvm/README.md
##########
@@ -0,0 +1,191 @@
+# WebAssembly GraphCompiler for Deep Learning Framework with TVM Runtime
+
+#### Experimental notice: This project is still *experimental* and only serves 
as a proof of concept for running deep learning frameworks (such like 
[MindSpore](https://github.com/mindspore-ai/mindspore)) on [WebAssembly 
runtime](https://github.com/bytecodealliance/wasmtime) with [TVM 
stack](https://tvm.apache.org/).
+
+- [WebAssembly GraphCompiler for Deep Learning Framework with TVM 
Runtime](#webassembly-graphcompiler-for-deep-learning-framework-with-tvm-runtime)
+    - [Motivation](#motivation)
+    - [Framework Landscape](#framework-landscape)
+    - [Project Status](#project-status)
+    - [PoC Guidelines](#poc-guidelines)
+        - [Pre-installation](#pre-installation)
+        - [Build ResNet50 model](#build-resnet50-model)
+        - [Build wasm-graphcompiler-tvm 
package](#build-wasm-graphcompiler-tvm-package)
+        - [Test](#test)
+    - [Future Work](#future-work)
+        - [More networks support](#more-networks-support)
+        - [Performance benchmark](#performance-benchmark)
+        - [Native TVM Rust runtime support](#native-tvm-rust-runtime-support)
+    - [Appendix](#appendix)
+        - [System packages install](#system-packages-install)
+    - [Contribution](#contribution)
+
+## Motivation
+
+<img src="https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png" alt="TVM hardware support" width="600"/>
+
+As demonstrated in TVM runtime 
[tutorials](https://tvm.apache.org/docs/tutorials/relay_quick_start.html), TVM 
already supports WASM as the optional hardware backend, so we can leverage the 
features of WebAssembly (portability, security) and TVM runtime 
(domain-specific, optimization) to build a flexible and auto-optimized graph 
compiler for all deep learning frameworks.
+
+## Framework Landscape
+
+The figures below demonstrate the whole landscape of running deep learning 
frameworks on WASM runtime with TVM compiler stack.
+
+* WASM graph compiler stack
+    ```
+       _ _ _ _ _ _ _ _ _ _        _ _ _ _ _ _ _        _ _ _ _ _ _ _ _ _ _ _ _
+      |                   |      |             |      |                       |
+      |  Framework Model  | ---> |  ONNX Model | ---> |  TVM Relay Python API |
+      |_ _ _ _ _ _ _ _ _ _|      |_ _ _ _ _ _ _|      |_ _ _ _ _ _ _ _ _ _ _ _|
+                                                                 ||
+                                                                 \/
+                 _ _ _ _ _ _ _ _ _ _ _                  _ _ _ _ _ _ _ _ _ _ _
+                |                     |                |                     |
+                | WASM Graph Compiler |                |  TVM Compiler Stack |
+                |    (TVM runtime)    |                |_ _ _ _ _ _ _ _ _ _ _|
+                |_ _ _ _ _ _ _ _ _ _ _|                          ||
+                          ||                                     \/
+        _ _ _ _ _ _ _ _   ||   _ _ _ _ _ _ _ _ _ _            _ _ _ _ _
+       |               |  \/  |                   |  llvm-ar |         |
+       | *.graph.wasm  | <--- | libgraph_wasm32.a | <------- | graph.o |
+       |_ _ _ _ _ _ _ _|      |_ _ _ _ _ _ _ _ _ _|          |_ _ _ _ _|
+    ```
+
+* WASM graph runtime
+    ```
+         _ _ _ _ _ _ _ _ _ _ _
+        |                     |
+        | WASM Graph Runtime  |
+        |   (WASM runtime)    |
+        |_ _ _ _ _ _ _ _ _ _ _|
+                  ||
+           _ _ _ _\/_ _ _ _
+          |                |
+          |  *.graph.wasm  |
+          |_ _ _ _ _ _ _ _ |
+    ```
+
+## Project Status
+
+This project should be considered **experimental** at the very early stage, 
all rich features are under active development. Here is the current operator 
support matrix:
+
+| Model Name | Status |
+| ---------- | ------ |
+| ResNet50 | ✔️ |
+| LeNet | <center>&mdash;</center> |
+
+**NOTICE**: Currently this project is ONLY tested on Ubuntu system, so `Ubuntu 
16.04+` should be prepared as the testing environment.
+
+## PoC Guidelines
+
+### Pre-installation
+
+* Rust
+
+    Before running this demo, please make sure 
[Rust](#system-packages-install) has been installed.
+
+    After Rust installed, execute the code below to add `wasm32-wasi` target:
+    ```shell
+    rustup target add wasm32-wasi
+    ```
+
+* TVM
+
+    Please follow TVM 
[installations](https://tvm.apache.org/docs/install/index.html), `export 
TVM_HOME=/path/to/tvm` and add `libtvm_runtime` to your `LD_LIBRARY_PATH`.

Review comment:
       Skip the TVM part, as there are already instructions for doing so.
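For reference, the environment setup described in the quoted hunk amounts to roughly the following sketch (`/path/to/tvm` is the placeholder kept from the quoted README text; adjust it to your local TVM build directory):

```shell
# Point TVM_HOME at a local TVM checkout (placeholder path kept from the README).
export TVM_HOME=/path/to/tvm
# libtvm_runtime is typically produced under the build directory after compilation.
export LD_LIBRARY_PATH="$TVM_HOME/build:$LD_LIBRARY_PATH"
echo "$LD_LIBRARY_PATH"
```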

##########
File path: apps/wasm-graphcompiler-tvm/README.md
##########
@@ -0,0 +1,191 @@
+# WebAssembly GraphCompiler for Deep Learning Framework with TVM Runtime
+

Review comment:
       Shall we rename it to wasm-standalone, since what we are doing is compiling to a standalone WASM application?

##########
File path: apps/wasm-graphcompiler-tvm/wasm-graphcompiler/Cargo.toml
##########
@@ -0,0 +1,26 @@
+[package]
+name = "wasm-graphcompiler-tvm"

Review comment:
       wasm-standalone-example

##########
File path: apps/wasm-graphcompiler-tvm/README.md
##########
@@ -0,0 +1,191 @@
+# WebAssembly GraphCompiler for Deep Learning Framework with TVM Runtime
+
+#### Experimental notice: This project is still *experimental* and only serves 
as a proof of concept for running deep learning frameworks (such like 
[MindSpore](https://github.com/mindspore-ai/mindspore)) on [WebAssembly 
runtime](https://github.com/bytecodealliance/wasmtime) with [TVM 
stack](https://tvm.apache.org/).

Review comment:
       I am not sure MindSpore is relevant. If there are examples that interface with MindSpore later, we could add a link to that example in the README.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

