imalsogreg commented on a change in pull request #7085:
URL: https://github.com/apache/tvm/pull/7085#discussion_r570241645



##########
File path: python/tvm/relay/analysis/__init__.py
##########
@@ -26,7 +26,7 @@
 from . import call_graph
 from .call_graph import CallGraph
 
-# Feature
+# # Feature

Review comment:
       Two `#`'s?

##########
File path: python/tvm/relay/build_module.py
##########
@@ -193,6 +193,18 @@ def get_params(self):
             ret[key] = value.data
         return ret
 
+@register_func("tvm.relay.build")
+def _rust_build_module(mod, target=None, target_host=None, params=None, mod_name="default"):
+    print(mod)
+    print("\n")
+    rt_mod = build(mod, target, target_host, params, mod_name).module
+    print(rt_mod)
+    print(rt_mod["default"])

Review comment:
       leftover prints, or something we want to keep?

##########
File path: rust/tvm/README.md
##########
@@ -15,221 +15,40 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# TVM Runtime Frontend Support
+# TVM
 
-This crate provides an idiomatic Rust API for [TVM](https://github.com/apache/tvm) runtime frontend. Currently this requires **Nightly Rust** and tested on `rustc 1.32.0-nightly`
+This crate provides an idiomatic Rust API for [TVM](https://github.com/apache/tvm).
+The code works on **Stable Rust** and is tested against `rustc 1.47`.
 
-## What Does This Crate Offer?
-
-Here is a major workflow
-
-1. Train your **Deep Learning** model using any major framework such as [PyTorch](https://pytorch.org/), [Apache MXNet](https://mxnet.apache.org/) or [TensorFlow](https://www.tensorflow.org/)
-2. Use **TVM** to build optimized model artifacts on a supported context such as CPU, GPU, OpenCL and specialized accelerators.
-3. Deploy your models using **Rust** :heart:
-
-### Example: Deploy Image Classification from Pretrained Resnet18 on ImageNet1k
-
-Please checkout [examples/resnet](examples/resnet) for the complete end-to-end example.
-
-Here's a Python snippet for downloading and building a pretrained Resnet18 via Apache MXNet and TVM
-
-```python
-block = get_model('resnet18_v1', pretrained=True)
-
-sym, params = relay.frontend.from_mxnet(block, shape_dict)
-# compile the model
-with relay.build_config(opt_level=opt_level):
-    graph, lib, params = relay.build(
-        sym, target, params=params)
-# save the model artifacts
-lib.save(os.path.join(target_dir, "deploy_lib.o"))
-cc.create_shared(os.path.join(target_dir, "deploy_lib.so"),
-                [os.path.join(target_dir, "deploy_lib.o")])
-
-with open(os.path.join(target_dir, "deploy_graph.json"), "w") as fo:
-    fo.write(graph.json())
-with open(os.path.join(target_dir,"deploy_param.params"), "wb") as fo:
-    fo.write(relay.save_param_dict(params))
-```
+You can find the API Documentation [here](https://tvm.apache.org/docs/api/rust/tvm/index.html).
 
-Now, we need to input the artifacts to create and run the *Graph Runtime* to detect our input cat image
-
-![cat](https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true)
+## What Does This Crate Offer?
 
-as demonstrated in the following Rust snippet
+The goal of this crate is to provide bindings to both the TVM compiler and runtime APIs.
+First train your **Deep Learning** model using any major framework such as
+[PyTorch](https://pytorch.org/), [Apache MXNet](https://mxnet.apache.org/) or [TensorFlow](https://www.tensorflow.org/).
+Then use **TVM** to build and deploy optimized model artifacts on supported devices such as CPU, GPU, OpenCL and specialized accelerators.
 
-```rust
-    let graph = fs::read_to_string("deploy_graph.json")?;
-    // load the built module
-    let lib = Module::load(&Path::new("deploy_lib.so"))?;
-    // get the global TVM graph runtime function
-    let runtime_create_fn = Function::get("tvm.graph_runtime.create", true).unwrap();
-    let runtime_create_fn_ret = call_packed!(
-        runtime_create_fn,
-        &graph,
-        &lib,
-        &ctx.device_type,
-        &ctx.device_id
-    )?;
-    // get graph runtime module
-    let graph_runtime_module: Module = runtime_create_fn_ret.try_into()?;
-    // get the registered `load_params` from runtime module
-    let ref load_param_fn = graph_runtime_module
-        .get_function("load_params", false)
-        .unwrap();
-    // parse parameters and convert to TVMByteArray
-    let params: Vec<u8> = fs::read("deploy_param.params")?;
-    let barr = TVMByteArray::from(&params);
-    // load the parameters
-    call_packed!(load_param_fn, &barr)?;
-    // get the set_input function
-    let ref set_input_fn = graph_runtime_module
-        .get_function("set_input", false)
-        .unwrap();
+The Rust bindings are composed of a few crates:
+- The [tvm](https://tvm.apache.org/docs/api/rust/tvm/index.html) crate which exposes Rust bindings to
+  both the compiler and runtime.
+- The [tvm_macros](https://tvm.apache.org/docs/api/rust/tvm/index.html) crate which provides macros
+  which generate unsafe boilerplate for TVM's data structures.
+- The [tvm_rt](https://tvm.apache.org/docs/api/rust/tvm_rt/index.html) crate which exposes Rust

Review comment:
       Just wondering as a naive reader: why is there a `tvm_rt` crate when `tvm` already exposes bindings to the runtime? (Is there _extra_ runtime stuff in `tvm_rt`, or is `tvm_rt` a subset of `tvm` for users that don't want to pull in the compiler bindings?)
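
       If `tvm_rt` is indeed the runtime-only subset, a deployment-only consumer could depend on it alone. A hypothetical `Cargo.toml` sketch (the crate name and version here are assumptions, not taken from the PR):

```toml
[dependencies]
# Runtime bindings only: load and run prebuilt artifacts without
# pulling in the compiler bindings from the full `tvm` crate.
tvm-rt = "0.1"
```

       If that is the intended split, it might be worth stating it explicitly in the README.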




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

