ccjoechou commented on a change in pull request #48:
URL: https://github.com/apache/tvm-rfcs/pull/48#discussion_r787267709



##########
File path: rfcs/0048-BYOC-Marvell-ML-accelerator-integration.md
##########
@@ -0,0 +1,547 @@
+- Feature Name: (fill me in with a unique identifier, `my_awesome_feature`)
+- Start Date: (fill me in with today's date, YYYY-MM-DD)
+- RFC PR: [apache/tvm-rfcs#0000](https://github.com/apache/tvm-rfcs/pull/0000)
+- GitHub Issue: [apache/tvm#0000](https://github.com/apache/tvm/issues/0000)
+- GitHub pre-RFC PR: [apache/tvm-PR-9730](https://github.com/apache/tvm/pull/9730)
+- GitHub pre-RFC discussion: [BYOC-Marvell](https://discuss.tvm.apache.org/t/pre-rfc-byoc-marvell-ml-ai-accelerator-integration/11691)
+
+# Summary
+[summary]: #summary
+
+Integrate Marvell’s ML/AI accelerator with the TVM BYOC framework in order to bring the TVM ecosystem to Marvell customers.
+
+# Motivation
+[motivation]: #motivation
+
+Marvell MLIP is an ML/AI inference accelerator embedded in our ARM Neoverse N2-based OCTEON 10 processor.
+  We are building an easy-to-use, open software suite for our customers by integrating and utilizing TVM,
+  so that we can bring TVM's capabilities and experience to them.
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+Based on what the Marvell ML/AI inference accelerator does best, a given pre-trained network model
+is run through the TVM-Mrvl-BYOC AOT compilation and code-gen flow, as illustrated in the steps below.
+
+STEP (1) Run TVM-Mrvl-BYOC AOT ML Frontend Compilation and Mrvl-BYOC code-gen. The steps involved in this are:
+
+* Load the pre-trained network into a TVM IR graph
+
+* Do Marvell-specific layout conversions to transform the IR graph so that it meets the requirements of the accelerator
+
+* Do Marvell-specific composite merging/fusing to transform the IR graph so that it utilizes the available HW capabilities
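
The quoted steps map onto TVM's generic BYOC pass pipeline. As a minimal sketch only: the `"mrvl"` target name, the layout choices, and the empty default pattern table below are illustrative assumptions, not the actual Marvell implementation.

```python
# Hedged sketch of the generic TVM BYOC partitioning flow described above.
# The "mrvl" target name, layouts, and pattern table are assumptions.
def partition_for_mrvl(mod, pattern_table=None):
    """Convert layouts, merge composites, annotate, and partition a Relay module."""
    from tvm import relay, transform  # lazy import: requires a TVM installation

    seq = transform.Sequential([
        # Marvell-specific layout conversion (layouts here are illustrative)
        relay.transform.ConvertLayout({"nn.conv2d": ["NHWC", "OHWI"]}),
        # Merge operator groups the accelerator could fuse into composites
        relay.transform.MergeComposite(pattern_table or []),
        # Mark supported ops for the hypothetical "mrvl" external codegen
        relay.transform.AnnotateTarget("mrvl"),
        # Split the graph into accelerator regions and fallback regions
        relay.transform.PartitionGraph(),
    ])
    return seq(mod)
```

Each pass corresponds to one bullet above; the real flow would supply Marvell's actual pattern table and layout requirements.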

Review comment:
       @mbs-octoml: Thanks for replying. Please also see my comments to @areusch's reply below, including several in-line write-ups, since they may provide information relevant to your questions too. Please let me know if anything can be clarified on the TVM GitHub PR-9730 front.
   Currently, we are also running parts of the tvm/Jenkinsfile stages and their steps locally using our own Jenkins server. However, we are having problems debugging a rust/cargo issue (the tvm/scripts/task_rust.sh suite). It would be great if you could provide additional information on how to build our own "local" tvm-build package (I can git clone the current OctoML GitHub tvm-build repo) and then how to adjust the tvm/rust/Cargo.toml file to use that "local" tvm-build package.
   Also, any tips and pointers regarding how to debug the rust/cargo build would be appreciated.
   Thanks.
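
   Not an answer from the maintainers, but one standard Cargo mechanism for the "local tvm-build" question is a `[patch]` override in tvm/rust/Cargo.toml; the path below is a placeholder assumption, not a verified layout of the repos.

```toml
# Hypothetical sketch: override the crates.io tvm-build dependency with a
# local checkout. The path is an assumption; point it at your own clone.
[patch.crates-io]
tvm-build = { path = "/path/to/local/tvm-build" }
```

   For debugging the build itself, `cargo build -vv` (very verbose output) and running with `RUST_BACKTRACE=1` are the usual first steps.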




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
