tqchen commented on code in PR #97:
URL: https://github.com/apache/tvm-rfcs/pull/97#discussion_r1064718054


##########
rfcs/0097-unify-packed-and-object.md:
##########
@@ -0,0 +1,677 @@
+Authors: @cloud-mxd, @junrushao,  @tqchen
+
+- Feature Name: Further Unify Packed and Object in TVM Runtime
+- Start Date: 2023-01-08
+- RFC PR: [apache/tvm-rfcs#0097](https://github.com/apache/tvm-rfcs/pull/97)
+- GitHub Issue: [apache/tvm#0000](https://github.com/apache/tvm/issues/0000)
+
+## Summary
+
+This RFC proposes to further unify PackedFunc and Object in the TVM runtime. 
The key improvements include: unifying `type_code`, solidifying AnyValue 
support for both stack and object values, opening the door to small strings and 
NLP preprocessing, and enabling universal containers.
+
+## Motivation
+
+FFI is one of the main components of TVM. We use the PackedFunc convention to 
safely type-erase values and pass them around. To support a general set of 
data structures, including those needed for compilation purposes, we also have 
the Object system, which the Packed API is aware of. 
+
+The Object system supports reference counting, dynamic type casting and 
checking, as well as structural equality/hashing/serialization in the compiler.
+Right now most things of interest are Objects, including containers 
like Map and Array, PackedFunc itself, Module, and various IR objects.
+Objects require heap allocation and reference counting, which can be optimized 
through pooling. They are suitable for most deep learning runtime needs, 
+such as containers, as long as allocations are infrequent.
+In the meantime, we still need to operate on values on the stack, specifically 
when we pass around int and float values. 
+It can be wasteful to invoke heap allocation, or even pooling, when the 
operation is meant to be low cost. As a result, the FFI mechanism also provides 
additional ways to pass **stack values** directly around without creating 
objects.
+
+This post summarizes lessons and needs from our own experience and from other 
related projects around the overall TVM FFI and Object system, and seeks to 
use these lessons to further solidify the current system. We summarize some of 
the needs and observations as follows:
+
+### N0: First class stack small string and AnyValue
+
+**Lesson from matxscript:** Data preprocessing is an important part of an ML 
pipeline. Preprocessing in NLP involves strings and containers. Additionally, 
when translating programs written by users (in Python), there may not be 
sufficient type annotations. We commonly get one of the programs below:
+
+```cpp
+// This can be part of data processing code translated 
+// from user that comes without type annotation
+AnyValue unicode_split_any(const AnyValue& word) {
+  List ret;
+  for (size_t i = 0; i < word.size(); ++i) {
+     AnyValue res = word[i];
+     ret.push_back(res);   
+  }
+  return ret;
+}
+// This is better-typed execution code.
+// Note that word[i] returns a UCS4String container to match Python semantics.
+// UCS4String stores unicode code points as fixed-length 4-byte values to ease
+// random access to the elements.
+List unicode_split(const UCS4String& word) {
+  List ret;
+  for (size_t i = 0; i < word.size(); ++i) {
+     UCS4String res = word[i];
+     ret.push_back(res);   
+  }
+  return ret;
+}
+```
+
+- Need a base AnyValue to support both stack values and objects.
+    - This provides a safety net for translation.
+- The AnyValue needs to accommodate small strings (on stack) to enable fast 
string processing. Specifically, note that this particular example creates a 
`UCS4String res` for every character of the word. If we run a heap allocation 
for each invocation, or even do reference counting, this can become expensive.

Review Comment:
   Yes, this does generalize to a need to support stack values. 
   However, that also depends on whether such scenarios are useful in the 
   settings we consider. In this particular case, we have not yet seen needs 
   for other stack values. But the design naturally generalizes to them.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
