tqchen commented on code in PR #97:
URL: https://github.com/apache/tvm-rfcs/pull/97#discussion_r1064928967


##########
rfcs/0097-unify-packed-and-object.md:
##########
@@ -0,0 +1,712 @@
+Authors: @cloud-mxd, @junrushao,  @tqchen
+
+- Feature Name: Further Unify Packed and Object in TVM Runtime
+- Start Date: 2023-01-08
+- RFC PR: [apache/tvm-rfcs#0097](https://github.com/apache/tvm-rfcs/pull/97)
+- GitHub Issue: [apache/tvm#0000](https://github.com/apache/tvm/issues/0000)
+
+## Summary
+
+This RFC proposes to further unify PackedFunc and Object in the TVM runtime. Key improvements include: unifying `type_code`, solidifying AnyValue support for both stack and object values, opening the door to small-string support and NLP preprocessing, and enabling a universal container.
+
+## Motivation
+
+FFI is one of the main components of TVM. We use the PackedFunc convention to safely type-erase values and pass them around. To support a general set of data structures for compilation purposes, we also have an Object system, which the PackedFunc API is aware of.
+
+Object supports reference counting, dynamic type casting and checking, as well as structural equality/hashing/serialization in the compiler.
+Right now, most of the things of interest are Objects, including containers like Map and Array, PackedFunc itself, Module, and various IR objects.
+Object requires heap allocation and reference counting, which can be optimized through pooling. Objects are suitable for most deep learning runtime needs, such as containers, as long as allocations are infrequent.
+In the meantime, we still need to operate on values on the stack, specifically when we pass around int and float values.
+It can be wasteful to invoke heap allocation (or even pooling) when the operations are meant to be low cost. As a result, the FFI mechanism also provides ways to pass **stack values** around directly, without an Object.
+
+This post summarizes lessons from our own experience and other related projects around the overall TVM FFI and Object system, and seeks to use these lessons to further solidify the current design. We summarize some of the needs and observations as follows:
+
+### N0: First class stack small string and AnyValue
+
+Data preprocessing is an important part of the ML pipeline. Preprocessing in NLP involves strings and containers. Additionally, when translating programs written by users (in Python), there may not be sufficient type annotations.
+
+The programs below come from real production scenario code from matxscript in NLP preprocessing:
+
+```cpp
+// This can be part of data processing code translated 
+// from user that comes without type annotation
+AnyValue unicode_split_any(const AnyValue& word) {
+  List ret;
+  for (size_t i = 0; i < word.size(); ++i) {
+     AnyValue res = word[i];
+     ret.push_back(res);   
+  }
+  return ret;
+}
+// This is a better-typed version of the same routine.
+// Note that word[i] returns a UCS4String container to match Python semantics.
+// UCS4String stores Unicode code points in fixed-length 4-byte values to ease
+// random access to the elements.
+List<UCS4String> unicode_split(const UCS4String& word) {
+  List<UCS4String> ret;
+  for (size_t i = 0; i < word.size(); ++i) {
+     UCS4String res = word[i];
+     ret.push_back(res);   
+  }
+  return ret;
+}
+```
+We would like to highlight a few key points by observing the above programs:
+- We need a base AnyValue to support both stack values and objects.
+    - This provides a safety net for translation.
+- The AnyValue needs to accommodate small strings (on stack) to enable fast string processing. Specifically, note that this particular example creates a `UCS4String res` for every character of the word. If we run a heap allocation for each invocation, or even do reference counting, this can become expensive. The same principle also generalizes to the need to accommodate fast processing of other on-stack values.
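+
+To make the small-string point concrete, here is a minimal sketch of the idea (all names here are illustrative, not the proposed TVM API): a tagged value that keeps short strings in an inline on-stack buffer and falls back to heap allocation only for long strings.
+
```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>

// Illustrative small-string AnyValue sketch (hypothetical names).
// Short strings live entirely inside the value, so iterating over the
// characters of a word never touches the heap or a refcount.
struct SmallAny {
  enum class Tag : uint8_t { kInt, kSmallStr, kHeapStr };
  static constexpr std::size_t kInlineCap = 14;  // bytes of inline storage

  Tag tag;
  union {
    int64_t v_int;
    char v_small[kInlineCap + 1];  // inline buffer, NUL-terminated
    std::string* v_heap;           // heap fallback for long strings
  };

  explicit SmallAny(int64_t v) : tag(Tag::kInt), v_int(v) {}
  explicit SmallAny(const char* s) {
    std::size_t n = std::strlen(s);
    if (n <= kInlineCap) {
      tag = Tag::kSmallStr;
      std::memcpy(v_small, s, n + 1);  // no heap allocation, no refcount
    } else {
      tag = Tag::kHeapStr;
      v_heap = new std::string(s);
    }
  }
  SmallAny(const SmallAny&) = delete;  // keep the sketch simple: no copies
  ~SmallAny() {
    if (tag == Tag::kHeapStr) delete v_heap;
  }
  const char* c_str() const {
    return tag == Tag::kSmallStr ? v_small : v_heap->c_str();
  }
};
```
+
+In this sketch, every single-character `UCS4String`-style result in the `unicode_split` example would take the inline path, avoiding per-character allocations.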
+
+
+While it is possible to rewrite the program with stronger typing and get more efficient code, it is important to acknowledge the need for efficient type-erased runtime support (with minimum overhead), especially given that many ML users come from Python.
+
+### N1: Universal Container
+
+In the above example, it is important to note that the container `List` should be able to hold any value. While it is possible to provide different variants of specialized containers (such as `vector<int>`), to interact with a language like Python it would be nice to have a single universal container across the codebase. We have also experienced similar issues in our compilation stack. For example, while it is possible to use Array to hold IR nodes such as Expr, we cannot use it to hold POD int values, or other POD data types such as DLDataType.
+
+Having an efficient universal container helps to simplify conversions across languages as well. For example, a list from Python can be turned into a single container without worrying about its content type. The execution runtime will also be able to directly leverage the universal container to support all possible cases that a developer might write.
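+
+As a rough illustration of what a universal container enables (the names and the use of `std::variant` are hypothetical stand-ins, not the proposed AnyValue layout), a single `List` can hold ints, floats, and strings without specialized variants:
+
```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

// Hypothetical sketch: a type-erased element so one container covers
// all element types, mirroring how a Python list converts as a whole.
using Any = std::variant<int64_t, double, std::string>;

// A single universal List; no vector<int> / vector<double> variants needed.
struct List {
  std::vector<Any> items;
  void push_back(Any v) { items.push_back(std::move(v)); }
  std::size_t size() const { return items.size(); }
  const Any& operator[](std::size_t i) const { return items[i]; }
};
```
+
+A heterogeneous Python list such as `[3, 2.5, "hi"]` maps to one such container without inspecting element types up front.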
+
+### N2: Further Unify POD Value, Object and AnyValue
+
+TVM currently does have an AnyValue. Specifically, `TVMRetValue` is used to hold the managed result of a C++ PackedFunc return and can serve as an AnyValue. Additionally, if the value is an object, `ObjectRef` serves as a nice representation that comes with various mechanisms, including structural equality and hashing.
+We can adopt a process called [boxing](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/types/boxing-and-unboxing) that enables most of the runtime containers to store values as objects.
+If we create a boxed Object for each stack value type (e.g. Integer to represent int), we will be able to effectively represent every value as an Object as well.
+Both TVMRetValue and Object leverage a code field at the beginning of the data structure to identify the type. TVMRetValue's code is statically assigned; Object's code contains a statically assigned segment for runtime objects and a dynamically assigned segment (indexed by type_key) for other objects.
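+
+A minimal sketch of the boxing idea, assuming an illustrative object layout with a type-index code field at the start (the real TVM object header and type-index assignment differ):
+
```cpp
#include <cassert>
#include <cstdint>
#include <memory>

// Hypothetical Object base: a code field at the start identifies the type.
struct Object {
  uint32_t type_index;
  explicit Object(uint32_t idx) : type_index(idx) {}
  virtual ~Object() = default;
};

// Illustrative statically assigned code for the boxed integer type.
constexpr uint32_t kIntegerTypeIndex = 1;

// Boxed counterpart of a stack int, so containers can hold it as an Object.
struct IntegerObj : Object {
  int64_t value;
  explicit IntegerObj(int64_t v) : Object(kIntegerTypeIndex), value(v) {}
};

// Box: lift a stack int into a managed object.
std::shared_ptr<Object> Box(int64_t v) {
  return std::make_shared<IntegerObj>(v);
}

// Unbox: check the type code before downcasting back to the stack value.
int64_t Unbox(const std::shared_ptr<Object>& obj) {
  assert(obj->type_index == kIntegerTypeIndex);
  return static_cast<IntegerObj*>(obj.get())->value;
}
```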
+
+There are two interesting regimes of operation that come with ObjectRef and AnyValue.
+
+- R0: On one hand, if we are operating in a regime with no need for frequent stack value operations, it is desirable to simply use Object. An ObjectRef is more compact in a register (the size of a pointer, which costs 8 bytes on modern 64-bit machines and 4 bytes on 32-bit machines), and it can obtain the underlying container pointer easily for weak references:
+    
+    ```cpp
+    void ObjectOperation(ObjectRef obj) {
+      if (const IntImmNode* int_ptr = obj.as<IntImmNode>()) {
+        LOG(INFO) << int_ptr->value;
+      }
+    }
+    ```
+    
+- R1: On the other hand, when we operate on frequent processing that is also not well-typed (as in the `unicode_split` example), it is important to also support an AnyValue that comes with stack value support.
+
+As a point of reference, Python uses objects as the base for everything, but that indeed creates overhead for str and int (which we seek to eliminate). Java and C# support both stack values and their object counterparts.
+Right now we have both mechanisms. It would be **desirable to further unify Object and AnyValue** to support both R0 and R1. Additionally, it would be nice to have automatic conversions if we decide that both mechanisms are supported. Say a caller passes in a boxed int value; the callee should be able to easily get the int out of it (or treat it as an int) without explicit casting. Then the same routine can be implemented via either R0 or R1 in a way that is transparent to the caller.
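+
+The transparent-access requirement can be sketched as follows; `AnyView`, `BoxedInt`, and `AsInt` are hypothetical names used only for illustration, not proposed API:
+
```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <utility>

// Hypothetical boxed integer (stand-in for a full boxed Object).
struct BoxedInt { int64_t value; };

// Hypothetical unified view: holds either a raw stack int or a boxed int,
// and yields an int either way, so the callee never casts explicitly.
struct AnyView {
  enum class Tag { kStackInt, kBoxedInt } tag;
  int64_t v_int = 0;
  std::shared_ptr<BoxedInt> v_box;

  explicit AnyView(int64_t v) : tag(Tag::kStackInt), v_int(v) {}
  explicit AnyView(std::shared_ptr<BoxedInt> b)
      : tag(Tag::kBoxedInt), v_box(std::move(b)) {}

  // Unified accessor: automatic conversion from either representation.
  int64_t AsInt() const {
    return tag == Tag::kStackInt ? v_int : v_box->value;
  }
};

// The callee is written once and works under both R0 and R1 calling styles.
int64_t AddOne(const AnyView& v) { return v.AsInt() + 1; }
```
+
+The same `AddOne` routine accepts a stack int or a boxed int without the caller or callee doing any explicit unboxing.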
+
+- This is also important for compilers and runtimes, as different compilers and runtimes might have their own considerations when operating under R0/R1.
+
+## Guide-level explanation and Design Goals
+
+We have the following design goals:
+
+- G0: Automatic switching between object-focused scenarios and stack-mixed scenarios that require AnyValue.
+- G1: Enable efficient string processing, specifically small-string support for NLP use cases.
+- G2: Enable an efficient universal container (e.g. a common container for List/Array that stores everything).

Review Comment:
   This is mainly considering the state of the transition. Array in TVM is currently immutable (closer to a tuple), whereas List is mutable. So we will likely need to start with two container names (that share the same underlying object). Then we can consider aliasing, consolidation, or keeping two versions based on need.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
