areusch commented on code in PR #65:
URL: https://github.com/apache/tvm-rfcs/pull/65#discussion_r853613741


##########
rfcs/0009_Unified_Static_Memory_Planning.md:
##########
@@ -515,4 +663,6 @@ NOTE : to support tir.constants generally, we'll be enhancing the bound relay.co
 
 # Drawbacks
 
-* The relay "main" function that describes the call order to operator PrimFuncs has to be described in TIR to be able to integrate the USMP into the respective executor codegen. However, we dont view this as a major problem as the relay "main" function could easily be lowered to TIR.
\ No newline at end of file
+* The relay "main" function that describes the call order to operator PrimFuncs has to be described in TIR to be able to integrate the USMP into the respective executor codegen. However, we dont view this as a major problem as the relay "main" function could easily be lowered to TIR.
+
+* The U4 usecase will only be supported with [Embedded C Runtime Interface](https://discuss.tvm.apache.org/t/rfc-utvm-embedded-c-runtime-interface/9951/14). This is mainly because the nature of the requirement is associated with embedded usecases. However, the USMP changes here should be complimentary to support other runtime interfaces such as Module-based Model Runtime Interface's set_input and set_output in future.

Review Comment:
   > I admit that we need to account for a certain overhead
   
   I guess the main question is: do you think the DLTensor metadata should be "overhead" in the sense that USMP should consider a buffer to need `sizeof(data) + sizeof(DLTensor)` bytes contiguously, or would you prefer to keep all of the `data` in one contiguous block and place the DLTensors elsewhere? The second option seems more in line with the unpacked calling convention.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
