csullivan commented on code in PR #11357:
URL: https://github.com/apache/tvm/pull/11357#discussion_r890621408


##########
include/tvm/te/operation.h:
##########
@@ -76,6 +78,11 @@ class TVM_DLL OperationNode : public Object {
    * \return type of i-th output.
    */
   virtual DataType output_dtype(size_t i) const = 0;
+  /*!
+   * \brief Returns the memory scope of the operation.
+   * TODO(amalyshe): add support for individual output tensors, not only a
+   * single one.
+   */
+  String memory_scope() const { return memory_scope_; }

Review Comment:
   Memory scope is part of the TIR Buffer and can be communicated via the 
tensor-to-buffer `binds` map provided when lowering the TE tensors. See 
https://github.com/apache/tvm/blob/main/src/relay/backend/te_compiler.cc#L346. 
As you can see there, the `binds` field is currently empty and unused by the 
TE compiler in main. Instead of annotating the placeholders with the scope 
from the virtual device, we should use the virtual device to construct TIR 
buffers with the appropriate scope, store them in the cached func, and pass 
them through the `binds` field when `LowerSchedule` is called by the 
te_compiler.
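   To make the suggested flow concrete, here is a schematic Python sketch. It does not use the real TVM API; `Tensor`, `VirtualDevice`, `BufferDesc`, and `CachedFunc` are simplified stand-ins for TVM's te::Tensor, VirtualDevice, tir::Buffer, and CachedFunc, and `bind_scoped_buffers` is a hypothetical helper. It only illustrates the idea of building scope-carrying buffers from virtual devices and stashing them in a binds map for lowering, rather than annotating the placeholder ops themselves:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical stand-ins for the TVM types mentioned above; not the real API.
@dataclass(frozen=True)
class Tensor:
    name: str
    shape: tuple

@dataclass(frozen=True)
class VirtualDevice:
    memory_scope: str  # e.g. "global", "global.texture"

@dataclass
class BufferDesc:
    name: str
    shape: tuple
    scope: str  # memory scope carried by the buffer, not by the op

@dataclass
class CachedFunc:
    # tensor -> buffer map, later handed to lowering as the `binds` argument
    binds: Dict[Tensor, BufferDesc] = field(default_factory=dict)

def bind_scoped_buffers(
    inputs: List[Tensor], virtual_devices: List[VirtualDevice]
) -> CachedFunc:
    """Construct scope-annotated buffers from the virtual devices and
    record them in the cached func's binds map."""
    cfunc = CachedFunc()
    for tensor, vdev in zip(inputs, virtual_devices):
        cfunc.binds[tensor] = BufferDesc(
            name=tensor.name, shape=tensor.shape, scope=vdev.memory_scope
        )
    return cfunc

# Usage: two inputs placed in different memory scopes.
a = Tensor("A", (32, 32))
b = Tensor("B", (32, 32))
cfunc = bind_scoped_buffers(
    [a, b], [VirtualDevice("global"), VirtualDevice("global.texture")]
)
print(cfunc.binds[b].scope)  # -> global.texture
```

   The point of the sketch is that the scope travels with the buffer in the binds map, so the operation node itself needs no `memory_scope()` accessor.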



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
