srkreddy1238 commented on code in PR #17571:
URL: https://github.com/apache/tvm/pull/17571#discussion_r1921887216
##########
include/tvm/runtime/device_api.h:
##########
@@ -135,12 +136,32 @@ class TVM_DLL DeviceAPI {
*/
virtual void* AllocDataSpace(Device dev, int ndim, const int64_t* shape,
DLDataType dtype,
Optional<String> mem_scope = NullOpt);
+
+ /*!
+ * \brief Create a new view with given spec over existing tensor.
+ * \param dev The device on which to perform the operation.
+ * \param data The source array.
+ * \param shape The shape of allocated tensor.
+ * \param dtype The type of elements.
+ * \param mem_scope The memory scope of allocated tensor.
+ * \return The device pointer of the created view.
+ */
+ virtual void* AllocDataSpaceView(Device dev, void* data, ShapeTuple shape,
DLDataType dtype,
Review Comment:
I see this is the clean way to keep changes minimal in other modules (graph
runtime, ndarray, memory manager, etc.).
In Relax, too, I am mapping alloc_storage to allocate a cl_buffer, and
alloc_tensor creates a view over it through this device API call. (WIP Ref.
https://github.com/srkreddy1238/tvm/commit/a6376b94799fd9c546334073fcbaeb468c3884df#diff-847ee73fb0b77db96cce920da6cbae223f6bdb026ea125514122e96630356c9b)
Later, this also gives an easy path for CLML memory management to go through
the TVM memory_manager interface, as well as for features like GMEM (on-chip
memory of the Adreno GPU) support in TVM, etc.
Let me know if you have different advice; I can explore the possibilities.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]