tqchen commented on code in PR #17571:
URL: https://github.com/apache/tvm/pull/17571#discussion_r1922419301
##########
include/tvm/runtime/device_api.h:
##########
@@ -135,12 +136,32 @@ class TVM_DLL DeviceAPI {
*/
virtual void* AllocDataSpace(Device dev, int ndim, const int64_t* shape,
DLDataType dtype,
Optional<String> mem_scope = NullOpt);
+
+ /*!
+ * \brief Create a new view with the given spec over an existing tensor.
+ * \param dev The device on which to perform the operation.
+ * \param data The source array.
+ * \param shape The shape of the allocated tensor.
+ * \param dtype The type of elements.
+ * \param mem_scope The memory scope of the allocated tensor.
+ * \return The allocated device pointer.
+ */
+ virtual void* AllocDataSpaceView(Device dev, void* data, ShapeTuple shape,
+ DLDataType dtype,
Review Comment:
It would be great to start by thinking along the direction of the special
allocator:
https://github.com/apache/tvm/blob/main/include/tvm/runtime/memory/memory_manager.h
My reading is that the main issue lies in the need to get a Tensor from an
existing Buffer in a customized fashion; perhaps we can extend the Allocator
interface to enable such a view.
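
To make that direction concrete, below is a minimal, self-contained C++ sketch of
what such an extension could look like. The type names (`Buffer`, `TensorView`,
`DLDataTypeLite`, `NaiveAllocator`) and the `CreateView` method are simplified
stand-ins invented for this sketch, not the actual classes or signatures in
`memory_manager.h`; the point is only that view construction over existing
storage can become an overridable allocator hook instead of a new `DeviceAPI`
method.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>

// Simplified stand-ins for the runtime types; the real definitions live in
// include/tvm/runtime/memory/memory_manager.h and dlpack. All names here are
// invented for this sketch.
struct Device { int device_type; int device_id; };
struct DLDataTypeLite { uint8_t code; uint8_t bits; uint16_t lanes; };

// An owned allocation, analogous to memory::Buffer.
struct Buffer {
  void* data{nullptr};
  size_t size{0};
  Device device{};
};

// A non-owning view over an existing Buffer: same storage, new shape/dtype.
struct TensorView {
  void* data{nullptr};  // borrowed pointer; never freed by the view
  std::vector<int64_t> shape;
  DLDataTypeLite dtype{};
  Device device{};
};

// Hypothetical extension point: instead of adding AllocDataSpaceView to
// DeviceAPI, view construction is routed through the Allocator interface.
class Allocator {
 public:
  virtual ~Allocator() = default;
  virtual Buffer Alloc(Device dev, size_t nbytes, size_t alignment) = 0;
  virtual void Free(const Buffer& buffer) = 0;

  // Default behavior: reinterpret the buffer's storage in place. Allocators
  // for special memory scopes (e.g. textures) could override this to build
  // the view through their own mechanism.
  virtual TensorView CreateView(const Buffer& buffer, std::vector<int64_t> shape,
                                DLDataTypeLite dtype) {
    TensorView view;
    view.data = buffer.data;
    view.shape = std::move(shape);
    view.dtype = dtype;
    view.device = buffer.device;
    return view;
  }
};

// Toy allocator backed by plain host memory, just to exercise the interface.
class NaiveAllocator final : public Allocator {
 public:
  Buffer Alloc(Device dev, size_t nbytes, size_t /*alignment*/) override {
    Buffer buf;
    buf.data = std::malloc(nbytes);
    buf.size = nbytes;
    buf.device = dev;
    return buf;
  }
  void Free(const Buffer& buffer) override { std::free(buffer.data); }
};

int main() {
  NaiveAllocator alloc;
  Device cpu{1, 0};  // kDLCPU == 1 in dlpack
  Buffer buf = alloc.Alloc(cpu, 64 * sizeof(float), 64);

  // View the same 64-float buffer as a 4x16 float32 tensor.
  TensorView view = alloc.CreateView(buf, {4, 16}, DLDataTypeLite{2, 32, 1});
  std::cout << "view shape: " << view.shape[0] << "x" << view.shape[1] << "\n";

  alloc.Free(buf);  // the view does not own the storage
  return 0;
}
```

Under that assumption, a device-specific allocator (for example one backing a
special memory scope such as textures) would override `CreateView` with its own
construction logic, keeping `DeviceAPI` unchanged.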