srkreddy1238 commented on code in PR #17571:
URL: https://github.com/apache/tvm/pull/17571#discussion_r1922513302
##########
include/tvm/runtime/device_api.h:
##########
@@ -135,12 +136,32 @@ class TVM_DLL DeviceAPI {
*/
virtual void* AllocDataSpace(Device dev, int ndim, const int64_t* shape,
DLDataType dtype,
Optional<String> mem_scope = NullOpt);
+
+ /*!
+   * \brief Create a new view with the given spec over an existing tensor.
+   * \param dev The device to perform the operation on.
+ * \param data The source array.
+ * \param shape The shape of allocated tensor.
+ * \param dtype The type of elements.
+ * \param mem_scope The memory scope of allocated tensor.
+ * \return The allocated device pointer.
+ */
+ virtual void* AllocDataSpaceView(Device dev, void* data, ShapeTuple shape,
DLDataType dtype,
Review Comment:
> My reading is that the main issue seems to lie in the need to get a Tensor from an existing Buffer in a customized fashion; perhaps we can extend the Allocator interface to enable such a view
True. The backing buffer may be used as-is, or many image views may be created over it based on the memory plan.
Whether the view is made over an NDArray or through a special Allocator, we ultimately need to reach the OpenCL device API for the final view creation, which happens through the OpenCL call ``clCreateImage``: we create a new cl_mem from an existing cl_mem that serves as the backing buffer.
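As a minimal sketch of that ``clCreateImage`` step (assuming a 2D image view; the helper name, channel format, and the flags/pitch handling are illustrative, not code from this PR):

```cpp
// Sketch: create an image view (new cl_mem) over an existing cl_mem backing
// buffer via clCreateImage. Format and dimensions here are illustrative.
#include <CL/cl.h>

cl_mem CreateImageViewOverBuffer(cl_context ctx, cl_mem backing_buffer,
                                 size_t width, size_t height, size_t row_pitch,
                                 cl_int* err) {
  cl_image_format fmt{};
  fmt.image_channel_order = CL_RGBA;
  fmt.image_channel_data_type = CL_FLOAT;

  cl_image_desc desc{};
  desc.image_type = CL_MEM_OBJECT_IMAGE2D;
  desc.image_width = width;
  desc.image_height = height;
  desc.image_row_pitch = row_pitch;  // must satisfy the device's pitch alignment
  desc.buffer = backing_buffer;      // reuse existing cl_mem as backing storage
                                     // (needs cl_khr_image2d_from_buffer / OpenCL 2.0;
                                     //  aliased as mem_object in newer headers)

  // flags = 0 inherits the access flags from the backing buffer; no host_ptr,
  // since the data already lives in backing_buffer.
  return clCreateImage(ctx, 0, &fmt, &desc, nullptr, err);
}
```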
The current flow is:
- storage_pool populated by: Allocator->Empty => NDArray
- data_entry_ populated by: NDArray => NDArray::CreateView => DeviceAPI::AllocDataSpaceView => NDArray
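Roughly, as a simplified sketch of that flow (the wrapper function and the concrete shapes are illustrative assumptions, not the actual executor code):

```cpp
// Simplified sketch of the current flow described above.
#include <tvm/runtime/memory/memory_manager.h>
#include <tvm/runtime/ndarray.h>

using namespace tvm::runtime;

void SetupStorageSketch(memory::Allocator* allocator, DLDevice dev) {
  // storage_pool entry: plain backing NDArray obtained from the allocator.
  NDArray backing = allocator->Empty({4, 64, 64, 4}, DataType::Float(32), dev);

  // data_entry_ entry: a view over the pooled storage. With this PR, a memory
  // scope such as "global.texture" would presumably be carried along so that
  // NDArray::CreateView lands in DeviceAPI::AllocDataSpaceView, which for
  // OpenCL builds a new cl_mem image over the existing backing cl_mem.
  NDArray view = backing.CreateView({64, 64, 4}, DataType::Float(32));
  (void)view;
}
```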
We can change this to the Allocator interface as follows (see the sketch after this list):
- A special Allocator (extended from Allocator with a new call for views) is registered from the OpenCL Device API at init.
- storage_pool populated by: Allocator->Alloc => StorageObj
- data_entry_ populated by: StorageObj => AllocNDArrayWithScope => Allocator::CreateView (accesses OpenCLWorkspace and creates the view) => NDArray
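A rough sketch of what such an extended allocator could look like (the class and method names and the placeholder types are assumptions for illustration, not existing TVM API):

```cpp
// Hypothetical sketch of the proposed direction: an allocator extended with a
// view-creation call, registered from the OpenCL device API at init time.
// Placeholder types stand in for the real tvm::runtime types.
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct BackingBuffer { void* data = nullptr; size_t nbytes = 0; };  // stands in for StorageObj's buffer
struct ArrayView { void* view = nullptr; };                         // stands in for NDArray

class ViewCapableAllocator {
 public:
  virtual ~ViewCapableAllocator() = default;

  // Backs the storage_pool entries (StorageObj in the proposed flow).
  virtual BackingBuffer Alloc(const std::vector<int64_t>& shape,
                              const std::string& mem_scope) = 0;

  // New call in the proposal: create a view over an existing backing buffer.
  // The OpenCL implementation would reach OpenCLWorkspace and call
  // clCreateImage on the backing cl_mem to produce the image view.
  virtual ArrayView CreateView(const BackingBuffer& backing,
                               const std::vector<int64_t>& view_shape,
                               const std::string& mem_scope) = 0;
};
```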
Is my understanding correct here?