felipeblazing commented on code in PR #36489:
URL: https://github.com/apache/arrow/pull/36489#discussion_r1261447512


##########
cpp/src/arrow/device.h:
##########
@@ -29,6 +29,25 @@
 
 namespace arrow {
 
+/// \brief EXPERIMENTAL: Device type enum which matches up with C Data Device types
+enum class DeviceType : char {
+  UNKNOWN = 0,
+  CPU = 1,
+  CUDA = 2,

Review Comment:
   Should we consider adding a separate allocation type for allocations made using stream-ordered allocators ([more details here](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY__POOLS.html))? These allocators are asynchronous with respect to the host: when a user allocates with a stream-ordered allocator, the call returns immediately, and the allocation is only guaranteed to be valid for operations subsequently enqueued on that particular stream. Because Arrow is not handling CUDA streams, if a user uses the asynchronous allocator there is no guarantee the allocation has completed before we attempt to read from or write to that memory.
   
   This means that if async allocators are in use, the user should be informed so that they can, for example, capture the stream from whatever library is making the allocation and order their work on it, or at least call cudaDeviceSynchronize() to ensure the allocation is indeed already available.
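   A minimal sketch of the hazard being described (illustrative only — the function and stream names are hypothetical, and this requires the CUDA toolkit and a GPU to actually run):
   
   ```cpp
   #include <cuda_runtime.h>
   
   // Illustrative: a stream-ordered allocation is only ordered with respect
   // to work enqueued on the allocating stream, so code that does not know
   // about that stream can race with the allocation itself.
   void stream_ordered_hazard() {
     cudaStream_t alloc_stream;
     cudaStreamCreate(&alloc_stream);
   
     void* buf = nullptr;
     // Returns immediately; `buf` is only guaranteed valid for operations
     // subsequently enqueued on `alloc_stream`.
     cudaMallocAsync(&buf, 1024, alloc_stream);
   
     // UNSAFE if Arrow (which knows nothing of `alloc_stream`) were to read
     // or write `buf` here without synchronizing first, e.g.:
     //   cudaMemcpy(buf, host_src, 1024, cudaMemcpyHostToDevice);
   
     // The caller must either order their work on `alloc_stream`, or
     // synchronize before handing the buffer to stream-unaware code:
     cudaStreamSynchronize(alloc_stream);  // or cudaDeviceSynchronize()
   
     cudaFreeAsync(buf, alloc_stream);
     cudaStreamDestroy(alloc_stream);
   }
   ```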



##########
cpp/src/arrow/device.h:
##########
@@ -29,6 +29,25 @@
 
 namespace arrow {
 
+/// \brief EXPERIMENTAL: Device type enum which matches up with C Data Device types
+enum class DeviceType : char {

Review Comment:
   Is `DeviceType` the appropriate label for this? It seems to be more a combination of device type with the different allocation flavors that the drivers implement for the various device types.
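   One way to read this suggestion (purely a hypothetical sketch, not the PR's actual API — all names here are illustrative): split the physical device kind from the allocation flavor, so that e.g. plain CUDA device memory, CUDA managed memory, and stream-ordered CUDA allocations share one device kind instead of each being a distinct "device type":
   
   ```cpp
   #include <cassert>
   
   // Hypothetical sketch: separate the physical device kind from the
   // allocation flavor the driver provides for it.
   enum class DeviceKind : char { UNKNOWN = 0, CPU = 1, CUDA = 2 };
   
   enum class MemoryKind : char {
     DEFAULT = 0,        // plain device (or host) allocation
     PINNED_HOST = 1,    // e.g. cudaHostAlloc
     MANAGED = 2,        // e.g. cudaMallocManaged
     STREAM_ORDERED = 3  // e.g. cudaMallocAsync (async w.r.t. the host)
   };
   
   struct AllocationType {
     DeviceKind device;
     MemoryKind memory;
   };
   
   int main() {
     // Two CUDA allocations that differ only in flavor, not in device kind.
     AllocationType sync_alloc{DeviceKind::CUDA, MemoryKind::DEFAULT};
     AllocationType async_alloc{DeviceKind::CUDA, MemoryKind::STREAM_ORDERED};
     assert(sync_alloc.device == async_alloc.device);
     assert(sync_alloc.memory != async_alloc.memory);
     return 0;
   }
   ```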



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@arrow.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
