zeroshade commented on code in PR #36489:
URL: https://github.com/apache/arrow/pull/36489#discussion_r1268204722


##########
cpp/src/arrow/buffer.h:
##########
@@ -57,18 +58,31 @@ class ARROW_EXPORT Buffer {
   ///
   /// \note The passed memory must be kept alive through some other means
   Buffer(const uint8_t* data, int64_t size)
-      : is_mutable_(false), is_cpu_(true), data_(data), size_(size), capacity_(size) {
+      : is_mutable_(false),
+        is_cpu_(true),
+        data_(data),
+        size_(size),
+        capacity_(size),
+        device_type_(DeviceAllocationType::kCPU) {
     SetMemoryManager(default_cpu_memory_manager());
   }
 
   Buffer(const uint8_t* data, int64_t size, std::shared_ptr<MemoryManager> mm,
-         std::shared_ptr<Buffer> parent = NULLPTR)
+         std::shared_ptr<Buffer> parent = NULLPTR,
+         std::optional<DeviceAllocationType> device_type = std::nullopt)
       : is_mutable_(false),
         data_(data),
         size_(size),
         capacity_(size),
         parent_(std::move(parent)) {
+    // will set device_type from the memory manager
     SetMemoryManager(std::move(mm));
+    // if a device type is specified, use that instead. for example:
+    // CUDA_HOST. The CudaMemoryManager will set device_type_ to CUDA,
+    // but you can specify CUDA_HOST as the device type to override it.

Review Comment:
   They are different: `CUDA` is memory allocated on the device itself, while `CUDA_HOST` is memory allocated on the CPU with its pages pinned so that it is still directly accessible by the devices. For example:
   https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#group__CUDART__MEMORY_1gab84100ae1fa1b12eaca660207ef585b
   vs.
   https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#group__CUDART__MEMORY_1g37d37965bfb4803b6d4e59ff26856356
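   To make the distinction concrete, here is a minimal CUDA runtime sketch of the two allocation paths (error handling omitted; `kNumBytes` is an arbitrary size chosen for illustration):

   ```cpp
   #include <cuda_runtime_api.h>

   constexpr size_t kNumBytes = 1 << 20;  // arbitrary 1 MiB for illustration

   void AllocationExamples() {
     // CUDA: memory allocated on the device itself via cudaMalloc. The
     // resulting pointer is only dereferenceable from device code.
     void* device_mem = nullptr;
     cudaMalloc(&device_mem, kNumBytes);

     // CUDA_HOST: page-locked ("pinned") host memory via cudaMallocHost. The
     // pointer is an ordinary CPU pointer, but because the pages are pinned
     // the device can also access the memory directly.
     void* pinned_host_mem = nullptr;
     cudaMallocHost(&pinned_host_mem, kNumBytes);

     cudaFreeHost(pinned_host_mem);
     cudaFree(device_mem);
   }
   ```

   This is also why the override parameter in the constructor above matters: a `CudaMemoryManager` sets `device_type_` to `kCUDA`, so wrapping pinned host memory in a `Buffer` needs the explicit device type. A hypothetical usage sketch, assuming `cuda_mm` is a `std::shared_ptr<arrow::MemoryManager>` obtained from the CUDA device (not shown here):

   ```cpp
   auto buffer = std::make_shared<arrow::Buffer>(
       reinterpret_cast<const uint8_t*>(pinned_host_mem), kNumBytes,
       cuda_mm, /*parent=*/nullptr,
       arrow::DeviceAllocationType::kCUDA_HOST);  // override the manager's kCUDA
   ```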



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
