[ https://issues.apache.org/jira/browse/ARROW-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788780#comment-16788780 ]

Pearu Peterson commented on ARROW-2447:
---------------------------------------

Re [~pitrou]'s comment about needing a way to query device-specific buffer 
properties (such as `cuda_buffer->context()`):

Currently, Arrow CUDA support uses primary context management, which means that 
to obtain the CUDA context one only needs to know the device number (via 
[cuDevicePrimaryCtxRetain|https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__PRIMARY__CTX.html#group__CUDA__PRIMARY__CTX_1g9051f2d5c31501997a6cb0530290a300]).
 The device number itself can be retrieved from the memory pointer (via 
[cudaPointerGetAttributes|https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__UNIFIED.html#group__CUDART__UNIFIED_1gd89830e17d399c064a2f3c3fa8bb4390]).
 So knowing that a pointer is a CUDA device pointer is sufficient to establish 
its accessibility properties, and its context if needed.
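
For concreteness, a minimal sketch of that lookup (the helper name 
GetContextFromPointer is made up, error handling is reduced to booleans, and 
the `type` field of cudaPointerAttributes assumes a CUDA 10-era runtime):

{code}
#include <cuda.h>
#include <cuda_runtime_api.h>

// Hypothetical helper: given an arbitrary pointer, decide whether it is a
// CUDA device pointer and, if so, retain the primary context of the device
// that owns it. Assumes cuInit(0) has already been called.
bool GetContextFromPointer(const void* ptr, CUcontext* out_ctx) {
  cudaPointerAttributes attrs;
  if (cudaPointerGetAttributes(&attrs, ptr) != cudaSuccess) {
    return false;  // pointer is unknown to the CUDA runtime
  }
  if (attrs.type != cudaMemoryTypeDevice) {
    return false;  // plain host memory, no device context needed
  }
  CUdevice dev;
  if (cuDeviceGet(&dev, attrs.device) != CUDA_SUCCESS) {
    return false;
  }
  // With primary context management, the device number alone identifies the
  // context; cuDevicePrimaryCtxRetain returns (and retains) it.
  return cuDevicePrimaryCtxRetain(out_ctx, dev) == CUDA_SUCCESS;
}
{code}

The caller would still be responsible for releasing the context with 
cuDevicePrimaryCtxRelease when done with it.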

> [C++] Create a device abstraction
> ---------------------------------
>
>                 Key: ARROW-2447
>                 URL: https://issues.apache.org/jira/browse/ARROW-2447
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++, GPU
>    Affects Versions: 0.9.0
>            Reporter: Antoine Pitrou
>            Assignee: Pearu Peterson
>            Priority: Major
>             Fix For: 0.14.0
>
>
> Right now, a plain Buffer doesn't carry information about where it actually 
> lies. That information also cannot be passed around, so you get APIs like 
> {{PlasmaClient}} which take or return device number integers, and have 
> implementations which hardcode operations on CUDA buffers. Also, unsuspecting 
> receivers of a {{Buffer}} pointer may try to act on the underlying memory 
> without knowing whether it's CPU-reachable or not.
> Here is a sketch for a proposed Device abstraction:
> {code}
> class Device {
>     enum DeviceKind { KIND_CPU, KIND_CUDA };
>     virtual DeviceKind kind() const;
>     //MemoryPool* default_memory_pool() const;
>     //std::shared_ptr<Buffer> Allocate(...);
> };
> class CpuDevice : public Device {};
> class CudaDevice : public Device {
>     int device_num() const;
> };
> class Buffer {
>     virtual Device::DeviceKind device_kind() const;
>     virtual std::shared_ptr<Device> device() const;
>     virtual bool on_cpu() const {
>         return true;
>     }
>     const uint8_t* cpu_data() const {
>         return on_cpu() ? data() : nullptr;
>     }
>     uint8_t* cpu_mutable_data() {
>         return on_cpu() ? mutable_data() : nullptr;
>     }
>     virtual Status CopyToCpu(std::shared_ptr<Buffer> dest) const;
>     virtual Status CopyFromCpu(std::shared_ptr<Buffer> src);
> };
> class CudaBuffer : public Buffer {
>     virtual bool on_cpu() const {
>         return false;
>     }
> };
> Status CopyBuffer(std::shared_ptr<Buffer> dest, const std::shared_ptr<Buffer> src);
> {code}
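
For illustration, a consumer of the proposed API could then obtain a 
CPU-readable view of an arbitrary buffer along these lines. This is only a 
sketch against the classes quoted above, assuming Arrow-style Status returns 
and the existing AllocateBuffer helper; GetCpuReadableBuffer is a made-up name:

{code}
// Hypothetical helper built on the proposed API: return the buffer itself if
// it is already CPU-reachable, otherwise copy its contents into a freshly
// allocated host buffer.
Status GetCpuReadableBuffer(const std::shared_ptr<Buffer>& src,
                            std::shared_ptr<Buffer>* out) {
  if (src->on_cpu()) {
    *out = src;  // safe to dereference directly
    return Status::OK();
  }
  std::shared_ptr<Buffer> host;
  ARROW_RETURN_NOT_OK(AllocateBuffer(src->size(), &host));  // host allocation
  ARROW_RETURN_NOT_OK(src->CopyToCpu(host));                // device -> host copy
  *out = host;
  return Status::OK();
}
{code}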



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
