[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302552#comment-16302552
 ] 

ASF GitHub Bot commented on ARROW-1394:
---------------------------------------

Wapaul1 opened a new pull request #1445: [WIP] ARROW-1394: [Plasma] Add 
optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445
 
 
   **Done:**
   - CudaIPCMemHandles are now returned as shared pointers instead of unique 
pointers.
   - Objects now have a device number: 0 for host memory, and n >= 1 for GPU 
memory.
   - After being allocated and exported on the store, CudaIPCMemHandles are 
sent using flatbuffers alongside the object metadata.
   - Create and Get now return CudaBuffers for device numbers greater than 
zero, with the API change in #1444.
   - There is an issue with the same GPU object being retrieved multiple 
times in the same process. A CudaIPCMemHandle can only be mapped once per 
process, so to solve this there is a process-global unordered map 
`gpu_object_map` from object id to a struct containing the mapped CudaBuffer 
and a count of how many clients are using the object. An entry is removed 
when its count reaches zero on release of the object.
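   The per-process caching described above can be sketched as follows. This 
is a minimal illustration, not the PR's actual code: `MappedGpuBuffer`, 
`GetGpuObject`, and `ReleaseGpuObject` are hypothetical names standing in for 
the real CudaBuffer type and client calls, and the one-time IPC mapping is 
stubbed out.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the mapped CudaBuffer; the real IPC handle can
// only be mapped once per process, which is why the result must be cached.
struct MappedGpuBuffer { /* device pointer, size, ... */ };

struct GpuObjectEntry {
  std::shared_ptr<MappedGpuBuffer> buffer;  // result of the one-time mapping
  int client_count = 0;                     // clients currently using the object
};

// Process-global map from object id to mapped buffer plus reference count.
static std::unordered_map<std::string, GpuObjectEntry> gpu_object_map;

// Return the already-mapped buffer, or map it once and cache the result.
std::shared_ptr<MappedGpuBuffer> GetGpuObject(const std::string& object_id) {
  auto it = gpu_object_map.find(object_id);
  if (it == gpu_object_map.end()) {
    GpuObjectEntry entry;
    // The real code would open the CudaIPCMemHandle here, exactly once.
    entry.buffer = std::make_shared<MappedGpuBuffer>();
    it = gpu_object_map.emplace(object_id, std::move(entry)).first;
  }
  it->second.client_count++;
  return it->second.buffer;
}

// On release, decrement the count; erase (unmap) when it reaches zero.
void ReleaseGpuObject(const std::string& object_id) {
  auto it = gpu_object_map.find(object_id);
  if (it == gpu_object_map.end()) return;
  if (--it->second.client_count == 0) {
    gpu_object_map.erase(it);  // real code would also close the IPC handle
  }
}
```

A second Get for the same object id returns the cached mapping rather than 
attempting a second (and failing) IPC map.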
   
   **Todo:**
   - The hash of the data computed when the object is sealed is a constant 
zero for objects on the GPU.
   - The eviction policy currently has no notion of total size on GPUs, so GPU 
objects will never be released or evicted.
   - Similar to the last point, there is no configuration for how much memory 
to use on the GPU or which GPUs to use (though the latter can be resolved by 
`CUDA_VISIBLE_DEVICES`).
   
   As a side note, it seems like what's currently done could be abstracted into 
supporting arbitrary devices that can ship memory handles, though that is out 
of scope for the ticket.
   
   @pcmoritz @wesm  

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


> [Plasma] Add optional extension for allocating memory on GPUs
> -------------------------------------------------------------
>
>                 Key: ARROW-1394
>                 URL: https://issues.apache.org/jira/browse/ARROW-1394
>             Project: Apache Arrow
>          Issue Type: New Feature
>          Components: Plasma (C++)
>            Reporter: Wes McKinney
>              Labels: pull-request-available
>
> It would be useful to be able to allocate memory to be shared between 
> processes via Plasma using the CUDA IPC API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
