[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355993#comment-16355993
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

pcmoritz commented on issue #1445: ARROW-1394: [Plasma] Add optional extension 
for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-363894149
 
 
   @Wapaul1 is working on Python integration and an end-to-end example that 
shows how to use this. There are also some loose ends that need to be fixed 
(eviction, hashing); we can create JIRAs for these.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Plasma] Add optional extension for allocating memory on GPUs
> -------------------------------------------------------------
>
> Key: ARROW-1394
> URL: https://issues.apache.org/jira/browse/ARROW-1394
> Project: Apache Arrow
>  Issue Type: New Feature
>  Components: Plasma (C++)
>Reporter: Wes McKinney
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.9.0
>
>
> It would be useful to be able to allocate memory to be shared between 
> processes via Plasma using the CUDA IPC API



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355975#comment-16355975
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

wesm commented on issue #1445: ARROW-1394: [Plasma] Add optional extension for 
allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-363891955
 
 
   Thank you for building this! Are there any follow-ups for the GPU support 
that you've thought of?




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354775#comment-16354775
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

pcmoritz commented on issue #1445: ARROW-1394: [Plasma] Add optional extension 
for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-363611642
 
 
   Great, thanks!




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354773#comment-16354773
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

kou commented on issue #1445: ARROW-1394: [Plasma] Add optional extension for 
allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-363611523
 
 
   No problem!




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354575#comment-16354575
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

wesm commented on issue #1445: ARROW-1394: [Plasma] Add optional extension for 
allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-363572464
 
 
   Using `shared_ptr` seems OK to me, @kou do you see any issues?




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353124#comment-16353124
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

pcmoritz commented on issue #1445: ARROW-1394: [Plasma] Add optional extension 
for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-363264898
 
 
   @wesm Do you have any thoughts about replacing the unique pointers with 
shared pointers in cuda_context and cuda_memory? I'd like to merge the PR, 
since there is some follow-up work happening on top of it. Everything else 
should be safe, since it is behind a feature flag and the API is backward 
compatible.






[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352048#comment-16352048
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

robertnishihara commented on a change in pull request #1445: ARROW-1394: 
[Plasma] Add optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#discussion_r165883107
 
 

 ##
 File path: cpp/src/plasma/store.h
 ##
 @@ -74,7 +74,7 @@ class PlasmaStore {
   ///    cannot create the object. In this case, the client should not call
   ///    plasma_release.
   int create_object(const ObjectID& object_id, int64_t data_size, int64_t metadata_size,
-                    Client* client, PlasmaObject* result);
+                    int device_num, Client* client, PlasmaObject* result);
 
 Review comment:
   document the missing params
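
   For reference, documentation for the new parameter might look like the 
following sketch (hypothetical wording, not from the actual patch, following 
the Doxygen `\param` style already used in `store.h`):

```cpp
/// Create a new object.
///
/// \param object_id Object ID of the object to be created.
/// \param data_size Size in bytes of the object payload.
/// \param metadata_size Size in bytes of the object metadata.
/// \param device_num Device on which to allocate the object: 0 means host
///    memory, and a value n >= 1 means GPU number n - 1.
/// \param client The client that is creating the object.
/// \param result The created object is written here.
int create_object(const ObjectID& object_id, int64_t data_size, int64_t metadata_size,
                  int device_num, Client* client, PlasmaObject* result);
```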




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352047#comment-16352047
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

robertnishihara commented on a change in pull request #1445: ARROW-1394: 
[Plasma] Add optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#discussion_r165882799
 
 

 ##
 File path: cpp/src/plasma/protocol.cc
 ##
 @@ -396,18 +429,25 @@ Status SendGetReply(
   flatbuffers::FlatBufferBuilder fbb;
   std::vector objects;
 
-  ARROW_CHECK(store_fds.size() == mmap_sizes.size());
-
-  for (int64_t i = 0; i < num_objects; ++i) {
+  std::vector handles;
+  for (int i = 0; i < num_objects; ++i) {
 
 Review comment:
   Let's leave this as an `int64_t`




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352046#comment-16352046
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

robertnishihara commented on a change in pull request #1445: ARROW-1394: 
[Plasma] Add optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#discussion_r165882064
 
 

 ##
 File path: cpp/src/plasma/client.h
 ##
 @@ -111,11 +114,10 @@ class ARROW_EXPORT PlasmaClient {
   ///    should be NULL.
   /// \param metadata_size The size in bytes of the metadata. If there is no
   ///    metadata, this should be 0.
-  /// \param data A buffer containing the address of the newly created object
-  ///    will be written here.
+  /// \param data The address of the newly created object will be written here.
   /// \return The return status.
   Status Create(const ObjectID& object_id, int64_t data_size, uint8_t* metadata,
 
 Review comment:
   document the `device_num` param




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352042#comment-16352042
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

robertnishihara commented on a change in pull request #1445: ARROW-1394: 
[Plasma] Add optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#discussion_r165881740
 
 

 ##
 File path: cpp/src/plasma/client.cc
 ##
 @@ -265,14 +330,41 @@ Status PlasmaClient::Get(const ObjectID* object_ids, int64_t num_objects,
     // If we are here, the object was not currently in use, so we need to
     // process the reply from the object store.
     if (object->data_size != -1) {
-      uint8_t* data = lookup_mmapped_file(object->store_fd);
-      // Finish filling out the return values.
-      object_buffers[i].data =
-          std::make_shared<Buffer>(data + object->data_offset, object->data_size);
-      object_buffers[i].metadata = std::make_shared<Buffer>(
-          data + object->data_offset + object->data_size, object->metadata_size);
+      if (object->device_num == 0) {
+        uint8_t* data = lookup_mmapped_file(object->store_fd);
+        // Finish filling out the return values.
+        object_buffers[i].data =
+            std::make_shared<Buffer>(data + object->data_offset, object->data_size);
+        object_buffers[i].metadata = std::make_shared<Buffer>(
+            data + object->data_offset + object->data_size, object->metadata_size);
+      } else {
+#ifdef PLASMA_GPU
+        std::lock_guard<std::mutex> lock(gpu_mutex);
+        auto handle = gpu_object_map.find(object_ids[i]);
+        std::shared_ptr<CudaBuffer> gpu_handle;
+        if (handle == gpu_object_map.end()) {
+          std::shared_ptr<CudaContext> context;
+          RETURN_NOT_OK(manager_->GetContext(object->device_num - 1, &context));
+          GpuProcessHandle* obj_handle = new GpuProcessHandle();
+          RETURN_NOT_OK(context->OpenIpcBuffer(*object->ipc_handle, &obj_handle->ptr));
+          gpu_object_map[object_ids[i]] = obj_handle;
+          gpu_handle = obj_handle->ptr;
+        } else {
+          handle->second->client_count += 1;
+          gpu_handle = handle->second->ptr;
+        }
+        object_buffers[i].data =
+            std::make_shared<CudaBuffer>(gpu_handle, 0, object->data_size);
+        object_buffers[i].metadata = std::make_shared<CudaBuffer>(
+            gpu_handle, object->data_size, object->metadata_size);
+#else
+        ARROW_LOG(FATAL)
+            << "This should be unreachable as no objects can be created on a gpu.";
 
 Review comment:
   same here




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352044#comment-16352044
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

robertnishihara commented on a change in pull request #1445: ARROW-1394: 
[Plasma] Add optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#discussion_r165881777
 
 

 ##
 File path: cpp/src/plasma/client.cc
 ##
 @@ -75,7 +85,24 @@ struct ObjectInUseEntry {
   bool is_sealed;
 };
 
-PlasmaClient::PlasmaClient() {}
+#ifdef PLASMA_GPU
+struct GpuProcessHandle {
+  std::shared_ptr<CudaBuffer> ptr;
+  int client_count;
+};
 
 Review comment:
   document these fields and the struct
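
   As an illustration (not the actual patch), documented versions of the 
struct and its fields might read as follows; `CudaBuffer` is stubbed out here 
so the sketch is self-contained:

```cpp
#include <memory>

// Stand-in for arrow::gpu::CudaBuffer, so this sketch compiles on its own.
struct CudaBuffer {};

/// A per-process handle to a GPU object mapped via CUDA IPC. A CUDA IPC
/// memory handle can only be opened once per process, so the client keeps a
/// single mapping and reference-counts its users.
struct GpuProcessHandle {
  /// The mapped CUDA buffer, shared by all clients in this process.
  std::shared_ptr<CudaBuffer> ptr;
  /// Number of clients in this process currently using the object; the
  /// map entry can be dropped once this reaches zero.
  int client_count;
};
```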




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352041#comment-16352041
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

robertnishihara commented on a change in pull request #1445: ARROW-1394: 
[Plasma] Add optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#discussion_r165881711
 
 

 ##
 File path: cpp/src/plasma/client.cc
 ##
 @@ -210,13 +260,28 @@ Status PlasmaClient::Get(const ObjectID* object_ids, int64_t num_objects,
       ARROW_CHECK(object_entry->second->is_sealed)
           << "Plasma client called get on an unsealed object that it created";
       PlasmaObject* object = &object_entry->second->object;
-      uint8_t* data = lookup_mmapped_file(object->store_fd);
-      object_buffers[i].data =
-          std::make_shared<Buffer>(data + object->data_offset, object->data_size);
-      object_buffers[i].metadata = std::make_shared<Buffer>(
-          data + object->data_offset + object->data_size, object->metadata_size);
+      if (object->device_num == 0) {
+        uint8_t* data = lookup_mmapped_file(object->store_fd);
+        object_buffers[i].data =
+            std::make_shared<Buffer>(data + object->data_offset, object->data_size);
+        object_buffers[i].metadata = std::make_shared<Buffer>(
+            data + object->data_offset + object->data_size, object->metadata_size);
+      } else {
+#ifdef PLASMA_GPU
+        std::shared_ptr<CudaBuffer> gpu_handle =
+            gpu_object_map.find(object_ids[i])->second->ptr;
+        object_buffers[i].data =
+            std::make_shared<CudaBuffer>(gpu_handle, 0, object->data_size);
+        object_buffers[i].metadata = std::make_shared<CudaBuffer>(
+            gpu_handle, object->data_size, object->metadata_size);
+#else
+        ARROW_LOG(FATAL)
+            << "This should be unreachable as no objects can be created on a gpu.";
 
 Review comment:
   This error should be the same as in `Create`, that is `"Arrow GPU library is 
not enabled."`




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351592#comment-16351592
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

pcmoritz commented on issue #1445: ARROW-1394: [Plasma] Add optional extension 
for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-362870629
 
 
   +1 This is now ready for review; the Travis test failures seem unrelated.




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339862#comment-16339862
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

pcmoritz commented on issue #1445: [WIP] ARROW-1394: [Plasma] Add optional 
extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445#issuecomment-360596865
 
 
   Sorry for the long delay, I'm working on rebasing this now and getting it 
merged.




[jira] [Commented] (ARROW-1394) [Plasma] Add optional extension for allocating memory on GPUs

2017-12-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARROW-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16302552#comment-16302552
 ] 

ASF GitHub Bot commented on ARROW-1394:
---

Wapaul1 opened a new pull request #1445: [WIP] ARROW-1394: [Plasma] Add 
optional extension for allocating memory on GPUs
URL: https://github.com/apache/arrow/pull/1445
 
 
   **Done:**
   - CudaIPCMemHandles are now returned as shared pointers instead of unique 
pointers.
   - Objects now have a device number; 0 for host memory, 1-infinity for GPU 
memory.
   - After being allocated and exported on the store, CudaIPCMemHandles are 
sent using flatbuffers alongside the object metadata.
   - Create and Get now return CudaBuffers for device numbers greater than 
zero, with the API change in #1444 .
   - There is an issue with the same object on the GPU being retrieved 
multiple times in the same process: CudaIPCMemHandles can only be mapped once 
per process. To solve this, there is a process-global unordered map 
`gpu_object_map` from object ID to a struct containing the mapped CudaBuffer 
and a count of how many clients are using the object. Entries are removed 
when the count reaches zero, on release of the object.
   
   **Todo:**
   - The hash computed on the data when the object is sealed is a constant 
zero for objects on the GPU. 
   - The eviction policy currently has no notion of total size on GPUs, so GPU 
objects will never be released or evicted.
   - Similar to the last point, there is no configuration for how much memory 
to use on the GPU or which GPUs to use (though the latter can be addressed 
with `CUDA_VISIBLE_DEVICES`).
   
   As a side note, it seems like what's currently done could be abstracted into 
supporting arbitrary devices that can ship memory handles, though that is out 
of scope for the ticket.
   
   @pcmoritz @wesm  
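
   The per-process refcounting scheme described above can be sketched as 
follows (an illustrative sketch, not the actual Plasma code: `ObjectID` is 
simplified to a string, and the CUDA IPC open/unmap calls are stubbed out):

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

struct CudaBuffer {};  // stand-in for arrow::gpu::CudaBuffer

struct GpuProcessHandle {
  std::shared_ptr<CudaBuffer> ptr;  // the single per-process IPC mapping
  int client_count = 0;             // clients in this process using it
};

static std::mutex gpu_mutex;
static std::unordered_map<std::string, GpuProcessHandle> gpu_object_map;

// Get: the first caller in this process opens the IPC handle; later callers
// reuse the existing mapping and bump the reference count.
std::shared_ptr<CudaBuffer> GetGpuObject(const std::string& object_id) {
  std::lock_guard<std::mutex> lock(gpu_mutex);
  auto it = gpu_object_map.find(object_id);
  if (it == gpu_object_map.end()) {
    GpuProcessHandle handle;
    handle.ptr = std::make_shared<CudaBuffer>();  // OpenIpcBuffer stub
    handle.client_count = 1;
    it = gpu_object_map.emplace(object_id, std::move(handle)).first;
  } else {
    it->second.client_count += 1;
  }
  return it->second.ptr;
}

// Release: drop the map entry (and, in real code, unmap the IPC handle)
// once no client in this process is using the object.
void ReleaseGpuObject(const std::string& object_id) {
  std::lock_guard<std::mutex> lock(gpu_mutex);
  auto it = gpu_object_map.find(object_id);
  assert(it != gpu_object_map.end());
  if (--it->second.client_count == 0) {
    gpu_object_map.erase(it);
  }
}
```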

