zeroshade commented on code in PR #39772:
URL: https://github.com/apache/arrow/pull/39772#discussion_r1470129965


##########
cpp/src/arrow/device.h:
##########
@@ -363,4 +363,22 @@ class ARROW_EXPORT CPUMemoryManager : public MemoryManager {
 ARROW_EXPORT
 std::shared_ptr<MemoryManager> default_cpu_memory_manager();
 
+/// \brief Copy all buffers of an array to destination MemoryManager
+///
+/// This utilizes MemoryManager::CopyBuffer to create a new Array recursively
+/// copying all buffers, and all children buffers, to the destination
+/// MemoryManager. This includes any dictionaries if applicable.
+ARROW_EXPORT
+Result<std::shared_ptr<Array>> CopyArrayTo(const Array& array,

Review Comment:
   If the caller *requires* unique ownership of the array, then they should just call `CopyTo` in the first place.
   
   We don't want to fail if the source and destination are the same device, because maybe the caller is intentionally doing a full copy to get unique ownership, but also because the device is tracked on a per-buffer basis, not at the array level. Having a `ViewOrCopy` lets the user attempt to avoid copies when they don't need unique ownership of the buffers, so I don't think this creates weird expectations about the returned values. The expectations are very clear:
   
   1. If you need unique ownership, call `CopyTo`
   2. If you don't need unique ownership and want to avoid copies where possible, use `ViewOrCopy` (for example, with CUDA host memory you can get a CPU-accessible view without having to manually perform a copy); see the sketch below
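   
   To make the distinction concrete, here is a minimal sketch at the buffer level using the existing `Buffer::Copy` and `Buffer::ViewOrCopy` static helpers (the `Demo` function and its `source` argument are illustrative, not part of this PR):
   
   ```cpp
   #include <memory>
   
   #include "arrow/buffer.h"
   #include "arrow/device.h"
   #include "arrow/result.h"
   #include "arrow/status.h"
   
   // `source` may live on any device; the two calls below express the two
   // ownership requirements described above.
   arrow::Status Demo(std::shared_ptr<arrow::Buffer> source) {
     std::shared_ptr<arrow::MemoryManager> cpu = arrow::default_cpu_memory_manager();
   
     // 1. Unique ownership required: always materialize a fresh copy, even
     //    if the source already lives on the destination device.
     ARROW_ASSIGN_OR_RAISE(auto owned, arrow::Buffer::Copy(source, cpu));
   
     // 2. Unique ownership not required: take a zero-copy view when the
     //    device supports one (e.g. CUDA host memory), otherwise copy.
     ARROW_ASSIGN_OR_RAISE(auto viewed, arrow::Buffer::ViewOrCopy(source, cpu));
   
     return arrow::Status::OK();
   }
   ```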
   
   > This might be a bad idea without a guarantee that all buffers making up an array or RecordBatch are on the same device. :/
   
   One way to ensure they are all on the same device is to call this function! :smile:
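   
   Since the diff excerpt above truncates the full signature of `CopyArrayTo`, here is a hypothetical usage sketch that assumes the remaining parameter is the destination `MemoryManager`, as the doc comment implies:
   
   ```cpp
   #include <memory>
   
   #include "arrow/array.h"
   #include "arrow/device.h"
   #include "arrow/result.h"
   
   // Hypothetical: consolidate every buffer of `array` (children and
   // dictionaries included) onto CPU memory, guaranteeing that all of its
   // buffers end up on the same device. The second parameter is assumed
   // from the doc comment; the excerpt truncates the real signature.
   arrow::Result<std::shared_ptr<arrow::Array>> ToCpu(const arrow::Array& array) {
     return arrow::CopyArrayTo(array, arrow::default_cpu_memory_manager());
   }
   ```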


