alexeykudinkin commented on issue #25822:
URL: https://github.com/apache/arrow/issues/25822#issuecomment-2529427268

   @felipecrv @jorisvandenbossche do we have a clear line of sight into when this issue will be addressed?
   
   Can you help me understand whether there were/are any practical limitations that kept chunking from being a consideration in the first place (for the Table/ChunkedArray APIs)?
   
   This is a pretty foundational issue: the `take` API does not chunk its output, so it breaks on columns growing above 2 GiB (the limit of 32-bit offsets), rendering it essentially impossible to use for data processing.
   
   For example, in Ray Data:
   
   1. We can't force users to adopt int64-based types
   2. We can't blindly upcast all types to int64-based ones either
   3. We have to be able to handle columns growing above 2 GiB
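   
   For reference, here is a minimal sketch of how this surfaces through PyArrow, as I understand it (the sizes, column name, and error text below are illustrative, and materializing the data takes a few GiB of memory):
   
   ```python
   import pyarrow as pa
   
   # Each chunk stays under the 2 GiB offset limit on its own, but the
   # combined string data does not.
   chunk = pa.array(["x" * 1024] * (1 << 20))        # ~1 GiB of character data
   column = pa.chunked_array([chunk, chunk, chunk])  # ~3 GiB across chunks
   table = pa.table({"payload": column})
   
   # Taking every row forces more than 2**31 bytes of string data into the
   # output; because the take kernel does not re-chunk its result, the
   # int32 offsets overflow instead of producing a multi-chunk column.
   indices = pa.array(range(len(table)))
   try:
       table.take(indices)
   except pa.ArrowInvalid as e:
       print(e)  # e.g. "offset overflow while concatenating arrays"
   ```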

