frbvianna commented on issue #37976:
URL: https://github.com/apache/arrow/issues/37976#issuecomment-1750733234

   I'm facing a very similar situation, except that we are using the IPC writer directly rather than Flight SQL. This issue has already shed some light, thank you.
   
   However, I am curious whether this chunking could have been done based on the 
actual IPC-encoded bytes instead, as opposed to calculating/estimating the 
number of elements from the `arrow.Record`, which is in any case compressed 
later on.
   You only know the final size once you encode it, and every slice of a 
Record written to the underlying buffer is decoded individually on the 
receiving side rather than being reassembled into a single Record. So it does 
not seem feasible to encode, say, a few rows at a time, watch the buffer size, 
and only send to the gRPC stream once the size limit is reached: each data 
chunk would arrive as a separate Record at the receiver, which does not seem 
ideal.
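   For what it's worth, the "encode, watch the buffer, flush at the limit" loop I mean would look roughly like the sketch below. This is stdlib-only and deliberately not the Arrow API: `chunkRows`, `maxChunk`, and the raw int64 row encoding are all hypothetical stand-ins for a sliced `arrow.Record` going through the IPC writer, just to illustrate measuring the actual encoded size instead of estimating element counts.

   ```go
   package main

   import (
   	"bytes"
   	"encoding/binary"
   	"fmt"
   )

   // maxChunk stands in for the gRPC message-size cap (hypothetical value).
   const maxChunk = 64

   // chunkRows encodes rows one at a time into a buffer and flushes
   // whenever adding the next row would push the buffer past maxChunk.
   // This measures the real encoded size rather than estimating it from
   // row counts, at the cost of the receiver seeing one record per chunk.
   func chunkRows(rows [][]int64, flush func(chunk []byte)) {
   	var buf bytes.Buffer
   	for _, row := range rows {
   		var enc bytes.Buffer
   		for _, v := range row {
   			binary.Write(&enc, binary.LittleEndian, v) // 8 bytes per value
   		}
   		if buf.Len() > 0 && buf.Len()+enc.Len() > maxChunk {
   			flush(buf.Bytes()) // in the real scenario: the gRPC stream send
   			buf.Reset()
   		}
   		buf.Write(enc.Bytes())
   	}
   	if buf.Len() > 0 {
   		flush(buf.Bytes())
   	}
   }

   func main() {
   	rows := make([][]int64, 10)
   	for i := range rows {
   		rows[i] = []int64{int64(i), int64(i * 2), int64(i * 3)} // 24 bytes encoded
   	}
   	n := 0
   	chunkRows(rows, func(c []byte) {
   		n++
   		fmt.Printf("chunk %d: %d bytes\n", n, len(c))
   	})
   	// With 24-byte rows and a 64-byte cap, two rows fit per chunk,
   	// so the 10 rows come out as 5 chunks of 48 bytes each.
   }
   ```

   The downstream problem stays the same, though: each flushed chunk decodes as its own unit on the receiver.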
   
   Do you see any other feasible way we might have achieved this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
