shikibu-z commented on issue #40257:
URL: https://github.com/apache/arrow/issues/40257#issuecomment-2423050809

   I'm running into a similar use case when trying to configure the 
"generic_options" on the server side, but for 
`GRPC_ARG_MAX_SEND_MESSAGE_LENGTH`. If this is still not possible for a 
Python-based Flight server, I'm curious why raising the `max_chunksize` of 
the batch stream sent from the server above 4 MB (the 
default gRPC maximum message size) does not cause any errors. For reference, my code looks 
like the following:
   ```python
   # Inside the server's do_get(); `data` is a pyarrow.Table and
   # `arrow` is `import pyarrow as arrow`.
   # RecordBatchReader.from_batches is a static method, so the reader
   # should not be instantiated before calling it.
   reader = arrow.ipc.RecordBatchReader.from_batches(
       data.schema, data.to_batches(max_chunksize=8 * 1024 * 1024)
   )
   return flight.RecordBatchStream(reader)
   ```
   On the client side, I use `reader.read_chunk()` and find that the chunks have the 
same length as the ones sent (8 MB). Is it because of some hidden mechanism in 
the C++ layer that automatically chops the sent data into appropriately sized messages? 
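   One detail worth noting: `Table.to_batches(max_chunksize=...)` counts *rows*, not bytes, so `8 * 1024 * 1024` caps each batch at roughly 8.4 million rows rather than 8 MB. A minimal sketch illustrating this (the `table` and `batches` names here are illustrative, not from the original code):

   ```python
   # Sketch: max_chunksize in Table.to_batches() is a row count, not a
   # byte size, so 8 * 1024 * 1024 means "at most ~8.4M rows per batch".
   import pyarrow as pa

   table = pa.table({"x": list(range(10))})     # toy table with 10 rows
   batches = table.to_batches(max_chunksize=4)  # at most 4 rows per batch
   print([b.num_rows for b in batches])         # [4, 4, 2]

   # The batches can be wrapped back into a reader, as in the snippet above.
   reader = pa.ipc.RecordBatchReader.from_batches(table.schema, batches)
   print(reader.read_all().num_rows)            # 10
   ```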

