[ https://issues.apache.org/jira/browse/ARROW-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17326276#comment-17326276 ]

Yibo Cai commented on ARROW-10351:
----------------------------------

I retested on an old 8-core i7 machine, running the same commands as yours, and 
I see consistent improvements. Speed: 800 -> 1000, latency: 170 -> 120.

My POC test code is rough and easy to misuse. I hardcoded compression in 
[client.cc|https://github.com/cyb70289/arrow/blob/flight-poc/cpp/src/arrow/flight/client.cc#L73],
 whereas the master branch does not use compression by default. What I meant is 
that the 
[INTERLEAVE_PREPARE_AND_SEND|https://github.com/cyb70289/arrow/blob/flight-poc/cpp/src/arrow/flight/flight_benchmark.cc#L196]
 macro should be commented out to benchmark against the master branch.

I will provide a patch that adds command line options so both code paths can be 
exercised easily.
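
For concreteness, a minimal sketch of what such an option might look like, 
assuming a gflags-style flag (the flag name and wiring are hypothetical, not the 
actual patch) replacing the compile-time macro:

    #include <gflags/gflags.h>

    // Hypothetical flag; the real patch may name and plumb it differently.
    DEFINE_bool(interleave, true,
                "Prepare the next IPC payload while the current one is sent");

    int main(int argc, char** argv) {
      gflags::ParseCommandLineFlags(&argc, &argv, true);
      if (FLAGS_interleave) {
        // Run the interleaved (prepare-and-send) code path from the POC.
      } else {
        // Run the sequential code path, matching master behaviour.
      }
      return 0;
    }

That way both code paths can be benchmarked from the same binary without 
rebuilding.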

> [C++][Flight] See if reading/writing to gRPC get/put streams asynchronously 
> helps performance
> ---------------------------------------------------------------------------------------------
>
>                 Key: ARROW-10351
>                 URL: https://issues.apache.org/jira/browse/ARROW-10351
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++, FlightRPC
>            Reporter: Wes McKinney
>            Priority: Major
>
> We don't use any asynchronous concepts in the way that Flight is implemented 
> now, i.e. IPC deconstruction/reconstruction (which may include compression!) 
> is not performed concurrently with moving FlightData objects through the gRPC 
> machinery, which may yield suboptimal performance.
> It might be better to apply an actor-type approach where a dedicated thread 
> retrieves and prepares the next raw IPC message (within a Future) while the 
> current IPC message is being processed -- that way reading/writing to/from 
> the gRPC stream is not blocked on the IPC code doing its thing. 
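
For reference, a minimal sketch of the interleaving idea described above, using 
std::async/std::future. The types and names (Payload, GrpcStream, 
PrepareIpcMessage) are placeholders for illustration, not the actual Flight 
classes: the next IPC payload is prepared on a worker thread while the current 
one is written to the blocking gRPC stream.

    #include <future>

    // Placeholder for a serialized (possibly compressed) IPC message.
    struct Payload {};
    // Placeholder for a blocking gRPC writer.
    struct GrpcStream { void Write(const Payload&) { /* blocking send */ } };

    // Stand-in for IPC serialization + optional compression of batch i.
    Payload PrepareIpcMessage(int i) { (void)i; return Payload{}; }

    void SendAll(GrpcStream* stream, int num_batches) {
      if (num_batches <= 0) return;
      // Start preparing the first payload on a worker thread.
      std::future<Payload> next =
          std::async(std::launch::async, PrepareIpcMessage, 0);
      for (int i = 0; i < num_batches; ++i) {
        Payload current = next.get();
        // Kick off preparation of batch i+1 before the blocking write of
        // batch i, so IPC work overlaps with the gRPC send.
        if (i + 1 < num_batches) {
          next = std::async(std::launch::async, PrepareIpcMessage, i + 1);
        }
        stream->Write(current);
      }
    }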


