Hello,

We're running into a mysterious issue and would really appreciate some insight.

We're using gRPC server-side streaming to transfer some data between two 
services (via protocol buffers). We're working on performance improvements 
as the system isn't as fast as it needs to be.

We set up a test environment as follows:

   - a client that initiates the server-side stream and immediately 
   discards messages as they're received (a rough sketch of this client 
   follows the list)
   - a server that produces a stream of uniformly sized messages, each a 
   random ByteString wrapped in a protocol buffer, sending them as fast as 
   possible
   - both client and server run in separate processes on the same machine
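For reference, the client side of the benchmark looks roughly like the 
sketch below. The service and message names (BenchmarkServiceGrpc, 
ChunkRequest, Chunk, streamChunks) and the port are placeholders rather 
than our real API; the point is only that we drain the stream with a 
blocking iterator and drop each message as soon as it arrives.

    import java.util.Iterator;
    import java.util.concurrent.TimeUnit;

    import io.grpc.ManagedChannel;
    import io.grpc.okhttp.OkHttpChannelBuilder;

    public class StreamDrainClient {
      public static void main(String[] args) {
        // Placeholder address; the real server runs in another process on the same box.
        ManagedChannel channel = OkHttpChannelBuilder.forAddress("localhost", 50051)
            .usePlaintext()
            .build();

        // BenchmarkServiceGrpc / ChunkRequest / Chunk are placeholder names for the
        // generated stub and messages, not our actual proto definition.
        BenchmarkServiceGrpc.BenchmarkServiceBlockingStub stub =
            BenchmarkServiceGrpc.newBlockingStub(channel)
                .withDeadlineAfter(60, TimeUnit.SECONDS);

        long received = 0;
        long start = System.nanoTime();
        Iterator<Chunk> responses = stub.streamChunks(ChunkRequest.getDefaultInstance());
        while (responses.hasNext()) {
          responses.next();   // discard the payload immediately
          received++;
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d messages in %.1fs (%.0f msg/s)%n",
            received, seconds, received / seconds);

        channel.shutdownNow();
      }
    }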

We tried switching the client from a Netty channel to an OkHttp channel. 
Testing with the above setup showed a 4x speed increase, which is awesome. 
However, when we then introduced a proxy between the client and server that 
adds 100ms of latency (simulating a connection that goes between our 
production datacenters), the rate while using OkHttp dropped from 200,000 
messages/sec to 2,000 messages/sec.
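Concretely, the only client-side change between the two runs is which 
channel builder we use; a minimal sketch of the two variants (plaintext, 
placeholder host/port):

    import io.grpc.ManagedChannel;
    import io.grpc.netty.NettyChannelBuilder;
    import io.grpc.okhttp.OkHttpChannelBuilder;

    public class Channels {
      // Original transport: Netty-based client channel.
      static ManagedChannel netty(String host, int port) {
        return NettyChannelBuilder.forAddress(host, port)
            .usePlaintext()
            .build();
      }

      // New transport: OkHttp-based client channel (~4x faster on localhost for us).
      static ManagedChannel okHttp(String host, int port) {
        return OkHttpChannelBuilder.forAddress(host, port)
            .usePlaintext()
            .build();
      }
    }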

We theorized this could be solved by increasing the client's 
flowControlWindow (especially since the OkHttp default is much smaller than 
Netty's). However, increasing the window size by any significant amount 
(more than a few KB) results in the client receiving a few hundred messages 
and then stopping. According to the logs, the stream remains open until the 
client terminates it when its deadline expires; the client then establishes 
a new stream on the same connection, but still no more messages are sent. 
We can't find a reason for this to happen. (This doesn't happen with Netty.)
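In case it matters, this is how we're raising the window. The value shown 
is illustrative (~1 MiB, roughly Netty's default as far as I can tell); 
anything more than a few KB above the OkHttp default is enough to trigger 
the stall described above.

    import io.grpc.ManagedChannel;
    import io.grpc.okhttp.OkHttpChannelBuilder;

    public class LargeWindowChannel {
      static ManagedChannel build(String host, int port) {
        return OkHttpChannelBuilder.forAddress(host, port)
            .usePlaintext()
            // Illustrative value; anything much larger than the OkHttp default
            // reproduces the stall for us.
            .flowControlWindow(1024 * 1024)
            .build();
      }
    }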
