I am assuming you are using gRPC Java:

* Setting the window size correctly will give the biggest win; it should be roughly equal to the bandwidth-delay product (BDP). 64 KB was picked as a generally safe default, but it isn't correct in all environments. There is work underway to tune this automatically, but it hasn't landed yet.
* If you have exactly one RPC active at a time, there are optimizations that make the DATA frames larger (16 KB by default, set by the remote side's HTTP/2 SETTINGS). You can raise this (though to be honest, I have never tried and don't know how) so that each message fits in a single frame and doesn't need to be cut up.

* If you have more than one RPC active, each message is cut into 1 KB chunks so that every RPC gets fair access to the wire. This was changed on master and will be available in 1.5, but you can run master to try it out. It ONLY helps when more than one RPC is active.

* If you are pushing more than 10 Gbit/s, you can run into TLS bottlenecks. This is almost certainly not applicable to most people. You can create multiple channels to get around it, but you give up in-order delivery; treat this as a last resort.

* Making many individual RPCs will slow down your code, due to the per-RPC header overhead. As our performance dashboard ( http://performance-dot-grpc-testing.appspot.com/explore?dashboard=5636470266134528 ) shows, streaming throughput is roughly 2-2.5x higher than unary.

What kinds of bottlenecks do you see, and what are your target goals?

On Friday, June 2, 2017 at 6:37:22 AM UTC-7, Vivek M wrote:
>
> Hi,
>
> We have a gRPC streaming server that serves only one streaming RPC, but
> there is a lot of data that has to be streamed over it. Our customer is
> willing to invoke this RPC only once (they are not OK with having multiple
> streaming calls running). We hit a throughput issue, and we observed that
> by increasing the HTTP/2 window size from its default of 64 KB, we were
> able to achieve more throughput.
>
> However, I would like to know how we can achieve more throughput with the
> default 64 KB window size. Is there a way to tell the gRPC stack to use
> multiple streams per streaming RPC?
> So instead of using one stream with a larger window, it could create and
> use multiple small streams, each with a 64 KB window, dynamically opening
> a new stream whenever it senses that the existing active streams are
> choked.
>
> If not, what other options do we have to increase throughput with the
> default 64 KB window?
>
> Thanks,
> Vivek
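To make the BDP point above concrete, here is a back-of-the-envelope sketch. The class and method names (`Bdp`, `bdpBytes`) are illustrative, not gRPC APIs; the real grpc-java knobs are `NettyChannelBuilder.flowControlWindow(int)` on the client and `NettyServerBuilder.flowControlWindow(int)` on the server.

```java
// Sketch: sizing the HTTP/2 flow-control window from the bandwidth-delay
// product (BDP). Class and method names here are illustrative only.
public class Bdp {
    // BDP in bytes = (link bandwidth in bits/s / 8) * round-trip time in s.
    static long bdpBytes(long bitsPerSecond, double rttSeconds) {
        return (long) ((bitsPerSecond / 8.0) * rttSeconds);
    }

    public static void main(String[] args) {
        // Example: a 1 Gbit/s link with a 50 ms RTT needs ~6.25 MB in
        // flight to stay full -- roughly 100x the 64 KB default window.
        long window = bdpBytes(1_000_000_000L, 0.050);
        System.out.println(window); // 6250000

        // In gRPC Java you would pass a value like this to
        // NettyChannelBuilder.flowControlWindow(int) /
        // NettyServerBuilder.flowControlWindow(int).
    }
}
```

If the window is much smaller than the BDP, the sender stalls waiting for WINDOW_UPDATE frames and the link never fills, which matches the symptom described in the question.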
