Hi,

We use gRPC bidirectional streaming RPCs for distributed SQL query 
execution. For complex queries, we need to create as many as ~100K streams 
(~1000 per gRPC server) to process a single query. Streams between the same 
client and server share a channel. There are dependencies across these 
streams, and a server processes the data received on them in the order 
specified by the dependency graph. We see that when a lot of data is 
flowing through the system, the data passing through these streams grinds 
to a halt.
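
For reference, the client side looks roughly like the sketch below. The 
service, method, and message names (QueryExec, Execute, Chunk, 
query_exec.grpc.pb.h) are placeholders for our actual generated proto API; 
the point is only that many bidirectional streams are multiplexed over one 
shared channel:

// Hypothetical names for illustration: a "QueryExec" service with a
// bidirectional-streaming "Execute" method, generated into query_exec.grpc.pb.h.
#include <memory>
#include <vector>

#include <grpcpp/grpcpp.h>
#include "query_exec.grpc.pb.h"

int main() {
  // One shared channel per client/server pair.
  auto channel = grpc::CreateChannel("server.example.com:50051",
                                     grpc::InsecureChannelCredentials());
  auto stub = queryexec::QueryExec::NewStub(channel);

  // ~1000 bidirectional streams multiplexed over that single channel.
  std::vector<std::unique_ptr<grpc::ClientContext>> contexts;
  std::vector<std::unique_ptr<
      grpc::ClientReaderWriter<queryexec::Chunk, queryexec::Chunk>>> streams;
  for (int i = 0; i < 1000; ++i) {
    contexts.push_back(std::make_unique<grpc::ClientContext>());
    streams.push_back(stub->Execute(contexts.back().get()));
  }
  // Each stream then Write()s/Read()s chunks in the order dictated by the
  // query's dependency graph (omitted here).
  return 0;
}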

Is there any HTTP/2 flow control configuration that could cause this? Is it 
guaranteed that, even when there are thousands of streams on a single 
channel, they can all make progress independently? Can any HTTP/2 flow 
control limit prevent a client from sending data to a server on one of the 
streams?
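
For context, these are the HTTP/2-level channel arguments we are aware of 
that touch per-stream flow control on the client side. This is only a 
minimal sketch; we have not confirmed the defaults or exact semantics for 
the gRPC version we run:

#include <grpcpp/grpcpp.h>

int main() {
  grpc::ChannelArguments args;
  // Per-stream read-ahead in bytes (defaults to 64KB per the header docs);
  // larger values can help throughput on high-latency connections.
  args.SetInt(GRPC_ARG_HTTP2_STREAM_LOOKAHEAD_BYTES, 1 << 20);
  // BDP probing, which auto-tunes flow-control window sizes when enabled.
  args.SetInt(GRPC_ARG_HTTP2_BDP_PROBE, 1);
  // Upper bound on data queued per stream when GRPC_WRITE_BUFFER_HINT is set.
  args.SetInt(GRPC_ARG_HTTP2_WRITE_BUFFER_SIZE, 1 << 20);

  auto channel = grpc::CreateCustomChannel(
      "server.example.com:50051", grpc::InsecureChannelCredentials(), args);
  (void)channel;
  return 0;
}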

Some of the configuration we specify for our gRPC server:
ResourceQuota max threads: 2000
ResourceQuota max memory: 1GB
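
Applied roughly like this on the C++ server (a sketch of our setup, not the 
full code; GRPC_ARG_MAX_CONCURRENT_STREAMS is included only because we are 
not sure whether a per-connection stream cap could also be a factor):

#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>

int main() {
  grpc::ServerBuilder builder;

  // The quota values above: 2000 threads, 1 GB of buffer memory.
  grpc::ResourceQuota quota("query-exec-quota");
  quota.SetMaxThreads(2000);
  quota.Resize(1024 * 1024 * 1024);  // bytes
  builder.SetResourceQuota(quota);

  // Cap on concurrent HTTP/2 streams accepted per connection; if this were
  // lower than ~1000, additional streams would queue behind it.
  builder.AddChannelArgument(GRPC_ARG_MAX_CONCURRENT_STREAMS, 2000);

  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  // ... RegisterService(&service) for the actual query-execution service ...
  auto server = builder.BuildAndStart();
  server->Wait();
  return 0;
}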

Thanks,
Sandeep
