I don't know if this works today, but we could make it work: IMO the correct 
way to check this is to call isReady() or setOnReadyHandler() on the 
CallStreamObserver. These are normally used for flow control, but connection 
limiting seems similar enough.

Actually, looking at the code, that should do the right thing (assuming you 
are using NettyChannelBuilder), since isReady() checks whether the stream has 
been allocated yet.
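To illustrate the pattern I mean, here is a minimal self-contained sketch. Note the class names (FakeCallObserver, GatedSender) are stand-ins I made up for this example, not the real grpc-java API; in real code you would get a ClientCallStreamObserver (e.g. via a ClientResponseObserver's beforeStart) and use its isReady()/setOnReadyHandler() the same way: only send while isReady() is true, and resume from the on-ready handler.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ReadySketch {
  /** Stand-in for ClientCallStreamObserver's readiness surface (not the real API). */
  static class FakeCallObserver {
    private boolean ready = false;
    private Runnable onReady = () -> {};
    private final Queue<String> sent = new ArrayDeque<>();

    void setOnReadyHandler(Runnable r) { onReady = r; }
    boolean isReady() { return ready; }
    void send(String msg) { sent.add(msg); }

    // Test hook: simulates the transport allocating the stream and firing the handler.
    void transportBecomesReady() { ready = true; onReady.run(); }
    int sentCount() { return sent.size(); }
  }

  /** Application-side sender that holds messages until the call signals readiness. */
  static class GatedSender {
    private final FakeCallObserver call;
    private final Queue<String> pending = new ArrayDeque<>();

    GatedSender(FakeCallObserver call) {
      this.call = call;
      call.setOnReadyHandler(this::drain);
    }

    void offer(String msg) {
      pending.add(msg);
      drain();
    }

    private void drain() {
      // Send only while the transport reports it can accept more.
      while (call.isReady() && !pending.isEmpty()) {
        call.send(pending.poll());
      }
    }
  }

  public static void main(String[] args) {
    FakeCallObserver call = new FakeCallObserver();
    GatedSender sender = new GatedSender(call);
    sender.offer("a");
    sender.offer("b");
    // Nothing goes out while the stream has not yet been allocated.
    if (call.sentCount() != 0) throw new AssertionError("sent too early");
    call.transportBecomesReady();
    if (call.sentCount() != 2) throw new AssertionError("did not drain");
    System.out.println("ok");
  }
}
```

The point is that your application keeps its own bounded queue (and can reject or shed when it grows), instead of letting the channel buffer unboundedly for you.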

On Thursday, June 8, 2017 at 2:35:43 PM UTC-7, R A wrote:
>
> Hello,
>
>  
>
> I am using a custom ServerProvider in which I am limiting the number of 
> concurrent calls per connection:
>
> NettyServerBuilder.forPort(port).maxConcurrentCallsPerConnection(maxConcurrentCallsPerConnection)…
>
>  
>
> On the client side, grpc seems to wire in StreamBufferingEncoder, which 
> starts buffering new streams once the maxConcurrentCallsPerConnection limit 
> is reached, with no limit as far as I can see.  My questions are:
>
> 1.       Is there a way to inject a custom encoder instead, such that I can 
> limit the number of buffered/pending streams?  Or is there a way to get a 
> handle to the StreamBufferingEncoder object, such that numBufferedStreams() 
> can be used to monitor the number of buffered streams?
>
> 2.       If the above is not possible, is there any way to enforce max 
> concurrent streams, such that new streams beyond that limit are rejected 
> and not buffered?
>
>  
>
> Any other solution that may enable me to limit the number of buffered 
> streams would be helpful as well.
>
>
> Thanks.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9e5cd3bb-35a3-4ee8-961d-9f01b8256cbd%40googlegroups.com.