You *really* don't want to limit the queue size. The queue is not per RPC, 
but per RPC callback event. If an enqueued callback (headers received, 
data received, cancellation, etc.) gets dropped, the RPC is left in a 
zombie state, never able to finish or die. Additionally, if you block 
while attempting to add callbacks (instead of just failing them), you run 
the risk of deadlock, because the net thread will be blocked on the 
application thread.
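
To illustrate the blocking hazard with a plain ThreadPoolExecutor (a 
hypothetical standalone demo, not gRPC code): a rejection handler that 
blocks the submitter, here via queue.put(), stalls whichever thread called 
execute(), which is exactly what would happen to the net thread.

```java
import java.util.concurrent.*;

public class BlockingHandlerDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS, queue,
            (r, exec) -> {
                try {
                    // Blocking rejection policy: stalls the *submitter*
                    // until queue capacity frees up.
                    exec.getQueue().put(r);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

        CountDownLatch busy = new CountDownLatch(1);
        // Occupy the single worker thread, then fill the one queue slot.
        pool.execute(() -> { try { busy.await(); } catch (InterruptedException ignored) {} });
        pool.execute(() -> {});

        // A third submission now blocks the submitting ("net") thread.
        Thread netThread = new Thread(() -> pool.execute(() -> {}));
        netThread.start();
        Thread.sleep(200);
        System.out.println("submitter blocked: " + netThread.isAlive());

        busy.countDown();           // free the worker; the blocked submit completes
        netThread.join(2000);
        System.out.println("submitter finished: " + !netThread.isAlive());
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.SECONDS);
    }
}
```

If the worker here were waiting on something the submitting thread was 
supposed to provide, neither side could make progress: that is the 
deadlock.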

The BlockingQueue in the executor is not a good fit for async callbacks. 
It would be much better to install an interceptor that tracks the number 
of active calls and fails RPCs (instead of callbacks) if that number gets 
too high.
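
A minimal sketch of that approach using grpc-java's ServerInterceptor API 
(the class name and limit are made up for illustration; this is a sketch, 
not a hardened implementation):

```java
import io.grpc.ForwardingServerCallListener;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;
import io.grpc.Status;
import java.util.concurrent.atomic.AtomicInteger;

public final class ConcurrencyLimitInterceptor implements ServerInterceptor {
  private final AtomicInteger activeCalls = new AtomicInteger();
  private final int maxConcurrentCalls;

  public ConcurrencyLimitInterceptor(int maxConcurrentCalls) {
    this.maxConcurrentCalls = maxConcurrentCalls;
  }

  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call, Metadata headers,
      ServerCallHandler<ReqT, RespT> next) {
    if (activeCalls.incrementAndGet() > maxConcurrentCalls) {
      activeCalls.decrementAndGet();
      // Fail the whole RPC up front, rather than dropping callbacks later.
      call.close(Status.RESOURCE_EXHAUSTED.withDescription("too many active calls"),
          new Metadata());
      return new ServerCall.Listener<ReqT>() {};
    }
    // Decrement when the call finishes, whether it completes or is cancelled.
    return new ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT>(
        next.startCall(call, headers)) {
      @Override public void onComplete() {
        activeCalls.decrementAndGet();
        super.onComplete();
      }
      @Override public void onCancel() {
        activeCalls.decrementAndGet();
        super.onCancel();
      }
    };
  }
}
```

Rejected calls get RESOURCE_EXHAUSTED, which clients can count or retry; 
meanwhile the executor's queue stays unbounded, so callbacks for admitted 
RPCs are never dropped.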

On Thursday, November 29, 2018 at 8:43:04 AM UTC-8, 
in...@olivierboucher.com wrote:
>
>
> Hi everyone,
>
> Our gRPC server runs on a ThreadPoolExecutor with a corePoolSize of 4 and 
> a maximumPoolSize of 16. In order to have the pool size increase, we 
> provide a BlockingQueue with a bounded size of 20. 
>
> Sometimes short bursts happen, and we're perfectly fine with dropping 
> requests in those moments; we provided a custom RejectedExecutionHandler 
> that increments a counter we monitor. However, this rejection handler is 
> not aware of the request itself, it only sees a Runnable.
>
> My question is: are the requests automatically canceled if they could not 
> get queued? Do I need to cancel them manually somehow?
>
> Thanks
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9c124d4b-e3da-4927-b552-2a09bae0c6d0%40googlegroups.com.