>
> What should happen if a request comes in and the server cannot handle it? 
>  Fail the request immediately?  Queue it?  Drop the connection?  Queue with 
> dropping if overloaded? 


By adding my own Executor to the server (ServerBuilder.executor()) I can 
control much of this. By capping the maximum number of threads I can bound 
the maximum concurrency, and by bounding the Executor's queue depth I can 
control how many backed-up requests I'll accept before turning requesters 
away. What I can't do is control this on a service-by-service basis: I can 
only set an Executor for the Server as a whole. I'd hate to have to allocate 
a full Server and port for each unique throttling policy.
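As a sketch of that server-wide approach: a ThreadPoolExecutor with a bounded 
ArrayBlockingQueue and AbortPolicy caps concurrency and queue depth, failing 
excess work fast with RejectedExecutionException. It would be handed to gRPC 
via something like ServerBuilder.forPort(port).executor(pool). The demo below 
uses only java.util.concurrent (the class name and constants are mine, not 
gRPC API), and shows a 5th task being rejected once 2 are running and 2 are 
queued:

```java
import java.util.concurrent.*;

public class BoundedExecutorDemo {
    // Bounded executor: up to maxThreads concurrent tasks, up to
    // queueDepth waiting tasks, reject anything beyond that.
    static ExecutorService newBoundedExecutor(int maxThreads, int queueDepth) {
        return new ThreadPoolExecutor(
                maxThreads, maxThreads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueDepth),
                // AbortPolicy throws RejectedExecutionException when
                // both the pool and the queue are full.
                new ThreadPoolExecutor.AbortPolicy());
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = newBoundedExecutor(2, 2);
        CountDownLatch release = new CountDownLatch(1);
        // Fill the pool (2 running) and the queue (2 waiting).
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        boolean rejected = false;
        try {
            pool.execute(() -> { });  // 5th task: pool and queue are full
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("fifth task rejected: " + rejected);
        release.countDown();
        pool.shutdown();
    }
}
```

In a gRPC server, rejection would surface as a failed call rather than a 
silently growing backlog, which is the "fail the request immediately when 
overloaded" policy from the question.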

What I'd really like is for the Task created for each request to include 
some kind of metadata about the request so that I can write my own 
intelligent Executor that enforces throttles and backlog queues on a 
service-by-service or even operation-by-operation basis.
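Lacking that metadata in the Executor today, here is a hedged sketch of what 
a per-operation throttle could look like once something (an interceptor, or 
the hoped-for metadata-aware Executor) knows the full method name. Everything 
here is illustrative, not gRPC API: a Semaphore per method name, with a 
default limit for methods that have no explicit policy.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Hypothetical per-method throttle: a bounded Semaphore per full
// method name (e.g. "pkg.MyService/Expensive"), falling back to a
// default limit for methods with no explicit policy.
public class MethodThrottle {
    private final Map<String, Semaphore> limits = new ConcurrentHashMap<>();
    private final int defaultLimit;

    public MethodThrottle(int defaultLimit) {
        this.defaultLimit = defaultLimit;
    }

    public void setLimit(String fullMethodName, int maxConcurrent) {
        limits.put(fullMethodName, new Semaphore(maxConcurrent));
    }

    /** Returns true if the call may proceed; caller must release() when done. */
    public boolean tryAcquire(String fullMethodName) {
        return semaphoreFor(fullMethodName).tryAcquire();
    }

    public void release(String fullMethodName) {
        semaphoreFor(fullMethodName).release();
    }

    private Semaphore semaphoreFor(String name) {
        return limits.computeIfAbsent(name, n -> new Semaphore(defaultLimit));
    }

    public static void main(String[] args) {
        MethodThrottle t = new MethodThrottle(8);
        t.setLimit("pkg.MyService/Expensive", 1);
        System.out.println(t.tryAcquire("pkg.MyService/Expensive")); // true
        System.out.println(t.tryAcquire("pkg.MyService/Expensive")); // false: at limit
        t.release("pkg.MyService/Expensive");
        System.out.println(t.tryAcquire("pkg.MyService/Expensive")); // true again
    }
}
```

A caller that gets false back would fail the request (e.g. with 
RESOURCE_EXHAUSTED) instead of running it.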

On Monday, February 27, 2017 at 12:53:58 PM UTC-8, Carl Mastrangelo wrote:
>
> What should happen if a request comes in and the server cannot handle it? 
>  Fail the request immediately?  Queue it?  Drop the connection?  Queue with 
> dropping if overloaded?  You can do most of these from your application 
> without getting gRPC involved.  If you don't want to even parse the 
> request, you can disable automatic inbound flow control, and simply not 
> call request.  The data will queue up until the flow control window is 
> empty and the client will stop sending.  
>
> On Saturday, February 25, 2017 at 11:17:30 PM UTC-8, Ryan Michela wrote:
>>
>> I'd like to implement concurrency throttling for my gRPC-java services so 
>> I can limit the number of concurrent executions of my service and put a 
>> reasonable cap on the queue for waiting work. One way I've found to do this 
>> is to use a custom Executor that limits the number of concurrent tasks and 
>> task queue depth. The downside of this approach is that it applies to all 
>> services hosted in a server. I've studied the code and there does not 
>> appear to be a way for the server Executor to know which service and 
>> operation is being requested.
>>
>> What is the correct way to implement different throttle policies for 
>> individual services running in a Server? Do I really have to create a 
>> unique Server instance (with an associated port) for every distinct 
>> throttle policy?
>>
>> Finally, would a PR to allow for per-service Executors be accepted?
>>
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/31e12f2b-9c3c-4adf-ab10-7a0ca2f26ff6%40googlegroups.com.