I'd like to implement concurrency throttling for my gRPC-Java services so I can limit the number of concurrent executions of my service and put a reasonable cap on the queue of waiting work. One way I've found to do this is to use a custom Executor that bounds both the number of concurrent tasks and the task queue depth. The downside of this approach is that it applies to all services hosted in a server: I've studied the code, and there does not appear to be a way for the server's Executor to know which service and operation is being requested.
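For concreteness, here is a minimal sketch of the kind of bounded Executor I mean, using only `java.util.concurrent`. The pool size and queue capacity are arbitrary illustration values; in a real server the executor would be handed to the builder via `ServerBuilder.executor(...)`, which (as noted above) throttles the whole server, not an individual service:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottledExecutorDemo {
    public static void main(String[] args) throws InterruptedException {
        // At most 2 tasks run concurrently; at most 4 more may wait in the queue.
        // AbortPolicy makes the executor reject work once both limits are hit.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),
                new ThreadPoolExecutor.AbortPolicy());

        // In a real server this is where the throttling would be attached, e.g.:
        //   Server server = ServerBuilder.forPort(port)
        //       .executor(executor)
        //       .addService(myService)
        //       .build();

        // Demonstrate the cap: block the 2 running tasks, fill the queue of 4,
        // and observe that further submissions are rejected.
        CountDownLatch release = new CountDownLatch(1);
        int accepted = 0;
        int rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                executor.execute(() -> {
                    try {
                        release.await();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        System.out.println("accepted=" + accepted + " rejected=" + rejected);

        release.countDown();
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
// prints: accepted=6 rejected=4
```

Because the rejection happens inside the shared pool, every service on the server sees the same limits, which is exactly the limitation described above.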
What is the correct way to implement different throttle policies for individual services running in a single Server? Do I really have to create a unique Server instance (with its own port) for every distinct throttle policy? Finally, would a PR to allow per-service Executors be accepted?
