+1 to what Robert said. You have a couple of options here:

* Use a ForkJoinPool as the executor, which scales much more gracefully under 
load.
* If your RPC logic is pretty simple and never blocks, you can use 
directExecutor() on the server builder and run RPCs inline. This avoids the 
need for a separate executor and pushes all the work onto the worker 
EventLoopGroup.
* Consider using an event loop and channel type that scale better under load. 
I recommend EpollServerSocketChannel and the corresponding event loop. A 
sketch combining these options follows below.
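
For illustration, here is a minimal sketch of what those options might look 
like on the builder (the port value and OurService come from your snippet; the 
Epoll transport assumes netty-transport-native-epoll is on the classpath and 
that you are running on Linux):

```
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import java.util.concurrent.ForkJoinPool;

// Sketch only: Epoll classes require the native epoll transport (Linux).
EpollEventLoopGroup bossGroup = new EpollEventLoopGroup(1);
EpollEventLoopGroup workerGroup = new EpollEventLoopGroup();

Server server = NettyServerBuilder.forPort(port)
        // A ForkJoinPool degrades more gracefully under load than a small fixed pool.
        .executor(ForkJoinPool.commonPool())
        // Or, if your handlers truly never block, run RPCs inline instead:
        // .directExecutor()
        .bossEventLoopGroup(bossGroup)
        .workerEventLoopGroup(workerGroup)
        .channelType(EpollServerSocketChannel.class)
        .addService(new OurService())
        .build()
        .start();
```

Keep in mind that directExecutor() is only safe if nothing in the handler can 
block; otherwise you stall the event loop, which is exactly the symptom you 
are describing.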

On Tuesday, November 27, 2018 at 11:30:02 AM UTC-8, robert engels wrote:
>
> If you get multiple external requests and all of them take 200ms, you are 
> going to be blocked… if the requests are IO bound, then 4 is too small for 
> the pool, and by increasing the pool size, other non-external requests that 
> arrive can still be handled.
>
> On Nov 27, 2018, at 1:19 PM, Hugo Migneron <hugo.m...@gmail.com> wrote:
>
> Hi,
>
> We run a high-throughput gRPC server (in Java) with a target latency of 
> sub-15ms. The vast majority of requests are executed within that timeframe. 
> However, a small percentage of requests rely on an external service that 
> must be executed inline. It is generally fast (~20ms) but can sometimes be 
> slow (200+ ms). We do time out these external service requests at 200ms, 
> but we noticed that our event loop blocks when timeouts happen; the effect 
> on latency snowballs and we quickly start taking more than 2 seconds to 
> process requests. Things stay that way for a few seconds until latency gets 
> back on track again.
>
> We run on Kubernetes and our Docker container is provided with about 1.5 to 
> 2 CPUs. There are many replicas of the same server.
>
> Here's how we start the server:
>
> ```
>
> LinkedBlockingQueue<Runnable> workerQueue = new LinkedBlockingQueue<>();
> EFThreadFactory threadFactory = new EFThreadFactory(); // Does nothing fancy/special
> ExecutorService executorService = new ThreadPoolExecutor(4, 4, 0L,
>         TimeUnit.MILLISECONDS, workerQueue, threadFactory);
> Server server = NettyServerBuilder.forPort(port)
>                 .executor(executorService)
>                 .addService(new OurService())
>                 .build()
>                 .start();
>
> ```
>
> Does this configuration make sense given our goals and the situation we're 
> in? If not, what would be the optimal configuration to avoid blocking the 
> event loop?
>
> Would increasing `maximumPoolSize` of the executor be of any benefit given 
> the low amount of CPU each server gets?
>
> If thread configuration / the posted code is not the issue here, any 
> pointers as to what I should look at / understand in order to solve the 
> problem?
>
> Thank you!
