I have an interesting observation about the gRPC async server. I have two 
worker threads, A and B, each with its own completion queue. I ran a stress 
test; initially both worker threads are serving clients. After some time, I 
see no activity from thread A. Thread A is not stalled on anything: it is 
waiting on its completion queue for an event, and it has already posted a 
request on that queue to accept a new connection. For around 4 hours this 
thread received no work. Then thread B gets stalled inside the application. 
After that point, if I try to connect a new client to the service, the 
request is never served. So it seems all of those requests went to B's 
completion queue while A was free.
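
For reference, this is roughly how my server is set up. It is only a minimal 
sketch based on the stock helloworld.Greeter service from the gRPC examples, 
not my actual code; the address, service, and handler names are placeholders. 
Each worker thread owns one ServerCompletionQueue and re-arms its own pending 
accept on that same queue. Shutdown/cleanup handling is omitted for brevity.

#include <memory>
#include <string>
#include <thread>
#include <vector>

#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"  // assumed: generated from the example helloworld.proto

using helloworld::Greeter;
using helloworld::HelloReply;
using helloworld::HelloRequest;

class AsyncServer {
 public:
  void Run() {
    grpc::ServerBuilder builder;
    builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
    builder.RegisterService(&service_);
    // Two completion queues, one per worker thread (A and B in my description).
    cqs_.push_back(builder.AddCompletionQueue());
    cqs_.push_back(builder.AddCompletionQueue());
    server_ = builder.BuildAndStart();

    for (auto& cq : cqs_) {
      threads_.emplace_back([this, cq = cq.get()] { HandleRpcs(cq); });
    }
    for (auto& t : threads_) t.join();
  }

 private:
  // Per-RPC state machine, bound to the completion queue it was requested on.
  struct CallData {
    CallData(Greeter::AsyncService* service, grpc::ServerCompletionQueue* cq)
        : service(service), cq(cq), responder(&ctx) {
      // This is the "accept new connection" event from my description: the
      // incoming RPC will be delivered only to this thread's completion queue.
      service->RequestSayHello(&ctx, &request, &responder, cq, cq, this);
    }
    void Proceed() {
      if (!finished) {
        // Re-arm: request the next call on the same queue, then answer this one.
        new CallData(service, cq);
        HelloReply reply;
        reply.set_message("Hello " + request.name());
        finished = true;
        responder.Finish(reply, grpc::Status::OK, this);
      } else {
        delete this;
      }
    }
    Greeter::AsyncService* service;
    grpc::ServerCompletionQueue* cq;
    grpc::ServerContext ctx;
    HelloRequest request;
    grpc::ServerAsyncResponseWriter<HelloReply> responder;
    bool finished = false;
  };

  // One worker thread: block on its own completion queue and drive the tags.
  void HandleRpcs(grpc::ServerCompletionQueue* cq) {
    new CallData(&service_, cq);  // post the first pending accept on this queue
    void* tag;
    bool ok;
    while (cq->Next(&tag, &ok)) {
      if (ok) {
        static_cast<CallData*>(tag)->Proceed();
      } else {
        delete static_cast<CallData*>(tag);
      }
    }
  }

  Greeter::AsyncService service_;
  std::vector<std::unique_ptr<grpc::ServerCompletionQueue>> cqs_;
  std::unique_ptr<grpc::Server> server_;
  std::vector<std::thread> threads_;
};

int main() {
  AsyncServer server;
  server.Run();
  return 0;
}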
This brings up the question: how is load balancing of incoming requests 
across the completion queues done internally?
