Hello there,

Wanted to follow up on this. You didn't describe the setup, but I presume 
that all of your client threads are sending on the same channel to the 
server - is that correct? gRPC multiplexes streams (RPCs) onto a client 
channel, but ultimately the kernel-level polling happens at the level of a 
FD (TCP connection). As a result, multiple client threads end up waiting 
for activity on that one FD, and only one of them gets woken for any given 
event. That thread is then responsible for processing the TCP activity, 
which lets it unblock the thread that is actually waiting on the particular 
stream. So the more threads you have, the greater the chance that the 
thread woken by the OS activity is not the one actually waiting for the 
particular incoming stream. Additionally, don't forget to consider queueing 
latency at the client: even if each client RPC is processed by a different 
thread, the shared socket is only serviced by one thread at a time.
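
To make that concrete, here is a minimal sketch (not your code - it assumes 
the stock helloworld Greeter/SayHello proto from the gRPC C++ examples and a 
made-up server address) of X threads issuing blocking RPCs. With 
kSharedChannel = true, every thread's RPC is multiplexed onto one TCP 
connection, so all the threads contend on the same FD as described above; 
flipping it to false gives each thread its own channel (the per-thread 
channel argument prevents connection reuse), which is a handy experiment for 
seeing how much of your latency comes from that contention.

#include <memory>
#include <string>
#include <thread>
#include <vector>

#include <grpc++/grpc++.h>

#include "helloworld.grpc.pb.h"

int main() {
  const int kThreads = 32;           // "X" client threads
  const bool kSharedChannel = true;  // false => one channel (connection) per thread
  const std::string kTarget = "localhost:50051";  // assumed server address

  // One channel shared by every thread: all RPCs become streams on a single
  // TCP connection, so every blocked thread is waiting on the same FD.
  std::shared_ptr<grpc::Channel> shared_channel =
      grpc::CreateChannel(kTarget, grpc::InsecureChannelCredentials());

  std::vector<std::thread> threads;
  for (int i = 0; i < kThreads; ++i) {
    threads.emplace_back([&, i] {
      std::shared_ptr<grpc::Channel> channel = shared_channel;
      if (!kSharedChannel) {
        // A distinct channel argument per thread keeps gRPC from reusing the
        // same underlying connection, so each thread gets its own FD.
        grpc::ChannelArguments args;
        args.SetInt("client.thread.index", i);  // arbitrary key, only to differ
        channel = grpc::CreateCustomChannel(
            kTarget, grpc::InsecureChannelCredentials(), args);
      }
      std::unique_ptr<helloworld::Greeter::Stub> stub =
          helloworld::Greeter::NewStub(channel);

      helloworld::HelloRequest request;
      request.set_name("thread-" + std::to_string(i));
      helloworld::HelloReply reply;
      grpc::ClientContext context;
      // Blocking (synchronous) call: this thread sleeps until gRPC's polling
      // machinery sees activity on the channel's FD and hands it the result.
      grpc::Status status = stub->SayHello(&context, request, &reply);
      (void)status;  // timing and error handling omitted in this sketch
    });
  }
  for (std::thread& t : threads) {
    t.join();
  }
  return 0;
}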

As a side note, I am interested in your use of M/M/1 queueing theory to 
describe this issue. My gut feeling is that it's not the right model: I 
don't expect a memoryless service time here, and the server consists of 
several parts - serialized socket processing but fully parallel RPC 
handling - so it isn't a single queue. Are you developing (or have you 
developed) a paper or TR validating this approach?
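
(For reference, and purely as textbook formulas rather than anything 
measured on your setup: M/M/1 predicts a mean time in system of 
T = 1/(\mu - \lambda), whereas for a general service-time distribution the 
M/G/1 Pollaczek-Khinchine result gives a mean queueing delay of 
W_q = \lambda E[S^2] / (2(1 - \rho)) with \rho = \lambda E[S]. A more 
variable service time inflates that delay well beyond the exponential case, 
which is why I'd be careful about ruling out server-side queueing from 
M/M/1 alone.)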

Best regards - vjpai

On Friday, September 29, 2017 at 10:23:07 AM UTC-7, [email protected] 
wrote:
>
>
> Hello,
>
> I am using gRPC for a university research project.
> My experiment setup is as follows: I have a synchronous server running on 
> one machine. On another machine, I start a process that creates and 
> launches multiple threads (say X threads) that act as clients and make 
> synchronous RPC calls to this synchronous server. The client threads are 
> then blocked, waiting for responses from the server.
> I want to find out how each client thread gets woken up when the 
> synchronous RPC call completes and the response is returned to the client. 
> Does gRPC have its own underlying thread pool in which threads are woken 
> up at random until the correct thread is found, which then notifies the 
> corresponding thread among the X client threads that I've created?
>
> I ask because I notice strange behavior: as I increase the number of 
> client threads, the responses take longer and longer to return to each 
> client. Using M/M/1 queueing theory, I eliminated queueing at the server 
> as the reason: the actual response latencies are orders of magnitude 
> greater than what queueing would contribute. Also, upon looking at 
> timestamps on the send-request-to-server and receive-response-from-server 
> paths, I find that the receive-response-from-server path is the culprit. 
> This possibly implies that the client is not able to pick up the response 
> exactly when it is received, even though it was blocked waiting for it. 
> So my hypothesis is that random threads are woken up until the correct 
> one is identified, which then wakes up the client thread that I created.
>
