I'm a little new to gRPC and am looking for some advice/recommendations on 
whether it is useful (or not) to share various client objects, both from an 
efficiency standpoint and a maintainability one (this is for C++).


I'm considering a dedicated thread for each specific instance of a service 
that a client application will require. This "AsyncClient" thread will own a 
Channel object and a CompletionQueue. Each "request" will be derived from a 
common "AsyncRequest" base class that contains the ClientContext, stub, 
Status, etc. The derived class(es) will implement the specifics of each call 
(the request, the response, and the operations on the "stub" that are unique 
to that RPC).

From an application standpoint, the application constructs the appropriate 
derived object and "executes" it on the AsyncClient, which hands its channel 
to the request, assigns a tag for the request, and invokes the "run" method 
on the AsyncRequest object (which builds the stub and does the Prepare, 
StartCall and Finish operations). When the call completes, the AsyncClient 
"completes" the request and forwards it to the higher-level application(s).
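The AsyncClient side would then be something like the following (again only a 
sketch; the target address below is a placeholder and the ownership handling 
is simplified):

    #include <memory>
    #include <string>
    #include <thread>

    #include <grpcpp/grpcpp.h>

    // Owns the shared Channel, the CompletionQueue and the thread that drains
    // it; one instance per target service.
    class AsyncClient {
     public:
      explicit AsyncClient(const std::string& target)
          : channel_(grpc::CreateChannel(target, grpc::InsecureChannelCredentials())),
            poller_([this] { Poll(); }) {}

      ~AsyncClient() {
        cq_.Shutdown();   // Next() returns false once the queue is drained
        poller_.join();
      }

      // "Execute" a request: it issues Prepare/StartCall/Finish against the
      // shared channel and queue, tagged with its own address. Ownership of
      // the request travels via the tag.
      void Execute(std::unique_ptr<AsyncRequest> request) {
        AsyncRequest* raw = request.release();
        raw->Run(channel_, &cq_);
      }

     private:
      void Poll() {
        void* tag = nullptr;
        bool ok = false;
        while (cq_.Next(&tag, &ok)) {              // blocks until a call completes
          auto* request = static_cast<AsyncRequest*>(tag);
          request->OnComplete(ok);                 // forward to the application
          delete request;                          // reclaim ownership from Execute()
        }
      }

      std::shared_ptr<grpc::Channel> channel_;
      grpc::CompletionQueue cq_;
      std::thread poller_;
    };

So the application would do roughly AsyncClient client("localhost:50051"); 
client.Execute(std::make_unique<SayHelloRequest>("world")); and get the 
result back through OnComplete().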

This leads to a couple of questions:
1) Which client objects can be shared and which are unique/specific to the 
call?  It seems the channel and perhaps the stub can be shared across multiple 
requests, but I can't find any documentation that confirms whether this is 
possible (or not). The CompletionQueue seems to be shareable across multiple 
outstanding requests, as long as the tags can be uniquely correlated to the 
requests (which is easy enough when the tag is the address of the request, as 
in the sketch above).

2) What is the lifecycle of potentially shared objects?  For example, do the 
channel and stub need to remain "live" throughout the call, or can the stub 
be deleted/released as soon as the Prepare function has completed (see the 
snippet just below)? Given that the Channel is shared, is it only needed for 
the construction of the stub, and hence not needed by the request afterwards? 
(Note: the "AsyncClient" will still "own" the Channel and make it available 
for subsequent requests.)
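To make (2) concrete, the variant I'm asking about would look like this 
(reusing the SayHelloRequest sketch from above, with the stub demoted to a 
local variable):

    // Variant of SayHelloRequest::Run() from the sketch above: the stub is a
    // local and is destroyed as soon as Run() returns, while Finish() is
    // still outstanding.
    void SayHelloRequest::Run(const std::shared_ptr<grpc::Channel>& channel,
                              grpc::CompletionQueue* cq) {
      auto stub = helloworld::Greeter::NewStub(channel);
      reader_ = stub->PrepareAsyncSayHello(&context_, request_, cq);
      reader_->StartCall();
      reader_->Finish(&response_, &status_, this);
    }  // <-- stub goes out of scope here; is that legal, or must it outlive the call?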

3) What possible race conditions are introduced? Can an underlying gRPC 
thread return responses faster than another thread can complete the request 
(especially when running on the same physical server)?

4) And the obvious one... is it worth having a common "AsyncClient" 
process/handler, or is it better for each application to re-construct 
channels, contexts, stubs (and the async handling logic) for each request? 
(Note: the goal is to have multiple outstanding requests at a time and not 
have applications simply block on the CompletionQueue as in the gRPC async 
C++ examples.)

Any advice/insights much appreciated.
