In "server_async.cc", each thread runs:
void ThreadFunc(int thread_idx) {
  // Wait until work is available or we are shutting down
  bool ok;
  void* got_tag;
  if (!srv_cqs_[cq_[thread_idx]]->Next(&got_tag, &ok)) {
    return;
  }
  ServerRpcContext* ctx;
  std::mutex* mu_ptr = &shutdown_state_[thread_idx]->mutex;
  do {
    ctx = detag(got_tag);
    // The tag is a pointer to an RPC context to invoke.
    // Proceed while holding a lock to make sure that
    // this thread isn't supposed to shut down.
    mu_ptr->lock();
    if (shutdown_state_[thread_idx]->shutdown) {
      mu_ptr->unlock();
      return;
    }
  } while (srv_cqs_[cq_[thread_idx]]->DoThenAsyncNext(
      [&, ctx, ok, mu_ptr]() {
        ctx->lock();
        if (!ctx->RunNextState(ok)) {
          ctx->Reset();
        }
        ctx->unlock();
        mu_ptr->unlock();
      },
      &got_tag, &ok, gpr_inf_future(GPR_CLOCK_REALTIME)));
}
This means each thread handles its own completion queue, and that queue
serves multiple RPC calls (each wrapped in a ServerRpcContext; in the init
loop, four different server contexts are registered on the same completion
queue).
Let us look at the implementation of ServerRpcContext. Each subclass of
ServerRpcContext handles a different RPC method: in its constructor it
stores the passed request method (why is it not in the asynchronous form
that the greeter_async_server demo uses, i.e. "service_->RequestXX"?) and
the invoke method (the RPC processing step inside Proceed?) for the
completion queue to handle.
One thing is blocking me from going deeper:

    srv_cqs_[cq_[thread_idx]]->DoThenAsyncNext

This is the first time I have seen DoThenAsyncNext. I want to know how this
method works with ctx->RunNextState (CallData::RunNextState) compared to
Proceed in the demo code:
void Proceed() {
  if (status_ == CREATE) {
    status_ = PROCESS;
    initAsyncReq(&ctx_, &request_, &responder_, cq_, cq_, this);
  } else if (status_ == PROCESS) {
    // Spawn a new CallData instance to serve new clients while we process
    // the one for this CallData. The instance will deallocate itself as
    // part of its FINISH state.
    new CallData(service_, cq_, initAsyncReq);
    // The actual processing.
    invoker(request_, reply_);
    // And we are done! Let the gRPC runtime know we've finished, using the
    // memory address of this instance as the uniquely identifying tag for
    // the event.
    status_ = FINISH;
    responder_.Finish(reply_, Status::OK, this);
  } else {
    GPR_ASSERT(status_ == FINISH);
    // Once in the FINISH state, deallocate ourselves (CallData).
    delete this;
  }
}
On Wednesday, February 20, 2019 at 2:12:29 PM UTC+8, Lei Wang wrote:
>
> After careful study of
> https://github.com/grpc/grpc/blob/master/test/cpp/qps/server_async.cc
> (suggested in another topic, "*GRPC Threading Model*"), I have some
> concrete questions in mind, because the user has to provide their own
> threading model. Since I am implementing pubsub services on top of gRPC,
> threads are important for streaming RPC performance. I am seeking help
> from the gRPC community.
>
>
> I am implementing an async gRPC pubsub service (with multiple service
> methods), and I am trying to use multiple threads to boost performance.
> The solution from
> https://github.com/grpc/grpc/blob/master/test/cpp/qps/server_async.cc
> just tells me that each thread handles a unique CallData object
> (ServerRpcContext instance):
>
> one thread <-> one completion queue <-> one grpc server context, one rpc
> method (CallData instance with its address as a unique tag)
>
> I have hands-on experience with the Linux event-handling mechanisms
> epoll, kqueue, and select. Now I am curious about the differences
> between the following gRPC threading models and how to implement them
> on the pubsub server and client sides:
>
> 1 thread <-> 1 completion queue <-> multiple rpc methods handled
> 1 thread <-> multiple completion queues <-> multiple rpc methods handled
> many threads <-> 1 completion queue
> many threads <-> many completion queues
--
You received this message because you are subscribed to the Google Groups
"grpc.io" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/grpc-io/180eb40a-e372-4e82-9de8-cc997c660486%40googlegroups.com.