Vijay, thank you for your detailed explanation. After server shutdown, is it 
only new RPC registrations that are disallowed, or can no async operation 
(read/write/request/finish) be started at all?

Is it also required that CQ shutdown happen after server shutdown?

Thank you.

> On Nov 12, 2019 at 12:09 PM, 'Vijay Pai' via grpc.io (mailto:[email protected]) wrote:
>
> There's a requirement that new RPCs can't be registered once the server has 
> been shut down, nor can new RPC operations be initiated once the CQ has been 
> shut down. These are, IMO, the two rough edges in the C++ CQ-based async API. 
> The shutdown mutex protects against that possibility. Note that it's actually 
> an array of mutexes, one per RPC-processing thread, so in the common case 
> the mutex is uncontended since the same thread keeps accessing it over and 
> over again. Only at shutdown does the master thread take the lock for each 
> worker thread and shut it down, after which the workers see their updated 
> shutdown state and stop performing operations. Because the mutex is 
> uncontended in the common case, removing it is unlikely to yield any 
> substantial performance improvement, and its removal would require some 
> alternative mechanism for enforcing the shutdown contract described above.
>
> - Vijay
>
>  On Tuesday, November 12, 2019 at 4:01:12 AM UTC-8, Ctmahapa95 wrote:  
> >
> > Thanks for the explanation. In the same vein, is there any reason to use a 
> > shutdown mutex instead of calling server shutdown and then CQ shutdown? 
> > With these two mutexes removed, would the C++ benchmark program show a 
> > marked improvement?
> >
> > Thank you.
> >
> > > On Nov 12, 2019 at 6:21 AM, 'Vijay Pai' via grpc.io wrote:
> > >
> > > Hi there,
> > >
> > > I think I'm the original writer of that locked code. The lock guards 
> > > against a new CQ tag being posted for the same RPC while a call to 
> > > RunNextState is still outstanding (which can happen in particular if the 
> > > operation initiated by RunNextState completes quickly). It's true that a 
> > > 1:1 thread:CQ mapping would not allow this, since the CQ would not be 
> > > polled again until the existing call to RunNextState had completed. I 
> > > hope that helps; feel free to reach out with any follow-ups.
> > >
> > > - Vijay
> > >
> > >  On Wednesday, November 6, 2019 at 4:24:02 PM UTC-8, Ctmahapa95 wrote:  
> > > >
> > > >  Yes. Thank you.
> > > >
> > > > > On Nov 6, 2019 at 1:49 PM, 'Charlie Magnuson' via grpc.io wrote:
> > > > >
> > > > > Hello Saroj,
> > > > >
> > > > > Are you referring to this loop?
> > > > > https://github.com/grpc/grpc/blob/15c67e255b30c407aafcafe72d91bbaaad7a49dd/test/cpp/qps/server_async.cc#L220
> > > > >
> > > > > On Monday, November 4, 2019 at 3:55:00 PM UTC-8, Saroj Mahapatra wrote:
> > > > > > My guess is that if multiple threads are using the same ‘cq’, then 
> > > > > > two threads might run the ‘RunNextState’ function at the same time. 
> > > > > > Is that correct? Does the need for ‘ctx’ locking go away with one 
> > > > > > thread per ‘cq’?
> > > > > >
> > > > > > Thank you.
> > > > >
> > > > >  --
> > > > >  You received this message because you are subscribed to the Google 
> > > > > Groups "grpc.io (http://grpc.io)" group.
> > > > >  To unsubscribe from this group and stop receiving emails from it, 
> > > > > send an email to  [email protected].
> > > > >  To view this discussion on the web visit   
> > > > > https://groups.google.com/d/msgid/grpc-io/231aff4d-e43f-43ce-aa09-529713d23195%40googlegroups.com
> > > > >  
> > > > > (https://groups.google.com/d/msgid/grpc-io/231aff4d-e43f-43ce-aa09-529713d23195%40googlegroups.com?utm_medium=email&utm_source=footer).
