Hi Arpit,

This discussion is not really related to this gRFC anymore, as it does not
seem to be related to the grpc_init/shutdown issue.

If I recall correctly, what you did is correct. Let's say you have a thread
pool driving the ServerCompletionQueue's to handle events for the async
rpcs, and in the main thread you call Server::Shutdown(). Shutdown will try
to stop listening, mark all pending rpcs as cancelled (after a timeout, if
you specified one), and send a GOAWAY to all existing channels. At this
point your thread pool should keep running the same way, popping events and
possibly deleting per-rpc objects, including the ServerContext. Then, in
the main thread, you Shutdown all the ServerCompletionQueue's and wait for
the thread pool to exit, because in the thread pool you would have a loop
like while (cq->Next()). Once the thread pool has exited, you are sure that
there is no more pending work on the rpcs, and you can destroy the server
and the ServerCompletionQueue's.
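
To make the ordering concrete, here is a rough sketch of that flow (the
port, deadline, and variable names are just placeholders, error handling is
omitted, and header paths may vary by gRPC version):

#include <grpcpp/grpcpp.h>

#include <chrono>
#include <memory>
#include <thread>

// Worker loop: keeps popping events until the queue is shut down and drained.
void DrainQueue(grpc::ServerCompletionQueue* cq) {
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {
    // Handle the event; when the "done" tag for an rpc arrives, the per-rpc
    // objects (including its ServerContext) can be deleted here.
  }
}

int main() {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  std::unique_ptr<grpc::ServerCompletionQueue> cq = builder.AddCompletionQueue();
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();

  std::thread worker(DrainQueue, cq.get());

  // ... serve until it is time to stop ...

  // 1. Stop listening, cancel still-pending rpcs after the deadline,
  //    and send GOAWAY on existing channels.
  server->Shutdown(std::chrono::system_clock::now() + std::chrono::seconds(5));
  // 2. Shut down the completion queue(s); Next() returns false once drained.
  cq->Shutdown();
  // 3. Wait for the worker loop(s) to exit; no rpc work is pending afterwards.
  worker.join();
  // 4. Only now destroy the server and the completion queue(s).
  server.reset();
  cq.reset();
  return 0;
}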

If things do not work this way, feel free to file an issue on github.
Thanks.

Hopefully this makes things a bit clearer.


On Wed, Jan 31, 2018 at 9:32 AM, Arpit Baldeva <abald...@gmail.com> wrote:

> Hi,
>
> Any feedback on this?
>
> Thanks.
>
>
> On Thursday, January 4, 2018 at 10:05:56 AM UTC-8, Arpit Baldeva wrote:
>>
>> Hi Yang,
>>
>> Sorry for the delayed reply. I was on vacation.
>>
>> Let me restate the problem very simply - currently, I can't call
>> grpc::Server::Shutdown without first making sure that all the "operations"
>> have completed, but without calling grpc::Server::Shutdown, the server
>> keeps sending me work if clients keep requesting new rpc executions.
>>
>> The details of my setup are in my earlier reply. If I am missing
>> something in the current api that can be used, please let me know. In fact,
>> this recent post describes the shutdown flow that I really want, in Java -
>> https://groups.google.com/forum/#!topic/grpc-io/P5BFGoGxkbw
>>
>> Some more information on this gRFC discussion that I previously shared
>> privately (just in case someone else also has input here).
>>
>> >>
>>
>> I create an encapsulation, an Rpc object, whose methods are executed in
>> response to the events. The object contains a ServerContext, which is
>> naturally destroyed when the Rpc object goes out of scope. The Rpc object
>> is destroyed when the async done event from grpc is received. All the
>> rpcs also queue up their ‘request’ event in the completion queue.
>>
>>
>>
>> When my application wants to shut down, I call the server shutdown routine
>> followed by a completion queue shutdown. During this processing, the ‘done’
>> tag is received for queued-up rpcs and rpcs in progress. There is no other
>> way to receive the ‘done’ tag for these rpcs, so my Rpc objects are
>> destroyed in parallel.
>>
>>
>>
>> It should also be noted that grpc can send a “done” event for an rpc
>> while an async op for the same rpc is still pending. So when I receive the
>> done tag, I wait for the object's other async ops to finish. Once I know
>> that everything is cleaned up, I go ahead and destroy it.
>> <<
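
(For illustration, a rough sketch of the per-rpc lifetime described in the
excerpt above; the class and member names are made up, not part of the gRPC
API:)

#include <grpcpp/grpcpp.h>

// Hypothetical per-rpc wrapper: owns the ServerContext and deletes itself
// only after the 'done' event has arrived and no async op is still in flight.
class Rpc {
 public:
  // Call before starting each async op (read/write/finish) for this rpc.
  void OpStarted() { ++pending_ops_; }

  // Call for every tag popped from the completion queue that belongs to this
  // rpc; 'is_done_event' is true for the AsyncNotifyWhenDone notification.
  void OnEvent(bool is_done_event) {
    if (is_done_event) {
      done_received_ = true;
    } else {
      --pending_ops_;
    }
    if (done_received_ && pending_ops_ == 0) {
      delete this;  // destroys ctx_ along with the wrapper
    }
  }

 private:
  grpc::ServerContext ctx_;
  int pending_ops_ = 0;
  bool done_received_ = false;
};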
>>
>>
>> On Friday, December 22, 2017 at 1:22:41 PM UTC-8, Yang Gao wrote:
>>>
>>> Hi Arpit,
>>>
>>> You did not mention before that you use grpc_init(). My understanding of
>>> the crash you saw at the destruction of the ServerContext is that you no
>>> longer had a grpc_init() covering that point, and that you deleted the
>>> server before destroying the ServerContext (rather than just shutting down
>>> the server). If that is not the case, then I think my previous thought was
>>> not right.
>>>
>>>
>>>
>>> On Wed, Dec 20, 2017 at 11:33 AM, Arpit Baldeva <abal...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have looked at the gRFC and am not too sure how it fixes some of the
>>>> current issues with library shutdown (you referenced the issues I mentioned
>>>> on GitHub and in private conversations).
>>>>
>>>> First off, I don't know about other users, but I was already forced to
>>>> use grpc_init/shutdown in my code (so I am aware of those functions). In my
>>>> app, I set the tracers using the tracer api, and the tracers are only
>>>> enabled if grpc_init has been called first. So when the application boots
>>>> up and I configure my logging, I need to call grpc_init before doing
>>>> anything with server instantiation, etc.
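
(For illustration, the ordering constraint being described - a minimal
sketch; the specific tracer name is just an example:)

#include <grpc/grpc.h>

void ConfigureGrpcLogging() {
  // The library must be initialized before tracer settings take effect.
  grpc_init();
  grpc_tracer_set_enabled("http", 1);  // example tracer name
  // ... and every grpc_init() has to be balanced by a grpc_shutdown() later.
}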
>>>>
>>>> The main problem I have is that I feel the current apis require too much
>>>> management from the integrator and are still not 100% safe.
>>>>
>>>>
>>>>
>>>> In my setup, I use the async api. My main thread processes events/tags
>>>> while I have a separate thread to pump the completion queue. Now, suppose I
>>>> want to shut down my server instance. Currently, I can’t call
>>>> grpc::Server::Shutdown without making sure that all my existing events in
>>>> progress are finished first and that I have destroyed every reference to
>>>> all the library objects (say a ServerContext, which theoretically should be
>>>> independent of the server lifetime). However, without calling
>>>> grpc::Server::Shutdown, my server may keep on getting new events from new
>>>> clients requesting new rpcs. It’d be much easier if I had an api that
>>>> allowed me to cancel pending rpcs (the ones that have not been started
>>>> yet). At the moment, the only way for me to do this would be to keep a list
>>>> of rpcs that have not started and manually call serverContext->TryCancel on
>>>> them.
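
(For illustration, a sketch of that manual workaround - the tracker class
and its names are hypothetical, not a gRPC API:)

#include <grpcpp/grpcpp.h>

#include <mutex>
#include <unordered_set>

// Tracks contexts of rpcs that have been requested but not yet started, so
// they can be cancelled explicitly at shutdown time.
class PendingRpcTracker {
 public:
  void Add(grpc::ServerContext* ctx) {
    std::lock_guard<std::mutex> lock(mu_);
    pending_.insert(ctx);
  }
  void Remove(grpc::ServerContext* ctx) {
    std::lock_guard<std::mutex> lock(mu_);
    pending_.erase(ctx);
  }
  // Called from the shutdown path, before waiting for outstanding events.
  void CancelAll() {
    std::lock_guard<std::mutex> lock(mu_);
    for (grpc::ServerContext* ctx : pending_) ctx->TryCancel();
  }

 private:
  std::mutex mu_;
  std::unordered_set<grpc::ServerContext*> pending_;
};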
>>>>
>>>>
>>>> Are you suggesting that with the new flow, I can simply call
>>>> grpc::Server::Shutdown, have it cancel all the pending rpcs/events (some of
>>>> which may free additional resources like the ServerContext), and the things
>>>> that attach their lifetime to the server would instead hang on to the
>>>> GrpcLibrary?
>>>>
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> On Tuesday, December 19, 2017 at 3:48:40 PM UTC-8, Yang Gao wrote:
>>>>>
>>>>> Users reported bugs related to this issue. Some of the issues can be
>>>>> avoided/worked around by strengthening the requirement or with minor
>>>>> tweaks of the code. Some are not so easy to fix without potential
>>>>> performance overhead. The internal doc contains a couple more links to
>>>>> related issues people encountered.
>>>>>
>>>>> On Tue, Dec 19, 2017 at 3:24 PM, 'Vijay Pai' via grpc.io <
>>>>> grp...@googlegroups.com> wrote:
>>>>>
>>>>>> Hi there,
>>>>>>
>>>>>> I'd very much like to discuss this issue. Switching to explicit
>>>>>> initialization increases friction for users, but keeping it the existing
>>>>>> way just increases friction for the library writers (unless the code ends
>>>>>> up being so failure-prone that it affects users through a loss of
>>>>>> stability). Has there been a user feature request for explicit
>>>>>> initialization?
>>>>>>
>>>>>>
>>>>>> On Tuesday, December 12, 2017 at 9:40:21 AM UTC-8, Yang Gao wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I have created a gRFC in https://github.com/grpc/proposal/pull/48
>>>>>>> which will add a new C++ class to make gRPC C++ library lifetime 
>>>>>>> explicit.
>>>>>>> If you have comments and suggestions, please use this thread to
>>>>>>> discuss.
>>>>>>> Thanks.
>>>>>>>
