No, it wouldn't. If you do not drain the CQ, the shutdown won't even
finish.
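
Roughly, the expected order looks like this (a minimal sketch assuming the
C++ grpc::CompletionQueue API; the function name is just for illustration):

    #include <grpc++/grpc++.h>

    void DrainAndShutdown(grpc::CompletionQueue& cq) {
      cq.Shutdown();  // after this, no new events can be enqueued
      void* tag;
      bool ok;
      // Next() keeps handing back pending events; it returns false only
      // once the queue is both shut down and fully drained.
      while (cq.Next(&tag, &ok)) {
        // release any per-tag state here
      }
      // Only now is it safe to destroy the completion queue.
    }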

-Sree

On Tue, Dec 5, 2017 at 7:30 PM, <[email protected]> wrote:

> Hi, Sree
>
> If I didn't drain the CQ, is it possible for it to behave like that?
>
> Thanks,
> Yihao
>
> On Tuesday, September 19, 2017 at 2:32:08 PM UTC-7, Sree Kuchibhotla wrote:
>>
>> Thanks for the info, Yihao. This does look like a bug, but I am not sure
>> what might be happening.
>>
>> We did fix a bug in the completion queue shutdown path
>> (https://github.com/grpc/grpc/pull/11703), but that fix is in version 1.6.0
>> and later. Would you mind upgrading to the latest release, 1.6.1, and
>> retrying your test?
>>
>> thanks,
>> Sree
>>
>> On Sat, Sep 16, 2017 at 4:27 PM, yihao yang <[email protected]> wrote:
>>
>>> Hi, Sree,
>>>
>>> I do not have a minimal code snippet that can reproduce the problem (I
>>> have only encountered the assertion once), so I can only describe the
>>> scenario:
>>> 1. I am using a bidirectional async stream to maintain a heartbeat
>>> between the server and the client.
>>> 2. The assertion happens on the client side, during the exiting phase.
>>> 3. The client stream periodically sends a message to the server (roughly
>>> one message every 500 ms). The test case first cancels the client stream
>>> and, once there are no in-flight reads/writes, shuts down the client CQ;
>>> a sketch of this teardown sequence follows below. (Notice that at that
>>> time the server stream and CQ are still running.)
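>>>
>>> Roughly, the intended teardown sequence looks like this (a sketch only;
>>> the stub call and the tag counting are made up for illustration, and per
>>> the reply above the final drain loop is the step that must not be skipped):
>>>
>>>     grpc::ClientContext ctx;
>>>     grpc::CompletionQueue cq;
>>>     // bidi stream opened earlier, e.g.
>>>     // auto rw = stub->AsyncHeartbeat(&ctx, &cq, start_tag);
>>>
>>>     ctx.TryCancel();              // 1. cancel the client stream
>>>
>>>     void* tag;
>>>     bool ok;
>>>     int inflight = 2;             // e.g. one pending Read, one pending Write
>>>     while (inflight > 0 && cq.Next(&tag, &ok)) {
>>>       --inflight;                 // 2. collect each outstanding op (ok may be false)
>>>     }
>>>
>>>     cq.Shutdown();                // 3. shut the client CQ down...
>>>     while (cq.Next(&tag, &ok)) {} // ...and drain it before destruction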
>>>
>>> In the log above, thread 14901 is used to process the server completion
>>> queue, and thread 14886 is used to process the client completion queue.
>>>
>>> Hope you can get some information from my description.
>>>
>>> Thanks,
>>> Yihao
>>>
>>> On Friday, September 15, 2017 at 7:12:23 AM UTC-7, Sree Kuchibhotla wrote:
>>>>
>>>> It does look like a bug in the completion queue shutdown path. Can you
>>>> share the test program you have been using to reproduce this?
>>>>
>>>> thanks,
>>>> Sree
>>>>
>>>> On Thu, Sep 14, 2017 at 6:23 PM, yihao yang <[email protected]>
>>>> wrote:
>>>>
>>>>> I0914 12:05:40.258058747   14907 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x7fa87c004af0, deadline=gpr_timespec {
>>>>> tv_sec: 10, tv_nsec: 0, clock_type: 3 }, reserved=(nil))
>>>>> I0914 12:05:40.262040844   14901 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fca250]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.262107598   14901 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fca250, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 362102878, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.262387036   14886 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fd21e0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.262416496   14886 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fd21e0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 362410478, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.288670933    8291 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fc20c0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.288740234    8291 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fc20c0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 388734668, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.362347155   14901 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fca250]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.362500837   14901 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fca250, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 462491875, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.362647536   14886 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fd21e0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.362678953   14886 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fd21e0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 462672886, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.388942072    8291 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fc20c0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.389056726    8291 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fc20c0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 489047123, clock_type: 1 }, reserved=(nil))
>>>>> D0914 12:05:40.392114240   14901 connectivity_state.c:135]   CONWATCH:
>>>>> 0x7fa80c01eed8 server_transport: from IDLE [cur=READY] 
>>>>> notify=0x7fa80c01a138
>>>>> D0914 12:05:40.392180813   14901 connectivity_state.c:135]   CONWATCH:
>>>>> 0x7fa80c01eed8 server_transport: from READY [cur=READY]
>>>>> notify=0x7fa80c01a138
>>>>> I0914 12:05:40.462920251   14886 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fd21e0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.463057779   14886 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fd21e0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 563049680, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.463475112   14901 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fca250]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.463502256   14901 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fca250, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 563497963, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.489336198    8291 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fc20c0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.489426090    8291 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fc20c0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 589417977, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.563312610   14886 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fd21e0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.563392795   14886 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fd21e0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 663385808, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.563684056   14901 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fca250]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.563706730   14901 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fca250, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 663702959, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.589633763    8291 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fc20c0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.589719300    8291 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fc20c0, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 689710645, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.663776 14906 single_thread_worker.cpp:84]
>>>>> SingleThreadWorker stop: HeartbeatClient
>>>>>
>>>>> // stop bidirectional stream
>>>>> I0914 12:05:40.663796024    8267 call.c:583]
>>>>> grpc_call_cancel(call=0x1fdb758, reserved=(nil))
>>>>> I0914 12:05:40.664022389    8267 client_channel.c:1300]
>>>>>  OP[client-channel:0x1fdbe10]: [UNCOVERED] CANCEL:"Cancelled"
>>>>> I0914 12:05:40.664053075    8267 http_client_filter.c:435]
>>>>> OP[http-client:0x1fe0900]: [UNCOVERED] CANCEL:"Cancelled"
>>>>> I0914 12:05:40.664066269    8267 connected_channel.c:70]
>>>>> OP[connected:0x1fe0930]: [UNCOVERED] CANCEL:"Cancelled"
>>>>> I0914 12:05:40.664119609    8267 completion_queue.c:564]
>>>>> cq_end_op_for_next(exec_ctx=0x7fffc824c4b0, cc=0x1fd21e0,
>>>>> tag=0x1fdc078, error="No Error", done=0x552280, done_arg=0x1fe1778,
>>>>> storage=0x1fe1780)
>>>>> I0914 12:05:40.664163094   14901 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fca250]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.664193123   14901 completion_queue.c:764]
>>>>> grpc_completion_queue_next(cc=0x1fca250, deadline=gpr_timespec {
>>>>> tv_sec: 1505415940, tv_nsec: 764188067, clock_type: 1 }, reserved=(nil))
>>>>> I0914 12:05:40.664274513   14901 completion_queue.c:564]
>>>>> cq_end_op_for_next(exec_ctx=0x7fa86f7fdb80, cc=0x1fca250,
>>>>> tag=0x7fa87c01d138, error="No Error", done=0x552280,
>>>>> done_arg=0x7fa80c010f58, storage=0x7fa80c010f60)
>>>>> D0914 12:05:40.664341644   14901 call.c:706]
>>>>> get_final_status SVR
>>>>> D0914 12:05:40.664383484   14901 call.c:709]                   1:
>>>>> {"created":"@1505415940.664334949","description":"Error received from
>>>>> peer","file":"src/core/lib/surface/call.c","file_line":955,"grpc_message":"Received
>>>>> RST_STREAM with error code 8","grpc_status":1}
>>>>> I0914 12:05:40.664424456   14901 completion_queue.c:564]
>>>>> cq_end_op_for_next(exec_ctx=0x7fa86f7fdb80, cc=0x1fca250,
>>>>> tag=0x7fa80c005dc0, error="No Error", done=0x552280,
>>>>> done_arg=0x7fa80c010e88, storage=0x7fa80c010e90)
>>>>>
>>>>> I0914 12:05:40.664525623   14901 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fca250]: OP_COMPLETE: tag:0x7fa87c01d138 OK
>>>>> I0914 12:05:40.664557682   14901 call.c:634]
>>>>> grpc_call_cancel_with_status(c=0x7fa80c00f7f8, status=1,
>>>>> description=Cancelled on the server side, reserved=(nil))
>>>>>
>>>>>
>>>>> // start shutting down the CQ
>>>>> I0914 12:05:40.664123966   14886 completion_queue.c:861]
>>>>> RETURN_EVENT[0x1fd21e0]: QUEUE_TIMEOUT
>>>>> I0914 12:05:40.664660 14886 completion_queue_manager.cpp:103] Client
>>>>> CompletionQueue (0x1fd6670) quit.
>>>>> I0914 12:05:40.664697968    8267 completion_queue.c:1086]
>>>>>  grpc_completion_queue_shutdown(cc=0x1fd21e0)
>>>>> I0914 12:05:40.664711547    8267 completion_queue.c:1105]
>>>>>  grpc_completion_queue_destroy(cc=0x1fd21e0)
>>>>> I0914 12:05:40.664717709    8267 completion_queue.c:1086]
>>>>>  grpc_completion_queue_shutdown(cc=0x1fd21e0)
>>>>> E0914 12:05:40.664723704    8267 completion_queue.c:1112]    assertion
>>>>> failed: cq_event_queue_num_items(&cc->data.queue) == 0
>>>>>
>>>>> Hi, all
>>>>>
>>>>> While testing an upgrade of gRPC to version 1.4.2, I encountered an
>>>>> assertion failure during CQ shutdown. Can anyone give me a hint?
>>>>>
>>>>> Thanks,
>>>>> Yihao
>>>>>
>>>>
>>
