I saw some reports about KeyboardInterrupt not being handled well by
asyncio, so I tried adding an explicit signal handler with
`loop.add_signal_handler(signal.SIGINT, lambda: loop.stop())` to break
out of the run_forever loop, but that still results in the "Task was
destroyed but it is pending!" warning on exit.
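
For reference, here's roughly what I tried, reduced to a minimal
self-contained sketch (the real server setup is elided; the os.kill
call simulates pressing ^C so the example terminates on its own, and
add_signal_handler is Unix-only):

```python
import asyncio
import os
import signal

loop = asyncio.new_event_loop()

# Replace the default SIGINT behaviour (raising KeyboardInterrupt at an
# arbitrary point) with simply stopping the event loop.
loop.add_signal_handler(signal.SIGINT, loop.stop)

# Simulate a ^C shortly after startup so the sketch exits by itself.
loop.call_later(0.1, os.kill, os.getpid(), signal.SIGINT)

loop.run_forever()          # returns once the handler calls loop.stop()
loop.remove_signal_handler(signal.SIGINT)
loop.close()
print("loop stopped cleanly")
```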

My understanding is that this should avoid any BaseException handling
issues, since it just stops the event loop without interfering with
any running tasks. Is this correct?
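
For what it's worth, stopping the loop alone still leaves the handler
task pending when the loop is closed, which is what triggers the
warning. Here is a minimal sketch (the names are hypothetical, not
from the thread) of draining pending tasks after run_forever()
returns, so nothing is destroyed while still pending:

```python
import asyncio

async def worker():
    # Stands in for a blocked client_connected_cb handler.
    try:
        await asyncio.sleep(3600)
    except asyncio.CancelledError:
        # Per-task cleanup (closing the writer, etc.) would go here.
        raise

loop = asyncio.new_event_loop()
task = loop.create_task(worker())
loop.call_later(0.1, loop.stop)     # stands in for the SIGINT handler

loop.run_forever()

# Cancel whatever is still pending and run the loop until the
# cancellations have actually been delivered and handled.
pending = [t for t in asyncio.all_tasks(loop) if not t.done()]
for t in pending:
    t.cancel()
loop.run_until_complete(asyncio.gather(*pending, return_exceptions=True))
loop.close()
```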

On Tue, May 5, 2015 at 2:57 PM, Guido van Rossum <[email protected]> wrote:
> The problem here is most likely due to the way ^C is handled -- it raises
> KeyboardInterrupt which inherits from BaseException but not from Exception.
> There are some places in the asyncio code that catch only Exception. I think
> this is probably a bug we should fix -- our original reasoning was that we
> should never catch BaseException because it is too severe, but in practice
> many programs recover at some higher level (e.g. the interpreter top level)
> from a BaseException and then you get the behavior you observe.
>
> Possibly you could get a handle on the issue by searching the asyncio code
> base for "except Exception" clauses that match your traceback.
>
> On Tue, May 5, 2015 at 2:23 PM, [email protected]
> <[email protected]> wrote:
>>
>> When trying to shutdown a streams based server I often get a "Task was
>> destroyed but it is pending!" error and haven't been able to find a
>> way to fix it.
>>
>> I've been able to reproduce this using the "TCP echo server using
>> streams" example in the Python docs
>>
>> <https://docs.python.org/3/library/asyncio-stream.html#tcp-echo-server-using-streams>
>> by establishing a connection (nc localhost 8888) and then interrupting
>> the server before sending any data:
>>
>> Python-3.5.0a3$ PYTHONASYNCIODEBUG=1 ./python
>> /tmp/tcp_echo_using_streams.py
>> Serving on ('127.0.0.1', 8888)
>> ^CTask was destroyed but it is pending!
>> source_traceback: Object created at (most recent call last):
>>   File "/tmp/tcp_echo_using_streams.py", line 24, in <module>
>>     loop.run_forever()
>>   File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/base_events.py",
>> line 276, in run_forever
>>     self._run_once()
>>   File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/base_events.py",
>> line 1164, in _run_once
>>     handle._run()
>>   File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/events.py", line
>> 120, in _run
>>     self._callback(*self._args)
>>   File "/home/dcoles/src/Python-3.5.0a3/Lib/asyncio/streams.py", line
>> 227, in connection_made
>>     self._loop.create_task(res)
>> task: <Task pending coro=<handle_echo() done, defined at
>> /tmp/tcp_echo_using_streams.py:3> wait_for=<Future pending
>> cb=[Task._wakeup()] created at
>> /home/dcoles/src/Python-3.5.0a3/Lib/asyncio/streams.py:392> created at
>> /home/dcoles/src/Python-3.5.0a3/Lib/asyncio/streams.py:227>
>>
>> It appears that the issue here is that, since there is still a
>> connected client socket, Server.close() leaves that socket untouched,
>> meaning the client_connected_cb task remains active.
>>
>> PEP-3156 specifically states that Server.wait_closed is "A coroutine
>> that blocks until the service is closed and all accepted requests have
>> been handled", so I'm surprised that wait_closed doesn't block until
>> the connection is closed. Additionally, there doesn't seem to be any
>> way of forcing all connections to close (short of adding this logic to
>> the coroutine).
>>
>> Should Server.wait_closed block if there are remaining connections? Is
>> it possible to force these connections closed?
>>
>> Cheers,
>> David
>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
