On Tue, Sep 9, 2014 at 12:53 AM, Martin Teichmann <
[email protected]> wrote:

>
> Hi Guido, Hi List,
>
>> for t in asyncio.Task.all_tasks(loop):
>>     t.cancel()
>> loop.run_until_complete(asyncio.sleep(0.1))  # Give them a little time to recover
>> loop.close()
>>
>
> That solves my problem. Couldn't we wrap that in a def cancel(self):
> and put it into BaseEventLoop? In that case we should even be able to
> replace the run_until_complete() with a mere self._run_once(); then we
> wouldn't need to wait an extra 0.1 s.
>

I suppose we could do that, but then we'd have to explain the limitations
and repercussions as well. (Also, I don't think such a helper should
include the close() call.)
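
Purely as an illustration (this is not something asyncio provides; I've made
it a free function with a made-up name, and all the caveats above still
apply), such a helper might look roughly like:

    import asyncio

    def cancel_all(loop):
        # Cancel every task running on the given loop, then run the
        # loop briefly so each task can process its CancelledError.
        for t in asyncio.Task.all_tasks(loop):
            t.cancel()
        loop.run_until_complete(asyncio.sleep(0.1))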


> Sure, if tasks start to resist being cancelled, that's a big mess. But
> I do think there are quite a number of programmers out there who have
> withstood the temptation to let tasks resist their cancellation, but
> who would like to have their finalizers called in an orderly fashion.
>

And yet having a try/finally around a yield-from is an easy recipe for
resisting cancellation -- it is all too convenient to put another
yield-from in the finally clause, and then you are requiring multiple trips
through the event loop.
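
For example (a made-up coroutine, in pre-3.5 yield-from style), this task
needs at least two trips through the loop before it is actually done:

    import asyncio

    @asyncio.coroutine
    def worker():
        try:
            yield from asyncio.sleep(3600)  # cancel() raises CancelledError here
        finally:
            # This extra yield-from suspends the task again, so the first
            # CancelledError is not enough; the event loop has to resume
            # the task at least once more before it really finishes.
            yield from asyncio.sleep(0.1)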

Where are the finalizers you need called and what do they do? And why do
you need them called when you're shutting down the process?

For servers a convenient idiom is to call close() on the Server object(s)
returned by create_server() and then wait a certain time for all requests
to be handled. Which reminds me, cancelling *all* tasks can easily cause a
problem if there are tasks waiting for other tasks -- dealing with multiple
simultaneous cancellations is nearly impossible. It's almost always better
to bite down and do the right thing, which is to keep track of the
higher-level activities you have going and come up with a way to shut those
down in an orderly fashion, rather than just shooting down all tasks.
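
Roughly like this (handler_tasks stands for whatever collection of
per-request tasks your application keeps track of; the 5-second timeout is
arbitrary):

    server.close()  # stop accepting new connections
    loop.run_until_complete(server.wait_closed())
    # Give in-flight request handlers some time to finish on their own.
    loop.run_until_complete(asyncio.wait(handler_tasks, timeout=5.0))
    loop.close()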

-- 
--Guido van Rossum (python.org/~guido)
