Hi Guido,
My understanding is that a wait_for timeout effectively unchains the two
tasks, making it tricky to ensure consistency.
One option would be changing wait_for to always wait for the target task
(cond.wait()) to complete, even on a timeout. This would at least guarantee
that cleanup actions would have consistent ordering.
The downside is that a task that chooses to ignore or excessively delay
cancellation would prevent the calling task from resuming. Perhaps adding a
dont_wait/return_immediately flag to wait_for would help for cases where you
know a Task might ignore cancellation. To me, this behaviour would be less
surprising and would give the caller the decision whether to unchain the
tasks if required.
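As a rough sketch of that first option (a hypothetical helper; wait_for_strict is just my name for it, and I've written it in modern async/await syntax rather than the 3.4-era yield from style):

```python
import asyncio

async def wait_for_strict(aw, timeout):
    """Like asyncio.wait_for, but on timeout it waits for the inner
    task's cancellation to actually complete before raising."""
    task = asyncio.ensure_future(aw)
    done, pending = await asyncio.wait([task], timeout=timeout)
    if pending:
        task.cancel()
        # Block until the task has finished unwinding, so any cleanup it
        # does (e.g. Condition.wait() re-acquiring its lock) happens
        # before the caller resumes.
        await asyncio.wait([task])
        raise asyncio.TimeoutError()
    return task.result()
```

With something like this, the release() in the original test program below would no longer fail, because wait() has re-acquired the lock by the time the caller sees the TimeoutError.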
Sadly, I can't think of any way of ensuring ordering other than having
something the calling task can yield on.
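That's essentially the workaround from my earlier message below, restated here in modern async/await syntax with a timer cleanup added (names are illustrative):

```python
import asyncio

async def cond_wait_timeout(condition, timeout):
    # The caller yields on the wait task itself, so it cannot resume
    # before wait()'s cancellation cleanup (re-acquiring the lock) runs.
    wait_task = asyncio.ensure_future(condition.wait())
    timer = asyncio.get_running_loop().call_later(timeout, wait_task.cancel)
    try:
        await wait_task
        return True   # notified
    except asyncio.CancelledError:
        return False  # timed out
    finally:
        timer.cancel()  # avoid a stray cancel if we were notified

async def main():
    cond = asyncio.Condition()
    async with cond:
        # Nothing ever notifies, so this times out; the async-with exit
        # releases the lock without error because wait() re-acquired it.
        return await cond_wait_timeout(cond, 0.1)
```

Here asyncio.run(main()) returns False, and no RuntimeError is raised on release.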
Cheers,
David
On Friday, 21 November 2014 11:34:59 UTC+9, Guido van Rossum wrote:
>
> Hi David,
>
> I've confirmed the issue and I agree with your diagnosis. There are two
> tasks, one representing foo() and one representing cond.wait(). When the
> timeout happens, both become runnable. Due to the way scheduling works (I
> haven't carefully analyzed this yet) the task representing foo() is resumed
> first, and fails because it is supposed to be the other task's job to
> re-acquire the lock.
>
> You can see this more clearly by surrounding the release() call in your
> test program as well as the acquire() call in locks.py with something like
> the following:
>
> print('before', __file__)
> try:
>     <the call>
> finally:
>     print('after', __file__)
>
> If you print different strings in each case you'll see that the test file
> runs before locks.py.
>
> I wonder if there's a way to influence the order in which the tasks are
> resumed...
>
> On Thu, Nov 20, 2014 at 12:37 AM, David Coles <[email protected]> wrote:
>
>> Hi,
>>
>> I've been experimenting with asyncio in Python 3.4.2 and run into some
>> interesting behaviour when attempting to use asyncio.wait_for with a
>> Condition:
>>
>> import asyncio
>>
>> loop = asyncio.get_event_loop()
>> cond = asyncio.Condition()
>>
>> @asyncio.coroutine
>> def foo():
>>     #with (yield from cond):
>>     yield from cond.acquire()
>>     try:
>>         try:
>>             # Wait for condition with timeout
>>             yield from asyncio.wait_for(cond.wait(), 1)
>>         except asyncio.TimeoutError:
>>             print("Timeout")
>>     finally:
>>         # XXX: Raises RuntimeError: Lock is not acquired.
>>         cond.release()
>>
>> loop.run_until_complete(foo())
>>
>>
>> Taking a look around with a debugger I can see that the cond.wait()
>> coroutine task receives a CancelledError as expected and this attempts to
>> reacquire the lock associated with the condition (yield from
>> self.acquire()). Unfortunately because asyncio.wait_for(...) immediately
>> raises a TimeoutError, it's too late: we've already called
>> cond.release() in our main task, causing a runtime error.
>>
>> My current workaround is to roll my own condition timeout-cancellation
>> logic, but it really seems like wait_for and Condition should be able to
>> play nicer together.
>>
>> @asyncio.coroutine
>> def cond_wait_timeout(condition, timeout):
>>     wait_task = asyncio.async(condition.wait())
>>     loop.call_later(timeout, wait_task.cancel)
>>     try:
>>         yield from wait_task
>>         return True
>>     except asyncio.CancelledError:
>>         return False
>>
>>
>> Any thoughts?
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>