Marco Paolini added the comment:
Sorry for keeping this alive.
Take a look at the `wait_for.py` just submitted in the unrelated #22448: no
strong refs to the tasks are kept. Tasks remain alive only because they are
timers and the event loop keeps a strong reference to them.
Do you think my proposed patch is
Guido van Rossum added the comment:
I'm not sure how that wait_for.py example from issue2116 relates to this issue
-- it seems to demonstrate the opposite problem (tasks are kept alive even
though they are cancelled).
Then again I admit I haven't looked deeply into the example (though I am
Guido van Rossum added the comment:
(Whoops meant to link to issue22448.)
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21163
___
STINNER Victor added the comment:
I don't understand how keeping a strong reference would fix anything. You
only provided one example (async-gc-bug.py) which uses Queue objects but
keeps weak references to them. Keeping strong references to tasks is not the
right fix. You must keep strong
Marco Paolini added the comment:
> I don't understand how keeping a strong reference would fix anything. You
> only provided one example (async-gc-bug.py) which uses Queue objects but
> keeps weak references to them. Keeping strong references to tasks is not
> the right fix. You must keep strong
Marco Paolini added the comment:
I finally wrapped my head around this. I wrote a (simpler) script to get a
better picture.
What happens:
When a consumer task is first instantiated, the loop holds a strong reference
to it (in `_ready`).
Later on, as the loop starts, the consumer task
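The scenario described here can be sketched with a short, self-contained demo. This is a reconstruction, not Marco's actual script: the `consumer` coroutine and the queue setup are illustrative, but the mechanism (only a weak reference to a task that is suspended on a queue) matches the async-gc-bug.py description above.

```python
import asyncio
import gc
import weakref

async def consumer(queue):
    # Suspends on a Future that only the queue holds.
    await queue.get()

async def main():
    queue = asyncio.Queue()
    # Keep only a weak reference to the task, as in async-gc-bug.py.
    ref = weakref.ref(asyncio.ensure_future(consumer(queue)))
    await asyncio.sleep(0)  # let the consumer start and block on get()
    del queue               # drop the last external reference
    gc.collect()            # the task and queue now form an unreachable cycle
    return ref() is None

print(asyncio.run(main()))
```

On a modern CPython this should print `True`: once the queue is dropped, nothing outside the task/queue reference cycle keeps the pending task alive, so the garbage collector destroys it while it is still pending (and, with the warning added later in this issue, logs "Task was destroyed but it is pending!").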
Guido van Rossum added the comment:
I'm all in favor of documenting that you must keep a strong reference to a
task that you want to keep alive. I'm not keen on automatically keeping all
tasks alive; that might exacerbate leaks (which are by definition hard to
find) in existing programs.
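For reference, the documentation-only fix Guido favors amounts to a pattern like the following sketch. The `spawn` helper and the `background_tasks` name are mine for illustration; they do not come from any patch on this issue.

```python
import asyncio

# Hold strong references to in-flight tasks so the event loop's weak
# references are not the only thing keeping them alive.
background_tasks = set()

def spawn(coro):
    task = asyncio.ensure_future(coro)
    background_tasks.add(task)
    # Drop the strong reference once the task finishes, so completed
    # tasks do not accumulate and leak.
    task.add_done_callback(background_tasks.discard)
    return task
```

A caller writes `spawn(worker())` instead of a bare `ensure_future(worker())`: the set keeps each pending task reachable, and the done callback prevents the set itself from becoming the very leak Guido is worried about.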
Marco Paolini added the comment:
Asking the user to manage strong refs is just passing the potential
leak issue outside of the standard library. It doesn't really solve anything.
If the user gets the strong refs wrong he can either lose tasks or
leak memory.
If the standard library gets it
Guido van Rossum added the comment:
So you are changing your mind and withdrawing your option #1.
I don't have the time to really dig deeply into the example app and what's
going on. If you want to help, you can try to come up with a patch (and it
should have good unit tests).
I'll be on
Marco Paolini added the comment:
> So you are changing your mind and withdrawing your option #1.
I think option #1 (tell users to keep strong refs to tasks) is
OK but option #2 is better.
Yes, I changed my mind ;)
Marco Paolini added the comment:
Submitted a first stab at #2. Let me know what you think.
If this works we'll have to remove the test_gc_pending test and then maybe even
the code that now logs errors when a pending task is gc'ed
--
Added file:
Roundup Robot added the comment:
New changeset 6d5a76214166 by Victor Stinner in branch '3.4':
Issue #21163, asyncio: Ignore destroy pending task warnings for private tasks
http://hg.python.org/cpython/rev/6d5a76214166
New changeset fbd3e9f635b6 by Victor Stinner in branch 'default':
(Merge
Roundup Robot added the comment:
New changeset e4fe6706b7b4 by Victor Stinner in branch '3.4':
Issue #21163: Fix destroy pending task warning in test_wait_errors()
http://hg.python.org/cpython/rev/e4fe6706b7b4
New changeset a627b23f57d4 by Victor Stinner in branch 'default':
(Merge 3.4) Issue
STINNER Victor added the comment:
Ok, I fixed the last warnings emitted in unit tests run in debug mode. I'm
closing the issue.
--
resolution: -> fixed
status: open -> closed
Roundup Robot added the comment:
New changeset f13cde63ca73 by Victor Stinner in branch '3.4':
asyncio: sync with Tulip
http://hg.python.org/cpython/rev/f13cde63ca73
New changeset a67adfaf670b by Victor Stinner in branch 'default':
(Merge 3.4) asyncio: sync with Tulip
STINNER Victor added the comment:
Hum, dont_log_pending.patch is not correct for wait(): wait() returns (done,
pending), where pending is a set of pending tasks. So it's still possible that
pending tasks are destroyed while they are still pending, after the end
of wait(). The log should
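Victor's point about wait() can be illustrated with a small sketch (the sleep durations and timeout are arbitrary values chosen for the demo):

```python
import asyncio

async def main():
    tasks = [asyncio.ensure_future(asyncio.sleep(d)) for d in (0, 60)]
    done, pending = await asyncio.wait(tasks, timeout=0.1)
    # wait() hands the still-pending tasks back to the caller; if the
    # caller just drops them, they can later be destroyed while still
    # pending, which is exactly what the warning is about.
    for task in pending:
        task.cancel()
    return len(done), len(pending)

print(asyncio.run(main()))  # prints (1, 1)
```

After the timeout, `sleep(0)` has finished and `sleep(60)` has not, so the caller, not wait(), owns the one pending task and must cancel it (or keep awaiting it) explicitly.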
Roundup Robot added the comment:
New changeset 13e78b9cf290 by Victor Stinner in branch '3.4':
Issue #21163: BaseEventLoop.run_until_complete() and test_utils.run_briefly()
http://hg.python.org/cpython/rev/13e78b9cf290
New changeset 2d0fa8f383c8 by Victor Stinner in branch 'default':
(Merge
Roundup Robot added the comment:
New changeset 1088023d971c by Victor Stinner in branch '3.4':
Issue #21163, asyncio: Fix some Task was destroyed but it is pending! logs in
tests
http://hg.python.org/cpython/rev/1088023d971c
New changeset 7877aab90c61 by Victor Stinner in branch 'default':
Roundup Robot added the comment:
New changeset e9150fdf068a by Victor Stinner in branch '3.4':
asyncio: sync with Tulip
http://hg.python.org/cpython/rev/e9150fdf068a
New changeset d92dc4462d26 by Victor Stinner in branch 'default':
(Merge 3.4) asyncio: sync with Tulip
Roundup Robot added the comment:
New changeset 4e4c6e2ed0c5 by Victor Stinner in branch '3.4':
Issue #21163: Fix one more Task was destroyed but it is pending! log in tests
http://hg.python.org/cpython/rev/4e4c6e2ed0c5
New changeset 24282c6f6019 by Victor Stinner in branch 'default':
(Merge
STINNER Victor added the comment:
I fixed the first "Task was destroyed but it is pending!" messages when the
fix was simple.
Attached dont_log_pending.patch fixes remaining messages when running
test_asyncio. I'm not sure yet that this patch is the best approach to fix the
issue.
Modified
Guido van Rossum added the comment:
Patch looks good. Go ahead.
--
nosy: +Guido.van.Rossum
Richard Kiss added the comment:
I reread more carefully, and I am in agreement now that I better understand
what's going on. Thanks for your patience.
--
nosy: +Richard.Kiss
STINNER Victor added the comment:
I committed my change in Tulip (78dc74d4e8e6), Python 3.4 and 3.5:
changeset: 91359:978525270264
branch: 3.4
parent: 91357:a941bb617c2a
user:Victor Stinner victor.stin...@gmail.com
date:Tue Jun 24 22:37:53 2014 +0200
files:
STINNER Victor added the comment:
The new check emits a lot of "Task was destroyed but it is pending!" messages
when running test_asyncio. I keep the issue open to remind me that I still
have to fix them.
STINNER Victor added the comment:
@Guido, @Yury: What do you think of log_destroyed_pending_task.patch? Does it
sound correct?
Or would you prefer to automatically keep a strong reference somewhere and then
break the strong reference when the task is done? Such an approach sounds to be
error
Yury Selivanov added the comment:
> @Guido, @Yury: What do you think of log_destroyed_pending_task.patch? Does
> it sound correct?
Premature task garbage collection is indeed hard to debug. But at least, with
your patch, one gets an exception and has a chance to track the bug down. So
I'm +1 for
STINNER Victor added the comment:
> The more I use asyncio, the more I am convinced that the correct fix is to
> keep a strong reference to a pending task (perhaps in a set in the
> eventloop) until it starts.
The problem is not the task, read again my message. The problem is that nobody
holds
Changes by STINNER Victor victor.stin...@gmail.com:
--
title: asyncio task possibly incorrectly garbage collected -> asyncio doesn't
warn if a task is destroyed during its execution
Richard Kiss added the comment:
The more I use asyncio, the more I am convinced that the correct fix is to keep
a strong reference to a pending task (perhaps in a set in the eventloop) until
it starts.
Without realizing it, I implicitly made this assumption when I began working on
my asyncio