[issue39116] StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop

2020-10-21 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Well, this is unexpected: the same code running on Linux is throwing mysterious 
GeneratorExit-related exceptions as well. I'm not sure whether this is the same 
problem, but this one has a clearer traceback. I will attach the full error 
log, but the most pertinent part seems to be this:


During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.8/contextlib.py", line 662, in __aexit__
cb_suppress = await cb(*exc_details)
  File "/usr/lib/python3.8/contextlib.py", line 189, in __aexit__
await self.gen.athrow(typ, value, traceback)
  File "/opt/prettysocks/prettysocks.py", line 332, in closing_writer
await writer.wait_closed()
  File "/usr/lib/python3.8/asyncio/streams.py", line 376, in wait_closed
await self._protocol._get_close_waiter(self)
RuntimeError: cannot reuse already awaited coroutine


closing_writer() is an async context manager that calls close() and awaits 
wait_closed() on the given StreamWriter. So it looks like wait_closed() can 
occasionally reuse a coroutine?

--
Added file: https://bugs.python.org/file49534/error_log_on_linux_python38.txt

___
Python tracker 
<https://bugs.python.org/issue39116>
___



[issue39116] StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop

2020-10-20 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I have attached a script that should be able to reproduce this problem. It's 
not a minimal reproduction, but it should be easy enough to trigger.

The script is a SOCKS5 proxy server listening on localhost:1080. In its current 
form it does not need any external dependencies. Run it on Windows 10 + Python 
3.9, set a browser to use the proxy server, and browse a little bit; it should 
soon start printing mysterious errors involving GeneratorExit.

--
Added file: https://bugs.python.org/file49532/prettysocks.py

___
Python tracker 
<https://bugs.python.org/issue39116>
___



[issue39116] StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop

2020-10-19 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
versions: +Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39116>
___



[issue39116] StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop

2020-10-19 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

This problem still exists on Python 3.9 and latest Windows 10.

I tried to catch the GeneratorExit and turn it into a normal Exception, and 
things only got weirder from there. Often, several lines later, another await 
statement would raise another GeneratorExit, such as writer.write() or even 
asyncio.sleep(). Whether or not I catch the additional GeneratorExit, once 
execution leaves this coroutine, a RuntimeError('coroutine ignored 
GeneratorExit') is raised. And no matter what I do with that RuntimeError, the 
outermost coroutine's Task always produces a 'Task was destroyed but it is 
pending!' error message.

Taking a step back from this specific problem: does a "casual" user of asyncio 
need to worry about handling GeneratorExits? Can I assume that I should not see 
GeneratorExits in user code?

--

___
Python tracker 
<https://bugs.python.org/issue39116>
___



[issue39116] StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop

2019-12-21 Thread twisteroid ambassador


New submission from twisteroid ambassador :

I have been getting these strange exceptions since Python 3.8 on my Windows 10 
machine. The external symptoms are many errors like "RuntimeError: aclose(): 
asynchronous generator is already running" and "Task was destroyed but it is 
pending!".

By adding try..except..logging around my code, I found that my StreamReaders 
would raise GeneratorExit on readexactly(). Digging deeper, it seems like the 
following line in StreamReader._wait_for_data():

await self._waiter

would raise a GeneratorExit.

There are only two other methods on StreamReader that actually do anything to 
_waiter, set_exception() and _wakeup_waiter(), but neither of these methods 
was called before the GeneratorExit was raised. In fact, both of these methods 
set self._waiter to None, so normally after _wait_for_data() does "await 
self._waiter", self._waiter is None. However, after the GeneratorExit is 
raised, I can see that self._waiter is not None. So it seems the GeneratorExit 
came from nowhere.

I have not been able to reproduce this behavior in other code. This is with 
Python 3.8.1 on latest Windows 10 1909, using ProactorEventLoop. I don't 
remember seeing this ever on Python 3.7.

--
components: asyncio
messages: 358774
nosy: asvetlov, twisteroid ambassador, yselivanov
priority: normal
severity: normal
status: open
title: StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop
type: behavior
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue39116>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2019-05-22 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

With regards to the failing test, it looks like the test basically boils down 
to testing whether loop.getaddrinfo('fe80::1%1', 80, type=socket.SOCK_STREAM) 
returns (, , *, *, ('fe80::1', 80, 0, 1)). 
This feels like a dangerous assumption to make, since it's tied to the 
operating system's behavior. Maybe AIX's getaddrinfo() in fact does not resolve 
scoped addresses correctly; maybe it only resolves scope ids correctly for real 
addresses that actually exist on the network; maybe AIX assigns scope ids 
differently and does not use small integers; etc.
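Spelled out, the assumption under test is roughly the following. This reflects glibc behavior on Linux; other platforms, AIX included, may well differ, which is exactly the point:

```python
import socket

# Resolve a scoped link-local literal; AI_NUMERICHOST forbids DNS lookups,
# so this only exercises the C library's address parsing.
infos = socket.getaddrinfo('fe80::1%1', 80,
                           family=socket.AF_INET6,
                           type=socket.SOCK_STREAM,
                           flags=socket.AI_NUMERICHOST)
family, socktype, proto, canonname, sockaddr = infos[0]
# For AF_INET6, sockaddr is a 4-tuple: (host, port, flowinfo, scope_id)
```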

--

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2019-05-22 Thread twisteroid ambassador

twisteroid ambassador  added the comment:

AFAIK the reason why a scope id is required for IPv6 is that every IPv6
interface has its own link-local address, and all these addresses are in
the same subnet, so without an additional scope id there’s no way to tell
from which interface an address can be reached. IPv4 does not have this
problem because IPv4 interfaces usually don’t use link-local addresses.

Michael Felt 于2019年5月22日 周三18:08写道:

>
> Michael Felt  added the comment:
>
> On 22/05/2019 10:43, Michael Felt wrote:
> > 'fe80::1%1' <> 'fe80::1' - ... I am not 'experienced' with IPv6 and
> scope.
>
> >From what I have just read (again) - scope seems to be a way to indicate
> the interface used (e.g., eth0, or enp0s25) as a "number".
>
> Further, getsockname() (and getpeername()) seem to be more for after a
> fork(), or perhaps after a pthread_create(). What remains unclear is why
> would I ever care what the scopeid is.  Is it because it is "shiney",
> does it add security (if so, how)?
>
> And, as this has been added - what breaks in Python when "scopeid" is
> not available?
>
> I am thinking, if adding a scopeid is a way to assign an IPv6 address to
> an interface - what is to prevent abuse? Why would I even want the same
> (link-local IP address on eth0 and eth1 at the same time? Assuming that
> it what it is making possible - the same IPv6/64 address on multiple
> interfaces and use scope ID to be more selective/aware. It this an
> alternative way to multiplex interfaces - now in the IP layer rather
> than in the LAN layer?
>
> If I understand why this is needed I may be able to come up with a way
> to "get it working" for the Python model of interfaces - although,
> probably not "fast".
>
> Regards,
>
> Michael
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue35545>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue36636] Inner exception is not being raised using asyncio.gather

2019-04-15 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

The difference is because you grabbed and print()ed the exception yourself in 
Script 2, while in Script 1 you let Python's built-in unhandled exception 
handler (sys.excepthook) print the traceback for you.

If you want a traceback, then you need to print it yourself. Try something 
along the lines of this:

traceback.print_tb(result.__traceback__)

or:

traceback.print_exception(type(result), result, result.__traceback__)

or if you use the logging module:

logging.error('Unexpected exception', exc_info=result)


reference: 
https://stackoverflow.com/questions/11414894/extract-traceback-info-from-an-exception-object
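Putting those pieces together, a self-contained sketch (the coroutine names here are illustrative, not from the original scripts):

```python
import asyncio
import traceback

async def boom():
    raise ValueError('inner failure')

async def main():
    results = await asyncio.gather(boom(), return_exceptions=True)
    for result in results:
        if isinstance(result, BaseException):
            # Reconstruct the full traceback from the exception object.
            return ''.join(traceback.format_exception(
                type(result), result, result.__traceback__))

tb_text = asyncio.run(main())
```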

------
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue36636>
___



[issue32776] asyncio SIGCHLD scalability problems

2019-04-04 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

The child watchers are documented now, see here: 
https://docs.python.org/3/library/asyncio-policy.html#process-watchers

Sounds like FastChildWatcher 
https://docs.python.org/3/library/asyncio-policy.html#asyncio.FastChildWatcher 
is exactly what you need if you stick with the stock event loop.

--
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue32776>
___



[issue36069] asyncio: create_connection cannot handle IPv6 link-local addresses anymore (linux)

2019-02-21 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Duplicate of issue35545, I believe.

--
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue36069>
___



[issue30782] Allow limiting the number of concurrent tasks in asyncio.as_completed

2019-02-21 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I feel like once you lay out all the requirements: taking futures from an 
(async) generator, limiting the number of concurrent tasks, delivering 
completed tasks to one consumer "as completed", and an implicit requirement 
that back pressure from the consumer be handled (i.e. if whoever is iterating 
through "async for fut in as_completed(...)" is too slow, the tasks should 
pause until it catches up), there are too many moving parts, and this should 
really be implemented using several tasks.

So a straightforward implementation may look like this:

import asyncio

async def better_as_completed(futs, limit):
    MAX_DONE_FUTS_HELD = 10  # or some small number

    sem = asyncio.Semaphore(limit)
    done_q = asyncio.Queue(MAX_DONE_FUTS_HELD)

    async def run_futs():
        async for fut in futs:
            await sem.acquire()
            asyncio.create_task(run_one_fut(fut))
        # Reacquire every slot so the sentinel is only queued after all
        # dispatched tasks have delivered their results.
        for _ in range(limit):
            await sem.acquire()
        await done_q.put(None)

    async def run_one_fut(fut):
        try:
            fut = asyncio.ensure_future(fut)
            await asyncio.wait((fut,))
            await done_q.put(fut)
        finally:
            sem.release()

    asyncio.create_task(run_futs())

    while True:
        next_fut = await done_q.get()
        if next_fut is None:
            return
        yield next_fut


Add proper handling for cancellation and exceptions and whatnot, and it may 
become a usable implementation.

And no, I do not feel like this should be added to asyncio.as_completed.

------
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue30782>
___



[issue35945] Cannot distinguish between subtask cancellation and running task cancellation

2019-02-19 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I wrote a recipe on this idea:

https://gist.github.com/twisteroidambassador/f35c7b17d4493d492fe36ab3e5c92202

Untested, feel free to use it at your own risk.

--

___
Python tracker 
<https://bugs.python.org/issue35945>
___



[issue35945] Cannot distinguish between subtask cancellation and running task cancellation

2019-02-18 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

There is a way to distinguish whether a task is being cancelled from the 
"inside" or "outside", like this:

async def task1func():
    task2 = asyncio.create_task(task2func())
    try:
        await asyncio.wait((task2,))
    except asyncio.CancelledError:
        print('task1 is being cancelled from outside')
        # Optionally cancel task2 here, since asyncio.wait() shields task2
        # from being cancelled from the outside
        raise
    assert task2.done()
    if task2.cancelled():
        print('task2 was cancelled')
        # Note that task1 is not cancelled here, so if you want to cancel
        # task1 as well, do this:
        # raise asyncio.CancelledError
    else:
        task2_result = task2.result()
        # Use your result here
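A runnable version of the same pattern, with a stand-in task2func() (the names and return value are illustrative):

```python
import asyncio

async def task2func():
    await asyncio.sleep(0.01)
    return 42

async def task1func():
    task2 = asyncio.create_task(task2func())
    try:
        # asyncio.wait() shields task2 from task1's own cancellation.
        await asyncio.wait({task2})
    except asyncio.CancelledError:
        task2.cancel()  # optionally propagate the cancellation
        raise
    if task2.cancelled():
        return 'task2 was cancelled'
    return task2.result()
```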

--
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue35945>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2019-01-04 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Oh wait, there's also this in asyncio docs for loop.sock_connect:

Changed in version 3.5.2: address no longer needs to be resolved. sock_connect 
will try to check if the address is already resolved by calling 
socket.inet_pton(). If not, loop.getaddrinfo() will be used to resolve the 
address.

https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.sock_connect

So this is where the current bug comes from! My PR 11403 basically undid this 
change.

My proposal, as is probably obvious, is to undo this change and insist on 
passing only resolved address tuples to loop.sock_connect(). My argument is 
that this feature never worked properly: 

* As mentioned in the previous message, this does not work on ProactorEventLoop.
* On SelectorEventLoop, the resolution done by loop.sock_connect() is pretty 
weak anyway: it only takes the first resolved address, unlike 
loop.create_connection(), which actually tries all the resolved addresses until 
one of them successfully connects.

Users should use create_connection() or open_connection() if they want to avoid 
the complexities of address resolution. If they are reaching for low-level APIs 
like loop.sock_connect(), they should also handle loop.getaddrinfo() themselves.
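Under that proposal, a low-level caller would do the resolution step explicitly, along these lines (a sketch of the intended usage, not existing asyncio code):

```python
import asyncio
import socket

async def connect_resolved(host, port):
    loop = asyncio.get_running_loop()
    # Step 1: resolve host + port into concrete address tuples.
    infos = await loop.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    family, socktype, proto, _, addr = infos[0]
    # Step 2: hand sock_connect() a fully resolved address tuple only.
    sock = socket.socket(family, socktype, proto)
    sock.setblocking(False)
    await loop.sock_connect(sock, addr)
    return sock
```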

--

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2019-01-04 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I just noticed that in the socket module, an AF_INET address tuple is allowed 
to have an unresolved host name. Quote:

A pair (host, port) is used for the AF_INET address family, where host is a 
string representing either a hostname in Internet domain notation like 
'daring.cwi.nl' or an IPv4 address like '100.50.200.5', and port is an integer.

https://docs.python.org/3/library/socket.html#socket-families

Passing a tuple of (hostname, port) to socket.connect() successfully connects 
the socket (tested on Windows). Since the C struct sockaddr_in does not support 
hostnames, socket.connect obviously does resolution at some point, but its 
implementation is in C, and I haven't looked into it.

BaseSelectorEventLoop.sock_connect() calls socket.connect() directly, therefore 
it also supports passing in a tuple of (hostname, port). I just tested 
ProactorEventLoop.sock_connect() on 3.7.1 on Windows, and it does not support 
hostnames, raising OSError: [WinError 10022] An invalid argument was supplied.

I personally believe it's not a good idea to allow hostnames in address tuples 
and in sock.connect(). However, the socket module tries pretty hard to accept 
basically any (host, port) tuple as an address tuple, whether host is an IPv4 
address, IPv6 address or host name, so that's probably not going to change.
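The behavior is easy to check directly (assuming 'localhost' resolves to 127.0.0.1 on the machine):

```python
import socket

# Throwaway listener on an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# A hostname, not an IP literal: connect() resolves it internally.
client.connect(('localhost', port))
```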

--

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2019-01-03 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Hi Emmanuel,

Are you referring to my PR 11403? I don't see where IPv6 uses separate 
parameters.

--

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2019-01-02 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
pull_requests: +10786, 10787, 10788, 10789

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue33678] selector_events.BaseSelectorEventLoop.sock_connect should preserve socket type

2019-01-02 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
pull_requests: +10790, 10791
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue33678>
___






[issue33678] selector_events.BaseSelectorEventLoop.sock_connect should preserve socket type

2018-12-23 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Looks like this bug is also caused by using _ensure_resolved() more than once 
for a given host + port, so it can probably be fixed together with 
https://bugs.python.org/issue35545 .

Masking sock.type should not be necessary anymore, since 
https://bugs.python.org/issue32331 fixed it.

--
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue33678>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2018-12-21 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Also, I believe it's a good idea to change the arguments of _ensure_resolved() 
from (address, *, ...) to (host, port, *, ...), and to go through all its 
usages, making sure we're not mixing host + port with address tuples anywhere 
in asyncio.

--

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue35545] asyncio.base_events.create_connection doesn't handle scoped IPv6 addresses

2018-12-21 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I think the root cause of this bug is a bit of confusion.

The "customer-facing" asyncio API, create_connection(), takes two arguments: 
host and port. The lower-level APIs that actually deal with connecting sockets, 
socket.connect() and loop.sock_connect(), take one argument: the address 
tuple. These are *not the same thing*, despite an IPv4 address tuple having two 
elements (host, port), and they must not be mixed.

_ensure_resolved() is the function responsible for turning host + port into an 
address tuple, and it does the right thing, turning 
host="fe80::1%lo",port=12345 into ('fe80::1', 12345, 0, 1) correctly. The 
mistake is taking the address tuple and passing it through _ensure_resolved() 
again, since that's not the correct input type for it: the only correct input 
type is host + port.
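The distinction can be made concrete (Linux example; 'lo' is the loopback interface there):

```python
import socket

# Input: host + port, the form create_connection() accepts.
host, port = 'fe80::1%lo', 12345

# Output: a resolved AF_INET6 address tuple (host, port, flowinfo, scope_id),
# the only form socket.connect()/loop.sock_connect() should receive.
infos = socket.getaddrinfo(host, port,
                           family=socket.AF_INET6,
                           type=socket.SOCK_STREAM,
                           flags=socket.AI_NUMERICHOST)
sockaddr = infos[0][4]
```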

So I think the appropriate fix for this bug is to make sure _ensure_resolved() 
is only called once. In particular, BaseSelectorEventLoop.sock_connect() 
https://github.com/python/cpython/blob/3bc0ebab17bf5a2c29d2214743c82034f82e6573/Lib/asyncio/selector_events.py#L458
should not call _ensure_resolved(). It might be a good idea to add some 
comments clarifying that sock_connect() takes an address tuple argument, not 
host + port, and likewise for sock_connect() on each event loop implementation.

--
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue35545>
___



[issue35302] create_connection with local_addr misses valid socket bindings

2018-12-20 Thread twisteroid ambassador

twisteroid ambassador  added the comment:

I don't have a Mac, so I have not tested Ronald's workaround. Assuming it 
works, we will have to either i) implement platform-specific behavior and apply 
IPV6_V6ONLY only on macOS for each AF_INET6 socket created, or ii) apply it to 
all AF_INET6 sockets on all platforms, ideally after testing the option on all 
these platforms to make sure it doesn't have any undesirable side effects.

Linux's man page of ipv6 (http://man7.org/linux/man-pages/man7/ipv6.7.html ) 
has this to say about IPV6_V6ONLY:


If this flag is set to true (nonzero), then the socket is re‐
stricted to sending and receiving IPv6 packets only.  In this
case, an IPv4 and an IPv6 application can bind to a single
port at the same time.

If this flag is set to false (zero), then the socket can be
used to send and receive packets to and from an IPv6 address
or an IPv4-mapped IPv6 address.


So setting IPV6_V6ONLY might break some use cases? I have no idea how prevalent 
that may be.
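Setting the option on a single socket is straightforward; the open question is only where in asyncio it should happen:

```python
import socket

sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# Restrict this socket to IPv6 traffic only (no IPv4-mapped addresses).
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
```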

The upside of this solution, as well as of the second suggestion in Neil's OP 
(filter out local addrinfos with mismatching family), is that they should not 
increase connect time in normal cases. My solution (for which I have already 
submitted a PR) probably incurs a negligible increase in connection time and 
resource usage, because a fresh socket object is created for each pair of 
remote and local addrinfos.
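The pairing approach can be sketched synchronously like this (an illustrative helper, not the actual patch):

```python
import socket

def connect_pairs(remote_infos, local_infos):
    """Try every (remote, local) addrinfo pair, each on a fresh socket."""
    last_exc = None
    for rfamily, rtype, rproto, _, raddr in remote_infos:
        for lfamily, _lt, _lp, _lc, laddr in local_infos:
            if lfamily != rfamily:
                continue  # can't bind an AF_INET socket to an AF_INET6 addr
            sock = socket.socket(rfamily, rtype, rproto)
            try:
                sock.bind(laddr)
                sock.connect(raddr)
                return sock
            except OSError as exc:
                sock.close()
                last_exc = exc
    raise last_exc if last_exc else OSError('no matching address family pairs')
```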

--

___
Python tracker 
<https://bugs.python.org/issue35302>
___



[issue35302] create_connection with local_addr misses valid socket bindings

2018-12-19 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
keywords: +patch
pull_requests: +10471
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35302>
___



[issue35302] create_connection with local_addr misses valid socket bindings

2018-12-19 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

IMO macOS is at fault here, for even allowing an IPv6 socket to bind to an IPv4 
address. ;-)

I have given some thought about this issue when writing my happy eyeballs 
library. My current solution is closest to Neil's first suggestion, i.e. each 
pair of remote addrinfo and local addrinfo is tried in a connection attempt.

--
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue35302>
___



[issue34769] _asyncgen_finalizer_hook running in wrong thread

2018-10-10 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
pull_requests: +9174

___
Python tracker 
<https://bugs.python.org/issue34769>
___



[issue34769] _asyncgen_finalizer_hook running in wrong thread

2018-10-06 Thread twisteroid ambassador

twisteroid ambassador  added the comment:

I’m now convinced that the bug we’re fixing and the original bug with debug 
mode off are two separate bugs. With the fix in place and debug mode off, I’m 
still seeing the original buggy behavior. Bummer.


In my actual program, I have an async generator that spawns two tasks. In the 
finally clause I cancel and wait for them, then check and log any exceptions. 
The program ran on Python 3.7. The symptom of the original bug is occasional 
“Task exception was never retrieved” log entries about one of the spawned 
tasks. After I patched 3.7 with the fix, the symptom remains, so the fix does 
not actually fix the original bug.

Running the same program on master, there are additional error and warning 
messages about open stream objects being garbage collected, unclosed sockets, 
etc. Are these logging messages new to 3.8? If not, perhaps 3.8 introduced 
additional bugs?

--

___
Python tracker 
<https://bugs.python.org/issue34769>
___



[issue34769] _asyncgen_finalizer_hook running in wrong thread

2018-10-05 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
keywords: +patch
pull_requests: +9099
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue34769>
___



[issue34769] _asyncgen_finalizer_hook running in wrong thread

2018-10-05 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I'll open a PR with your diff soon, but I don't have a reliable unit test yet. 
Also, it does not seem to fix the old problem with debug mode off. :-( I had 
hoped that the problem with debug mode off was nothing more than 
_asyncgen_finalizer_hook not running reliably each time, but that doesn't seem 
to be the case.

--

___
Python tracker 
<https://bugs.python.org/issue34769>
___



[issue34769] _asyncgen_finalizer_hook running in wrong thread

2018-10-01 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I have finally managed to reproduce this one reliably. The issue happens when 
i) async generators are not finalized immediately and must be garbage collected 
later, and ii) the garbage collector happens to run in a different thread than 
the one running the event loop. (Obviously, if there is more than one Python 
thread, gc will eventually run in those other threads, causing problems.)

I have attached a script reproducing the problem. I tried several ways of using 
async generators (the use_agen_*() coroutines), and the only way to make them 
not finalize immediately is use_agen_anext_separate_tasks(), which is the 
pattern used in my Happy Eyeballs library.

--
Added file: https://bugs.python.org/file47838/asyncgen_test.py

___
Python tracker 
<https://bugs.python.org/issue34769>
___



[issue34769] _asyncgen_finalizer_hook running in wrong thread

2018-09-22 Thread twisteroid ambassador


New submission from twisteroid ambassador :

When testing my happy eyeballs library, I occasionally run into issues with 
async generators seemingly not finalizing. After setting loop.set_debug(True), 
I have been seeing log entries like these:


Exception ignored in: 
Traceback (most recent call last):
  File "/opt/Python3.7.0/lib/python3.7/asyncio/base_events.py", line 466, in 
_asyncgen_finalizer_hook
self.create_task(agen.aclose())
  File "/opt/Python3.7.0/lib/python3.7/asyncio/base_events.py", line 386, in 
create_task
task = tasks.Task(coro, loop=self)
  File "/opt/Python3.7.0/lib/python3.7/asyncio/base_events.py", line 674, in 
call_soon
self._check_thread()
  File "/opt/Python3.7.0/lib/python3.7/asyncio/base_events.py", line 712, in 
_check_thread
"Non-thread-safe operation invoked on an event loop other "
RuntimeError: Non-thread-safe operation invoked on an event loop other than the 
current one
ERROR asyncio Task was destroyed but it is pending!
source_traceback: Object created at (most recent call last):
  File "/opt/Python3.7.0/lib/python3.7/threading.py", line 885, in _bootstrap
self._bootstrap_inner()
  File "/opt/Python3.7.0/lib/python3.7/threading.py", line 917, in 
_bootstrap_inner
self.run()
  File "/opt/Python3.7.0/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
  File "/opt/Python3.7.0/lib/python3.7/concurrent/futures/thread.py", line 80, 
in _worker
work_item.run()
  File "/opt/Python3.7.0/lib/python3.7/concurrent/futures/thread.py", line 57, 
in run
result = self.fn(*self.args, **self.kwargs)
  File "/opt/Python3.7.0/lib/python3.7/asyncio/base_events.py", line 746, in 
_getaddrinfo_debug
msg.append(f'type={type!r}')
  File "/opt/Python3.7.0/lib/python3.7/enum.py", line 572, in __repr__
self.__class__.__name__, self._name_, self._value_)
  File "/opt/Python3.7.0/lib/python3.7/asyncio/base_events.py", line 466, in 
_asyncgen_finalizer_hook
self.create_task(agen.aclose())
  File "/opt/Python3.7.0/lib/python3.7/asyncio/base_events.py", line 386, in 
create_task
task = tasks.Task(coro, loop=self)
task: ()> created 
at /opt/Python3.7.0/lib/python3.7/asyncio/base_events.py:386>



This is a typical instance. Usually several instances like this occur at once.

I'll try to reproduce these errors in a simple program. Meanwhile, here are 
some details about the actual program, which may or may not be related to the 
errors:

* I have several nested async generators (async for item in asyncgen: yield 
do_something(item); ), and when the errors appear, the above error messages and 
stack traces repeat several times, with the object names mentioned in 
"Exception ignored in: ..." being each of the nested async generators. Other 
parts of the error messages, including the stack traces, are exactly the same.

* I never used threads or loop.run_in_executor() explicitly in the program. 
However, the innermost async generator calls loop.getaddrinfo(), and that is 
implemented by running a Python function, socket.getaddrinfo(), with 
loop.run_in_executor(), so the program implicitly uses threads. 
(socket.getaddrinfo() is a Python function that calls a C function, 
_socket.getaddrinfo().)

* The outermost async generator is not iterated using `async for`. Instead, it 
is iterated by calling its `__aiter__` method, saving the returned async 
iterator object, and then awaiting on the `__anext__` method of the async 
iterator repeatedly. Of course, all of these are done in the same event loop.
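
That manual iteration pattern can be sketched as follows (a minimal stand-in, 
not the actual library code):

```python
import asyncio

async def numbers():
    # minimal async generator standing in for the nested ones described above
    for i in range(3):
        yield i

async def main():
    it = numbers().__aiter__()   # get and keep the async iterator object
    results = []
    while True:
        try:
            # drive the iterator manually instead of `async for`
            results.append(await it.__anext__())
        except StopAsyncIteration:
            break
    return results

print(asyncio.run(main()))  # -> [0, 1, 2]
```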


Environment: Python 3.7.0 compiled from source, on Debian stretch.

--
components: asyncio
messages: 326090
nosy: asvetlov, twisteroid ambassador, yselivanov
priority: normal
severity: normal
status: open
title: _asyncgen_finalizer_hook running in wrong thread
type: behavior
versions: Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue34769>



[issue33833] ProactorEventLoop raises AssertionError

2018-06-27 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

As an aside, I'm wondering whether it makes sense to add a blanket "assert 
exception handler has not been called" condition to ProactorEventLoop's tests, 
or even other asyncio tests. It looks like ProactorEventLoop is more prone to 
dropping exceptions on the floor than SelectorEventLoop.
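
A sketch of what such a guard could look like as a test helper (the name and 
shape are hypothetical, not existing test infrastructure):

```python
import asyncio

def run_checking_no_exception_handler_calls(coro):
    """Run coro to completion and fail if the loop's exception handler
    was invoked at any point -- i.e. if an exception was dropped on the
    floor instead of propagating."""
    calls = []
    loop = asyncio.new_event_loop()
    loop.set_exception_handler(lambda loop, ctx: calls.append(ctx))
    try:
        result = loop.run_until_complete(coro)
    finally:
        loop.close()
    assert not calls, f'exception handler called: {calls}'
    return result

async def ok():
    return 42

print(run_checking_no_exception_handler_calls(ok()))  # -> 42
```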

--

___
Python tracker 
<https://bugs.python.org/issue33833>



[issue33833] ProactorEventLoop raises AssertionError

2018-06-27 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

No problem. Running the attached test script on latest master (Windows 10 
1803) logs several errors like this:

Exception in callback 
_ProactorBaseWritePipeTransport._loop_writing(<_OverlappedF...events.py:479>)
handle: ) 
created at 
%USERPROFILE%\source\repos\cpython\lib\asyncio\proactor_events.py:373>
source_traceback: Object created at (most recent call last):
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\base_events.py", line 
555, in run_until_complete
self.run_forever()
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\base_events.py", line 
523, in run_forever
self._run_once()
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\base_events.py", line 
1750, in _run_once
handle._run()
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\events.py", line 88, in 
_run
self._context.run(self._callback, *self._args)
  File "..\..\..\..\Documents\python\test_proactor_force_close.py", line 8, in 
server_callback
writer.write(b'hello')
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\streams.py", line 305, 
in write
self._transport.write(data)
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\proactor_events.py", 
line 334, in write
self._loop_writing(data=bytes(data))
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\proactor_events.py", 
line 373, in _loop_writing
self._write_fut.add_done_callback(self._loop_writing)
Traceback (most recent call last):
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\events.py", line 88, in 
_run
self._context.run(self._callback, *self._args)
  File "%USERPROFILE%\source\repos\cpython\lib\asyncio\proactor_events.py", 
line 346, in _loop_writing
assert f is self._write_fut
AssertionError

--

___
Python tracker 
<https://bugs.python.org/issue33833>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2018-06-27 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Well, I opened the PR, it shows up here, but there's no reviewer assigned.

--

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2018-06-27 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
pull_requests: +7571

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2018-06-27 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Turns out my typo when preparing the pull request had another victim: the 
changelog entries in the documentation currently link to the wrong issue. I'll 
make a PR to fix that typo; since it's just documentation, hopefully it can 
still get into Python 3.6.6 and 3.7.0.

--

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue33833] ProactorEventLoop raises AssertionError

2018-06-24 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
pull_requests: +7500

___
Python tracker 
<https://bugs.python.org/issue33833>



[issue33834] Test for ProactorEventLoop logs InvalidStateError

2018-06-11 Thread twisteroid ambassador


New submission from twisteroid ambassador :

When running the built-in regression tests, although 
test_sendfile_close_peer_in_the_middle_of_receiving on ProactorEventLoop 
completes successfully, an InvalidStateError is logged.

Console output below:

test_sendfile_close_peer_in_the_middle_of_receiving 
(test.test_asyncio.test_events.ProactorEventLoopTests) ... Exception in 
callback _ProactorReadPipeTransport._loop_reading(<_OverlappedF...ne, 64, 
None)>)
handle: )>
Traceback (most recent call last):
  File "\cpython\lib\asyncio\windows_events.py", line 428, in finish_recv
return ov.getresult()
OSError: [WinError 64] The specified network name is no longer available

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "\cpython\lib\asyncio\proactor_events.py", line 255, in 
_loop_reading
data = fut.result()
  File "\cpython\lib\asyncio\windows_events.py", line 732, in _poll
value = callback(transferred, key, ov)
  File "\cpython\lib\asyncio\windows_events.py", line 432, in finish_recv
raise ConnectionResetError(*exc.args)
ConnectionResetError: [WinError 64] The specified network name is no longer 
available

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "\cpython\lib\asyncio\events.py", line 88, in _run
self._context.run(self._callback, *self._args)
  File "\cpython\lib\asyncio\proactor_events.py", line 282, in 
_loop_reading
self._force_close(exc)
  File "\cpython\lib\asyncio\proactor_events.py", line 117, in 
_force_close
self._empty_waiter.set_exception(exc)
concurrent.futures._base.InvalidStateError: invalid state
ok
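
The InvalidStateError at the end has a simple isolated trigger: setting an 
exception on a future that is already resolved, which is what _force_close() 
does to _empty_waiter here. A minimal sketch, separate from the transport code:

```python
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.set_exception(ConnectionResetError('first error'))
    fut.exception()  # retrieve it so asyncio doesn't log it as unhandled
    try:
        # a second set_exception() on an already-resolved future, like
        # _force_close() on an already-resolved _empty_waiter, is rejected
        fut.set_exception(ConnectionResetError('second error'))
    except asyncio.InvalidStateError:
        return 'invalid state'

print(asyncio.run(main()))  # -> invalid state
```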

--
components: asyncio
messages: 319303
nosy: asvetlov, twisteroid ambassador, yselivanov
priority: normal
severity: normal
status: open
title: Test for ProactorEventLoop logs InvalidStateError
type: behavior
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue33834>



[issue33833] ProactorEventLoop raises AssertionError

2018-06-11 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
keywords: +patch
pull_requests: +7251
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue33833>



[issue33833] ProactorEventLoop raises AssertionError

2018-06-11 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
type:  -> behavior

___
Python tracker 
<https://bugs.python.org/issue33833>



[issue33833] ProactorEventLoop raises AssertionError

2018-06-11 Thread twisteroid ambassador


New submission from twisteroid ambassador :

Sometimes, when the peer closes the connection while a socket transport under 
ProactorEventLoop is writing, asyncio logs an AssertionError.

Attached is a script that fairly reliably reproduces the behavior on my 
computer.

This is caused by _ProactorBasePipeTransport._force_close() being called 
between two invocations of _ProactorBaseWritePipeTransport._loop_writing(), 
where the latter call asserts self._write_fut has not changed after being set 
by the former call.
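
The attached script is the actual repro; the triggering pattern itself, a peer 
hard-closing while the transport is still writing, can be sketched in a 
platform-neutral way (the assertion only fires on ProactorEventLoop, so on 
other loops this just runs quietly):

```python
import asyncio

async def main():
    async def server_cb(reader, writer):
        # keep writing while the peer hard-closes: on ProactorEventLoop this
        # is the window where _force_close() races _loop_writing()
        try:
            for _ in range(50):
                writer.write(b'x' * 65536)
                await asyncio.sleep(0)
        except ConnectionError:
            pass
        finally:
            writer.close()

    server = await asyncio.start_server(server_cb, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    _reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.transport.abort()   # hard-close the client side mid-write
    await asyncio.sleep(0.2)   # give the server callback time to run
    server.close()
    await server.wait_closed()
    return 'finished'

print(asyncio.run(main()))  # -> finished
```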

--
components: asyncio
files: test_proactor_force_close.py
messages: 319302
nosy: asvetlov, twisteroid ambassador, yselivanov
priority: normal
severity: normal
status: open
title: ProactorEventLoop raises AssertionError
versions: Python 3.6, Python 3.7, Python 3.8
Added file: https://bugs.python.org/file47639/test_proactor_force_close.py

___
Python tracker 
<https://bugs.python.org/issue33833>



[issue33530] Implement Happy Eyeball in asyncio

2018-05-29 Thread twisteroid ambassador


Change by twisteroid ambassador :


--
pull_requests: +6867

___
Python tracker 
<https://bugs.python.org/issue33530>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2018-05-28 Thread twisteroid ambassador

Change by twisteroid ambassador :


--
keywords: +patch
pull_requests: +6785
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue31467] cElementTree behaves differently compared to ElementTree

2018-05-28 Thread twisteroid ambassador

Change by twisteroid ambassador :


--
pull_requests: +6786

___
Python tracker 
<https://bugs.python.org/issue31467>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2018-05-28 Thread twisteroid ambassador

twisteroid ambassador  added the comment:

I was about to write a long comment asking what the appropriate behavior should 
be, but then saw that _ProactorSocketTransport already handles the same issue 
properly, so I will just change _SelectorSocketTransport to do the same thing.

--

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue31467] cElementTree behaves differently compared to ElementTree

2018-05-28 Thread twisteroid ambassador

Change by twisteroid ambassador :


--
pull_requests: +6783

___
Python tracker 
<https://bugs.python.org/issue31467>



[issue33630] test_posix: TestPosixSpawn fails on PPC64 Fedora 3.x

2018-05-24 Thread twisteroid ambassador

Change by twisteroid ambassador :


--
keywords: +patch
pull_requests: +6734
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue33630>



[issue33530] Implement Happy Eyeball in asyncio

2018-05-24 Thread twisteroid ambassador

Change by twisteroid ambassador :


--
keywords: +patch
pull_requests: +6733
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue33530>



[issue33530] Implement Happy Eyeball in asyncio

2018-05-15 Thread twisteroid ambassador

New submission from twisteroid ambassador :

Add a Happy Eyeballs implementation to asyncio, based on work in 
https://github.com/twisteroidambassador/async_stagger .

Current plans:

- Add 2 keyword arguments to loop.create_connection and asyncio.open_connection.

* delay: Optional[float] = None. None disables happy eyeballs; a number >= 0 
specifies the delay between starting successive connection attempts.

* interleave: int = 1. Controls reordering of resolved IP addresses by 
address family.

- Optionally, expose the happy eyeballs scheduling helper function. 

* It's currently called "staggered_race()". Suggestions for a better name 
welcome.

* Should it belong to base_events.py, some other existing file or a new 
file?
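
For illustration only, the staggered-start idea behind staggered_race() could 
look roughly like this (a simplified sketch with an illustrative name; the real 
proposal also starts the next attempt early when the previous one fails, which 
is omitted here):

```python
import asyncio

async def staggered_start(coro_fns, delay):
    """Start each attempt `delay` seconds after the previous one; return
    the first successful result and cancel the remaining attempts."""
    pending = set()
    try:
        for fn in coro_fns:
            pending.add(asyncio.ensure_future(fn()))
            done, pending = await asyncio.wait(
                pending, timeout=delay, return_when=asyncio.FIRST_COMPLETED)
            for task in done:
                if task.exception() is None:
                    return task.result()
        while pending:  # all attempts started; wait for a success
            done, pending = await asyncio.wait(
                pending, return_when=asyncio.FIRST_COMPLETED)
            for task in done:
                if task.exception() is None:
                    return task.result()
        raise ConnectionError('all connection attempts failed')
    finally:
        for task in pending:
            task.cancel()

async def attempt(result, duration):
    # stand-in for one connection attempt
    await asyncio.sleep(duration)
    return result

print(asyncio.run(staggered_start(
    [lambda: attempt('slow', 0.5), lambda: attempt('fast', 0.05)],
    delay=0.1)))  # -> fast
```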

--
components: asyncio
messages: 316757
nosy: Yury.Selivanov, asvetlov, twisteroid ambassador, yselivanov
priority: normal
severity: normal
status: open
title: Implement Happy Eyeball in asyncio
type: enhancement
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue33530>



[issue33413] asyncio.gather should not use special Future

2018-05-08 Thread twisteroid ambassador

twisteroid ambassador  added the comment:

I would like to comment on the last observation about current_task().cancel(). 
I also ran into this corner case recently. 

When a task is cancelled from outside, by virtue of there *being something 
outside doing the cancelling*, the task being cancelled is not currently 
running, and that usually means the task is waiting at an `await` statement, in 
which case a CancelledError will be raised at this `await` statement the next 
time this task runs. The other possibility is that the task has been created 
but has not had a chance to run yet, and in this case the task is marked 
cancelled, and code inside the task will not run.

When one cancels a task from the inside by calling cancel() on the task object, 
the task will still run as normal until it reaches the next `await` statement, 
where a CancelledError will be raised. If there is no `await` between calling 
cancel() and the task returning, however, the CancelledError is never raised 
inside the task, and the task will end up in the state of done() == True, 
cancelled() == False, exception() == CancelledError. Anyone awaiting for the 
task will get a CancelledError without a meaningful stack trace, like this:

Traceback (most recent call last):
  File "cancel_self.py", line 89, in run_one
loop.run_until_complete(coro)
  File "C:\Program Files\Python36\lib\asyncio\base_events.py", line 467, in 
run_until_complete
return future.result()
concurrent.futures._base.CancelledError

This is the case described in the original comment.

I would also consider this a bug or at least undesired behavior. Since 
CancelledError is never raised inside the task, code in the coroutine cannot 
catch it, and after the task returns the return value is lost. For a coroutine 
that acquires and returns some resource (say asyncio.open_connection()), this 
means that neither the task itself nor the code awaiting the task can release 
the resource, leading to leakage.

I guess one should be careful not to cancel the current task from the inside.
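
The corner case is easy to demonstrate. The exact final task state varies 
across Python versions, but the loss of the return value does not (the names 
below are illustrative):

```python
import asyncio

async def open_resource():
    # pretend we acquired something, then cancel ourselves with no
    # `await` between cancel() and return
    asyncio.current_task().cancel()
    return 'precious resource'

async def main():
    task = asyncio.ensure_future(open_resource())
    try:
        await task
    except asyncio.CancelledError:
        pass
    return task

task = asyncio.run(main())
# the coroutine body returned normally, yet the result is unreachable:
assert task.done()
try:
    task.result()
except asyncio.CancelledError:
    print('result lost')  # prints 'result lost'
```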

--
nosy: +twisteroid ambassador

___
Python tracker 
<https://bugs.python.org/issue33413>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2017-10-16 Thread twisteroid ambassador

Change by twisteroid ambassador :


--
nosy: +giampaolo.rodola, haypo

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2017-09-29 Thread twisteroid ambassador

twisteroid ambassador  added the comment:

This issue is somewhat related to issue27223, in that both are caused by using 
self._sock after it has already been assigned None when the connection is 
closed. It seems like Transports in general may need better protection from 
this kind of behavior.

--

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue31647] asyncio: StreamWriter write_eof() after close raises mysterious AttributeError

2017-09-29 Thread twisteroid ambassador

New submission from twisteroid ambassador :

Currently, if one attempts to do write_eof() on a StreamWriter after the 
underlying transport is already closed, an AttributeError is raised:


Traceback (most recent call last):
  File "\scratch_3.py", line 34, in main_coro
writer.write_eof()
  File "C:\Program Files\Python36\lib\asyncio\streams.py", line 300, in 
write_eof
return self._transport.write_eof()
  File "C:\Program Files\Python36\lib\asyncio\selector_events.py", line 808, in 
write_eof
self._sock.shutdown(socket.SHUT_WR)
AttributeError: 'NoneType' object has no attribute 'shutdown'


This is because _SelectorSocketTransport.write_eof() only checks for self._eof 
before calling self._sock.shutdown(), and self._sock has already been assigned 
None after _call_connection_lost().

Compare with StreamWriter.write() after close, which either does nothing or 
logs a warning after 5 attempts; or StreamWriter.drain() after close, which 
raises a ConnectionResetError; or even StreamWriter.close() after close, which 
does nothing.

Trying to do write_eof() after close may happen unintentionally, for example 
when the following sequence of events happens:
* the remote side closes the connection
* the local side attempts to write, so the socket "figures out" the connection 
is closed, and closes this side of the socket. Note that the write fails 
silently, except with loop.set_debug(True), where asyncio logs "Fatal write 
error on socket transport".
* the local side does write_eof(). An AttributeError is raised.

Currently the only way to handle this gracefully is to either catch 
AttributeError or check StreamWriter.transport.is_closing() before write_eof(). 
Neither is pretty.

I suggest making write_eof() after close either do nothing, or raise a subclass 
of ConnectionError. Both will be easier to handle than the current behavior.

Attached repro.
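
Until then, the is_closing() workaround can be wrapped in a small helper 
(safe_write_eof is a hypothetical name, not an asyncio API):

```python
import asyncio

def safe_write_eof(writer):
    # guard write_eof() so it becomes a no-op once the transport is
    # closing, instead of raising the AttributeError described above
    if writer.can_write_eof() and not writer.transport.is_closing():
        writer.write_eof()

async def main():
    async def handler(reader, writer):
        await reader.read()   # read until the client signals EOF/close
        writer.close()

    server = await asyncio.start_server(handler, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    _reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.close()           # the transport is now closing
    safe_write_eof(writer)   # no-op instead of raising
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return 'ok'

print(asyncio.run(main()))  # -> ok
```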

--
components: asyncio
files: asyncio_write_eof_after_close_test.py
messages: 303391
nosy: twisteroid ambassador, yselivanov
priority: normal
severity: normal
status: open
title: asyncio: StreamWriter write_eof() after close raises mysterious 
AttributeError
type: behavior
versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8
Added file: 
https://bugs.python.org/file47180/asyncio_write_eof_after_close_test.py

___
Python tracker 
<https://bugs.python.org/issue31647>



[issue31176] Is a UDP transport also a ReadTransport/WriteTransport?

2017-08-10 Thread twisteroid ambassador

New submission from twisteroid ambassador:

In docs / Library Reference / asyncio / Transports and Protocols, it is 
mentioned that "asyncio currently implements transports for TCP, UDP, SSL, and 
subprocess pipes. The methods available on a transport depend on the 
transport’s kind." It also lists methods available on a BaseTransport, 
ReadTransport, WriteTransport, DatagramTransport and BaseSubprocessTransport.

However, the docs do not explain which transports have methods from which 
base classes, or, in other words, which base classes each concrete transport 
class inherits from. And this may not be obvious: for example, a UDP transport 
certainly is a DatagramTransport, but is it also a ReadTransport or a 
WriteTransport?

(I feel like the answer is "no it isn't", but there is plenty of conflicting 
evidence. The docs show that WriteTransport has write_eof() and can_write_eof() 
-- methods clearly geared towards stream-like transports -- and it duplicates 
abort() from DatagramTransport, so it would seem like WriteTransport and 
DatagramTransport are mutually exclusive. On the other hand, the default 
concrete implementation, _SelectorDatagramTransport, actually inherits from 
Transport, which inherits from both ReadTransport and WriteTransport, yet it 
does not inherit from DatagramTransport. As a result, _SelectorDatagramTransport 
has all the methods from ReadTransport and WriteTransport, but many of them 
raise NotImplementedError. This is why I'm asking this question in the first 
place: I found that the transport object I got from create_datagram_endpoint() 
has both pause_reading() and resume_reading() methods that raise 
NotImplementedError, and thought that perhaps some event loop implementations 
would have these methods working, and I should try to use them. And before you 
say "UDP doesn't do flow control", asyncio actually does provide flow control 
for UDP on the writing end: see 
https://www.mail-archive.com/python-tulip@googlegroups.com/msg00532.html . So 
it's not preposterous that there might be flow control on the reading end as 
well.)

I think it would be nice if the documentation could state the methods 
implemented for each type of transport, as the designers of Python intended, so 
there's a clear expectation of what methods will / should be available across 
different implementations of event loops and transports. Something along the 
lines of "The methods available on a transport depend on the transport’s kind: 
TCP transports support methods declared in BaseTransport, ReadTransport and 
WriteTransport below, etc."
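
In the meantime, the only way to find out is to introspect the concrete 
transport object at runtime, e.g.:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    transport, _protocol = await loop.create_datagram_endpoint(
        asyncio.DatagramProtocol, local_addr=('127.0.0.1', 0))
    # check which transport ABCs the concrete class implements;
    # the answer has varied across Python versions, so no value is
    # asserted here beyond BaseTransport
    kinds = [cls.__name__ for cls in
             (asyncio.BaseTransport, asyncio.ReadTransport,
              asyncio.WriteTransport, asyncio.DatagramTransport)
             if isinstance(transport, cls)]
    transport.close()
    return kinds

print(asyncio.run(main()))
```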

--
assignee: docs@python
components: Documentation, asyncio
messages: 300083
nosy: docs@python, twisteroid ambassador, yselivanov
priority: normal
severity: normal
status: open
title: Is a UDP transport also a ReadTransport/WriteTransport?
type: enhancement
versions: Python 3.5, Python 3.6, Python 3.7

___
Python tracker 
<http://bugs.python.org/issue31176>