bluecarrot <fe...@kngnt.org> added the comment:

Hi Andrew, thank you for your answer. I am experimenting with coroutines, as I 
am pretty new to them. My idea was to let the writer drain while further 
packets were read, so I await writer_drain right before calling writer.write() 
again. Isn't that the correct way to overlap the reads and the writes?

If I modify my initial code to look like:

import asyncio
from asyncio import StreamReader, StreamWriter

async def forward_stream(reader: StreamReader, writer: StreamWriter,
                         event: asyncio.Event, source: str):
    writer_drain = writer.drain()  # <--- awaitable is created here
    while not event.is_set():
        try:
            # <-- CancelledError can be caught here; the stack unwinds and
            # writer_drain is never awaited, sure.
            data = await asyncio.wait_for(reader.read(1024), 1)
        except asyncio.TimeoutError:
            continue
        except asyncio.CancelledError:
            event.set()
            break
        ...  # the rest is not important for this case

    await writer_drain

so that if the task is cancelled, writer_drain is awaited outside of the loop. 
This works, at the cost of introducing code that exists only for testing 
purposes (which feels wrong). In "production", the workflow of this code will 
be to lose the connection, break out of the loop, and wait for the writer 
stream to finish... but I am not adding any mechanism for cancelling the 
streams once the script is running.
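For what it's worth, here is a minimal sketch (my own, not the original 
script) of how the same coroutine could guarantee the drain is awaited on 
every exit path with try/finally, so no test-specific code is needed. The 
`if not data` EOF check and the write/drain step stand in for the elided part 
of the loop:

import asyncio
from asyncio import StreamReader, StreamWriter

async def forward_stream(reader: StreamReader, writer: StreamWriter,
                         event: asyncio.Event, source: str):
    writer_drain = writer.drain()
    try:
        while not event.is_set():
            try:
                data = await asyncio.wait_for(reader.read(1024), 1)
            except asyncio.TimeoutError:
                continue
            except asyncio.CancelledError:
                event.set()
                break
            if not data:  # peer closed the connection (assumption)
                break
            # Overlap: finish the previous drain before writing again.
            # Clearing writer_drain first avoids re-awaiting a consumed
            # coroutine in the finally block if this await raises.
            pending, writer_drain = writer_drain, None
            await pending
            writer.write(data)
            writer_drain = writer.drain()
    finally:
        # Runs on every exit path (event set, EOF, cancellation, error),
        # so the coroutine created by writer.drain() is always awaited.
        if writer_drain is not None:
            await writer_drain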

In the same way that leaked tasks are "swallowed" (which I have tested and 
which works), shouldn't these cases also be handled by the tearDownClass 
method of IsolatedAsyncioTestCase?
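
For context, a hypothetical reproduction along these lines (the test name and 
the echo server are my assumptions; forward_stream() is the coroutine above) 
shows where the warning appears:

import asyncio
import unittest

class ForwardCancelTest(unittest.IsolatedAsyncioTestCase):
    async def test_cancelling_forwarder(self):
        # A trivial echo server so forward_stream() has live streams.
        async def echo(reader, writer):
            writer.write(await reader.read(1024))
            await writer.drain()

        server = await asyncio.start_server(echo, "127.0.0.1", 0)
        host, port = server.sockets[0].getsockname()
        reader, writer = await asyncio.open_connection(host, port)

        event = asyncio.Event()
        task = asyncio.create_task(
            forward_stream(reader, writer, event, "test"))
        await asyncio.sleep(0.1)  # let it create the drain() awaitable
        task.cancel()
        # The coroutine catches CancelledError, sets the event and breaks,
        # so awaiting the task returns normally. With the trailing
        # `await writer_drain` the run is clean; without it, the run ends
        # with "RuntimeWarning: coroutine 'StreamWriter.drain' was never
        # awaited".
        await task

        writer.close()
        server.close()
        await server.wait_closed()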

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue46568>
_______________________________________