New submission from Jason Fried <m...@jasonfried.info>:

At Facebook and Instagram we have large, interconnected codebases without clear 
boundaries of ownership. As we move more and more services over to asyncio, we 
are finding that code paths which used to be blocking (but fast) now contain 
asyncio code driven by run_until_complete(). That is fine for all the existing 
blocking callers, but sometimes we have async callers to that same blocking 
code path, and for them it no longer works.
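
Here is a minimal sketch of the failure mode (module and function names are 
made up for illustration, not from our codebase). A once-blocking helper deep 
in the dependency tree now drives a coroutine with run_until_complete(); plain 
blocking callers are unaffected, but an async caller hits "RuntimeError: This 
event loop is already running":

    import asyncio

    async def fetch_value():
        # Async work that now lives deep in the dependency tree.
        await asyncio.sleep(0.1)
        return 42

    def get_value():
        # Still presents a blocking interface to its callers.
        loop = asyncio.get_event_loop()
        return loop.run_until_complete(fetch_value())

    async def handler():
        # An async caller reaching the same helper: this raises
        # "RuntimeError: This event loop is already running".
        return get_value()

    asyncio.get_event_loop().run_until_complete(handler())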

That leaves us with two options: revert the change so the code deep in the 
dependency tree no longer uses asyncio, or convert every function in the call 
stack to asyncio. Neither is practical, so engineers have worked around the 
problem in two crazy ways.

1. Nested event loops: when you hit a run_until_complete() while a loop is 
already running, create a new event loop, run the coroutine on it, and return 
the result.
2. Like the first, but each library creates its own event loop and swaps it in 
as the current loop for the duration of the run_until_complete(), restoring the 
original loop when it is done (a rough sketch of this approach follows below).
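
For illustration, here is roughly what the second workaround looks like (the 
names are mine, not from our code). The library keeps a private loop and 
installs it as the current loop while it runs the coroutine. Note that newer 
asyncio releases may refuse to start a second loop while another one is already 
running in the same thread, and even where it is permitted it suffers from the 
problem described next:

    import asyncio

    _private_loop = asyncio.new_event_loop()

    def run_sync(coro):
        # Swap in the library's private loop for the duration of the call,
        # then restore whatever loop was installed before.
        previous = asyncio.get_event_loop()
        asyncio.set_event_loop(_private_loop)
        try:
            return _private_loop.run_until_complete(coro)
        finally:
            asyncio.set_event_loop(previous)

The first workaround is the same idea minus the set_event_loop() swap.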

Both of these ultimately have the same problem: everything on the primary event 
loop stops running until the new loop is complete. What if run_until_complete() 
could instead keep servicing the existing event loop? That would ensure that 
tasks already scheduled on it don't time out.
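
One way to picture the idea (a very rough illustration only, leaning on the 
private _run_once() step; a real change would live inside the event loop 
itself): instead of refusing or spinning up a second loop, a re-entrant 
run_until_complete() would keep pumping the already-running loop until the new 
task finishes, so everything else scheduled on it keeps making progress:

    import asyncio

    def reentrant_run_until_complete(loop, coro):
        # Illustration only: schedule the coroutine on the running loop and
        # keep driving that same loop's iteration step until it completes,
        # rather than blocking it or starting a nested loop.
        task = asyncio.ensure_future(coro, loop=loop)
        while not task.done():
            loop._run_once()  # private asyncio detail, shown only to convey the idea
        return task.result()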

This would let us convert code to asyncio faster, without needing complete 
knowledge of a codebase and its call graph, and without requiring every 
engineer to move in lockstep.

----------
components: asyncio
messages: 316678
nosy: asvetlov, fried, yselivanov
priority: normal
severity: normal
status: open
title: loop.run_until_complete re-entrancy to support more complicated 
codebases in transition to asyncio
type: enhancement
versions: Python 3.8

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33523>
_______________________________________