On Wed, Mar 27, 2019 at 1:49 PM Guido van Rossum <gu...@python.org> wrote:
>
> On Wed, Mar 27, 2019 at 1:23 PM Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Wed, Mar 27, 2019 at 10:44 AM Daniel Nugent <nug...@gmail.com> wrote:
>> >
>> > FWIW, the asyncio_run_encapsulated approach does not work with the
>> > transport/protocol APIs, because the loop needs to stay alive concurrently
>> > with the connection in order for the awaitables to all be on the same loop.
>>
>> Yeah, there are two basic approaches being discussed here: using two
>> different loops, versus re-entering an existing loop.
>> asyncio_run_encapsulated is specifically for the two-loops approach.
>>
>> In this version, the outer loop, and everything running on it, stop
>> entirely while the inner loop is running – which is exactly what
>> happens with any other synchronous, blocking API. Using
>> asyncio_run_encapsulated(aiohttp.get(...)) in Jupyter is exactly like
>> using requests.get(...), no better or worse.
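(To make the two-loops version concrete, here's a minimal sketch. The
helper name is made up, and the asyncio_run_encapsulated posted earlier
in the thread differs in the details, but the blocking behavior is the
same:)

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    def run_coro_blocking(coro):
        # Hypothetical helper: run 'coro' to completion on a fresh loop
        # in a worker thread, blocking the caller (and any outer loop)
        # until it finishes, just like any other synchronous call.
        with ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(asyncio.run, coro).result()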
>
>
> And Yury's followup suggests that it's hard to achieve total isolation 
> between loops, due to subprocess management and signal handling (which are 
> global states in the OS, or at least per-thread -- the OS doesn't know about 
> event loops).

The tough thing about signals is that they're all process-global
state, *not* per-thread.

In Trio I think this wouldn't be a big deal – whenever we touch signal
handlers, we save the old value and then restore it afterwards, so the
inner loop would just temporarily override the outer loop, which I
guess is what you'd expect. (And Trio's subprocess support avoids
touching signals or any global state.) Asyncio could potentially do
something similar, but its subprocess support does rely on signals,
which could get messy since the outer loop can't be allowed to miss
any SIGCHLDs. Asyncio does have a mechanism to share SIGCHLD handlers
between loops (intended to support the case where you have loops
running in multiple threads simultaneously), and it might handle this
case too, but I don't know the details well enough to say for sure.
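(In code, that discipline is roughly the following; a minimal sketch,
assuming it only ever runs in the main thread, and ignoring the corner
case where the old handler wasn't installed from Python:)

    import signal
    from contextlib import contextmanager

    @contextmanager
    def temporarily_install(signum, handler):
        # Save whatever handler the outer loop installed, shadow it
        # with the inner loop's handler, and always put the old one
        # back, even if the inner loop's code raises.
        old = signal.getsignal(signum)
        signal.signal(signum, handler)
        try:
            yield
        finally:
            signal.signal(signum, old)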

> I just had another silly idea. What if the magical decorator that can be used 
> to create a sync version of an async def (somewhat like tworoutines) made the 
> async version hand off control to a thread pool? Could be a tad slower, but 
> the tenor of the discussion seems to be that performance is not that much of 
> an issue.
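(If I'm reading the proposal right, it'd look something like this
hypothetical decorator, where the generated sync version blocks on a
pool thread running its own loop:)

    import asyncio
    import functools
    from concurrent.futures import ThreadPoolExecutor

    _pool = ThreadPoolExecutor()

    def syncify(async_fn):
        # Hypothetical decorator: the sync version runs the coroutine
        # to completion on its own loop in a pool thread and blocks,
        # so it can be called even while a loop is already running in
        # the calling thread.
        @functools.wraps(async_fn)
        def sync_version(*args, **kwargs):
            coro = async_fn(*args, **kwargs)
            return _pool.submit(asyncio.run, coro).result()
        return sync_version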

Unfortunately I don't think this helps much... If your async def
doesn't use signals, then it won't interfere with the outer loop's
signal state and a thread is unnecessary. And if it *does* use
signals, then you can't put it in a thread, because Python threads are
forbidden to call any of the signal-related APIs.
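(Concretely, in CPython the very first signal.signal() call off the
main thread raises; a small demonstration:)

    import signal
    import threading

    def install_handler():
        try:
            signal.signal(signal.SIGINT, signal.SIG_IGN)
        except ValueError as exc:
            # CPython refuses with something like
            # "signal only works in main thread"
            print("refused:", exc)

    t = threading.Thread(target=install_handler)
    t.start()
    t.join()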

-n

-- 
Nathaniel J. Smith -- https://vorpus.org