Not sure if it helps, but I got something working for the problem I was 
experiencing: I detect whether there is a currently running event loop, and 
then, at the synchronous call points, create and run a new loop on a separate 
thread. This makes the object in question usable synchronously or 
asynchronously, but not both at once.

This was kind of a pain in the butt, though, and it blocks the outer loop anyway.
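
For what it's worth, the workaround has roughly this shape. This is a minimal sketch, not the actual code; the helper name run_sync is mine, and it assumes Python 3.7+ (asyncio.get_running_loop / asyncio.run):

```python
import asyncio
import threading

def run_sync(coro):
    """Run a coroutine to completion from synchronous code.

    If no event loop is running in this thread, just use asyncio.run().
    If a loop *is* already running, spin up a fresh loop on a separate
    thread and block until the coroutine finishes -- which, as noted,
    blocks the outer loop for the duration.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: safe to run directly.
        return asyncio.run(coro)

    result = None
    error = None

    def worker():
        nonlocal result, error
        try:
            # asyncio.run() creates and tears down a new loop on this thread.
            result = asyncio.run(coro)
        except BaseException as exc:
            error = exc

    t = threading.Thread(target=worker)
    t.start()
    t.join()  # blocks the calling thread (and its loop, if any)
    if error is not None:
        raise error
    return result
```

So a synchronous method on the object can call run_sync(self._do_async_work()) and behave the same whether or not the caller happens to be inside a running loop, at the cost of blocking that loop.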

I think I’m in favor of a configurable option to allow separate nested loops, 
if possible. It’s a good solution for the narrow situation I’m concerned 
with: letting library writers provide synchronous APIs on top of otherwise 
asynchronous code that has to run in a world it can’t make usage demands of. 
In that scenario, I think the details of the underlying inner event loop are 
unlikely to leak out to the outer event loop (creating a cross-event-loop 
dependency) while it’s being used synchronously.

-Dan Nugent
On Mar 25, 2019, 21:59 -0400, Dima Tisnek <[email protected]> wrote:
> End-user point of view, a.k.a. my 2c:
>
> re the more worrisome scenario: if "objects" from two event loops depend
> on each other, that's unsolvable in the general case. On the other hand,
> what the OP wanted was akin to DAG-like functionality or a locking
> hierarchy. A naive implementation would block the caller's callbacks until
> the callee completes, but that may be what the user actually wanted (?).
>
> re ipython notebook state reuse across cells: that's a whole different
> can of worms, because cells can be re-evaluated in arbitrary order. As
> a user, I would expect my async code not to interfere with ipynb's
> internal implementation. In fact, I'd rather see ipynb isolated into its
> own thread/loop/process. After all, I would, at times, like to use a
> debugger.
> (Full disclosure: I use the debugger in ipython, and it never really worked
> for me in a sync notebook, let alone an async one.)
>
> re the original proposal: async code calls a synchronous function that
> wants to do some async work and wait for the result; a telemetry
> bolt-on, for example. I would expect the two event loops to be isolated.
> Attempting to await across loops should raise an exception, as it does.
> When some application wants to coordinate things that happen in
> multiple event loops, that should be the application's problem.
>
>
> I think this calls for a higher-level paradigm, something that allows
> suspension and resumption of entire event loops (maybe executors?) or
> something that allows several event loops to run without being aware
> of each other (threads?).
>
>
> I feel that just adding a flag to allow creation/setting of an event
> loop is not enough.
> We'd need at least a stack that event loops can be pushed onto and
> popped from, and possibly more...
>
> Cheers,
> D.
>
> On Tue, 26 Mar 2019 at 09:52, Glyph <[email protected]> wrote:
> >
> > Allowing reentrant calls to the same loop is not a good idea IMO. At best, 
> > you'll need to carefully ensure that the event loop and task 
> > implementations are themselves reentrancy-safe (including the C 
> > accelerators and third parties like uvloop?), and then it just invites 
> > subtle issues in the applications built on top of it. I don't think there's 
> > a good reason to allow or support this (and nest_asyncio should be heavily 
> > discouraged). I do, however, think that PBP ("practicality beats purity") 
> > is a good enough reason to allow opt-in use of multiple event loops nested 
> > inside each other (maybe something on the EventLoopPolicy for configuration?).
> >
> >
> > +1 to all of this.
> > _______________________________________________
> > Async-sig mailing list
> > [email protected]
> > https://mail.python.org/mailman/listinfo/async-sig
> > Code of Conduct: https://www.python.org/psf/codeofconduct/