On Fri, Oct 13, 2017 at 7:38 PM, Yury Selivanov <yselivanov...@gmail.com>
wrote:

> On Fri, Oct 13, 2017 at 11:49 AM, Koos Zevenhoven <k7ho...@gmail.com>
> wrote:
> [..]
> > This was my starting point 2.5 years ago, when Yury was drafting this
> > status quo (PEP 492). It looked a lot like PEP 492 was inevitable, but
> > that there would be a problem, where each API that uses "blocking IO"
> > somewhere under the hood would need a duplicate version for asyncio
> > (and one for each third-party async framework!). I felt it was necessary
> > to think about a solution before PEP 492 was accepted, and this became a
> > fairly short-lived thread here on python-ideas:
>
> Well, it's obvious why the thread was "short-lived".  Don't mix
> non-blocking and blocking code and don't nest asyncio loops.  But I
> believe this new subtopic is a distraction.


Nesting is not the only way to have interaction between two event loops.
But whenever anyone *does* want to nest two loops, they are perhaps more
likely to be loops of different frameworks.

You believe that the semantics in async code are a distraction?


> You should start a new
> thread on Python-ideas if you want to discuss the acceptance of PEP
> 492 2.5 years ago.
>

I'm definitely not interested in discussing the acceptance of PEP 492.


>
> [..]
> > The bigger question is, what should happen when a coroutine awaits on
> > another coroutine directly, without giving the framework a chance to
> > interfere:
> >
> >
> > async def inner():
> >     do_context_aware_stuff()
> >
> > async def outer():
> >     with first_context():
> >         coro = inner()
> >
> >     with second_context():
> >         await coro
> >
> > The big question is: In the above, which context should the coroutine
> > be run in?
>
> The real big question is how people usually write code.  And the
> answer is that they *don't write it like that* at all.  Many context
> managers in many frameworks (aiohttp, tornado, and even asyncio)
> require you to wrap your await expressions in them.  Not coroutine
> instantiation.
>

You know very well that I've been talking about how people usually write
code, etc. But we still need to handle the corner cases too.


>
> A more important point is that existing context solutions for async
> frameworks can only support a with statement around an await
> expression. And people that use such solutions know that 'with ...:
> coro = inner()' isn't going to work at all.
>
> Therefore wrapping coroutine instantiation in a 'with' statement is
> not a pattern.  It can only become a pattern if whatever execution
> context PEP gets accepted in Python 3.7 encourages people to use it.
>
>
The code is there to illustrate the semantics; it is not an example of
real code. The point is to highlight that the context has changed between
when the coroutine function was called and when the resulting coroutine is
awaited. That is certainly something that can happen in real code, even if
it is not the most typical case. I did mention this in my previous email.
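Just as an illustration (first_context, second_context and fetch are
made-up names here, not anything from an actual library), something along
these lines could plausibly appear in real code:

import asyncio

async def fetch(url):
    ...  # some context-aware I/O

async def outer(urls):
    with first_context():
        # coroutine objects are created here, but not yet awaited
        coros = [fetch(url) for url in urls]

    with second_context():
        # ...and they only start running here, under a different context
        results = await asyncio.gather(*coros)
    return results

The question is which of the two contexts the fetch() coroutines should
see when they finally run.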


> [..]
> > Both of these would have their own stack of (argument, value) assignment
> > pairs, explained in the implementation part of the first PEP 555 draft.
> > While this is a complication, the performance overhead of these is so
> > small that doubling the overhead should not be a performance concern.
>
> Please stop handwaving performance.  Using big O notation:
>
>
There is discussion on performance elsewhere, now also in this other
subthread:

https://mail.python.org/pipermail/python-ideas/2017-October/047327.html

> PEP 555, worst complexity for uncached lookup: O(N), where 'N' is the
> total number of all context values for all context keys for the
> current frame stack.


Not true. See the above link. Lookups are fast (*and* O(1), if we want
them to be).

PEP 555 stacks are independent of frames, BTW.
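To sketch what I mean, here is a deliberately simplified toy version of
the idea. This is *not* the actual PEP 555 data structures or API, just my
illustration of why a lookup that hits a per-key cache can be O(1):

# Toy sketch only, not the actual PEP 555 implementation.
# Values are pushed as (key, value) pairs; each key caches its
# topmost value, so a lookup that hits the cache is O(1).

_stack = []   # (key, value) pairs, most recent last
_cache = {}   # key -> current topmost value

def push(key, value):
    _stack.append((key, value))
    _cache[key] = value

def pop():
    key, _ = _stack.pop()
    # recompute the cached value for just this key from what remains
    for k, v in reversed(_stack):
        if k == key:
            _cache[key] = v
            break
    else:
        del _cache[key]   # no value left for this key

def lookup(key):
    return _cache[key]    # O(1): no walk over the stack

Pushing and popping do the bookkeeping; reading a value never walks the
stack at all.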



> For a recursive function you can easily have a
> situation where the cache is invalidated often, and code starts to run
> slower and slower.
>

Not true either. The lookups are O(1) in a recursive function, with and
without nested contexts.
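Using the push() and lookup() helpers from the toy sketch above (the key
name below is made up), a recursive function that only *reads* a value
does one dict lookup per call and never touches the stack, so nothing gets
invalidated:

def recurse(n):
    if n == 0:
        return lookup("precision")   # hypothetical key
    return recurse(n - 1)

push("precision", 30)
print(recurse(500))   # each level is an O(1) cache hit; prints 30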

I started this thread for discussion about semantics in an async context.
Stefan asked about performance in the other thread, so I posted there.

––Koos


> PEP 550 v1, worst complexity for uncached lookup: O(1), see [1].
>
> PEP 550 v2+, worst complexity for uncached lookup: O(k), where 'k' is
> the number of nested generators for the current frame. Usually k=1.
>
> While caching will mitigate PEP 555's bad performance characteristics
> in *tight loops*, the performance of uncached path must not be
> ignored.
>
> Yury
>
> [1] https://www.python.org/dev/peps/pep-0550/#appendix-hamt-
> performance-analysis
>



-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +