Yury Selivanov <yseliva...@gmail.com> added the comment:

> However, I could not find any tests for the added feature (safe
> use with async) though. We would be adding a new feature without
> tests.

That's not a problem; I can add a few async/await tests.
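For reference, here's a minimal sketch of the kind of test meant here (assuming the contextvars-based decimal context and Python 3.7+): each task changes its context with localcontext(), yields to the event loop, and the change must neither be lost nor leak into a concurrently running task.

```python
import asyncio
import decimal

async def task(precision):
    # Each task gets its own decimal context change, isolated by contextvars.
    with decimal.localcontext() as ctx:
        ctx.prec = precision
        await asyncio.sleep(0)  # yield so the tasks interleave
        # The precision set above must still be in effect after resuming.
        return decimal.getcontext().prec

async def main():
    results = await asyncio.gather(task(5), task(10), task(15))
    assert results == [5, 10, 15], results

asyncio.run(main())
```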


> I'm getting a large slowdown:
> ./python Modules/_decimal/tests/bench.py
> [..]
> patched:    [0.199, 0.206, 0.198, 0.199, 0.197, 0.202, 0.198, 0.201, 0.213, 
> 0.199]
> status-quo: [0.187, 0.184, 0.185, 0.183, 0.184, 0.188, 0.184, 0.183, 0.183, 
> 0.185]

I'd like you to elaborate a bit more here.  First, bench.py produces 
completely different output from what you've quoted.  How exactly did you 
compile these results?  Are those numbers from the Pi calculation or the 
factorial?  Can you upload the actual script you used (if there is one)?

Second, here's my run of bench.py with contextvars and without: 
https://gist.github.com/1st1/1187fc58dfdef86e3cad8874e0894938

I don't see any difference, let alone a 10% slowdown.


> xwith.py
> --------
>
> patched:    [0.535, 0.541, 0.523]
> status-quo: [0.412, 0.393, 0.375]

This benchmark is specifically constructed to profile creating decimal 
contexts while doing almost nothing with them.
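The actual xwith.py script isn't shown here, but a hypothetical reconstruction of that benchmark shape would look like this: entering and leaving decimal contexts in a tight loop with no arithmetic, so the cost of updating the context variable dominates.

```python
import decimal
import timeit

def xwith():
    # Enter and leave a decimal context without doing any arithmetic;
    # this stresses context creation/switching, not decimal math.
    with decimal.localcontext() as ctx:
        ctx.prec = 28

# Illustrative only; the loop count is an assumption, not from xwith.py.
elapsed = timeit.timeit(xwith, number=100_000)
print(f"{elapsed:.3f}s")
```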

I've optimized PEP 567 for the ContextVar.get() operation, not 
ContextVar.set() (it's hard to make hamt.set() as fast as dict.set()).  That 
way, if you have decimal code that performs actual calculations with decimal 
objects, the operation of looking up the current context is cheap.
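A rough way to see the asymmetry described above (timings are illustrative, not a benchmark): ContextVar.get() is a fast lookup, while ContextVar.set() has to produce a new immutable mapping (a HAMT) under the hood.

```python
import timeit
from contextvars import ContextVar

var = ContextVar('var', default=0)
var.set(1)

# get() only reads the current context's mapping.
get_time = timeit.timeit(var.get, number=1_000_000)
# set() writes a new entry into the context, which is the costlier path.
set_time = timeit.timeit(lambda: var.set(1), number=1_000_000)

print(f"get: {get_time:.3f}s  set: {set_time:.3f}s")
```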

It's hard to imagine a situation where real decimal-related code just 
creates decimal contexts and does nothing else with them.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32630>
_______________________________________