Hi Yuri,
I think these are great ideas to speed up CPython. They are probably
the simplest yet most effective ways to get performance improvements
in the VM.
MicroPython has had LOAD_METHOD/CALL_METHOD from the start (inspired
by PyPy, and the main reason to have it is because you don't need to
BTW, this optimization also makes some old optimization tricks obsolete.
1. No need to write 'def func(len=len)'. Globals lookups will be fast.
2. No need to save bound methods:
obj = []
obj_append = obj.append
for _ in range(10**6):
    obj_append(something)
This hand-optimized code would
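A minimal runnable sketch of the two tricks described above (the names process_fast and something are illustrative only, not from the patch):

```python
# Sketch of the two hand-optimizations the opcode cache is meant to make
# unnecessary.  process_fast is a made-up example function.

def process_fast(items, len=len):      # trick 1: bind the global into a
    return len(items)                  # default arg to skip LOAD_GLOBAL

obj = []
obj_append = obj.append                # trick 2: hoist the bound method
for i in range(10**6):                 # out of the loop so the attribute
    obj_append(i)                      # lookup runs once, not 10**6 times

assert process_fast(obj) == 10**6
```

With a global/attribute lookup cache in the VM, the plain `len(items)` and `obj.append(i)` spellings should perform comparably without these contortions.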
On 2016-01-27 3:46 PM, Glenn Linderman wrote:
On 1/27/2016 12:37 PM, Yury Selivanov wrote:
MicroPython also has dictionary lookup caching, but it's a bit
different to your proposal. We do something much simpler: each opcode
that has a cache ability (eg LOAD_GLOBAL, STORE_GLOBAL,
On 1/27/2016 12:37 PM, Yury Selivanov wrote:
MicroPython also has dictionary lookup caching, but it's a bit
different to your proposal. We do something much simpler: each opcode
that has a cache ability (eg LOAD_GLOBAL, STORE_GLOBAL, LOAD_ATTR,
etc) includes a single byte in the opcode which
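Based on the description above (a single cache byte embedded per opcode), a loose Python sketch of how such a lookup cache could behave; real MicroPython does this in C inside the VM, and the class and method names here are made up for illustration:

```python
# Illustrative model of a one-byte inline cache for a name lookup:
# the opcode remembers the index of the entry it found last time, and
# retries the full lookup only when that index no longer matches.

class CachedGlobals:
    def __init__(self, mapping):
        self._keys = list(mapping)      # fixed insertion order
        self._map = mapping

    def load(self, name, cache):        # cache: one-element mutable slot
        idx = cache[0]
        # fast path: the cached index still points at the right key
        if idx < len(self._keys) and self._keys[idx] == name:
            return self._map[self._keys[idx]]
        # slow path: full lookup, then refresh the cache byte
        idx = self._keys.index(name)
        cache[0] = idx
        return self._map[name]

globals_ns = CachedGlobals({"x": 1, "y": 2})
slot = [0]                              # the per-opcode cache byte
assert globals_ns.load("y", slot) == 2  # miss: does the lookup, caches 1
assert slot == [1]
assert globals_ns.load("y", slot) == 2  # hit: index check only
```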
On Wed, 27 Jan 2016 at 10:26 Yury Selivanov wrote:
> Hi,
>
>
> tl;dr The summary is that I have a patch that improves CPython
> performance up to 5-10% on macro benchmarks. Benchmark results on
> Macbook Pro/Mac OS X, desktop CPU/Linux, server CPU/Linux are available
>
Hi Yury,
(Sorry for misspelling your name previously!)
> Yes, we'll need to add CALL_METHOD{_VAR|_KW|etc} opcodes to optimize all
> kinds of method calls. However, I'm not sure how big the impact will be,
> need to do more benchmarking.
I never did such fine-grained analysis with MicroPython.
Damien,
On 2016-01-27 4:20 PM, Damien George wrote:
Hi Yury,
(Sorry for misspelling your name previously!)
NP. As long as the first letter is "y" I don't care ;)
Yes, we'll need to add CALL_METHOD{_VAR|_KW|etc} opcodes to optimize all
kinds of method calls. However, I'm not sure how big
Brett Cannon writes:
> the core team has an implicit understanding that any performance
> improvement is taken into consideration in terms of balancing
> complexity in CPython with how much improvement it gets us.
EIBTI. I can shut up now. Thank you!
Hi,
I pushed a fix before you sent your message. At least test_ast should be
fixed.
https://hg.python.org/cpython/rev/c5df914e73ad
FYI I'm unable to reproduce the test_collections leak.
Victor
On Tuesday, 26 January 2016, Brett Cannon wrote:
> Looks like Victor's
On 28 January 2016 at 04:40, Sven R. Kunze wrote:
> On 27.01.2016 12:16, Nick Coghlan wrote:
>> Umm, no, that's not how this works
> That's exactly how it works, Nick.
>
> INADA uses Python as I use crossroads each day. Daily human business.
>
> If you read his post carefully,
Terry Reedy writes:
> So you agree that the limit of 39 is not intrinsic to the fib function
> or its uses, but is an after-the-fact limit imposed to mask the bug
> proneness of using substitutes for integers.
I don't know what the limit used in the benchmark is, but it must be
quite a bit
On 27.01.2016 11:59, Terry Reedy wrote:
On 1/26/2016 12:35 PM, Sven R. Kunze wrote:
I completely agree with INADA.
I am not sure you do.
I am sure I am. He wants to solve a problem the way that is natural to
him as a unique human being.
It's like saying, because a specific crossroad
On 2016-01-27 3:10 PM, Damien George wrote:
Hi Yuri,
I think these are great ideas to speed up CPython. They are probably
the simplest yet most effective ways to get performance improvements
in the VM.
Thanks!
MicroPython has had LOAD_METHOD/CALL_METHOD from the start (inspired
by PyPy,
Hi,
tl;dr The summary is that I have a patch that improves CPython
performance up to 5-10% on macro benchmarks. Benchmark results on
Macbook Pro/Mac OS X, desktop CPU/Linux, server CPU/Linux are available
at [1]. There are no slowdowns that I could reproduce consistently.
There are
On 2016-01-27 3:01 PM, Brett Cannon wrote:
[..]
We can also optimize LOAD_METHOD. There is a high chance that 'obj' in
'obj.method()' will be of the same type every time we execute the code
object. So if we'd have an opcodes cache, LOAD_METHOD could then cache a
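The idea in the preceding paragraph can be sketched as a per-call-site cache keyed on the object's type; this is a hedged illustration (make_load_method and load_append are invented names, and real CPython would additionally check the type's version tag rather than rely on type identity alone):

```python
# Illustrative model of LOAD_METHOD caching: remember the type seen at
# this call site and reuse the looked-up function while it matches.

def make_load_method(name):
    cache = {"type": None, "func": None}   # one cache per call site
    def load_method(obj):
        tp = type(obj)
        if cache["type"] is tp:            # fast path: cache hit
            return cache["func"], obj
        func = getattr(tp, name)           # slow path: full lookup
        cache["type"], cache["func"] = tp, func
        return func, obj
    return load_method

load_append = make_load_method("append")
lst = []
func, self_ = load_append(lst)   # miss: fills the cache
func(self_, 42)                  # CALL_METHOD: no bound method allocated
func, self_ = load_append(lst)   # hit: same type, no attribute lookup
assert lst == [42]
```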
On 1/27/2016 9:12 AM, Stephen J. Turnbull wrote:
Without that knowledge and effort, choosing a programming language
based on microbenchmarks is like choosing a car based on the
leg-length of the model sitting on the hood in the TV commercial.
+1 QOTD
On Wed, 27 Jan 2016 at 10:12 Sven R. Kunze wrote:
> On 27.01.2016 11:59, Terry Reedy wrote:
>
> On 1/26/2016 12:35 PM, Sven R. Kunze wrote:
>
> I completely agree with INADA.
>
>
> I am not sure you do.
>
>
> I am sure I am. He wants to solve a problem the way that is natural to
On 27.01.2016 12:16, Nick Coghlan wrote:
On 27 January 2016 at 03:35, Sven R. Kunze wrote:
I completely agree with INADA.
It's like saying, because a specific crossroad features a higher accident
rate, people need to change their driving behavior.
No! People won't change and
On 27.01.2016 19:33, Brett Cannon wrote:
And this is why this entire email thread has devolved into a
conversation that isn't really going anywhere. This whole thread has
completely lost track of the point of Victor's earlier email saying
"I'm still working on my FAT work and don't take any
As Brett suggested, I've just run the benchmarks suite with memory
tracking on. The results are here:
https://gist.github.com/1st1/1851afb2773526fd7c58
Looks like the memory increase is around 1%.
One synthetic micro-benchmark, unpack_sequence, contains hundreds of
lines that load a global
On 27 January 2016 at 03:35, Sven R. Kunze wrote:
> I completely agree with INADA.
>
> It's like saying, because a specific crossroad features a higher accident
> rate, people need to change their driving behavior.
> No! People won't change and it's not necessary either. The
On 1/26/2016 12:51 PM, Stephen J. Turnbull wrote:
Terry Reedy writes:
> On 1/26/2016 12:02 AM, INADA Naoki wrote:
>
> > People use the same algorithm in every language when comparing base
> > language performance [1].
>
> The python code is NOT using the same algorithm. The proof is that
Python has test_dynamic which tests such corner cases.
For example, test_modify_builtins_while_generator_active(): "Modify
the builtins out from under a live generator."
https://hg.python.org/cpython/file/58266f5101cc/Lib/test/test_dynamic.py#l49
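A tiny self-contained illustration of the kind of corner case that test refers to (this is not the test itself, just a sketch of the behavior a lookup cache must preserve):

```python
# Modify builtins out from under a live generator: the generator must
# observe the new binding on its next resume, so any global/builtin
# cache has to be invalidated when builtins change.

import builtins

_orig_len = builtins.len

def gen():
    while True:
        yield len("abc")            # resolves len on every resume

g = gen()
assert next(g) == 3                 # real builtin len
builtins.len = lambda s: 99         # swap the builtin under the generator
try:
    assert next(g) == 99            # the generator must see the new len
finally:
    builtins.len = _orig_len        # restore the real builtin
```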
Victor
2016-01-27 10:28 GMT+01:00 Paul Moore
2016-01-23 7:03 GMT+01:00 Chris Angelico :
> I just had a major crash on the system that hosts the
> angelico-debian-amd64 buildbot, and as usual, checked it carefully
> after bringing everything up. It seems now to be timing out after an
> hour of operation:
>
>
On 27 January 2016 at 05:23, Sjoerd Job Postmus wrote:
> On Mon, Jan 25, 2016 at 11:58:12PM +0100, Victor Stinner wrote:
>> ...
>> Oh, they are a lot of things to do! ...
>
> Just wondering, do you also need a set of (abusive) test-cases which
> check 100% conformity to the
On 1/26/2016 12:35 PM, Sven R. Kunze wrote:
I completely agree with INADA.
I am not sure you do.
It's like saying, because a specific crossroad features a higher
accident rate, *people need to change their driving behavior*.
*No!* People won't change and it's not necessary either. The
On Wed, Jan 27, 2016 at 8:39 PM, Victor Stinner
wrote:
> 2016-01-23 7:03 GMT+01:00 Chris Angelico :
>> Running just that test file:
>>
>> $ ./python Lib/test/test_socket.py
>> ... chomp lots of lines ...
>> testRecvmsgPeek (__main__.RecvmsgUDP6Test) ...