Hi, I have been trying to use PyPy to speed up the Krakatau decompiler (
https://github.com/Storyyeller/Krakatau). It is a large, pure-Python
application with several compute-intensive parts, so I thought it would
be a good fit. Unfortunately, there is no clear speedup, and PyPy also
requires several times as much memory, making it unusable for larger inputs.

For example, when decompiling a quarter of the ASM library, I got the
following results (execution time, peak memory usage):

cpython 64 -  62.5s, 102.6 MB
cpython 32 -  69.2s,  54.5 MB
pypy 2.1.0 - 106.5s, 277.8 MB
pypy 2.2.1 - 109.2s, 194.6 MB

Sometimes 2.2.1 is faster than 2.1.0, but both are clearly much worse than
CPython on both time and memory.

These tests were performed on 64-bit Windows 7 using the prebuilt 32-bit
binaries of PyPy. I also tested the 32-bit version of CPython, to see if the
problem was a lack of 64-bit support. However, 32-bit CPython also vastly
outperformed PyPy.

Execution time was measured with time.time(). Memory usage was measured by
watching the Windows Resource Monitor and recording the peak Private Bytes
value. Similar patterns were seen in the Working Set and other columns.
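
For reference, the timing harness was roughly the following sketch
(decompile_class is a hypothetical stand-in for the actual Krakatau entry
point, which takes different arguments):

    import time

    def timed(func, *args):
        # Wall-clock timing with time.time(), as used for the numbers above.
        start = time.time()
        result = func(*args)
        print 'elapsed: %.1fs' % (time.time() - start)
        return result

    # Example usage (hypothetical entry point):
    # timed(decompile_class, 'asm/ClassReader.class')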

I thought the increased memory usage, at least, might be explained by a
constant overhead from JIT-compiled code, or by the process not running long
enough to trigger a full garbage collection. However, PyPy continues to use
several times as much memory on much larger inputs.
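
To rule out the garbage-collection theory, one simple check (a sketch;
gc.collect() forces a full collection on PyPy as well as CPython) is to
collect explicitly at the end of the run before reading the peak:

    import gc

    # ... run the decompiler ...
    gc.collect()  # force a full collection before sampling memory
    raw_input('check Resource Monitor now')  # Python 2, which Krakatau uses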

Does anyone know what could be going on here? PyPy isn't normally slower
than CPython. Is there a way for me to diagnose the problem?
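
One thing I can try is profiling under both interpreters to compare
hotspots; a minimal sketch (main is a hypothetical stand-in for Krakatau's
actual entry point):

    import cProfile, pstats

    # Profile the run and list the top cumulative-time functions.
    # Note: on PyPy, enabling the profiler inhibits some JIT
    # optimizations, so treat the numbers as a rough guide only.
    cProfile.run('main()', 'krakatau.prof')
    pstats.Stats('krakatau.prof').sort_stats('cumulative').print_stats(20)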