Serhiy Storchaka <[email protected]> added the comment:
This speeds up pickling large bytes objects.
$ ./python -m timeit -s 'import pickle; a = [bytes([i%256])*1000000 for i in range(256)]' 'with open("/dev/null", "wb") as f: pickle._dump(a, f)'
Unpatched: 10 loops, best of 5: 20.7 msec per loop
Patched: 200 loops, best of 5: 1.12 msec per loop
But it slows down pickling short bytes objects of 256 bytes and longer (by up to 40%).
$ ./python -m timeit -s 'import pickle; a = [bytes([i%256])*1000 for i in range(25600)]' 'with open("/dev/null", "wb") as f: pickle._dump(a, f)'
Unpatched: 5 loops, best of 5: 77.8 msec per loop
Patched: 2 loops, best of 5: 98.5 msec per loop
$ ./python -m timeit -s 'import pickle; a = [bytes([i%256])*256 for i in range(100000)]' 'with open("/dev/null", "wb") as f: pickle._dump(a, f)'
Unpatched: 1 loop, best of 5: 278 msec per loop
Patched: 1 loop, best of 5: 382 msec per loop
Compare with:
$ ./python -m timeit -s 'import pickle; a = [bytes([i%256])*255 for i in range(100000)]' 'with open("/dev/null", "wb") as f: pickle._dump(a, f)'
Unpatched: 1 loop, best of 5: 277 msec per loop
Patched: 1 loop, best of 5: 273 msec per loop
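
The 255/256 boundary lines up with the opcode switch in the pure-Python pickler: payloads of at most 255 bytes are written with SHORT_BINBYTES and never touch the new write path, while anything longer uses BINBYTES. A rough, standalone sketch of that dispatch (simplified from save_bytes in Lib/pickle.py; the helper name here is mine, not from the patch):

from struct import pack

# Standalone sketch (not the actual patch) of how the pure-Python
# pickler picks an opcode for a bytes object, cf. save_bytes in
# Lib/pickle.py.
SHORT_BINBYTES = b'C'   # 1-byte length prefix, payloads of at most 255 bytes
BINBYTES = b'B'         # 4-byte length prefix, payloads of 256 bytes and more

def bytes_header(obj):
    n = len(obj)
    if n <= 0xff:
        # 255 bytes and shorter never reach the patched write path,
        # which is why the 255-byte benchmark is unaffected.
        return SHORT_BINBYTES + pack("<B", n)
    # 256 bytes and longer use BINBYTES and, with the patch, apparently
    # go through the new helper, which adds per-object call overhead.
    return BINBYTES + pack("<I", n)

assert bytes_header(b'x' * 255)[:1] == SHORT_BINBYTES
assert bytes_header(b'x' * 256)[:1] == BINBYTES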
I think the code should be optimized to reduce the overhead of _write_many().
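
One way to do that would be to take the direct-write path only when the payload is large enough to amortize the extra calls, for example at least the framer's frame size target, so mid-sized objects keep going through the plain buffered write. A toy sketch of that idea (the class, method names and the 64 KiB threshold are my assumptions, not the patch):

import io
from struct import pack

FRAME_SIZE_TARGET = 64 * 1024   # assumed threshold, not taken from the patch
BINBYTES = b'B'

class FramerSketch:
    # Toy stand-in for pickle's framer: small writes go to an in-memory
    # buffer, large payloads are streamed to the file directly.
    def __init__(self, file_write):
        self.file_write = file_write
        self.buffer = io.BytesIO()

    def write(self, data):
        return self.buffer.write(data)

    def flush(self):
        data = self.buffer.getvalue()
        if data:
            self.file_write(data)
        self.buffer = io.BytesIO()

    def write_large_bytes(self, header, payload):
        # Flush the buffered data, then write the payload without
        # copying it into the buffer first.
        self.write(header)
        self.flush()
        self.file_write(payload)

def save_bytes_sketch(framer, obj):
    header = BINBYTES + pack("<I", len(obj))
    if len(obj) >= FRAME_SIZE_TARGET:
        # Big payloads: bypass the buffer (the 20x win above).
        framer.write_large_bytes(header, obj)
    else:
        # Short-to-mid payloads: stay on the cheap buffered path and
        # avoid the extra per-object calls behind the regression.
        framer.write(header + obj)

with open("/dev/null", "wb") as f:
    framer = FramerSketch(f.write)
    save_bytes_sketch(framer, b'x' * 256)         # buffered, same cost as before
    save_bytes_sketch(framer, b'x' * 10_000_000)  # streamed directly
    framer.flush()

Pickled this way, a 256-byte object still costs a single buffered write, as before the patch, while a multi-megabyte object still skips the buffer copy.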