New submission from Dimitar Tasev:

Hello, I have noticed a significant performance regression when allocating a 
large shared array in Python 3.x versus Python 2.7. The affected module 
appears to be `multiprocessing.sharedctypes`.

The snippet I used for benchmarking:

from timeit import timeit
timeit('sharedctypes.Array(ctypes.c_float, 500*2048*2048)',
       'from multiprocessing import sharedctypes; import ctypes', number=1)

And the results from executing it:

Python 3.5.2
Out[2]: 182.68500420999771

-------------------

Python 2.7.12
Out[6]: 2.124835968017578

I will try to provide any information you need. Right now I am looking at 
callgrind/cachegrind output from a build without debug symbols, and can post 
that; in the meantime I am building Python with debug symbols and will re-run 
callgrind/cachegrind.

Allocating an array of the same size with numpy doesn't show a difference 
between Python versions; the command used was 
`numpy.full((500, 2048, 2048), 5.0)`. Allocating a plain list with the same 
number of elements doesn't show a difference either: 
`arr = [5.0]*(500*2048*2048)`.
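
For completeness, those two comparison timings can be reproduced with timeit 
as well (a minimal sketch, assuming numpy is installed; note that each 
allocation is on the order of 16 GB):

from timeit import timeit

# Same-size numpy array (float64 by default, ~16 GB).
print(timeit('numpy.full((500, 2048, 2048), 5.0)', 'import numpy', number=1))

# Plain list with the same number of elements (~16 GB of object pointers on 64-bit).
print(timeit('arr = [5.0]*(500*2048*2048)', number=1))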

----------
files: shared_array_alloc.py
messages: 298285
nosy: dtasev
priority: normal
severity: normal
status: open
title: Shared Array Memory Allocation Regression
type: performance
versions: Python 2.7, Python 3.5, Python 3.6
Added file: http://bugs.python.org/file47009/shared_array_alloc.py

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30919>
_______________________________________