I made enough of a patch to at least get a preliminary benchmark. The program toward the bottom of this email runs over 100 times faster with my patch. The patch still has a ways to go--I use a very primitive scheme to reclaim orphan pointers (1000 at a time) and I am still segfaulting when removing the last element of the list. But the initial results at least confirm that the intended benefit is achievable.
I've attached the diff, in case anyone wants to try it out or help me figure
out what else needs to change.
The core piece of the patch is this--everything else is memory management
related.
+	if (ilow == 0) {
+		a->orphans += 1;
+		a->ob_item += (-1 * d);
+	}
+	else {
+		memmove(&item[ihigh+d], &item[ihigh],
+			(Py_SIZE(a) - ihigh)*sizeof(PyObject *));
+	}
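To make the idea concrete, here is a rough pure-Python model of what the patch does (the class, method names, and the 1000-slot threshold are illustrative only, not the C implementation): deleting the front element just advances a logical head pointer, and the underlying storage is compacted in one bulk move only after enough orphaned slots accumulate, so the O(n) memmove cost is amortized away.

```python
class FrontPopList:
    """Illustrative model of the orphan-pointer scheme (not the C patch)."""

    ORPHAN_LIMIT = 1000  # mirrors the patch's primitive reclaim threshold

    def __init__(self, items):
        self._items = list(items)
        self._head = 0  # count of orphaned slots at the front

    def pop_front(self):
        if self._head >= len(self._items):
            raise IndexError('pop from empty list')
        value = self._items[self._head]
        self._items[self._head] = None  # drop the reference (DECREF analogue)
        self._head += 1
        if self._head >= self.ORPHAN_LIMIT:
            # reclaim all orphan slots in a single bulk move
            del self._items[:self._head]
            self._head = 0
        return value

    def __len__(self):
        return len(self._items) - self._head
```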
import time
n = 80000
lst = []
for i in range(n):
lst.append(i)
t = time.time()
for i in range(n-1):
del lst[0]
print('time = ' + str(time.time() - t))
print(len(lst))
print('got here at least')
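For comparison, the same access pattern is already O(1) per removal with collections.deque, which is the usual workaround today; a deque variant of the benchmark (timings are of course machine-dependent) would look like:

```python
import collections
import time

n = 80000
dq = collections.deque(range(n))

t = time.time()
for i in range(n - 1):
    dq.popleft()  # O(1) removal from the front
print('deque time = ' + str(time.time() - t))
print(len(dq))
```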
show...@showell-laptop:~/PYTHON/py3k$ cat BEFORE
0
2.52699589729
show...@showell-laptop:~/PYTHON/py3k$ cat AFTER
time = 0.0216660499573
1
got here at least
Python 3.2a0 (py3k:77751M, Jan 25 2010, 20:25:21)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 2.526996 / 0.021666
116.63417335918028
>>>
DIFF
_______________________________________________
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
