While doing a little testing of my dbm.sqlite module (it's pretty damn slow
at the moment) I came across this chestnut.  Given this shell for loop:

    for n in 10 100 1000 10000 ; do
        rm -f /tmp/trash.db*
        python3.0 -m timeit -s 'import dbm.ndbm as db' \
            -s 'f = db.open("/tmp/trash.db", "c")' \
            'for i in range('$n'): f[str(i)] = str(i)'
    done

I get this output:

    100000 loops, best of 3: 16 usec per loop
    1000 loops, best of 3: 185 usec per loop
    100 loops, best of 3: 5.04 msec per loop
    10 loops, best of 3: 207 msec per loop

Replacing dbm.ndbm with dbm.sqlite shows growth that is much closer to linear
(I only went up to n=1000 because it was so slow):

    10 loops, best of 3: 44.9 msec per loop
    10 loops, best of 3: 460 msec per loop
    10 loops, best of 3: 5.26 sec per loop
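Dividing each total by its n makes the contrast plain: ndbm's per-insert cost
climbs from about 1.6 usec to about 21 usec as the database grows, while
dbm.sqlite's stays essentially flat at roughly 4.5-5.3 msec.  The trivial
arithmetic on the numbers above:

    # Per-insert cost (total time / n) from the timeit output above.
    ndbm = [(10, 16e-6), (100, 185e-6), (1000, 5.04e-3), (10000, 207e-3)]
    sqlite = [(10, 44.9e-3), (100, 460e-3), (1000, 5.26)]
    for name, timings in [("ndbm", ndbm), ("sqlite", sqlite)]:
        for n, total in timings:
            print("%-6s n=%5d: %10.2f usec per insert"
                  % (name, n, total / n * 1e6))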

My guess is that there is something nonlinear in the ndbm code, probably in
the underlying library, but it may be worth a quick check of the wrapper as
well (one way to tell them apart is sketched below).
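The wrapper does the same amount of work per insert regardless of file size,
so if the per-insert cost tracks database size, the underlying library is the
likely culprit.  A quick check is to time the inserts in fixed-size chunks
and watch whether the per-chunk cost climbs.  Untested sketch; the chunk size
and file name are arbitrary:

    import time
    import dbm.ndbm as db

    f = db.open("/tmp/trash.db", "c")
    chunk = 10000
    for start in range(0, 100000, chunk):
        t0 = time.time()
        for i in range(start, start + chunk):
            f[str(i)] = str(i)
        # If the per-chunk time climbs as the file grows, the slowdown
        # tracks database size rather than anything timeit does.
        print("keys %6d-%6d: %.3f sec"
              % (start, start + chunk - 1, time.time() - t0))
    f.close()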

Platform is Mac OS X 10.5.4 on a MacBook Pro.

Now to dig into the abysmal sqlite performance.
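For what it's worth, ~5 msec per insert is right in the neighborhood of a
single synchronous commit (SQLite fsyncs on every commit by default), so a
plausible first suspect is one commit per __setitem__.  A raw sqlite3
comparison, independent of the dbm wrapper, shows the gap; the table name and
path here are made up:

    import os
    import sqlite3
    import time

    def bench(n, one_txn):
        # Fresh database each run.
        try:
            os.remove("/tmp/trash-raw.db")
        except OSError:
            pass
        con = sqlite3.connect("/tmp/trash-raw.db")
        con.execute("create table d (k text primary key, v text)")
        con.commit()
        t0 = time.time()
        if one_txn:
            with con:  # a single commit around all the inserts
                for i in range(n):
                    con.execute("insert into d values (?, ?)",
                                (str(i), str(i)))
        else:
            con.isolation_level = None  # autocommit: one commit per insert
            for i in range(n):
                con.execute("insert into d values (?, ?)", (str(i), str(i)))
        elapsed = time.time() - t0
        con.close()
        return elapsed

    for one_txn in (False, True):
        print("one_txn=%s: %.3f sec for 1000 inserts"
              % (one_txn, bench(1000, one_txn)))

On a typical laptop disk the autocommit run is orders of magnitude slower,
which would line up with the numbers above.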

Skip
