New submission from lesshaste <[email protected]>:

paste <(seq 20000000) <(seq 2 20000001)  > largefile.txt

Then run the attached read.py. It takes about 1 minute on my system and simply builds one large dict from the file.
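
The read.py attachment is not reproduced in this message; as a rough sketch, a script doing what is described above (building one dict from the two tab-separated integer columns of largefile.txt) could look something like the following, assuming the first column is used as the key and the second as the value:

    # Hypothetical sketch only -- the actual read.py attachment is not shown.
    # Assumes largefile.txt contains two tab-separated integer columns, as
    # produced by the paste command above.
    d = {}
    with open("largefile.txt") as f:
        for line in f:
            a, b = line.split()
            d[int(a)] = int(b)
    print(len(d))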

However, the attached C code takes less than 10 seconds (the code was taken from public 
forums on the web).

Is there potential for PyPy to be competitive with the C code?

----------
files: test.c
messages: 6047
nosy: lesshaste, pypy-issue
priority: performance bug
release: 2.0
status: unread
title: Potential speedup reading large file into a dict

________________________________________
PyPy bug tracker <[email protected]>
<https://bugs.pypy.org/issue1579>
________________________________________