Hi,
I have a storage engine that stores a lot of files (e.g. 10,000) in
one path. Running the code under cProfile, I found that of a total
CPU time of 1,118 seconds, 121 seconds are spent in 27,013 calls to
open(). The number of calls is not the problem; however, I find it
*very* discomforting that roughly ten percent of the runtime goes
into just opening files.
Hi

What OS and file system are you using? Many file systems (e.g.
ext2/3) handle large directories very poorly.
A quick way to check whether this has anything to do with Python is to
write a small C program that opens the same files, and time it.
Eugene
On Sat, Mar 12, 2011 at 10:13 AM, Lukas Lueg wrote:
On 12.03.2011 16:13, Lukas Lueg wrote:
> I have a storage engine that stores a lot of files (e.g. 10,000) in
> one path. Running the code under cProfile, I found that of a total
> CPU time of 1,118 seconds, 121 seconds are spent in 27,013 calls to
> open(). The number of calls is not the problem;