Hi,

What OS and which file system are you using? Many file systems (e.g. ext2/ext3) handle large directories very poorly. A quick way to check whether this has anything to do with Python is to write a small C program that opens these files and times it.
Eugene

On Sat, Mar 12, 2011 at 10:13 AM, Lukas Lueg <lukas.l...@googlemail.com> wrote:
> Hi,
>
> i've a storage engine that stores a lot of files (e.g. > 10.000) in
> one path. Running the code under cProfile, I found that with a total
> CPU-time of 1,118 seconds, 121 seconds are spent in 27.013 calls to
> open(). The number of calls is not the problem; however I find it
> *very* discomforting that Python spends about 2 minutes out of 18
> minutes of cpu time just to get a file-handle after which it can spend
> some other time to read from them.
>
> May this be a problem with the way Python 2.7 gets filehandles from
> the OS or is it a problem with large directories itself?
>
> Best regards
> Lukas
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/eltoder%40gmail.com