Min Yuan <[EMAIL PROTECTED]> wrote:
> We have a directory on Redhat 6.2 with 500,000 files. In our code we open and read
> the directory and for each entry in the directory we use lstat() to check for some
> information. The whole scan takes more than eight hours, which is terribly long.
> Is there any way we could reduce this length of time? If the answer is NO, is
> there any official documentation about it and where can we find it?
Use more directories! That is, create subdirectories to group files.
For example, if your files are called "fileXXXXXX" with XXXXXX being
the numbers from 1 to 500,000, then you could create 500 directories
and put "file123456" into directory "123". Get the idea?!
BTW, the reason this is so slow is that ext2 stores directory entries
as an unindexed linear list, so every name lookup scans the directory
from the beginning. With an lstat() for each of 500,000 entries, the
total work grows quadratically with directory size.
Cheers
Thilo
--
Thilo Mezger <[EMAIL PROTECTED]>
innominate AG <URL:http://www.innominate.com>
_______________________________________________
Redhat-devel-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-devel-list