Dan Nelson wrote:
> In the last episode (Mar 10), Bill Moran said:

> > Question for the gurus or anyone who has done any tests on filesystem
> > performance.
> >
> > Where is the point at which a directory has too many files in it?
> > Mainly with regard to performance degradation?

> It Depends


> http://www.cnri.dit.ie/Downloads/fsopt.pdf has a nice rundown of the
> performance benefits of softupdates, dirhash, dirpref, and vmiodir for
> different benchmarks (including tests on directories with up to 20000
> files).  vmiodir and dirpref are now on by default, but the results for
> softupdates and dirhash are still useful.

Thanks for the input ... that was definitely an interesting article.
I would have expected dirhash to do more, but ...

Anyway, it still wasn't exactly what I was looking for.  It occurred to
me that I should just describe what I'm doing ...
I'm writing a web interface to file sharing.  It's backed by a metadata
database and a filesystem-based file store.  I've decided (based on my
experience with software such as squid, and with the huge directories that
result from unpacking the php documentation) to split the file store up
between directories so that the directory listings never get too big.  The
question I need to answer is: how many files can a directory hold before my
application should decide that no more files go into it?  The database will
determine where and which file to retrieve, so I don't need to worry about
the listing getting too long to manage with command-line tools, but I want
to make sure that finding a specific file in that directory doesn't get too
time-consuming.
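
To make that concrete, here's a rough sketch of the kind of layout I have
in mind, along the lines of squid's hashed cache directories.  It's Python
purely for illustration; the store root, the two-level md5 bucketing, and
the function names are placeholders I made up, nothing final:

import hashlib
import os

STORE_ROOT = "/var/filestore"   # placeholder root for the file store

def bucket_path(file_id):
    """Map a database-assigned file id to a two-level bucket directory
    (256 x 256 buckets), so no single directory grows without bound."""
    digest = hashlib.md5(file_id.encode()).hexdigest()
    return os.path.join(STORE_ROOT, digest[:2], digest[2:4])

def store_file(file_id, data):
    """Write the file into its bucket; the database records the path."""
    directory = bucket_path(file_id)
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, file_id)
    with open(path, "wb") as f:
        f.write(data)
    return path

With 256 x 256 buckets, even a few million files works out to under a
hundred entries per directory, which ought to stay well clear of wherever
the real limit turns out to be.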

--
Bill Moran
Potential Technologies
http://www.potentialtech.com

