On Fri, Feb 13, 2009 at 2:09 PM, Ludwig Isaac Lim <[email protected]> wrote:
>
> Hi:
>
>     We're trying to implement a large file server. Since there are so many
> files, I'm thinking of splitting the files across multiple subdirectories.
> What do you think is a good number of files per directory? Many thanks.

As many as you want, as long as you know how to fine-tune your filesystem...

First, you have to look at how the Linux VFS architecture works.
There are three important caches in the VFS:

1. inode cache
2. directory (dentry) cache
3. buffer cache

These caches keep recently used metadata in memory and use hash-based
lookups, so the kernel can resolve paths without doing disk I/O every
time. I'll let you do the research on the VFS; just take note that the
dentry cache only stores short names inline (roughly 15 characters), so
a path component like /dir/less/15chr should stay at 15 characters or
less to be cached cheaply. As always, caching needs a lot of physical
memory in your system to actually improve things.
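
If you want to see those caches at work while you test, the kernel
exports a few counters under /proc/sys/fs. Here is a minimal sketch in
Python; I'm assuming the usual field order from the kernel's
Documentation/sysctl/fs.txt (nr_dentry and nr_unused first), so
double-check on your own kernel:

    # cache_state.py - rough sketch: dump VFS dentry/inode cache counters
    def read_counts(path):
        with open(path) as f:
            return [int(x) for x in f.read().split()]

    dentries = read_counts("/proc/sys/fs/dentry-state")  # nr_dentry, nr_unused, ...
    inodes = read_counts("/proc/sys/fs/inode-nr")        # nr_inodes, nr_free_inodes
    print("dentry cache: %d entries, %d unused" % (dentries[0], dentries[1]))
    print("inode cache: %d inodes, %d free" % (inodes[0], inodes[1]))

Watch those numbers grow while you stat a big directory tree and you
will see why the physical memory matters.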

Second, the number of files you can store in your filesystem depends
on the number of inodes that were allocated when the filesystem was
created; every file corresponds to one inode.
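
A quick way to check how many inodes you have left is statvfs (the
same numbers "df -i" shows). A minimal sketch in Python; the path
argument is just whatever mount point your file server will use:

    # check_inodes.py - rough sketch: total/used/free inodes for a mount point
    import os, sys

    path = sys.argv[1] if len(sys.argv) > 1 else "/"
    st = os.statvfs(path)
    used = st.f_files - st.f_ffree
    print("%s: %d inodes total, %d used, %d free"
          % (path, st.f_files, used, st.f_ffree))

If the count looks too small for the number of files you expect,
recreate the filesystem with more inodes (mke2fs has a -N option for
that) before you load the data.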

And third, take advantage of the "dir_index" feature of the ext2/ext3
filesystem so that it uses hashed b-tree lookups to speed up file
access within a single directory...

http://lwn.net/Articles/11481/
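
Even with dir_index turned on, your plan of splitting files across
subdirectories is still a good idea, since it keeps every directory
small. A minimal sketch of one way to do it in Python; the two-level
layout, the 256-way fan-out per level, and the /srv/files root are
just assumptions for illustration:

    # shard_path.py - rough sketch: map a filename to a hashed bucket directory
    import hashlib
    import os

    STORE_ROOT = "/srv/files"  # assumed storage root

    def shard_path(name):
        h = hashlib.md5(name.encode("utf-8")).hexdigest()
        # first two hex digits pick 1 of 256 top-level dirs, the next two
        # pick 1 of 256 below that; bucket names stay far under 15 characters
        return os.path.join(STORE_ROOT, h[:2], h[2:4], name)

    print(shard_path("invoice-2009-02-13.pdf"))
    # prints something like /srv/files/xx/yy/invoice-2009-02-13.pdf,
    # where xx/yy come from the hash

With a million files that works out to roughly 15 files per directory
on average, and the bucket names never blow past the dentry inline
limit mentioned above.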

fooler.
_________________________________________________
Philippine Linux Users' Group (PLUG) Mailing List
http://lists.linux.org.ph/mailman/listinfo/plug
Searchable Archives: http://archives.free.net.ph
