IMHO putting tons of files in the filesystem is a Bad Idea (TM)

You will eventually run into scaling limitations (see large Squid
servers for example). Even if you keep the # of files per directory
down, eventually the whole FS will bog down.
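If you do go the subdirectory route, the usual trick (used by Squid, Git, and similar tools) is to shard by a hash of the filename so entries spread evenly. A minimal sketch in Python — the function name and layout (two levels of two hex characters) are my own illustration, not anything standard:

```python
import hashlib
import os

def shard_path(root, filename, levels=2, width=2):
    """Map a filename to a nested subdirectory based on its hash,
    so no single directory accumulates too many entries."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, filename)

# e.g. shard_path("/srv/files", "report.pdf")
# -> "/srv/files/ab/cd/report.pdf", where ab/cd come from the hash
```

Two levels of 256 buckets each gives 65,536 directories, which keeps per-directory counts small even with tens of millions of files — but as above, the whole FS can still bog down.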

Maybe a modern FS like Sun's ZFS will fare better... ZFS certainly has a
lot going for it. For that reason alone, I'd use Solaris x86 or
OpenSolaris, with ZFS.

Another option is to use something like Oracle SecureFiles and
dispense with a filesystem altogether. That's a far more scalable
solution (and is actually faster than a plain filesystem) but at a
$$$ cost.
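To illustrate the "no filesystem" idea without the Oracle price tag: here is a toy sketch using SQLite BLOBs as a stand-in. This is *not* SecureFiles or its API — just the same concept (files as rows in a database, named lookups instead of paths):

```python
import sqlite3

# In-memory DB for illustration; a real deployment would use a file
# (or an actual database server) rather than ":memory:".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")

def put_file(name, data):
    """Store file contents as a BLOB, replacing any existing entry."""
    conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?)", (name, data))
    conn.commit()

def get_file(name):
    """Fetch file contents by name, or None if absent."""
    row = conn.execute(
        "SELECT data FROM files WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None
```

The database handles indexing, so there's no directory-size problem at all; the trade-off is that every read and write goes through the DB engine.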


> On Thu, Feb 12, 2009 at 10:09 PM, Ludwig Isaac Lim <[email protected]>
> wrote:
>>
>> Hi:
>>
>>     We're trying to implement a large file server. Since there are so
>> many files, I'm thinking of splitting the files across multiple
>> subdirectories. What do you think is a good number of files per
>> directory? Many thanks.


-- 
Orlando Andico
+63.2.976.8659 | +63.920.903.0335
_________________________________________________
Philippine Linux Users' Group (PLUG) Mailing List
http://lists.linux.org.ph/mailman/listinfo/plug
Searchable Archives: http://archives.free.net.ph