On 2007-Jun-27 14:11:19 +0400, Nguyen Tam Chinh <[EMAIL PROTECTED]> wrote:
> We're going to build a server with some 1TB of data in over 500
> million small files, with sizes from 0.5k to 4k.  I'm wondering
> whether UFS2 can handle this kind of filesystem well.

Short answer: No.

Longer answer: FreeBSD and UFS2 have been tweaked to support large
numbers of files in large filesystems, and there are no hard limits
that you will exceed by having 500,000,000 files in a >1TB FS.

However, you will not be able to fsck the FS on an i386 system: fsck
keeps per-inode state in memory, and 500 million inodes will not fit
in a 32-bit address space.  Even on amd64 or sparc64 you will need a
lot of RAM+swap, and fsck will take a _long_ time (hours) to run.
Depending on how the files are organised, you may also run into
severe performance problems with directory lookups.
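
To put a rough number on the fsck memory problem, here's a
back-of-the-envelope sketch in Python (the bytes-of-state-per-inode
figure is an assumption for illustration, not a measured value):

    # Rough estimate of fsck's in-core state for this filesystem.
    # ASSUMPTION: fsck tracks on the order of tens of bytes per
    # inode; 40 bytes is a guess, not a measured figure.
    NUM_FILES = 500_000_000
    STATE_PER_INODE = 40                    # bytes (assumed)

    mem = NUM_FILES * STATE_PER_INODE
    print(f"~{mem / 2**30:.1f} GiB of fsck state")    # ~18.6 GiB

Even if the real per-inode figure is a fraction of that guess, the
total is well beyond what a 32-bit process can map.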

> From newfs(8) the minimum block size is 4k.  This is not optimal in
> our case; a 1k or 0.5k block would be more effective IMHO.  I'd be
> happy if anyone can explain what the fragment size (block/8) in
> UFS2 means and how this parameter works.

I suggest you read /usr/share/doc/smm/05.fastfs/paper.ascii.gz.
Whilst this paper discusses UFS1, the basics remain the same: the
block is the normal allocation unit, but the last partial block of a
file can be stored in fragments (each between 1/8th of a block and a
full block), so small files don't each waste a whole block.
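
To make the fragment mechanics concrete, here is a small sketch in
Python (the on_disk() helper is mine, modelling just the allocation
rule described in the paper: whole blocks, plus the tail rounded up
to whole fragments):

    import math

    def on_disk(size, block, frag):
        # Whole blocks first, then the tail rounded up to fragments.
        full_blocks = size // block
        tail = size - full_blocks * block
        return full_blocks * block + math.ceil(tail / frag) * frag

    # A 700-byte file under a few block/fragment layouts:
    for block, frag in [(4096, 512), (8192, 1024), (16384, 2048)]:
        used = on_disk(700, block, frag)
        print(f"{block/1024:g}K/{frag/1024:g}K: {used} bytes")

So with 8K/1K, each of your 0.5k-4k files occupies between one and
four 1K fragments rather than a full 8K block.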

I have tried using a 4K/0.5K UFS1 filesystem in the past and found
the performance was very poor.  UFS2 was tuned around its default of
16K/2K and I would expect it to perform even worse at 4K/0.5K.  I
would suggest you try 8K/1K (newfs -b 8192 -f 1024).

BTW, in sizing your system, you will need to allow both for the
space lost when file sizes are rounded up to a multiple of the
fragment size, and for the inode size (256 bytes per file).  If you
have 1TB of data, it's likely that you will have another 0.5-1TB of
overhead.
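
A quick sizing sketch (assuming an even spread of file sizes across
0.5k-4k and the 8K/1K layout above; both are assumptions, so treat
the numbers as indicative only):

    NUM_FILES  = 500_000_000
    INODE_SIZE = 256            # bytes; UFS2 on-disk inode
    FRAG       = 1024           # 8K/1K layout
    AVG_FILE   = 2304           # bytes; midpoint of 0.5k..4k

    data   = NUM_FILES * AVG_FILE       # ~1.15 TB of file data
    slack  = NUM_FILES * FRAG // 2      # avg half a fragment wasted
    inodes = NUM_FILES * INODE_SIZE
    print(f"data {data/1e12:.2f} TB, "
          f"overhead {(slack + inodes)/1e12:.2f} TB")

Directories and indirect blocks add more on top of that.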

Overall, I suggest you look at an alternative way to store the data,
e.g. packing the small files into larger container files or using a
database.

-- 
Peter Jeremy
