On Jul 11, 2008, at 4:59 PM, Bob Friesenhahn wrote:

>>
>> Has anyone tested a ZFS file system with at least 100 million+ files?
>> What were the performance characteristics?
>
> I think that there are more issues with file fragmentation over a long
> period of time than the sheer number of files.

actually it's a similar problem .. with a maximum blocksize of 128KB and
the COW nature of the filesystem you get indirect block pointers pretty
quickly on a large ZFS filesystem as the size of your tree grows .. in
this case a large, constantly modified file (e.g. /u01/data/*.dbf) is
going to behave over time like a lot of random access to files spread
across the filesystem .. the only real difference is that you won't walk
it every time someone does a getdirent() or an lstat64()
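
to put rough numbers on that block pointer tree, here's a back-of-envelope
sketch .. the 128KB recordsize is from above, but the 128-byte block
pointer and 128KB indirect block size are my assumptions (both are
version- and tuning-dependent), so treat the output as illustrative only:

# back-of-envelope sketch of a zfs block pointer tree
# assumptions (mine, not from the thread): 128 KB recordsize (the max
# blocksize mentioned above), 128-byte block pointers (blkptr_t), and
# 128 KB indirect blocks -- indirect block size varies by version and
# tuning, so the level counts are illustrative only
import math

RECORDSIZE = 128 * 1024          # data block size
BLKPTR = 128                     # bytes per block pointer
INDIRECT = 128 * 1024            # assumed indirect block size
FANOUT = INDIRECT // BLKPTR      # pointers per indirect block (1024 here)

def levels(file_bytes):
    """rough number of indirect levels needed to map one file"""
    blocks = max(1, math.ceil(file_bytes / RECORDSIZE))
    n = 0
    while blocks > 1:            # walk up until one pointer covers the tree
        blocks = math.ceil(blocks / FANOUT)
        n += 1
    return n

for gb in (1, 10, 100):
    size = gb * 1024**3
    print(f"{gb:>4} GB file -> {size // RECORDSIZE:>7} data blocks, "
          f"{levels(size)} indirect level(s)")

a 1 GB dbf already works out to 8192 data blocks behind a couple of
indirect levels, and every COW rewrite of a data block also rewrites the
indirect blocks above it, which is where the scattering comes from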

so ultimately the question could be framed as: what's the maximum
manageable tree size you can get to with ZFS, keeping in mind that
there's no real re-layout tool (by design) .. the number i'm working with
until i hear otherwise is probably about 20M files, but only in a
relative sense - it *really* does depend on how balanced your tree is and
what your churn rate is .. we know on QFS we can go up to 100M, but there
i trust the tree layout a little better, can separate the metadata out if
i need to (and have planned on it), and know we've got tools to re-layout
the metadata or dump/restore for a tape-backed archive
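
for what it's worth, before you commit tens of millions of files to a
single dataset, something like the throwaway sketch below can give you a
feel for how balanced the namespace actually is .. nothing in it is
ZFS-specific, and the root path is just an example:

# throwaway sketch: walk a tree and report depth / fan-out, to get a
# feel for how balanced the namespace is before scaling it up
# (nothing here is zfs-specific; the default root path is just an example)
import os, sys
from collections import Counter

root = sys.argv[1] if len(sys.argv) > 1 else "/u01/data"   # example path
base_depth = root.rstrip("/").count("/")
files = dirs = 0
depths = Counter()
fanout = Counter()

for path, subdirs, names in os.walk(root):
    d = path.rstrip("/").count("/") - base_depth
    depths[d] += 1
    fanout[len(subdirs) + len(names)] += 1
    dirs += 1
    files += len(names)

print(f"{files} files in {dirs} directories under {root}")
print("max depth:", max(depths) if depths else 0)
print("largest single directory:", max(fanout) if fanout else 0, "entries")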

jonathan

(oh and btw - i believe this question is a query for field data ..  
architect != crash test dummy .. but some days it does feel like it)
