On Fri, 11 Jul 2008, Sean Cochrane - Storage Architect wrote:

> I need to find out what the largest ZFS file system is - in number of files, 
> NOT CAPACITY - that has been tested.

In response to an earlier such question (from you?) I created a 
directory containing a million files.  I then forgot about it, so the 
million files have been sitting there for a month now without 
impacting anything.
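
Something along these lines is all it takes (a minimal sketch; the 
/tank/test path is just an example, not the pool I actually used):

    #!/bin/ksh
    # Sketch: create a million empty files in one directory on a
    # ZFS filesystem.  Each 'touch' is a separate process, so even
    # this simple loop takes a noticeable amount of time.
    mkdir -p /tank/test/millionfiles
    cd /tank/test/millionfiles || exit 1
    i=0
    while [ $i -lt 1000000 ]; do
        touch "file$i"
        i=$((i + 1))
    done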

The same simple script (with a small enhancement) could be used to 
create a million directories, each containing a million files, but it 
might take a while to complete.  It seems that a Storage Architect 
should be willing to test this for himself and see what happens.
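
The enhancement is just an outer loop; a sketch, again with a 
placeholder path (note that this attempts 10^12 file creations, so 
"a while" is an understatement):

    #!/bin/ksh
    # Sketch: a million directories, each filled with a million
    # files.  That is a trillion creations in total; expect it to
    # run for a very long time.
    cd /tank/test || exit 1
    d=0
    while [ $d -lt 1000000 ]; do
        mkdir "dir$d"
        f=0
        while [ $f -lt 1000000 ]; do
            touch "dir$d/file$f"
            f=$((f + 1))
        done
        d=$((d + 1))
    done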

> Looking to scale to billions of files and would like to know if anyone has 
> tested anything close and what the performance ramifications are.

There are definitely issues with programs like 'ls' when listing a 
directory with a million files, since 'ls' sorts its output by 
default.  My Windows system didn't like it at all when I accessed the 
directory over CIFS with the file browser, since the browser wants to 
obtain all file information before doing anything else.  Backing up a 
system with hundreds of millions of files sounds like fun.
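
One workaround is 'ls -f', which reads the directory without sorting, 
so the names stream out immediately instead of after a million-entry 
sort (path is again just an example):

    # Count entries without sorting; -f lists names in directory
    # order and also includes '.' and '..' in the output.
    ls -f /tank/test/millionfiles | wc -l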

> Has anyone tested a ZFS file system with at least 100 million + files?
> What were the performance characteristics?

I think that file fragmentation over a long period of time causes 
more issues than the sheer number of files does.

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
