> We have seen just the opposite... we have a server with about
> 0 million files and only 4 TB of data. We have been benchmarking
> FSes for creation and manipulation of large populations of small
> files, and ZFS is the only one we have found that continues to
> scale linearly above one million files in one FS. UFS, VxFS, HFS+
> (don't ask why), and NSS (on NW, not Linux) all show exponential
> growth in response time as you cross a certain knee (we are
> graphing the time to create <n> zero-length files, then do a
> series of basic manipulations on them) in number of files. For
> all the FSes we have tested, that knee has been under one million
> files, except for ZFS. I know this is not 'real world', but it
> does reflect the response-time issues we have been trying to
> solve. I will see if my client (I am a consultant) will allow me
> to post the results, as I am under NDA for most of the details of
> what we are doing.

That would be great!
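The benchmark Paul describes (time to create <n> zero-length files, then a series of basic manipulations on them) can be sketched roughly as below. This is a minimal illustration, not his actual harness: the file counts, naming scheme, and the stat/rename manipulation mix are all placeholder assumptions.

```python
#!/usr/bin/env python3
"""Rough sketch of a small-file creation/manipulation benchmark."""
import os
import tempfile
import time


def create_files(directory, count):
    """Create `count` zero-length files; return elapsed seconds."""
    start = time.monotonic()
    for i in range(count):
        # open/close with no writes yields a zero-length file
        with open(os.path.join(directory, f"f{i:08d}"), "w"):
            pass
    return time.monotonic() - start


def manipulate_files(directory, count):
    """One basic manipulation pass (stat, then rename each file)."""
    start = time.monotonic()
    for i in range(count):
        path = os.path.join(directory, f"f{i:08d}")
        os.stat(path)
        os.rename(path, path + ".r")
    return time.monotonic() - start


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as base:
        # Scale these counts up toward 1M+ to look for the "knee".
        for n in (1_000, 10_000):
            sub = os.path.join(base, str(n))
            os.mkdir(sub)
            t_create = create_files(sub, n)
            t_manip = manipulate_files(sub, n)
            print(f"{n:>8} files: create {t_create:.2f}s, "
                  f"manipulate {t_manip:.2f}s")
```

Plotting elapsed time against file count should show whether response time stays linear or bends upward past the knee.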

> On the other hand, we have seen serious issues using rsync to
> migrate this data from the existing server to the Solaris 10 / ZFS
> system, so perhaps your performance issues were rsync-related and
> not ZFS. In fact, so far the fastest and most reliable method for
> moving the data has proven to be Veritas NetBackup (back it up on
> the source server, restore to the new ZFS server).
>
> Now, having said all that, we are probably never going to see
> 00 million files in one zpool, because the ZFS architecture lets
> us use a more distributed model (many zpools and datasets within
> them) and still present the end users with a single view of all
> the data.

Hi Paul,
may I ask what your average file size is? Have you done any tuning,
such as the ZFS recordsize?
Did your test also include writing 1 million files?
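For reference, recordsize is a per-dataset ZFS property; something like the following shows and changes it (the `tank/data` dataset name is a placeholder):

```shell
# recordsize defaults to 128K; smaller values can suit
# workloads dominated by small files or small records.
zfs get recordsize tank/data
zfs set recordsize=8K tank/data   # affects newly written blocks only
```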

Gino
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
