On Mon, Jun 16, 2008 at 6:42 PM, Steffen Weiberle
<[EMAIL PROTECTED]> wrote:
> Has anybody stored 1/2 billion small (< 50KB) files in a ZFS data store?
> If so, any feedback in how many file systems [and sub-file systems, if
> any] you used?

I'm not quite there yet, although I have a thumper with about 110 million
files on it. That's spread across a couple of dozen filesystems: one has
27 million files (and will get to one or two hundred million on its own
before it's done), and several others have over 10 million each. So while
we're not there yet, it's only a question of time.
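(As a sketch of what I mean by splitting it up - pool and dataset names
here are invented for the example, not our real layout - each area of the
tree gets its own dataset so no single filesystem grows unmanageably large:

  zfs create tank/files
  zfs create tank/files/area01
  zfs create tank/files/area02

Each dataset can then be snapshotted, sent and backed up on its own.)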

> How were ls times? Any insights into snapshots, clones, send/receive, or
> restores in general?

Directory listings aren't quick. Snapshots are easy to create, but we have
seen destroying a snapshot take hours. Using send/receive (or anything else,
like tar) isn't quick either. I suspect raidz is less than ideal for this
sort of workload (ours has changed somewhat over the last year): we seem to
be bitten by raidz's relatively poor random-read performance (basically you
only get one disk's worth of random I/O per vdev), but I haven't got
anything like the resources to try alternatives.
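(For what it's worth, the back-of-envelope version: an N-disk raidz vdev
handles roughly one random read at a time, while the same disks arranged as
mirror pairs can serve several reads in parallel, since each disk can answer
independently. The layout people usually suggest for this kind of workload
would be something like

  zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0 ...

rather than wide raidz stripes - the device names above are just
placeholders, and as I said, I haven't had the hardware to test this myself.)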

Backups are slow. We seem to be able to do about 10 million files a day. I'm
hoping I never have to tell you what restore times are like ;-)
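(Rough arithmetic, not a measurement: at ~10 million files a day, a full
pass over half a billion files works out to around 50 days, which is really
the whole problem in one number.)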

I think you need some way of breaking the data up - either by filesystem or
just by directory hierarchy - into digestible chunks. For us a chunk is at
most about 1 TB or 10 million files; we're looking at restructuring the
directory hierarchy for the filesystems that have grown beyond this so we
can back them up in pieces.
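(As a rough sketch of what backing up in pieces looks like - dataset and
host names are invented for the example - once each chunk is its own
dataset you can snapshot and send them one at a time:

  zfs snapshot tank/files/area01@backup-20080616
  zfs send tank/files/area01@backup-20080616 | ssh backuphost zfs receive backup/area01

and use zfs send -i between two snapshots for incrementals, rather than
trying to walk the whole tree in one go.)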

> How about NFS access?

Seems to work fine.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/