On 4/28/07, Yaniv Aknin <[EMAIL PROTECTED]> wrote:
> Following my previous post across several mailing lists regarding multi-tera
> volumes with small files on them, I'd be glad if people could share real-life
> numbers on large filesystems and their experience with them. I'm slowly coming
> to the realization that regardless of theoretical filesystem capabilities (1TB,
> 32TB, 256TB or more), more or less across the enterprise filesystem arena
> people are recommending keeping practical filesystems to 1TB or less, for
> manageability and recoverability.
>
> What's the maximum filesystem size you've used in a production environment?
> How did it work out?

Works fine. As an example:

Filesystem             size   used  avail capacity  Mounted on
xx                       14T   116K   2.1T     1%    /xx
xx/peter                 14T    34G   2.1T     2%    /xx/peter
xx/aa                    14T   1.2T   2.1T    37%    /xx/aa
xx/tank                  14T   1.5T   2.1T    41%    /xx/tank
xx/tank-archives         14T   3.5T   2.1T    63%    /xx/tank-archives
xx/rework                14T   6.8G   2.1T     1%    /xx/rework
xx/bb                    14T    61G   2.1T     3%    /xx/bb
xx/foo                   14T    73K   2.1T     1%    /xx/foo
xx/foo/aug06             14T   771K   2.1T     1%    /xx/foo/aug06
xx/foo/cc                14T    55G   2.1T     3%    /xx/foo/cc
xx/foo/mm                14T   2.2G   2.1T     1%    /xx/foo/mm
xx/foo/dd                14T    47M   2.1T     1%    /xx/foo/dd
xx/foo/rr                14T   1.4G   2.1T     1%    /xx/foo/rr
xx/foo/tf                14T   1.6G   2.1T     1%    /xx/foo/tf
xx/ee                    14T   1.3T   2.1T    38%    /xx/ee
xx/aa-fe                 14T   274G   2.1T    12%    /xx/aa-fe
xx/vv                    14T    68G   2.1T     4%    /xx/vv
xx/nn                    14T    28G   2.1T     2%    /xx/nn
xx/mm                    14T   4.2G   2.1T     1%    /xx/mm
xx/rr                    14T   3.1G   2.1T     1%    /xx/rr
xx/ss                    14T    48G   2.1T     3%    /xx/ss
xx/ff                    14T   305G   2.1T    13%    /xx/ff
xx/gg-jj                 14T   570G   2.1T    21%    /xx/gg-jj
xx/gg                    14T   882G   2.1T    29%    /xx/gg
xx/aa-tn                 14T    35G   2.1T     2%    /xx/aa-tn
xx/pp                    14T   234K   2.1T     1%    /xx/pp
xx/ee-tt                 14T   256K   2.1T     1%    /xx/ee-tt
xx/tank-r4               14T   2.0T   2.1T    50%    /xx/tank-r4
xx/tank-r1-clone         14T   3.3T   2.1T    61%    /xx/tank-r1-clone
xx/rdce                  14T    91G   2.1T     5%    /xx/rdce

That's a fair spread of sizes. Each filesystem in this
case represents a single dataset, so it's hard to make
them any smaller. (Until recently, some of the larger
datasets were spread across multiple ufs filesystems
and merged back together using an automount map.
At least zfs has saved me from that nightmare.)
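
For anyone stuck doing the same thing, the sort of map entry
involved looks roughly like this - server and export names here
are made up, but the multiple-mount syntax is the standard
Solaris autofs form:

  # indirect map entry: presents three ufs filesystems
  # as a single tree under one key
  bigset  /        server1:/export/bigset0 \
          /part1   server1:/export/bigset1 \
          /part2   server2:/export/bigset2

With zfs a single dataset just grows instead, and the map
goes away.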

Many of these filesystems hold millions of files - the
largest has over 11 million at an average of 115k each,
while another has 8 million at around 4k each.
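
If anyone wants to pull similar numbers, a plain file count
against the space used is all it takes (the path here is just
one of the mountpoints above):

  find /xx/tank -type f -print | wc -l
  df -k /xx/tank

For scale, 11 million files at ~115k comes to roughly 1.2T,
whereas 8 million at ~4k is only about 32G.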

In practical terms, backing up much over a terabyte
in a single chunk isn't ideal. What I would like to see
here is more flexibility from something like Legato
in defining schedules that would let us back this up
sensibly. (The changes are relatively small, so it
would be nice to use quarterly schedules - Legato
only really does weekly or monthly.)
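
In the meantime, one workaround would be to roll quarterly
incrementals by hand at the zfs level - this is only a sketch,
with made-up snapshot names and a made-up backup host, and not
something Legato drives:

  # take a quarterly snapshot, then send just the changes
  # since the previous quarter's snapshot to another box
  zfs snapshot xx/tank-archives@2007q2
  zfs send -i xx/tank-archives@2007q1 xx/tank-archives@2007q2 \
      | ssh backuphost 'cat > /backup/tank-archives.2007q2.zsend'

That keeps each backup to a quarter's worth of changes rather
than a multi-terabyte full.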

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
