2012-05-03 3:07, Fred Liu wrote:
There is no specific problem to resolve. I just want a reasonably accurate 
equation relating the "raw storage size" to the "usable storage size", even 
though the *metadata* size is trivial. If you are budgeting mass storage, 
such an equation is meaningful.

I don't think "accurate equations" are applicable in this case.
You can have estimates like "no more/no less than X", based mainly
on the level of redundancy and its overhead. ZFS metadata overhead
can also be smaller or larger depending on your data's typical
block size (fixed for zvols at creation time, variable for files):
if your data is expected to come in very small pieces (comparable
to a single sector), you'd see large overhead due to the required
redundancy and metadata; for data in large chunks the overheads
are smaller.
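
As a rough illustration (my own back-of-envelope numbers, nothing
authoritative), here is how the per-block pointer overhead alone scales
with block size, assuming roughly a 128-byte block pointer per data
block and ignoring indirect blocks, ditto copies and raidz padding,
all of which only make small blocks look worse:

BLKPTR_SIZE = 128  # bytes of pointer metadata per data block (assumption)

for block_size in (512, 4096, 16384, 131072):
    overhead = BLKPTR_SIZE / block_size
    print("%7d-byte blocks: ~%.2f%% pointer overhead"
          % (block_size, overhead * 100))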

This gives you something like "available space won't be smaller
than M disks from my M+N redundant raidzN arrays minus O percent
for metadata."
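
A minimal sketch of that bound in Python, with made-up parameter names
and an assumed 2% metadata allowance (equal-sized disks; slop space,
labels and raidz padding ignored):

def usable_lower_bound(disk_tb, m_data, n_parity, vdevs, meta_pct=2.0):
    raw = disk_tb * (m_data + n_parity) * vdevs
    data = disk_tb * m_data * vdevs            # parity capacity stripped out
    return raw, data * (1 - meta_pct / 100.0)  # minus metadata allowance

raw, usable = usable_lower_bound(disk_tb=2.0, m_data=6, n_parity=2, vdevs=4)
print("raw %.0f TB -> at least about %.1f TB usable" % (raw, usable))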

You can also constrain these estimates' range with other
assumptions, like expected dedup or compression ratios,
and hope that your end users will be able to stuff even
more of their addressable data into the pool (because it
is sparse, compressible, and/or not unique), but that is
essentially unpredictable up front.
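
If you do want to fold such assumed ratios into the same bound (pure
guesses until you have seen the real data), it is just a multiplier on
the usable-space estimate above, e.g.:

def logical_capacity(usable_tb, compress_ratio=1.0, dedup_ratio=1.0):
    # ratios > 1.0 mean users can address more logical data than physical space
    return usable_tb * compress_ratio * dedup_ratio

usable_tb = 47.0  # hypothetical usable figure from the sketch above
print("~%.0f TB of addressable data"
      % logical_capacity(usable_tb, compress_ratio=1.5))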

HTH,
//Jim
