Hi Sol,

You can review the Solaris 11 ZFS best practices information here:


That section also provides information about the performance impact of a full pool.

For S11 releases, we're going to increase the 80% pool capacity
recommendation to 90%.
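As a quick sanity check against that 90% figure, here's a back-of-the-envelope sketch (Python just for the arithmetic, using the zpool iostat numbers from your message below):

```python
def pool_capacity_pct(used_tb: float, total_tb: float) -> float:
    """Return how full the pool is, as a percentage of total space."""
    return 100.0 * used_tb / total_tb

# zpool iostat figures from the message below: 61.9 TB used of 80 TB raw.
cap = pool_capacity_pct(61.9, 80.0)
print(f"{cap:.1f}% full")  # prints "77.4% full"
```

So by the raw zpool numbers, the pool is still below both the old 80% and the new 90% recommendation.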

Pool/file system space accounting depends on the type of pool;
you can read about it here:




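One common source of the discrepancy you're seeing between zfs list and zpool iostat is raidz parity: the zpool-level numbers count raw disk space including parity, while the zfs-level numbers count usable space. A rough sketch, assuming (purely for illustration; the thread doesn't say) that each 11-disk vdev is raidz2:

```python
# ASSUMPTION: 8 vdevs x 11 disks each, configured as raidz2
# (9 data + 2 parity disks per vdev). The actual layout isn't
# stated in the thread.
raw_tb = 80.0            # total raw space, as reported by zpool iostat
data_fraction = 9 / 11   # usable fraction of an 11-wide raidz2 vdev
usable_estimate = raw_tb * data_fraction
print(f"{usable_estimate:.1f} TB usable (estimate)")  # prints "65.5 TB usable (estimate)"
# Close to the 64 TB that zfs list reports; the remaining gap is
# plausibly metadata and internal reservations.
```

That would also explain why the two free-space percentages don't agree: they are percentages of different totals (raw vs usable).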
On 12/20/12 10:25, sol wrote:

I know some of this has been discussed in the past but I can't quite
find the exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).

Firstly, which is correct: the free space shown by "zfs list" or by
"zpool iostat"?

zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%

zpool iostat:
used 61.9 TB, free 18.1 TB, total = 80 TB, free = 22.6%

(That's a big difference, and the percentage doesn't agree)

Secondly, there are 8 vdevs, each of 11 disks.
6 vdevs show used 8.19 TB, free 1.81 TB, free = 18.1%
2 vdevs show used 6.39 TB, free 3.61 TB, free = 36.1%

I've heard that
a) performance degrades when free space is below a certain amount
b) data is written to different vdevs depending on free space

So a) how do I determine the exact value at which performance degrades,
and how significant is it?
b) has that threshold been reached (or exceeded?) in the first six vdevs?
And if so, are the two emptier vdevs being used exclusively to prevent
performance from degrading, so that it will only degrade when all vdevs
reach the magic 18.1% free (or whatever it is)?

Presumably there's no way to identify which files are on which vdevs in
order to delete them and recover the performance?

Thanks for any explanations!
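On question b): as I understand it, ZFS does not use the emptier vdevs exclusively; the allocator biases new writes toward vdevs with more free space, so all vdevs keep receiving some data. A toy model of free-space-weighted selection (illustrative only; not the actual metaslab allocator):

```python
import random

def pick_vdev(free_tb):
    """Pick a vdev index with probability proportional to its free
    space -- a toy model of free-space-biased allocation, not the
    real ZFS metaslab weighting."""
    r = random.uniform(0, sum(free_tb))
    for i, free in enumerate(free_tb):
        r -= free
        if r <= 0:
            return i
    return len(free_tb) - 1

# Free space per vdev (TB) from the figures above:
# six vdevs at 1.81 TB free, two at 3.61 TB free.
free = [1.81] * 6 + [3.61] * 2
random.seed(0)  # deterministic for the demo
counts = [0] * len(free)
for _ in range(10_000):
    counts[pick_vdev(free)] += 1
# The two vdevs with ~2x the free space receive roughly 2x the
# writes, but the fuller vdevs still receive some.
```

So the fuller vdevs aren't idle, and any performance knee arrives gradually as every vdev fills, rather than at a single magic number.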

zfs-discuss mailing list