I know some of this has been discussed in the past, but I can't quite find the
exact information I'm seeking
(I'd check the ZFS wikis, but those websites are down at the moment).
Firstly, which free-space figure is correct: the one shown by "zfs list" or
the one shown by "zpool iostat"?
zfs list:     used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%
zpool iostat: used 61.9 TB, free 18.1 TB, total = 80 TB, free = 22.6%
(That's a big difference, and the percentages don't agree.)
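A quick arithmetic check of the figures above (my own sketch; the 64/80 ratio
suggests, though I can't confirm, that one view includes raw/parity space and
the other doesn't):

```python
# Figures as quoted above, in TB.
zfs_used, zfs_free = 50.3, 13.7        # "zfs list" view
pool_used, pool_free = 61.9, 18.1      # "zpool iostat" view

zfs_total = zfs_used + zfs_free        # ~64 TB
pool_total = pool_used + pool_free     # ~80 TB

print(round(100 * zfs_free / zfs_total, 1))    # -> 21.4
print(round(100 * pool_free / pool_total, 1))  # -> 22.6
print(round(zfs_total / pool_total, 2))        # -> 0.8
```

So each line is internally consistent; the two views just aren't measuring the
same total.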
Secondly, there are 8 vdevs, each of 11 disks.
6 vdevs show used 8.19 TB, free 1.81 TB, free = 18.1%
2 vdevs show used 6.39 TB, free 3.61 TB, free = 36.1%
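Incidentally, the per-vdev numbers do reconcile with the pool-wide
"zpool iostat" totals, which suggests the per-vdev figures are raw space too
(a quick check, using only the numbers quoted above):

```python
# Per-vdev figures as quoted above, in TB.
full_used, full_free = 8.19, 1.81      # the six fuller vdevs
empty_used, empty_free = 6.39, 3.61    # the two emptier vdevs

total_used = 6 * full_used + 2 * empty_used
total_free = 6 * full_free + 2 * empty_free
print(round(total_used, 1), round(total_free, 1))  # -> 61.9 18.1
```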
I've heard that
a) performance degrades when free space is below a certain amount
b) data is written to different vdevs depending on free space
So, for a): how do I determine the exact free-space value at which performance
degrades, and how significant is the degradation?
And for b): has that threshold been reached (or exceeded?) in the first six
vdevs? If so, are the two emptier vdevs being used exclusively to prevent
degradation, so that performance will only degrade once all vdevs reach the
magic 18.1% free (or whatever the threshold turns out to be)?
Presumably there's no way to identify which files are on which vdevs in order
to delete them and recover the performance?
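For what it's worth, my mental model of b) is a free-space-weighted choice of
vdev for each new write. This is purely an illustrative sketch (the names and
logic are my own assumption, not ZFS's actual metaslab allocator):

```python
import random
from collections import Counter

def pick_vdev(free_per_vdev):
    """Toy model: pick a vdev with probability proportional to its free space."""
    return random.choices(range(len(free_per_vdev)), weights=free_per_vdev)[0]

# Free space per vdev as quoted above: six fuller, two emptier (TB).
free = [1.81] * 6 + [3.61] * 2

counts = Counter(pick_vdev(free) for _ in range(10_000))
# Under this model the two emptier vdevs (indices 6 and 7) attract
# roughly twice as many writes each as any of the fuller ones,
# but the fuller vdevs still receive some writes.
```

If something like that is what ZFS actually does, the emptier vdevs would be
favoured but not used exclusively. Corrections welcome.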
Thanks for any explanations!
zfs-discuss mailing list