On Thu, 23 Jul 2015 04:28:44 PM Brian May wrote:
> I personally would be nervous about going against Marc's recommendations
> here.

I don't believe that Marc knows more about BTRFS than I do.

Marc also tends to compile his own kernels, and I don't recall him mentioning 
any tests of Debian kernels.  I think that my opinion of Debian kernels is 
more relevant than Marc's: I've used them a lot, and I don't know whether Marc 
has ever used them.

On Thu, 23 Jul 2015 08:32:45 PM James Harper wrote:
> I enabled quotas on the default Jessie kernel and it all blew up, so at
> least that is broken. Disabling quotas was sufficient to put it right
> again.

Quotas are something you shouldn't expect to be reliable any time soon.  Most 
people don't need them, and many of the people who do have been scared off 
testing them by past issues.  For a long time the developers paid little 
attention to quotas due to the large number of more serious issues, and even 
now I don't think they get much attention.
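For anyone who wants to test this themselves, quotas are toggled 
per-filesystem with btrfs-progs; the mount point below is illustrative:

```shell
# Enable quota tracking on a mounted BTRFS filesystem (mount point is
# illustrative; needs root).
btrfs quota enable /mnt/data

# Inspect per-subvolume usage once the initial scan completes.
btrfs qgroup show /mnt/data

# Disabling quotas again is the way back out if things misbehave,
# as James found on Jessie.
btrfs quota disable /mnt/data
```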

> Running low on space (<10% free) also really hurt performance badly (my limited
> experience with zfs is that it hangs badly when free space gets under
> about 20%). I upgraded to 4.x at about the same time as I resolved the
> disk space problem so I can't say if 4.x improves the low-space
> performance issues though.

I haven't noticed such problems on either filesystem.  I think it depends on 
the usage pattern; maybe my usage happens to miss the corner cases where lack 
of space causes performance problems.

> The problem I had is that I had mythtv storage in a subvolume, and the only
> control you have over mythtv is that you can tell it to leave a certain
> amount of GB free, and the maximum that can be is 200GB, so obviously
> that's a problem on all but the smallest installations. I ended up
> creating a 1TB file (with nocow enabled) and used ext4 on loopback as my
> mythtv store. Performance is probably badly impacted but I don't notice
> it.
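A sketch of that workaround, with hypothetical paths and the 1TB size 
mentioned above.  Note that the nocow attribute only takes effect if it is 
set before any data is written, which is why it goes on the directory first:

```shell
# Hypothetical paths; run as root.  Setting +C (nocow) on the directory
# means the backing file created inside it inherits the attribute.
mkdir -p /srv/backing /srv/mythtv
chattr +C /srv/backing

# Sparse 1TB backing file, formatted as ext4 (-F because it's a regular
# file, not a block device) and mounted via loopback.
truncate -s 1T /srv/backing/mythtv.img
mkfs.ext4 -F /srv/backing/mythtv.img
mount -o loop /srv/backing/mythtv.img /srv/mythtv
```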

If you had a 5TB RAID-1 array (the smallest I would consider buying for home 
use nowadays) then 200G would be 4%.  While there are plenty of "rules of 
thumb" about how much space should be free on a filesystem, I really doubt 
that they scale to that size.  On a 10G BTRFS filesystem, 10% free space is a 
single free 1G data chunk, while on a 1TB filesystem it is 100 free data 
chunks; I don't think it makes sense to treat those two cases the same.
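The arithmetic above as a quick sanity check, assuming the usual 1G BTRFS 
data chunk size:

```shell
# How many free 1G data chunks does "10% free" mean at each size?
chunk_gb=1                    # typical BTRFS data chunk size
for fs_gb in 10 1000; do
    free_gb=$((fs_gb / 10))   # 10% free space
    echo "${fs_gb}G filesystem: $((free_gb / chunk_gb)) free data chunks"
done
```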

I have had serious metadata performance issues with BTRFS on my 4TB RAID-1 
array, such as an "ls -l" taking many seconds to complete.  For that array I 
can live with the occasional wait; on that system all the data which needs 
good performance is stored on an SSD.

If I wanted good performance on a BTRFS array I would make the filesystem as 
a RAID-1 array of SSDs.  Then I would create a huge number of small files to 
force the allocation of many gigabytes of metadata chunks; a 4TB array can 
have 120G of metadata, so I might allocate 150G of metadata space.  Then I'd 
add two big disks to the array, which would get used for data chunks, and 
delete all the small files.  As long as I never ran a balance, all the 
metadata chunks would stay on the SSDs and the big disks would get used for 
data.  I expect that performance would be great on such an array.
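The scheme above could be sketched like this (device names and counts are 
hypothetical; the whole trick relies on BTRFS never relocating existing 
chunks except during a balance):

```shell
# Step 1: RAID-1 filesystem on two SSDs (hypothetical devices).
mkfs.btrfs -m raid1 -d raid1 /dev/ssd1 /dev/ssd2
mount /dev/ssd1 /mnt/array

# Step 2: lots of tiny files to force metadata chunk allocation
# (small files are stored inline in the metadata trees).
mkdir /mnt/array/pad
for i in $(seq 1 1000000); do
    head -c 100 /dev/urandom > /mnt/array/pad/$i
done

# Step 3: add the big disks and free the padding.  New data chunks go
# to the devices with the most free space (the big disks); the existing
# metadata chunks stay on the SSDs as long as no balance is ever run.
btrfs device add /dev/big1 /dev/big2 /mnt/array
rm -r /mnt/array/pad
```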

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/
_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
