> > Running low on space (under 10% free) also really hurt performance badly
> > (my limited experience with zfs is that it hangs badly when free space
> > gets under about 20%). I upgraded to 4.x at about the same time as I
> > resolved the disk space problem so I can't say whether 4.x improves the
> > low-space performance issues though.
> 
> I haven't noticed such problems on either filesystem.  I think that it
> depends on what your usage is, maybe my usage happens to miss the corner
> cases where lack of space causes performance problems.

Perhaps. I definitely noticed it on a 26GB ZFS system. As free space dropped 
under about 20% there started to be odd problems that I couldn't put my finger 
on. As free space crept towards 10% it got to the point where deleting a 
snapshot took a whole day or simply never completed. Deleting files to bring 
free space back towards 80% made all the problems go away. That was on a 
fairly old FreeBSD install though (8.4 maybe).

> > The problem I had is that I had mythtv storage in a subvolume, and the
> > only control you have over mythtv is that you can tell it to leave a
> > certain number of GB free, and the maximum that can be is 200GB, so
> > obviously that's a problem on all but the smallest installations. I
> > ended up creating a 1TB file (with nocow enabled) and used ext4 on
> > loopback as my mythtv store. Performance is probably badly impacted but
> > I don't notice it.
> 
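(For reference, the setup was roughly the following sketch — the paths are 
made up, everything needs root, and note that nocow only takes effect if it 
is set while the file is still empty:)

```shell
# Hypothetical sketch of the nocow + ext4-on-loopback arrangement.
touch /data/mythtv.img
chattr +C /data/mythtv.img        # nocow must be set on an empty file
fallocate -l 1T /data/mythtv.img  # preallocate the full 1TB
mkfs.ext4 -F /data/mythtv.img     # -F: it's a regular file, not a device
mount -o loop /data/mythtv.img /srv/mythtv
```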
> If you had a 5TB RAID-1 array (the smallest I would consider buying for
> home use nowadays) then 200G would be 4%.  While there are plenty of
> "rules of thumb" about how much space should be free on a filesystem I
> really doubt that they continue to apply at that size.  On a 10G BTRFS
> filesystem 10% free space would be a single free 1G data chunk, while on a
> 1TB filesystem it would be 100 free data chunks; I don't think that
> considering those cases to be the same makes sense.
> 

Yes, this is true. My two experiences are on my router with a single 60GB SSD 
(with probably 40GB actually allocated to the filesystem), and my server (2 x 
1.5TB, 2 x 2TB). I don't remember what the poor-performance threshold was on 
the former, but on the latter, 200GB of free space caused serious problems: 
NFS clients would see very frequent kernel messages about delays. As soon as 
I got back up to about 1TB free, all the problems went away, although I took 
no real measurements between 200GB and 1TB of free space.
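To put the chunk arithmetic above in concrete terms (assuming the common 1GiB 
btrfs data chunk size):

```shell
# Free 1GiB data chunks at 10% free space, for a few filesystem sizes.
# Assumes 1GiB data chunks, the usual btrfs default.
for size in 10 1000 5000; do
    echo "${size}GiB filesystem: $(( size / 10 )) free 1GiB data chunks"
done
```

So the same "10% free" rule of thumb means one spare chunk on a small 
filesystem but hundreds on a big one.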

> I have had serious metadata performance issues with BTRFS on my 4TB RAID-1
> array, such as an "ls -l" taking many seconds to complete.  For that array
> I can just wait for those cases; on that system all the data which needs
> good performance is stored on an SSD.
> 
> If I wanted good performance on a BTRFS array I would make the filesystem
> as a RAID-1 array of SSDs.  Then I would create a huge number of small
> files to allocate many gigs of metadata space.  A 4TB array can have 120G
> of metadata so I might use 150G of metadata space.  Then I'd add 2 big
> disks to the array which would get used for data chunks and delete all the
> small files.  Then as long as I never did a balance all the metadata
> chunks would stay on the SSDs and the big disks would get used for data.
> I expect that performance would be great for such an array.
> 

That sounds unreasonably fragile, especially if you are unable to ever do a 
balance. My first testing of btrfs was on top of bcache, and performance was 
awesome. I went back to entirely rotating media for production though, as I 
only had a single SSD at my disposal, didn't really need the extreme 
performance, and had other things to spend money on. Also, at the time there 
were reports of incompatibilities between btrfs and bcache. I expect bcache 
would outperform the hot-relocation project for most workloads. For my 
server, which does lots of streaming writes (mythtv) and lots of random I/O 
(other stuff), it would balance things nicely.
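For what it's worth, the layout trick you describe would go something like 
this sketch (device names are placeholders, everything needs root, and as you 
say it only holds until the first balance):

```shell
# Sketch: pin btrfs metadata to SSDs by exploiting allocation order.
mkfs.btrfs -m raid1 -d raid1 /dev/ssd-a /dev/ssd-b
mount /dev/ssd-a /mnt/big
# ... create huge numbers of small files here, forcing ~150G of metadata
# chunks to be allocated on the SSDs ...
btrfs device add /dev/big-a /dev/big-b /mnt/big
# Now delete the small files. New data chunks land on the big disks, and
# the metadata chunks stay on the SSDs as long as no balance is ever run.
```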

This guy claims success with bcache + btrfs 
(http://www.spinics.net/lists/linux-btrfs/msg42125.html) and raises some 
interesting points (interesting to me, at least).

Btw, when you say 5TB RAID-1, what exactly do you mean? Is the 5TB the raw 
disk capacity or the usable redundant space? I'm never quite sure.

James

_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
