On Tue, Dec 26, 2017 at 05:20:58PM -0800, Rick Moen wrote:
> btrfs is still scarily beta after rather a lot of years of development.
> Its prospects have dimmed further now that Red Hat have dropped it from
> their roadmap.

And why would Red Hat matter?  It's much like Apple dropping an iTunes
app for Android.

Red Hat never participated in btrfs development, has its own enterprisey
storage tools it peddles to paying customers that compete directly with
btrfs' features, and never bothered to fix systemd's issues with basic
btrfs operation.  Also, on their paid distribution (RHEL, not Fedora),
they provide ridiculously ancient kernels they then backport features to.
That is not going to work for a filesystem as complex as btrfs unless you
put enough manpower into fixing the issues such backporting causes, and
Red Hat never had so much as a single engineer dedicated to btrfs.  No
wonder they don't want to get involved.

If, say, SuSE or Facebook backed out, _this_ would be a concern.


As for its state: btrfs is, well, btrfs.  You get both extremely powerful
data protection features you won't want to live without, and WTF-level
caveats.  I wouldn't recommend using btrfs unless you know where the
corpses are buried.

But if you do, you get:

* data and metadata checksums.  It is scary how inadequate disks' own
  checksums are, and how often firmware bugs, bad cables, flaky
  motherboards or hostile fairies cause data corruption.  On ext*, this
  leads to silent data loss that you then discover months later, once
  backups have been overwritten.  Out of all my bad disks/eMMC/SD since I
  started looking at this that were not a total device loss, at least
  some silent corruption happened in _every_ _single_ _case_.  For
  example: two bad sectors the controller reported, and 3K others it did
  not.

* better chances of surviving an unclean shutdown than on a non-CoW
  filesystem.  Ext* can be told to provide an equivalent level of
  protection (data=journal), but then it needs to write every bit of data
  twice.

* O(changes) backup.  With rsync, a spinning disk is likely to take half
  an hour just to stat() everything (depending, obviously, on the number
  of files).  Btrfs, on the other hand, can enumerate the writes since a
  past snapshot and immediately knows what to transfer.  If you want a
  full backup every 15 minutes, here you go.

* snapshots to protect from human error.  You are a human; so are your
  distro's developers.  If X is broken again, you revert to the last
  working snapshot with a single command.  Awesome when running unstable.
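The O(changes) backup above is done with send/receive between read-only
snapshots.  A minimal sketch, assuming a subvolume /data, snapshots kept
under /data/.snap, and a btrfs filesystem mounted at /backup (all paths
hypothetical; run as root):

```shell
# One-time baseline: a read-only snapshot, transferred in full.
btrfs subvolume snapshot -r /data /data/.snap/base
btrfs send /data/.snap/base | btrfs receive /backup

# Every 15 minutes: new snapshot, then send only the delta against
# the parent -- no stat() crawl over the whole tree.
btrfs subvolume snapshot -r /data /data/.snap/now
btrfs send -p /data/.snap/base /data/.snap/now | btrfs receive /backup
```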
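And the single-command revert mentioned for snapshots, sketched for a
root filesystem keeping snapshots under /.snapshots (layout and subvolume
ID hypothetical):

```shell
# Before an upgrade: an instant, nearly-free safety copy.
btrfs subvolume snapshot / /.snapshots/pre-upgrade

# X broke again?  Find the snapshot's subvolume ID ...
btrfs subvolume list /
# ... make it the subvolume mounted by default, and reboot.
btrfs subvolume set-default 257 /    # 257: the ID from the listing above
```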

These were data protection features.  You also get compression,
deduplication, reflinks, etc.

There are performance downsides, but for POSIX operations they're
restricted to fsync() and random writes.  There are also performance
upsides: on a slow medium (SD/eMMC, 100Mbit ethernet NBD, etc.),
compression can double throughput for well-compressible workloads (such
as package building: sources, .o files and especially debug info compress
nicely), and even without compression, switching git branches is ~4 times
faster on an SD card on btrfs and f2fs than on ext4.
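Compression here is just a mount option and applies to newly written
data.  A sketch, assuming the SD card's filesystem is on /dev/mmcblk0p2
(device name hypothetical; zstd support needs kernel 4.14+, lzo and zlib
are older):

```shell
# Transparent compression; lzo is cheap on CPU, good for slow media.
mount -o compress=lzo /dev/mmcblk0p2 /mnt

# Already-written files can be recompressed with a defrag pass.
btrfs filesystem defragment -r -clzo /mnt
```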

The other downside is the need for maintenance.  On a single device you
can live well without it, but with multiple devices you need to do by
hand a lot of what's taken for granted with MD.
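The kind of manual work meant here, sketched for a two-device raid1
mounted at /mnt where a failing /dev/sdb gets swapped for /dev/sdc
(device names hypothetical; all of this is a cron job or a sysadmin,
never automatic):

```shell
# Periodic scrub: read everything back, verify checksums, and repair
# from the good copy where redundancy exists.
btrfs scrub start /mnt
btrfs scrub status /mnt

# Swapping out a dying device: the rebuild must be started by hand.
btrfs replace start /dev/sdb /dev/sdc /mnt
btrfs replace status /mnt

# Error counters persist per device; nothing emails you about them.
btrfs device stats /mnt
```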

Another caveat: don't forget to mount with noatime.
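In /etc/fstab that looks like the line below (UUID and extra options
hypothetical).  Without noatime, every read access CoWs metadata to
update the atime, which both costs writes and inflates the space pinned
by snapshots:

```
UUID=...  /mnt/data  btrfs  defaults,noatime,compress=lzo  0  0
```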


Meow!
-- 
// If you believe in so-called "intellectual property", please immediately
// cease using counterfeit alphabets.  Instead, contact the nearest temple
// of Amon, whose priests will provide you with scribal services for all
// your writing needs, for Reasonable And Non-Discriminatory prices.
_______________________________________________
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng
