On 2018-02-15 10:47, Qu Wenruo wrote:
On 2018-02-14 22:19, Tomasz Chmielewski wrote:
Just FYI, how dangerous running btrfs can be - we had a fatal,
unrecoverable MySQL corruption when btrfs decided to do one of these
"we have ~50 GB left, so let's go out of space (and corrupt some files at
the same time, ha ha!)" things.
I'm recently looking into an unexpected corruption problem in btrfs.
Would you please provide some extra info about how the corruption happened?
1) Is there any power reset?
Btrfs should be bulletproof, but in fact it's not, so I'm here to
get some clues.
No power reset.
2) Are the MySQL files set with nodatacow?
If so, data corruption is more or less expected, but it should be
handled by MySQL's checkpointing.
Yes, MySQL files were using "nodatacow".
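(For reference, this is typically checked/set via the C file attribute; the path below is just an example, not necessarily this system's layout. Note that nodatacow also disables data checksums, so torn writes have to be caught by InnoDB's doublewrite buffer rather than by the filesystem.)

# verify the attribute on the data directory (the 'C' flag means nodatacow)
lsattr -d /var/lib/mysql
# enable it on an empty directory; only newly created files inherit it
chattr +C /var/lib/mysql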
I've seen many cases of "filesystem full" with ext4, but none led to
database corruption (i.e. the database would always recover after
releasing some space).
On the other hand, I've seen a handful of "out of space" situations with
gigabytes of free space left on btrfs, which led to light, heavy or
unrecoverable MySQL or MongoDB corruption.
Could it be because of how "predictable" out of space situations are
on btrfs versus other filesystems?
- in short, ext4 will report out of space when there are 0 bytes left
(perhaps slightly earlier for non-root users, because of reserved blocks) -
the application trying to write data will see "out of space" at some point,
and it can stay like this for hours (i.e. until some data is removed manually)
- on the other hand, btrfs can report out of space when there are still
10, 50 or 100 GB left, meaning any capacity planning is close to
impossible; also, the application trying to write data can see the fs
flip between "out of space" and "data written successfully" many times
per minute or second (a rough way to watch for this is sketched below)
3) Is the filesystem metadata corrupted? (AKA, does btrfs check report errors?)
If so, that should be the problem I'm looking into.
I don't think so; there is nothing scary in dmesg. However, I didn't
unmount the filesystem to run btrfs check.
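For what it's worth, two ways to check without a long outage (a sketch only; the mountpoint and device are taken from the output below, treat them as examples):

# online: verify checksums of data and metadata on the mounted fs
# (nodatacow files carry no data checksums, so their content is skipped)
btrfs scrub start -Bd /var/lib/lxd

# offline: read-only consistency check; requires the fs to be unmounted
umount /var/lib/lxd
btrfs check --readonly /dev/sda3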
4) What is the metadata/data ratio?
"btrfs fi usage" would give a pretty good picture of it.
And "btrfs fi df" also helps.
Here it is - however, that's after removing some 80 GB of data, so it most
likely doesn't reflect the state when the failure happened.
# btrfs fi usage /var/lib/lxd
Device size: 846.25GiB
Device allocated: 840.05GiB
Device unallocated: 6.20GiB
Device missing: 0.00B
Free (estimated): 167.96GiB (min: 167.96GiB)
Data ratio: 2.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,RAID1: Size:411.00GiB, Used:246.14GiB
Metadata,RAID1: Size:9.00GiB, Used:2.99GiB
System,RAID1: Size:32.00MiB, Used:80.00KiB
# btrfs fi df /var/lib/lxd
Data, RAID1: total=411.00GiB, used=246.15GiB
System, RAID1: total=32.00MiB, used=80.00KiB
Metadata, RAID1: total=9.00GiB, used=2.99GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
# btrfs fi show /var/lib/lxd
Label: 'btrfs' uuid: f5f30428-ec5b-4497-82de-6e20065e6f61
Total devices 2 FS bytes used 249.15GiB
devid 1 size 423.13GiB used 420.03GiB path /dev/sda3
devid 2 size 423.13GiB used 420.03GiB path /dev/sdb3
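Side note: the output above shows ~420 GiB of each 423 GiB device already allocated to chunks (only ~6 GiB unallocated), while the data chunks are barely 60% full. In that state a premature ENOSPC is plausible, and the usual remedy is a filtered balance to compact partly-used chunks; a sketch with example usage thresholds:

# repack data chunks that are less than 25% (then 50%) used,
# returning the freed chunks to the unallocated pool
btrfs balance start -dusage=25 /var/lib/lxd
btrfs balance start -dusage=50 /var/lib/lxd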