Actually, the data on the RAID 0 is not my own but my users', and they
knew and accepted the risk of RAID 0. So in my case it should be ok - I
don't know the importance of the affected data files. I just wanted to
help find a possible bug and experiment with a broken btrfs filesystem
On 26.11.18 at 09:13, Qu Wenruo wrote:
The corruption itself looks like some disk error, not a btrfs error
like a transid error.
You're right! SMART shows an increased reallocated sector count for one
hard disk. Sorry, I forgot to check this first...
I'll try to salvage my data...
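For reference, the attribute mentioned above can be read with smartctl. A minimal sketch - the sample output line is hardcoded here for illustration, and /dev/sdc stands in for whichever RAID member you want to check:

```shell
# Hedged sketch: pull the raw Reallocated_Sector_Ct value (SMART attribute 5)
# out of smartctl output. The sample line is hardcoded; on a live system you
# would pipe from `smartctl -A /dev/sdc` instead.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       24'
realloc=$(printf '%s\n' "$sample" | awk '$2 == "Reallocated_Sector_Ct" { print $NF }')
echo "reallocated sectors: $realloc"   # anything above 0 means the disk is remapping sectors
```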
Hi,
My data partition with btrfs RAID 0 (/dev/sdc0 and /dev/sdd0) shows
errors in syslog:
BTRFS error (device sdc): cleaner transaction attach returned -30
BTRFS info (device sdc): disk space caching is enabled
BTRFS info (device sdc): has skinny extents
BTRFS info (device sdc): bdev /dev/sdc
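The -30 in the first line is -EROFS (read-only filesystem), i.e. the cleaner thread found the filesystem already forced read-only. Such lines can be counted out of the kernel log; a small sketch, with the sample log hardcoded from the lines above (pipe `dmesg` instead on a live system):

```shell
# Hedged sketch: count BTRFS error lines in a kernel log excerpt. The sample
# log is hardcoded from the syslog lines quoted above.
log='BTRFS error (device sdc): cleaner transaction attach returned -30
BTRFS info (device sdc): disk space caching is enabled
BTRFS info (device sdc): has skinny extents'
errors=$(printf '%s\n' "$log" | grep -c 'BTRFS error')
echo "btrfs error lines: $errors"
```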
On 03.02.2015 at 01:24, Tobias Holst wrote:
Hi.
Hi,
There is a known bug when you re-plug a missing hdd of a btrfs raid
without wiping the device first. In the worst case this results in a
totally corrupted filesystem, as it sometimes did during my tests of
the raid6 implementation. With
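The precaution described above can be sketched as a dry run; the device name is a placeholder, not one from this thread, and since wipefs is destructive the command is only echoed here:

```shell
# Hedged dry-run sketch: wipe stale filesystem signatures from the returning
# disk BEFORE reconnecting it to the array, so the kernel cannot pick up the
# outdated copy. /dev/sdX is a placeholder; drop the echo to run it for real.
DEV=/dev/sdX
cmd="wipefs -a $DEV"
echo "would run: $cmd"
```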
Hello,
I'm testing btrfs RAID5 on three encrypted hard disks (dm-crypt) and I'm
simulating a hard disk failure by unplugging one device while writing
some files.
Now the filesystem is damaged. Is there any chance to repair the
filesystem at this point?
My operating system is Ubuntu Server (vivid) with