Re: btrfs partition fails to mount after power outage
The output for btrfs-debug-tree -d /dev/… is quite large, so I have placed it here:
https://cloud.visageimaging.com/index.php/s/Q7ILbVWFoGeckCI

The -b one is this:

quickstore2:~/btrfs-progs # ./btrfs-debug-tree -b 29556736 /dev/mapper/VGBIGRAID6-DATA1
btrfs-progs v4.7
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
Ignoring transid failure
leaf parent key incorrect 7375769206784
leaf 29556736 items 4 free space 15949 generation 6 owner 5
fs uuid 0397b3eb-fdc4-4d3f-9cc9-d38467dcb6c2
chunk uuid 74addbad-a6f1-4c82-9e70-97d9aab63103
	item 0 key (256 INODE_ITEM 0) itemoff 16123 itemsize 160
		inode generation 3 transid 6 size 2 nbytes 16384
		block group 0 mode 40755 links 1 uid 0 gid 0 rdev 0 flags 0x0
	item 1 key (256 INODE_REF 256) itemoff 16111 itemsize 12
		inode ref index 0 namelen 2 name: ..
	item 2 key (256 DIR_ITEM 512897553) itemoff 16080 itemsize 31
		location key (257 ROOT_ITEM -1) type DIR
		namelen 1 datalen 0 name: @
	item 3 key (256 DIR_INDEX 2) itemoff 16049 itemsize 31
		location key (257 ROOT_ITEM -1) type DIR
		namelen 1 datalen 0 name: @

On 8/23/16, 6:07 PM, "linux-btrfs-ow...@vger.kernel.org on behalf of Chris Murphy" wrote:

    On Tue, Aug 23, 2016 at 10:05 AM, Chris Murphy wrote:
    > btrfs-find-root -d /dev/mapper/VGBIGRAID6-DATA1

    I meant to ask for btrfs-debug-tree -d /dev/...

    --
    Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: btrfs partition fails to mount after power outage
On Tue, Aug 23, 2016 at 9:51 AM, Malte Westerhoff wrote:
>>> btrfs-show-super -fa /dev/mapper/VGBIGRAID6-DATA1
>>
>> Where is this output?
>
> I had attached it as a separate file. Here it is again:

Sorry, missed that there was an attachment.

> backup_roots[4]:
> 	backup 0:
> 		backup_tree_root:	3421559472128	gen: 52059	level: 1
> 		backup_chunk_root:	20971520	gen: 52005	level: 1
> 		backup_extent_root:	3421770170368	gen: 52059	level: 3
> 		backup_fs_root:		29556736	gen: 6	level: 0
> 		backup_dev_root:	7304828502016	gen: 52059	level: 1
> 		backup_csum_root:	7315970588672	gen: 52059	level: 3
> 		backup_total_bytes:	17592186044416
> 		backup_bytes_used:	7386753196032
> 		backup_num_devices:	1
>
> 	backup 1:
> 		backup_tree_root:	7442297503744	gen: 52056	level: 1
> 		backup_chunk_root:	20971520	gen: 52005	level: 1
> 		backup_extent_root:	7442294161408	gen: 52056	level: 3
> 		backup_fs_root:		29556736	gen: 6	level: 0
> 		backup_dev_root:	7254075654144	gen: 52005	level: 1
> 		backup_csum_root:	7442298175488	gen: 52056	level: 3
> 		backup_total_bytes:	17592186044416
> 		backup_bytes_used:	7386753163264
> 		backup_num_devices:	1
>
> 	backup 2:
> 		backup_tree_root:	7442289229824	gen: 52057	level: 1
> 		backup_chunk_root:	20971520	gen: 52005	level: 1
> 		backup_extent_root:	7442283692032	gen: 52057	level: 3
> 		backup_fs_root:		29556736	gen: 6	level: 0
> 		backup_dev_root:	7254075654144	gen: 52005	level: 1
> 		backup_csum_root:	7442294063104	gen: 52057	level: 3
> 		backup_total_bytes:	17592186044416
> 		backup_bytes_used:	7386753179648
> 		backup_num_devices:	1
>
> 	backup 3:
> 		backup_tree_root:	7442297503744	gen: 52058	level: 1
> 		backup_chunk_root:	20971520	gen: 52005	level: 1
> 		backup_extent_root:	7442284920832	gen: 52058	level: 3
> 		backup_fs_root:		29556736	gen: 6	level: 0
> 		backup_dev_root:	7254075654144	gen: 52005	level: 1
> 		backup_csum_root:	7442299355136	gen: 52058	level: 3
> 		backup_total_bytes:	17592186044416
> 		backup_bytes_used:	7386753179648
> 		backup_num_devices:	1

Um, is anyone else having a WTF moment? Why is the backup_fs_root generation 6 on all of these backup roots? Doesn't that seem really unlikely? On all of my filesystems the fs_root has a lower generation, but it isn't separated by this many generations.

What do you get for:

btrfs-debug-tree -b 29556736 /dev/mapper/VGBIGRAID6-DATA1

--
Chris Murphy
Re: btrfs partition fails to mount after power outage
On Tue, 23 Aug 2016 12:30:35 +0000 Malte Westerhoff wrote:

> parent transid verify failed on 7375567323136 wanted 52059 found 52045
> parent transid verify failed on 7375567323136 wanted 52059 found 52045
> Error: could not find btree root extent for root 12974
>
> This is kernel 4.1.27 and btrfs-progs 4.1.2. We also tried the btrfs check
> --repair with btrfs-progs 4.7 with the same result.
>
> The partition with the problem is /dev/mapper/VGBIGRAID6-DATA1
> (The partition is an LVM logical volume that is on top of an md raid6.)
>
> Anything else that we can try in order to recover the volume?

In my experience the last-resort solution to this is to simply kill the problematic block with the "btrfs-corrupt-block" tool (compiled from the btrfs-progs source), then run "btrfs check --repair" again; this time it should quite trivially set things right.

See my success story about that:
http://www.spinics.net/lists/linux-btrfs/msg53061.html
and especially note the part about making COW snapshots of the entire block device (which should be easy, since you already run LVM). Snapshots let you experiment while keeping the ability to undo.

Some files will be lost or corrupted (likely only an extremely tiny portion, but not knowing which ones is what hurts), so ideally you should always have a means to verify the integrity of user data on any FS. At least for data that changes infrequently, e.g. media file libraries, keep '*.sfv' files produced by the 'cfv' tool beside everything. Also something along the lines of

cd /mnt/mydata && find > /somewhereelse/mydata-`date -Iseconds`.lst

in crontab.

Before starting you can also try checking what kind of data the problematic block contains. IIRC this would be (though it is not guaranteed to be helpful):

btrfs-debug-tree -b 7375567323136 /dev/mapper/VGBIGRAID6-DATA1

--
With respect,
Roman
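[For the archive: Roman's snapshot-first workflow can be sketched as a short script. This is a hypothetical dry run, not from the thread: it only echoes the commands (they need root and the real VGBIGRAID6 volume group), and the snapshot name "DATA1-rescue" and the 50G COW size are made-up placeholders.]

```shell
#!/bin/sh
# Dry-run sketch of the LVM snapshot-then-experiment workflow described
# above. Hypothetical: commands are echoed rather than executed, and the
# snapshot name and COW size are placeholders, not values from the thread.
run() { echo "+ $*"; }

# 1. Take a COW snapshot of the damaged LV; the origin stays untouched
#    while writes to the snapshot land in the 50G COW area.
run lvcreate -s -L 50G -n DATA1-rescue /dev/VGBIGRAID6/DATA1

# 2. Inspect and repair the *snapshot*, never the origin.
run btrfs-debug-tree -b 7375567323136 /dev/VGBIGRAID6/DATA1-rescue
run btrfs check --repair /dev/VGBIGRAID6/DATA1-rescue

# 3. If the repair made things worse, discard and retry from step 1.
run lvremove /dev/VGBIGRAID6/DATA1-rescue
#    If it worked, 'lvconvert --merge' on the snapshot would fold the
#    repaired state back into the origin LV.
```

The point of the dance is that btrfs check --repair is destructive, so every attempt should happen on a throwaway copy.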
Re: btrfs partition fails to mount after power outage
On Tue, Aug 23, 2016 at 10:05 AM, Chris Murphy wrote:
> btrfs-find-root -d /dev/mapper/VGBIGRAID6-DATA1

I meant to ask for btrfs-debug-tree -d /dev/...

--
Chris Murphy
Re: btrfs partition fails to mount after power outage
>> btrfs-show-super -fa /dev/mapper/VGBIGRAID6-DATA1
>
> Where is this output?

I had attached it as a separate file. Here it is again:

superblock: bytenr=65536, device=/dev/mapper/VGBIGRAID6-DATA1
---------------------------------------------------------
csum			0xd6fc94ec [match]
bytenr			65536
flags			0x1 ( WRITTEN )
magic			_BHRfS_M [match]
fsid			0397b3eb-fdc4-4d3f-9cc9-d38467dcb6c2
label
generation		52059
root			3421559472128
sys_array_size		226
chunk_root_generation	52005
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		17592186044416
bytes_used		7386753196032
sectorsize		4096
nodesize		16384
leafsize		16384
stripesize		4096
root_dir		6
num_devices		1
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x163 ( MIXED_BACKREF | DEFAULT_SUBVOL | BIG_METADATA | EXTENDED_IREF | SKINNY_METADATA )
csum_type		0
csum_size		4
cache_generation	52059
uuid_tree_generation	52059
dev_item.uuid		4e582c98-ccfc-4740-8e59-a1cb1cbfca56
dev_item.fsid		0397b3eb-fdc4-4d3f-9cc9-d38467dcb6c2 [match]
dev_item.type		0
dev_item.total_bytes	17592186044416
dev_item.bytes_used	7661185662976
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		chunk length 4194304 owner 2 type SYSTEM num_stripes 1
			stripe 0 devid 1 offset 0
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		chunk length 8388608 owner 2 type SYSTEM|DUP num_stripes 2
			stripe 0 devid 1 offset 20971520
			stripe 1 devid 1 offset 29360128
backup_roots[4]:
	backup 0:
		backup_tree_root:	3421559472128	gen: 52059	level: 1
		backup_chunk_root:	20971520	gen: 52005	level: 1
		backup_extent_root:	3421770170368	gen: 52059	level: 3
		backup_fs_root:		29556736	gen: 6	level: 0
		backup_dev_root:	7304828502016	gen: 52059	level: 1
		backup_csum_root:	7315970588672	gen: 52059	level: 3
		backup_total_bytes:	17592186044416
		backup_bytes_used:	7386753196032
		backup_num_devices:	1

	backup 1:
		backup_tree_root:	7442297503744	gen: 52056	level: 1
		backup_chunk_root:	20971520	gen: 52005	level: 1
		backup_extent_root:	7442294161408	gen: 52056	level: 3
		backup_fs_root:		29556736	gen: 6	level: 0
		backup_dev_root:	7254075654144	gen: 52005	level: 1
		backup_csum_root:	7442298175488	gen: 52056	level: 3
		backup_total_bytes:	17592186044416
		backup_bytes_used:	7386753163264
		backup_num_devices:	1

	backup 2:
		backup_tree_root:	7442289229824	gen: 52057	level: 1
		backup_chunk_root:	20971520	gen: 52005	level: 1
		backup_extent_root:	7442283692032	gen: 52057	level: 3
		backup_fs_root:		29556736	gen: 6	level: 0
		backup_dev_root:	7254075654144	gen: 52005	level: 1
		backup_csum_root:	7442294063104	gen: 52057	level: 3
		backup_total_bytes:	17592186044416
		backup_bytes_used:	7386753179648
		backup_num_devices:	1

	backup 3:
		backup_tree_root:	7442297503744	gen: 52058	level: 1
		backup_chunk_root:	20971520	gen: 52005	level: 1
		backup_extent_root:	7442284920832	gen: 52058	level: 3
		backup_fs_root:		29556736	gen: 6	level: 0
		backup_dev_root:	7254075654144	gen: 52005	level: 1
		backup_csum_root:	7442299355136	gen: 52058	level: 3
		backup_total_bytes:	17592186044416
		backup_bytes_used:	7386753179648
		backup_num_devices:	1

superblock:
Re: btrfs partition fails to mount after power outage
On Tue, Aug 23, 2016 at 9:19 AM, Malte Westerhoff wrote:
> Hi Chris,
> Thanks for the response.
> Yes, there is one large raid6 on which there is one LVM physical volume,
> which has four logical volumes that each have a btrfs file system.
> Only one of them fails to mount, the other three are fine (data2...4).
> The RAID itself claims to be clean (see below).
>
> Output of btrfs-show-super attached.
>
> btrfs-find-root is running now for 30 minutes. It has produced the following
> output so far (but is still running – not sure whether it is hanging or will
> eventually terminate).
>
> quickstore2:~/btrfs-progs # ./btrfs-find-root /dev/mapper/VGBIGRAID6-DATA1
> parent transid verify failed on 7375769206784 wanted 52059 found 52028
> parent transid verify failed on 7375769206784 wanted 52059 found 52028
> parent transid verify failed on 7375769206784 wanted 52059 found 52028
> parent transid verify failed on 7375769206784 wanted 52059 found 52028
> Ignoring transid failure
> leaf parent key incorrect 7375769206784
> Superblock thinks the generation is 52059
> Superblock thinks the level is 1

Huh. Well, someone who knows more about Btrfs and devices doing the wrong thing will have to speak up. To me this looks like something got the commit order wrong: the superblock has a more recent generation than any tree generation, as if the metadata for generation 52059 (or even 52058, 52057, or 52056) was never written, or is not where it's supposed to be, and yet the superblock was updated anyway.

>> btrfs-show-super -fa /dev/mapper/VGBIGRAID6-DATA1

Where is this output?

--
Chris Murphy
Re: btrfs partition fails to mount after power outage
On Tue, Aug 23, 2016 at 6:30 AM, Malte Westerhoff wrote:
> Hi,
> I hope this is the right list for this problem report.
>
> After an unclean shutdown of a server due to power outage, one of the
> partitions fails to mount.
> mount -o recovery ..., mount -o recovery,ro ..., mount -o ro do not help.
> They produce this error in the log:
>
> [ 5464.525816] BTRFS info (device dm-0): enabling auto recovery
> [ 5464.525823] BTRFS info (device dm-0): disk space caching is enabled
> [ 5464.525825] BTRFS: has skinny extents
> [ 5499.299686] BTRFS (device dm-0): parent transid verify failed on
> 7375769206784 wanted 52059 found 52028
> [ 5499.308576] BTRFS (device dm-0): parent transid verify failed on
> 7375769206784 wanted 52059 found 52028
> [ 5499.308587] BTRFS: Failed to read block groups: -5
> [ 5499.391359] BTRFS: open_ctree failed
>
> btrfs check --repair fails with an error message:
> ... (full output attached)
> Ignoring transid failure
> parent transid verify failed on 7375567323136 wanted 52059 found 52045
> parent transid verify failed on 7375567323136 wanted 52059 found 52045
> Error: could not find btree root extent for root 12974
>
> This is kernel 4.1.27 and btrfs-progs 4.1.2. We also tried the btrfs check
> --repair with btrfs-progs 4.7 with the same result.
>
> The partition with the problem is /dev/mapper/VGBIGRAID6-DATA1
> (The partition is an LVM logical volume that is on top of an md raid6.)

Sounds to me like the raid6 is not assembling correctly or is dirty. The btrfs structures are small enough that incorrect or dirty raid assembly can present some pieces of the file system, making it look like Btrfs (e.g. the super blocks can be found), yet not enough of it for a mount to succeed. Especially if this is a default md chunk size of 512KiB. What do you get for:

mdadm -E /dev/mdX
btrfs-show-super -fa /dev/mapper/VGBIGRAID6-DATA1
btrfs-find-root -d /dev/mapper/VGBIGRAID6-DATA1

> Anything else that we can try in order to recover the volume?
>
> Thanks
> Malte
>
> quickstore2:~/btrfs-progs # uname -a
> Linux quickstore2 4.1.27-27-default #1 SMP PREEMPT Fri Jul 15 12:46:41 UTC
> 2016 (84ae57e) x86_64 x86_64 x86_64 GNU/Linux
> quickstore2:~/btrfs-progs # btrfs --version
> btrfs-progs v4.1.2+20151002
> quickstore2:~/btrfs-progs # btrfs fi show
> Label: none  uuid: ef3fc810-d74e-4c8e-97e3-7c7e788795ae
> 	Total devices 1 FS bytes used 12.14GiB
> 	devid    1 size 80.00GiB used 15.04GiB path /dev/sda1
>
> Label: none  uuid: 9b4af403-e40c-4228-825a-20666aa0ec3c
> 	Total devices 1 FS bytes used 581.48GiB
> 	devid    1 size 8.00TiB used 616.03GiB path /dev/mapper/VGBIGRAID6-DATA2
>
> Label: none  uuid: 98330879-6f67-44f0-92ba-974c86a836d1
> 	Total devices 1 FS bytes used 1015.34GiB
> 	devid    1 size 8.00TiB used 1.08TiB path /dev/mapper/VGBIGRAID6-data3
>
> Label: none  uuid: 014d9f5c-48b2-4043-add3-5150ce418570
> 	Total devices 1 FS bytes used 3.15TiB
> 	devid    1 size 16.00TiB used 3.21TiB path /dev/mapper/VGBIGRAID6-data4
>
> Label: none  uuid: 0397b3eb-fdc4-4d3f-9cc9-d38467dcb6c2
> 	Total devices 1 FS bytes used 6.72TiB
> 	devid    1 size 16.00TiB used 6.97TiB path /dev/mapper/VGBIGRAID6-DATA1

Is it the same md array that these other dataX LVs are on? And are they mountable?

--
Chris Murphy
Re: btrfs partition fails to mount after power outage
Hi Chris,

Thanks for the response.

Yes, there is one large raid6 on which there is one LVM physical volume, which has four logical volumes that each have a btrfs file system. Only one of them fails to mount; the other three are fine (data2...4). The RAID itself claims to be clean (see below).

Output of btrfs-show-super attached.

btrfs-find-root is running now for 30 minutes. It has produced the following output so far (but is still running – not sure whether it is hanging or will eventually terminate).

quickstore2:~/btrfs-progs # ./btrfs-find-root /dev/mapper/VGBIGRAID6-DATA1
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
Ignoring transid failure
leaf parent key incorrect 7375769206784
Superblock thinks the generation is 52059
Superblock thinks the level is 1

quickstore2:~ # mdadm --detail /dev/md/BIGRAID6
/dev/md/BIGRAID6:
        Version : 1.0
  Creation Time : Fri Jan 29 15:05:42 2016
     Raid Level : raid6
     Array Size : 109396349440 (104328.49 GiB 112021.86 GB)
  Used Dev Size : 7814024960 (7452.04 GiB 8001.56 GB)
   Raid Devices : 16
  Total Devices : 16
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Aug 23 17:03:29 2016
          State : active
 Active Devices : 16
Working Devices : 16
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

           Name : any:BIGRAID6
           UUID : 7145b27b:cd5b7c9e:79d86f7b:46f54aeb
         Events : 17908

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       5       8       97        5      active sync   /dev/sdg1
       6       8      113        6      active sync   /dev/sdh1
       7       8      129        7      active sync   /dev/sdi1
       8       8      145        8      active sync   /dev/sdj1
       9       8      161        9      active sync   /dev/sdk1
      10       8      177       10      active sync   /dev/sdl1
      11       8      193       11      active sync   /dev/sdm1
      12       8      209       12      active sync   /dev/sdn1
      13       8      225       13      active sync   /dev/sdo1
      14       8      241       14      active sync   /dev/sdp1
      15      65        1       15      active sync   /dev/sdq1

quickstore2:~ # mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 7145b27b:cd5b7c9e:79d86f7b:46f54aeb
           Name : any:BIGRAID6
  Creation Time : Fri Jan 29 15:05:42 2016
     Raid Level : raid6
   Raid Devices : 16

 Avail Dev Size : 15628050152 (7452.04 GiB 8001.56 GB)
     Array Size : 109396349440 (104328.49 GiB 112021.86 GB)
  Used Dev Size : 15628049920 (7452.04 GiB 8001.56 GB)
   Super Offset : 15628050416 sectors
   Unused Space : before=0 sectors, after=456 sectors
          State : clean
    Device UUID : f03ba2d9:e25d52a2:616ee62e:316017ff

Internal Bitmap : -40 sectors from superblock
    Update Time : Tue Aug 23 17:10:45 2016
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 3007a4b6 - correct
         Events : 17907

         Layout : left-symmetric
     Chunk Size : 128K

    Device Role : Active device 1
    Array State : ('A' == active, '.' == missing, 'R' == replacing)

Malte

On 8/23/16, 4:52 PM, "ch...@colorremedies.com on behalf of Chris Murphy" wrote:

    On Tue, Aug 23, 2016 at 6:30 AM, Malte Westerhoff wrote:
    > Hi,
    > I hope this is the right list for this problem report.
    >
    > After an unclean shutdown of a server due to power outage, one of the
    > partitions fails to mount.
    > mount -o recovery ..., mount -o recovery,ro ..., mount -o ro do not help.
    > They produce this error in the log:
    >
    > [ 5464.525816] BTRFS info (device dm-0): enabling auto recovery
    > [ 5464.525823] BTRFS info (device dm-0): disk space caching is enabled
    > [ 5464.525825] BTRFS: has skinny extents
    > [ 5499.299686] BTRFS (device dm-0): parent transid verify failed on
    > 7375769206784 wanted 52059 found 52028
    > [ 5499.308576] BTRFS (device dm-0): parent transid verify failed on
    > 7375769206784 wanted 52059 found 52028
    > [ 5499.308587] BTRFS: Failed to read block groups: -5
    > [ 5499.391359] BTRFS: open_ctree failed
    >
    > btrfs check --repair fails with an error message:
    > ... (full output attached)
    >
btrfs partition fails to mount after power outage
Hi,

I hope this is the right list for this problem report.

After an unclean shutdown of a server due to power outage, one of the partitions fails to mount. mount -o recovery ..., mount -o recovery,ro ..., and mount -o ro do not help. They produce this error in the log:

[ 5464.525816] BTRFS info (device dm-0): enabling auto recovery
[ 5464.525823] BTRFS info (device dm-0): disk space caching is enabled
[ 5464.525825] BTRFS: has skinny extents
[ 5499.299686] BTRFS (device dm-0): parent transid verify failed on 7375769206784 wanted 52059 found 52028
[ 5499.308576] BTRFS (device dm-0): parent transid verify failed on 7375769206784 wanted 52059 found 52028
[ 5499.308587] BTRFS: Failed to read block groups: -5
[ 5499.391359] BTRFS: open_ctree failed

btrfs check --repair fails with an error message:
... (full output attached)
Ignoring transid failure
parent transid verify failed on 7375567323136 wanted 52059 found 52045
parent transid verify failed on 7375567323136 wanted 52059 found 52045
Error: could not find btree root extent for root 12974

This is kernel 4.1.27 and btrfs-progs 4.1.2. We also tried the btrfs check --repair with btrfs-progs 4.7 with the same result.

The partition with the problem is /dev/mapper/VGBIGRAID6-DATA1
(The partition is an LVM logical volume that is on top of an md raid6.)

Anything else that we can try in order to recover the volume?

Thanks
Malte

quickstore2:~/btrfs-progs # uname -a
Linux quickstore2 4.1.27-27-default #1 SMP PREEMPT Fri Jul 15 12:46:41 UTC 2016 (84ae57e) x86_64 x86_64 x86_64 GNU/Linux
quickstore2:~/btrfs-progs # btrfs --version
btrfs-progs v4.1.2+20151002
quickstore2:~/btrfs-progs # btrfs fi show
Label: none  uuid: ef3fc810-d74e-4c8e-97e3-7c7e788795ae
	Total devices 1 FS bytes used 12.14GiB
	devid    1 size 80.00GiB used 15.04GiB path /dev/sda1

Label: none  uuid: 9b4af403-e40c-4228-825a-20666aa0ec3c
	Total devices 1 FS bytes used 581.48GiB
	devid    1 size 8.00TiB used 616.03GiB path /dev/mapper/VGBIGRAID6-DATA2

Label: none  uuid: 98330879-6f67-44f0-92ba-974c86a836d1
	Total devices 1 FS bytes used 1015.34GiB
	devid    1 size 8.00TiB used 1.08TiB path /dev/mapper/VGBIGRAID6-data3

Label: none  uuid: 014d9f5c-48b2-4043-add3-5150ce418570
	Total devices 1 FS bytes used 3.15TiB
	devid    1 size 16.00TiB used 3.21TiB path /dev/mapper/VGBIGRAID6-data4

Label: none  uuid: 0397b3eb-fdc4-4d3f-9cc9-d38467dcb6c2
	Total devices 1 FS bytes used 6.72TiB
	devid    1 size 16.00TiB used 6.97TiB path /dev/mapper/VGBIGRAID6-DATA1

btrfs-progs v4.1.2+20151002
quickstore2:~/btrfs-progs # btrfs fi show /dev/mapper/VGBIGRAID6-DATA1
Label: none  uuid: 0397b3eb-fdc4-4d3f-9cc9-d38467dcb6c2
	Total devices 1 FS bytes used 6.72TiB
	devid    1 size 16.00TiB used 6.97TiB path /dev/mapper/VGBIGRAID6-DATA1

btrfs-progs v4.1.2+20151002
quickstore2:~/btrfs-progs #
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
parent transid verify failed on 7375769206784 wanted 52059 found 52028
Ignoring transid failure
leaf parent key incorrect 7375769206784
checking extents
parent transid verify failed on 7375715827712 wanted 52059 found 52045
parent transid verify failed on 7375715827712 wanted 52059 found 52045
parent transid verify failed on 7375715827712 wanted 52059 found 52045
parent transid verify failed on 7375715827712 wanted 52059 found 52045
Ignoring transid failure
parent transid verify failed on 7375712600064 wanted 52059 found 52045
parent transid verify failed on 7375712600064 wanted 52059 found 52045
parent transid verify failed on 7375716237312 wanted 52059 found 52045
parent transid verify failed on 7375716237312 wanted 52059 found 52045
parent transid verify failed on 7375716237312 wanted 52059 found 52045
parent transid verify failed on 7375716237312 wanted 52059 found 52045
Ignoring transid failure
parent transid verify failed on 7375712763904 wanted 52059 found 52045
parent transid verify failed on 7375712763904 wanted 52059 found 52045
parent transid verify failed on 7375712600064 wanted 52059 found 52045
parent transid verify failed on 7375712600064 wanted 52059 found 52045
parent transid verify failed on 7375712763904 wanted 52059 found 52045
parent transid verify failed on 7375712763904 wanted 52059 found 52045
parent transid verify failed on 7375715827712 wanted 52059 found 52045
Ignoring transid failure
parent transid verify failed on 7375717089280 wanted 52059 found 52045
parent transid verify failed on 7375717089280 wanted 52059 found 52045
parent transid verify failed on 7375716237312 wanted 52059 found 52045
Ignoring transid failure
parent transid verify failed on 7375717089280 wanted 52059 found 52045
parent transid verify failed on 7375717089280 wanted 52059 found 52045
parent transid verify failed on 7375717466112 wanted