Re: [PATCH] xfstests: Add message indicating btrfs-progs support FST in read-only mode

2017-10-25 Thread Eryu Guan
On Thu, Oct 26, 2017 at 01:57:46PM +0800, Gu Jinxiang wrote:
> From: Gu JinXiang 
> 
> btrfs-progs now supports the free space tree (FST) only in read-only
> mode, so this test case will fail when space_cache=v2 is enabled.
> Add a message to help users understand this situation.

Sorry, I don't quite understand the new 'FST' feature. But is this a bug
we want to fix when mounting with the space_cache=v2 option, or is it
just that btrfs-convert can't be done in this case? If it's a real bug,
I'd say let the test fail as it is, and track the bug in tools like
bugzilla, not in comments/messages in the test; if it's the latter case,
then just _notrun the test if the space_cache=v2 option is specified, e.g.

_exclude_scratch_mount_option "space_cache=v2"
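
For reference, a minimal sketch of where such a check could sit among the
test's requirement checks (the surrounding comment is illustrative, not
taken from btrfs/012):

# _notrun when the scratch mount options carry space_cache=v2, since
# btrfs-progs can only read, not modify, the free space tree during rollback
_exclude_scratch_mount_option "space_cache=v2"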

Thanks,
Eryu

> 
> Signed-off-by: Gu JinXiang 
> ---
>  tests/btrfs/012 | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/tests/btrfs/012 b/tests/btrfs/012
> index 85c82f07..529e6eca 100755
> --- a/tests/btrfs/012
> +++ b/tests/btrfs/012
> @@ -96,6 +96,12 @@ cp -aR /lib/modules/`uname -r`/ $SCRATCH_MNT/new
>  
>  _scratch_unmount
>  
> +space_cache_version=$(echo "$MOUNT_OPTIONS" | grep "space_cache=v2")
> +if [ -n "$space_cache_version" ]; then
> +	_fail "mounted with space_cache=v2, but btrfs-progs supports" \
> +	      "the free space tree in read-only mode only," \
> +	      "so btrfs-convert rollback will fail"
> +fi
>  # Now restore the ext4 device
>  $BTRFS_CONVERT_PROG -r $SCRATCH_DEV >> $seqres.full 2>&1 || \
>   _fail "btrfs-convert rollback failed"
> -- 
> 2.13.5
> 
> 
> 


[PATCH] btrfs: fix offset in test Btrfs delalloc accounting overflow

2017-10-25 Thread robbieko
From: Robbie Ko 

Found this while testing btrfs delalloc accounting overflow; fix the
offset error. We want to fill in the gaps between the created extents
so that the outstanding extents are all merged into 1.

Signed-off-by: Robbie Ko 
---
 tests/btrfs/010 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/btrfs/010 b/tests/btrfs/010
index ea201ad..00cac67 100755
--- a/tests/btrfs/010
+++ b/tests/btrfs/010
@@ -58,7 +58,7 @@ done
 # Fill in the gaps between the created extents. The outstanding extents will
 # all be merged into 1, but there will still be 32k reserved.
 for ((i = 0; i < 32 * 1024; i++)); do
-   $XFS_IO_PROG -f -c "pwrite $((2 * 4096 * i + 1)) 4096" "$test_file" >>"$seqres.full"
+   $XFS_IO_PROG -f -c "pwrite $((2 * 4096 * i + 4096)) 4096" "$test_file" >>"$seqres.full"
 done
 
 # Flush the delayed allocations.
-- 
1.9.1
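
A quick sketch of the intended layout, assuming the first loop of btrfs/010
(not shown in this hunk) writes each 4KiB extent at offset 2 * 4096 * i:

# Per iteration i the file looks like:
#   [8192*i, 8192*i + 4096)         written by the first loop
#   [8192*i + 4096, 8192*i + 8192)  the gap this loop has to fill
# so the gap-filling write must start at 2 * 4096 * i + 4096; starting at
# 2 * 4096 * i + 1 begins inside the block that was already written rather
# than at the start of the gap.
for ((i = 0; i < 32 * 1024; i++)); do
	$XFS_IO_PROG -f -c "pwrite $((2 * 4096 * i + 4096)) 4096" "$test_file" >>"$seqres.full"
done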



[PATCH] xfstests: Add message indicating btrfs-progs support FST in read-only mode

2017-10-25 Thread Gu Jinxiang
From: Gu JinXiang 

btrfs-progs now supports the free space tree (FST) only in read-only
mode, so this test case will fail when space_cache=v2 is enabled.
Add a message to help users understand this situation.

Signed-off-by: Gu JinXiang 
---
 tests/btrfs/012 | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/tests/btrfs/012 b/tests/btrfs/012
index 85c82f07..529e6eca 100755
--- a/tests/btrfs/012
+++ b/tests/btrfs/012
@@ -96,6 +96,12 @@ cp -aR /lib/modules/`uname -r`/ $SCRATCH_MNT/new
 
 _scratch_unmount
 
+space_cache_version=$(echo "$MOUNT_OPTIONS" | grep "space_cache=v2")
+if [ -n "$space_cache_version" ]; then
+	_fail "mounted with space_cache=v2, but btrfs-progs supports" \
+	      "the free space tree in read-only mode only," \
+	      "so btrfs-convert rollback will fail"
+fi
 # Now restore the ext4 device
 $BTRFS_CONVERT_PROG -r $SCRATCH_DEV >> $seqres.full 2>&1 || \
_fail "btrfs-convert rollback failed"
-- 
2.13.5





Re: btrfs send yields "ERROR: send ioctl failed with -5: Input/output error"

2017-10-25 Thread Zak Kohler
I don't need to recover in this case. I can just remake the filesystem. I'm
just very concerned that this corruption was able to happen. Here is the entire
history of the filesystem:

2017.10.18 create btrfs from 3 drives aka OfflineJ and rsync
data from old mdadm raid5
---
# badblocks run on all three drives
$ badblocks -wsv /dev/disk/by-id/WD-XXX
Pass completed, 0 bad blocks found. (0/0/0 errors)

$ mkfs.btrfs -L OfflineJ /dev/disk/by-id/WD-XX1 /dev/disk/by-id/WD-XX2 /dev/disk/by-id/WD-XX3

$ mount -t btrfs UUID=88406942-e3e1-42c6-ad71-e23bb315caa7 /mnt/

$ btrfs subvolume create /mnt/dataroot

$ mkdir /media/OfflineJ

/etc/fstab
--
UUID=XXX   /media/OfflineJ   btrfs   rw,relatime,subvol=/dataroot,noauto   0 0

$ mount /media/OfflineJ/

$ btrfs filesystem df /media/OfflineJ/
Data, RAID0: total=3.00GiB, used=1.00MiB
System, RAID1: total=8.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=128.00KiB
GlobalReserve, single: total=16.00MiB, used=0.00B

$ btrfs filesystem usage /media/OfflineJ/
Overall:
   Device size:   5.46TiB
   Device allocated:  5.02GiB
   Device unallocated:5.45TiB
   Device missing:5.46TiB
   Used:  1.28MiB
   Free (estimated):  5.46TiB  (min: 2.73TiB)
   Data ratio:   1.00
   Metadata ratio:   2.00
   Global reserve:   16.00MiB  (used: 0.00B)

$ sudo mount -o noatime -o ro /media/oldmdadmraid5/

$ rsync -aAXh --progress --stats /media/oldmdadmraid5/ /media/OfflineJ


I will gladly repeat this process, but I am very concerned about why this
corruption happened in the first place.

More tests:

scrub start --offline
All devices had errors in differing amounts
I will verify that these counts are repeatable.
Csum error: 150
Csum error: 238
Csum error: 175

btrfs check
found 2179745955840 bytes used, no error found

btrfs check --check-data-csum
mirror 0 bytenr 13348855808 csum 2387937020 expected csum 562782116
mirror 0 bytenr 23398821888 csum 3602081170 expected csum 1963854755
...
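
One way to map such a bytenr back to a file path is btrfs inspect-internal
logical-resolve against the mounted filesystem (a sketch, reusing the first
bytenr above and the mount point from the history):

$ mount /media/OfflineJ/
$ btrfs inspect-internal logical-resolve 13348855808 /media/OfflineJ/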

The only thing I could think of is that the btrfs version that I used to mkfs
was not up to date. Is there a way to determine which version was used to
create the filesystem?

Anything else I can do to help determine the cause?

> On October 24, 2017 at 11:43 PM "Lakshmipathi.G"  
> wrote:
> 
> 1.  I guess you should be able to dump tree details via
> 'btrfs-debug-tree' and then map the extent/data (from scrub
> offline output) and track it back to inode-object. Store output of
> both btrfs-debug-tree and scrub-offline in different files and then
> play around with grep to extract required data.
> 
> 2.  I think the normal (online) scrub fails to detect these csum errors
> for some reason; I don't have much insight into how the online scrub works.
> 
> 3.  I assume the issue is not related to hardware, since the offline
> scrub is able to read the available (corrupted) csums.
> 
> Yes, the offline scrub will try to fix corruption whenever possible.
> But you also have quite a lot of "all mirror(s) corrupted, can't be repaired",
> which will be hard to recover from.
> 
> I suggest running the offline scrub on all devices, then the online scrub,
> and finally tracking those corrupted files with the help of the extent info.
> 
> 
> Cheers,
> Lakshmipathi.G
> http://www.giis.co.in http://www.webminal.org
> 
> On Wed, Oct 25, 2017 at 7:22 AM, Zak Kohler  wrote:
> 
> > I apologize for the bad line wrapping on the last post...will be
> > setting up mutt soon.
> > 
> > This is the final result for the offline scrub:
> > Doing offline scrub [O] [681/683]
> > Scrub result:
> > Tree bytes scrubbed: 5234491392
> > Tree extents scrubbed: 638975
> > Data bytes scrubbed: 4353723572224
> > Data extents scrubbed: 374300
> > Data bytes without csum: 533200896
> > Read error: 0
> > Verify error: 0
> > Csum error: 175
> > 
> > The offline scrub apparently corrected some metadata extents while
> > scanning /dev/sdn
> > 
> > I also ran the online scrub directly on the /dev/sdn, "0 errors":
> > 
> > $ btrfs scrub status /dev/sdn
> > scrub status for 88406942-e3e1-42c6-ad71-e23bb315caa7
> >  scrub started at Tue Oct 24 06:55:12 2017 and finished after 01:52:44
> >  total bytes scrubbed: 677.35GiB with 0 errors
> > 
> > The csum mismatches are still missed by the online scrub when choosing
> > a single device. Now I am doing an offline scrub on the other devices
> > to see if they are clean.
> > 
> > $ lsblk -o +SERIAL
> > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT SERIAL
> > sdh 8:112 0 1.8T 0 disk WD-WMAZA370
> > sdi 8:128 0 1.8T 0 disk WD-WCAZA569
> > sdn 8:208 0 1.8T 0 disk WD-WCAZA580
> > 
> > $ btrfs scrub start --offline --progress /dev/sdh
> > ERROR: data at bytenr 5365456896 ...
> > ERROR: extent 5341712384 ...
> > ...
> > 
> > One thing to note is that /dev/sdh is also having csum errors
> > detected despite it having never been m

Re: [PATCH] btrfs: avoid misleading talk about "compression level 0"

2017-10-25 Thread Adam Borowski
On Wed, Oct 25, 2017 at 03:23:11PM +0200, David Sterba wrote:
> On Sat, Oct 21, 2017 at 06:49:01PM +0200, Adam Borowski wrote:
> > Many compressors do assign a meaning to level 0: either null compression or
> > the lowest possible level.  This differs from our "unset thus default".
> > Thus, let's not unnecessarily confuse users.
> 
> I agree 'level 0' is confusing, however I'd like to keep the level
> mentioned in the message.
> 
> We could add
> 
> #define   BTRFS_COMPRESSION_ZLIB_DEFAULT  3
> 
> and use it in btrfs_compress_str2level.

I considered this, but every algorithm has a different default, thus we'd
need separate cases for zlib vs zstd, while lzo has no settable level at
all.  Still, it's just a few extra lines of code, thus doable.
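
For context, a rough illustration of the two cases the message has to cover
(the mount lines are hypothetical; the zlib:3 syntax is the one from the
compression-level series):

$ mount -o compress=zlib:3 /dev/sda1 /mnt
# dmesg: BTRFS info (device sda1): use zlib compression, level 3
$ mount -o compress=lzo /dev/sda1 /mnt
# dmesg before this patch: BTRFS info (device sda1): use lzo compression, level 0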

> > Signed-off-by: Adam Borowski 
> > ---
> >  fs/btrfs/super.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
> > index f9d4522336db..144fabfbd246 100644
> > --- a/fs/btrfs/super.c
> > +++ b/fs/btrfs/super.c
> > @@ -551,7 +551,9 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
> >   compress_force != saved_compress_force)) ||
> > (!btrfs_test_opt(info, COMPRESS) &&
> >  no_compress == 1)) {
> > -   btrfs_info(info, "%s %s compression, level %d",
> > +   btrfs_printk(info, info->compress_level ?
> > +  KERN_INFO"%s %s compression, level %d" :
> > +  KERN_INFO"%s %s compression",
> 
> Please keep using btrfs_info, the KERN_INFO prefix would not work here.
> btrfs_printk prepends the filesystem description and the message level
> must be at the beginning.

Seems to work for me:
[   14.072575] BTRFS info (device sda1): use lzo compression
with the same color as the other info messages next to it.

But if we're going to expand this code, the ternary operators would get too
hairy, so this part can go, at least for clarity's sake.

> >(compress_force) ? "force" : "use",
> >compress_type, info->compress_level);
> > }
> 

-- 
⢀⣴⠾⠻⢶⣦⠀ Laws we want back: Poland, Dz.U. 1921 nr.30 poz.177 (also Dz.U. 
⣾⠁⢰⠒⠀⣿⡁ 1920 nr.11 poz.61): Art.2: An official, guilty of accepting a gift
⢿⡄⠘⠷⠚⠋⠀ or another material benefit, or a promise thereof, [in matters
⠈⠳⣄ relevant to duties], shall be punished by death by shooting.


Re: [PATCH] btrfs: avoid misleading talk about "compression level 0"

2017-10-25 Thread David Sterba
On Sat, Oct 21, 2017 at 06:49:01PM +0200, Adam Borowski wrote:
> Many compressors do assign a meaning to level 0: either null compression or
> the lowest possible level.  This differs from our "unset thus default".
> Thus, let's not unnecessarily confuse users.

I agree 'level 0' is confusing, however I'd like to keep the level
mentioned in the message.

We could add

#define BTRFS_COMPRESSION_ZLIB_DEFAULT  3

and use it in btrfs_compress_str2level.

> 
> Signed-off-by: Adam Borowski 
> ---
>  fs/btrfs/super.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
> index f9d4522336db..144fabfbd246 100644
> --- a/fs/btrfs/super.c
> +++ b/fs/btrfs/super.c
> @@ -551,7 +551,9 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
> compress_force != saved_compress_force)) ||
>   (!btrfs_test_opt(info, COMPRESS) &&
>no_compress == 1)) {
> - btrfs_info(info, "%s %s compression, level %d",
> + btrfs_printk(info, info->compress_level ?
> +KERN_INFO"%s %s compression, level %d" :
> +KERN_INFO"%s %s compression",

Please keep using btrfs_info, the KERN_INFO prefix would not work here.
btrfs_printk prepends the filesystem description and the message level
must be at the beginning.

>  (compress_force) ? "force" : "use",
>  compress_type, info->compress_level);
>   }


Re: [PATCH v3] Btrfs: free btrfs_device in place

2017-10-25 Thread David Sterba
On Mon, Oct 23, 2017 at 11:02:54PM -0600, Liu Bo wrote:
> It's pointless to defer it to a kthread helper as we're not under a
> special context.
> 
> For reference, commit 1f78160ce1b1 ("Btrfs: using rcu lock in the
> reader side of devices list") introduced RCU freeing for device
> structures.
> 
> Signed-off-by: Liu Bo 
> Reviewed-by: Anand Jain 

Reviewed-by: David Sterba 


Kernel Oops / btrfs problems

2017-10-25 Thread andreas . btrfs

Hi,

I've had problems with a btrfs filesystem on a USB disk. I made a
successful backup of all data and created the filesystem from scratch.

I'm not able to restore all of the backed-up data because of a kernel oops.
The problem is reproducible.

I've already checked my RAM with memtest86 for 8 hours without any
problems found.


Could you be so kind as to provide any information to solve the problem?

Thanks for your help.

Andreas

uname -a

Linux fatblock 4.12.0-0.bpo.2-amd64 #1 SMP Debian 4.12.13-1~bpo9+1 
(2017-09-28) x86_64 GNU/Linux


btrfs --version
===
btrfs-progs v4.9.1

btrfs fi df /mnt/archive/
=
Data, single: total=1.20TiB, used=1.20TiB
System, DUP: total=40.00MiB, used=160.00KiB
Metadata, DUP: total=3.50GiB, used=2.79GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

smartctl -a /dev/sdf

andi@fatblock:/etc/initramfs-tools$ sudo smartctl -a /dev/sdf
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.12.0-0.bpo.2-amd64] (local 
build)

Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Seagate NAS HDD
Device Model: ST4000VN000-1H4168
Serial Number:Z300MKYZ
LU WWN Device Id: 5 000c50 063dd0357
Firmware Version: SC43
User Capacity:4.000.787.030.016 bytes [4,00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate:5900 rpm
Form Factor:  3.5 inches
Device is:In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:Wed Oct 25 14:03:51 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status:  (   0)	The previous self-test routine 
completed

without error or no self-test has ever
been run.
Total time to complete Offline
data collection:(  117) seconds.
Offline data collection
capabilities:(0x73) SMART execute Offline immediate.
Auto Offline data collection on/off 
support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time:(   1) minutes.
Extended self-test routine
recommended polling time:( 517) minutes.
Conveyance self-test routine
recommended polling time:(   2) minutes.
SCT capabilities:  (0x10bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   109   099   006    Pre-fail  Always       -       23673928
  3 Spin_Up_Time            0x0003   094   093   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   078   078   020    Old_age   Always       -       23040
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   083   060   030    Pre-fail  Always       -       218165830
  9 Power_On_Hours          0x0032   059   059   000    Old_age   Always       -       36667
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       31
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   099   000    Old_age

Re: [PATCH] fstests: btrfs/143: make test case more reliable

2017-10-25 Thread Nikolay Borisov


On 23.10.2017 23:57, Liu Bo wrote:
> Currently drop_caches is used to invalidate the file's page cache so that
> a buffered read can hit the disk, but the problem is that it may also
> invalidate the metadata's page cache, so the test case may not get read
> errors (and repair) if reading metadata has consumed the injected
> faults.
> 
> This changes it to do 'fadvise -d', so that it first accesses all the
> metadata it needs to locate the file and then drops only the test file's
> page cache.  It also changes the test to read the file only if pid%2 == 1.
> 
> Reported-by: Nikolay Borisov 
> Signed-off-by: Liu Bo 
> ---
>  tests/btrfs/143 | 20 ++--
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/tests/btrfs/143 b/tests/btrfs/143
> index da7bfd8..dabd03d 100755
> --- a/tests/btrfs/143
> +++ b/tests/btrfs/143
> @@ -127,16 +127,16 @@ echo "step 3..repair the bad copy" >>$seqres.full
>  # since raid1 consists of two copies, and the bad copy was put on stripe #1
>  # while the good copy lies on stripe #0, the bad copy only gets access when the
>  # reader's pid % 2 == 1 is true
> -while true; do
> - # start_fail only fails the following buffered read so the repair is
> - # supposed to work.
> - echo 3 > /proc/sys/vm/drop_caches
> - start_fail
> - $XFS_IO_PROG -c "pread 0 4K" "$SCRATCH_MNT/foobar" > /dev/null &
> - pid=$!
> - wait
> - stop_fail
> - [ $((pid % 2)) == 1 ] && break
> +while [[ -z ${result} ]]; do
> +	# invalidate the page cache.
> +	$XFS_IO_PROG -c "fadvise -d 0 128K" $SCRATCH_MNT/foobar

I think an even better approach would be to just unmount and then mount
it. That ensures the page cache is truncated.
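
A minimal sketch of that suggestion, using the stock _scratch_cycle_mount
helper in place of the fadvise call (note a full remount drops the metadata
page cache as well as the file's):

while [[ -z ${result} ]]; do
	# unmount + mount the scratch fs to invalidate its entire page cache
	_scratch_cycle_mount

	start_fail
	result=$(bash -c "
	if [[ \$((\$\$ % 2)) -eq 1 ]]; then
		exec $XFS_IO_PROG -c \"pread 0 4K\" \"$SCRATCH_MNT/foobar\"
	fi")
	stop_fail
done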

> +
> +	start_fail
> +	result=$(bash -c "
> +	if [[ \$((\$\$ % 2)) -eq 1 ]]; then
> +		exec $XFS_IO_PROG -c \"pread 0 4K\" \"$SCRATCH_MNT/foobar\"
> +	fi");
> +	stop_fail
>  done
>  
>  _scratch_unmount
> 


Re: check: "warning line 4144"

2017-10-25 Thread Qu Wenruo


On 2017年10月25日 14:54, Tom Hale wrote:
> 
> 
> On 19/10/17 15:58, Qu Wenruo wrote:
>> On 2017年10月19日 16:53, Tom Hale wrote:
>>> In running btrfs check, I got the following message:
>>>
>>> warning line 4144
>>>
>>> Could this be a little more descriptive?
>>>
>>> * Does it mean I should rebuild my FS from scratch?
>>> * Is there anything I can do to remove this warning?
>>>
>>
>> --repair is dangerous, don't use it unless you're sure the problem can
>> be fixed by it.
>>
>> What's the output of "btrfs check" and "btrfs check --mode=lowmem" after
>> doing the repair?
> 
> Plain `btrfs check` gives me:
> 
> checking extents
> checking free space cache
> checking fs roots
> checking csums
> checking root refs
> Checking filesystem on /dev/mapper/vg_svelte-home
> UUID: 93722fa7-7e8f-418a-a7ca-080aca8db94b
> found 192318169092 bytes used, no error found
> total csum bytes: 185482112
> total tree bytes: 1924104192
> total fs tree bytes: 1597767680
> total extent tree bytes: 102514688
> btree space waste bytes: 325265468
> file data blocks allocated: 6169163599872
>  referenced 575636733952

At least your fs is good to go.

> 
> Could somebody please answer my initial questions about this obscure
> warning?

Only two places output this warning, and they are both used by the
original mode.

So at least if you're using lowmem mode, you won't encounter this
warning line.
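
For example, a read-only lowmem run against the device from the earlier
output (with the filesystem unmounted, and without --repair) would be:

$ btrfs check --mode=lowmem /dev/mapper/vg_svelte-home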


And for your line 4144, git blame shows it's from the ancient days of
btrfs-progs.

From a quick glance at it, it seems that during the check it encountered
a shared node, which is quite common for a filesystem with snapshots,
but it doesn't do the cleanup correctly.

I think it's caused by some extent tree problem, which lacks a backref
or has a mismatched backref, causing the shared-node checking code to
behave unexpectedly.
(a runtime glitch)

Thanks,
Qu


> 
> Thanks,
> 


