Re: btrfs crash on armv7

2021-04-08 Thread Qu Wenruo




On 2021/4/8 7:15 PM, riteshh wrote:


Please excuse my silly queries here.

On 21/04/08 04:38PM, Qu Wenruo wrote:



On 2021/4/8 4:16 PM, Joe Hermaszewski wrote:

It took a while but I managed to get hold of another one of these
arm32 boards. Very disappointingly this exact "bitflip" is still
present (log enclosed).


Yeah, we came to the conclusion that it's not a bitflip, but purely the
32-bit limit on armv7.

ARMv7 is a 32-bit architecture, where unsigned long is only 32 bits.

This means things like page->index are only 32 bits long, and for a 4K
page size it also means all filesystems (not only btrfs) can only utilize
at most 16T bytes.

But there is a pitfall for btrfs: btrfs uses an internal address space
for its metadata, and that address space is u64.


Can you please point me to the code you are referring to here?
So IIUC, you mean that since page->index can only hold a 32-bit value,
the maximum FS address range which can be accessed is 16T.
This should be true in general for any FS, no?



The code is in the definition of "struct page", from "include/linux/mm_types.h".

Yes, for all fs.

But unlike btrfs, no other fs has a separate internal address space.

Btrfs uses its internal space to implement multi-device support.





Furthermore, btrfs can have metadata at a bytenr way larger
than the total device size.


Is this because of multi-device support?


Yes.




This is possible because btrfs maps only part of its address space to real
disks, thus a bytenr can be way larger than the device size.


A code pointer to that would help me understand this better.
Thanks.


You need to understand the btrfs chunk tree first.

Each btrfs chunk item is a mapping from a btrfs logical address range to a
location on a real device.

The easiest way to understand it is not code, but "btrfs ins dump-tree
-t chunk " to experience it for yourself.





But this leads to a problem: 32-bit Linux can only handle 16T, and in
your case some of your metadata is already beyond 16T in the btrfs
address space.


Sorry, I am not aware of the history here. Was this disk mkfs'ed on a
64-bit system and then connected to a 32-bit board?


Possible.

But there are other ways to go beyond that limit, especially balance.

So I'm not confident enough to say which exact event made the fs cross
the line.



This also brings me to check with you about other filesystems.
See the capacity section of the wiki below [1]. Depending upon host OS
limitations, the maximum size of the filesystem may vary, right?

[1] https://en.wikipedia.org/wiki/XFS


Last time, Dave Chinner said that for xfs larger than 16T, a 32-bit
kernel will just refuse to mount.

Thanks,
Qu



-ritesh



Then a lot of things are going to go wrong.

I have submitted a patch to add an extra check, at least letting the user
know about this 32-bit limit:
https://patchwork.kernel.org/project/linux-btrfs/patch/20210225011814.24009-1-...@suse.com/

Unfortunately, this will not help existing filesystems though.



Re: btrfs crash on armv7

2021-04-08 Thread riteshh


Please excuse my silly queries here.

On 21/04/08 04:38PM, Qu Wenruo wrote:
>
>
> On 2021/4/8 4:16 PM, Joe Hermaszewski wrote:
> > It took a while but I managed to get hold of another one of these
> > arm32 boards. Very disappointingly this exact "bitflip" is still
> > present (log enclosed).
>
> Yeah, we got to the conclusion it's not bitflip, but completely 32bit
> limit on armv7.
>
> For ARMv7, it's a 32bit system, where unsigned long is only 32bit.
>
> This means, things like page->index is only 32bit long, and for 4K page
> size, it also means all filesystems (not only btrfs) can only utilize at
> most 16T bytes.
>
> But there is pitfall for btrfs, btrfs uses its internal address space
> for its meatadata, and the address space is U64.

Can you please point me to the code you are referring to here?
So IIUC, you mean that since page->index can only hold a 32-bit value,
the maximum FS address range which can be accessed is 16T.
This should be true in general for any FS, no?

>
> And furthermore, for btrfs it can have metadata at bytenr way larger
> than the total device size.

Is this because of multi-device support?

> This is possible because btrfs maps part of its address space to real
> disks, thus it can have bytenr way larger than device size.

A code pointer to that would help me understand this better.
Thanks.

>
> But this brings to a problem, 32bit Linux can only handle 16T, but in
> your case, some of your metadata is already beyond 16T in btrfs address
> space.

Sorry, I am not aware of the history here. Was this disk mkfs'ed on a
64-bit system and then connected to a 32-bit board?

This also brings me to check with you about other filesystems.
See the capacity section of the wiki below [1]. Depending upon host OS
limitations, the maximum size of the filesystem may vary, right?

[1] https://en.wikipedia.org/wiki/XFS

-ritesh

>
> Then a lot of things are going to be wrong.
>
> I have submitted a patch to do extra check, at least allowing user to
> know this is the limit of 32bit:
> https://patchwork.kernel.org/project/linux-btrfs/patch/20210225011814.24009-1-...@suse.com/
>
> Unfortunately, this will not help existing fs though.
>


Re: btrfs crash on armv7

2021-04-08 Thread Qu Wenruo




On 2021/4/8 6:11 PM, Joe Hermaszewski wrote:

Thanks for explaining so patiently, I have a couple more questions if
you have the time:

With the patch I assume that this FS will just refuse to mount on
arm32,


Yes.

If the fs has a chunk beyond 16T, it will definitely be rejected.
In your case, since you already have such metadata, there is definitely
at least one chunk at or beyond that boundary, so immediate rejection
will be triggered.


and in general such a large FS can't be used reliably there.


Not "reliably", but completely "unusable".

As btrfs can't even read some of the metadata, it would be a miracle to
mount the fs while avoiding all metadata reads beyond 16T.


(If I'm wrong about this, is it possible to use a 64 bit machine to
move the offending metadata back below 16TB? (I feel that this may be
a gross misunderstanding!))


Moving is possible (balance exists for exactly that work).

But balance can only move data to a larger bytenr, meaning it will just
make the situation worse.

Balance moves data/metadata to a larger bytenr because btrfs assumes
chunks with a larger bytenr were created more recently; balance itself
relies on that assumption to determine whether it has covered the full fs.

Thus in btrfs the bytenr of new chunks is monotonically increasing; there
is no way to create a new chunk at a lower bytenr.



I'm not sure I understood this part of your reply:


Btrfs balance will also go forward larger bytenr, thus it's unrelated to size 
at all.


I just realised that the last couple of messages didn't go to the
list, I'm happy for you to reply to the list if you feel that this
conversation would be beneficial there!


Sorry my bad...

Thanks,
Qu



On Thu, Apr 8, 2021 at 5:41 PM Qu Wenruo  wrote:




On 2021/4/8 5:36 PM, Joe Hermaszewski wrote:

Thanks for the quick reply! In this case what's the best course of
action for me?

- Can this array be mounted and recovered without problems on a 64 bit
machine?


There should be no problem as long as btrfs check reports no errors.

It's purely a runtime assumption that failed to be met, so there is no
real damage to your fs.


- Does this 16TB limit refer to the total size of the devices, or the
apparent size after RAID? (I assume the former as this array is only
14TB after raid I think).


The limit applies to the whole address space, not just to file/device
sizes.

With btrfs, even if you only have one 1T disk, you can still get metadata
beyond 16T.

Btrfs balance also always moves toward larger bytenr, so the limit is
unrelated to device size at all.

Thanks,
Qu



Thanks for the patch, I'll take a look at building my system with it in
future.

Best wishes,
Joe


On Thu, Apr 8, 2021, 4:38 PM Qu Wenruo  wrote:



 On 2021/4/8 4:16 PM, Joe Hermaszewski wrote:
  > It took a while but I managed to get hold of another one of these
  > arm32 boards. Very disappointingly this exact "bitflip" is still
  > present (log enclosed).

 Yeah, we got to the conclusion it's not bitflip, but completely 32bit
 limit on armv7.

 For ARMv7, it's a 32bit system, where unsigned long is only 32bit.

 This means, things like page->index is only 32bit long, and for 4K page
 size, it also means all filesystems (not only btrfs) can only utilize at
 most 16T bytes.

 But there is pitfall for btrfs, btrfs uses its internal address space
 for its meatadata, and the address space is U64.

 And furthermore, for btrfs it can have metadata at bytenr way larger
 than the total device size.
 This is possible because btrfs maps part of its address space to real
 disks, thus it can have bytenr way larger than device size.

 But this brings to a problem, 32bit Linux can only handle 16T, but in
 your case, some of your metadata is already beyond 16T in btrfs address
 space.

 Then a lot of things are going to be wrong.

 I have submitted a patch to do extra check, at least allowing user to
 know this is the limit of 32bit:
 
https://patchwork.kernel.org/project/linux-btrfs/patch/20210225011814.24009-1-...@suse.com/
 


 Unfortunately, this will not help existing fs though.

 Thanks,
 Qu
  >
  > To summarise, as it's been a while:
  >
  > - When running scrub, a "page_start" and "eb_start" mismatch is
  > detected (off by a single bit).
  > - `btrfs check` reports no significant errors on aarch64 or arm32.
  > - `btrfs scrub` completes successfully on aarch64!
  > - Now, I can confirm that `btrfs scrub` fails in the same manner on
  > two arm32 machines.
  >
  > Not really sure where to go from here. The only things I can
 think of are:
  >
  > - Bug in `btrfs scrub` on arm32 (seems unlikely given the single-bit
  > error here).
  > - A bitflip on the original machine wrote bad data to the disk, and
  > somehow only the arm32 i

Re: btrfs crash on armv7

2021-04-08 Thread Qu Wenruo




On 2021/4/8 4:16 PM, Joe Hermaszewski wrote:

It took a while but I managed to get hold of another one of these
arm32 boards. Very disappointingly this exact "bitflip" is still
present (log enclosed).


Yeah, we came to the conclusion that it's not a bitflip, but purely the
32-bit limit on armv7.

ARMv7 is a 32-bit architecture, where unsigned long is only 32 bits.

This means things like page->index are only 32 bits long, and for a 4K
page size it also means all filesystems (not only btrfs) can only utilize
at most 16T bytes.

But there is a pitfall for btrfs: btrfs uses an internal address space
for its metadata, and that address space is u64.

Furthermore, btrfs can have metadata at a bytenr way larger
than the total device size.
This is possible because btrfs maps only part of its address space to real
disks, thus a bytenr can be way larger than the device size.

But this leads to a problem: 32-bit Linux can only handle 16T, and in
your case some of your metadata is already beyond 16T in the btrfs
address space.

Then a lot of things are going to go wrong.

I have submitted a patch to add an extra check, at least letting the user
know about this 32-bit limit:
https://patchwork.kernel.org/project/linux-btrfs/patch/20210225011814.24009-1-...@suse.com/

Unfortunately, this will not help existing filesystems though.

Thanks,
Qu


To summarise, as it's been a while:

- When running scrub, a "page_start" and "eb_start" mismatch is
detected (off by a single bit).
- `btrfs check` reports no significant errors on aarch64 or arm32.
- `btrfs scrub` completes successfully on aarch64!
- Now, I can confirm that `btrfs scrub` fails in the same manner on
two arm32 machines.

Not really sure where to go from here. The only things I can think of are:

- Bug in `btrfs scrub` on arm32 (seems unlikely given the single-bit
error here).
- A bitflip on the original machine wrote bad data to the disk, and
somehow only the arm32 implementation catches this...

I suppose I can give a newer kernel a try (I'm on 5.4.80 at the moment).

Best,
Joe

```
$ dmesg
[  258.791966] BTRFS info (device sda1): scrub: started on devid 4
[  258.792030] BTRFS info (device sda1): scrub: started on devid 1


[  258.792062] BTRFS info (device sda1): scrub: started on devid 2
[  411.849669] [ cut here ]
[  411.849786] WARNING: CPU: 0 PID: 218 at fs/btrfs/disk-io.c:531
btree_csum_one_bio+0x22c/0x278 [btrfs]
[  411.849789] Modules linked in: cfg80211 rfkill 8021q ip6table_nat
iptable_nat nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6
nf_defrag_ipv4 ip6t_rpfilter ipt_rpfilter ip6table_raw iptable_raw
xt_pkttype nf_log_ipv6 nf_log_ipv4 nf_log_common xt_LOG xt_tcpudp
ip6table_filter ip6_tables iptable_filter phy_generic uio_pdrv_genirq
uio sch_fq_codel loop tun tap macvlan bridge stp llc lm75 ip_tables
x_tables autofs4 dm_mod dax btrfs libcrc32c xor raid6_pq
[  411.849836] CPU: 0 PID: 218 Comm: btrfs-transacti Not tainted 5.4.80 #1-NixOS
[  411.849838] Hardware name: Marvell Armada 380/385 (Device Tree)
[  411.849840] Backtrace:
[  411.849850] [] (dump_backtrace) from []
(show_stack+0x20/0x24)
[  411.849855]  r7:0213 r6:600f0013 r5: r4:c0f8c0c4
[  411.849861] [] (show_stack) from []
(dump_stack+0x98/0xac)
[  411.849866] [] (dump_stack) from [] (__warn+0xe0/0x108)
[  411.849870]  r7:0213 r6:bf058f4c r5:0009 r4:bf120990
[  411.849875] [] (__warn) from []
(warn_slowpath_fmt+0x74/0xc4)
[  411.849878]  r7:0213 r6:bf120990 r5: r4:e1c1c000
[  411.849936] [] (warn_slowpath_fmt) from []
(btree_csum_one_bio+0x22c/0x278 [btrfs])
[  411.849941]  r9:0001 r8:e6afec78 r7:e1c1c000 r6:ed7f7000
r5:1000 r4:8fd0
[  411.850044] [] (btree_csum_one_bio [btrfs]) from
[] (btree_submit_bio_hook+0xe8/0x100 [btrfs])
[  411.850049]  r10:e6afe4c0 r9:ecc57fc0 r8:ecc57f70 r7:ed7f7000
r6: r5:c1660370
[  411.850051]  r4:bf059e9c
[  411.850154] [] (btree_submit_bio_hook [btrfs]) from
[] (submit_one_bio+0x44/0x5c [btrfs])
[  411.850158]  r7:ef2d58ec r6:e1c1dcac r5: r4:bf059e9c
[  411.850260] [] (submit_one_bio [btrfs]) from []
(btree_write_cache_pages+0x380/0x408 [btrfs])
[  411.850263]  r5: r4:
[  411.850366] [] (btree_write_cache_pages [btrfs]) from
[] (btree_writepages+0x7c/0x84 [btrfs])
[  411.850371]  r10:0001 r9:8fd0 r8:c0280bcc r7:e1c1c000
r6:e1c1dd80 r5:ecc57f70
[  411.850373]  r4:e1c1dd80
[  411.850428] [] (btree_writepages [btrfs]) from
[] (do_writepages+0x58/0xf4)
[  411.850431]  r5:ecc57f70 r4:ecc57e68
[  411.850439] [] (do_writepages) from []
(__filemap_fdatawrite_range+0xf8/0x130)
[  411.850442]  r8:ecc57f70 r7:1000 r6:8fd0bfff r5:e1c1c000 r4:ecc57e68
[  411.850448] [] (__filemap_fdatawrite_range) from
[] (filemap_fdatawrite_range+0x2c/0x34)
[  411.850452]  r10:ecc57f70 r9:1000 r8:8fd0bfff r7:e1c1de4c
r6:e0202428 r5:1000
[  411.850454]  r4:8fd0bfff
[  411.850509] [] (filemap_fdatawrite_range) from
[] (btrf

Re: btrfs crash on armv7

2021-04-08 Thread Joe Hermaszewski
It took a while but I managed to get hold of another one of these
arm32 boards. Very disappointingly this exact "bitflip" is still
present (log enclosed).

To summarise, as it's been a while:

- When running scrub, a "page_start" and "eb_start" mismatch is
detected (off by a single bit).
- `btrfs check` reports no significant errors on aarch64 or arm32.
- `btrfs scrub` completes successfully on aarch64!
- Now, I can confirm that `btrfs scrub` fails in the same manner on
two arm32 machines.

Not really sure where to go from here. The only things I can think of are:

- Bug in `btrfs scrub` on arm32 (seems unlikely given the single-bit
error here).
- A bitflip on the original machine wrote bad data to the disk, and
somehow only the arm32 implementation catches this...

I suppose I can give a newer kernel a try (I'm on 5.4.80 at the moment).

Best,
Joe

```
$ dmesg
[  258.791966] BTRFS info (device sda1): scrub: started on devid 4
[  258.792030] BTRFS info (device sda1): scrub: started on devid 1


[  258.792062] BTRFS info (device sda1): scrub: started on devid 2
[  411.849669] [ cut here ]
[  411.849786] WARNING: CPU: 0 PID: 218 at fs/btrfs/disk-io.c:531
btree_csum_one_bio+0x22c/0x278 [btrfs]
[  411.849789] Modules linked in: cfg80211 rfkill 8021q ip6table_nat
iptable_nat nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6
nf_defrag_ipv4 ip6t_rpfilter ipt_rpfilter ip6table_raw iptable_raw
xt_pkttype nf_log_ipv6 nf_log_ipv4 nf_log_common xt_LOG xt_tcpudp
ip6table_filter ip6_tables iptable_filter phy_generic uio_pdrv_genirq
uio sch_fq_codel loop tun tap macvlan bridge stp llc lm75 ip_tables
x_tables autofs4 dm_mod dax btrfs libcrc32c xor raid6_pq
[  411.849836] CPU: 0 PID: 218 Comm: btrfs-transacti Not tainted 5.4.80 #1-NixOS
[  411.849838] Hardware name: Marvell Armada 380/385 (Device Tree)
[  411.849840] Backtrace:
[  411.849850] [] (dump_backtrace) from []
(show_stack+0x20/0x24)
[  411.849855]  r7:0213 r6:600f0013 r5: r4:c0f8c0c4
[  411.849861] [] (show_stack) from []
(dump_stack+0x98/0xac)
[  411.849866] [] (dump_stack) from [] (__warn+0xe0/0x108)
[  411.849870]  r7:0213 r6:bf058f4c r5:0009 r4:bf120990
[  411.849875] [] (__warn) from []
(warn_slowpath_fmt+0x74/0xc4)
[  411.849878]  r7:0213 r6:bf120990 r5: r4:e1c1c000
[  411.849936] [] (warn_slowpath_fmt) from []
(btree_csum_one_bio+0x22c/0x278 [btrfs])
[  411.849941]  r9:0001 r8:e6afec78 r7:e1c1c000 r6:ed7f7000
r5:1000 r4:8fd0
[  411.850044] [] (btree_csum_one_bio [btrfs]) from
[] (btree_submit_bio_hook+0xe8/0x100 [btrfs])
[  411.850049]  r10:e6afe4c0 r9:ecc57fc0 r8:ecc57f70 r7:ed7f7000
r6: r5:c1660370
[  411.850051]  r4:bf059e9c
[  411.850154] [] (btree_submit_bio_hook [btrfs]) from
[] (submit_one_bio+0x44/0x5c [btrfs])
[  411.850158]  r7:ef2d58ec r6:e1c1dcac r5: r4:bf059e9c
[  411.850260] [] (submit_one_bio [btrfs]) from []
(btree_write_cache_pages+0x380/0x408 [btrfs])
[  411.850263]  r5: r4:
[  411.850366] [] (btree_write_cache_pages [btrfs]) from
[] (btree_writepages+0x7c/0x84 [btrfs])
[  411.850371]  r10:0001 r9:8fd0 r8:c0280bcc r7:e1c1c000
r6:e1c1dd80 r5:ecc57f70
[  411.850373]  r4:e1c1dd80
[  411.850428] [] (btree_writepages [btrfs]) from
[] (do_writepages+0x58/0xf4)
[  411.850431]  r5:ecc57f70 r4:ecc57e68
[  411.850439] [] (do_writepages) from []
(__filemap_fdatawrite_range+0xf8/0x130)
[  411.850442]  r8:ecc57f70 r7:1000 r6:8fd0bfff r5:e1c1c000 r4:ecc57e68
[  411.850448] [] (__filemap_fdatawrite_range) from
[] (filemap_fdatawrite_range+0x2c/0x34)
[  411.850452]  r10:ecc57f70 r9:1000 r8:8fd0bfff r7:e1c1de4c
r6:e0202428 r5:1000
[  411.850454]  r4:8fd0bfff
[  411.850509] [] (filemap_fdatawrite_range) from
[] (btrfs_write_marked_extents+0x9c/0x1b0 [btrfs])
[  411.850512]  r5:0001 r4:
[  411.850615] [] (btrfs_write_marked_extents [btrfs]) from
[] (btrfs_write_and_wait_transaction+0x54/0xa4 [btrfs])
[  411.850620]  r10:e1c1c000 r9:ed7f7010 r8:ed7f7000 r7:e0202428
r6:ed7f7000 r5:e1c1c000
[  411.850621]  r4:e56433f0
[  411.850724] [] (btrfs_write_and_wait_transaction [btrfs])
from [] (btrfs_commit_transaction+0x75c/0xc94 [btrfs])
[  411.850728]  r8:ed7f7418 r7:e0202400 r6:ed7f7000 r5:e56433f0 r4:
[  411.850831] [] (btrfs_commit_transaction [btrfs]) from
[] (transaction_kthread+0x19c/0x1e0 [btrfs])
[  411.850836]  r10:ed7f728c r9: r8:001abf6a r7:0064
r6:ed7f7414 r5:0bb8
[  411.850838]  r4:ed7f7000
[  411.850893] [] (transaction_kthread [btrfs]) from
[] (kthread+0x170/0x174)
[  411.850898]  r10:ecbddbfc r9:bf05cbfc r8:ed669800 r7:e1c1c000
r6: r5:ed560d40
[  411.850899]  r4:ed54eb00
[  411.850904] [] (kthread) from []
(ret_from_fork+0x14/0x2c)
[  411.850907] Exception stack(0xe1c1dfb0 to 0xe1c1dff8)
[  411.850911] dfa0: 
  
[  411.850915] dfc0:   0

Re: btrfs crash on armv7

2020-12-19 Thread Qu Wenruo




On 2020/12/19 6:35 PM, Joe Hermaszewski wrote:

Ok, so I managed to get hold of a 64-bit machine on which to run btrfs
check. `btrfs check` returns exactly the same output as on the armv7 box
(no serious problems), which I suppose is good. `btrfs scrub` also
finds no problems. Boring logs below.

What I don't quite understand is how the scrub problem on armv7l is so
reliable when it's not persisted on the disks. Is the same physical
memory location being used for this breaking value, or is it perhaps a
specific pattern of data on the bus which causes this?


If it's so reliably reproducible, then I guess it would be one of those
cases: either a specific memory range has the problem, or a specific
pattern on the bus is causing the problem.



If it's the former, how easy would it be to find this broken location
and blacklist it? If it's the latter then I guess there's no hope but
to try replacing the psu/machine.

The machine survived a couple of days of memtester (on about 95% of
the RAM) and `7z b` with no problems, *shrug*


The memtester run should rule out the former case; the latter may be
resolved by a newer kernel, if it's a software problem rather than a
hardware one.

Thanks,
Qu



Best wishes and thanks for the generous help so far.
Joe

btrfs scrub aarch64:
```
[j@nixos:~]$ sudo btrfs scrub status -d /mnt
UUID: b8f4ad49-29c8-4d19-a886-cef9c487f124
scrub device /dev/sda1 (id 1) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 7:36:31
Total to scrub:   2.40TiB
Rate: 91.95MiB/s
Error summary:no errors found
scrub device /dev/sdb1 (id 2) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 7:12:51
Total to scrub:   2.40TiB
Rate: 96.90MiB/s
Error summary:no errors found
scrub device /dev/sdd1 (id 3) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 19:47:01
Total to scrub:   7.86TiB
Rate: 115.70MiB/s
Error summary:no errors found
scrub device /dev/sdc1 (id 4) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 19:46:38
Total to scrub:   7.86TiB
Rate: 115.74MiB/s
Error summary:no errors found
```

btrfs check aarch64:
```
[nixos@nixos:/]$ sudo btrfs check --readonly /dev/sda1
Opening filesystem to check...
Checking filesystem on /dev/sda1
UUID: b8f4ad49-29c8-4d19-a886-cef9c487f124
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
root 294 inode 24665 errors 100, file extent discount
Found file extent holes:
 start: 3709534208, len: 163840
root 294 inode 406548 errors 100, file extent discount
Found file extent holes:
 start: 98729984, len: 720896
ERROR: errors found in fs roots
found 11280701063168 bytes used, error(s) found
total csum bytes: 10937464120
total tree bytes: 18538053632
total fs tree bytes: 5877579776
total extent tree bytes: 534052864
btree space waste bytes: 2316660292
file data blocks allocated: 17244587220992
  referenced 14211684794368%
```


On Sat, Nov 28, 2020 at 8:46 AM Qu Wenruo  wrote:




On 2020/11/27 11:15 PM, Joe Hermaszewski wrote:

Hi Qu,

Thanks for the patch. I recompiled the kernel ran the scrub and your
patch worked as expected, here is the log:

```
[  337.365239] BTRFS info (device sda1): scrub: started on devid 2
[  337.366283] BTRFS info (device sda1): scrub: started on devid 1
[  337.402822] BTRFS info (device sda1): scrub: started on devid 3
[  337.411944] BTRFS info (device sda1): scrub: started on devid 4
[  471.997496] [ cut here ]
[  471.997614] WARNING: CPU: 0 PID: 218 at fs/btrfs/disk-io.c:531
btree_csum_one_bio+0x22c/0x278 [btrfs]
[  471.997616] Modules linked in: cfg80211 rfkill 8021q ip6table_nat
iptable_nat nf_nat ftdi_sio phy_generic usbserial xt_conntrack
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip6t_rpfilter ipt_rpfilter
ip6table_raw uio_pdrv_genirq iptable_raw uio xt_pkttype nf_log_ipv6
nf_log_ipv4 nf_log_common xt_LOG xt_tcpudp ip6table_filter ip6_tables
iptable_filter sch_fq_codel loop tun tap macvlan bridge stp llc lm75
ip_tables x_tables autofs4 dm_mod dax btrfs libcrc32c xor raid6_pq
[  471.997666] CPU: 0 PID: 218 Comm: btrfs-transacti Not tainted 5.4.78 #1-NixOS
[  471.997668] Hardware name: Marvell Armada 380/385 (Device Tree)
[  471.997669] Backtrace:
[  471.997680] [] (dump_backtrace) from []
(show_stack+0x20/0x24)
[  471.997684]  r7:0213 r6:600c0013 r5: r4:c0f8c044
[  471.997691] [] (show_stack) from []
(dump_stack+0x98/0xac)
[  471.997696] [] (dump_stack) from [] (__warn+0xe0/0x108)
[  471.997700]  r7:0213 r6:bf058f4c r5:0009 r4:bf120990
[  471.997704] [] (__warn) from []
(warn_slowpath_fmt+0x74/0xc4)
[  471.997707]  r7:0213 r6:bf120990 r5: r4:e1822000
[  471.997765] [] (warn_slowpath_fmt) from []
(btree_csum_one_bio+0x22c/0x278 [btrfs])
[  471.997770]  r9:0001 r8:c10decb8 r

Re: btrfs crash on armv7

2020-12-19 Thread Joe Hermaszewski
Ok, so I managed to get hold of a 64-bit machine on which to run btrfs
check. `btrfs check` returns exactly the same output as on the armv7 box
(no serious problems), which I suppose is good. `btrfs scrub` also
finds no problems. Boring logs below.

What I don't quite understand is how the scrub problem on armv7l is so
reliable when it's not persisted on the disks. Is the same physical
memory location being used for this breaking value, or is it perhaps a
specific pattern of data on the bus which causes this?

If it's the former, how easy would it be to find this broken location
and blacklist it? If it's the latter then I guess there's no hope but
to try replacing the psu/machine.

The machine survived a couple of days of memtester (on about 95% of
the RAM) and `7z b` with no problems, *shrug*

Best wishes and thanks for the generous help so far.
Joe

btrfs scrub aarch64:
```
[j@nixos:~]$ sudo btrfs scrub status -d /mnt
UUID: b8f4ad49-29c8-4d19-a886-cef9c487f124
scrub device /dev/sda1 (id 1) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 7:36:31
Total to scrub:   2.40TiB
Rate: 91.95MiB/s
Error summary:no errors found
scrub device /dev/sdb1 (id 2) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 7:12:51
Total to scrub:   2.40TiB
Rate: 96.90MiB/s
Error summary:no errors found
scrub device /dev/sdd1 (id 3) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 19:47:01
Total to scrub:   7.86TiB
Rate: 115.70MiB/s
Error summary:no errors found
scrub device /dev/sdc1 (id 4) history
Scrub started:Fri Dec 18 14:24:30 2020
Status:   finished
Duration: 19:46:38
Total to scrub:   7.86TiB
Rate: 115.74MiB/s
Error summary:no errors found
```

btrfs check aarch64:
```
[nixos@nixos:/]$ sudo btrfs check --readonly /dev/sda1
Opening filesystem to check...
Checking filesystem on /dev/sda1
UUID: b8f4ad49-29c8-4d19-a886-cef9c487f124
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
root 294 inode 24665 errors 100, file extent discount
Found file extent holes:
start: 3709534208, len: 163840
root 294 inode 406548 errors 100, file extent discount
Found file extent holes:
start: 98729984, len: 720896
ERROR: errors found in fs roots
found 11280701063168 bytes used, error(s) found
total csum bytes: 10937464120
total tree bytes: 18538053632
total fs tree bytes: 5877579776
total extent tree bytes: 534052864
btree space waste bytes: 2316660292
file data blocks allocated: 17244587220992
 referenced 14211684794368%
```


On Sat, Nov 28, 2020 at 8:46 AM Qu Wenruo  wrote:
>
>
>
> > On 2020/11/27 11:15 PM, Joe Hermaszewski wrote:
> > Hi Qu,
> >
> > Thanks for the patch. I recompiled the kernel ran the scrub and your
> > patch worked as expected, here is the log:
> >
> > ```
> > [  337.365239] BTRFS info (device sda1): scrub: started on devid 2
> > [  337.366283] BTRFS info (device sda1): scrub: started on devid 1
> > [  337.402822] BTRFS info (device sda1): scrub: started on devid 3
> > [  337.411944] BTRFS info (device sda1): scrub: started on devid 4
> > [  471.997496] [ cut here ]
> > [  471.997614] WARNING: CPU: 0 PID: 218 at fs/btrfs/disk-io.c:531
> > btree_csum_one_bio+0x22c/0x278 [btrfs]
> > [  471.997616] Modules linked in: cfg80211 rfkill 8021q ip6table_nat
> > iptable_nat nf_nat ftdi_sio phy_generic usbserial xt_conntrack
> > nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip6t_rpfilter ipt_rpfilter
> > ip6table_raw uio_pdrv_genirq iptable_raw uio xt_pkttype nf_log_ipv6
> > nf_log_ipv4 nf_log_common xt_LOG xt_tcpudp ip6table_filter ip6_tables
> > iptable_filter sch_fq_codel loop tun tap macvlan bridge stp llc lm75
> > ip_tables x_tables autofs4 dm_mod dax btrfs libcrc32c xor raid6_pq
> > [  471.997666] CPU: 0 PID: 218 Comm: btrfs-transacti Not tainted 5.4.78 
> > #1-NixOS
> > [  471.997668] Hardware name: Marvell Armada 380/385 (Device Tree)
> > [  471.997669] Backtrace:
> > [  471.997680] [] (dump_backtrace) from []
> > (show_stack+0x20/0x24)
> > [  471.997684]  r7:0213 r6:600c0013 r5: r4:c0f8c044
> > [  471.997691] [] (show_stack) from []
> > (dump_stack+0x98/0xac)
> > [  471.997696] [] (dump_stack) from [] 
> > (__warn+0xe0/0x108)
> > [  471.997700]  r7:0213 r6:bf058f4c r5:0009 r4:bf120990
> > [  471.997704] [] (__warn) from []
> > (warn_slowpath_fmt+0x74/0xc4)
> > [  471.997707]  r7:0213 r6:bf120990 r5: r4:e1822000
> > [  471.997765] [] (warn_slowpath_fmt) from []
> > (btree_csum_one_bio+0x22c/0x278 [btrfs])
> > [  471.997770]  r9:0001 r8:c10decb8 r7:e1822000 r6:ed66b000
> > r5:1000 r4:4fd0
> > [  471.997872] [] (btree_csum_one_bio [btrfs]) from
> > [] (btree_submit_bio_hook+0xe8/0x100 [btrfs])
> > [  471.997877]  r10:e2197b48 r9:ec845fc0 r8:ec845f70 r7:ed66b000
> > r6:0