On 2021-02-24 01:20, Anand Jain wrote:
On 24/02/2021 01:35, Johannes Thumshirn wrote:
On 23/02/2021 18:20, Steven Davies wrote:
On 2021-02-23 14:30, David Sterba wrote:
On Tue, Feb 23, 2021 at 09:43:04AM +0000, Johannes Thumshirn wrote:
On 23/02/2021 10:13, Johannes Thumshirn wrote:
On 22/02/2021 21:07, Steven Davies wrote:

Booted my system with kernel 5.11.0 vanilla for the first time and received this:

BTRFS info (device nvme0n1p2): has skinny extents
BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found 964770336768
BTRFS error (device nvme0n1p2): failed to read chunk tree: -22

Booting with 5.10.12 has no issues.


So the bdev inode's i_size must have changed between mkfs and mount.
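
One way to confirm that, in case it helps: compare the size btrfs recorded
for the device in the superblock with what the kernel reports for the
partition now. A rough sketch (dev_item.total_bytes in the superblock
should show the size btrfs believes the device has):

  # size btrfs believes the device has
  btrfs inspect-internal dump-super /dev/nvme0n1p2 | grep dev_item.total_bytes
  # current size of the partition as the kernel sees it
  blockdev --getsize64 /dev/nvme0n1p2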



That's likely; this is my development/testing machine and I've changed
partitions (and btrfs RAID levels) around more than once since mkfs
time. I can't remember if or how I've modified the fs to account for
this.


What you say matches the kernel logs.

Steven, can you please run:
blockdev --getsize64 /dev/nvme0n1p2

# blockdev --getsize64 /dev/nvme0n1p2
964757028864


The size at the time of mkfs was 964770336768. Now it is 964757028864.
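
For reference, that's a difference of 13307904 bytes, roughly 12.7 MiB:

  echo $((964770336768 - 964757028864))
  13307904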



Is there a simple way to fix this partition so that btrfs and the
partition table agree on its size?


Unless someone's yelling at me that this is bad advice (David, Anand?),


I'd go for:
btrfs filesystem resize max /

I was thinking about the same step while reading the above.

I've personally never shrunk a device, but looking at the code, it will
write the block device inode's i_size to the device extents and possibly
relocate data.
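
Since the filesystem currently thinks the device is larger than the
partition really is, "resize max" amounts to a small shrink down to the
real partition size. A sketch of the step plus a quick check afterwards,
assuming the filesystem is mounted at / (e.g. under 5.10.x, which still
mounts it):

  btrfs filesystem resize max /
  # the "size" reported per device here should now match blockdev --getsize64
  btrfs filesystem show /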


 Shrink works. I have tested it before.
 I hope shrink helps here too. Please let us know.

Thanks, Anand

Yes, this worked - at least there's no panic on boot (although this single-device fs is now devid 3, so I had to use 3:max).
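
For anyone else hitting this on a filesystem whose device is not devid 1,
the devid can be read from "btrfs filesystem show" and passed to resize,
along the lines of:

  # find the devid of the affected device
  btrfs filesystem show /
  # resize that specific device to the size of the underlying partition
  btrfs filesystem resize 3:max /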

--
Steven Davies
