On 23/02/2021 18:20, Steven Davies wrote:
> On 2021-02-23 14:30, David Sterba wrote:
>> On Tue, Feb 23, 2021 at 09:43:04AM +0000, Johannes Thumshirn wrote:
>>> On 23/02/2021 10:13, Johannes Thumshirn wrote:
>>>> On 22/02/2021 21:07, Steven Davies wrote:
>>>>
>>>> [+CC Anand ]
>>>>
>>>>> Booted my system with kernel 5.11.0 vanilla for the first time and
>>>>> received this:
>>>>>
>>>>> BTRFS info (device nvme0n1p2): has skinny extents
>>>>> BTRFS error (device nvme0n1p2): device total_bytes should be at most
>>>>> 964757028864 but found 964770336768
>>>>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>>>>
>>>>> Booting with 5.10.12 has no issues.
>>>>>
>>>>> # btrfs filesystem usage /
>>>>> Overall:
>>>>>     Device size:          898.51GiB
>>>>>     Device allocated:     620.06GiB
>>>>>     Device unallocated:   278.45GiB
>>>>>     Device missing:           0.00B
>>>>>     Used:                 616.58GiB
>>>>>     Free (estimated):     279.94GiB  (min: 140.72GiB)
>>>>>     Data ratio:                1.00
>>>>>     Metadata ratio:            2.00
>>>>>     Global reserve:       512.00MiB  (used: 0.00B)
>>>>>
>>>>> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
>>>>>    /dev/nvme0n1p2  568.00GiB
>>>>>
>>>>> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
>>>>>    /dev/nvme0n1p2   52.00GiB
>>>>>
>>>>> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
>>>>>    /dev/nvme0n1p2   64.00MiB
>>>>>
>>>>> Unallocated:
>>>>>    /dev/nvme0n1p2  278.45GiB
>>>>>
>>>>> # parted -l
>>>>> Model: Sabrent Rocket Q (nvme)
>>>>> Disk /dev/nvme0n1: 1000GB
>>>>> Sector size (logical/physical): 512B/512B
>>>>> Partition Table: gpt
>>>>> Disk Flags:
>>>>>
>>>>> Number  Start   End     Size    File system     Name  Flags
>>>>>  1      1049kB  1075MB  1074MB  fat32                 boot, esp
>>>>>  2      1075MB  966GB   965GB   btrfs
>>>>>  3      966GB   1000GB  34.4GB  linux-swap(v1)        swap
>>>>>
>>>>> What has changed in 5.11 which might cause this?
>>>>
>>>> This line:
>>>>
>>>>> BTRFS info (device nvme0n1p2): has skinny extents
>>>>> BTRFS error (device nvme0n1p2): device total_bytes should be at most
>>>>> 964757028864 but found 964770336768
>>>>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>>>
>>>> comes from 3a160a933111 ("btrfs: drop never met disk total bytes check
>>>> in verify_one_dev_extent"), which went into v5.11-rc1.
>>>>
>>>> IIUIC the device item's total_bytes and the block device inode's size
>>>> are off by 12M, so the check introduced in the above commit refuses to
>>>> mount the FS.
>>>>
>>>> Anand, any idea?
>>>
>>> OK, this is getting interesting:
>>> btrfs-progs sets the device's total_bytes at mkfs time and obtains it
>>> from ioctl(..., BLKGETSIZE64, ...);
>>>
>>> BLKGETSIZE64 does:
>>>
>>>     return put_u64(argp, i_size_read(bdev->bd_inode));
>>>
>>> The new check in read_one_dev() does:
>>>
>>>     u64 max_total_bytes = i_size_read(device->bdev->bd_inode);
>>>
>>>     if (device->total_bytes > max_total_bytes) {
>>>         btrfs_err(fs_info,
>>>     "device total_bytes should be at most %llu but found %llu",
>>>                   max_total_bytes, device->total_bytes);
>>>         return -EINVAL;
>>>     }
>>>
>>> So the bdev inode's i_size must have changed between mkfs and mount.
>
> That's likely; this is my development/testing machine and I've changed
> partitions (and btrfs RAID levels) around more than once since mkfs
> time. I can't remember if or how I've modified the fs to take account of
> this.
>
>>> Steven, can you please run:
>>>
>>>     blockdev --getsize64 /dev/nvme0n1p2
>
> # blockdev --getsize64 /dev/nvme0n1p2
> 964757028864
>
>> The kernel side verifies that the physical device size is not smaller
>> than the size recorded in the device item, so that makes sense. I was a
>> bit doubtful about the check but it can detect real problems or point
>> out some weirdness.
>
> Agreed. It's useful, but somewhat painful when it refuses to mount a
> root device after reboot.
>
>> The 12M delta is not big, but I'd expect that for a physical device it
>> should not change. Another possibility would be some kind of rounding
>> to a reasonable number, like 16M.
>
> Is there a simple way to fix this partition so that btrfs and the
> partition table agree on its size?
>
Unless someone's yelling at me that this is bad advice (David, Anand?),
I'd go for:

    btrfs filesystem resize max /

I've personally never shrunk a device, but looking at the code it will
write the block device inode's i_size to the device extents and possibly
relocate data.

Hope I didn't give bad advice,
	Johannes