Launchpad has imported 6 comments from the remote bug at https://bugzilla.redhat.com/show_bug.cgi?id=1669751.
If you reply to an imported comment from within Launchpad, your comment will be sent to the remote bug automatically. Read more about Launchpad's inter-bugtracker facilities at https://help.launchpad.net/InterBugTracking.

------------------------------------------------------------------------
On 2019-01-26T17:38:46+00:00 nkshirsa wrote:

Description of problem:

lvm should not allow extending an LV with a PV whose sector size differs from that of the PVs already making up the LV: once LVM adds the new PV and extends the LV, the filesystem on the LV no longer mounts.

How reproducible:

Steps to Reproduce:

** Device: sdc (a device with the default sector size of 512)

# blockdev --report /dev/sdc
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192   512  4096          0      1073741824   /dev/sdc

** The LV is created with the default sector size of 512.

# blockdev --report /dev/mapper/testvg-testlv
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192   512  4096          0      1069547520   /dev/mapper/testvg-testlv

** The filesystem will also pick up the 512-byte sector size.
# mkfs.xfs /dev/mapper/testvg-testlv
meta-data=/dev/mapper/testvg-testlv isize=512    agcount=4, agsize=65280 blks
         =                          sectsz=512   attr=2, projid32bit=1
         =                          crc=1        finobt=0, sparse=0
data     =                          bsize=4096   blocks=261120, imaxpct=25
         =                          sunit=0      swidth=0 blks
naming   =version 2                 bsize=4096   ascii-ci=0 ftype=1
log      =internal log              bsize=4096   blocks=855, version=2
         =                          sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0

** Now we will mount it.

# xfs_info /test
meta-data=/dev/mapper/testvg-testlv isize=512    agcount=4, agsize=65280 blks
         =                          sectsz=512   attr=2, projid32bit=1
         =                          crc=1        finobt=0 spinodes=0
data     =                          bsize=4096   blocks=261120, imaxpct=25
         =                          sunit=0      swidth=0 blks
naming   =version 2                 bsize=4096   ascii-ci=0 ftype=1
log      =internal                  bsize=4096   blocks=855, version=2
         =                          sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0

** Let's extend it with a PV with a sector size of 4096:

# modprobe scsi_debug sector_size=4096 dev_size_mb=512

# fdisk -l /dev/sdd
Disk /dev/sdd: 536 MB, 536870912 bytes, 131072 sectors
Units = sectors of 1 * 4096 = 4096 bytes   <==============
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 262144 bytes

# blockdev --report /dev/sdd
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192  4096  4096          0       536870912   /dev/sdd

# vgextend testvg /dev/sdd
  Physical volume "/dev/sdd" successfully created
  Volume group "testvg" successfully extended

# lvextend -l +100%FREE /dev/mapper/testvg-testlv
  Size of logical volume testvg/testlv changed from 1020.00 MiB (255 extents) to 1.49 GiB (382 extents).
  Logical volume testlv successfully resized.

# umount /test
# mount /dev/mapper/testvg-testlv /test
mount: mount /dev/mapper/testvg-testlv on /test failed: Function not implemented   <===========

# dmesg | grep -i dm-2
[  477.517515] XFS (dm-2): Unmounting Filesystem
[  486.905933] XFS (dm-2): device supports 4096 byte sectors (not 512)   <============

The sector size of the LV is now 4096.
# blockdev --report /dev/mapper/testvg-testlv
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192  4096  4096          0      1602224128   /dev/mapper/testvg-testlv

Expected results:

LVM should fail the lvextend if the new PV's sector size differs from that of the existing PVs.

Additional info:

Discussed with Zdenek during the LVM meeting in Brno.

Reply at: https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/0

------------------------------------------------------------------------
On 2019-01-28T15:53:23+00:00 teigland wrote:

Should we just require all PVs in the VG to have the same sector size?

Reply at: https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/1

------------------------------------------------------------------------
On 2019-01-28T16:46:28+00:00 zkabelac wrote:

Basically that's what we agreed on in the meeting, since we don't yet know how to handle PVs with different sector sizes; a short-term fix could be to prevent this from happening at creation time.

But there are already users who have such VGs, so lvm2 can't simply declare such a VG invalid and disable access to it. I'd probably favor something similar to what we did for 'mirrorlog': add an lvm.conf option to disable creation, respected at vgcreate time.

Reply at: https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/2

------------------------------------------------------------------------
On 2019-02-25T15:51:54+00:00 teigland wrote:

Another report of this problem:
https://www.redhat.com/archives/linux-lvm/2019-February/msg00018.html

Reply at: https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/8

------------------------------------------------------------------------
On 2019-02-25T18:25:05+00:00 nsoffer wrote:

Interesting, I asked about this here a few weeks ago:
https://www.redhat.com/archives/linux-lvm/2019-February/msg00002.html

Based on the info in this bug, it looks like RHV should care only about the logical block size when extending or creating a VG.
David, Zdenek, what do you think?

Reply at: https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/9

------------------------------------------------------------------------
On 2019-03-05T22:43:42+00:00 teigland wrote:

Here's an initial, lightly tested solution to the VG-consistency part. It does not address the issue of checking that a given LV is used with a consistent sector size. Perhaps if a user overrides the VG consistency check, it should be their responsibility to ensure LVs are consistent.

https://sourceware.org/git/?p=lvm2.git;a=commit;h=dd6ff9e3a75801fc5c6166aa0983fa8df098e91a

    vgcreate/vgextend: check for inconsistent logical block sizes

    When creating or extending a VG, check if the PVs have inconsistent
    logical block sizes (value from the BLKSSZGET ioctl). If so, return
    an error. The error can be changed to a warning, allowing the
    command to proceed with the mixed values, by setting
    lvm.conf allow_mixed_logical_block_sizes=1.

Reply at: https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/10

** Changed in: lvm2
   Status: Unknown => Confirmed

** Changed in: lvm2
   Importance: Unknown => Medium

--
You received this bug notification because you are a member of Kernel Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1817097

Title:
  pvmove causes file system corruption without notice upon move from
  512 -> 4096 logical block size devices

Status in lvm2: Confirmed
Status in Ubuntu on IBM z Systems: Incomplete
Status in linux package in Ubuntu: Invalid
Status in lvm2 package in Ubuntu: Incomplete

Bug description:

Problem Description
---
Summary
=======

Environment: IBM Z13 LPAR and z/VM Guest
IBM Type: 2964 Model: 701 NC9
OS: Ubuntu 18.10 (GNU/Linux 4.18.0-13-generic s390x)
Package: lvm2 version 2.02.176-4.1ubuntu3

LVM: a pvmove operation corrupts the file system when the target uses a 4096-byte (4k) logical block size while the underlying devices use the default of 512 bytes.

The problem is immediately reproducible.
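The mismatch that the vgcreate/vgextend check described above detects can also be spotted by hand from 'blockdev --report' output. A minimal sketch, using sample rows copied from the first comment in this thread (the awk column index assumes the report layout shown there; on a live system you would pipe the output of `blockdev --report /dev/sdc /dev/sdd` instead):

```shell
# Sample 'blockdev --report' rows from the first comment above: one
# 512-byte and one 4096-byte device.
report='RO RA SSZ BSZ StartSec Size Device
rw 8192 512 4096 0 1073741824 /dev/sdc
rw 8192 4096 4096 0 536870912 /dev/sdd'

# SSZ (logical sector size) is the third column; more than one distinct
# value means the VG would mix logical block sizes.
distinct=$(printf '%s\n' "$report" | awk 'NR > 1 { print $3 }' | sort -u | wc -l)
if [ "$distinct" -gt 1 ]; then
    echo "mixed logical sector sizes detected"
fi
```

With the fix referenced above, vgextend itself refuses such a mix unless lvm.conf allow_mixed_logical_block_sizes=1 is set.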
We see a real usability issue with data destruction as a consequence, which is not acceptable. We expect 'pvmove' to fail with an error in such situations to prevent filesystem destruction; the error might possibly be overridden by a force flag.

Details
=======

After a 'pvmove' operation is run to move a physical volume onto an encrypted device with a 4096-byte logical block size, we experience a file system corruption. The file system does not need to be mounted, but the problem surfaces differently if it is.

Either the 'pvs' command after the pvmove shows

  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 0: Invalid argument
  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314507264: Invalid argument
  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314564608: Invalid argument
  /dev/LOOP_VG/LV: read failed after 0 of 1024 at 4096: Invalid argument

or a subsequent mount shows (after umount, if the fs had previously been mounted as in our setup)

  mount: /mnt: wrong fs type, bad option, bad superblock on /dev/mapper/LOOP_VG-LV, missing codepage or helper program, or other error.

A minimal LVM setup with one volume group containing one logical volume, based on one physical volume, is sufficient to raise the problem. One more physical volume of the same size is needed as the target of the pvmove operation.

  LV | VG: LOOP_VG | PV: /dev/loop0 --> /dev/mapper/enc-loop

The physical volumes are backed by loopback devices (losetup) to keep this problem report self-contained, but we have also seen the error on real SCSI multipath volumes, with and without cryptsetup mapper devices in use.

Further discussion
==================

https://www.saout.de/pipermail/dm-crypt/2019-February/006078.html

The problem does not occur on block devices with a native size of 4k, e.g. DASDs, or on file systems created with the mkfs -b 4096 option.

Terminal output
===============

See the attached file pvmove-error.txt.

Debug data
==========

pvmove was run with -dddddd (maximum debug level). See the attached journal file.
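The 'read failed after 0 of 1024 ... Invalid argument' errors above are consistent with the block-size mismatch: the failing offsets are all 4096-aligned, but the reads are 1024 bytes, and 1024 is not a multiple of the device's new 4096-byte logical block size, so the kernel rejects them with EINVAL. A quick arithmetic sanity check (a sketch; the offsets are the ones from the log above):

```shell
# Offsets from the failing 'pvs' reads logged above: each one is a
# multiple of 4096, so alignment is not the problem.
for off in 0 314507264 314564608 4096; do
    echo "offset $off: aligned=$(( off % 4096 == 0 ))"
done

# The reads fail because of their length: 1024 bytes is not a multiple
# of the 4096-byte logical block size.
echo "1024 is a multiple of 4096: $(( 1024 % 4096 == 0 ))"
```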
Contact Information = christian.r...@de.ibm.com

---uname output---
Linux system 4.18.0-13-generic #14-Ubuntu SMP Wed Dec 5 09:00:35 UTC 2018 s390x s390x s390x GNU/Linux

Machine Type = IBM Type: 2964 Model: 701 NC9

---Debugger---
A debugger is not configured

---Steps to Reproduce---

1.) Create two image files of 500 MB in size and set up two loopback devices with 'losetup -fP FILE'.

2.) Create one physical volume, one volume group 'LOOP_VG', and one logical volume 'LV':

    pvcreate /dev/loop0
    vgcreate LOOP_VG /dev/loop0
    lvcreate -L 300MB LOOP_VG -n LV /dev/loop0

3.) Create a file system on the logical volume device:

    mkfs.ext4 /dev/mapper/LOOP_VG-LV

4.) Mount the file system created in the previous step on some empty available directory:

    mount /dev/mapper/LOOP_VG-LV /mnt

5.) Set up a second physical volume, this time encrypted with LUKS2, and open the volume to make it available:

    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/loop1
    cryptsetup luksOpen /dev/loop1 enc-loop

6.) Create the second physical volume and add it to LOOP_VG:

    pvcreate /dev/mapper/enc-loop
    vgextend LOOP_VG /dev/mapper/enc-loop

7.) Ensure the new physical volume is part of the volume group:

    pvs

8.) Move the /dev/loop0 volume onto the encrypted volume with the maximum debug option:

    pvmove -dddddd /dev/loop0 /dev/mapper/enc-loop

9.) The previous step succeeds, but corrupts the file system on the logical volume. We expect an error here; a command line flag might be provided to override it for cases where the corruption does not imply data loss.

Userspace tool common name: pvmove
The userspace tool has the following bit modes: 64bit
Userspace rpm: lvm2 in version 2.02.176-4.1ubuntu3
Userspace tool obtained from project website: na

*Additional Instructions for christian.r...@de.ibm.com:
-Attach ltrace and strace of userspace application.
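A pre-flight guard in the spirit of the expected behaviour above could compare logical block sizes before moving. A hypothetical sketch ('check_move' is illustrative and not part of lvm2; on a real system both size arguments would come from `blockdev --getss DEV`, the stubbed values below are from this report's scenario):

```shell
# Refuse a hypothetical pvmove when the source and destination logical
# block sizes differ. Sizes are passed as arguments so the policy can
# be exercised without real block devices.
check_move() {
    src_ss=$1
    dst_ss=$2
    if [ "$src_ss" != "$dst_ss" ]; then
        echo "refusing move: logical block size mismatch ($src_ss vs $dst_ss)"
        return 1
    fi
    echo "ok: logical block sizes match ($src_ss)"
}

# This report's scenario: /dev/loop0 is 512, the LUKS2 mapping is 4096.
check_move 512 4096 || true
```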
To manage notifications about this bug go to:
https://bugs.launchpad.net/lvm2/+bug/1817097/+subscriptions

--
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp