[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2023-07-15 Thread Bug Watch Updater
** Changed in: lvm2
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to e2fsprogs in Ubuntu.
https://bugs.launchpad.net/bugs/1817097

Title:
  pvmove causes file system corruption without notice upon move from 512
  -> 4096 logical block size devices

Status in lvm2:
  Fix Released
Status in Ubuntu on IBM z Systems:
  Fix Released
Status in e2fsprogs package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Invalid

Bug description:
  Problem Description---
  Summary
  ===
  Environment: IBM Z13 LPAR and z/VM Guest
   IBM Type: 2964 Model: 701 NC9
  OS:  Ubuntu 18.10 (GNU/Linux 4.18.0-13-generic s390x)
   Package: lvm2 version 2.02.176-4.1ubuntu3
  LVM: the pvmove operation corrupts the file system when moving to a 4096 (4k)
  logical block size device while the underlying devices use the default block
  size of 512 bytes.
  The problem is immediately reproducible.

  We see a real usability issue with data destruction as a consequence, which is
  not acceptable.
  We expect 'pvmove' to fail with an error in such situations to prevent file
  system destruction; the error might then be overridable with a force flag.

  
  Details
  ===
  After a 'pvmove' operation is run to move a physical volume onto an encrypted
  device with a 4096-byte logical block size, we experience file system corruption.
  The file system does not need to be mounted, but the problem surfaces differently
  if it is.

  Either the 'pvs' command run after the pvmove shows
/dev/LOOP_VG/LV: read failed after 0 of 1024 at 0: Invalid argument
/dev/LOOP_VG/LV: read failed after 0 of 1024 at 314507264: Invalid argument
/dev/LOOP_VG/LV: read failed after 0 of 1024 at 314564608: Invalid argument
/dev/LOOP_VG/LV: read failed after 0 of 1024 at 4096: Invalid argument

  or

  a subsequent mount attempt shows (after umount, if the file system had previously
  been mounted, as in our setup)
  mount: /mnt: wrong fs type, bad option, bad superblock on 
/dev/mapper/LOOP_VG-LV, missing codepage or helper program, or other error.

  A minimal LVM setup with one volume group containing one logical volume, based
  on a single physical volume, is sufficient to reproduce the problem. One more
  physical volume of the same size is needed as the target of the pvmove operation.

LV
 |
  VG: LOOP_VG [ ]
 |
  PV: /dev/loop0   -->   /dev/mapper/enc-loop
  ( backed by /dev/mapper/enc-loop )

  For this problem report the physical volumes are backed by loopback devices
  (losetup), but we have also seen the error on real SCSI multipath volumes, with
  and without cryptsetup mapper devices in use.

  
  Further discussion
  ==
  https://www.saout.de/pipermail/dm-crypt/2019-February/006078.html
  The problem does not occur on block devices with a native 4k block size,
  e.g. DASDs, or on file systems created with the mkfs -b 4096 option.

  
  Terminal output
  ===
  See attached file pvmove-error.txt

  
  Debug data
  ==
  pvmove was run with -dd (maximum debug level)
  See attached journal file.
   
  Contact Information = christian.r...@de.ibm.com 
   
  ---uname output---
  Linux system 4.18.0-13-generic #14-Ubuntu SMP Wed Dec 5 09:00:35 UTC 2018 
s390x s390x s390x GNU/Linux
   
  Machine Type = IBM Type: 2964 Model: 701 NC9 
   
  ---Debugger---
  A debugger is not configured
   
  ---Steps to Reproduce---
   1.) Create two image files of 500MB in size
  and set up two loopback devices with 'losetup -fP FILE'
  2.) Create one physical volume and one volume group 'LOOP_VG',
  and one logical volume 'LV'
  Run:
  pvcreate /dev/loop0
  vgcreate LOOP_VG /dev/loop0
  lvcreate -L 300MB LOOP_VG -n LV /dev/loop0
  3.) Create a file system on the logical volume device:
  mkfs.ext4 /dev/mapper/LOOP_VG-LV
  4.) Mount the file system created in the previous step on an empty available
  directory:
  mount /dev/mapper/LOOP_VG-LV /mnt
  5.) Set up a second physical volume, this time encrypted with LUKS2,
  and open the volume to make it available:
  cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/loop1
  cryptsetup luksOpen /dev/loop1 enc-loop
  6.) Create the second physical volume, and add it to the LOOP_VG
  pvcreate /dev/mapper/enc-loop
  vgextend LOOP_VG /dev/mapper/enc-loop
  7.) Ensure the new physical volume is part of the volume group:
  pvs
  8.) Move the /dev/loop0 volume onto the encrypted volume with maximum debug 
option:
  pvmove -dd /dev/loop0 /dev/mapper/enc-loop
  9.) The previous step succeeds, but corrupts the file system on the logical
  volume.
   We expect an error here; a command line flag might be offered to override it,
   since the corruption does not cause actual data loss.
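   For illustration, a minimal pre-check of this mismatch (using the loop device and
   mapper names from the steps above; blockdev is part of util-linux) could compare
   the logical sector sizes of the source and target PVs before the move:
  blockdev --getss /dev/loop0             # source PV, reports 512
  blockdev --getss /dev/mapper/enc-loop   # target PV, reports 4096 after luksFormat --sector-size 4096
   A difference between these two values is exactly the situation in which pvmove
   silently invalidates the file system.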
  
   
  Userspace tool common name: pvmove 
   
  The userspace tool has the following bit modes: 64bit

[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-05-28 Thread Frank Heimes
Updated package landed in release pocket - changing project entry to Fix
Released.

** Changed in: ubuntu-z-systems
   Status: Incomplete => Fix Released

Status in lvm2:
  Confirmed
Status in Ubuntu on IBM z Systems:
  Fix Released
Status in e2fsprogs package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Invalid


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-05-28 Thread Launchpad Bug Tracker
This bug was fixed in the package e2fsprogs - 1.45.1-1ubuntu1

---
e2fsprogs (1.45.1-1ubuntu1) eoan; urgency=medium

  * Use 4k blocksize in all ext4 mke2fs.conf such that lvm migration
between non-4k PVs and 4k PVs works irrespective of the volume
size. LP: #1817097

 -- Dimitri John Ledkov   Wed, 15 May 2019 16:15:22
+0200
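
As a rough check of the new default (volume names taken from the reproducer in the
bug description, not from the changelog), re-formatting the small LV and inspecting
it with tune2fs should now report a 4k block size:

  mkfs.ext4 /dev/mapper/LOOP_VG-LV
  tune2fs -l /dev/mapper/LOOP_VG-LV | grep 'Block size'   # expected: 4096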

** Changed in: e2fsprogs (Ubuntu)
   Status: Fix Committed => Fix Released

Status in lvm2:
  Confirmed
Status in Ubuntu on IBM z Systems:
  Incomplete
Status in e2fsprogs package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Invalid


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-05-24 Thread Frank Heimes
Modified version e2fsprogs 1.45.1-1ubuntu1 is still in eoan-proposed.
Once it has left proposed, this ticket will be changed to Fix Released
(e2fsprogs and project).

Status in lvm2:
  Confirmed
Status in Ubuntu on IBM z Systems:
  Incomplete
Status in e2fsprogs package in Ubuntu:
  Fix Committed
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Invalid


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-05-15 Thread Frank Heimes
With Eoan we now always default to 4k, hence Fix Released in e2fsprogs
and the project.

Status in lvm2:
  Confirmed
Status in Ubuntu on IBM z Systems:
  Incomplete
Status in e2fsprogs package in Ubuntu:
  Fix Committed
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Invalid


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-05-15 Thread Dimitri John Ledkov
** Also affects: e2fsprogs (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: lvm2 (Ubuntu)
   Status: Incomplete => Invalid

** Changed in: e2fsprogs (Ubuntu)
   Status: New => Fix Committed

Status in lvm2:
  Confirmed
Status in Ubuntu on IBM z Systems:
  Incomplete
Status in e2fsprogs package in Ubuntu:
  Fix Committed
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Invalid


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-03-13 Thread Frank Heimes
Decreasing importance from critical to medium, because the bug is known to the
community - it is already discussed in RH Bug 1669751 and here
https://www.redhat.com/archives/linux-lvm/2019-February/msg00018.html /
https://www.redhat.com/archives/linux-lvm/2019-March/msg0.html - and it is neither
platform specific nor specific to a certain Ubuntu release.
On top of that, there are easy ways to avoid this situation, like explicitly
setting / forcing the sector size to 4096 bytes, or using a bigger image size
(>512 MB, which is not uncommon) so that the block size default changes to 4k anyway.
A patch has already been suggested upstream:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=dd6ff9e3a75801fc5c6166aa0983fa8df098e91a
Once that patch is accepted upstream and picked up in a new lvm2 version, it will
eventually land in Ubuntu, too.
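
As an illustration of the second avoidance path (the file names below are
placeholders, not taken from the attached debug data): backing files of at least
512 MB no longer fall into the mke2fs 'small' profile, so mkfs.ext4 defaults to a
4k block size and a later pvmove onto a 4k-sector PV should no longer hit this
corruption:

truncate -s 1G disk0.img disk1.img
losetup -fP disk0.img
losetup -fP disk1.img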


** Changed in: ubuntu-z-systems
   Importance: Critical => Medium

Status in lvm2:
  Confirmed
Status in Ubuntu on IBM z Systems:
  Incomplete
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Incomplete


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-03-11 Thread Bug Watch Updater
Launchpad has imported 6 comments from the remote bug at
https://bugzilla.redhat.com/show_bug.cgi?id=1669751.

If you reply to an imported comment from within Launchpad, your comment
will be sent to the remote bug automatically. Read more about
Launchpad's inter-bugtracker facilities at
https://help.launchpad.net/InterBugTracking.


On 2019-01-26T17:38:46+00:00 nkshirsa wrote:

Description of problem:

lvm should not allow extending an LV onto a PV with a different sector size
than the existing PVs making up the LV, since the FS on the LV no longer
mounts once LVM adds in the new PV and extends the LV.


How reproducible:
Steps to Reproduce:

** Device: sdc (using the device with default sector size of 512)

# blockdev --report /dev/sdc
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192   512  4096          0      1073741824   /dev/sdc

** LVM is created with the default sector size of 512.

# blockdev --report /dev/mapper/testvg-testlv 
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192   512  4096          0      1069547520   /dev/mapper/testvg-testlv

** The filesystem will also pick up 512 sector size.

# mkfs.xfs /dev/mapper/testvg-testlv
meta-data=/dev/mapper/testvg-testlv isize=512    agcount=4, agsize=65280 blks
         =                          sectsz=512   attr=2, projid32bit=1
         =                          crc=1        finobt=0, sparse=0
data     =                          bsize=4096   blocks=261120, imaxpct=25
         =                          sunit=0      swidth=0 blks
naming   =version 2                 bsize=4096   ascii-ci=0 ftype=1
log      =internal log              bsize=4096   blocks=855, version=2
         =                          sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0

** Now we will mount it

# xfs_info /test
meta-data=/dev/mapper/testvg-testlv isize=512    agcount=4, agsize=65280 blks
         =                          sectsz=512   attr=2, projid32bit=1
         =                          crc=1        finobt=0 spinodes=0
data     =                          bsize=4096   blocks=261120, imaxpct=25
         =                          sunit=0      swidth=0 blks
naming   =version 2                 bsize=4096   ascii-ci=0 ftype=1
log      =internal                  bsize=4096   blocks=855, version=2
         =                          sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0

** Let's extend it with a PV with a sector size of 4096:

# modprobe scsi_debug sector_size=4096 dev_size_mb=512

# fdisk -l /dev/sdd
 
Disk /dev/sdd: 536 MB, 536870912 bytes, 131072 sectors
Units = sectors of 1 * 4096 = 4096 bytes <==
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 262144 bytes
 
# blockdev --report /dev/sdd
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192  4096  4096          0       536870912   /dev/sdd

# vgextend testvg /dev/sdd
  Physical volume "/dev/sdd" successfully created
  Volume group "testvg" successfully extended

# lvextend -l +100%FREE /dev/mapper/testvg-testlv
  Size of logical volume testvg/testlv changed from 1020.00 MiB (255 extents) 
to 1.49 GiB (382 extents).
  Logical volume testlv successfully resized.

# umount /test

# mount /dev/mapper/testvg-testlv /test
mount: mount /dev/mapper/testvg-testlv on /test failed: Function not 
implemented <===

# dmesg | grep -i dm-2

[  477.517515] XFS (dm-2): Unmounting Filesystem
[  486.905933] XFS (dm-2): device supports 4096 byte sectors (not 512) 
<

The sector size of the lv is now 4096.
# blockdev --report /dev/mapper/testvg-testlv 
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192  4096  4096          0      1602224128   /dev/mapper/testvg-testlv


Expected results:
LVM should fail the lvextend if the sector size is different from that of the existing PVs.


Additional info:
Discussed with Zdenek during LVM meeting in Brno

Reply at:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/0
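
A simple way to spot such a mismatch up front (sketch only, using standard pvs and
blockdev invocations) is to list the logical sector size of every PV next to its VG:

pvs --noheadings -o pv_name,vg_name | while read pv vg; do
    echo "$vg $pv: $(blockdev --getss "$pv") bytes"
done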


On 2019-01-28T15:53:23+00:00 teigland wrote:

Should we just require all PVs in the VG to have the same sector size?

Reply at:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1817097/comments/1


On 2019-01-28T16:46:28+00:00 zkabelac wrote:

Basically that's what we agreed on in the meeting - since we don't know
yet how to handle PVs with different sector sizes.

A short-term fix could be to not allow that to happen at creation time.

But there are already users who have such VGs created - so lvm2 can't just
declare such a VG invalid and disable access to it...

So I'd probably see something similar to what we did for 'mirrorlog' -
add an lvm.conf option to disable creation that is respected on vgcreate 

[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-03-11 Thread Dimitri John Ledkov
/etc/mke2fs.conf:
[defaults]
blocksize = 4096
[fs_types]
small = {
blocksize = 1024
inode_size = 128
inode_ratio = 4096
}

We default to 4k, unless one is formatting small filesystems, which per the manpage:
"If the filesystem size is greater than or equal to 3 but less than 512
megabytes, mke2fs(8) will use the filesystem type small."

And in your tests you do appear to use 500 MiB images.

I wonder if we should bump even small ext4 filesystems to use 4k sector
sizes.
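
One possible way to do that (a sketch of the idea only, not the exact change that
was later uploaded) would be to give the small profile in /etc/mke2fs.conf the same
4k blocksize as the default:

[fs_types]
small = {
    blocksize = 4096
    inode_size = 128
    inode_ratio = 4096
}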

Status in lvm2:
  Unknown
Status in Ubuntu on IBM z Systems:
  Incomplete
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Incomplete


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-03-11 Thread Dimitri John Ledkov
This is a well-known upstream issue/bug.
This is not specific to s390x, Ubuntu 18.10, or any other Ubuntu release.
There is no data loss -> one can execute the pvmove operation in reverse (or, I
guess, onto any 512-sector-size PV) to mount the filesystems again.
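
In the reproducer from the bug description that recovery would look roughly like
this (device and volume names assumed from there):

$ sudo pvmove /dev/mapper/enc-loop /dev/loop0   # move the data back to the 512-byte-sector PV
$ sudo mount /dev/mapper/LOOP_VG-LV /mnt        # the file system is readable again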

Thus this is not critical at all.

Also, I am failing to understand what is the expectation for Canonical
to do, w.r.t. this bug report?

If you want support, as a workaround one can force using 4k sizes, with
vgcreate and ext4, then moving volumes to/from 512/4k physical volumes
appears to work seamlessly:

$ sudo vgcreate --physicalextentsize 4k newtestvg /dev/...
$ sudo mkfs.ext4 -b 4096 /dev/mapper/...

For a more general solution, do create stand-alone new VGs/LVs/FSs, and
migrate data over using higher-level tools - e.g. dump/restore, rsync,
etc.

But note, that launchpad should not be used for support requests. Please
use your UA account (salesforce) for support request for your production
systems.

This is discussed upstream, where they are trying to introduce a soft
check to prevent moving data across.
https://bugzilla.redhat.com/show_bug.cgi?id=1669751 But it's not a real
solution, just a weak safety check, as one can still force-create an ext4 fs
with either 512 or 4k and move the volume to the "wrong" size. Ideally it
would be user friendly if moving to/from mixed sector sizes would just
work(tm), but that's unlikely to happen upstream, thus it is wont-fix
downstream too.

Was there anything in particular that you were expecting for us to
change?

We could change the cloud-images (if they don't already), installers
(i.e. d-i / subiquity) or the utils (i.e. vgcreate, mkfs.ext4) to
default to 4k minimum sector sizes. But at the moment, these utils try
to guess the sector sizes based on heuristics at creation time, and
obviously get it "wrong" if the underlying device is swapped away from
under their feet post creation time. Thus this is expected.

References:
The upstream bug report is https://bugzilla.redhat.com/show_bug.cgi?id=1669751
The upstream overridable weak safety-net check is 
https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=dd6ff9e3a75801fc5c6166aa0983fa8df098e91a
And that will make it into ubuntu eventually, when released in a stable lvm2 
release and integration into ubuntu.

Please remove severity critical
Please remove target ubuntu 18.10
Please provide explanation as to why this issue was filed


** Bug watch added: Red Hat Bugzilla #1669751
   https://bugzilla.redhat.com/show_bug.cgi?id=1669751

** Changed in: linux (Ubuntu)
   Status: New => Invalid

** Changed in: ubuntu-z-systems
   Status: New => Incomplete

** Changed in: lvm2 (Ubuntu)
   Status: New => Incomplete

** Also affects: lvm2 via
   https://bugzilla.redhat.com/show_bug.cgi?id=1669751
   Importance: Unknown
   Status: Unknown

Status in lvm2:
  Unknown
Status in Ubuntu on IBM z Systems:
  Incomplete
Status in linux package in Ubuntu:
  Invalid
Status in lvm2 package in Ubuntu:
  Incomplete


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-03-11 Thread Dimitri John Ledkov
Ok, reproduced this on x86_64 with raw files whose sizes are multiples of
4k.

This is not an architecture-specific issue.

Status in Ubuntu on IBM z Systems:
  New
Status in linux package in Ubuntu:
  New
Status in lvm2 package in Ubuntu:
  New


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-03-11 Thread Dimitri John Ledkov
I see that this bug was created with Ubuntu 18.10 (judging by the tags).
I am trying to reproduce the issue on Ubuntu 19.04 (current development 
release).

I am failing to produce a mixed blocksize cryptsetup device:
$ sudo cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/loop1

is failing for me with:
"Device size is not aligned to the requested sector size."

And on this machine, I do not have access to native 4k and non-4k drives
at the same time. Let me get a better machine to debug this further.
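
(A guess at a workaround for the alignment error, not verified here: make the
backing file an exact multiple of 4096 bytes in size before setting up the loop
device, e.g.

truncate -s $((128000 * 4096)) disk1.img   # 500 MiB, 4k-aligned; file name is a placeholder
losetup -fP disk1.img                      # assuming it shows up as /dev/loop1
cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/loop1

This matches the later observation that raw files sized in multiples of 4k do
reproduce the issue.)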

Status in Ubuntu on IBM z Systems:
  New
Status in linux package in Ubuntu:
  New
Status in lvm2 package in Ubuntu:
  New


[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-02-22 Thread Dimitri John Ledkov
** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

[Touch-packages] [Bug 1817097] Re: pvmove causes file system corruption without notice upon move from 512 -> 4096 logical block size devices

2019-02-21 Thread Frank Heimes
** Package changed: linux (Ubuntu) => lvm2 (Ubuntu)

** Also affects: ubuntu-z-systems
   Importance: Undecided
   Status: New

** Changed in: ubuntu-z-systems
 Assignee: (unassigned) => Canonical Foundations Team (canonical-foundations)

** Changed in: ubuntu-z-systems
   Importance: Undecided => Critical
