** Description changed:

  This is a weird corner case. Extending an lvmraid(7) raid1 mirror for
  the second time seems to result in the mirror legs not being synced,
  *if* there is another raid1 mirror in the VG. The following reliably
  reproduces it for me:
  
  # quickly fill two 10G files with random data
  openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero \
    | dd bs=$((1024*1024*1024)) count=10 of=pv1.img iflag=fullblock
  openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero \
    | dd bs=$((1024*1024*1024)) count=10 of=pv2.img iflag=fullblock
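
  # any random data source should do here; reading /dev/urandom directly
  # is a simpler, though usually slower, alternative:
  #   dd if=/dev/urandom bs=1M count=10240 of=pv1.img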
  
  # change the loop device numbers if they are already in use (e.g. by snaps)
  losetup /dev/loop10 pv1.img
  losetup /dev/loop11 pv2.img
  pvcreate /dev/loop10
  pvcreate /dev/loop11
  vgcreate testvg /dev/loop10 /dev/loop11
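
  # losetup can also pick free loop devices automatically:
  #   LOOP1=$(losetup --find --show pv1.img)
  #   LOOP2=$(losetup --find --show pv2.img)
  # then use $LOOP1 and $LOOP2 in place of /dev/loop10 and /dev/loop11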
  
  lvcreate --type raid1 -L2G -n test testvg
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
  
  # wait for sync
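  # to wait without watching, something like this should work
  # (lvs reports sync_percent as e.g. "100.00" once the mirror is in sync):
  #   until [ "$(lvs --noheadings -o sync_percent testvg/test | tr -d ' ')" = "100.00" ]; do sleep 5; done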
  
  lvcreate --type raid1 -L2G -n test2 testvg
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
  
  # wait for sync
  
- # this will sync OK, observe kernel message for output from md subsys noting time taken
- # 
+ # the following will sync OK; watch the kernel log for the md subsystem's note of the time taken
+ #
  lvextend -L+2G testvg/test2
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
  
  # wait for sync
  
- # this will FAIL to sync, the sync will seem to complete instantly, e.g:
+ # the following will FAIL to sync; it will seem to complete instantly, e.g.:
  # Feb 02 15:22:50 asr-host kernel: md: resync of RAID array mdX
  # Feb 02 15:22:50 asr-host kernel: md: mdX: resync done.
  #
  lvextend -L+2G testvg/test2
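
  # the md messages quoted above land in the kernel log; to see them:
  #   journalctl -k | grep 'md:'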
  
  lvchange --syncaction check testvg/test2
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
  
  # observe error count
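  # the mismatch counter can also be read directly:
  #   lvs --noheadings -o raid_mismatch_count testvg/test2
  # a repair pass should then rewrite the out-of-sync blocks:
  #   lvchange --syncaction repair testvg/test2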
  
  This may alarm administrators unnecessarily ... :/
  
  For some reason, the precise sizes with which the LVs are created, and
  the amounts by which they are then extended, do appear to matter.
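
  To clean up afterwards, the usual teardown should suffice:

  lvremove -y testvg
  vgremove testvg
  pvremove /dev/loop10 /dev/loop11
  losetup -d /dev/loop10 /dev/loop11
  rm pv1.img pv2.img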
  
  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: lvm2 2.02.176-4.1ubuntu3
  ProcVersionSignature: Ubuntu 4.15.0-43.46-generic 4.15.18
  Uname: Linux 4.15.0-43-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.5
  Architecture: amd64
  Date: Sat Feb  2 15:33:16 2019
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_GB.UTF-8
   SHELL=/bin/bash
  SourcePackage: lvm2
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.lvm.lvm.conf: 2018-07-22T18:30:15.470358

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1814389

Title:
  Second extend of second lvmraid mirror does not sync

Status in lvm2 package in Ubuntu:
  New
