[Touch-packages] [Bug 1918970] Re: Unable to netboot Ubuntu 18.04 and older on an IBM Z DPM Partition - no manual nor automatic qeth device configuration

2021-03-31 Thread Lee Trager
** Changed in: linux (Ubuntu)
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to initramfs-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1918970

Title:
  Unable to netboot Ubuntu 18.04 and older on an IBM Z DPM Partition -
  no manual nor automatic qeth device configuration

Status in MAAS:
  New
Status in Ubuntu on IBM z Systems:
  New
Status in initramfs-tools package in Ubuntu:
  New
Status in linux package in Ubuntu:
  New
Status in s390-tools package in Ubuntu:
  New

Bug description:
  I tried to deploy Ubuntu 18.04 with the GA-18.04 kernel on an IBM Z14
  DPM Partition. The initrd fails to bring up the network and thus fails to
  boot in MAAS. I haven't tried older versions of Ubuntu but suspect
  they also have the same bug.
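
  For context, this is the kind of qeth setup the initramfs never
  performs here. On a booted system the device is normally brought
  online with chzdev from s390-tools, or directly through sysfs (a
  sketch only; the 0.0.0600-0.0.0602 subchannel triplet is
  hypothetical):

  # group and enable the qeth read/write/data subchannels
  $ chzdev -e qeth 0.0.0600:0.0.0601:0.0.0602 layer2=1
  # or at the raw sysfs level:
  $ echo 0.0.0600,0.0.0601,0.0.0602 > /sys/bus/ccwgroup/drivers/qeth/group
  $ echo 1 > /sys/bus/ccwgroup/devices/0.0.0600/online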

  mount: mounting /dev on /root/dev failed: No such file or directory
  done.
  mount: mounting /run on /root/run failed: No such file or directory
  run-init: current directory on the same filesystem as the root: error 0
  Target filesystem doesn't have requested /sbin/init.
  run-init: current directory on the same filesystem as the root: error 0
  run-init: current directory on the same filesystem as the root: error 0
  run-init: current directory on the same filesystem as the root: error 0
  run-init: current directory on the same filesystem as the root: error 0
  run-init: current directory on the same filesystem as the root: error 0
  chvt: can't open console
  No init found. Try passing init= bootarg.
  Couldn't get a file descriptor referring to the console
  /scripts/panic/console_setup: line 133: can't create /dev/tty1: No such device or address
  /scripts/panic/console_setup: line 1: can't open /dev/tty1: No such device or address
  /scripts/panic/console_setup: line 1: can't create /dev/tty2: No such device or address
  /scripts/panic/console_setup: line 1: can't open /dev/tty2: No such device or address
  /scripts/panic/console_setup: line 1: can't create /dev/tty3: No such device or address
  /scripts/panic/console_setup: line 1: can't open /dev/tty3: No such device or address
  /scripts/panic/console_setup: line 1: can't create /dev/tty4: No such device or address
  /scripts/panic/console_setup: line 1: can't open /dev/tty4: No such device or address
  /scripts/panic/console_setup: line 1: can't create /dev/tty5: No such device or address
  /scripts/panic/console_setup: line 1: can't open /dev/tty5: No such device or address
  /scripts/panic/console_setup: line 1: can't create /dev/tty6: No such device or address
  /scripts/panic/console_setup: line 1: can't open /dev/tty6: No such device or address
   
   
  BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3.3) built-in shell (ash)
  Enter 'help' for a list of built-in commands.
   
  (initramfs)
  [   78.114530] random: crng init done
  [   78.114538] random: 7 urandom warning(s) missed due to ratelimiting

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1918970/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1904793] Re: upower abruptly thinks battery has gone to 1% and hibernates

2021-01-12 Thread Lee Trager
Upstream bug reported at
https://gitlab.freedesktop.org/upower/upower/-/issues/136

** Bug watch added: gitlab.freedesktop.org/upower/upower/-/issues #136
   https://gitlab.freedesktop.org/upower/upower/-/issues/136

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to upower in Ubuntu.
https://bugs.launchpad.net/bugs/1904793

Title:
  upower abruptly thinks battery has gone to 1% and hibernates

Status in upower package in Ubuntu:
  New

Bug description:
  Whenever I go on battery, after 20-30 minutes upower will very abruptly
  think my battery is at 1% and force my laptop to hibernate. This seems
  to happen at random times; I've seen it when my battery was reported
  to be 90%, 76%, 45%, 25%, etc. If I try to resume, Ubuntu locks up,
  forcing me to hard reset the machine. I suspect this is because upower
  thinks my battery is still at 1% when it's not. My laptop's firmware
  correctly reports the battery level and shows that I have plenty of
  power remaining. The last few times this happened I kept powertop up,
  which shows that I do have plenty of power even when upower thinks I
  have none. Essentially this makes my laptop unusable on battery.

  Laptop: Lenovo X1 Carbon Extreme Gen 2

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: upower 0.99.11-2
  ProcVersionSignature: Ubuntu 5.8.0-29.31-generic 5.8.14
  Uname: Linux 5.8.0-29-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair nvidia_modeset 
nvidia
  ApportVersion: 2.20.11-0ubuntu50.1
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Wed Nov 18 13:59:24 2020
  InstallationDate: Installed on 2019-12-29 (325 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Alpha amd64 (20191220)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: upower
  UpgradeStatus: Upgraded to groovy on 2020-10-23 (25 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/upower/+bug/1904793/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1904793] Re: upower abruptly thinks battery has gone to 1% and hibernates

2020-11-30 Thread Lee Trager
I've been monitoring upower and I think the problem is that it is
incorrectly detecting the number of batteries in my laptop. I only have
one, yet upower detects 5 power sources. One of them is a DisplayDevice,
which normally shows the same amount of battery as my system battery.
Sometimes that drops to 1% or 0%. I've had my laptop plugged in all day;
here are two upower -d dumps from within the last 5 minutes. Notice in
the first dump /org/freedesktop/UPower/devices/DisplayDevice has 0%
battery despite the laptop being plugged in. In the second it has the
same energy, energy-full, and percentage as
/org/freedesktop/UPower/devices/battery_BAT0.

$ upower -d
Device: /org/freedesktop/UPower/devices/line_power_AC
  native-path:  AC
  power supply: yes
  updated:  Mon 30 Nov 2020 02:36:20 PM PST (13554 seconds ago)
  has history:  no
  has statistics:   no
  line-power
warning-level:   none
online:  yes
icon-name:  'ac-adapter-symbolic'

Device: /org/freedesktop/UPower/devices/battery_BAT0
  native-path:  BAT0
  vendor:   Celxpert
  model:5B10V98091
  serial:   1695
  power supply: yes
  updated:  Mon 30 Nov 2020 06:20:22 PM PST (112 seconds ago)
  has history:  yes
  has statistics:   yes
  battery
present: yes
rechargeable:yes
state:   charging
warning-level:   none
energy:  0 Wh
energy-empty:0 Wh
energy-full: 654.79 Wh
energy-full-design:  80.4 Wh
energy-rate: 0 W
voltage: 7.189 V
percentage:  0%
capacity:88.4453%
technology:  lithium-polymer
icon-name:  'battery-caution-charging-symbolic'
  History (charge):
1606789222  0.000   charging

Device: /org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o001
  native-path:  ucsi-source-psy-USBC000:001
  power supply: yes
  updated:  Fri 27 Nov 2020 02:59:19 PM PST (271375 seconds ago)
  has history:  no
  has statistics:   no
  line-power
warning-level:   none
online:  no
icon-name:  'ac-adapter-symbolic'

Device: /org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o002
  native-path:  ucsi-source-psy-USBC000:002
  power supply: yes
  updated:  Fri 27 Nov 2020 02:59:18 PM PST (271376 seconds ago)
  has history:  no
  has statistics:   no
  line-power
warning-level:   none
online:  no
icon-name:  'ac-adapter-symbolic'

Device: /org/freedesktop/UPower/devices/DisplayDevice
  power supply: yes
  updated:  Mon 30 Nov 2020 06:20:22 PM PST (112 seconds ago)
  has history:  no
  has statistics:   no
  battery
present: yes
state:   charging
warning-level:   none
energy:  0 Wh
energy-full: 654.79 Wh
energy-rate: 0 W
percentage:  0%
icon-name:  'battery-caution-charging-symbolic'

Daemon:
  daemon-version:  0.99.11
  on-battery:  no
  lid-is-closed:   no
  lid-is-present:  yes
  critical-action: HybridSleep



$ upower -d
Device: /org/freedesktop/UPower/devices/line_power_AC
  native-path:  AC
  power supply: yes
  updated:  Mon 30 Nov 2020 02:36:20 PM PST (13733 seconds ago)
  has history:  no
  has statistics:   no
  line-power
warning-level:   none
online:  yes
icon-name:  'ac-adapter-symbolic'

Device: /org/freedesktop/UPower/devices/battery_BAT0
  native-path:  BAT0
  vendor:   Celxpert
  model:5B10V98091
  serial:   1695
  power supply: yes
  updated:  Mon 30 Nov 2020 06:24:22 PM PST (51 seconds ago)
  has history:  yes
  has statistics:   yes
  battery
present: yes
rechargeable:yes
state:   fully-charged
warning-level:   none
energy:  71.71 Wh
energy-empty:0 Wh
energy-full: 654.79 Wh
energy-full-design:  80.4 Wh
energy-rate: 0 W
voltage: 17.173 V
percentage:  99%
capacity:88.4453%
technology:  lithium-polymer
icon-name:  'battery-full-charged-symbolic'

Device: /org/freedesktop/UPower/devices/line_power_ucsi_source_psy_USBC000o001
  native-path:  ucsi-source-psy-USBC000:001
  power supply: yes
  updated:  Fri 27 Nov 2020 02:59:19 PM PST (271554 seconds ago)
  has history:  no
  has statistics:   no
  line-power
warning-level:   none
online:  no
icon-name:  'ac-adapter-symbolic'

Device: 
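
For anyone trying to catch the DisplayDevice drop as it happens, polling
upower -d is hit or miss; a watch loop is easier (a sketch only;
--monitor-detail prints full device details on every property change):

$ upower --monitor-detail | grep -E 'percentage|state:'

Left running until the bogus 0%/1% reading appears, this should show
whether battery_BAT0 or only the aggregate DisplayDevice goes bad first.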

[Touch-packages] [Bug 1904793] [NEW] upower abruptly thinks battery has gone to 1% and hibernates

2020-11-18 Thread Lee Trager
Public bug reported:

Whenever I go on battery, after 20-30 minutes upower will very abruptly
think my battery is at 1% and force my laptop to hibernate. This seems
to happen at random times; I've seen it when my battery was reported to
be 90%, 76%, 45%, 25%, etc. If I try to resume, Ubuntu locks up, forcing
me to hard reset the machine. I suspect this is because upower thinks my
battery is still at 1% when it's not. My laptop's firmware correctly
reports the battery level and shows that I have plenty of power
remaining. The last few times this happened I kept powertop up, which
shows that I do have plenty of power even when upower thinks I have
none. Essentially this makes my laptop unusable on battery.

Laptop: Lenovo X1 Carbon Extreme Gen 2

ProblemType: Bug
DistroRelease: Ubuntu 20.10
Package: upower 0.99.11-2
ProcVersionSignature: Ubuntu 5.8.0-29.31-generic 5.8.14
Uname: Linux 5.8.0-29-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair nvidia_modeset 
nvidia
ApportVersion: 2.20.11-0ubuntu50.1
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Nov 18 13:59:24 2020
InstallationDate: Installed on 2019-12-29 (325 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Alpha amd64 (20191220)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: upower
UpgradeStatus: Upgraded to groovy on 2020-10-23 (25 days ago)

** Affects: upower (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug groovy

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to upower in Ubuntu.
https://bugs.launchpad.net/bugs/1904793

Title:
  upower abruptly thinks battery has gone to 1% and hibernates

Status in upower package in Ubuntu:
  New

Bug description:
  Whenever I go on battery, after 20-30 minutes upower will very abruptly
  think my battery is at 1% and force my laptop to hibernate. This seems
  to happen at random times; I've seen it when my battery was reported
  to be 90%, 76%, 45%, 25%, etc. If I try to resume, Ubuntu locks up,
  forcing me to hard reset the machine. I suspect this is because upower
  thinks my battery is still at 1% when it's not. My laptop's firmware
  correctly reports the battery level and shows that I have plenty of
  power remaining. The last few times this happened I kept powertop up,
  which shows that I do have plenty of power even when upower thinks I
  have none. Essentially this makes my laptop unusable on battery.

  Laptop: Lenovo X1 Carbon Extreme Gen 2

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: upower 0.99.11-2
  ProcVersionSignature: Ubuntu 5.8.0-29.31-generic 5.8.14
  Uname: Linux 5.8.0-29-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair nvidia_modeset 
nvidia
  ApportVersion: 2.20.11-0ubuntu50.1
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Wed Nov 18 13:59:24 2020
  InstallationDate: Installed on 2019-12-29 (325 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Alpha amd64 (20191220)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: upower
  UpgradeStatus: Upgraded to groovy on 2020-10-23 (25 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/upower/+bug/1904793/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1869116] Re: smartctl-validate is borked in a recent release

2020-07-17 Thread Lee Trager
** No longer affects: util-linux (Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1869116

Title:
  smartctl-validate is borked in a recent release

Status in lxd:
  New
Status in MAAS:
  Fix Committed
Status in MAAS 2.7 series:
  New
Status in MAAS 2.8 series:
  New

Bug description:
  Bug (maybe?) details first, diatribe second.

  Bug Summary: multi-hdd / raid with multiple drives / multiple devices
  or something along those lines cannot be commissioned anymore: 2.4.x
  worked fine, 2.7.0 does not.

  Here is the script output of smartctl-validate:

  -
  # /dev/sda (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66c9220df4ac)

  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66c9220df4ac' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters:
  {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66c9220df4ac', 'model': 'PERC 6/i', 
'name': 'sda', 'physical_blockdevice_id': 33, 'serial': 
'6842b2b0740e9900260e66c9220df4ac'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -
  -
  # /dev/sdb (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66f924ecece0)
  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66f924ecece0' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters: {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66f924ecece0', 'model': 'PERC 6/i', 
'name': 'sdb', 'physical_blockdevice_id': 34, 'serial': 
'6842b2b0740e9900260e66f924ecece0'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -

  You can see that it says the storage cannot be found and then
  immediately lists it as a discovered device. It does this for both
  tests (one for each drive), and for both servers.

  Bug Details:
  I had maas 2.4.x for the longest time over my journey (see below journey) and 
have never had any problems re-commissioning (or deleting and re-discovering 
over boot PXE) 2 of my servers (r610, r710).

  r610 has an iPERC 6, four 10K X00GB drives configured in a RAID10, 1 virtual 
disk.
  r710 has an iPERC 6, 6x 2TB drives, configured in a RAID10, 2 virtual disks

  So commission after commission trying to get through my journey, 0
  problems. After I finally get everything figured out on the juju,
  network/vlan, quad-nic end, I go to re-commission and I cannot.
  smartctl-validate fails on both, over and over again. I even destroyed
  and re-created the raid/VDs; still nothing.

  After spending so much time on it I remembered that it was the first
  time I had tried to re-commission these two servers since doing an
  upgrade from 2.4.x->2.7 in an effort to use the updated KVM
  integration to add a couple more guests. Once I got everything
  figured out I went to re-commission everything and boom.

  [Upgrade path notes]
  In full disclosure, in case this matters. I was on apt install of 2.4.x and 
using snap for 2.7, except it didn't work. So I read on how to do apt 2.7 and 
did that and did not uninstall snap 2.7 yet. I wanted to migrate from apt to 
snap but do not know how to without losing all maas data and could not find 
docs on it, so a problem for another day. But in case that is part of the 
problem for some odd reason, I wanted to share.

  
  [Diatribe]
  My journey to get maas+juju+openstack+kubernetes has been less than stellar. I 
have run into problem after problem, admittedly some of which were my own. I am so 
close, after spending the last 6 months on/off when I had time, and really 
hardcore the last 4 days. The last half day of which has been this little gem. 
Maas has been pretty fun to work with but some thing 

[Touch-packages] [Bug 1869116] Re: smartctl-validate is borked in a recent release

2020-07-17 Thread Lee Trager
** Also affects: maas/2.7
   Importance: Undecided
   Status: New

** Changed in: maas/2.7
   Importance: Undecided => Critical

** Changed in: maas/2.7
 Assignee: (unassigned) => Lee Trager (ltrager)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1869116

Title:
  smartctl-validate is borked in a recent release

Status in lxd:
  New
Status in MAAS:
  Triaged
Status in MAAS 2.7 series:
  New
Status in MAAS 2.8 series:
  New
Status in util-linux package in Ubuntu:
  Confirmed

Bug description:
  Bug (maybe?) details first, diatribe second.

  Bug Summary: multi-hdd / raid with multiple drives / multiple devices
  or something along those lines cannot be commissioned anymore: 2.4.x
  worked fine, 2.7.0 does not.

  Here is the script output of smartctl-validate:

  -
  # /dev/sda (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66c9220df4ac)

  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66c9220df4ac' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters:
  {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66c9220df4ac', 'model': 'PERC 6/i', 
'name': 'sda', 'physical_blockdevice_id': 33, 'serial': 
'6842b2b0740e9900260e66c9220df4ac'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -
  -
  # /dev/sdb (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66f924ecece0)
  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66f924ecece0' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters: {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66f924ecece0', 'model': 'PERC 6/i', 
'name': 'sdb', 'physical_blockdevice_id': 34, 'serial': 
'6842b2b0740e9900260e66f924ecece0'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -

  You can see that it says the storage cannot be found and then
  immediately lists it as a discovered device. It does this for both
  tests (one for each drive), and for both servers.

  Bug Details:
  I had maas 2.4.x for the longest time over my journey (see below journey) and 
have never had any problems re-commissioning (or deleting and re-discovering 
over boot PXE) 2 of my servers (r610, r710).

  r610 has an iPERC 6, four 10K X00GB drives configured in a RAID10, 1 virtual 
disk.
  r710 has an iPERC 6, 6x 2TB drives, configured in a RAID10, 2 virtual disks

  So commission after commission trying to get through my journey, 0
  problems. After I finally get everything figured out on the juju,
  network/vlan, quad-nic end, I go to re-commission and I cannot.
  smartctl-validate fails on both, over and over again. I even destroyed
  and re-created the raid/VDs; still nothing.

  After spending so much time on it I remembered that it was the first
  time I had tried to re-commission these two servers since doing an
  upgrade from 2.4.x->2.7 in an effort to use the updated KVM
  integration to add a couple more guests. Once I got everything
  figured out I went to re-commission everything and boom.

  [Upgrade path notes]
  In full disclosure, in case this matters. I was on apt install of 2.4.x and 
using snap for 2.7, except it didn't work. So I read on how to do apt 2.7 and 
did that and did not uninstall snap 2.7 yet. I wanted to migrate from apt to 
snap but do not know how to without losing all maas data and could not find 
docs on it, so a problem for another day. But in case that is part of the 
problem for some odd reason, I wanted to share.

  
  [Diatribe]
  My journey to get maas+juju+openstack+kubernetes has been less than stellar. I 
have run into problem after problem, admittedly some of which were my own

[Touch-packages] [Bug 1869116] Re: smartctl-validate is borked in a recent release

2020-07-16 Thread Lee Trager
** Bug watch added: LXD bug tracker #7665
   https://github.com/lxc/lxd/issues/7665

** Changed in: lxd
   Status: Fix Released => Unknown

** Changed in: lxd
 Remote watch: LXD bug tracker #7096 => LXD bug tracker #7665

** Changed in: maas
   Importance: Undecided => High

** Changed in: maas/2.8
   Importance: Undecided => High

** Changed in: maas
   Importance: High => Critical

** Changed in: maas/2.8
   Importance: High => Critical

** Changed in: maas
Milestone: None => 2.9.0b1

** Changed in: maas
 Assignee: (unassigned) => Lee Trager (ltrager)

** Changed in: maas/2.8
 Assignee: (unassigned) => Lee Trager (ltrager)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1869116

Title:
  smartctl-validate is borked in a recent release

Status in lxd:
  Unknown
Status in MAAS:
  Triaged
Status in MAAS 2.8 series:
  New
Status in util-linux package in Ubuntu:
  Confirmed

Bug description:
  Bug (maybe?) details first, diatribe second.

  Bug Summary: multi-hdd / raid with multiple drives / multiple devices
  or something along those lines cannot be commissioned anymore: 2.4.x
  worked fine, 2.7.0 does not.

  Here is the script output of smartctl-validate:

  -
  # /dev/sda (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66c9220df4ac)

  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66c9220df4ac' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters:
  {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66c9220df4ac', 'model': 'PERC 6/i', 
'name': 'sda', 'physical_blockdevice_id': 33, 'serial': 
'6842b2b0740e9900260e66c9220df4ac'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -
  -
  # /dev/sdb (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66f924ecece0)
  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66f924ecece0' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters: {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66f924ecece0', 'model': 'PERC 6/i', 
'name': 'sdb', 'physical_blockdevice_id': 34, 'serial': 
'6842b2b0740e9900260e66f924ecece0'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -

  You can see that it says the storage cannot be found and then
  immediately lists it as a discovered device. It does this for both
  tests (one for each drive), and for both servers.

  Bug Details:
  I had maas 2.4.x for the longest time over my journey (see below journey) and 
have never had any problems re-commissioning (or deleting and re-discovering 
over boot PXE) 2 of my servers (r610, r710).

  r610 has an iPERC 6, four 10K X00GB drives configured in a RAID10, 1 virtual 
disk.
  r710 has an iPERC 6, 6x 2TB drives, configured in a RAID10, 2 virtual disks

  So commission after commission trying to get through my journey, 0
  problems. After I finally get everything figured out on the juju,
  network/vlan, quad-nic end, I go to re-commission and I cannot.
  smartctl-validate fails on both, over and over again. I even destroyed
  and re-created the raid/VDs; still nothing.

  After spending so much time on it I remembered that it was the first
  time I had tried to re-commission these two servers since doing an
  upgrade from 2.4.x->2.7 in an effort to use the updated KVM
  integration to add a couple more guests. Once I got everything
  figured out I went to re-commission everything and boom.

  [Upgrade path notes]
  In full disclosure, in case this matters. I was on apt install of 2.4.x and 
using snap for 2.7, except it didn't work. So I read on how to do apt 2.7 and 
did that and 

[Touch-packages] [Bug 1869116] Re: smartctl-validate is borked in a recent release

2020-07-08 Thread Lee Trager
I was able to reproduce this bug by emulating an IDE device in QEMU and
running the smartctl-validate test. I have filed an upstream bug as
well.

https://github.com/karelzak/util-linux/issues/1098
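
For reference, reproducing this doesn't need real hardware; something
along these lines is enough (a sketch; the image name and sizes are
arbitrary):

$ qemu-img create -f qcow2 test.img 8G
$ qemu-system-x86_64 -m 1024 -drive file=test.img,format=qcow2,if=ide

Inside the guest, the emulated disk's model ('QEMU HARDDISK') shows up
in lsblk's MODEL column with an underscore, hitting the same mismatch as
the PERC models in this report.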

** Bug watch added: github.com/karelzak/util-linux/issues #1098
   https://github.com/karelzak/util-linux/issues/1098

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1869116

Title:
  smartctl-validate is borked in a recent release

Status in lxd:
  Fix Released
Status in MAAS:
  Triaged
Status in MAAS 2.8 series:
  New
Status in util-linux package in Ubuntu:
  Confirmed

Bug description:
  Bug (maybe?) details first, diatribe second.

  Bug Summary: multi-hdd / raid with multiple drives / multiple devices
  or something along those lines cannot be commissioned anymore: 2.4.x
  worked fine, 2.7.0 does not.

  Here is the script output of smartctl-validate:

  -
  # /dev/sda (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66c9220df4ac)

  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66c9220df4ac' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters:
  {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66c9220df4ac', 'model': 'PERC 6/i', 
'name': 'sda', 'physical_blockdevice_id': 33, 'serial': 
'6842b2b0740e9900260e66c9220df4ac'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -
  -
  # /dev/sdb (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66f924ecece0)
  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66f924ecece0' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters: {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66f924ecece0', 'model': 'PERC 6/i', 
'name': 'sdb', 'physical_blockdevice_id': 34, 'serial': 
'6842b2b0740e9900260e66f924ecece0'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -

  You can see that it says the storage cannot be found and then
  immediately lists it as a discovered device. It does this for both
  tests (one for each drive), and for both servers.

  Bug Details:
  I had maas 2.4.x for the longest time over my journey (see below journey) and 
have never had any problems re-commissioning (or deleting and re-discovering 
over boot PXE) 2 of my servers (r610, r710).

  r610 has an iPERC 6, four 10K X00GB drives configured in a RAID10, 1 virtual 
disk.
  r710 has an iPERC 6, 6x 2TB drives, configured in a RAID10, 2 virtual disks

  So commission after commission trying to get through my journey, 0
  problems. After I finally get everything figured out on the juju,
  network/vlan, quad-nic end, I go to re-commission and I cannot.
  smartctl-validate fails on both, over and over again. I even destroyed
  and re-created the raid/VDs; still nothing.

  After spending so much time on it I remembered that it was the first
  time I had tried to re-commission these two servers since doing an
  upgrade from 2.4.x->2.7 in an effort to use the updated KVM
  integration to add a couple more guests. Once I got everything
  figured out I went to re-commission everything and boom.

  [Upgrade path notes]
  In full disclosure, in case this matters. I was on apt install of 2.4.x and 
using snap for 2.7, except it didn't work. So I read on how to do apt 2.7 and 
did that and did not uninstall snap 2.7 yet. I wanted to migrate from apt to 
snap but do not know how to without losing all maas data and could not find 
docs on it, so a problem for another day. But in case that is part of the 
problem for some odd reason, I wanted to share.

  
  [Diatribe]
  My journey to get maas+juju+openstack+kubernetes has been less than 

[Touch-packages] [Bug 1872124] Re: Please integrate ubuntu-drivers --gpgpu into Ubuntu Server

2020-04-15 Thread Lee Trager
MAAS contains a way to automatically install drivers. Every region has a
file, /etc/maas/drivers.yaml, which specifies drivers that should be
installed automatically. I don't have access to any test hardware but I
*think* this should work. One thing I noticed is that there isn't a meta
package for the nVidia driver which points to the latest version for
each Ubuntu release. We would need that before adding this to MAAS.

diff --git a/drivers-orig.yaml b/drivers.yaml
index 2d3724c..97a4eb2 100644
--- a/drivers-orig.yaml
+++ b/drivers.yaml
@@ -82,3 +82,10 @@ drivers:
   repository: http://downloads.linux.hpe.com/SDR/repo/ubuntu-hpdsa
   series:
 - trusty
+- blacklist: nouveau
+  comment: nVidia driver
+  modaliases:
+  - 'pci:v10DEd*sv*sd*bc03sc02i00*'
+  - 'pci:v10DEd*sv*sd*bc03sc00i00*'
+  module: nvidia
+  package: nvidia-headless-440
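
To sanity-check patterns like these against a real card, the kernel
exposes each PCI device's modalias in sysfs (a sketch; vendor 10DE is
nVidia, class bc03 is a display controller):

$ grep -i 10de /sys/bus/pci/devices/*/modalias

Note the kernel zero-pads the vendor id (pci:v000010DE...), so whether a
'pci:v10DEd*' pattern matches depends on how MAAS normalizes the
strings.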

** Changed in: maas
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ubuntu-meta in Ubuntu.
https://bugs.launchpad.net/bugs/1872124

Title:
  Please integrate ubuntu-drivers --gpgpu into Ubuntu Server

Status in cloud-init:
  New
Status in MAAS:
  Triaged
Status in subiquity:
  Triaged
Status in ubuntu-drivers-common package in Ubuntu:
  New
Status in ubuntu-meta package in Ubuntu:
  New

Bug description:
  Could subiquity provide an option in the UI to install and execute
  ubuntu-drivers-common on the target? The use case I'm interested in is
  an "on-rails" installation of Nvidia drivers for servers being
  installed for deep learning workloads.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1872124/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1869116] Re: smartctl-validate is borked in a recent release

2020-03-30 Thread Lee Trager
That makes sense based on the other information in this bug.

MAAS sends the test runner a list of storage devices to run hardware
tests on. The test runner uses the model and serial to map to a block
device name. This block device name is passed to the smartctl-validate
script. smartctl-validate uses smartctl to determine if SMART data is
available for the device.

Because the model name differs between lsblk and lxc, it can't be looked
up and the test is marked as a failure. The mapping needs to work so
MAAS can pass the block device to smartctl-validate, which knows the
test can be skipped.
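
Concretely, the failing lookup is just a string comparison between what
MAAS recorded at commissioning time and what the environment reports; a
quick way to see both sides on the node (a sketch, using the device
names from the report above):

$ lsblk --nodeps --output NAME,MODEL,SERIAL /dev/sda /dev/sdb

Here lsblk prints 'PERC_6/i' while MAAS recorded 'PERC 6/i', so the
model+serial key never matches.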

** Tags added: champagne

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1869116

Title:
  smartctl-validate is borked in a recent release

Status in lxd:
  Unknown
Status in MAAS:
  Triaged
Status in util-linux package in Ubuntu:
  New

Bug description:
  Bug (maybe?) details first, diatribe second.

  Bug Summary: multi-hdd / raid with multiple drives / multiple devices
  or something along those lines cannot be commissioned anymore: 2.4.x
  worked fine, 2.7.0 does not.

  Here is the script output of smartctl-validate:

  -
  # /dev/sda (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66c9220df4ac)

  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66c9220df4ac' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters:
  {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66c9220df4ac', 'model': 'PERC 6/i', 
'name': 'sda', 'physical_blockdevice_id': 33, 'serial': 
'6842b2b0740e9900260e66c9220df4ac'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -
  -
  # /dev/sdb (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66f924ecece0)
  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66f924ecece0' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters: {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66f924ecece0', 'model': 'PERC 6/i', 
'name': 'sdb', 'physical_blockdevice_id': 34, 'serial': 
'6842b2b0740e9900260e66f924ecece0'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -

  You can see that it says the storage cannot be found and then
  immediately lists it as a discovered device. It does this for both
  tests (one for each drive), and for both servers.

  Bug Details:
  I had maas 2.4.x for the longest time over my journey (see below journey) and 
have never had any problems re-commissioning (or deleting and re-discovering 
over boot PXE) 2 of my servers (r610, r710).

  r610 has an iPERC 6, four 10K X00GB drives configured in a RAID10, 1 virtual 
disk.
  r710 has an iPERC 6, 6x 2TB drives, configured in a RAID10, 2 virtual disks

  So commission after commission trying to get through my journey, 0
  problems. After I finally get everything figured out on the juju,
  network/vlan, quad-nic end, I go to re-commission and I cannot.
  smartctl-validate fails on both, over and over again. I even destroyed
  and re-created the raid/VDs; still nothing.

  After spending so much time on it I remembered that it was the first
  time I had tried to re-commission these two servers since doing an
  upgrade from 2.4.x->2.7 in an effort to use the updated KVM
  integration to add a couple more guests. Once I got everything
  figured out I went to re-commission everything and boom.

  [Upgrade path notes]
  In full disclosure, in case this matters. I was on apt install of 2.4.x and 
using snap for 2.7, except it didn't work. So I read on how to do apt 2.7 and 
did that and did not uninstall snap 2.7 yet. I wanted to migrate from apt to 
snap but do not know how 

[Touch-packages] [Bug 1868915] Re: [focal] nm-online -s --timeout=10 timeout every time

2020-03-30 Thread Lee Trager
NetworkManager isn't installed in the base MAAS Focal image.

Are you using the default image from images.maas.io?

What cloud-init user-data are you sending to the install?

Have you modified the preseed file at all?

** Changed in: maas
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1868915

Title:
  [focal] nm-online -s --timeout=10 timeout every time

Status in MAAS:
  Incomplete
Status in OEM Priority Project:
  New
Status in network-manager package in Ubuntu:
  New

Bug description:
  Also created bug on upstream :
  https://gitlab.freedesktop.org/NetworkManager/NetworkManager/issues/398

  
  This issue will cause "NetworkManager-wait-online.service: Failed with
  result 'exit-code'." And cloud-init.service is counting on this service
  for network status.

  So this issue will ultimately cause MAAS deployment to fail.
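
  A quick way to confirm the chain on an affected machine (a sketch):

  $ nm-online -s --timeout=10; echo exit=$?
  $ systemctl status NetworkManager-wait-online.service network-online.target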

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: network-manager 1.22.10-1ubuntu1 [modified: 
lib/systemd/system/NetworkManager-wait-online.service]
  ProcVersionSignature: Ubuntu 5.4.0-1002.4-oem 5.4.8
  Uname: Linux 5.4.0-1002-oem x86_64
  ApportVersion: 2.20.11-0ubuntu20
  Architecture: amd64
  Date: Wed Mar 25 12:43:13 2020
  DistributionChannelDescriptor:
   # This is the distribution channel descriptor for the OEM CDs
   # For more information see 
http://wiki.ubuntu.com/DistributionChannelDescriptor
   canonical-oem-somerville-focal-amd64-20200316-60+fossa-staging+X09
  IfupdownConfig: source /etc/network/interfaces.d/*.cfg
  InstallationDate: Installed on 2020-03-25 (0 days ago)
  InstallationMedia: Ubuntu 20.04 "Focal" - Build amd64 LIVE Binary 
20200316-08:40
  IpRoute:
   default via 192.168.101.1 dev eno1 proto static metric 100
   10.101.46.0/24 via 192.168.101.1 dev eno1 proto static metric 200
   169.254.0.0/16 dev eno1 scope link metric 1000
   192.168.101.0/24 dev eno1 proto kernel scope link src 192.168.101.85 metric 
100
  NetDevice.bonding_masters:
   Error: command ['udevadm', 'info', '--query=all', '--path', 
'/sys/class/net/bonding_masters'] failed with exit code 1: Unknown device 
"/sys/class/net/bonding_masters": No such device

   X: INTERFACE_MAC=Error: command ['cat', 
'/sys/class/net/bonding_masters/address'] failed with exit code 1: cat: 
/sys/class/net/bonding_masters/address: Not a directory
  SourcePackage: network-manager
  UpgradeStatus: No upgrade log present (probably fresh install)
  nmcli-con:
   NAME  UUID  TYPE  TIMESTAMP   
TIMESTAMP-REAL  AUTOCONNECT  AUTOCONNECT-PRIORITY  
READONLY  DBUS-PATH   ACTIVE  DEVICE  STATE 
 ACTIVE-PATH SLAVE  FILENAME
   netplan-eno1  10838d80-caeb-349e-ba73-08ed16d4d666  ethernet  158577  
廿廿年三月廿五日 (週三) 十二時39分37秒  yes  0 no
/org/freedesktop/NetworkManager/Settings/1  yes eno1activated  
/org/freedesktop/NetworkManager/ActiveConnection/1  -- 
/run/NetworkManager/system-connections/netplan-eno1.nmconnection
  nmcli-nm:
   RUNNING  VERSION  STATE  STARTUP   CONNECTIVITY  NETWORKING  WIFI-HW   
WIFI  WWAN-HW  WWAN
   running  1.22.10  connected  starting  full  enabled disabled  
disabled  enabled  enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1868915/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1869116] Re: smartctl-validate is borked in a recent release

2020-03-27 Thread Lee Trager
I spoke with the LXD team and they're passing through the name they get
from the kernel. It seems like this may be a regression introduced in
util-linux's handling of RAID devices. I tried running lsblk on Focal
against two NVMe drives and I do not see an underscore used in model
names.

@cees - Can you try using a different commissioning operating system and
see if the problem is resolved? You can download 16.04 and 20.04 on the
images page and then change the commissioning operating system on the
settings page.
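
A quick way to compare the two names on an affected node (a sketch, with
sda as an example): the kernel's SCSI model string keeps the space,
while lsblk's MODEL column is where the underscore shows up:

$ cat /sys/block/sda/device/model
$ lsblk --nodeps --output MODEL /dev/sda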

** Also affects: util-linux-ng
   Importance: Undecided
   Status: New

** No longer affects: util-linux-ng

** Also affects: util-linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1869116

Title:
  smartctl-validate is borked in a recent release

Status in lxd:
  Unknown
Status in MAAS:
  Triaged
Status in util-linux package in Ubuntu:
  New

Bug description:
  Bug (maybe?) details first, diatribe second.

  Bug Summary: multi-hdd / raid with multiple drives / multiple devices
  or something along those lines cannot be commissioned anymore: 2.4.x
  worked fine, 2.7.0 does not.

  Here is the script output of smartctl-validate:

  -
  # /dev/sda (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66c9220df4ac)

  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66c9220df4ac' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters:
  {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66c9220df4ac', 'model': 'PERC 6/i', 
'name': 'sda', 'physical_blockdevice_id': 33, 'serial': 
'6842b2b0740e9900260e66c9220df4ac'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -
  -
  # /dev/sdb (Model: PERC 6/i, Serial: 6842b2b0740e9900260e66f924ecece0)
  Unable to run 'smartctl-validate': Storage device 'PERC 6/i' with serial 
'6842b2b0740e9900260e66f924ecece0' not found!
  This indicates the storage device has been removed or the OS is unable to 
find it due to a hardware failure. Please re-commission this node to 
re-discover the storage devices, or delete this device manually.
  Given parameters: {'storage': {'argument_format': '{path
  }', 'type': 'storage', 'value': {'id_path': 
'/dev/disk/by-id/wwn-0x6842b2b0740e9900260e66f924ecece0', 'model': 'PERC 6/i', 
'name': 'sdb', 'physical_blockdevice_id': 34, 'serial': 
'6842b2b0740e9900260e66f924ecece0'
  }
  }
  }
  Discovered storage devices: [
  {'NAME': 'sda', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66c9220df4ac'
  },
  {'NAME': 'sdb', 'MODEL': 'PERC_6/i', 'SERIAL': 
'6842b2b0740e9900260e66f924ecece0'
  },
  {'NAME': 'sr0', 'MODEL': 'TEAC_DVD-ROM_DV-28SW', 'SERIAL': 
'10092013112645'
  }
  ]
  Discovered interfaces: {'xx: xx: xx: xx: xx: xx': 'eno1'
  }
  -

  You can see that it says the storage cannot be found and then
  immediately lists it as a discovered device. It does this for both
  tests (one for each drive), and for both servers.

  Bug Details:
  I had maas 2.4.x for the longest time over my journey (see below journey) and 
have never had any problems re-commissioning (or deleting and re-discovering 
over boot PXE) 2 of my servers (r610, r710).

  r610 has an iPERC 6, four 10K X00GB drives configured in a RAID10, 1 virtual 
disk.
  r710 has an iPERC 6, 6x 2TB drives, configured in a RAID10, 2 virtual disks

  So commission after commission trying to get through my journey, 0
  problems. After I finally get everything figured out on the juju,
  network/vlan, quad-nic end, I go to re-commission and I cannot.
  smartctl-validate fails on both, over and over again. I even destroyed
  and re-created the raid/VDs; still nothing.

  After spending so much time on it I remembered that it was the first
  time I had tried to re-commission these two servers since doing an
  upgrade from 2.4.x->2.7 in an effort to use the updated KVM
  integration to add a couple more guests. Once I got everything
  figured out I went to re-commission everything and boom.

  [Upgrade path notes]
  In full disclosure, in case this matters. I was on apt install of 2.4.x and 
using snap for 2.7, except it didn't work. So I read on how to do apt 2.7 and 
did that and did not 

[Touch-packages] [Bug 1862846] Re: Crash and failure installing focal

2020-02-19 Thread Lee Trager
Ryan has updated Curtin to no longer require that util-linux feature. I
do think it would be good to carry that patch in Ubuntu, as other users
will be affected by that regression.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1862846

Title:
  Crash and failure installing focal

Status in subiquity:
  New
Status in curtin package in Ubuntu:
  Fix Released
Status in util-linux package in Ubuntu:
  New
Status in curtin source package in Eoan:
  Invalid
Status in util-linux source package in Eoan:
  New
Status in curtin source package in Focal:
  Fix Released
Status in util-linux source package in Focal:
  New
Status in util-linux package in Debian:
  New

Bug description:
  During an install of the daily live image for 20.04 Ubuntu Server, the
  installer first crashed and restarted itself, then failed to install
  the system.

  Attached are the logs left on the install USB key.

To manage notifications about this bug go to:
https://bugs.launchpad.net/subiquity/+bug/1862846/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1862846] Re: Crash and failure installing focal

2020-02-12 Thread Lee Trager
** Bug watch added: Debian Bug tracker #951217
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=951217

** Also affects: util-linux (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=951217
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1862846

Title:
  Crash and failure installing focal

Status in subiquity:
  New
Status in curtin package in Ubuntu:
  Triaged
Status in util-linux package in Ubuntu:
  New
Status in curtin source package in Eoan:
  Invalid
Status in util-linux source package in Eoan:
  New
Status in curtin source package in Focal:
  Triaged
Status in util-linux source package in Focal:
  New
Status in util-linux package in Debian:
  Unknown

Bug description:
  During an install of the daily live image for 20.04 Ubuntu Server, the
  installer first crashed and restarted itself, then failed to install
  the system.

  Attached are the logs left on the install USB key.

To manage notifications about this bug go to:
https://bugs.launchpad.net/subiquity/+bug/1862846/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1813227] Re: lvm2-pvscan services fails to start on S390X DPM

2019-04-04 Thread Lee Trager
I tested a Disco image today with multipath-tools installed and I still
get the same thing.
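
If it helps anyone bisect this: the usual way to stop LVM from treating
each SAN path as a duplicate PV is to let multipath claim the paths and
keep LVM off the raw sd* devices; a sketch of the relevant lvm.conf
pieces (the filter patterns are illustrative, not a tested config):

# /etc/lvm/lvm.conf
devices {
    multipath_component_detection = 1
    filter = [ "a|^/dev/mapper/mpath|", "r|^/dev/sd|" ]
}

Since multipath-tools alone didn't change the result here, the filter
half (or the ephemeral image having no multipath maps at all) may be the
interesting part.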

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1813227

Title:
  lvm2-pvscan services fails to start on S390X DPM

Status in lvm2 package in Ubuntu:
  New

Bug description:
  The lvm2-pvscan service fails to start in the MAAS ephemeral
  environment. The service takes ~3 minutes to fail, blocking boot until
  failure. Curtin is experiencing a similar bug (LP:1813228) due to LVM.

  Release: 18.10 (18.04 does not currently boot on S390X DPM)
  Kernel: 4.18.0-13-generic
  LVM: 2.02.176-4.1ubuntu3

  root@node3:~# lvm pvscan --cache --activate ay 8:98
    WARNING: Not using lvmetad because duplicate PVs were found.
    WARNING: Autoactivation reading from disk instead of lvmetad.
    [pvscan warning log identical to the output quoted in the original
report below; truncated by the digest]

[Touch-packages] [Bug 1813227] Re: lvm2-pvscan service fails to start on S390X DPM

2019-01-28 Thread Lee Trager
** Attachment added: "lsblk output"
   
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1813227/+attachment/5233345/+files/lsblk.log

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1813227

Title:
  lvm2-pvscan service fails to start on S390X DPM

Status in lvm2 package in Ubuntu:
  New

Bug description:
  The lvm2-pvscan service fails to start in the MAAS ephemeral
  environment. The service takes ~3 minutes to fail, blocking boot until
  failure. Curtin is experiencing a similar bug (LP: #1813228) due to LVM.

  Release: 18.10 (18.04 does not currently boot on S390X DPM)
  Kernel: 4.18.0-13-generic
  LVM: 2.02.176-4.1ubuntu3

  root@node3:~# lvm pvscan --cache --activate ay 8:98
    [pvscan warnings and activation failures identical to the log in the
original report below; truncated by the digest]

[Touch-packages] [Bug 1813227] Re: lvm2-pvscan service fails to start on S390X DPM

2019-01-28 Thread Lee Trager
The MAAS ephemeral environment does not have the multipath kernel module
nor multipath-tools. To get this log I had to install both.
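
A hedged workaround sketch, not a verified fix: if the duplicate PVs are
multipath paths to the same LUNs, as the multipath.log suggests, the
usual remedy is to let dm-multipath claim the paths and hide the raw sd*
nodes from LVM (the filter pattern below is illustrative and would need
adjusting to the real topology):

  # multipathd plus the dm-multipath kernel module
  sudo apt install multipath-tools
  # Merge a filter like this into the devices{} section of /etc/lvm/lvm.conf,
  # accepting device-mapper nodes and rejecting per-path SCSI nodes:
  #   global_filter = [ "a|^/dev/mapper/|", "r|^/dev/sd|" ]
  # Afterwards each PV should be reported exactly once:
  sudo pvs -o pv_name,pv_uuid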

** Attachment added: "multipath.log"
   
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1813227/+attachment/5233346/+files/multipath.log

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1813227

Title:
  lvm2-pvscan service fails to start on S390X DPM

Status in lvm2 package in Ubuntu:
  New

Bug description:
  The lvm2-pvscan service fails to start in the MAAS ephemeral
  environment. The service takes ~3 minutes to fail, blocking boot until
  failure. Curtin is experiencing a similar bug (LP: #1813228) due to LVM.

  Release: 18.10 (18.04 does not currently boot on S390X DPM)
  Kernel: 4.18.0-13-generic
  LVM: 2.02.176-4.1ubuntu3

  root@node3:~# lvm pvscan --cache --activate ay 8:98
    [pvscan warnings and activation failures identical to the log in the
original report below; truncated by the digest]

[Touch-packages] [Bug 1813227] Re: lvm2-pvscan service fails to start on S390X DPM

2019-01-24 Thread Lee Trager
** Description changed:

  The lvm2-pvscan service fails to start in the MAAS ephemeral
  environment. The service takes ~3 minutes to fail, blocking boot until
- failure.
+ failure. Curtin is experiencing a similar bug (LP: #1813228) due to LVM.
  
  Release: 18.10 (18.04 does not currently boot on S390X DPM)
  Kernel: 4.18.0-13-generic
  LVM: 2.02.176-4.1ubuntu3
  
  root@node3:~# lvm pvscan --cache --activate ay 8:98
  [the rest of the diff only re-indents the pvscan log quoted in the
original report below; truncated by the digest]

[Touch-packages] [Bug 1813227] Re: lvm2-pvscan service fails to start on S390X DPM

2019-01-24 Thread Lee Trager
** Attachment added: "journalctl output of lvm2"
   
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1813227/+attachment/5232337/+files/lvm.log

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1813227

Title:
  lvm2-pvscan service fails to start on S390X DPM

Status in lvm2 package in Ubuntu:
  New

Bug description:
  The lvm2-pvscan service fails to start in the MAAS ephemeral
  environment. The service takes ~3 minutes to fail, blocking boot until
  failure.

  Release: 18.10 (18.04 does not currently boot on S390X DPM)
  Kernel: 4.18.0-13-generic
  LVM: 2.02.176-4.1ubuntu3

  root@node3:~# lvm pvscan --cache --activate ay 8:98
[pvscan warnings and activation failures identical to the log in the
original report below; truncated by the digest]

[Touch-packages] [Bug 1813227] [NEW] lvm2-pvscan service fails to start on S390X DPM

2019-01-24 Thread Lee Trager
Public bug reported:

The lvm2-pvscan service fails to start in the MAAS ephemeral
environment. The service takes ~3 minutes to fail, blocking boot until
failure.

Release: 18.10 (18.04 does not currently boot on S390X DPM)
Kernel: 4.18.0-13-generic
LVM: 2.02.176-4.1ubuntu3

root@node3:~# lvm pvscan --cache --activate ay 8:98
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Autoactivation reading from disk instead of lvmetad.
  WARNING: PV tIvQUe-wsuE-RWlZ-Wmdo-uzws-qL5H-Y9zW5l on /dev/sdb2 was already 
found on /dev/sdfe2.
  WARNING: PV 3zsO6H-QCQq-eyJZ-8aLc-TGU1-q91J-LFWsKs on /dev/sde2 was already 
found on /dev/sdfh2.
  WARNING: PV VEyfqe-Vln5-zMp5-wCeK-hkQN-u8Wp-LkPTMF on /dev/sdba2 was already 
found on /dev/sdhd2.
  WARNING: PV H7Ucbo-TJSw-tS4y-AwKT-xdql-NiMp-YvZC1S on /dev/sdfj2 was already 
found on /dev/sddi2.
  WARNING: PV H7Ucbo-TJSw-tS4y-AwKT-xdql-NiMp-YvZC1S on /dev/sdg2 was already 
found on /dev/sddi2.
  WARNING: PV tIvQUe-wsuE-RWlZ-Wmdo-uzws-qL5H-Y9zW5l on /dev/sdbc2 was already 
found on /dev/sdfe2.
  WARNING: PV 3zsO6H-QCQq-eyJZ-8aLc-TGU1-q91J-LFWsKs on /dev/sdbf2 was already 
found on /dev/sdfh2.
  WARNING: PV VEyfqe-Vln5-zMp5-wCeK-hkQN-u8Wp-LkPTMF on /dev/sddb2 was already 
found on /dev/sdhd2.
  WARNING: PV 6ZwSAD-cFkP-Qjeg-WIDt-g3Hc-m8Tt-i3ex81 on /dev/sddr2 was already 
found on /dev/sdbq2.
  WARNING: PV H7Ucbo-TJSw-tS4y-AwKT-xdql-NiMp-YvZC1S on /dev/sdbh2 was already 
found on /dev/sddi2.
  WARNING: PV tIvQUe-wsuE-RWlZ-Wmdo-uzws-qL5H-Y9zW5l on /dev/sddd2 was already 
found on /dev/sdfe2.
  WARNING: PV 3zsO6H-QCQq-eyJZ-8aLc-TGU1-q91J-LFWsKs on /dev/sddg2 was already 
found on /dev/sdfh2.
  WARNING: PV VEyfqe-Vln5-zMp5-wCeK-hkQN-u8Wp-LkPTMF on /dev/sdfc2 was already 
found on /dev/sdhd2.
  WARNING: PV 6ZwSAD-cFkP-Qjeg-WIDt-g3Hc-m8Tt-i3ex81 on /dev/sdfs2 was already 
found on /dev/sdbq2.
  WARNING: PV 6ZwSAD-cFkP-Qjeg-WIDt-g3Hc-m8Tt-i3ex81 on /dev/sdp2 was already 
found on /dev/sdbq2.
  WARNING: PV tIvQUe-wsuE-RWlZ-Wmdo-uzws-qL5H-Y9zW5l prefers device /dev/sdfe2 
because device was seen first.
  WARNING: PV tIvQUe-wsuE-RWlZ-Wmdo-uzws-qL5H-Y9zW5l prefers device /dev/sdfe2 
because device was seen first.
  WARNING: PV tIvQUe-wsuE-RWlZ-Wmdo-uzws-qL5H-Y9zW5l prefers device /dev/sdfe2 
because device was seen first.
  WARNING: PV 3zsO6H-QCQq-eyJZ-8aLc-TGU1-q91J-LFWsKs prefers device /dev/sdfh2 
because device was seen first.
  WARNING: PV 3zsO6H-QCQq-eyJZ-8aLc-TGU1-q91J-LFWsKs prefers device /dev/sdfh2 
because device was seen first.
  WARNING: PV 3zsO6H-QCQq-eyJZ-8aLc-TGU1-q91J-LFWsKs prefers device /dev/sdfh2 
because device was seen first.
  WARNING: PV VEyfqe-Vln5-zMp5-wCeK-hkQN-u8Wp-LkPTMF prefers device /dev/sdhd2 
because device was seen first.
  WARNING: PV VEyfqe-Vln5-zMp5-wCeK-hkQN-u8Wp-LkPTMF prefers device /dev/sdhd2 
because device was seen first.
  WARNING: PV VEyfqe-Vln5-zMp5-wCeK-hkQN-u8Wp-LkPTMF prefers device /dev/sdhd2 
because device was seen first.
  WARNING: PV H7Ucbo-TJSw-tS4y-AwKT-xdql-NiMp-YvZC1S prefers device /dev/sddi2 
because device was seen first.
  WARNING: PV H7Ucbo-TJSw-tS4y-AwKT-xdql-NiMp-YvZC1S prefers device /dev/sddi2 
because device was seen first.
  WARNING: PV H7Ucbo-TJSw-tS4y-AwKT-xdql-NiMp-YvZC1S prefers device /dev/sddi2 
because device was seen first.
  WARNING: PV 6ZwSAD-cFkP-Qjeg-WIDt-g3Hc-m8Tt-i3ex81 prefers device /dev/sdbq2 
because device was seen first.
  WARNING: PV 6ZwSAD-cFkP-Qjeg-WIDt-g3Hc-m8Tt-i3ex81 prefers device /dev/sdbq2 
because device was seen first.
  WARNING: PV 6ZwSAD-cFkP-Qjeg-WIDt-g3Hc-m8Tt-i3ex81 prefers device /dev/sdbq2 
because device was seen first.
  Cannot activate LVs in VG zkvm4 while PVs appear on duplicate devices.
  0 logical volume(s) in volume group "zkvm4" now active
  zkvm4: autoactivation failed.
  Cannot activate LVs in VG zkvm1 while PVs appear on duplicate devices.
  0 logical volume(s) in volume group "zkvm1" now active
  zkvm1: autoactivation failed.
  Cannot activate LVs in VG Z_APPL_ROOT_lvm_473D86E3 while PVs appear on 
duplicate devices.
  0 logical volume(s) in volume group "Z_APPL_ROOT_lvm_473D86E3" now active
  Z_APPL_ROOT_lvm_473D86E3: autoactivation failed.
  Cannot activate LVs in VG Z_APPL_ROOT_lvm_BBD0E8D4 while PVs appear on 
duplicate devices.
  0 logical volume(s) in volume group "Z_APPL_ROOT_lvm_BBD0E8D4" now active
  Z_APPL_ROOT_lvm_BBD0E8D4: autoactivation failed.
  Cannot activate LVs in VG Z_APPL_ROOT_lvm_382F5B8E while PVs appear on 
duplicate devices.
  0 logical volume(s) in volume group "Z_APPL_ROOT_lvm_382F5B8E" now active
  Z_APPL_ROOT_lvm_382F5B8E: autoactivation failed.
root@node3:~# echo $?
5
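
For reference, exit status 5 is LVM's generic command-failed code
(ECMD_FAILED), and the "8:98" argument is the kernel major:minor pair
that the lvm2-pvscan@ unit passes to pvscan. Mapping it back to a device
node is straightforward (a generic sketch, not taken from the report):

  # 8 is the SCSI-disk major; the symlink target names the sd* node
  ls -l /sys/dev/block/8:98
  # the same answer via lsblk
  lsblk -o NAME,MAJ:MIN | grep -w '8:98'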

** Affects: lvm2 (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: "syslog"
   https://bugs.launchpad.net/bugs/1813227/+attachment/5232336/+files/syslog

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1813227


[Touch-packages] [Bug 1641832] Re: Bash ignores exit trap on success when part of a command string

2016-11-15 Thread Lee Trager
I'm having trouble coming up with a simpler test case myself. Here is
how you can reproduce with lp:maas-images on Zesty with
bash-4.4-1ubuntu1

mkdir maas-images
cd maas-images
bzr init
bzr branch lp:maas-images
cd
export PATH=/home/ubuntu/maas-images/trunk/bin:$PATH
wget http://cloud-images.ubuntu.com/daily/server/zesty/2016/zesty-server-cloudimg-amd64.squashfs
meph2-build --image-format squashfs-image -vv --enable-di amd64 zesty 2016 zesty-server-cloudimg-amd64.squashfs out.d

If you execute that with 4.4-1ubuntu1 it fails due to the loopback
mount still being mounted (mount | grep maas). If you downgrade to
4.3-15ubuntu1 or use the patch from
lp:~ltrager/maas-images/workaround_lp1641832, umount is called and the
script executes successfully.
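
For comparison, a naive reduced case has the shape sketched below; per
the above it does not appear to trigger the bug, so it only illustrates
the behavior under test: the EXIT trap should fire whether the -c
command string succeeds or fails.

  bash -ec 'trap "echo EXIT trap ran" EXIT; echo body ran'        # expect both lines
  bash -ec 'trap "echo EXIT trap ran" EXIT; echo body ran; false' # trap fires here too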

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to bash in Ubuntu.
https://bugs.launchpad.net/bugs/1641832

Title:
  Bash ignores exit trap on success when part of a command string

Status in bash package in Ubuntu:
  Triaged

Bug description:
  The MAAS team uses a script, lp:maas-images, which generates the
  images available at images.maas.io. As part of this process we use the
  following to convert a SquashFS image to an ext4 image.

  sudo bash -ec 'src="$1"; img="$2"; trgmp="$3";
  mounts=""
  cleanup() { for m in $mounts; do umount "$m"; done; }
  trap cleanup EXIT
  mount -o loop "$img" "$trgmp"
  mounts="$trgmp"
  unsquashfs -force -xattrs -dest "$trgmp" "$src"' \
  "squashimg-to-image" "$squashimg" "$output" "$trgmp"
  ret=$?
  rm -Rf "$mtmp" || return
  return $ret

  Prior to 4.4-1ubuntu1 the trap would always cause the cleanup function
  to be called. It's now only called on failure. This causes the mount
  to remain and the following rm to fail. If I add 'false' to the end of
  the command script or downgrade to 4.3-15ubuntu1 the cleanup occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bash/+bug/1641832/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1641832] [NEW] Bash ignores exit trap on success when part of a command string

2016-11-14 Thread Lee Trager
Public bug reported:

The MAAS team uses a script, lp:maas-images, which generates the images
available at images.maas.io. As part of this process we use the
following to convert a SquashFS image to an ext4 image.

sudo bash -ec 'src="$1"; img="$2"; trgmp="$3";
mounts=""
cleanup() { for m in $mounts; do umount "$m"; done; }
trap cleanup EXIT
mount -o loop "$img" "$trgmp"
mounts="$trgmp"
unsquashfs -force -xattrs -dest "$trgmp" "$src"' \
"squashimg-to-image" "$squashimg" "$output" "$trgmp"
ret=$?
rm -Rf "$mtmp" || return
return $ret

Prior to 4.4-1ubuntu1 the trap would always cause the cleanup function
to be called. It's now only called on failure. This causes the mount to
remain and the following rm to fail. If I add 'false' to the end of the
command script or downgrade to 4.3-15ubuntu1 the cleanup occurs.
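
Annotated restatement of the wrapper above (same logic with comments
added; the positional values are whatever the caller supplies):

  sudo bash -ec '
    src="$1"; img="$2"; trgmp="$3"
    mounts=""                                   # mountpoints to undo on exit
    cleanup() { for m in $mounts; do umount "$m"; done; }
    trap cleanup EXIT                           # expected to run on ANY exit
    mount -o loop "$img" "$trgmp"               # loop-mount the target image
    mounts="$trgmp"                             # register it for cleanup
    unsquashfs -force -xattrs -dest "$trgmp" "$src"  # unpack the squashfs into it
  ' "squashimg-to-image" "$squashimg" "$output" "$trgmp"
  # With 4.4-1ubuntu1 the EXIT trap is skipped when the last command
  # succeeds, so "$trgmp" stays mounted and the later rm -Rf fails.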

** Affects: bash (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to bash in Ubuntu.
https://bugs.launchpad.net/bugs/1641832

Title:
  Bash ignores exit trap on success when part of a command string

Status in bash package in Ubuntu:
  New

Bug description:
  The MAAS team uses a script, lp:maas-images, which generates the
  images available at images.maas.io. As part of this process we use the
  following to convert a SquashFS image to an ext4 image.

  sudo bash -ec 'src="$1"; img="$2"; trgmp="$3";
  mounts=""
  cleanup() { for m in $mounts; do umount "$m"; done; }
  trap cleanup EXIT
  mount -o loop "$img" "$trgmp"
  mounts="$trgmp"
  unsquashfs -force -xattrs -dest "$trgmp" "$src"' \
  "squashimg-to-image" "$squashimg" "$output" "$trgmp"
  ret=$?
  rm -Rf "$mtmp" || return
  return $ret

  Prior to 4.4-1ubuntu1 the trap would always cause the cleanup function
  to be called. It's now only called on failure. This causes the mount
  to remain and the following rm to fail. If I add 'false' to the end of
  the command script or downgrade to 4.3-15ubuntu1 the cleanup occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bash/+bug/1641832/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1603898] Re: DNS resolution fails when using VPN and routing all traffic over it

2016-08-31 Thread Lee Trager
I'm running into the same issue. My network doesn't have IPv6, although
it's configured to try; turning off IPv6 had no effect.

If I direct all traffic through the VPN ('Use this connection only for
resources on its network' in the routes window is left unchecked) I get
a DNS server, but it's not used by default:

$ dig @127.0.1.1 +short chaos txt servers.bind
"10.172.64.1#53 12 0"
$ dig google.com +short
# No result returned
$ dig google.com +short @10.172.64.1
172.217.4.174

If I only direct VPN traffic for resources on the VPN network ('Use this
connection only for resources on its network' in the routes window is
checked) on BOTH IPv4 and IPv6, I get two DNS servers and DNS seems to
work:

$ dig @127.0.1.1 +short chaos txt servers.bind
"192.168.1.1#53 6 0" "10.172.64.1#53 0 0"
$ dig google.com +short
216.58.216.174

So it seems NetworkManager is adding the VPN DNS server but it's not
using it.
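
A hedged workaround sketch, not a confirmed fix (the file name is
illustrative; 10.172.64.1 is the VPN DNS server from the transcript
above): the NM-spawned dnsmasq reads /etc/NetworkManager/dnsmasq.d, so a
server= line there should force it to forward queries to the VPN
resolver:

  echo 'server=10.172.64.1' | sudo tee /etc/NetworkManager/dnsmasq.d/vpn-dns.conf
  sudo systemctl restart NetworkManager   # respawn dnsmasq with the new conf
  dig google.com +short                   # should now resolve via 10.172.64.1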

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1603898

Title:
  DNS resolution fails when using VPN and routing all traffic over it

Status in network-manager package in Ubuntu:
  New
Status in network-manager source package in Xenial:
  New

Bug description:
  When using our company VPN, the NetworkManager-configured dnsmasq
  ends up in a weird state where it's unable to answer queries because
  it's (incorrectly) sending them to 127.0.0.1:53 where nothing is
  listening.

  | root@ornery:~# nmcli con show 'Canonical UK - All Traffic' | grep -i dns
  | ipv4.dns:
  | ipv4.dns-search:
  | ipv4.dns-options:   (default)
  | ipv4.ignore-auto-dns:   no
  | ipv6.dns:
  | ipv6.dns-search:
  | ipv6.dns-options:   (default)
  | ipv6.ignore-auto-dns:   no
  | IP4.DNS[1]: 10.172.192.1
  | root@ornery:~# ps auxfw | grep [4]035
  | nobody4035  0.0  0.0  52872  1620 ?SJun29   6:39  \_ 
/usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces 
--pid-file=/var/run/NetworkManager/dnsmasq.pid --listen-address=127.0.1.1 
--cache-size=0 --proxy-dnssec 
--enable-dbus=org.freedesktop.NetworkManager.dnsmasq 
--conf-dir=/etc/NetworkManager/dnsmasq.d
  | root@ornery:~# 

  Querying the DNS server provided by the VPN connection works; querying
  dnsmasq doesn't:

  | root@ornery:~# dig +short @10.172.192.1 www.openbsd.org
  | 129.128.5.194
  | root@ornery:~# dig @127.0.1.1 www.openbsd.org
  | 
  | ; <<>> DiG 9.10.3-P4-Ubuntu <<>> @127.0.1.1 www.openbsd.org
  | ; (1 server found)
  | ;; global options: +cmd
  | ;; Got answer:
  | ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 6996
  | ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
  | 
  | ;; QUESTION SECTION:
  | ;www.openbsd.org.   IN  A
  | 
  | ;; Query time: 0 msec
  | ;; SERVER: 127.0.1.1#53(127.0.1.1)
  | ;; WHEN: Mon Jul 18 10:25:48 CEST 2016
  | ;; MSG SIZE  rcvd: 33
  | 
  | root@ornery:~# 

  While running 'dig @127.0.1.1 www.openbsd.org':

  | root@ornery:~# tcpdump -i lo port 53 -v -n
  | tcpdump: listening on lo, link-type EN10MB (Ethernet), capture size 262144 
bytes
  | 10:26:04.728905 IP (tos 0x0, ttl 64, id 56577, offset 0, flags [none], 
proto UDP (17), length 72)
  | 127.0.0.1.54917 > 127.0.1.1.53: 32273+ [1au] A? www.openbsd.org. (44)
  | 10:26:04.729001 IP (tos 0x0, ttl 64, id 49204, offset 0, flags [DF], proto 
UDP (17), length 61)
  | 127.0.1.1.53 > 127.0.0.1.54917: 32273 Refused 0/0/0 (33)

  | root@ornery:~# netstat -anp | grep 127.0.[01].1:53
  | tcp0  0 127.0.1.1:530.0.0.0:*   LISTEN  
4035/dnsmasq
  | udp0  0 127.0.1.1:530.0.0.0:*   
4035/dnsmasq
  | root@ornery:~# 

  You can see below a) that dnsmasq thinks it is configured to use a DNS
  server provided by the VPN, and/but that b) it tries to answer a non
  local query like www.openbsd.org locally.

  | root@ornery:~# kill -USR1 4035; tail /var/log/syslog | grep dnsmasq
  | Jul 18 09:29:22 ornery dnsmasq[4035]: time 1468830562
  | Jul 18 09:29:22 ornery dnsmasq[4035]: cache size 0, 0/0 cache insertions 
re-used unexpired cache entries.
  | Jul 18 09:29:22 ornery dnsmasq[4035]: queries forwarded 1880976, queries 
answered locally 375041
  | Jul 18 09:29:22 ornery dnsmasq[4035]: queries for authoritative zones 0
  | Jul 18 09:29:22 ornery dnsmasq[4035]: server 10.172.192.1#53: queries sent 
792, retried or failed 0
  | root@ornery:~# dig +short @127.0.1.1 www.openbsd.org
  | root@ornery:~# kill -USR1 4035; tail /var/log/syslog | grep dnsmasq
  | Jul 18 09:29:22 ornery dnsmasq[4035]: queries for authoritative zones 0
  | Jul 18 09:29:22 ornery dnsmasq[4035]: server 10.172.192.1#53: queries sent 
792, retried or failed 0
  | Jul 18 09:29:37 ornery dnsmasq[4035]: time 1468830577
  | Jul 18