** Description changed:

- cc_grub_dpkg was fixed to support nvme drives, but didn't clear the
- state of cc_grub_dpkg and didn't rerun it on upgrades
+ === Begin SRU Template ===
+ [Impact]
+ Older versions of cloud-init could misconfigure grub on nvme devices,
+ which could prevent instances from booting after a grub upgrade.
+ 
+ [Test Case]
+ For focal, bionic, and xenial verify the following:
+ 1. on an affected instance, test that installing the new version of cloud-init appropriately updates debconf
+ 2. on an affected instance, modify the debconf settings and test that installing the new version of cloud-init does not touch those values
+ 3. in a container, confirm that cloud-init does not touch the values
+ 4. on an unaffected instance (i.e. one without an NVMe root), confirm that cloud-init does not touch the values
+ 
+ Steps for test 1:
+ # Find an old affected image with
+ aws ec2 describe-images --filters "Name=name,Values=Ubuntu <release number>*"
+ 
+ # Launch an AWS instance with the affected image-id
+ 
+ # After startup, connect via SSH, then
+ # Verify we're on an nvme device
+ lsblk | grep nvme
+ 
+ # Verify install_devices set incorrectly
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ # update cloud-init to proposed
+ mirror=http://archive.ubuntu.com/ubuntu
+ echo deb $mirror $(lsb_release -sc)-proposed main | tee /etc/apt/sources.list.d/proposed.list
+ apt-get update -q
+ apt-get install -qy cloud-init
+ 
+ # Verify "Reconfiguring grub" message in upgrade output
+ 
+ # Verify install_devices set correctly
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ # Verify that after reboot we can still connect
+ 
+ Steps for test 2:
+ # Find an old affected image with
+ aws ec2 describe-images --filters "Name=name,Values=Ubuntu <release number>*"
+ 
+ # Launch an AWS instance with the affected image-id
+ 
+ # After startup, connect via SSH, then
+ # Verify we're on an nvme device
+ lsblk | grep nvme
+ 
+ # Verify install_devices set incorrectly
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ # Update install device to something (anything) else
+ echo 'set grub-pc/install_devices /dev/sdb' | debconf-communicate
+ 
+ # update cloud-init to proposed
+ mirror=http://archive.ubuntu.com/ubuntu
+ echo deb $mirror $(lsb_release -sc)-proposed main | tee /etc/apt/sources.list.d/proposed.list
+ apt-get update -q
+ apt-get install -qy cloud-init
+ 
+ # Verify no "Reconfiguring grub" message in upgrade output
+ # Verify install_devices not changed
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ Steps for test 3:
+ # Launch the affected image with lxd
+ lxc launch <image> <container>
+ 
+ # Obtain a bash shell in the container
+ lxc exec <container> bash
+ 
+ # Check install_devices
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ # Update cloud-init to proposed
+ mirror=http://archive.ubuntu.com/ubuntu
+ echo deb $mirror $(lsb_release -sc)-proposed main | tee /etc/apt/sources.list.d/proposed.list
+ apt-get update -q
+ apt-get install -qy cloud-init
+ 
+ # Verify no "Reconfiguring grub" message in upgrade output
+ # Verify install_devices not changed
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ Steps for test 4:
+ # Launch GCE image with:
+ gcloud compute instances create falcon-test --image <image> --image-project ubuntu-os-cloud --zone=us-central1-a
+ 
+ # After startup, connect via SSH, then
+ # Verify we're not on an nvme device
+ lsblk | grep nvme
+ 
+ # Check install_devices
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ # update cloud-init to proposed
+ mirror=http://archive.ubuntu.com/ubuntu
+ echo deb $mirror $(lsb_release -sc)-proposed main | tee /etc/apt/sources.list.d/proposed.list
+ apt-get update -q
+ apt-get install -qy cloud-init
+ 
+ # Verify "Reconfiguring grub" message not in upgrade output
+ 
+ # Verify install_devices not changed
+ debconf-show grub-pc | grep "install_devices:"
+ 
+ # Verify that after reboot we can still connect
+ 
+ [Regression Potential]
+ If a user manually configured their system in such a way that both
+ devices exist and the configuration matches our error condition, the
+ grub install device could be reconfigured incorrectly.
  
  
- However, that only fixed the issue for the newly first-booted instances on nvme.
+ [Other Info]
+ Pull request: https://github.com/canonical/cloud-init/pull/514/files
+ Upstream commit: https://github.com/canonical/cloud-init/commit/f48acc2bdc41c347d2eb899038e2520383851103
+ 
+ 
+ ==== Original Description ====
+ cc_grub_dpkg was fixed to support nvme drives, but didn't clear the state of cc_grub_dpkg and didn't rerun it on upgrades
+ 
+ However, that only fixed the issue for the newly first-booted instances
+ on nvme.
  
  All existing boots of cloud-init on nvmes are still broken, and will
  fail to apply the latest grub2 update for BootHole mitigation.
  
  Please add maintainer-script changes to re-run cc_grub_dpkg, once only,
  when cloud-init is upgraded to a new SRU, to ensure that cc_grub_dpkg
  has been rerun since the nvme fixes.
  
  You could guard this call so that it runs only if the grub-pc devices in
  the debconf database do not exist on the instance (i.e. debconf records
  /dev/sda, and yet /dev/sda does not exist).
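
  The suggested guard can be sketched as a small shell check. This is only
  an illustration of the condition described above, not the actual
  maintainer-script code; the `device_missing` helper name and the
  commented re-run command are hypothetical.

```shell
#!/bin/sh
# Sketch of the suggested guard: re-run cc_grub_dpkg only when the
# install device recorded in debconf no longer exists on the instance
# (e.g. debconf has /dev/sda, but the root disk is now /dev/nvme0n1).

# Return success when a non-empty device path is recorded but absent.
device_missing() {
    dev="$1"
    [ -n "$dev" ] && [ ! -e "$dev" ]
}

# On a real instance the device would come from debconf, e.g.:
#   dev=$(debconf-show grub-pc | sed -n 's/.*install_devices: *//p')
# and the maintainer script would then (hypothetically) do something like:
#   if device_missing "$dev"; then
#       : # re-run cc_grub_dpkg here
#   fi
```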

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1889555

Title:
  cc_grub_dpkg was fixed to support nvme drives, but didn't clear the
  state of cc_grub_dpkg and didn't rerun it on upgrades

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1889555/+subscriptions
