** Description changed:

+ [impact]
+ Installing Ubuntu on a disk that was previously an md RAID volume leads
+ to a system that does not boot (or perhaps does not boot reliably).
+ 
+ [test case]
+ Create a disk image that has an md RAID 6, metadata 0.90 device on it
+ using the attached "mkraid6" script.
+ 
+ $ sudo mkraid6
+ 
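+ The attached script is the authoritative version; purely as
+ illustration, here is a minimal sketch of what an mkraid6-style script
+ might do (hypothetical names and sizes; the key point is that 0.90
+ metadata sits at the end of each member device, which is why it
+ survives repartitioning):
+ 
+ #!/bin/sh
+ # Hypothetical sketch, run as root: leave a stale md RAID 6
+ # (metadata 0.90) superblock behind on raid2.img.
+ set -e
+ truncate -s 10G raid2.img member1.img member2.img member3.img
+ D0=$(losetup --show -f raid2.img)
+ D1=$(losetup --show -f member1.img)
+ D2=$(losetup --show -f member2.img)
+ D3=$(losetup --show -f member3.img)
+ # RAID 6 needs at least four members; --run skips the confirmation prompt.
+ mdadm --create /dev/md0 --run --level=6 --metadata=0.90 \
+     --raid-devices=4 "$D0" "$D1" "$D2" "$D3"
+ mdadm --stop /dev/md0                  # the superblocks stay on the members
+ losetup -d "$D0" "$D1" "$D2" "$D3"
+ rm member1.img member2.img member3.img
+ 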
+ Install to it in a VM:
+ 
+ $ kvm -m 2048 -cdrom ~/isos/ubuntu-18.04.2-desktop-amd64.iso \
+     -drive file=raid2.img,format=raw
+ 
+ Reboot into the installed system. Check that it boots and that there are
+ no occurrences of linux_raid_member in the output of "sudo wipefs
+ /dev/sda".
+ 
+ [regression potential]
+ The patch changes a core part of the partitioner. A bug here could
+ crash the installer, making installation impossible. The code is
+ adapted from battle-tested code in wipefs from util-linux and has been
+ somewhat tested before uploading to eoan. The nature of the code makes
+ regressions beyond crashing the installer, or failing to do what it is
+ supposed to do, very unlikely -- it is hard to see how it could result
+ in data loss on a drive not selected to be formatted, for example.
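+ 
+ For reference, the manual equivalent of what the patched code does is
+ wipefs itself; a dry run lists what would be erased without touching
+ the disk:
+ 
+ $ sudo wipefs --no-act --all /dev/sda   # list signatures that would be erased
+ $ sudo wipefs --all /dev/sda            # zero the magic bytes of each signature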
+ 
+ [original description]
+ 18.04 is installed using the GUI installer in 'Guided - use entire
+ volume' mode on a disk which was previously used as an md RAID 6
+ volume. The installer repartitions the disk and installs the system,
+ and the system reboots any number of times without issues. Then
+ packages are upgraded to their current versions and some new packages
+ are installed, including mdadm, which *might* be the culprit. After
+ that the system won't boot any more, failing into the initramfs prompt
+ with a 'gave up waiting for root filesystem device' message. At this
+ point blkid shows the boot disk as a single device with
+ TYPE='linux_raid_member', not as two partitions for EFI and root
+ (/dev/sda, not /dev/sda1 and /dev/sda2). I was able to fix this issue
+ by zeroing the whole disk (dd if=/dev/zero of=/dev/sda bs=4096) and
+ reinstalling. Probably the md superblock is not destroyed when the
+ disk is partitioned by the installer, is not overwritten by installed
+ files, and somehow takes precedence over the partition table (GPT)
+ during boot.
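+ 
+ Zeroing the entire disk is a heavier workaround than necessary: the
+ stale md superblock alone can be cleared, assuming mdadm is available
+ (for example from a live session):
+ 
+ $ sudo mdadm --zero-superblock /dev/sda   # erase only the md superblock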

** Attachment added: "mkraid6"
   
https://bugs.launchpad.net/ubuntu/+source/partman-base/+bug/1828558/+attachment/5280443/+files/mkraid6

https://bugs.launchpad.net/bugs/1828558

Title:
  installing ubuntu on a former md raid volume makes system unusable
