------- Comment From [email protected] 2016-11-21 05:20 EDT-------
[email protected],
(In reply to comment #12)
> (In reply to comment #8)
> > (In reply to comment #7)
> > > (In reply to comment #2)
> > > > (In reply to comment #1)
> > > > PV Volume information:
> > > > physical_volumes {
> > > >
> > > > pv0 {
> > > > device = "/dev/sdb5" # Hint only
> > >
> > > > pv1 {
> > > > device = "/dev/sda" # Hint only
> > >
> > > This does not look very good, having single-path SCSI disk devices
> > > referenced by LVM. With zfcp-attached SCSI disks, LVM must be on top of
> > > multipathing. Could you please double-check if your installation with
> > > LVM and multipathing does the correct layering? If not, this would be
> > > an independent bug. See also [1, slide 28 "Multipathing for Disks - LVM
> > > on Top"].
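For comparison, correct layering would report the multipath devices (not
single-path sd devices) as PVs; roughly like this, where the device and VG
names are only examples:

$ sudo pvs -o pv_name,vg_name
  PV                  VG
  /dev/mapper/mpatha  ubuntu-vg
  /dev/mapper/mpathb  ubuntu-vg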
>
> ping
>
> Maybe this is part of the root cause for the sudden failure.
> However, I still don't understand why it had worked before but no longer
> does. What has changed in the meantime to break it?
> Were you using zfcp auto LUN scan before, but no longer are?
> Or is the LVM-on-multipathing layering broken (see above)?
> Actually, we're now doing a lot of guessing and desperately need debug data
> from the broken system. Typically we need the output of dbginfo.sh (Ubuntu
> may prefer the output of sosreport).
> Since the system does not boot, that's a bit tricky, but maybe the method
> described in the previous paragraph works and you can run dbginfo.sh in a
> chroot of the broken root-fs; that would at least give us the persistent
> config on disk (though of course not the dynamic config).
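For illustration, collecting that from a rescue or installer shell could look
roughly like this; the VG/LV names are examples and must be adapted:

$ sudo vgchange -ay                  # activate the volume groups
$ sudo mount /dev/vg0/root /mnt      # "vg0/root" is an assumed LV name
$ for fs in proc sys dev; do sudo mount --bind /$fs /mnt/$fs; done
$ sudo chroot /mnt dbginfo.sh        # archive typically ends up in /tmp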
Any news on the debug data?
What does the output of the following command look like?
$ sudo dmsetup ls --tree -o device,blkdevname,active,open,rw,uuid
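On a correctly layered system I would expect the LVM volumes to sit on top of
the multipath devices in that tree; roughly like the following, where all
names and major:minor numbers are made up:

$ sudo dmsetup ls --tree
ubuntu--vg-root (252:2)
 └─mpatha (252:0)
    ├─ (8:0)
    └─ (8:16)

If instead the LV maps directly onto (8:0)-style single-path devices, the
layering is wrong.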
Otherwise, it seems to me we can close this as notabug.
> > The "update-initramfs -u" command was never explicitly run after the system
> > was built.
> > The second PV volume was added to VG on 10/26/2016. However, it was not
> > until early November that the root FS was extended.
> >
> > Between 10/16/2016 and the date the root fs was extended, the second PV was
> > always online and and active in a VG and LV display after every Reboot.
>
> I don't understand how it would have ever worked without having run
> "update-initramfs -u" after the addition of another PV to the root-fs
> dependencies. Maybe chzdev did some magic; what was its exact output when
> you made the actively added paths persistent with "chzdev zfcp-lun -e
> --online"?
ping
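For reference, the sequence I would expect after adding paths for a new
root-fs PV is roughly the following; the bus-ID, WWPN, and LUN are
placeholders:

$ sudo chzdev zfcp-lun -e --online 0.0.1900:0x5005076300000000:0x4000000000000000
$ sudo update-initramfs -u   # rebuild the initrd so it activates the new PV's paths
$ sudo zipl                  # re-write the boot record, unless a hook already did

Without the last two steps, the initrd on disk keeps activating only the old
single PV.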
> While zipl does support device-mapper targets under certain circumstances
> for the "zipl target" (/boot/ with Ubuntu 16.04), it is still dangerous to
> have a multi-PV root-fs _and_ the zipl target being part of the root-fs,
> i.e. the zipl target not being its own mount point withOUT LVM.
> [1, slide 25 "Multipathing for Disks - Persistent Configuration"]
> http://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.ludd/ludd_c_zipl_lboot.html
> http://www.mail-archive.com/linux-390%40vm.marist.edu/msg62492.html
> (root-fs on LVM in general:
> http://www.mail-archive.com/[email protected]/msg69553.html)
NB
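To make that concrete: the safer layout keeps the zipl target as its own
small mount point outside LVM, e.g. (simplified, illustrative lsblk output;
multipath partition names depend on the setup):

$ lsblk -o NAME,TYPE,MOUNTPOINT
NAME              TYPE  MOUNTPOINT
sda               disk
└─mpatha          mpath
  ├─mpatha-part1  part  /boot   <- zipl target, no LVM
  └─mpatha-part2  part
    └─vg0-root    lvm   /
sdb               disk
└─mpatha          mpath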
> > > REFERENCE
> > >
> > > [1]
> > > http://www-05.ibm.com/de/events/linux-on-z/pdf/day2/4_Steffen_Maier_zfcp-best-practices-2015.pdf