I don't get this bug.
I have at least 1 snapshot active on my "/home" partition at all times.
The VG that /home is in contains most of my partitions (26), with
2 more partitions on a separate VG (with its own PVs).
Now, I've noticed that when I'm booting, it *does* take a bit of time to
bring up and mount all of the LVs, but you can see that the root mount is NOT
on a VG/LV -- it's on a "regular device" (the numbers on the left are with
kernel time printing turned on, so they are seconds after boot):
[ 4.207621] XFS (sdc1): Mounting V4 Filesystem
[ 4.278746] XFS (sdc1): Starting recovery (logdev: internal)
[ 4.370757] XFS (sdc1): Ending recovery (logdev: internal)
[ 4.379839] VFS: Mounted root (xfs filesystem) on device 8:33.
..
[ 4.449462] devtmpfs: mounted
... last msg before my "long pause", after which pretty much everything
gets activated:
[ 4.591580] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1a.7/usb1/1-3/1-3.2/1-3.2:1.0/0003:413C:2003.0002/input/input4
[ 4.604588] hid-generic 0003:413C:2003.0002: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1a.7-3.2/input0
[ 19.331731] showconsole (170) used greatest stack depth: 13080 bytes left
[ 19.412411] XFS (sdc6): Mounting V4 Filesystem
[ 19.505374] XFS (sdc6): Ending clean mount
.... more mostly unrelated messages... then you start seeing "dm's" mixed in
with the mounting messages -- just before kernel logging stops:
[ 22.205351] XFS (sdc2): Mounting V4 Filesystem
[ 22.205557] XFS (sdc3): Mounting V4 Filesystem
[ 22.216414] XFS (dm-5): Mounting V4 Filesystem
[ 22.217893] XFS (dm-6): Mounting V4 Filesystem
[ 22.237345] XFS (dm-1): Mounting V4 Filesystem
[ 22.245201] XFS (dm-8): Mounting V4 Filesystem
[ 22.267971] XFS (dm-13): Mounting V4 Filesystem
[ 22.293152] XFS (dm-15): Mounting V4 Filesystem
[ 22.299737] XFS (sdc8): Mounting V4 Filesystem
[ 22.340692] XFS (sdc2): Ending clean mount
[ 22.373169] XFS (sdc3): Ending clean mount
[ 22.401381] XFS (dm-5): Ending clean mount
[ 22.463974] XFS (dm-13): Ending clean mount
[ 22.474813] XFS (dm-1): Ending clean mount
[ 22.494807] XFS (dm-8): Ending clean mount
[ 22.505380] XFS (sdc8): Ending clean mount
[ 22.544059] XFS (dm-15): Ending clean mount
[ 22.557865] XFS (dm-6): Ending clean mount
[ 22.836244] Adding 8393924k swap on /dev/sdc5. Priority:-1 extents:1 across:8393924k FS
Kernel logging (ksyslog) stopped.
Kernel log daemon terminating.
-----
A couple of things are different about my setup from the 'norm':
1) Since my distro (openSuSE) jumped to systemd (and I haven't), I had to
write some rc scripts to help bring up the system.
2) One reason for this was that my "/usr" partition is separate from root,
and my distro decided to move many libs/bins to /usr and leave symlinks on
the root device pointing at the programs in /usr. One of those was 'mount'
(and its associated libs).
That meant that once the rootfs was mounted, I had no way to mount /usr,
where most of the binaries are. (I asked why they didn't do it the "safe
way" -- move most of the binaries to /bin & /lib64 and put symlinks in /usr
-- but they evaded answering that question for ~2 years.) So one script I
run after updating my system is a dependency checker that checks mount order
and tries to make sure that early-mounted disks don't have dependencies on
later-mounted disks (see the sketch after this list).
3) Adding to my problem is that I don't use an initrd to boot -- I boot
directly from my hard disk. My distro folks thought they had solved the
problem by hiding the mount of /usr in the initrd, so when they start
systemd to control the boot, it is happy. But since I boot from HD, I was
told my ~15-year-old configuration was no longer supported. Bleh!
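Here's a minimal sketch of the ordering check I mentioned in 2). The awk
logic is just my illustration of the idea, not the actual script I run:

#!/bin/sh
# Sketch: warn about any fstab mount point that sits UNDER a mount point
# listed later in the file (a naive top-to-bottom "mount -a" fails there).
awk '$1 !~ /^#/ && $2 ~ /^\// { mp[++n] = $2 }
     END {
         for (i = 1; i <= n; i++)
             for (j = i + 1; j <= n; j++)
                 if (index(mp[i], mp[j] "/") == 1)
                     printf "warn: %s listed before its parent %s\n", mp[i], mp[j]
     }' /etc/fstab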
One thing that might account for the speed difference is that I don't wait
for udev to start my VGs... and here is where I think I see my ~15-second
pause:
if test -d /etc/lvm -a -x /sbin/vgscan -a -x /sbin/vgchange ; then
    # Waiting for udev to settle
    if [ "$LVM_DEVICE_TIMEOUT" -gt 0 ] ; then
        echo "Waiting for udev to settle..."
        /sbin/udevadm settle --timeout="$LVM_DEVICE_TIMEOUT"
    fi
    echo "Scanning for LVM volume groups..."
    /sbin/vgscan --mknodes
    echo "Activating LVM volume groups..."
    /sbin/vgchange -a y $LVM_VGS_ACTIVATED_ON_BOOT
    mount -c -a -F
    ...
So at the point where I have a pause, I'm doing vgscan and vgchange, then
a first shot at mounting "all" (it was the easiest thing to fix/change).
Without that mount-all attempt in my 4th boot script (boot.lvm),
I often had long timeouts in the boot process. But as you can see, I
tell mount to fork (-F) and try to mount all the FSs at the same time. I'm
pretty sure that's where the pause is, given that right after the pause,
XFS starts issuing messages about "dm"s being mounted.
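For anyone whose mount lacks -F, here's a rough sketch of what that
parallel mount amounts to (the real mount(8) handles more of the fstab
fields and edge cases than this does):

#!/bin/sh
# Sketch: one backgrounded mount per fstab entry, roughly like "mount -a -F".
while read -r dev mnt type opts dump pass; do
    case "$dev" in ''|\#*) continue ;; esac      # skip blanks and comments
    [ "$type" = swap ] && continue               # swap entries aren't mounted
    case "$opts" in *noauto*) continue ;; esac   # honor noauto
    mount -t "$type" -o "$opts" "$dev" "$mnt" &  # fork, like -F
done < /etc/fstab
wait   # all the mounts proceed in parallel; collect them here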
Somewhere around script #8 is my distro's "localfs" mounts -- but for me,
that was way too late, since many of the boot utils used not only
/usr but also /usr/share (another partition, split out after /usr/share grew
too big -- and it *is* on a VG).
In summary -- I had very long waits (minutes) using the distro boot
scripts (people report even longer waits using the systemd startup
method when they are also on non-SSD disks). But after I made my
own changes -- out of necessity -- nominal (no error/timeout problems)
boot times dropped from around 35 seconds to around 25 (it's a server,
so lots of server processes, but no desktop).
So it seems like the dm devices aren't being announced to udev properly
(maybe?), but the early "mount all, in parallel, in the background" step
in my boot process seems to have solved most of the time-delay problems.
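If someone wants to test the "not announced to udev" theory, here's one way
I'd check it (a sketch; "Data" is my VG name -- substitute yours):

#!/bin/sh
# Sketch: watch whether dm uevents actually reach udev during VG activation.
udevadm monitor --kernel --udev --subsystem-match=block &
MONPID=$!
vgchange -a y Data             # each activated LV should emit add/change events
udevadm settle --timeout=15    # wait for udev to finish processing them
kill "$MONPID"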
I *do* get duplicate mount messages later on when distro scripts try to mount
everything, but they seem to be harmless at this point:
tmpfs : already mounted
tmpfs : already mounted
/run : successfully mounted
/dev/sdc6 : already mounted
/dev/Data/UsrShare : already mounted
/dev/sdc2 : already mounted
/var/rtmp : successfully mounted
/dev/Data/Home.diff : already mounted
/dev/sdc3 : already mounted
/dev/Data/Home : already mounted
/dev/Data/Share : already mounted
/dev/Data/Media : already mounted
/dev/sdc8 : already mounted
/dev/Data/cptest : already mounted
---
As far as LVs go, most are in the same VG, "Data":
LV                        VG   Attr      LSize    Origin Data%
Home                      Data owc-aos--    1.50t
Home-2015.02.17-03.07.03  Data -wi-ao---  796.00m
Home-2015.03.01-03.07.03  Data -wi-ao---  884.00m
Home-2015.03.03-03.07.03  Data -wi-ao---  812.00m
Home-2015.03.05-03.07.03  Data -wi-ao---  868.00m
Home-2015.03.07-03.07.02  Data -wi-ao---  740.00m
Home-2015.03.09-03.07.02  Data -wi-ao---  856.00m
Home-2015.03.11-03.07.03  Data -wi-ao---    1.14g
Home-2015.03.12-03.07.02  Data -wi-ao---  868.00m
Home-2015.03.13-03.07.06  Data -wi-ao--- 1000.00m
Home-2015.03.16-03.07.12  Data -wi-ao---  840.00m
Home-2015.03.17-03.07.03  Data -wi-ao---  888.00m
Home-2015.03.19-03.07.03  Data swi-aos--    1.50t Home   0.65
Home.diff                 Data -wi-ao---  512.00g
Lmedia                    Data -wi-ao---    8.00t
Local                     Data -wi-ao---    1.50t
Media                     Data -wc-ao---   10.00t
Share                     Data -wc-ao---    1.50t
Squid_Cache               Data -wc-ao---  128.00g
Sys                       Data -wc-a----   96.00g
Sysboot                   Data -wc-a----    4.00g
Sysvar                    Data -wc-a----   28.00g
UsrShare                  Data -wc-ao---   50.00g
Win                       Data -wi-a----    1.00t
cptest                    Data -wi-ao---    5.00g
---
As you can see, I have the 1 snapshot of Home (which gets used to
generate the time-labeled snaps)....
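(For reference, creating a time-labeled snapshot like those comes down to
something like the following -- the size and exact naming here are my
guesses for illustration, not my actual job:)

#!/bin/sh
# Sketch: snapshot Data/Home under a timestamped name (snapshot size is a guess).
lvcreate -s -L 2G -n "Home-$(date +%Y.%m.%d-%H.%M.%S)" Data/Home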
So maybe try adding a "2nd" mount of the file systems right after udev & LVM
activation. Here are my startup scripts, up to the point where I do that mount:
S01boot.sysctl@
S01boot.usr-mount@
S02boot.udev@
S04boot.lvm@
--- in the lvm script is where I added the early mount w/fork.
(I did have to make sure my mount/fsck order in fstab was correct. Since the
disks are all XFS, the fsck is mostly a no-op, so I usually point fsck.xfs at
/bin/true.)
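(By "point fsck.xfs at /bin/true" I mean something like the following --
paths assumed; many systems already ship fsck.xfs as an equivalent no-op:)

# Make boot-time fsck a no-op for XFS (XFS verifies its log on mount anyway):
ln -sf /bin/true /sbin/fsck.xfs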
Anyway, it's a fairly trivial test...