On Sun, 2025-02-16 at 15:45 -0600, Roger Heflin wrote:
> The first thing I would do is change the mount line to add ",nofail"
> on the options so that a failure does not drop you to single user
> mode.
> 
> Then you can boot the system up and with the system on the network
> figure out the state of things.

I wasn't aware of this option, but I'm not sure it's any better.
Right now I can just run "vgchange -ay" and "exit" to resume the boot
process and end up in the Gnome environment; if I add nofail to /home
I still end up in an unusable state, because Gnome can't load my
user's files.
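
(For anyone following along, I understand the suggestion to mean an
fstab entry along these lines; the VG/LV names below are placeholders,
not my actual volume names, and the device-timeout part is optional:)

```
# Hypothetical /etc/fstab entry -- "myvg-home" is a placeholder name.
# nofail lets the boot continue even if the LV never activates;
# x-systemd.device-timeout shortens the default 90 s wait for the device.
/dev/mapper/myvg-home  /home  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```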

> In the old days I have seen lvm2-lvmetad break systems on boot up in
> bizarre ways.

I'm not sure that Fedora uses lvm2-lvmetad, and Google isn't helping
me; any hits I find are for Red Hat 9.
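
For what it's worth, my understanding is that lvmetad was removed
upstream in the lvm2 2.03 series, so a current Fedora shouldn't be
running it at all. A quick way to check on the affected box (just a
sketch; the unit glob is from memory):

```shell
# List any lvmetad-related units; expected to be empty on current
# Fedora, since the daemon was removed in lvm2 2.03.
systemctl list-unit-files 'lvm2-lvmetad*'

# On old releases the daemon was toggled by global/use_lvmetad in
# lvm.conf; a current lvmconfig should not recognise that setting.
lvmconfig global/use_lvmetad
```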

I wonder if this is relevant:

>From lvm.conf:

        # Configuration option devices/scan_lvs.
        # Allow LVM LVs to be used as PVs. When enabled, LVM commands will
        # scan active LVs to look for other PVs. Caution is required to
        # avoid using PVs that belong to guest images stored on LVs.
        # When enabled, the LVs scanned should be restricted using the
        # devices file or the filter. This option does not enable
        # autoactivation of layered VGs, which requires editing LVM udev
        # rules (see LVM_PVSCAN_ON_LVS.)
        # This configuration option has an automatic default value.
        # scan_lvs = 1

I had no luck googling LVM_PVSCAN_ON_LVS either.
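
Since the lvm.conf comment says that knob lives in the udev rules
rather than in lvm.conf itself, grepping the installed rules is
probably the quickest way to find it (the path below is the usual
Fedora location; it may differ by version):

```shell
# Look for the LVM_PVSCAN_ON_LVS hook in the installed lvm udev rules
# (on Fedora these normally live under /usr/lib/udev/rules.d/).
grep -rn LVM_PVSCAN_ON_LVS /usr/lib/udev/rules.d/
```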


> I disabled it on my systems (and the 1000s of enterprise machines I
> used to support) because it caused random PVs to not be found
> sometimes.  Typically if something causes a PV to not get found it
> will be repeatable on that given system (likely some timing problem).
> 
> The only useful thing it does is speed up scans when a disk is
> spun down and/or when you have 1000s of disks.  But it does not
> speed anything up that much unless you have a huge number of disks
> that are spun down.  On large SAN systems the testing I did said
> that without it, it would take 2-3 seconds to scan 1000s of disks
> (worth the wait given the random failures that caused havoc),
> versus immediate.
> 
> And tiny changes in lvm/udev rules have changed it from working to
> broken.
> 
> On Sun, Feb 16, 2025 at 9:11 AM <christophe.oc...@gmail.com> wrote:
> >
> > I'm at present fighting with LVM2; for some weird reason I can't
> > get my lvm volume to be activated & mounted at boot, resulting in
> > me having to do "vgchange -ay" at boot (after I'm dropped in a
> > shell & prompted for my root password). To make debugging even
> > more troublesome, I can only mess with this over the weekends.
> > I've included the lvmdump with this mail; any more help would be
> > very welcome, as I'm at a loss for how to proceed. This might
> > also have relevant information:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=2338735
