Re: [linux-lvm] Creating/restoring snapshots in early userspace

2022-08-08 Thread Zdenek Kabelac

On 08. 08. 22 at 20:14, cd wrote:



On 07. 08. 22 at 22:38, cd wrote:


Hello,

I have created some scripts which run in the initramfs during the boot 
process. Specifically, it's an initcpio runtime hook: 
https://man.archlinux.org/man/mkinitcpio.8#ABOUT_RUNTIME_HOOKS
Question: Is this a supported environment in which to create/restore snapshots?

When my script runs lvm lvconvert --merge 
testvg/lvmautosnap-root-1659902622-good
it appears to succeed (exit code is 0, and the restore appears to work 
properly). However, the following warnings appear in stderr as part of the 
restore process:

/usr/bin/dmeventd: stat failed: No such file or directory
WARNING: Failed to unmonitor testvg/lvmautosnap-root-1659902622-good.
/usr/bin/dmeventd: stat failed: No such file or directory



Hi

Your initramfs likely needs to contain a 'modified' version of your system's
lvm.conf where 'monitoring' is disabled (set to 0) - as you do not want
to start monitoring while you are operating in your ramdisk.

Once you flip to your rootfs with your regular /etc/lvm/lvm.conf - you need
to start monitoring of your activated LVs (vgchange --monitor y).


Merging of volume testvg/lvmautosnap-root-1659902622-good started.
/run/lvm/lvmpolld.socket: connect failed: No such file or directory



Again a thing you do not want to run in your ramdisk - lvmpolld is another
service/daemon you should run only once you are in your rootfs.
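Put together, the initramfs/rootfs split described above might look like the following sketch. The config fragment goes into the lvm.conf copied into the initramfs image; the `vgchange` invocations run after switch_root (exact option availability depends on your LVM version):

```shell
# Inside the initramfs image's lvm.conf (not the rootfs copy):
#   activation {
#       monitoring = 0
#   }

# Later, from the real rootfs with the regular /etc/lvm/lvm.conf in place,
# hand things back to the daemons:
vgchange --monitor y   # resume dmeventd monitoring of activated LVs
vgchange --poll y      # let lvmpolld resume in-progress operations (e.g. a merge)
```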

fully removed.


And I get similar errors when trying to create new volumes with lvm lvcreate 
--permission=r --snapshot --monitor n --name my_snapshot
/usr/bin/dmeventd: stat failed: No such file or directory

In summary, I'm happy to just ignore the warning messages. I just want to make 
sure I'm not risking the integrity of the lvm volumes by modifying them during 
this part of the boot process.
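For reference, the lvcreate line quoted above was truncated in the archive; a complete thick-snapshot invocation also needs an origin LV and a size. Everything after `--name my_snapshot` below (the size and the `testvg/root` origin) is purely illustrative, not recovered from the thread:

```shell
# Origin LV and size are illustrative guesses; the original command was truncated.
lvm lvcreate --permission=r --snapshot --monitor n \
    --size 4G --name my_snapshot testvg/root
```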



It looks like you are trying to do something in your ramdisk that you really
should be doing once you flip to your rootfs - the ramdisk is purely meant
to get things 'booting' and to flip to the rootfs ASAP. Doing things in your
ramdisk, which is really not a 'working environment', sounds like you are
asking for some big trouble when resolving error paths (i.e. using
unmonitored devices like 'snapshot/mirror/raid/thin...' for a longer period of
time is simply a 'bad design/plan') - the switch to the rootfs should happen
quickly after you initiate things in your initramfs.

Regards

Zdenek


Thanks for the insightful response. Indeed, setting monitoring = 0 in lvm.conf 
makes the warning messages go away. Interestingly, on Arch, the initcpio hook 
for lvm2 _does_ attempt to apply this setting,
with sed -i '/^\smonitoring =/s/1/0/' "${BUILDROOT}/etc/lvm/lvm.conf"
https://github.com/archlinux/svntogit-packages/blob/packages/lvm2/trunk/lvm2_install#L38

However, the sed pattern fails to match because the line is commented out in 
lvm.conf.
I've filed a bug with arch to address this: 
https://bugs.archlinux.org/task/75552?project=1&string=lvm2
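For reference, a pattern that tolerates the commented-out default could look like this. This is a sketch, not the patch Arch eventually applied; it operates on the same `${BUILDROOT}` path the hook uses:

```shell
# Matches the monitoring line whether the stock lvm.conf ships it as
# "monitoring = 1" or commented out as "# monitoring = 1".
sed -i -E 's/^[[:space:]]*#?[[:space:]]*(monitoring[[:space:]]*=).*/\1 0/' \
    "${BUILDROOT}/etc/lvm/lvm.conf"
```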



Yep - I think there was a similar issue with Dracut.
It's a side effect of making most of the default settings 'commented out' 
- these scripts then stopped working in that case.


Regards

Zdenek

___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/




Re: [linux-lvm] Problem with partially activate logical volume

2022-08-08 Thread Ken Bass
Zdenek: thanks. That makes more sense. I will try after (re-)cloning. Give it  
a day or so. 

Ken 


On Aug 5, 2022, at 7:03 AM, Zdenek Kabelac wrote:
>On 03. 08. 22 at 23:31, Ken Bass wrote:
>> 
>> That's pretty much it. Whenever any app attempts to read a block from the
>> missing drive, I get the "Buffer I/O error" message. So, even though my
>> recovery apps can scan the LV, marking blocks on the last drive as
>> missing/unknown/etc., they can't display any recovered data - which I know
>> does exist. Looking at raw data from the apps' scans, I can see directory
>> entries, as well as files. I'm sure the inodes and bitmaps are still there for
>> some of these, I just can't really reverse engineer and follow them through.
>> But isn't that what the apps are supposed to do?
>
>As mentioned in my previous email, you shall *NOT* fix the partially
>activated device in-place - this will not lead to a good result.
>
>You should copy the content to some valid storage device of the same size
>as the one you are trying to recover.
>
>You can 'partially' activate the device with a "zero" filler instead of
>"error" (see the lvm.conf setting: missing_stripe_filler="...") - this way
>you will just 'read' zeros for the missing parts.
>
>Your 2nd option is to 'correct' the VG by replacing the missing PV with a
>new one with preferably zeroed content - so you will not read 'random'
>garbage in the places where this new PV fills the space after your missing PV.
>Although even in this case - I'd still run 'fsck' on a snapshot created on
>top of such an LV, to give you another chance of recovery if you pick a
>wrong answer (since fsck might be 'quite' interactive when doing such a
>large-scale repair).
>
>
>> Sorry I haven't replied sooner, but it takes a long time (days) to clone,
>> then scan 16Tb...
>> 
>> So, please, any suggestions are greatly appreciated, as well as needed.
>> 
>> ken
>> 
>> (I know: No backup; got burned; it hurts; and I will now always have
>> backups. 'Nuf said.)
>
>Before you run your 'fsck', create a snapshot of your newly created 'backup'
>and make all the repair actions in the snapshot.
>
>Once you are 'satisfied' with the 'repaired' filesystem, you can then 'merge'
>the snapshot back to your origin and use it.
>
>Regards
>
>Zdenek
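The snapshot-then-repair workflow Zdenek describes could be sketched roughly as follows; the VG/LV names and snapshot size are illustrative, not taken from the thread:

```shell
# 1. Take a snapshot of the freshly cloned 'backup' LV so fsck's changes
#    can be kept or discarded (names and sizes are illustrative).
lvcreate --snapshot --size 100G --name backup_snap vg_rescue/backup

# 2. Run the (possibly interactive) repair against the snapshot only.
fsck -y /dev/vg_rescue/backup_snap

# 3. If the repaired filesystem looks good, fold the snapshot back into
#    the origin; otherwise lvremove the snapshot and try again.
lvconvert --merge vg_rescue/backup_snap
```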