[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-20 Thread Roman Bednar
Did you update the packages as suggested by Nir? If so and it still does
not work, maybe try the pvck recovery that Nir described too.

If that still does not work, consider filing a bug against lvm and providing
the output of the failing command(s) with the - option in the description or
as an attachment. Perhaps there is a better way or a known workaround.
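
For reference, a pvck-based recovery usually looks roughly like the following
(only a sketch, not necessarily the exact steps Nir gave; the device name is
taken from this thread and the exact options depend on the lvm2 version,
roughly 2.03 or newer). First dump the metadata copies lvm can still find on
the PV and review them:

# pvck --dump metadata_search -f /tmp/md.txt /dev/mapper/360014057b367e3a53b44ab392ae0f25f

Then, once a known-good copy has been identified and extracted into its own
file, write it back:

# pvck --repair -f /tmp/md-good.txt /dev/mapper/360014057b367e3a53b44ab392ae0f25f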


-Roman

On Mon, Sep 20, 2021 at 2:22 PM  wrote:

> So, I've made several attempts to restore the metadata.
>
> In my last e-mail I said in step 2 that the PV ID is:
> 36001405063455cf7cd74c20bc06e9304, which is incorrect.
>
> I'm trying to find out the PV UUID running "pvs -o pv_name,pv_uuid
> --config='devices/filter = ["a|.*|"]'
> /dev/mapper/36001405063455cf7cd74c20bc06e9304". However, it shows no PV
> UUID. All I get from the command output is:
>
> # pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
> /dev/mapper/36001405063455cf7cd74c20bc06e9304
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to find device "/dev/mapper/36001405063455cf7cd74c20bc06e9304".
>
> I tried running a bare "vgcfgrestore
> 219fa16f-13c9-44e4-a07d-a40c0a7fe206" command, which returned:
>
> # vgcfgrestore 219fa16f-13c9-44e4-a07d-a40c0a7fe206
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Couldn't find device with uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
>Cannot restore Volume Group 219fa16f-13c9-44e4-a07d-a40c0a7fe206 with
> 1 PVs marked as missing.
>Restore failed.
>
> Seems that the PV is missing, however, I assume the PV UUID (from output
> above) is Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
>
> So I tried running:
>
> # pvcreate --uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb --restore
> /etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00200-1084769199.vg
> /dev/sdb1
>Couldn't find device with uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Device /dev/sdb1 excluded by a filter.
>
> Either the PV UUID is not the one I specified, or the system can't find
> it (or both).
>
> On 2021-09-20 09:21, nico...@devels.es wrote:
> > Hi Roman and Nir,
> >
> > On 2021-09-16 13:42, Roman Bednar wrote:
> >> Hi Nicolas,
> >>
> >> You can try to recover VG metadata from a backup or archive which lvm
> >> automatically creates by default.
> >>
> >> 1) To list all available backups for given VG:
> >>
> >> #vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
> >>
> >> Select the latest one which sounds right, something with a description
> >> along the lines of "Created *before* lvremove".
> >> You might want to select something older than the latest as lvm does a
> >> backup also *after* running some command.
> >>
> >
> > You were right. There actually *are* LV backups, I was specifying an
> > incorrect ID.
> >
> > So the correct command would return:
> >
> > # vgcfgrestore --list 219fa16f-13c9-44e4-a07d-a40c0a7fe206
> > [...]
> >
> > File: /etc/lvm/archive/
> 219fa16f-13c9-44e4-a07d-a40c0a7fe206_00202-1152107223.vg
> >   VG name:219fa16f-13c9-44e4-a07d-a40c0a7fe206
> >   Description:Created *before* executing 'vgs --noheading
> --nosuffix
> > --units b -o +vg_uuid,vg_extent_size'
> >   Backup Time:Sat Sep 11 03:41:25 2021
> > [...]
> >
> > That one seems ok.

[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-20 Thread nicolas

So, I've made several attempts to restore the metadata.

In my last e-mail I said in step 2 that the PV ID was
36001405063455cf7cd74c20bc06e9304, which is incorrect.


I'm trying to find out the PV UUID by running "pvs -o pv_name,pv_uuid
--config='devices/filter = ["a|.*|"]'
/dev/mapper/36001405063455cf7cd74c20bc06e9304". However, it shows no PV
UUID. All I get from the command output is:


# pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]' 
/dev/mapper/36001405063455cf7cd74c20bc06e9304
  /dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at 
offset 2198927383040
  Couldn't read volume group metadata from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f.
  Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at 
2198927383040 has invalid summary for VG.
  Failed to read metadata summary from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f

  Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
  /dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at 
offset 2198927383040
  Couldn't read volume group metadata from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f.
  Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at 
2198927383040 has invalid summary for VG.
  Failed to read metadata summary from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f

  Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
  Failed to find device "/dev/mapper/36001405063455cf7cd74c20bc06e9304".

I tried running a bare "vgcfgrestore 
219fa16f-13c9-44e4-a07d-a40c0a7fe206" command, which returned:


# vgcfgrestore 219fa16f-13c9-44e4-a07d-a40c0a7fe206
  /dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at 
offset 2198927383040
  Couldn't read volume group metadata from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f.
  Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at 
2198927383040 has invalid summary for VG.
  Failed to read metadata summary from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f

  Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
  Couldn't find device with uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
  Cannot restore Volume Group 219fa16f-13c9-44e4-a07d-a40c0a7fe206 with 
1 PVs marked as missing.

  Restore failed.

It seems that the PV is missing; however, I assume the PV UUID (from the
output above) is Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.


So I tried running:

# pvcreate --uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb --restore 
/etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00200-1084769199.vg 
/dev/sdb1

  Couldn't find device with uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
  /dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at 
offset 2198927383040
  Couldn't read volume group metadata from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f.
  Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at 
2198927383040 has invalid summary for VG.
  Failed to read metadata summary from 
/dev/mapper/360014057b367e3a53b44ab392ae0f25f

  Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
  Device /dev/sdb1 excluded by a filter.

Either the PV UUID is not the one I specified, or the system can't find 
it (or both).
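
One more thought, an assumption on my part rather than something suggested in
the thread so far: the "Device /dev/sdb1 excluded by a filter" message means
the host's lvm filter is rejecting /dev/sdb1, so passing the same permissive
filter override used with pvs above might get past it, e.g.:

# pvcreate --uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb --restorefile /etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00200-1084769199.vg --config 'devices/filter=["a|.*|"]' /dev/sdb1

Whether /dev/sdb1 is actually the right target device here is a separate
question.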


On 2021-09-20 09:21, nico...@devels.es wrote:

Hi Roman and Nir,

On 2021-09-16 13:42, Roman Bednar wrote:

Hi Nicolas,

You can try to recover VG metadata from a backup or archive which lvm
automatically creates by default.

1) To list all available backups for given VG:

#vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

Select the latest one which sounds right, something with a description
along the lines of "Created *before* lvremove".
You might want to select something older than the latest as lvm does a
backup also *after* running some command.



You were right. There actually *are* LV backups, I was specifying an
incorrect ID.

So the correct command would return:

# vgcfgrestore --list 219fa16f-13c9-44e4-a07d-a40c0a7fe206
[...]

File:         /etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00202-1152107223.vg
  VG name:  219fa16f-13c9-44e4-a07d-a40c0a7fe206
  Description:  Created *before* executing 'vgs --noheading --nosuffix
--units b -o +vg_uuid,vg_extent_size'
  Backup Time:  Sat Sep 11 03:41:25 2021
[...]

That one seems ok.


2) Find UUID of your broken PV (filter might not be needed, depends on
your local conf):

#pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
/dev/mapper/36001405063455cf7cd74c20bc06e9304



As I understand it, the PV won't be listed in the 'pvs' command, this
is just a matter of finding the associated VG. The command above won't
list a PV associated to the VG in step 1, it just complains the PV
cannot be read.

# pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
/dev/mapper/36001405063455cf7cd74c20bc06e9304
  /dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
offset 2198927383040
  Couldn't read volume group metadata from
/dev

[ovirt-users] Re: Failed to activate Storage Domain --- ovirt 4.2

2021-09-20 Thread fabian . rapetti
Nir,
Thanks for your time. 


[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-20 Thread nicolas

Hi Roman and Nir,

On 2021-09-16 13:42, Roman Bednar wrote:

Hi Nicolas,

You can try to recover VG metadata from a backup or archive which lvm
automatically creates by default.

1) To list all available backups for given VG:

#vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

Select the latest one which sounds right, something with a description
along the lines of "Created *before* lvremove".
You might want to select something older than the latest as lvm does a
backup also *after* running some command.



You were right. There actually *are* LV backups; I was specifying an
incorrect ID.


So the correct command would return:

# vgcfgrestore --list 219fa16f-13c9-44e4-a07d-a40c0a7fe206
[...]
  
File:         /etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00202-1152107223.vg
  VG name:      219fa16f-13c9-44e4-a07d-a40c0a7fe206
  Description:  Created *before* executing 'vgs --noheading --nosuffix --units b -o +vg_uuid,vg_extent_size'
  Backup Time:  Sat Sep 11 03:41:25 2021
[...]

That one seems ok.
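
Side note (my own sketch, not something from the thread): assuming that
archive entry is the one to restore from, pointing vgcfgrestore at that
specific file rather than at the default backup would look roughly like:

# vgcfgrestore -f /etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00202-1152107223.vg 219fa16f-13c9-44e4-a07d-a40c0a7fe206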


2) Find UUID of your broken PV (filter might not be needed, depends on
your local conf):

#pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
/dev/mapper/36001405063455cf7cd74c20bc06e9304



As I understand it, the PV won't be listed by the 'pvs' command; this is just
a matter of finding the associated VG. The command above doesn't list a PV
associated with the VG from step 1, it just complains that the PV cannot be
read.


# pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]' 
/dev/mapper/36001405063455cf7cd74c20bc06e9304
  /dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at 
offset 2198927383040
  Couldn't read volume group metadata from 
/dev/mapper/36001405063455cf7cd74c20bc06e9304.
  Metadata location on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at 
2198927383040 has invalid summary for VG.
  Failed to read metadata summary from 
/dev/mapper/36001405063455cf7cd74c20bc06e9304

  Failed to scan VG from /dev/mapper/36001405063455cf7cd74c20bc06e9304
  No physical volume label read from 
/dev/mapper/36001405063455cf7cd74c20bc06e9304.


So the associated PV ID is: 36001405063455cf7cd74c20bc06e9304


3) Create a new PV on a different partition or disk (/dev/sdX) using
the UUID found in the previous step and the restorefile option:

#pvcreate --uuid <pv_uuid> --restorefile <backup_file> /dev/sdX




I have a question here. As I understand it, pvcreate will restore the correct
metadata on the new device. Then how do you restore that metadata on the
broken storage domain, so other hosts can see the right information as well?
Or is this just a step to recover data on the new device and then reattach
the disks on the affected VMs?


Thanks so much.


4) Try to display the VG:

# vgdisplay Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

-Roman

On Thu, Sep 16, 2021 at 1:47 PM  wrote:


I can also see...

kvmr03:~# lvs | grep 927f423a-6689-4ddb-8fda-b3375c3bbca3
/dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
offset 2198927383040
Couldn't read volume group metadata from
/dev/mapper/36001405063455cf7cd74c20bc06e9304.
Metadata location on
/dev/mapper/36001405063455cf7cd74c20bc06e9304 at
2198927383040 has invalid summary for VG.
Failed to read metadata summary from
/dev/mapper/36001405063455cf7cd74c20bc06e9304
Failed to scan VG from
/dev/mapper/36001405063455cf7cd74c20bc06e9304

Seems to me like metadata from that VG has been corrupted. Is there
a
way to recover?

On 2021-09-16 11:19, nico...@devels.es wrote:

The most relevant log snippet I have found is the following. I assume it
cannot scan the Storage Domain, but I'm unsure why, as the storage domain
backend is up and running.

2021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM]
Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {
preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
write_cache_state=0  disable_after_error_count=3
filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff00064835b628d30610500$|^/dev/mapper/3600c0ff00064835b638d30610100$|^/dev/mapper/3600c0ff00064835b638d30610300$|^/dev/map