[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-20 Thread Roman Bednar
Did you update the packages as suggested by Nir? If so, and it still does
not work, maybe try the pvck recovery that Nir described too.
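
Just in case those earlier mails are hard to dig up: with a recent enough lvm2
(2.03+), pvck can also dump the on-disk metadata text for inspection before any
repair is attempted. A rough example, using the device from your logs (the
output path is arbitrary):

# pvck --dump metadata /dev/mapper/360014057b367e3a53b44ab392ae0f25f > /tmp/vg_metadata_dump.txt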

If that still does not work, consider filing a bug against lvm and attaching
the output of the failing command(s) with verbose output enabled in the
description or as an attachment. Perhaps there is a better way or a known
workaround.
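
For the report, capturing one failing command with full verbosity is usually
enough; lvm writes the debug output to stderr, so something along these lines
(using the device from your logs) should do:

# pvs -o pv_name,pv_uuid -vvvv --config='devices/filter = ["a|.*|"]' \
  /dev/mapper/36001405063455cf7cd74c20bc06e9304 2> /tmp/pvs-vvvv.log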


-Roman

On Mon, Sep 20, 2021 at 2:22 PM  wrote:

> So, I've made several attempts to restore the metadata.
>
> In my last e-mail I said in step 2 that the PV ID is:
> 36001405063455cf7cd74c20bc06e9304, which is incorrect.
>
> I'm trying to find out the PV UUID by running "pvs -o pv_name,pv_uuid
> --config='devices/filter = ["a|.*|"]'
> /dev/mapper/36001405063455cf7cd74c20bc06e9304". However, it shows no PV
> UUID. All I get from the command output is:
>
> # pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
> /dev/mapper/36001405063455cf7cd74c20bc06e9304
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to find device "/dev/mapper/36001405063455cf7cd74c20bc06e9304".
>
> I tried running a bare "vgcfgrestore
> 219fa16f-13c9-44e4-a07d-a40c0a7fe206" command, which returned:
>
> # vgcfgrestore 219fa16f-13c9-44e4-a07d-a40c0a7fe206
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Couldn't find device with uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
>Cannot restore Volume Group 219fa16f-13c9-44e4-a07d-a40c0a7fe206 with
> 1 PVs marked as missing.
>Restore failed.
>
> It seems that the PV is missing; however, I assume the PV UUID (from the
> output above) is Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
>
> So I tried running:
>
> # pvcreate --uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb --restore
> /etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00200-1084769199.vg
> /dev/sdb1
>Couldn't find device with uuid Q3xkre-25cg-L3Do-aeMD-iLem-wOHh-fb8fzb.
>/dev/mapper/360014057b367e3a53b44ab392ae0f25f: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f.
>Metadata location on /dev/mapper/360014057b367e3a53b44ab392ae0f25f at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Failed to scan VG from /dev/mapper/360014057b367e3a53b44ab392ae0f25f
>Device /dev/sdb1 excluded by a filter.
>
> Either the PV UUID is not the one I specified, or the system can't find
> it (or both).
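>
> (For what it's worth, the UUID that the VG metadata expects for each PV can
> also be read straight from the archive file itself, e.g.:
>
> # grep -A3 'pv0 {' /etc/lvm/archive/219fa16f-13c9-44e4-a07d-a40c0a7fe206_00200-1084769199.vg
>
> which should print the id = "..." and device = "..." lines for that PV.)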
>
> El 2021-09-20 09:21, nico...@devels.es escribió:
> > Hi Roman and Nir,
> >
> > El 2021-09-16 13:42, Roman Bednar escribió:
> >> Hi Nicolas,
> >>
> >> You can try to recover VG metadata from a backup or archive which lvm
> >> automatically creates by default.
> >>
> >> 1) To list all available backups for given VG:
> >>
> >> #vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
> >>
> >> Select the latest one which sounds right, something with a description
> >> along the lines of "Created *before* lvremove".
> >> You might want to select something older than the latest as lvm does a
> >> backup also *after* running some command.
> >>
> >
> > You were right. There actually *are* LV backups, I was specifying an
> > incorrect ID.
> >
> > So the correct command would re

[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread Roman Bednar
Make sure the VG name is correct; vgcfgrestore won't complain if the name is wrong.

You can also check whether backups are enabled on the hosts, just to be sure:

# lvmconfig --typeconfig current | egrep "backup|archive"
backup {
backup=1
backup_dir="/etc/lvm/backup"
archive=1
archive_dir="/etc/lvm/archive"


If the backups are not available I'm afraid there's not much you can do at
this point.
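
A quick way to confirm whether anything was ever archived on a given host is to
simply list those directories, for example:

# ls -lt /etc/lvm/archive/ | head
# ls -lt /etc/lvm/backup/ | head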

On Thu, Sep 16, 2021 at 2:56 PM  wrote:

> Hi Roman,
>
> Unfortunately, step 1 returns nothing:
>
> kvmr03:~# vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
>No archives found in /etc/lvm/archive
>
> I tried several hosts and none of them has a copy.
>
> Any other way to get a backup of the VG?
>
> El 2021-09-16 13:42, Roman Bednar escribió:
> > Hi Nicolas,
> >
> > You can try to recover VG metadata from a backup or archive which lvm
> > automatically creates by default.
> >
> > 1) To list all available backups for given VG:
> >
> > #vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
> >
> > Select the latest one which sounds right, something with a description
> > along the lines of "Created *before* lvremove".
> > You might want to select something older than the latest as lvm does a
> > backup also *after* running some command.
> >
> > 2) Find UUID of your broken PV (filter might not be needed, depends on
> > your local conf):
> >
> > #pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
> > /dev/mapper/36001405063455cf7cd74c20bc06e9304
> >
> > 3) Create a new PV on a different partition or disk (/dev/sdX) using
> > the UUID found in previous step and restorefile option:
> >
> > #pvcreate --uuid <pv_uuid> --restorefile <archive_file> /dev/sdX
> >
> > 4) Try to display the VG:
> >
> > # vgdisplay Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
> >
> > -Roman
> >
> > On Thu, Sep 16, 2021 at 1:47 PM  wrote:
> >
> >> I can also see...
> >>
> >> kvmr03:~# lvs | grep 927f423a-6689-4ddb-8fda-b3375c3bbca3
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
> >> offset 2198927383040
> >> Couldn't read volume group metadata from
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304.
> >> Metadata location on
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304 at
> >> 2198927383040 has invalid summary for VG.
> >> Failed to read metadata summary from
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304
> >> Failed to scan VG from
> >> /dev/mapper/36001405063455cf7cd74c20bc06e9304
> >>
> >> Seems to me like metadata from that VG has been corrupted. Is there
> >> a
> >> way to recover?
> >>
> >> El 2021-09-16 11:19, nico...@devels.es escribió:
> >>> The most relevant log snippet I have found is the following. I
> >> assume
> >>> it cannot scan the Storage Domain, but I'm unsure why, as the
> >> storage
> >>> domain backend is up and running.
> >>>
> >>> 2021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM]
> >>> Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {
> >>> preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
> >>> write_cache_state=0  disable_after_error_count=3
> >>>
> >>
> >
>
> filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff000648
> >>>
> >>
> >
>
> 35b628d30610500$|^/dev/mapper/3600c0ff00064835b638d30610100$|^/dev/mapper/3600c0ff00064835b638d30610300$|^/dev/mapper/3600c0ff00064835b638d30610500$|^/dev/mapper/3600c0ff00064835b638d30610700$|^/dev/mapper/3600c0ff00064835b638d30610900$|^/dev/mapper

[ovirt-users] Re: Cannot activate a Storage Domain after an oVirt crash

2021-09-16 Thread Roman Bednar
Hi Nicolas,

You can try to recover VG metadata from a backup or archive which lvm
automatically creates by default.

1) To list all available backups for given VG:

#vgcfgrestore --list Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

Select the latest one that sounds right, something with a description
along the lines of "Created *before* lvremove".
You might want to select something older than the latest, since lvm also
creates a backup *after* running a command.


2) Find UUID of your broken PV (filter might not be needed, depends on your
local conf):

#pvs -o pv_name,pv_uuid --config='devices/filter = ["a|.*|"]'
/dev/mapper/36001405063455cf7cd74c20bc06e9304


3) Create a new PV on a different partition or disk (/dev/sdX) using the
UUID found in previous step and restorefile option:

#pvcreate --uuid <pv_uuid> --restorefile <archive_file> /dev/sdX



4) Try to display the VG:

# vgdisplay Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp
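
If the VG becomes visible again, the metadata itself could then be restored
from the chosen archive file, roughly like this (the file name below is a
placeholder for the archive you selected in step 1):

# vgcfgrestore -f <archive_file> Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp

and then re-check with vgdisplay before letting the engine re-scan the domain.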



-Roman

On Thu, Sep 16, 2021 at 1:47 PM  wrote:

> I can also see...
>
> kvmr03:~# lvs | grep 927f423a-6689-4ddb-8fda-b3375c3bbca3
>/dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
> offset 2198927383040
>Couldn't read volume group metadata from
> /dev/mapper/36001405063455cf7cd74c20bc06e9304.
>Metadata location on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at
> 2198927383040 has invalid summary for VG.
>Failed to read metadata summary from
> /dev/mapper/36001405063455cf7cd74c20bc06e9304
>Failed to scan VG from /dev/mapper/36001405063455cf7cd74c20bc06e9304
>
>
> Seems to me like metadata from that VG has been corrupted. Is there a
> way to recover?
>
> El 2021-09-16 11:19, nico...@devels.es escribió:
> > The most relevant log snippet I have found is the following. I assume
> > it cannot scan the Storage Domain, but I'm unsure why, as the storage
> > domain backend is up and running.
> >
> > 2021-09-16 11:16:58,884+0100 WARN  (monitor/219fa16) [storage.LVM]
> > Command ['/usr/sbin/lvm', 'vgs', '--config', 'devices {
> > preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
> > write_cache_state=0  disable_after_error_count=3
> >
>
> filter=["a|^/dev/mapper/36001405063455cf7cd74c20bc06e9304$|^/dev/mapper/360014056481868b09dd4d05bee5b4185$|^/dev/mapper/360014057d9d4bc57df046888b8d8b6eb$|^/dev/mapper/360014057e612d2079b649d5b539e5f6a$|^/dev/mapper/360014059b49883b502a4fa9b81add3e4$|^/dev/mapper/36001405acece27e83b547e3a873b19e2$|^/dev/mapper/36001405dc03f6be1b8c42219e8912fbd$|^/dev/mapper/36001405f3ab584afde347d3a8855baf0$|^/dev/mapper/3600c0ff00052a0fe013ec65f0100$|^/dev/mapper/3600c0ff00052a0fe033ec65f0100$|^/dev/mapper/3600c0ff00052a0fe1b40c65f0100$|^/dev/mapper/3600c0ff00052a0fe2294c75f0100$|^/dev/mapper/3600c0ff00052a0fe2394c75f0100$|^/dev/mapper/3600c0ff00052a0fe2494c75f0100$|^/dev/mapper/3600c0ff00052a0fe2594c75f0100$|^/dev/mapper/3600c0ff00052a0fe2694c75f0100$|^/dev/mapper/3600c0ff00052a0fee293c75f0100$|^/dev/mapper/3600c0ff00052a0fee493c75f0100$|^/dev/mapper/3600c0ff00064835b628d30610100$|^/dev/mapper/3600c0ff00064835b628d30610300$|^/dev/mapper/3600c0ff000648
> >
>
> 35b628d30610500$|^/dev/mapper/3600c0ff00064835b638d30610100$|^/dev/mapper/3600c0ff00064835b638d30610300$|^/dev/mapper/3600c0ff00064835b638d30610500$|^/dev/mapper/3600c0ff00064835b638d30610700$|^/dev/mapper/3600c0ff00064835b638d30610900$|^/dev/mapper/3600c0ff00064835b638d30610b00$|^/dev/mapper/3600c0ff00064835cb98f30610100$|^/dev/mapper/3600c0ff00064835cb98f30610300$|^/dev/mapper/3600c0ff00064835cb98f30610500$|^/dev/mapper/3600c0ff00064835cb98f30610700$|^/dev/mapper/3600c0ff00064835cb98f30610900$|^/dev/mapper/3600c0ff00064835cba8f30610100$|^/dev/mapper/3600c0ff00064835cba8f30610300$|^/dev/mapper/3600c0ff00064835cba8f30610500$|^/dev/mapper/3600c0ff00064835cba8f30610700$|^/dev/mapper/3634b35410019574796dcb0e30007$|^/dev/mapper/3634b35410019574796dcdffc0008$|^/dev/mapper/3634b354100195747999c2dc50003$|^/dev/mapper/3634b354100195747999c3c4a0004$|^/dev/mapper/3634b3541001957479c2b9c640001$|^/dev/mapper/3634
> > b3541001957479c2baba50002$|", "r|.*|"] } global {  locking_type=4
> > prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup {
> > retain_min=50  retain_days=0 }', '--noheadings', '--units', 'b',
> > '--nosuffix', '--separator', '|', '--ignoreskippedcluster', '-o',
> >
> 'uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name',
> > '--select', 'vg_name = 219fa16f-13c9-44e4-a07d-a40c0a7fe206']
> > succeeded with warnings: ['
> > /dev/mapper/36001405063455cf7cd74c20bc06e9304: Checksum error at
> > offset 2198927383040', "  Couldn't read volume group metadata from
> > /dev/mapper/36001405063455cf7cd74c20bc06e9304.", '  Metadata location
> > on /dev/mapper/36001405063455cf7cd74c20bc06e9304 at 2198927383040 has
> > invalid summary for VG.', '  Failed to read metadata summary from
> > 

[ovirt-users] Re: Upgrading Node that use Local Storage Domain

2021-09-07 Thread Roman Bednar
Hello,

it's always better to share logs showing the error message; that way we can
see exactly what went wrong. But in general there is a guide [1] on
upgrading from 4.3 when using local storage.

If that's not the case there's also a chance your local storage domain is
not placed correctly [2].

If neither of those two help you resolve the issue please share the logs
with us and we'll look further.


-Roman


[1]
https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_hypervisor_preserving_local_storage_4-3_local_db
[2]
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/CPG7H7J7C44TN3DSDAJGHMEBEPFXH6HU/

On Tue, Sep 7, 2021 at 6:22 AM Nur Imam Febrianto 
wrote:

> Hi,
>
>
>
> Currently we have several nodes (using oVirt Node) that are configured with a
> Local Storage Domain. How do I appropriately update the nodes? Running yum
> update always fails while installing the ovirt image update (scriptlet
> failed). Is there any specific step to update a node that uses local
> storage?
>
> Thanks before.
>
>
>
> Regards,
>
>
>
> Nur Imam Febrianto
>
>
>
> Sent from Mail  for
> Windows
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RPSGOMH3N2PCMGYFRE4RPEFML5JOUTP7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WRX3IVIXTH7V3ZOVNMYRNMOCPRTRTOYO/


[ovirt-users] Re: Correct way to install VDSM hooks

2021-08-26 Thread Roman Bednar
Hello Shantur,

it seems your yum repos might just be misconfigured. The easiest way to
configure them is to install one of the release rpms provided:

$ sudo yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm

See this link for other releases (rpms) if master is not the right
choice for you:
http://resources.ovirt.org/pub/yum-repo/

This rpm provides yum repo configs you need for installing vdsm hooks:

[root@host-vm yum.repos.d]# rpm -ql ovirt-release-master-4.4.8-0.0.master.20210825011136.git59df936.el8.noarch
/etc/yum.repos.d/ovirt-master-dependencies.repo
/etc/yum.repos.d/ovirt-master-snapshot.repo
/usr/share/ovirt-release-master
/usr/share/ovirt-release-master/node-optional.repo
/usr/share/ovirt-release-master/ovirt-el8-ppc64le-deps.repo
/usr/share/ovirt-release-master/ovirt-el8-stream-ppc64le-deps.repo
/usr/share/ovirt-release-master/ovirt-el8-stream-x86_64-deps.repo
/usr/share/ovirt-release-master/ovirt-el8-x86_64-deps.repo
/usr/share/ovirt-release-master/ovirt-el9-stream-x86_64-deps.repo
/usr/share/ovirt-release-master/ovirt-snapshot.repo
/usr/share/ovirt-release-master/ovirt-tested.repo

[root@host-vm yum.repos.d]# cat /etc/yum.repos.d/ovirt-master-snapshot.repo
[ovirt-master-snapshot]
name=Latest oVirt master nightly snapshot
#baseurl=https://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el$releasever/
mirrorlist=https://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-master-snapshot-el$releasever
enabled=1
gpgcheck=0
countme=1
fastestmirror=1

[ovirt-master-snapshot-static]
name=Latest oVirt master additional nightly snapshot
#baseurl=https://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el$releasever/
mirrorlist=https://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-master-snapshot-static-el$releasever
enabled=1
gpgcheck=0
countme=1
fastestmirror=1


Now the hook installation should work:

[root@host-vm yum.repos.d]# yum repolist ovirt-master-snapshot -v
Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, repoclosure, repodiff, repograph, repomanage, reposync, uploadprofile, vdsmupgrade
YUM version: 4.7.0
cachedir: /var/cache/dnf
Last metadata expiration check: 0:04:54 ago on Wed 25 Aug 2021 05:06:23 PM CEST.
Repo-id            : ovirt-master-snapshot
Repo-name          : Latest oVirt master nightly snapshot
Repo-status        : enabled
Repo-revision      : 1629947320
Repo-updated       : Thu 26 Aug 2021 05:08:40 AM CEST
Repo-pkgs          : 256
Repo-available-pkgs: 256
Repo-size          : 12 G
Repo-mirrors       : https://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-master-snapshot-el8
Repo-baseurl       : https://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el8/ (14 more)
Repo-expire        : 172,800 second(s) (last: Wed 25 Aug 2021 05:06:20 PM CEST)
Repo-filename      : /etc/yum.repos.d/ovirt-master-snapshot.repo
Total packages: 256

[root@host-vm yum.repos.d]# yum --disablerepo=* --enablerepo=ovirt-master-snapshot install vdsm-hook-scratchpad.noarch
Last metadata expiration check: 0:05:19 ago on Wed 25 Aug 2021 05:06:20 PM CEST.
Dependencies resolved.
================================================================================
 Package               Architecture  Version        Repository              Size
================================================================================
Installing:
 vdsm-hook-scratchpad  noarch        4.40.90-1.el8  ovirt-master-snapshot  9.0 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 9.0 k
Installed size: 4.9 k
Is this ok [y/N]:


Let me know if you need further assistance and have a great day.

On Wed, Aug 25, 2021 at 2:55 PM Shantur Rathore 
wrote:

> Hi all,
>
> just bumping if anyone missed this
>
> Thanks
>
> On Tue, Aug 24, 2021 at 9:29 AM Shantur Rathore 
> wrote:
>
>> Hi all,
>>
>> I am trying to install vdsm hooks (scratchpad) specifically.
>> I can see that there are rpms available in
>> https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ but 

[ovirt-users] Fwd: Can't remove snapshot

2021-06-03 Thread Roman Bednar
Ok, sounds good. Forgot to include the mailing list, doing it now.


-- Forwarded message -
From: David Johnson 
Date: Thu, Jun 3, 2021 at 11:17 AM
Subject: Re: [ovirt-users] Can't remove snapshot
To: Roman Bednar 


Thanks, I'll check it out.

Since my business is replatforming and transforming databases, digging
around the DB is something I will be very comfortable with.

I won't be able to do anything until Friday. I'll let you know how it goes.

David Johnson

On Thu, Jun 3, 2021, 2:40 AM Roman Bednar  wrote:

> Digging a bit further I found this is a known issue. A discrepancy can
> occur between vdsm and engine db when removing a snapshot.
>
> A bug has already been filed for this [1] and it has been discussed on the
> list [2]. In the discussion you can find a workaround, which is manual removal
> of the snapshot.
>
> Don't forget to back up the engine database by running the 'engine-backup'
> tool on the engine node before making any changes.
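>
> Taking the backup itself is a one-liner, something like (file and log paths
> are just examples):
>
> # engine-backup --mode=backup --scope=all --file=/var/lib/ovirt-engine-backup/pre-snapshot-fix.backup --log=/var/log/ovirt-engine-backup/pre-snapshot-fix.log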
> Restore requires a bit more options and can be done like this:
>
> # engine-backup
> --file=/var/lib/ovirt-engine-backup/ovirt-engine-backup-20210602055605.backup
> --mode=restore --provision-all-databases
>
> To check whether the discrepancy occurred, you can inspect the db and compare
> it to what vdsm sees (which is the source of truth).
>
> The example below shows a consistent setup from my env with one snapshot; if
> there is anything extra in the db in your env, it should be removed and the
> parent id changed accordingly [3].
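>
> As a very rough sketch only (the exact steps are in the linked thread [3], the
> IDs below are placeholders, and the engine should be stopped and backed up
> first), the cleanup boils down to deleting the leftover row and re-pointing
> its child at the correct parent:
>
> engine=# DELETE FROM images WHERE image_guid = '<leftover_image_guid>';
> engine=# UPDATE images SET parentid = '<correct_parent_guid>' WHERE image_guid = '<child_image_guid>';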
>
> image_group_id (db) == image (vdsm)
> image_guid (db) == logical volume on host (vdsm)
>
> Engine node:
>
> # su - postgres
> # psql
> postgres=# \c engine
> engine=# select image_guid, image_group_id, parentid from images where
> image_group_id = 'e75318bf-c563-4d66-99e4-63645736a418';
>               image_guid             |            image_group_id            |               parentid
> --------------------------------------+--------------------------------------+--------------------------------------
>  1955f6de-658a-43c3-969b-79db9b4bf14c | e75318bf-c563-4d66-99e4-63645736a418 | 00000000-0000-0000-0000-000000000000
>  d6662661-eb87-4c01-a204-477919e65221 | e75318bf-c563-4d66-99e4-63645736a418 | 1955f6de-658a-43c3-969b-79db9b4bf14c
>
>
> Host node:
>
> # vdsm-tool dump-volume-chains <storage_domain_uuid>
>
> Images volume chains (base volume first)
>
>image:e75318bf-c563-4d66-99e4-63645736a418
>
>  - 1955f6de-658a-43c3-969b-79db9b4bf14c
>status: OK, voltype: INTERNAL, format: RAW, legality:
> LEGAL, type: PREALLOCATED, capacity: 5368709120, truesize: 5368709120
>
>  - d6662661-eb87-4c01-a204-477919e65221
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE, capacity: 5368709120, truesize: 3221225472
> ...
>
>
> I hope this helps a bit; if you need further assistance let us know. It's not
> very convenient to change the db manually like this, but a fix should be on
> the way :)
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1948599
> [2]
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/7ZU7NWHBW3B2NBPQPNRVAAU7CVJ5PEKG/
> [3]
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/D2HKS2RFMNKGP54JVA3D5MVUYKKQVZII/
>
> On Tue, Jun 1, 2021 at 8:19 PM David Johnson 
> wrote:
>
>> Yes, I have the same error on the second try.
>>
>> You can see it happening in the engine log starting at 2021-05-31 07:49.
>>
>>
>> *David Johnson*
>> *Director of Development, Maxis Technology*
>> 844.696.2947 ext 702 (o) | 479.531.3590 (c)
>> <https://www.linkedin.com/in/pojoguy/>
>> <https://maxistechnology.com/wp-content/uploads/vcards/vcard-David_Johnson.vcf>
>> <https://maxistechnology.com/>
>>
>> *Follow us:*  <https://www.linkedin.com/company/maxis-tech-inc/>
>>
>>
>> On Tue, Jun 1, 2021 at 7:48 AM Roman Bednar  wrote:
>>
>>> Hi David,
>>>
>>> awesome, thanks for the reply. Looking at the logs there does not seem to be
>>> anything suspicious on the vdsm side, and as you said the snapshots are really
>>> gone when looking from vdsm. I tried to reproduce without much success, but
>>> it looks like a problem on the engine side.
>>>
>>> Did you get the same error saying that the disks are illegal on the
>>> second try? There should be more in the engine log so try checking it as
>>> well to see if this is really on the engine side.
>>>
>>> It would be great to have a reproducer for this and file the bug so we
>>> can track this and provide a fix.
>>>
>>>
>>> -Roman
>>>
>>>
>>>

[ovirt-users] Re: How to use Ovirt node 4.4.5.0 with local storage

2021-05-04 Thread Roman Bednar
Hello,

welcome to ovirt, I hope you'll have a wonderful experience :)

Since you mentioned having (one) server I suppose you're deploying a
self-hosted ovirt engine. I haven't tried deploying everything on one node
but I believe this should be possible although not recommended, certainly
not for a production environment. The minimum is two nodes for a
self-hosted engine [1].

As for storage, according to the documentation [1] it is possible to use
local storage when it is configured as a service accessible to all hosts. I
suppose having, for example, NFS on one of the nodes (backed by its local
storage) and creating an NFS storage domain on top would be sufficient. Shared
storage is required because it stores data associated with VMs, which can
migrate to different hosts. Also consider the storage space requirements [2]:
host ~55G, engine ~5G, plus your storage domain on the same node.
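
As a rough sketch of that NFS idea (paths are just examples), exposing a local
directory for use as a storage domain could look like this; the 36:36 ownership
matches the vdsm user and kvm group that oVirt expects on NFS exports:

# mkdir -p /data/ovirt-storage
# chown 36:36 /data/ovirt-storage
# chmod 0755 /data/ovirt-storage
# echo '/data/ovirt-storage *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
# systemctl enable --now nfs-server
# exportfs -rav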

It is also recommended to create more than one storage domain, so that you can
recover if a storage domain gets corrupted. If only one storage domain is
configured, recovery is not possible [3].

Hope this helps a bit, I'm kind of new to ovirt too so I expect better
hints to land here as well :) Have a great day!


-Roman


[1]
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/#Self-hosted_Engine_Architecture_SHE_cli_deploy
[2]
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/#Storage_Requirements_SHE_cli_deploy
[3]
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/#Preparing_Storage_for_RHV_SHE_cli_deploy



On Tue, May 4, 2021 at 7:50 AM  wrote:

> Hi, I'm totally new to oVirt.
> I have a production server on which I planned to install oVirt.
> I need to use its local storage for virtualization, but it seems that the
> only options are NAS or other network or fibre storage.
> Can I use the local storage for virtualization? If yes, how can I do that?
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHFAWPV6ONQQBL7LHSNVC4X7VNNYVW7S/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GOK6SMSORKYSWR4ODVX2GACW7ECU33GI/


[ovirt-users] Re: Power failure makes cluster and hosted engine unusable

2021-04-01 Thread Roman Bednar
Hi Thomas,

Thanks for looking into this; the problem is really somewhere around this
tasks file. However, I just tried faking the memory values directly inside
the tasks file to something way higher and everything looks fine. I think
the problem resides in registering the output of "free -m" at the
beginning of that file. There are also debug tasks which print the values
registered from the shell commands, where we could take a closer look and
check whether the output looks normal (stdout mainly).
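
For example, a temporary debug task dropped in right after those register steps
could print the raw values (variable names taken from that tasks file):

  - name: Show raw registered memory values
    debug:
      msg: "free_mem={{ free_mem.stdout }} cached_mem={{ cached_mem.stdout }}"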

This part of the output that Seann provided seems particularly
strange: Available memory ( {'failed': False, 'changed': False,
'ansible_facts': {u'max_mem': u'180746'}}MB )

Normally it should just show the exact value/string; here we're most likely
getting a dictionary from Python. I'd check whether the latest version of
ansible is installed and, if an update is available, see whether this can
still be reproduced after updating.

If the issue persists please provide the full log of the ansible run (ideally
with verbose output enabled).


-Roman

On Wed, Mar 31, 2021 at 9:19 PM Thomas Hoberg  wrote:

> Roman, I believe the bug is in
> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml
>
>   - name: Set Max memory
>     set_fact:
>       max_mem: "{{ free_mem.stdout|int + cached_mem.stdout|int - he_reserved_memory_MB + he_avail_memory_grace_MB }}"
>
>
> If these lines are casting the result of `free -m` into 'int', that seems
> to fail at bigger RAM sizes.
>
> I wound up having to delete all the available memory checks from that file
> to have the wizard progress on a machine with 512GB of RAM.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CARDJXYUPFUFJT2VE2UNXELL2PSUZSPS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WTFXXEZDZ6V6RHBYDSGIBZ7B2DAFQHHC/


[ovirt-users] Re: Power failure makes cluster and hosted engine unusable

2021-03-30 Thread Roman Bednar
Hi Seann,


On Mon, Mar 29, 2021 at 8:31 PM Seann G. Clark via Users 
wrote:

> All,
>
>
>
> After a power failure, and generator failure I lost my cluster, and the
> Hosted engine refused to restart after power was restored. I would expect,
> once storage comes up that the hosted engine comes back online without too
> much of a fight. In practice because the SPM went down as well, there is no
> (clearly documented) way to clear any of the stale locks, and no way to
> recover both the hosted engine and the cluster.
>

Could you provide more details/logs on the storage not coming up? More
information about the current locks would also be great; is there any procedure
you tried for clearing them that did not work?

I have spent the last 12 hours trying to get a functional hosted-engine
> back online, on a new node and each attempt hits a new error, from the
> installer not understanding that 16384mb of dedicated VM memory out of
> 192GB free on the host is indeed bigger than 4096MB, to ansible dying  on
> an error like this “Error while executing action: Cannot add Storage
> Connection. Storage connection already exists.”
>
> The memory error referenced above shows up as:
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Available memory ( {'failed': False, 'changed': False, 'ansible_facts':
> {u'max_mem': u'180746'}}MB ) is less then the minimal requirement (4096MB).
> Be aware that 512MB is reserved for the host and cannot be allocated to the
> engine VM."}
>
> That is what I typically get when I try the steps outlined in the KB
> “CHAPTER 7. RECOVERING A SELF-HOSTED ENGINE FROM AN EXISTING BACKUP” from
> the RH Customer portal. I have tried this numerous ways, and the cluster
> still remains in a bad state, with the hosted engine being 100% inoperable.
>

This could be a bug in the ansible role; did that happen during
"hosted-engine --deploy" or another part of the recovery guide? Please provide
logs here as well; it seems like a completely separate issue though.


>
> What I do have are the two hosts that are part of the cluster and can host
> the engine, and backups of the original hosted engine, both disk and
> engine-backup generated. I am not sure what I can do next to recover this
> cluster; any suggestions would be appreciated.
>
>
>
> Regards,
>
> Seann
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JLDIFTKYDPQ6YK5IGH7RVOXKTTRD6ZBH/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NEBKOCJ452ASTDAEAD6DHP2D5JZLV7ZN/


[ovirt-users] Re: Q: Set host CPU type to kvm64 for a single VM

2021-03-29 Thread Roman Bednar
Hi Andrei,

kvm64 is a legacy CPU type, not recommended for use by the QEMU project [1]. I
suppose that's the reason it's been left out of the list you're looking at.

I'm not sure how this is implemented exactly, but it seems you can type any
custom value into the CPU type field instead of just picking one from the list
provided. It seems to have worked for me:

# virsh -r dumpxml vm1 | xpath -q -e "//domain/cpu/model"
kvm64
# virsh -r list
 Id   Name   State
--------------------------
 1    vm1    running
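
To see which CPU model names libvirt on the host knows about at all (kvm64
should be among them), something like this can be used:

# virsh cpu-models x86_64 | grep -i kvm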


[1]
https://qemu-project.gitlab.io/qemu/system/qemu-cpu-models.html#other-non-recommended-x86-cpus


-Roman

On Mon, Mar 29, 2021 at 10:23 AM Andrei Verovski 
wrote:

> Hi,
>
> OK, thanks, found this option.
>
> But host CPU type “KVM64” is not available here, only Conroe, Penryn,
> Nehalem, Westmere.
>
> Where can I add CPU type "KVM64" to this list?
>
>
> On 29 Mar 2021, at 11:00, Ritesh Chikatwar  wrote:
>
> Hello,
>
> Yes , There is an option but I have not tried.
> Log in to the portal and edit the vm properties.
> Steps:
>
> Select VM -> Edit -> Click System Tab -> then click Advanced Parameter ->
> Custom CPU Type
>
> 
>
> On Mon, Mar 29, 2021 at 12:37 PM Andrei Verovski 
> wrote:
>
>> Hi !
>>
>>
>> Is it possible to set host CPU type to kvm64 for a single VM ?
>>
>>
>> Thanks.
>> Andrei
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MFTVL6WIALN6QL6D6MZMUIGLF3D2R2L6/
>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WNPVLRRAH7ZDJPF56EEKPBASSNJ76CVE/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2NTGFKFHETVYW3V2XSAMYSBIV5XWP46/