Thanks, I've started upgrading the nodes.

After running oVirt (manual install) for 5+ years without problems (just
a few upgrades), I bought myself two big hypervisors and started with
image-based installs.
4.4.0 installed fine, but then came the snapshot problems ...
The upgrade was hell due to dependency failures; there were duplicate
rpms/repos.
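
For reference, a minimal sketch of how one might list the duplicate rpms
(assuming dnf's 'repoquery --duplicates' is available on the node, as it
is on stock EL8):

    # hypothetical helper, not an oVirt tool: list duplicate installed rpms
    import subprocess

    result = subprocess.run(
        ["dnf", "repoquery", "--duplicates"],
        capture_output=True, text=True,
    )
    print(result.stdout)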

I started with 4.4.1-2020071311 .. the known python2 problem ... damn.
Then 4.4.1-2020070811 .. boot problems after install. Installing from a
running 4.4.0 hypervisor with an upgraded engine failed time after time ..
repo errors/python errors.

My old manual install always worked without problems, so why these problems
with the image-based one?

Well .. I'm going to reinstall/upgrade to 4.4.1 .. hoping to recover my
VMs.

On Thu, 23 Jul 2020 at 09:57, Benny Zlotnik <bzlot...@redhat.com> wrote:

> It was fixed [1]; you need to upgrade to libvirt 6+ and qemu 4.2+.
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939
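>
> A minimal sketch for checking whether a node already has those versions
> (assuming the rpm CLI and the usual EL8 package names):
>
>     # package names assumed for EL8; adjust for your distribution
>     import subprocess
>
>     for pkg in ("libvirt-daemon", "qemu-kvm"):
>         result = subprocess.run(["rpm", "-q", pkg],
>                                 capture_output=True, text=True)
>         print(result.stdout.strip())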
>
>
> On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot <f...@fash.nu> wrote:
>
>>
>> Hi all,
>>
>> I've got a two-node setup, image-based installs.
>> When doing OVA exports or generic snapshots, things seem in order, but
>> removing snapshots shows the warning 'disk in illegal state'.
>>
>> Mouse hover shows .. please do not shut down before successfully removing
>> the snapshot.
>>
>>
>> ovirt-engine log:
>> 2020-07-22 16:40:37,549+02 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
>> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM node2.lab command MergeVDS failed:
>> Merge failed
>> 2020-07-22 16:40:37,549+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command 'MergeVDSCommand(HostName =
>> node2.lab,
>> MergeVDSCommandParameters:{hostId='02df5213-1243-4671-a1c6-6489d7146319',
>> vmId='64c25543-bef7-4fdd-8204-6507046f5a34',
>> storagePoolId='5a4ea80c-b3b2-11ea-a890-00163e3cb866',
>> storageDomainId='9a12f1b2-5378-46cc-964d-3575695e823f',
>> imageGroupId='3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8',
>> imageId='c757e740-9013-4ae0-901d-316932f4af0e',
>> baseImageId='ebe50730-dec3-4f29-8a38-9ae7c59f2aef',
>> topImageId='c757e740-9013-4ae0-901d-316932f4af0e', bandwidth='0'})'
>> execution failed: VDSGenericException: VDSErrorException: Failed to
>> MergeVDS, error = Merge failed, code = 52
>> 2020-07-22 16:40:37,549+02 ERROR [org.ovirt.engine.core.bll.MergeCommand]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Engine exception thrown while
>> sending merge command: org.ovirt.engine.core.common.errors.EngineException:
>> EngineException:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
>> failed, code = 52 (Failed with error mergeErr and code 52)
>> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
>> failed, code = 52
>>       <driver name='qemu' error_policy='report'/>
>>       <driver name='qemu' type='qcow2' cache='none' error_policy='stop'
>> io='threads'/>
>> 2020-07-22 16:40:39,659+02 ERROR
>> [org.ovirt.engine.core.bll.MergeStatusCommand]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-3)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Failed to live merge. Top volume
>> c757e740-9013-4ae0-901d-316932f4af0e is still in qemu chain
>> [ebe50730-dec3-4f29-8a38-9ae7c59f2aef, c757e740-9013-4ae0-901d-316932f4af0e]
>> 2020-07-22 16:40:41,524+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command id:
>> 'e0b2bce7-afe0-4955-ae46-38bcb8719852 failed child command status for step
>> 'MERGE_STATUS'
>> 2020-07-22 16:40:42,597+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Merging of snapshot
>> 'ef8f7e06-e48c-4a8c-983c-64e3d4ebfcf9' images
>> 'ebe50730-dec3-4f29-8a38-9ae7c59f2aef'..'c757e740-9013-4ae0-901d-316932f4af0e'
>> failed. Images have been marked illegal and can no longer be previewed or
>> reverted to. Please retry Live Merge on the snapshot to complete the
>> operation.
>> 2020-07-22 16:40:42,603+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
>> with failure.
>> 2020-07-22 16:40:43,679+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
>> 2020-07-22 16:40:43,774+02 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
>> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot
>> 'Auto-generated for Export To OVA' for VM 'Adhoc'.
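>>
>> Since the log says to retry Live Merge, a minimal sketch of retrying the
>> snapshot removal through the Python SDK (ovirtsdk4); the engine URL and
>> credentials are placeholders:
>>
>>     import ovirtsdk4 as sdk
>>
>>     # placeholder connection details: point these at your engine
>>     connection = sdk.Connection(
>>         url='https://engine.example/ovirt-engine/api',
>>         username='admin@internal',
>>         password='secret',
>>         ca_file='ca.pem',
>>     )
>>     vms_service = connection.system_service().vms_service()
>>     vm = vms_service.list(search='name=Adhoc')[0]
>>     snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
>>     for snap in snapshots_service.list():
>>         if snap.description == 'Auto-generated for Export To OVA':
>>             # retry removal (live merge) of the snapshot that failed
>>             snapshots_service.snapshot_service(snap.id).remove()
>>     connection.close()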
>>
>>
>> VDSM log on the hypervisor:
>> 2020-07-22 14:14:30,220+0200 ERROR (jsonrpc/5) [virt.vm]
>> (vmId='14283e6d-c3f0-4011-b90f-a1272f0fbc10') Live merge failed (job:
>> e59c54d9-b8d3-44d0-9147-9dd40dff57b9) (vm:5381)
>>     if ret == -1: raise libvirtError ('virDomainBlockCommit() failed',
>> dom=self)
>> libvirt.libvirtError: internal error: qemu block name 'json:{"backing":
>> {"driver": "qcow2", "file": {"driver": "file", "filename":
>> "/rhev/data-center/mnt/10.12.0.9:_exports_data/9a12f1b2-5378-46cc-964d-3575695e823f/images/3206de41-ccdc-4f2d-a968-5e4da6c2ca3e/bb3aed4b-fc41-456a-9c18-1409a9aa6d14"}},
>> "driver": "qcow2", "file": {"driver": "file", "filename":
>> "/rhev/data-center/mnt/10.12.0.9:_exports_data/9a12f1b2-5378-46cc-964d-3575695e823f/images/3206de41-ccdc-4f2d-a968-5e4da6c2ca3e/3995b256-2afb-4853-9360-33d0c12e5fd1"}}'
>> doesn't match expected '/rhev/data-center/mnt/10.12.0.9:
>> _exports_data/9a12f1b2-5378-46cc-964d-3575695e823f/images/3206de41-ccdc-4f2d-a968-5e4da6c2ca3e/3995b256-2afb-4853-9360-33d0c12e5fd1'
>> 2020-07-22 14:14:30,234+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer]
>> RPC call VM.merge failed (error 52) in 0.17 seconds (__init__:312)
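>>
>> A sketch for inspecting the backing chain libvirt complained about, using
>> qemu-img (the path is the one from the error above; -U/--force-share lets
>> it read a disk that is still in use):
>>
>>     import subprocess
>>
>>     path = ("/rhev/data-center/mnt/10.12.0.9:_exports_data/"
>>             "9a12f1b2-5378-46cc-964d-3575695e823f/images/"
>>             "3206de41-ccdc-4f2d-a968-5e4da6c2ca3e/"
>>             "3995b256-2afb-4853-9360-33d0c12e5fd1")
>>     # read-only inspection of the qcow2 backing chain
>>     subprocess.run(["qemu-img", "info", "-U", "--backing-chain", path],
>>                    check=True)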
>>
>> 2020-07-22 14:17:28,798+0200 INFO  (jsonrpc/2) [api] FINISH getStats
>> error=Virtual machine does not exist: {'vmId':
>> '698d486c-edbf-4e28-a199-31a2e27bd808'} (api:129)
>> 2020-07-22 14:17:28,798+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer]
>> RPC call VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
>>
>> Also in the log:
>> INFO  (jsonrpc/1) [api.virt] FINISH getStats return={'status': {'code':
>> 1, 'message': "Virtual machine does not exist:
>> But the VM is there and accessible.
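>>
>> A quick cross-check of what VDSM itself reports (a sketch using the
>> vdsm-client CLI that ships with VDSM; the vmID is the one from the
>> getStats error above):
>>
>>     import subprocess
>>
>>     vm_id = "698d486c-edbf-4e28-a199-31a2e27bd808"
>>     # list the VMs this host knows about, then query the one in question
>>     subprocess.run(["vdsm-client", "Host", "getVMList"], check=False)
>>     subprocess.run(["vdsm-client", "VM", "getStats", f"vmID={vm_id}"],
>>                    check=False)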
>>
>> Any advice here?
>> Henri
>>
>>
>> oVirt 4.4.0.3-1.el8
>>
>> OS Version:
>> RHEL - 8 - 1.1911.0.9.el8
>> OS Description:
>> oVirt Node 4.4.0
>> Kernel Version:
>> 4.18.0 - 147.8.1.el8_1.x86_64
>> KVM Version:
>> 4.1.0 - 23.el8.1
>> LIBVIRT Version:
>> libvirt-5.6.0-10.el8
>> VDSM Version:
>> vdsm-4.40.16-1.el8
>> SPICE Version:
>> 0.14.2 - 1.el8
>> GlusterFS Version:
>> glusterfs-7.5-1.el8
>> CEPH Version:
>> librbd1-12.2.7-9.el8
>> Open vSwitch Version:
>> openvswitch-2.11.1-5.el8
>> Nmstate Version:
>> nmstate-0.2.10-1.el8
>>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K35KGPSFEPRJ4OW3KZRXPJ3NPSRT5GJE/
