[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-28 Thread Henri Aanstoot
Hi,

Ended up reinstalling everything manually, not using the oVirt Node ISOs.
None of the combinations/versions/manual playbook upgrades worked for me.
Installed the way I used to previously, and I am a happy camper again.
Untarred the old OVAs, imported the disks .. up and running again.
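
For anyone recovering the same way: an exported OVA is just a tar archive holding the OVF descriptor plus the disk images, so it can be inspected before re-importing. A minimal sketch in Python (the file name adhoc.ova is only an example):

# List the contents of an exported OVA; it is a plain tar archive
# containing an OVF descriptor plus the VM's disk images.
import tarfile

with tarfile.open('adhoc.ova') as ova:   # hypothetical OVA file name
    for member in ova.getmembers():
        print(member.name, member.size)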

Henri


On Mon, 27 Jul 2020 at 15:45, h aanst  wrote:

> Hi,
>
> Removed everything .. reinstalled with the images
>
> Hitting a known bug:
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed":
> false, "msg": "The Python 2 yum module is needed for this module. If you
> require Python 3 support use the `dnf` Ansible module instead."}
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1858234
>
> clean installs with
> ovirt-node-ng-installer-4.4.1-2020071311.el8.iso
> ovirt-node-ng-installer-4.4.1-2020070811.el8.iso
>
> The 4.4.0 install would not upgrade .. how do I install now?
>
> any advice?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4U7KBPOTT3CQC2PFFT563NSRG5CC6NO5/


[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-27 Thread h aanst
Hi, 

Removed everything .. reinstalled with the images

Hitting a known bug:
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed": false, 
"msg": "The Python 2 yum module is needed for this module. If you require 
Python 3 support use the `dnf` Ansible module instead."}

https://bugzilla.redhat.com/show_bug.cgi?id=1858234
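
The failing task is Ansible's yum module, which needs the Python 2 yum bindings; EL8 hosts only ship Python 3 with the dnf bindings, which is exactly what the error above complains about. A minimal sketch (not part of the installer) to see which bindings a host actually has:

# Show the host's Python version and whether the 'yum' or 'dnf'
# Python bindings are importable; the Ansible yum module fails when
# only the Python 3 'dnf' bindings exist, as on EL8.
import importlib.util
import sys

print("interpreter:", sys.version.split()[0])
for mod in ("yum", "dnf"):
    found = importlib.util.find_spec(mod) is not None
    print(f"python module '{mod}':", "available" if found else "missing")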

clean installs with 
ovirt-node-ng-installer-4.4.1-2020071311.el8.iso
ovirt-node-ng-installer-4.4.1-2020070811.el8.iso

The 4.4.0 install would not upgrade .. how do I install now?

any advice?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JE6BD4EWH2MKNNO2M4NDOZPI47GUTNO5/


[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-24 Thread Henri Aanstoot
Thanks, started upgrading nodes.

After running oVirt (manual install) for 5+ years without problems .. (did
just a few upgrades)
I bought myself 2 big hypervisors and started from image-based installs.
4.4.0 installed, but snapshot problems ...
The upgrade was hell due to dependency failures; there were duplicate
RPMs/repos.

Started with 4.4.1-2020071311 .. python2 known problem ... damn
4.4.1-2020070811 .. boot problems after install; installing from a running 4.4.0
hypervisor with an upgraded engine .. the install failed time after time .. repo
error/python error.

My old manual install worked/installed without problems; why the problems with
image-based installs?

Well .. going to be reinstalling/upgrading to 4.4.1 .. and hoping to
recover my VMs.
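
Before reinstalling, it can help to list which disks the engine has actually flagged as illegal. A minimal sketch with the oVirt Python SDK (ovirtsdk4); the engine URL and credentials below are placeholders:

# Print every disk the engine currently reports in ILLEGAL status.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='secret',                                   # placeholder password
    insecure=True,                                       # skip CA verification in this sketch
)
disks_service = connection.system_service().disks_service()
for disk in disks_service.list():
    if disk.status == types.DiskStatus.ILLEGAL:
        print(disk.id, disk.alias, disk.status)
connection.close()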

On Thu, 23 Jul 2020 at 09:57, Benny Zlotnik  wrote:

> It was fixed [1]; you need to upgrade to libvirt 6+ and qemu 4.2+.
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939
>
>
> On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot  wrote:
>
>> Hi all,
>>
>> I've got a two-node setup, image-based installs.
>> When doing OVA exports or generic snapshots, things seem in order.
>> Removing snapshots shows the warning 'disk in illegal state'.
>>
>> Mouse hover shows .. please do not shut down before successfully removing
>> the snapshot.
>>
>>
>> ovirt-engine log
>> 2020-07-22 16:40:37,549+02 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
>> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM node2.lab command MergeVDS failed:
>> Merge failed
>> 2020-07-22 16:40:37,549+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command 'MergeVDSCommand(HostName =
>> node2.lab,
>> MergeVDSCommandParameters:{hostId='02df5213-1243-4671-a1c6-6489d7146319',
>> vmId='64c25543-bef7-4fdd-8204-6507046f5a34',
>> storagePoolId='5a4ea80c-b3b2-11ea-a890-00163e3cb866',
>> storageDomainId='9a12f1b2-5378-46cc-964d-3575695e823f',
>> imageGroupId='3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8',
>> imageId='c757e740-9013-4ae0-901d-316932f4af0e',
>> baseImageId='ebe50730-dec3-4f29-8a38-9ae7c59f2aef',
>> topImageId='c757e740-9013-4ae0-901d-316932f4af0e', bandwidth='0'})'
>> execution failed: VDSGenericException: VDSErrorException: Failed to
>> MergeVDS, error = Merge failed, code = 52
>> 2020-07-22 16:40:37,549+02 ERROR [org.ovirt.engine.core.bll.MergeCommand]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Engine exception thrown while
>> sending merge command: org.ovirt.engine.core.common.errors.EngineException:
>> EngineException:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
>> failed, code = 52 (Failed with error mergeErr and code 52)
>> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
>> failed, code = 52
>> [VM disk XML from the log was stripped by the archive; only the attribute io='threads' survives]
>> 2020-07-22 16:40:39,659+02 ERROR
>> [org.ovirt.engine.core.bll.MergeStatusCommand]
>> (EE-ManagedExecutorService-commandCoordinator-Thread-3)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Failed to live merge. Top volume
>> c757e740-9013-4ae0-901d-316932f4af0e is still in qemu chain
>> [ebe50730-dec3-4f29-8a38-9ae7c59f2aef, c757e740-9013-4ae0-901d-316932f4af0e]
>> 2020-07-22 16:40:41,524+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command id:
>> 'e0b2bce7-afe0-4955-ae46-38bcb8719852 failed child command status for step
>> 'MERGE_STATUS'
>> 2020-07-22 16:40:42,597+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Merging of snapshot
>> 'ef8f7e06-e48c-4a8c-983c-64e3d4ebfcf9' images
>> 'ebe50730-dec3-4f29-8a38-9ae7c59f2aef'..'c757e740-9013-4ae0-901d-316932f4af0e'
>> failed. Images have been marked illegal and can no longer be previewed or
>> reverted to. Please retry Live Merge on the snapshot to complete the
>> operation.
>> 2020-07-22 16:40:42,603+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
>> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
>> with failure.
>> 2020-07-22 16:40:43,679+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
>> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending 

[ovirt-users] Re: New ovirt 4.4.0.3-1.el8 leaves disks in illegal state on all snapshot actions

2020-07-23 Thread Benny Zlotnik
It was fixed [1]; you need to upgrade to libvirt 6+ and qemu 4.2+.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939
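
To confirm a host is already on the fixed stack before retrying the live merge, the libvirt Python bindings can report both versions. A minimal sketch, assuming python3-libvirt is installed on the hypervisor:

# Print the libvirt and QEMU versions of the local hypervisor and check
# them against the libvirt >= 6.0 / QEMU >= 4.2 requirement mentioned above.
import libvirt

def decode(v):
    # libvirt packs versions as major * 1,000,000 + minor * 1,000 + release
    return v // 1000000, (v // 1000) % 1000, v % 1000

conn = libvirt.open('qemu:///system')
lib_ver = decode(conn.getLibVersion())   # libvirt library version
qemu_ver = decode(conn.getVersion())     # hypervisor (QEMU) version
conn.close()

print('libvirt:', '.'.join(map(str, lib_ver)))
print('qemu:   ', '.'.join(map(str, qemu_ver)))
print('ok:', lib_ver >= (6, 0, 0) and qemu_ver >= (4, 2, 0))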


On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot  wrote:

> Hi all,
>
> I've got a two-node setup, image-based installs.
> When doing OVA exports or generic snapshots, things seem in order.
> Removing snapshots shows the warning 'disk in illegal state'.
>
> Mouse hover shows .. please do not shut down before successfully removing
> the snapshot.
>
>
> ovirt-engine log
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM node2.lab command MergeVDS failed:
> Merge failed
> 2020-07-22 16:40:37,549+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command 'MergeVDSCommand(HostName =
> node2.lab,
> MergeVDSCommandParameters:{hostId='02df5213-1243-4671-a1c6-6489d7146319',
> vmId='64c25543-bef7-4fdd-8204-6507046f5a34',
> storagePoolId='5a4ea80c-b3b2-11ea-a890-00163e3cb866',
> storageDomainId='9a12f1b2-5378-46cc-964d-3575695e823f',
> imageGroupId='3f7ac8d8-f1ab-4c7a-91cc-f34d0b8a1cb8',
> imageId='c757e740-9013-4ae0-901d-316932f4af0e',
> baseImageId='ebe50730-dec3-4f29-8a38-9ae7c59f2aef',
> topImageId='c757e740-9013-4ae0-901d-316932f4af0e', bandwidth='0'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> MergeVDS, error = Merge failed, code = 52
> 2020-07-22 16:40:37,549+02 ERROR [org.ovirt.engine.core.bll.MergeCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-2)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Engine exception thrown while
> sending merge command: org.ovirt.engine.core.common.errors.EngineException:
> EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52 (Failed with error mergeErr and code 52)
> Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge
> failed, code = 52
> [VM disk XML from the log was stripped by the archive; only the attribute io='threads' survives]
> 2020-07-22 16:40:39,659+02 ERROR
> [org.ovirt.engine.core.bll.MergeStatusCommand]
> (EE-ManagedExecutorService-commandCoordinator-Thread-3)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Failed to live merge. Top volume
> c757e740-9013-4ae0-901d-316932f4af0e is still in qemu chain
> [ebe50730-dec3-4f29-8a38-9ae7c59f2aef, c757e740-9013-4ae0-901d-316932f4af0e]
> 2020-07-22 16:40:41,524+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Command id:
> 'e0b2bce7-afe0-4955-ae46-38bcb8719852 failed child command status for step
> 'MERGE_STATUS'
> 2020-07-22 16:40:42,597+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Merging of snapshot
> 'ef8f7e06-e48c-4a8c-983c-64e3d4ebfcf9' images
> 'ebe50730-dec3-4f29-8a38-9ae7c59f2aef'..'c757e740-9013-4ae0-901d-316932f4af0e'
> failed. Images have been marked illegal and can no longer be previewed or
> reverted to. Please retry Live Merge on the snapshot to complete the
> operation.
> 2020-07-22 16:40:42,603+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-53)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
> with failure.
> 2020-07-22 16:40:43,679+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] Ending command
> 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
> 2020-07-22 16:40:43,774+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-15)
> [264b0047-5aa6-4380-9d32-eb328fd6bed0] EVENT_ID:
> USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Failed to delete snapshot
> 'Auto-generated for Export To OVA' for VM 'Adhoc'.
>
>
> VDSM on hypervisor
> 2020-07-22 14:14:30,220+0200 ERROR (jsonrpc/5) [virt.vm]
> (vmId='14283e6d-c3f0-4011-b90f-a1272f0fbc10') Live merge failed (job:
> e59c54d9-b8d3-44d0-9147-9dd40dff57b9) (vm:5381)
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed',
> dom=self)
> libvirt.libvirtError: internal error: qemu block name 'json:{"backing":
> {"driver": "qcow2", "file": {"driver": "file", "filename":
>