[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-15 Thread Ladislav Humenik

I guess from the libvirt-latest repository

On 12.04.19 16:09, Nir Soffer wrote:



On Fri, Apr 12, 2019, 12:07 Ladislav Humenik <ladislav.hume...@1und1.de> wrote:


Hello, we recently updated a few oVirt setups from 4.2.5 to 4.2.8
(actually 9 oVirt engine nodes). Since the update, live storage
migration has stopped working and leaves the auto-generated snapshot
behind.

If we power the guest VM down, the migration works as expected. Is there
a known bug for this? Shall we open a new one?

Setup:
ovirt - Dell PowerEdge R630
     - CentOS Linux release 7.6.1810 (Core)
     - ovirt-engine-4.2.8.2-1.el7.noarch
     - kernel-3.10.0-957.10.1.el7.x86_64
hypervisors    - Dell PowerEdge R640
     - CentOS Linux release 7.6.1810 (Core)
     - kernel-3.10.0-957.10.1.el7.x86_64
     - vdsm-4.20.46-1.el7.x86_64
     - libvirt-5.0.0-1.el7.x86_64


This is a known issue in libvirt < 5.2.

How did you get this version on CentOS 7.6?

On my CentOS 7.6 I have libvirt 4.5, which is not affected by this issue.
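
If you want to double-check what a host is actually running, the installed
libvirt-python bindings can report the daemon's version directly. A minimal
sketch, assuming the bindings are present and a local qemu:///system
connection is reachable:

    import libvirt

    # Ask the connected libvirtd for its libvirt library version,
    # encoded as major*1000000 + minor*1000 + release.
    conn = libvirt.open('qemu:///system')
    ver = conn.getLibVersion()
    print('libvirt %d.%d.%d' % (ver // 1000000, (ver // 1000) % 1000, ver % 1000))
    conn.close()

Anything reporting 5.0 or 5.1 falls in the affected range; 4.5 and 5.2+ do not.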

Nir

     - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
storage domain  - netapp NFS share


logs are attached

-- 
Ladislav Humenik


System administrator



--
Ladislav Humenik

System administrator / VI
IT Operations Hosting Infrastructure

1&1 IONOS SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany
Phone:  +49 721 91374-8361
Mobile: +49 152 2929-6349
E-Mail: ladislav.hume...@1und1.de | Web: www.1und1.de

Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498

Vorstand: Dr. Christian Böing, Hüseyin Dogan, Hans-Henning Kettler, Matthias 
Steinberg, Achim Weiß
Aufsichtsratsvorsitzender: Markus Kadelke


Member of United Internet


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7D4DAPTZQH423D5PVCQ4WJFG75F7TGG3/


[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Nir Soffer
On Fri, Apr 12, 2019, 12:07 Ladislav Humenik wrote:

> Hello, we recently updated a few oVirt setups from 4.2.5 to 4.2.8
> (actually 9 oVirt engine nodes). Since the update, live storage
> migration has stopped working and leaves the auto-generated snapshot
> behind.
>
> If we power the guest VM down, the migration works as expected. Is there
> a known bug for this? Shall we open a new one?
>
> Setup:
> ovirt - Dell PowerEdge R630
>  - CentOS Linux release 7.6.1810 (Core)
>  - ovirt-engine-4.2.8.2-1.el7.noarch
>  - kernel-3.10.0-957.10.1.el7.x86_64
> hypervisors - Dell PowerEdge R640
>  - CentOS Linux release 7.6.1810 (Core)
>  - kernel-3.10.0-957.10.1.el7.x86_64
>  - vdsm-4.20.46-1.el7.x86_64
>  - libvirt-5.0.0-1.el7.x86_64
>

This is a known issue in libvirt < 5.2.

How did you get this version on CentOS 7.6?

On my CentOS 7.6 I have libvirt 4.5, which is not affected by this issue.

Nir

>  - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
> storage domain  - netapp NFS share
>
>
> logs are attached
>
> --
> Ladislav Humenik
>
> System administrator
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3B3TLAJ7QPC6LLPBZYRD7WXUJZXQE5P6/


[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Benny Zlotnik
2019-04-12 10:39:25,643+0200 ERROR (jsonrpc/0) [virt.vm]
(vmId='71f27df0-f54f-4a2e-a51c-e61aa26b370d') Unable to start
replication for vda to {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'volumeInfo': {'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'type': 'file'}, 'diskType': 'file', 'format': 'cow', 'cache': 'none',
'volumeID': '5c2738a4-4279-4cc3-a0de-6af1095f8879', 'imageID':
'9a66bf0f-1333-4931-ad58-f6f1aa1143be', 'poolID':
'b1a475aa-c084-46e5-b65a-bf4a47143c88', 'device': 'disk', 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'propagateErrors': 'off', 'volumeChain': [{'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2',
'volumeID': u'cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2', 'leasePath':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2.lease',
'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}, {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
'volumeID': u'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'leasePath':
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879.lease',
'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}]} (vm:4710)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704, in diskReplicateStart
    self._startDriveReplication(drive)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843, in _startDriveReplication
    self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 728, in blockCopy
    ret = libvirtmod.virDomainBlockCopy(self._o, disk, destxml, params, flags)
TypeError: block params must be a dictionary


It looks like a bug in libvirt [1].

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1687114
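
For reference, the call that trips this is virDomain.blockCopy() from the
libvirt-python bindings; as the traceback shows, vdsm invokes it without a
params argument and the broken bindings reject the default. A minimal
standalone sketch of the same call pattern (domain name, destination path,
and flags are illustrative assumptions, not the vdsm code):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('testvm')  # hypothetical domain name

    # Destination disk XML for the copy target (illustrative path).
    dest_xml = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/copy-target.qcow2'/>
    </disk>
    """

    # Persistent domains generally need the transient-job flag for a
    # block copy. The affected 5.0/5.1 bindings fail on the params
    # argument default with "TypeError: block params must be a
    # dictionary"; the explicit empty dict below merely makes that
    # argument visible, and is not a confirmed workaround.
    dom.blockCopy('vda', dest_xml, params={},
                  flags=libvirt.VIR_DOMAIN_BLOCK_COPY_TRANSIENT_JOB)
    conn.close()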

On Fri, Apr 12, 2019 at 12:06 PM Ladislav Humenik wrote:
>
> Hello, we recently updated a few oVirt setups from 4.2.5 to 4.2.8
> (actually 9 oVirt engine nodes). Since the update, live storage
> migration has stopped working and leaves the auto-generated snapshot
> behind.
>
> If we power the guest VM down, the migration works as expected. Is there
> a known bug for this? Shall we open a new one?
>
> Setup:
> ovirt - Dell PowerEdge R630
>  - CentOS Linux release 7.6.1810 (Core)
>  - ovirt-engine-4.2.8.2-1.el7.noarch
>  - kernel-3.10.0-957.10.1.el7.x86_64
> hypervisors - Dell PowerEdge R640
>  - CentOS Linux release 7.6.1810 (Core)
>  - kernel-3.10.0-957.10.1.el7.x86_64
>  - vdsm-4.20.46-1.el7.x86_64
>  - libvirt-5.0.0-1.el7.x86_64
>  - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
> storage domain  - netapp NFS share
>
>
> logs are attached
>
> --
> Ladislav Humenik
>
> System administrator
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YVDEMZED7TSZNRIV3CURBI3YUKUXV5ZT/