[ovirt-users] VM Migrations failing to newly upgraded host

2022-01-24 Thread k.gunasekhar--- via Users
I am able to power on VMs on the newly upgraded host, but I am not able to
migrate VMs from other hosts to the new host, or from the newly upgraded
host to other hosts. This worked fine before the upgrade.

I see the logs below:

Unable to read from monitor: Connection reset by peer
internal error: qemu unexpectedly closed the monitor: 
2022-01-24T17:51:46.598571Z qemu-kvm: get_pci_config_device: Bad config >
2022-01-24T17:51:46.598627Z qemu-kvm: Failed to load PCIDevice:config
2022-01-24T17:51:46.598635Z qemu-kvm: Failed to load pcie-root-port:parent_obj.parent_obj.parent_obj
2022-01-24T17:51:46.598642Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:02.0/pcie-root-port'
2022-01-24T17:51:46.598830Z qemu-kvm: load of migration failed: Invalid argument
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
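
The get_pci_config_device / "Failed to load PCIDevice:config" pattern during
an incoming migration usually means the destination qemu disagrees with the
source about the guest's PCI(e) device state, most often because the two
hosts run different qemu-kvm builds or emulate the guest's machine type
differently. A quick comparison to run on both hosts (illustrative commands,
not from the original report; <vm-name> is a placeholder):

    # compare the virtualization stack on source and destination
    rpm -q qemu-kvm libvirt vdsm

    # compare the machine type the guest is actually running with
    virsh -r dumpxml <vm-name> | grep -i machine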

OS Version: RHEL - 8.6 - 1.el8
OS Description: CentOS Stream 8
Kernel Version: 4.18.0 - 358.el8.x86_64
KVM Version: 6.1.0 - 5.module_el8.6.0+1040+0ae94936
LIBVIRT Version: libvirt-7.10.0-1.module_el8.6.0+1046+bd8eec5e
VDSM Version: vdsm-4.40.100.2-1.el8
SPICE Version: 0.14.3 - 4.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-16.2.7-1.el8s
Open vSwitch Version: openvswitch-2.11-1.el8
Nmstate Version: nmstate-1.2.1-0.1.alpha1.el8


[ovirt-users] VM Migrations Failing

2019-07-10 Thread Michael Watters
I need to migrate running VMs from one host in our cluster to another;
however, the task keeps failing any time I start a migration. The engine
logs show a few different errors, as follows.

2019-07-10 09:53:19,440-04 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-2) [] Migration of VM 'vm1' to host
'ovirt-node-production3' failed: VM destroyed during the startup.

2019-07-10 09:53:06,542-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-51) [c1daf639-ff39-4397-b1bf-09426cacf72d] EVENT_ID:
VM_MIGRATION_FAILED(65), Migration failed due to a failed validation:
[Cannot migrate VM. There is no host that satisfies current scheduling
constraints. See below for details:, The host ovirt-node-production3 did
not satisfy internal filter Memory.] (VM: vm2, Source:
ovirt-node-production2).
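
The failed 'Memory' filter means the scheduler decided the destination host
does not have enough free memory to guarantee the VM; it is a separate issue
from the balloon error below. A quick sanity check on the destination host
(illustrative commands; vdsm-client is assumed available, vdsm >= 4.20):

    # physical memory actually free on the host
    free -g

    # the memory figures vdsm reports to the engine
    vdsm-client Host getStats | grep -i mem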

I also checked the vdsm.log on the destination host, which shows an
error like this:

ERROR (jsonrpc/2) [virt.vm]
(vmId='f5bb25a8-3176-40b6-b21b-f07ed1089b27') Alias not found for device
type balloon during migration at destination host (vm:5562)
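
The balloon message suggests the domain XML arriving at the destination
lacks the alias vdsm expects on the memballoon device. One way to inspect
what the source is actually sending (illustrative command):

    virsh -r dumpxml vm1 | grep -A2 memballoon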

Does anybody know how to resolve this?  The destination host is able to
run VMs, and I've been able to move a few VMs by shutting them down and
restarting them on the new host; however, I'd like to do a live migration
if possible.


[ovirt-users] VM migrations failing after upgrade 4.3.2 -> 4.3.3

2019-04-17 Thread Eduardo Mayoral

Hi,

    After upgrading from 4.3.2 to 4.3.3, VM migrations are failing with
"No available host to migrate VMs to".

    Interestingly, this happens in one of our clusters, not the other.
Both are in the same datacenter, both at 4.3 compatibility version.

    These are the relevant lines from /var/log/ovirt-engine/engine.log:

2019-04-17 08:58:41,816Z ERROR [org.ovirt.engine.core.bll.GetValidHostsForVmsQuery] (default task-7) [51191ffb-0ae1-4414-8916-9e5012d86289] Query 'GetValidHostsForVmsQuery' failed: null
2019-04-17 08:58:41,816Z ERROR [org.ovirt.engine.core.bll.GetValidHostsForVmsQuery] (default task-7) [51191ffb-0ae1-4414-8916-9e5012d86289] Exception: java.lang.NullPointerException
    at org.ovirt.engine.core.bll.scheduling.SchedulingManager.subtractRunningVmResources(SchedulingManager.java:923) [bll.jar:]
    at org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSchedule(SchedulingManager.java:616) [bll.jar:]
    at org.ovirt.engine.core.bll.GetValidHostsForVmsQuery.lambda$getValidHosts$0(GetValidHostsForVmsQuery.java:56) [bll.jar:]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) [rt.jar:1.8.0_201]
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382) [rt.jar:1.8.0_201]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) [rt.jar:1.8.0_201]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) [rt.jar:1.8.0_201]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) [rt.jar:1.8.0_201]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [rt.jar:1.8.0_201]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) [rt.jar:1.8.0_201]
    at org.ovirt.engine.core.bll.GetValidHostsForVmsQuery.getValidHosts(GetValidHostsForVmsQuery.java:59) [bll.jar:]
    at org.ovirt.engine.core.bll.GetValidHostsForVmsQuery.executeQueryCommand(GetValidHostsForVmsQuery.java:36) [bll.jar:]
    at org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:106) [bll.jar:]
    at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
    at org.ovirt.engine.core.bll.executor.DefaultBackendQueryExecutor.execute(DefaultBackendQueryExecutor.java:14) [bll.jar:]
    at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:521) [bll.jar:]
    at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:490) [bll.jar:]
    at sun.reflect.GeneratedMethodAccessor168.invoke(Unknown Source) [:1.8.0_201]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_201]
    at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_201]
    at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
    at org.jboss.as.weld.ejb.DelegatingInterceptorInvocationContext.proceed(DelegatingInterceptorInvocationContext.java:92) [wildfly-weld-ejb-15.0.1.Final.jar:15.0.1.Final]
    at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.interceptorChainCompleted(WeldInvocationContextImpl.java:107) [weld-core-impl-3.0.5.Final.jar:3.0.5.Final]
    at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.proceed(WeldInvocationContextImpl.java:126) [weld-core-impl-3.0.5.Final.jar:3.0.5.Final]
    at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
    at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) [:1.8.0_201]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_201]
    at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_201]
    at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl-3.0.5.Final.jar:3.0.5.Final]
    at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.invokeNext(WeldInvocationContextImpl.java:92) [weld-core-impl-3.0.5.Final.jar:3.0.5.Final]
    at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.proceed(WeldInvocationContextImpl.java:124) [weld-core-impl-3.0.5.Final.jar:3.0.5.Final]
    at org.jboss.weld.bean.InterceptorImpl.intercept(InterceptorImpl.java:105) [weld-core-impl-3.0.5.Final.jar:3.0.5.Final]
    at org.jboss.as.weld.ejb.DelegatingInterceptorInvocationContext.proceed(DelegatingInterceptorInvocationContext.java:82) [wildfly-weld-ejb-15.0.1.Final.jar:15.0.1.Final]
    at org.jboss.as.weld.interceptors.EjbComponentInterceptorSupport.delegateInterception(EjbComponentInterceptorSupport.java:60)
    at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:77)
    at or

Re: [Users] VM migrations failing

2013-02-10 Thread Dan Kenigsberg
On Fri, Feb 08, 2013 at 01:40:59PM -0600, Dead Horse wrote:
> Current master does resolve the issue. However in order to test it the
> zombie reaper patch from the ovirt-3.2 branch must be applied to avoid that
> issue.
>  - DHC

I tend to agree that keeping vdsm's master branch broken for too long
is not very friendly. If the issue is not fixed properly in the coming
few days, I would have to resort to a plain revert of
http://gerrit.ovirt.org/#/c/11492/ (which re-exposes the process leak).


Re: [Users] VM migrations failing

2013-02-08 Thread Dead Horse
Current master does resolve the issue. However in order to test it the
zombie reaper patch from the ovirt-3.2 branch must be applied to avoid that
issue.
 - DHC


On Fri, Feb 8, 2013 at 2:08 AM, Vinzenz Feenstra wrote:

>  On 02/05/2013 10:57 PM, Dead Horse wrote:
>
>  Confirmed ovirt-3.2 branch of vdsm does work with migrations.
>  So there is a difference between it and master as pertains to the
> migration logic.
>  - DHC
>
> Hi,
>
> could you please retry it with the current master? We have fixed the issue
> which caused this regression. :-)
> Thank you :)
>
>
>
> On Tue, Feb 5, 2013 at 2:26 PM, Dead Horse wrote:
>
>>  Dan,
>>  Building and testing it now.
>>  - DHC
>>
>>
>> On Tue, Feb 5, 2013 at 2:39 AM, Dan Kenigsberg  wrote:
>>
>>> On Mon, Feb 04, 2013 at 10:38:16AM -0600, Dead Horse wrote:
>>> > VDSM built from commit: c343e1833f7b6e5428dd90f14f7807dca1baa0b4 works
>>> > Current VDSM built from master does not work.
>>> >
>>> > I could try spending some time trying to bisect and find out where the
>>> > breakage occurred I suppose.
>>>
>>>  Would you be kind to find the time to help us here? Clearly, the commit
>>> on top of c343e1833f7b6e5428dd90f14f7807dca1baa0b4 introduces nasty
>>> supervdsm regressions; it has to be reverted for any meaningful testing.
>>> However I do not see how it can be related to the problem at hand.
>>>
>>> Would you at least try out the ovirt-3.2 branch (where the infamous
>>> "zombie reaper" commit is reverted)?
>>>
>>> Dan.
>>>
>>
>>
>
>
>
>
> --
> Regards,
>
> Vinzenz Feenstra | Senior Software Engineer
> RedHat Engineering Virtualization R & D
> Phone: +420 532 294 625
> IRC: vfeenstr or evilissimo
>
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>


Re: [Users] VM migrations failing

2013-02-08 Thread Vinzenz Feenstra

On 02/05/2013 10:57 PM, Dead Horse wrote:

Confirmed ovirt-3.2 branch of vdsm does work with migrations.
So there is a difference between it and master as pertains to the 
migration logic.

- DHC

Hi,

could you please retry it with the current master? We have fixed the 
issue which caused this regression. :-)

Thank you :)




On Tue, Feb 5, 2013 at 2:26 PM, Dead Horse
<deadhorseconsult...@gmail.com> wrote:


Dan,
Building and testing it now.
- DHC


On Tue, Feb 5, 2013 at 2:39 AM, Dan Kenigsberg <dan...@redhat.com> wrote:

On Mon, Feb 04, 2013 at 10:38:16AM -0600, Dead Horse wrote:
> VDSM built from commit: c343e1833f7b6e5428dd90f14f7807dca1baa0b4 works
> Current VDSM built from master does not work.
>
> I could try spending some time trying to bisect and find out where the
> breakage occurred I suppose.

Would you be kind to find the time to help us here? Clearly, the commit
on top of c343e1833f7b6e5428dd90f14f7807dca1baa0b4 introduces nasty
supervdsm regressions; it has to be reverted for any meaningful testing.
However I do not see how it can be related to the problem at hand.

Would you at least try out the ovirt-3.2 branch (where the infamous
"zombie reaper" commit is reverted)?

Dan.








--
Regards,

Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



Re: [Users] VM migrations failing

2013-02-05 Thread Dead Horse
Confirmed ovirt-3.2 branch of vdsm does work with migrations.
So there is a difference between it and master as pertains to the migration
logic.
- DHC


On Tue, Feb 5, 2013 at 2:26 PM, Dead Horse wrote:

> Dan,
> Building and testing it now.
> - DHC
>
>
> On Tue, Feb 5, 2013 at 2:39 AM, Dan Kenigsberg  wrote:
>
>> On Mon, Feb 04, 2013 at 10:38:16AM -0600, Dead Horse wrote:
>> > VDSM built from commit: c343e1833f7b6e5428dd90f14f7807dca1baa0b4 works
>> > Current VDSM built from master does not work.
>> >
>> > I could try spending some time trying to bisect and find out where the
>> > breakage occurred I suppose.
>>
>> Would you be kind to find the time to help us here? Clearly, the commit
>> on top of c343e1833f7b6e5428dd90f14f7807dca1baa0b4 introduces nasty
>> supervdsm regressions; it has to be reverted for any meaningful testing.
>> However I do not see how it can be related to the problem at hand.
>>
>> Would you at least try out the ovirt-3.2 branch (where the infamous
>> "zombie reaper" commit is reverted)?
>>
>> Dan.
>>
>
>


Re: [Users] VM migrations failing

2013-02-05 Thread Dead Horse
Dan,
Building and testing it now.
- DHC


On Tue, Feb 5, 2013 at 2:39 AM, Dan Kenigsberg  wrote:

> On Mon, Feb 04, 2013 at 10:38:16AM -0600, Dead Horse wrote:
> > VDSM built from commit: c343e1833f7b6e5428dd90f14f7807dca1baa0b4 works
> > Current VDSM built from master does not work.
> >
> > I could try spending some time trying to bisect and find out where the
> > breakage occurred I suppose.
>
> Would you be kind to find the time to help us here? Clearly, the commit
> on top of c343e1833f7b6e5428dd90f14f7807dca1baa0b4 introduces nasty
> supervdsm regressions; it has to be reverted for any meaningful testing.
> However I do not see how it can be related to the problem at hand.
>
> Would you at least try out the ovirt-3.2 branch (where the infamous
> "zombie reaper" commit is reverted)?
>
> Dan.
>


Re: [Users] VM migrations failing

2013-02-05 Thread Dan Kenigsberg
On Mon, Feb 04, 2013 at 10:38:16AM -0600, Dead Horse wrote:
> VDSM built from commit: c343e1833f7b6e5428dd90f14f7807dca1baa0b4 works
> Current VDSM built from master does not work.
> 
> I could try spending some time trying to bisect and find out where the
> breakage occurred I suppose.

Would you be kind to find the time to help us here? Clearly, the commit
on top of c343e1833f7b6e5428dd90f14f7807dca1baa0b4 introduces nasty
supervdsm regressions; it has to be reverted for any meaningful testing.
However I do not see how it can be related to the problem at hand.

Would you at least try out the ovirt-3.2 branch (where the infamous
"zombie reaper" commit is reverted)?

Dan.


Re: [Users] VM migrations failing

2013-02-04 Thread Dead Horse
VDSM built from commit: c343e1833f7b6e5428dd90f14f7807dca1baa0b4 works
Current VDSM built from master does not work.

I could try spending some time trying to bisect and find out where the
breakage occurred I suppose.
- DHC


On Sun, Feb 3, 2013 at 10:13 AM, Martin Kletzander wrote:

> On 02/03/2013 08:40 AM, Dan Kenigsberg wrote:
> > On Fri, Feb 01, 2013 at 11:44:08PM +0100, Martin Kletzander wrote:
> >> On 02/01/2013 09:29 PM, Dead Horse wrote:
> >>> To test further I loaded up two more identical servers with EL 6.3 and
> the
> >>> same package versions originally indicated. The difference here is
> that I
> >>> did not turn these into ovirt nodes. EG: installing VDSM.
> >>>
> >>> - All configurations were left at defaults on both servers
> >>> - iptables and selinux disabled on both servers
> >>> - verified full connectivty between both servers
> >>> - setup ssh (/root/authorized keys) between the servers --> this
> turned out
> >>> to be the key!
> >>>
> >>> Then using syntax found here:
> >>> http://libvirt.org/migration.html#flowpeer2peer
> >>> EG: From the source server I issued the following:
> >>>
> >>
> >> So your client equals to the source server, that makes us sure that the
> >> connection is made on the same network for p2p and non-p2p migration.
> >>
> >>> virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system
> >>>
> >>
> >> You're using ssh transport here, but isn't vdsm using tcp or tls?
> >
> > It is!
> >
>
> So then testing it with '+ssh' does not help much.  But at least we know
> the addresses are reachable.
>
> >> According to the config file tcp transport is enabled with no
> >> authentication whatsoever...
> >>
> >>> It fails in exactly the same way as previously indicated when the
> >>> destination server does not have an ssh rsa pub ID from the source
> system
> >>> in it's /root/.ssh/authorized_keys file.
> >>> However once the ssh rsa pub ID is in place on the destination system
> all
> >>> is well and migrations work as expected.
> >>>
> >>
> >> ..., which would mean you need no ssh keys when migrating using tcp
> >> transport instead.
> >>
> >> Also during p2p migration the source libvirt daemon can't ask you for
> >> the password, but when not using p2p the client is connecting to the
> >> destination, thus being able to ask for the password and/or use
> >> different ssh keys.
> >>
> >> But it looks like none of this has anything to do with the problem as:
> >>
> >>  1) as you found out, changing vdsm versions makes the problem go
> >> away/appear and
> >
> > I've missed this point. Which version of vdsm makes it go away?
> >
>
> Sorry, I've got it stuck in my head that part of the thread was about
> it, but when going through the mail now it makes less sense than before.
>  I probably understood that from [1] and maybe some other sentence that
> mixed in my head, but was related to the ssh migration.
>
> Sorry for that,
> Martin
>
> [1] http://www.mail-archive.com/users@ovirt.org/msg06105.html
>


Re: [Users] VM migrations failing

2013-02-03 Thread Martin Kletzander
On 02/03/2013 08:40 AM, Dan Kenigsberg wrote:
> On Fri, Feb 01, 2013 at 11:44:08PM +0100, Martin Kletzander wrote:
>> On 02/01/2013 09:29 PM, Dead Horse wrote:
>>> To test further I loaded up two more identical servers with EL 6.3 and the
>>> same package versions originally indicated. The difference here is that I
>>> did not turn these into ovirt nodes. EG: installing VDSM.
>>>
>>> - All configurations were left at defaults on both servers
>>> - iptables and selinux disabled on both servers
>>> - verified full connectivty between both servers
>>> - setup ssh (/root/authorized keys) between the servers --> this turned out
>>> to be the key!
>>>
>>> Then using syntax found here:
>>> http://libvirt.org/migration.html#flowpeer2peer
>>> EG: From the source server I issued the following:
>>>
>>
>> So your client equals to the source server, that makes us sure that the
>> connection is made on the same network for p2p and non-p2p migration.
>>
>>> virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system
>>>
>>
>> You're using ssh transport here, but isn't vdsm using tcp or tls?
> 
> It is!
> 

So then testing it with '+ssh' does not help much.  But at least we know
the addresses are reachable.

>> According to the config file tcp transport is enabled with no
>> authentication whatsoever...
>>
>>> It fails in exactly the same way as previously indicated when the
>>> destination server does not have an ssh rsa pub ID from the source system
>>> in it's /root/.ssh/authorized_keys file.
>>> However once the ssh rsa pub ID is in place on the destination system all
>>> is well and migrations work as expected.
>>>
>>
>> ..., which would mean you need no ssh keys when migrating using tcp
>> transport instead.
>>
>> Also during p2p migration the source libvirt daemon can't ask you for
>> the password, but when not using p2p the client is connecting to the
>> destination, thus being able to ask for the password and/or use
>> different ssh keys.
>>
>> But it looks like none of this has anything to do with the problem as:
>>
>>  1) as you found out, changing vdsm versions makes the problem go
>> away/appear and
> 
> I've missed this point. Which version of vdsm makes it go away?
> 

Sorry, I've got it stuck in my head that part of the thread was about
it, but when going through the mail now it makes less sense than before.
 I probably understood that from [1] and maybe some other sentence that
mixed in my head, but was related to the ssh migration.

Sorry for that,
Martin

[1] http://www.mail-archive.com/users@ovirt.org/msg06105.html


Re: [Users] VM migrations failing

2013-02-02 Thread Dan Kenigsberg
On Fri, Feb 01, 2013 at 11:44:08PM +0100, Martin Kletzander wrote:
> On 02/01/2013 09:29 PM, Dead Horse wrote:
> > To test further I loaded up two more identical servers with EL 6.3 and the
> > same package versions originally indicated. The difference here is that I
> > did not turn these into ovirt nodes. EG: installing VDSM.
> > 
> > - All configurations were left at defaults on both servers
> > - iptables and selinux disabled on both servers
> > - verified full connectivty between both servers
> > - setup ssh (/root/authorized keys) between the servers --> this turned out
> > to be the key!
> > 
> > Then using syntax found here:
> > http://libvirt.org/migration.html#flowpeer2peer
> > EG: From the source server I issued the following:
> >
> 
> So your client equals to the source server, that makes us sure that the
> connection is made on the same network for p2p and non-p2p migration.
> 
> > virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system
> > 
> 
> You're using ssh transport here, but isn't vdsm using tcp or tls?

It is!

> According to the config file tcp transport is enabled with no
> authentication whatsoever...
> 
> > It fails in exactly the same way as previously indicated when the
> > destination server does not have an ssh rsa pub ID from the source system
> > in it's /root/.ssh/authorized_keys file.
> > However once the ssh rsa pub ID is in place on the destination system all
> > is well and migrations work as expected.
> > 
> 
> ..., which would mean you need no ssh keys when migrating using tcp
> transport instead.
> 
> Also during p2p migration the source libvirt daemon can't ask you for
> the password, but when not using p2p the client is connecting to the
> destination, thus being able to ask for the password and/or use
> different ssh keys.
> 
> But it looks like none of this has anything to do with the problem as:
> 
>  1) as you found out, changing vdsm versions makes the problem go
> away/appear and

I've missed this point. Which version of vdsm makes it go away?

> 
>  2) IIUC the first error was "function is not supported by the
> connection driver: virDomainMigrateToURI2", but the second one was
> "error: operation failed: Failed to connect to remote libvirt URI".
> 
> Since I tried finding out why the first error appeared, I probably
> misunderstood somewhere in the middle of this thread and am useless
> here.  However if I can help from the libvirt POV, I'll follow up this
> thread and will see whether there's anything related.


Re: [Users] VM migrations failing

2013-02-01 Thread Martin Kletzander
On 02/01/2013 09:29 PM, Dead Horse wrote:
> To test further I loaded up two more identical servers with EL 6.3 and the
> same package versions originally indicated. The difference here is that I
> did not turn these into ovirt nodes. EG: installing VDSM.
> 
> - All configurations were left at defaults on both servers
> - iptables and selinux disabled on both servers
> - verified full connectivty between both servers
> - setup ssh (/root/authorized keys) between the servers --> this turned out
> to be the key!
> 
> Then using syntax found here:
> http://libvirt.org/migration.html#flowpeer2peer
> EG: From the source server I issued the following:
>

So your client equals to the source server, that makes us sure that the
connection is made on the same network for p2p and non-p2p migration.

> virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system
> 

You're using ssh transport here, but isn't vdsm using tcp or tls?
According to the config file tcp transport is enabled with no
authentication whatsoever...

> It fails in exactly the same way as previously indicated when the
> destination server does not have an ssh rsa pub ID from the source system
> in it's /root/.ssh/authorized_keys file.
> However once the ssh rsa pub ID is in place on the destination system all
> is well and migrations work as expected.
> 

..., which would mean you need no ssh keys when migrating using tcp
transport instead.

Also during p2p migration the source libvirt daemon can't ask you for
the password, but when not using p2p the client is connecting to the
destination, thus being able to ask for the password and/or use
different ssh keys.
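
Concretely, the two invocations contrasted here are the ones already used in
this thread (VM name and URI taken from those tests):

    # non-p2p: virsh itself connects to the destination and can prompt for
    # credentials interactively
    virsh migrate --live sl63 qemu+ssh://192.168.1.2/system

    # p2p: the source libvirtd opens the connection; with no tty to prompt
    # on, ssh must authenticate non-interactively (e.g. via authorized_keys)
    virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system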

But it looks like none of this has anything to do with the problem as:

 1) as you found out, changing vdsm versions makes the problem go
away/appear and

 2) IIUC the first error was "function is not supported by the
connection driver: virDomainMigrateToURI2", but the second one was
"error: operation failed: Failed to connect to remote libvirt URI".

Since I tried finding out why the first error appeared, I probably
misunderstood somewhere in the middle of this thread and am useless
here.  However if I can help from the libvirt POV, I'll follow up this
thread and will see whether there's anything related.

Good luck,
Martin


Re: [Users] VM migrations failing

2013-02-01 Thread Dead Horse
To test further I loaded up two more identical servers with EL 6.3 and the
same package versions originally indicated. The difference here is that I
did not turn these into oVirt nodes, i.e., I did not install VDSM.

- All configurations were left at defaults on both servers
- iptables and selinux disabled on both servers
- verified full connectivity between both servers
- set up ssh (/root/.ssh/authorized_keys) between the servers --> this
turned out to be the key!

Then, using the syntax found here:
http://libvirt.org/migration.html#flowpeer2peer
I issued the following from the source server:

virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system

It fails in exactly the same way as previously indicated when the
destination server does not have an ssh rsa pub ID from the source system
in its /root/.ssh/authorized_keys file.
However, once the ssh rsa pub ID is in place on the destination system,
all is well and migrations work as expected.
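
In other words, the source libvirtd needs non-interactive root ssh to the
destination to open the p2p tunnel. A minimal sketch of that setup
(illustrative; assumes root-to-root ssh as in the test above):

    # on the source host
    ssh-keygen -t rsa                # only if /root/.ssh/id_rsa does not exist
    ssh-copy-id root@192.168.1.2     # append the pub key to the destination's authorized_keys
    virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system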

Next I tried the same on the identical servers acting as oVirt nodes
(i.e., installed with and running VDSM). The above did not do the trick;
it still fails in exactly the same way. This leads me to believe that
VDSM, or a configuration alteration made by it, is to blame.

I should note that I have SSL disabled on my engine/nodes, i.e.
vdc_options --> SSLEnabled = false, UseSecureConnectionWithServers =
false, EnableSpiceRootCertificateValidation = false

vdsm.conf (same on both nodes):
[addresses]
management_port = 54321

[vars]
ssl = false

libvirtd.conf alterations by VDSM (same on both nodes excluding host_uuid)

## beginning of configuration section by vdsm-4.9.11 <-- BTW this does not
match the running build of VDSM on the system, which is actually built from
commit: c343e1833f7b6e5428dd90f14f7807dca1baa0b4 (vdsm-4.10.2-12.el6.x86_64,
"-12" is my own internal counter)
listen_addr="0.0.0.0"
unix_sock_group="kvm"
unix_sock_rw_perms="0770"
auth_unix_rw="sasl"
host_uuid="1d465e80-1e48-474f-b52d-f5aeda9f6a7a"
log_outputs="1:file:/var/log/libvirtd.log"
log_filters="1:libvirt 3:event 3:json 1:util 1:qemu"
auth_tcp="none"
listen_tcp=1
listen_tls=0
## end of configuration section by vdsm-4.9.11 <-- Does not match running
VDSM version#

qemu.conf alterations by VDSM (same on both nodes)

## beginning of configuration section by vdsm-4.9.11 <-- Does not match
running VDSM version#
dynamic_ownership=0
spice_tls=0
save_image_format="lzop"
lock_manager="sanlock"
auto_dump_path="/var/log/core"
## end of configuration section by vdsm-4.9.11 <-- Does not match running
VDSM version#
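
Given listen_tcp=1 and auth_tcp="none" above, the transport vdsm actually
uses can be tested directly, taking ssh out of the picture entirely
(illustrative commands; libvirtd's default tcp port is 16509):

    # confirm the destination's libvirtd is reachable over plain tcp
    virsh -c qemu+tcp://192.168.1.2/system list

    # then attempt the p2p migration over the same transport
    virsh migrate --p2p sl63 qemu+tcp://192.168.1.2/system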

- DHC

On Fri, Feb 1, 2013 at 10:47 AM, Dead Horse
wrote:

> Both nodes are identical and can fully communicate with each other.
> Since the normal non-p2p live migration works, both hosts can reach each
> other via the connection URI.
> Perhaps I am missing something here?
> - DHC
>


Re: [Users] VM migrations failing

2013-02-01 Thread Dead Horse
Both nodes are identical and can fully communicate with each other.
Since the normal non-p2p live migration works, both hosts can reach each
other via the connection URI.
Perhaps I am missing something here?
- DHC


Re: [Users] VM migrations failing

2013-02-01 Thread Martin Kletzander
On 01/31/2013 07:07 PM, Dead Horse wrote:
> Here is the content excerpt from libvirtd.log for the command: virsh #
> migrate --p2p sl63 qemu+ssh://192.168.1.2/system
> 

Thanks for testing this.  However, this is another problem.  When
migrating p2p, the destination URI must be reachable from the source,
not the client.  That is probably what failed in first place.  Would you
mind trying migrating p2p (and specifying the destination URI that's
reachable from the source host)?  Thanks.

Also, on both hosts the version are the same?

> 2013-01-31 18:02:53.740+: 2832: debug : virDomainFree:2313 :
> dom=0x7f4f88000c80, (VM: name=sl63,
> uuid=887d764a-f835-4112-9eda-836a772ea5eb)
> 2013-01-31 18:02:53.743+: 2831: debug : virDomainLookupByName:2146 :
> conn=0x7f4f8c001d80, name=sl63
> 2013-01-31 18:02:53.743+: 2831: debug : virDomainFree:2313 :
> dom=0x7f4f84002150, (VM: name=sl63,
> uuid=887d764a-f835-4112-9eda-836a772ea5eb)
> 2013-01-31 18:02:53.747+: 2829: debug : virDrvSupportsFeature:1521 :
> conn=0x7f4f8c001d80, feature=4
> 2013-01-31 18:02:53.751+: 2826: debug : virDrvSupportsFeature:1521 :
> conn=0x7f4f8c001d80, feature=6
> 2013-01-31 18:02:53.754+: 2828: debug : virDomainMigratePerform3:6247 :
> dom=0x7f4f7c0d1430, (VM: name=sl63,
> uuid=887d764a-f835-4112-9eda-836a772ea5eb), xmlin=(null) cookiein=(nil),
> cookieinlen=0, cookieout=0x7f4f998cab30, cookieoutlen=0x7f4f998cab3c,
> dconnuri=qemu+ssh://192.168.1.2/system, uri=(null), flags=2, dname=(null),
> bandwidth=0
> 2013-01-31 18:02:53.755+: 2828: debug : qemuMigrationPerform:2821 :
> driver=0x7f4f8c0af820, conn=0x7f4f8c001d80, vm=0x7f4f7c0c7040,
> xmlin=(null), 
> dconnuri=qemu+ssh://192.168.1.2/system,

Funny how mail clients try to make it "easier" for us and add a link
that makes sense (to them).

[...]

> 
> On Thu, Jan 31, 2013 at 11:27 AM, Dead Horse
> wrote:
> 
>> note ignore the IP diff in the ssh host auth --> copy/paste fail ;)
>> - DHC
>>
>>
>> On Thu, Jan 31, 2013 at 11:25 AM, Dead Horse <
>> deadhorseconsult...@gmail.com> wrote:
>>
>>> Doh, brain fart: VDSM is not involved here for the purposes of the needed
>>> test.
>>> Here is my initial whack at it:
>>>
>>> Source Node:
>>>
>>> virsh # list
>>>  Id    Name                 State
>>> ----------------------------------
>>>  1     sl63                 running
>>>
>>> virsh # migrate --p2p sl63 qemu+ssh://192.168.1.2/system
>>> error: operation failed: Failed to connect to remote libvirt URI
>>> qemu+ssh://192.168.1.2/system 
>>>
>>> virsh # migrate --live sl63 qemu+ssh://192.168.1.2/system
>>> The authenticity of host '192.168.1.2 (192.168.1.2)' can't be established.
>>> RSA key fingerprint is e5:1d:b3:e5:38:5f:e1:8b:73:26:9e:15:c8:0a:2d:ac.
>>> Are you sure you want to continue connecting (yes/no)? yes
>>> root@192.168.1.2's password:
>>> Please enter your authentication name: vdsm@ovirt
>>> Please enter your password:
>>>
>>> virsh #
>>>
>>>
>>> Dest Node After migrate --live:
>>> virsh # list
>>>  Id    Name                 State
>>> ----------------------------------
>>>  2     sl63                 running
>>>
>>> virsh #
>>>
>>>
>>>
>>> On Thu, Jan 31, 2013 at 10:38 AM, Dead Horse <
>>> deadhorseconsult...@gmail.com> wrote:
>>>
 Shu,
 I build oVirt Engine and vdsm from source myself. The commits I
 indicated are what I built from. I run the engine under FC17 and my nodes
 are running EL6.x respectively.

 Dan,
 I reverted VDSM on my two test nodes to an earlier build of VDSM
 (commit:
 c343e1833f7b6e5428dd90f14f7807dca1baa0b4)
 VDSM after the above commit is broken due to commit:
 fc3a44f71d2ef202cff18d7203b9e4165b546621 however when I built and tested
 from master yesterday I did apply a patch I tested for
 ybronhei which fixed that issue.

 I will build VDSM from master, today w/ the supervdsm patch and try the
 manual migration you indicated.

  - DHC



 On Thu, Jan 31, 2013 at 4:56 AM, Dan Kenigsberg wrote:

> On Thu, Jan 31, 2013 at 11:08:58AM +0100, Martin Kletzander wrote:
>> On 01/31/2013 10:25 AM, Dan Kenigsberg wrote:
>>> On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
 On 01/30/2013 08:40 PM, Dead Horse wrote:
> The nodes are EL6.3 based.
>
> Currently installed libvirt packages:
>
> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
> libvirt-cim-0.6.1-3.el6.x86_64
> libvirt-0.9.10-21.el6_3.8.x86_64
> libvirt-python-0.9.10-21.el6_3.8.x86_64
> libvirt-client-0.9.10-21.el6_3.8.x86_64
>
> and qemu packages:
> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
>
> Thus my presumption here given t

Re: [Users] VM migrations failing

2013-01-31 Thread Dead Horse
Here is the content excerpt from libvirtd.log for the command: virsh #
migrate --p2p sl63 qemu+ssh://192.168.1.2/system

2013-01-31 18:02:53.740+: 2832: debug : virDomainFree:2313 :
dom=0x7f4f88000c80, (VM: name=sl63,
uuid=887d764a-f835-4112-9eda-836a772ea5eb)
2013-01-31 18:02:53.743+: 2831: debug : virDomainLookupByName:2146 :
conn=0x7f4f8c001d80, name=sl63
2013-01-31 18:02:53.743+: 2831: debug : virDomainFree:2313 :
dom=0x7f4f84002150, (VM: name=sl63,
uuid=887d764a-f835-4112-9eda-836a772ea5eb)
2013-01-31 18:02:53.747+: 2829: debug : virDrvSupportsFeature:1521 :
conn=0x7f4f8c001d80, feature=4
2013-01-31 18:02:53.751+: 2826: debug : virDrvSupportsFeature:1521 :
conn=0x7f4f8c001d80, feature=6
2013-01-31 18:02:53.754+: 2828: debug : virDomainMigratePerform3:6247 :
dom=0x7f4f7c0d1430, (VM: name=sl63,
uuid=887d764a-f835-4112-9eda-836a772ea5eb), xmlin=(null) cookiein=(nil),
cookieinlen=0, cookieout=0x7f4f998cab30, cookieoutlen=0x7f4f998cab3c,
dconnuri=qemu+ssh://192.168.1.2/system, uri=(null), flags=2, dname=(null),
bandwidth=0
2013-01-31 18:02:53.755+: 2828: debug : qemuMigrationPerform:2821 :
driver=0x7f4f8c0af820, conn=0x7f4f8c001d80, vm=0x7f4f7c0c7040,
xmlin=(null), dconnuri=qemu+ssh://192.168.1.2/system,
uri=(null), cookiein=(null), cookieinlen=0, cookieout=0x7f4f998cab30,
cookieoutlen=0x7f4f998cab3c, flags=2, dname=(null), resource=0, v3proto=1
2013-01-31 18:02:53.755+: 2828: debug :
qemuDomainObjBeginJobInternal:758 : Starting async job: migration out
2013-01-31 18:02:53.779+: 2828: debug :
qemuProcessAutoDestroyActive:4226 : vm=sl63
2013-01-31 18:02:53.779+: 2828: debug : qemuDriverCloseCallbackGet:605
: vm=sl63, uuid=887d764a-f835-4112-9eda-836a772ea5eb, conn=(nil)
2013-01-31 18:02:53.779+: 2828: debug : qemuDriverCloseCallbackGet:611
: cb=(nil)
2013-01-31 18:02:53.779+: 2828: debug : doPeer2PeerMigrate:2528 :
driver=0x7f4f8c0af820, sconn=0x7f4f8c001d80, vm=0x7f4f7c0c7040,
xmlin=(null), dconnuri=qemu+ssh://192.168.1.2/system,
uri=(null), flags=2, dname=(null), resource=0
2013-01-31 18:02:53.779+: 2828: debug : virConnectOpen:1349 :
name=qemu+ssh://192.168.1.2/system 
2013-01-31 18:02:53.779+: 2828: debug :
virConnectOpenResolveURIAlias:1070 : Loading config file
'/etc/libvirt/libvirt.conf'
2013-01-31 18:02:53.779+: 2828: debug : do_open:1151 : name "qemu+ssh://192.168.1.2/system" to URI components:
  scheme qemu+ssh
  opaque (null)
  authority (null)
  server 192.168.1.2
  user (null)
  port 0
  path /system

2013-01-31 18:02:53.779+: 2828: debug : do_open:1195 : trying driver 0
(Test) ...
2013-01-31 18:02:53.779+: 2828: debug : do_open:1201 : driver 0 Test
returned DECLINED
2013-01-31 18:02:53.779+: 2828: debug : do_open:1195 : trying driver 1
(ESX) ...
2013-01-31 18:02:53.779+: 2828: debug : do_open:1201 : driver 1 ESX
returned DECLINED
2013-01-31 18:02:53.779+: 2828: debug : do_open:1195 : trying driver 2
(remote) ...
2013-01-31 18:02:53.779+: 2828: debug : virCommandRunAsync:2174 : About
to run LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin ssh
192.168.1.2 sh -c 'if 'nc' -q 2>&1 | grep "requires an argument" >/dev/null
2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U /var/run/libvirt/libvirt-sock'
2013-01-31 18:02:53.780+: 2828: debug : virCommandRunAsync:2192 :
Command result 0, with PID 14537
2013-01-31 18:02:53.844+: 2828: error : virNetSocketReadWire:988 :
Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
: Connection reset by peer
2013-01-31 18:02:53.845+: 2828: debug : do_open:1201 : driver 2 remote
returned ERROR
2013-01-31 18:02:53.845+: 2828: error : doPeer2PeerMigrate:2539 :
operation failed: Failed to connect to remote libvirt URI qemu+ssh://
192.168.1.2/system 
2013-01-31 18:02:53.845+: 2828: debug : qemuDomainObjEndAsyncJob:888 :
Stopping async job: migration out
2013-01-31 18:02:53.845+: 2821: debug : virPidAbort:2341 : aborting
child process 14537
2013-01-31 18:02:53.845+: 2821: debug : virPidAbort:2346 : process has
ended: exit status 255
2013-01-31 18:02:53.863+: 2828: debug : virDomainFree:2313 :
dom=0x7f4f7c0d1430, (VM: name=sl63,
uuid=887d764a-f835-4112-9eda-836a772ea5eb)
2013-01-31 18:02:54.459+: 2834: debug : virDomainInterfaceStats:7223 :
dom=0x7f4f6c000910, (VM: name=sl63,
uuid=887d764a-f835-4112-9eda-836a772ea5eb), path=vnet0,
stats=0x7f4f95cc4b00, size=64
2013-01-31 18:02:54.459+: 2834: debug : virDomainFree:2313 :
dom=0x7f4f6c000910, (VM: name=sl63,
uuid=887d764a-f835-4112-9eda-836a772ea5eb)
2013-01-31 18:02:59.464+: 2825: debug : virDomainInterfaceStats:7223 :
dom=0x7f4f8c2455d0, (VM: name=sl63,
uuid=887d764a-f835-4112-9eda-836a772ea5eb), path=vnet0,
stats=0x7f4f9b6cdb00, size=64
2013-01-31 18:02:59.472+0

Re: [Users] VM migrations failing

2013-01-31 Thread Dead Horse
note ignore the IP diff in the ssh host auth --> copy/paste fail ;)
- DHC


On Thu, Jan 31, 2013 at 11:25 AM, Dead Horse
wrote:

> Doh, brain fart: VDSM is not involved here for the purposes of the needed
> test.
> Here is my initial whack at it:
>
> Source Node:
>
> virsh # list
>  Id    Name                 State
> ----------------------------------
>  1     sl63                 running
>
> virsh # migrate --p2p sl63 qemu+ssh://192.168.1.2/system
> error: operation failed: Failed to connect to remote libvirt URI
> qemu+ssh://3.57.111.32/system
>
> virsh # migrate --live sl63 qemu+ssh://192.168.1.2/system
> The authenticity of host '3.57.111.32 (192.168.1.2)' can't be established.
> RSA key fingerprint is e5:1d:b3:e5:38:5f:e1:8b:73:26:9e:15:c8:0a:2d:ac.
> Are you sure you want to continue connecting (yes/no)? yes
> root@192.168.1.2's password:
> Please enter your authentication name: vdsm@ovirt
> Please enter your password:
>
> virsh #
>
>
> Dest Node After migrate --live:
> virsh # list
>  Id    Name                 State
> ----------------------------------
>  2     sl63                 running
>
> virsh #
>
>
>
> On Thu, Jan 31, 2013 at 10:38 AM, Dead Horse <
> deadhorseconsult...@gmail.com> wrote:
>
>> Shu,
>> I build oVirt Engine and vdsm from source myself. The commits I indicated
>> are what I built from. I run the engine under FC17 and my nodes are running
>> EL6.x respectively.
>>
>> Dan,
>> I reverted VDSM on my two test nodes to an earlier build of VDSM (commit:
>> c343e1833f7b6e5428dd90f14f7807dca1baa0b4)
>> VDSM after the above commit is broken due to commit:
>> fc3a44f71d2ef202cff18d7203b9e4165b546621 however when I built and tested
>> from master yesterday I did apply a patch I tested for
>> ybronhei which fixed that issue.
>>
>> I will build VDSM from master, today w/ the supervdsm patch and try the
>> manual migration you indicated.
>>
>>  - DHC
>>
>>
>>
>> On Thu, Jan 31, 2013 at 4:56 AM, Dan Kenigsberg wrote:
>>
>>> On Thu, Jan 31, 2013 at 11:08:58AM +0100, Martin Kletzander wrote:
>>> > On 01/31/2013 10:25 AM, Dan Kenigsberg wrote:
>>> > > On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
>>> > >> On 01/30/2013 08:40 PM, Dead Horse wrote:
>>> > >>> The nodes are EL6.3 based.
>>> > >>>
>>> > >>> Currently installed libvirt packages:
>>> > >>>
>>> > >>> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
>>> > >>> libvirt-cim-0.6.1-3.el6.x86_64
>>> > >>> libvirt-0.9.10-21.el6_3.8.x86_64
>>> > >>> libvirt-python-0.9.10-21.el6_3.8.x86_64
>>> > >>> libvirt-client-0.9.10-21.el6_3.8.x86_64
>>> > >>>
>>> > >>> and qemu packages:
>>> > >>> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
>>> > >>> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
>>> > >>> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
>>> > >>>
>>> > >>> Thus my presumption here given the above is that virDomainMigrateToURI2 has
>>> > >>> not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
>>> > >>>
>>> > >>
>>> > >> virDomainMigrateToURI2 is supported since 0.9.2, but is there a
>>> > >> possibility the code is requesting direct migration?  That might
>>> explain
>>> > >> the message, which is then incorrect; this was fixed in [1].
>>> > >>
>>> > >> Martin
>>> > >>
>>> > >> [1]
>>> > >>
>>> http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c
>>> > >
>>> > > What is "direct migration" exactly, in the context of qemu-kvm?
>>> > >
>>> > > We are using p2p migration
>>> > >
>>> http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirtvm.py;h=fe140ecbfac665248e2ad5c4bfaebaf54ab884cc;hb=18c24f7c7c27ac732c4a760caa9524e7319cd47e#l501
>>> > >
>>> >
>>> > OK, so that's not the issue, sorry for the confusion.  I was thinking
>>> it
>>> > would "somehow" get there.  Direct migration doesn't exist in QEMU at
>>> > all, so it seemed weird, but I can't seem to find any other reason for
>>> > this failure; will keep searching, though.
>>>
>>> In this case, Dead Horse, would you try to migrate a VM (that you do not
>>> care much about) using
>>> virsh -c qemu+tls://hostname/system migrate --p2p dsthost?
>>>
>>> I'd like to see that the problem reproduces this way, too. More of
>>> libvirtd.log may help. You may want to disable iptables for a moment,
>>> just to eliminate a common cause of failure.
>>>
>>
>>
>


Re: [Users] VM migrations failing

2013-01-31 Thread Dead Horse
Doh, brain fart: VDSM is not involved here for the purposes of the needed
test.
Here is my initial whack at it:

Source Node:

virsh # list
 Id    Name                 State
----------------------------------
 1     sl63                 running

virsh # migrate --p2p sl63 qemu+ssh://192.168.1.2/system
error: operation failed: Failed to connect to remote libvirt URI qemu+ssh://
3.57.111.32/system

virsh # migrate --live sl63 qemu+ssh://192.168.1.2/system
The authenticity of host '3.57.111.32 (192.168.1.2)' can't be established.
RSA key fingerprint is e5:1d:b3:e5:38:5f:e1:8b:73:26:9e:15:c8:0a:2d:ac.
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.1.2's password:
Please enter your authentication name: vdsm@ovirt
Please enter your password:

virsh #


Dest Node After migrate --live:
virsh # list
 Id    Name                 State
----------------------------------
 2     sl63                 running

virsh #



On Thu, Jan 31, 2013 at 10:38 AM, Dead Horse
wrote:

> Shu,
> I build oVirt Engine and vdsm from source myself. The commits I indicated
> are what I built from. I run the engine under FC17 and my nodes are running
> EL6.x respectively.
>
> Dan,
> I reverted VDSM on my two test nodes to an earlier build of VDSM (commit:
> c343e1833f7b6e5428dd90f14f7807dca1baa0b4)
> VDSM after the above commit is broken due to commit:
> fc3a44f71d2ef202cff18d7203b9e4165b546621 however when I built and tested
> from master yesterday I did apply a patch I tested for
> ybronhei which fixed that issue.
>
> I will build VDSM from master, today w/ the supervdsm patch and try the
> manual migration you indicated.
>
>  - DHC
>
>
>
> On Thu, Jan 31, 2013 at 4:56 AM, Dan Kenigsberg  wrote:
>
>> On Thu, Jan 31, 2013 at 11:08:58AM +0100, Martin Kletzander wrote:
>> > On 01/31/2013 10:25 AM, Dan Kenigsberg wrote:
>> > > On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
>> > >> On 01/30/2013 08:40 PM, Dead Horse wrote:
>> > >>> The nodes are EL6.3 based.
>> > >>>
>> > >>> Currently installed libvirt packages:
>> > >>>
>> > >>> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
>> > >>> libvirt-cim-0.6.1-3.el6.x86_64
>> > >>> libvirt-0.9.10-21.el6_3.8.x86_64
>> > >>> libvirt-python-0.9.10-21.el6_3.8.x86_64
>> > >>> libvirt-client-0.9.10-21.el6_3.8.x86_64
>> > >>>
>> > >>> and qemu packages:
>> > >>> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
>> > >>> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
>> > >>> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
>> > >>>
>> > >>> Thus my presumption here given the above is that virDomainMigrateToURI2 has
>> > >>> not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
>> > >>>
>> > >>
>> > >> virDomainMigrateToURI2 is supported since 0.9.2, but is there a
>> > >> possibility the code is requesting direct migration?  That might
>> explain
>> > >> the message, which is then incorrect; this was fixed in [1].
>> > >>
>> > >> Martin
>> > >>
>> > >> [1]
>> > >>
>> http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c
>> > >
>> > > What is "direct migration" exactly, in the context of qemu-kvm?
>> > >
>> > > We are using p2p migration
>> > >
>> http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirtvm.py;h=fe140ecbfac665248e2ad5c4bfaebaf54ab884cc;hb=18c24f7c7c27ac732c4a760caa9524e7319cd47e#l501
>> > >
>> >
>> > OK, so that's not the issue, sorry for the confusion.  I was thinking it
>> > would "somehow" get there.  Direct migration doesn't exist in QEMU at
>> > all, so it seemed weird, but I can't seem to find any other reason for
>> > this failure; will keep searching, though.
>>
>> In this case, Dead Horse, would you try to migrate a VM (that you do not
>> care much about) using
>> virsh -c qemu+tls://hostname/system migrate --p2p dsthost?
>>
>> I'd like to see that the problem reproduces this way, too. More of
>> libvirtd.log may help. You may want to disable iptables for a moment,
>> just to eliminate a common cause of failure.
>>
>
>


Re: [Users] VM migrations failing

2013-01-31 Thread Dead Horse
Shu,
I build oVirt Engine and vdsm from source myself. The commits I indicated
are what I built from. I run the engine under FC17 and my nodes are running
EL6.x respectively.

Dan,
I reverted VDSM on my two test nodes to an earlier build of VDSM (commit:
c343e1833f7b6e5428dd90f14f7807dca1baa0b4)
VDSM after the above commit is broken due to commit:
fc3a44f71d2ef202cff18d7203b9e4165b546621 however when I built and tested
from master yesterday I did apply a patch I tested for
ybronhei which fixed that issue.

I will build VDSM from master, today w/ the supervdsm patch and try the
manual migration you indicated.

 - DHC



On Thu, Jan 31, 2013 at 4:56 AM, Dan Kenigsberg  wrote:

> On Thu, Jan 31, 2013 at 11:08:58AM +0100, Martin Kletzander wrote:
> > On 01/31/2013 10:25 AM, Dan Kenigsberg wrote:
> > > On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
> > >> On 01/30/2013 08:40 PM, Dead Horse wrote:
> > >>> The nodes are EL6.3 based.
> > >>>
> > >>> Currently installed libvirt packages:
> > >>>
> > >>> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
> > >>> libvirt-cim-0.6.1-3.el6.x86_64
> > >>> libvirt-0.9.10-21.el6_3.8.x86_64
> > >>> libvirt-python-0.9.10-21.el6_3.8.x86_64
> > >>> libvirt-client-0.9.10-21.el6_3.8.x86_64
> > >>>
> > >>> and qemu packages:
> > >>> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
> > >>> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
> > >>> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
> > >>>
> > >>> Thus my presumption here given the above is that virDomainMigrateToURI2 has
> > >>> not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
> > >>>
> > >>
> > >> virDomainMigrateToURI2 is supported since 0.9.2, but is there a
> > >> possibility the code is requesting direct migration?  That might
> explain
> > >> the message, which is then incorrect; this was fixed in [1].
> > >>
> > >> Martin
> > >>
> > >> [1]
> > >>
> http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c
> > >
> > > What is "direct migration" exactly, in the context of qemu-kvm?
> > >
> > > We are using p2p migration
> > >
> http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirtvm.py;h=fe140ecbfac665248e2ad5c4bfaebaf54ab884cc;hb=18c24f7c7c27ac732c4a760caa9524e7319cd47e#l501
> > >
> >
> > OK, so that's not the issue, sorry for the confusion.  I was thinking it
> > would "somehow" get there.  Direct migration doesn't exist in QEMU at
> > all, so it seemed weird, but I can't seem to find any other reason for
> > this failure; will keep searching, though.
>
> In this case, Dead Horse, would you try to migrate a VM (that you do not
> care much about) using
> virsh -c qemu+tls://hostname/system migrate --p2p dsthost?
>
> I'd like to see that the problem reproduces this way, too. More of
> libvirtd.log may help. You may want to disable iptables for a moment,
> just to eliminate a common cause of failure.
>


Re: [Users] VM migrations failing

2013-01-31 Thread Martin Kletzander
On 01/31/2013 11:56 AM, Dan Kenigsberg wrote:
> On Thu, Jan 31, 2013 at 11:08:58AM +0100, Martin Kletzander wrote:
>> On 01/31/2013 10:25 AM, Dan Kenigsberg wrote:
>>> On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
 On 01/30/2013 08:40 PM, Dead Horse wrote:
> The nodes are EL6.3 based.
>
> Currently installed libvirt packages:
>
> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
> libvirt-cim-0.6.1-3.el6.x86_64
> libvirt-0.9.10-21.el6_3.8.x86_64
> libvirt-python-0.9.10-21.el6_3.8.x86_64
> libvirt-client-0.9.10-21.el6_3.8.x86_64
>
> and qemu packages:
> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
>
> Thus my presumption here given the above is that virDomainMigrateToURI2 has
> not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
>

 virDomainMigrateToURI2 is supported since 0.9.2, but is there a
 possibility the code is requesting direct migration?  That might explain
 the message, which is then incorrect; this was fixed in [1].

 Martin

 [1]
 http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c
>>>
>>> What is "direct migration" exactly, in the context of qemu-kvm?
>>>
>>> We are using p2p migration
>>> http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirtvm.py;h=fe140ecbfac665248e2ad5c4bfaebaf54ab884cc;hb=18c24f7c7c27ac732c4a760caa9524e7319cd47e#l501
>>>
>>
>> OK, so that's not the issue, sorry for the confusion.  I was thinking it
>> would "somehow" get there.  Direct migration doesn't exist in QEMU at
>> all, so it seemed weird, but I can't seem to find any other reason for
>> this failure; will keep searching, though.
> 
> In this case, Dead Horse, would you try to migrate a VM (that you do not
> care much about) using
> virsh -c qemu+tls://hostname/system migrate --p2p dsthost?
> 
> I'd like to see that the problem reproduces this way, too. More of
> libvirtd.log may help. You may want to disable iptables for a moment,
> just to eliminate a common cause of failure.
> 

The error message in this version of libvirt is emitted only in two
cases.  Either the QEMU driver doesn't support peer2peer migration
(which it does) or direct migration was requested (which wasn't) :)

So I agree with Dan, please try to reproduce this without ovirt/vdsm and
let us know (logs [1] will help a lot), because in case nobody fiddled
with anything, this sounds like a bug to me.

Martin

[1] http://libvirt.org/logging.html
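
For reference, a minimal libvirtd.conf logging setup along the lines of [1]
(illustrative; the same keys appear in the vdsm-written config quoted
earlier in this thread):

    log_filters="1:libvirt 1:qemu 1:rpc"
    log_outputs="1:file:/var/log/libvirtd.log"
    # restart libvirtd afterwards, e.g.: service libvirtd restart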


Re: [Users] VM migrations failing

2013-01-31 Thread Dan Kenigsberg
On Thu, Jan 31, 2013 at 11:08:58AM +0100, Martin Kletzander wrote:
> On 01/31/2013 10:25 AM, Dan Kenigsberg wrote:
> > On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
> >> On 01/30/2013 08:40 PM, Dead Horse wrote:
> >>> The nodes are EL6.3 based.
> >>>
> >>> Currently installed libvirt packages:
> >>>
> >>> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
> >>> libvirt-cim-0.6.1-3.el6.x86_64
> >>> libvirt-0.9.10-21.el6_3.8.x86_64
> >>> libvirt-python-0.9.10-21.el6_3.8.x86_64
> >>> libvirt-client-0.9.10-21.el6_3.8.x86_64
> >>>
> >>> and qemu packages:
> >>> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
> >>> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
> >>> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
> >>>
> >>> Thus my presumption here given the above is that virDomainMigrateToURI2 has
> >>> not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
> >>>
> >>
> >> virDomainMigrateToURI2 is supported since 0.9.2, but is there a
> >> possibility the code is requesting direct migration?  That might explain
> >> the message, which is then incorrect; this was fixed in [1].
> >>
> >> Martin
> >>
> >> [1]
> >> http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c
> > 
> > What is "direct migration" exactly, in the context of qemu-kvm?
> > 
> > We are using p2p migration
> > http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirtvm.py;h=fe140ecbfac665248e2ad5c4bfaebaf54ab884cc;hb=18c24f7c7c27ac732c4a760caa9524e7319cd47e#l501
> > 
> 
> OK, so that's not the issue, sorry for the confusion.  I was thinking it
> would "somehow" get there.  Direct migration doesn't exist in QEMU at
> all, so it seemed weird, but I can't seem to find any other reason for
> this failure; will keep searching, though.

In this case, Dead Horse, would you try to migrate a VM (that you do not
care much about) using
virsh -c qemu+tls://hostname/system migrate --p2p dsthost?

I'd like to see that the problem reproduces this way, too. More of
libvirtd.log may help. You may want to disable iptables for a moment,
just to eliminate a common cause of failure.
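
Spelled out with an explicit destination URI, that test would look something
like this (illustrative; sl63 stands in for a disposable VM, and
srchost/dsthost are placeholders):

    virsh -c qemu+tls://srchost/system migrate --p2p sl63 qemu+tls://dsthost/system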


Re: [Users] VM migrations failing

2013-01-31 Thread Dan Kenigsberg
On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
> On 01/30/2013 08:40 PM, Dead Horse wrote:
> > The nodes are EL6.3 based.
> > 
> > Currently installed libvirt packages:
> > 
> > libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
> > libvirt-cim-0.6.1-3.el6.x86_64
> > libvirt-0.9.10-21.el6_3.8.x86_64
> > libvirt-python-0.9.10-21.el6_3.8.x86_64
> > libvirt-client-0.9.10-21.el6_3.8.x86_64
> > 
> > and qemu packages:
> > qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
> > qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
> > qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
> > 
> > Thus my presumption here given the above is that virDomainMigrateToURI2 has
> > not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
> > 
> 
> virDomainMigrateToURI2 is supported since 0.9.2, but is there a
> possibility the code is requesting direct migration?  That might explain
> the message, which is then incorrect; this was fixed in [1].
> 
> Martin
> 
> [1]
> http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c

What is "direct migration" exactly, in the context of qemu-kvm?

We are using p2p migration
http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirtvm.py;h=fe140ecbfac665248e2ad5c4bfaebaf54ab884cc;hb=18c24f7c7c27ac732c4a760caa9524e7319cd47e#l501
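
For readers without the link handy, that code path boils down to roughly
the following libvirt-python usage (a simplified sketch, not the literal
vdsm code; the VM name and URIs are placeholders):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')

# VIR_MIGRATE_PEER2PEER asks the source libvirtd to drive the whole
# migration itself; this is what vdsm means by p2p migration.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER

# Signature: migrateToURI2(dconnuri, miguri, dxml, flags, dname, bandwidth)
dom.migrateToURI2('qemu+tls://dsthost/system', None, None, flags, None, 0)
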
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM migrations failing

2013-01-31 Thread Martin Kletzander
On 01/31/2013 10:25 AM, Dan Kenigsberg wrote:
> On Thu, Jan 31, 2013 at 09:43:44AM +0100, Martin Kletzander wrote:
>> On 01/30/2013 08:40 PM, Dead Horse wrote:
>>> The nodes are EL6.3 based.
>>>
>>> Currently installed libvirt packages:
>>>
>>> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
>>> libvirt-cim-0.6.1-3.el6.x86_64
>>> libvirt-0.9.10-21.el6_3.8.x86_64
>>> libvirt-python-0.9.10-21.el6_3.8.x86_64
>>> libvirt-client-0.9.10-21.el6_3.8.x86_64
>>>
>>> and qemu packages:
>>> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
>>> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
>>> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
>>>
>>> Thus my presumption here given the above is that virDomainMigrateToURI2 has
>>> not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
>>>
>>
>> virDomainMigrateToURI2 is supported since 0.9.2, but is there a
>> possibility the code is requesting direct migration?  That might explain
>> the message, which is then incorrect; this was fixed in [1].
>>
>> Martin
>>
>> [1]
>> http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c
> 
> What is "direct migration" exactly, in the context of qemu-kvm?
> 
> We are using p2p migration
> http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirtvm.py;h=fe140ecbfac665248e2ad5c4bfaebaf54ab884cc;hb=18c24f7c7c27ac732c4a760caa9524e7319cd47e#l501
> 

OK, so that's not the issue, sorry for the confusion.  I was thinking it
would "somehow" get there.  Direct migration doesn't exist in QEMU at
all, so it seemed weird, but I can't seem to find any other reason for
this failure; will keep searching, though.

Martin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM migrations failing

2013-01-31 Thread Martin Kletzander
On 01/30/2013 08:40 PM, Dead Horse wrote:
> The nodes are EL6.3 based.
> 
> Currently installed libvirt packages:
> 
> libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
> libvirt-cim-0.6.1-3.el6.x86_64
> libvirt-0.9.10-21.el6_3.8.x86_64
> libvirt-python-0.9.10-21.el6_3.8.x86_64
> libvirt-client-0.9.10-21.el6_3.8.x86_64
> 
> and qemu packages:
> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
> qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
> qemu-img-0.12.1.2-2.295.el6_3.10.x86_64
> 
> Thus my presumption here given the above is that virDomainMigrateToURI2 has
> not yet been patched and/or back-ported into the EL6.x libvirt/qemu?
> 

virDomainMigrateToURI2 is supported since 0.9.2, but is there a
possibility the code is requesting direct migration?  That might explain
the message, which is then incorrect; this was fixed in [1].

Martin

[1]
http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=3189dfb1636da22d426d2fc07cc9f60304b16c5c
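
To make the distinction concrete: a call like the one below (an
illustrative sketch, not taken from vdsm) omits VIR_MIGRATE_PEER2PEER and
therefore requests direct, hypervisor-native migration. The QEMU driver
does not implement that, and in this libvirt version the failure surfaces
as the same "not supported by the connection driver" message:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')  # placeholder VM name

# No VIR_MIGRATE_PEER2PEER flag, so this is a direct-migration request.
dom.migrateToURI2('tcp://dsthost', None, None,
                  libvirt.VIR_MIGRATE_LIVE, None, 0)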

> - DHC
> 
> 
> On Wed, Jan 30, 2013 at 1:28 PM, Dan Kenigsberg  wrote:
> 
>> On Wed, Jan 30, 2013 at 11:04:00AM -0600, Dead Horse wrote:
>>> Engine Build --> Commit: 82bdc46dfdb46b000f67f0cd4e51fc39665bf13b
>>> VDSM Build: --> Commit: da89a27492cc7d5a84e4bb87652569ca8e0fb20e + patch
>>> --> http://gerrit.ovirt.org/#/c/11492/
>>>
>>> Engine Side:
>>> 2013-01-30 10:56:38,439 ERROR
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-70) Rerun vm
>> 887d764a-f835-4112-9eda-836a772ea5eb.
>>> Called from vds lostisles
>>> 2013-01-30 10:56:38,506 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-49) START, MigrateStatusVDSCommand(HostName = lostisles,
>>> HostId = e042b03b-dd4e-414c-be1a-b2c65ac000f5,
>>> vmId=887d764a-f835-4112-9eda-836a772ea5eb), log id: 6556e75b
>>> 2013-01-30 10:56:38,510 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-49) Failed in MigrateStatusVDS method
>>> 2013-01-30 10:56:38,510 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-49) Error code migrateErr and error message
>>> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
>> error =
>>> Fatal error during migration
>>> 2013-01-30 10:56:38,511 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-49) Command
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
>>> value
>>>  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12,
>>> mMessage=Fatal error during migration]]
>>>
>>>
>>> VDSM Side:
>>> Thread-43670::ERROR::2013-01-30 10:56:37,052::vm::200::vm.Vm::(_recover)
>>> vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::this function is not
>> supported
>>> by the connection driver: virDomainMigrateToURI2
>>> Thread-43670::ERROR::2013-01-30 10:56:37,513::vm::288::vm.Vm::(run)
>>> vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Failed to migrate
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/vm.py", line 273, in run
>>> self._startUnderlyingMigration()
>>>   File "/usr/share/vdsm/libvirtvm.py", line 504, in
>>> _startUnderlyingMigration
>>> None, maxBandwidth)
>>>   File "/usr/share/vdsm/libvirtvm.py", line 540, in f
>>> ret = attr(*args, **kwargs)
>>>   File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py",
>> line
>>> 111, in wrapper
>>> ret = f(*args, **kwargs)
>>>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in
>>> migrateToURI2
>>> if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
>>> dom=self)
>>> libvirtError: this function is not supported by the connection driver:
>>> virDomainMigrateToURI2
>>
>> Could it be that you are using an ancient libvirt with no
>> virDomainMigrateToURI2? What are your libvirt and qemu-kvm versions (on
>> both machines)?
>>
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM migrations failing

2013-01-30 Thread Shu Ming

Hi,

Just out of curiosity, did you install the oVirt packages from 
http://resources.ovirt.org/releases/nightly/rpm/EL/6/noarch/ ?

Also, there is no stable oVirt 3.1/3.2 release for RHEL.

Dead Horse:

The nodes are EL6.3 based.

Currently installed libvirt packages:

libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
libvirt-cim-0.6.1-3.el6.x86_64
libvirt-0.9.10-21.el6_3.8.x86_64
libvirt-python-0.9.10-21.el6_3.8.x86_64
libvirt-client-0.9.10-21.el6_3.8.x86_64

and qemu packages:
qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
qemu-img-0.12.1.2-2.295.el6_3.10.x86_64

Thus my presumption here given the above is that 
virDomainMigrateToURI2 has not yet been patched and/or back-ported 
into the EL6.x libvirt/qemu?


virDomainMigrateToURI2() is supported in libvirt-0.9.10; I am not sure 
whether qemu-kvm-0.12.1.2 supports it.

/var/log/libvirtd.log may help to verify this assumption.
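
One way to test that assumption directly is to attempt the call and check
for libvirt's "unsupported" error code (a sketch; the VM name and
destination URI are placeholders):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')
try:
    dom.migrateToURI2('qemu+tls://dsthost/system', None, None,
                      libvirt.VIR_MIGRATE_LIVE |
                      libvirt.VIR_MIGRATE_PEER2PEER,
                      None, 0)
except libvirt.libvirtError as e:
    # VIR_ERR_NO_SUPPORT is what produces "this function is not
    # supported by the connection driver".
    if e.get_error_code() == libvirt.VIR_ERR_NO_SUPPORT:
        print('migration API not supported by this driver/stack')
    else:
        raise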



- DHC


On Wed, Jan 30, 2013 at 1:28 PM, Dan Kenigsberg wrote:


On Wed, Jan 30, 2013 at 11:04:00AM -0600, Dead Horse wrote:
> Engine Build --> Commit: 82bdc46dfdb46b000f67f0cd4e51fc39665bf13b
> VDSM Build: --> Commit: da89a27492cc7d5a84e4bb87652569ca8e0fb20e + patch
> --> http://gerrit.ovirt.org/#/c/11492/
>
> Engine Side:
> 2013-01-30 10:56:38,439 ERROR
> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-70) Rerun vm 887d764a-f835-4112-9eda-836a772ea5eb.
> Called from vds lostisles
> 2013-01-30 10:56:38,506 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) START, MigrateStatusVDSCommand(HostName = lostisles,
> HostId = e042b03b-dd4e-414c-be1a-b2c65ac000f5,
> vmId=887d764a-f835-4112-9eda-836a772ea5eb), log id: 6556e75b
> 2013-01-30 10:56:38,510 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) Failed in MigrateStatusVDS method
> 2013-01-30 10:56:38,510 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) Error code migrateErr and error message
> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
> Fatal error during migration
> 2013-01-30 10:56:38,511 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) Command
> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
> value
>  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12,
> mMessage=Fatal error during migration]]
>
>
> VDSM Side:
> Thread-43670::ERROR::2013-01-30 10:56:37,052::vm::200::vm.Vm::(_recover)
> vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::this function is not supported
> by the connection driver: virDomainMigrateToURI2
> Thread-43670::ERROR::2013-01-30 10:56:37,513::vm::288::vm.Vm::(run)
> vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Failed to migrate
> Traceback (most recent call last):
>   File "/usr/share/vdsm/vm.py", line 273, in run
> self._startUnderlyingMigration()
>   File "/usr/share/vdsm/libvirtvm.py", line 504, in
> _startUnderlyingMigration
> None, maxBandwidth)
>   File "/usr/share/vdsm/libvirtvm.py", line 540, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
> 111, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in
> migrateToURI2
> if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
> dom=self)
> libvirtError: this function is not supported by the connection driver:
> virDomainMigrateToURI2

Could it be that you are using an ancient libvirt with no
virDomainMigrateToURI2? What are your libvirt and qemu-kvm versions (on
both machines)?




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
---
Shu Ming
Open Virtualization Engineerning; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626 E-mail: shum...@cn.ibm.com or 
shum...@linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, 
Beijing 100193, PRC

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM migrations failing

2013-01-30 Thread Dead Horse
The nodes are EL6.3 based.

Currently installed libvirt packages:

libvirt-lock-sanlock-0.9.10-21.el6_3.8.x86_64
libvirt-cim-0.6.1-3.el6.x86_64
libvirt-0.9.10-21.el6_3.8.x86_64
libvirt-python-0.9.10-21.el6_3.8.x86_64
libvirt-client-0.9.10-21.el6_3.8.x86_64

and qemu packages:
qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
qemu-kvm-tools-0.12.1.2-2.295.el6_3.10.x86_64
qemu-img-0.12.1.2-2.295.el6_3.10.x86_64

Thus my presumption, given the above, is that virDomainMigrateToURI2 has
not yet been patched and/or back-ported into the EL6.x libvirt/qemu?

- DHC


On Wed, Jan 30, 2013 at 1:28 PM, Dan Kenigsberg  wrote:

> On Wed, Jan 30, 2013 at 11:04:00AM -0600, Dead Horse wrote:
> > Engine Build --> Commit: 82bdc46dfdb46b000f67f0cd4e51fc39665bf13b
> > VDSM Build: --> Commit: da89a27492cc7d5a84e4bb87652569ca8e0fb20e + patch
> > --> http://gerrit.ovirt.org/#/c/11492/
> >
> > Engine Side:
> > 2013-01-30 10:56:38,439 ERROR
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (QuartzScheduler_Worker-70) Rerun vm
> 887d764a-f835-4112-9eda-836a772ea5eb.
> > Called from vds lostisles
> > 2013-01-30 10:56:38,506 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (pool-3-thread-49) START, MigrateStatusVDSCommand(HostName = lostisles,
> > HostId = e042b03b-dd4e-414c-be1a-b2c65ac000f5,
> > vmId=887d764a-f835-4112-9eda-836a772ea5eb), log id: 6556e75b
> > 2013-01-30 10:56:38,510 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (pool-3-thread-49) Failed in MigrateStatusVDS method
> > 2013-01-30 10:56:38,510 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (pool-3-thread-49) Error code migrateErr and error message
> > VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
> error =
> > Fatal error during migration
> > 2013-01-30 10:56:38,511 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> > (pool-3-thread-49) Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
> > value
> >  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12,
> > mMessage=Fatal error during migration]]
> >
> >
> > VDSM Side:
> > Thread-43670::ERROR::2013-01-30 10:56:37,052::vm::200::vm.Vm::(_recover)
> > vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::this function is not
> supported
> > by the connection driver: virDomainMigrateToURI2
> > Thread-43670::ERROR::2013-01-30 10:56:37,513::vm::288::vm.Vm::(run)
> > vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Failed to migrate
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/vm.py", line 273, in run
> > self._startUnderlyingMigration()
> >   File "/usr/share/vdsm/libvirtvm.py", line 504, in
> > _startUnderlyingMigration
> > None, maxBandwidth)
> >   File "/usr/share/vdsm/libvirtvm.py", line 540, in f
> > ret = attr(*args, **kwargs)
> >   File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py",
> line
> > 111, in wrapper
> > ret = f(*args, **kwargs)
> >   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in
> > migrateToURI2
> > if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
> > dom=self)
> > libvirtError: this function is not supported by the connection driver:
> > virDomainMigrateToURI2
>
> Could it be that you are using an ancient libvirt with no
> virDomainMigrateToURI2? What are your libvirt and qemu-kvm versions (on
> both machines)?
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM migrations failing

2013-01-30 Thread Dan Kenigsberg
On Wed, Jan 30, 2013 at 11:04:00AM -0600, Dead Horse wrote:
> Engine Build --> Commit: 82bdc46dfdb46b000f67f0cd4e51fc39665bf13b
> VDSM Build: --> Commit: da89a27492cc7d5a84e4bb87652569ca8e0fb20e + patch
> --> http://gerrit.ovirt.org/#/c/11492/
> 
> Engine Side:
> 2013-01-30 10:56:38,439 ERROR
> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-70) Rerun vm 887d764a-f835-4112-9eda-836a772ea5eb.
> Called from vds lostisles
> 2013-01-30 10:56:38,506 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) START, MigrateStatusVDSCommand(HostName = lostisles,
> HostId = e042b03b-dd4e-414c-be1a-b2c65ac000f5,
> vmId=887d764a-f835-4112-9eda-836a772ea5eb), log id: 6556e75b
> 2013-01-30 10:56:38,510 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) Failed in MigrateStatusVDS method
> 2013-01-30 10:56:38,510 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) Error code migrateErr and error message
> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
> Fatal error during migration
> 2013-01-30 10:56:38,511 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-49) Command
> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
> value
>  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12,
> mMessage=Fatal error during migration]]
> 
> 
> VDSM Side:
> Thread-43670::ERROR::2013-01-30 10:56:37,052::vm::200::vm.Vm::(_recover)
> vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::this function is not supported
> by the connection driver: virDomainMigrateToURI2
> Thread-43670::ERROR::2013-01-30 10:56:37,513::vm::288::vm.Vm::(run)
> vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Failed to migrate
> Traceback (most recent call last):
>   File "/usr/share/vdsm/vm.py", line 273, in run
> self._startUnderlyingMigration()
>   File "/usr/share/vdsm/libvirtvm.py", line 504, in
> _startUnderlyingMigration
> None, maxBandwidth)
>   File "/usr/share/vdsm/libvirtvm.py", line 540, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
> 111, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in
> migrateToURI2
> if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
> dom=self)
> libvirtError: this function is not supported by the connection driver:
> virDomainMigrateToURI2

Could it be that you are using an ancient libvirt with no
virDomainMigrateToURI2? What are your libvirt and qemu-kvm versions (on
both machines)?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] VM migrations failing

2013-01-30 Thread Dead Horse
Engine Build --> Commit: 82bdc46dfdb46b000f67f0cd4e51fc39665bf13b
VDSM Build: --> Commit: da89a27492cc7d5a84e4bb87652569ca8e0fb20e + patch
--> http://gerrit.ovirt.org/#/c/11492/

Engine Side:
2013-01-30 10:56:38,439 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-70) Rerun vm 887d764a-f835-4112-9eda-836a772ea5eb.
Called from vds lostisles
2013-01-30 10:56:38,506 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) START, MigrateStatusVDSCommand(HostName = lostisles,
HostId = e042b03b-dd4e-414c-be1a-b2c65ac000f5,
vmId=887d764a-f835-4112-9eda-836a772ea5eb), log id: 6556e75b
2013-01-30 10:56:38,510 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) Failed in MigrateStatusVDS method
2013-01-30 10:56:38,510 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) Error code migrateErr and error message
VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
Fatal error during migration
2013-01-30 10:56:38,511 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12,
mMessage=Fatal error during migration]]


VDSM Side:
Thread-43670::ERROR::2013-01-30 10:56:37,052::vm::200::vm.Vm::(_recover)
vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::this function is not supported
by the connection driver: virDomainMigrateToURI2
Thread-43670::ERROR::2013-01-30 10:56:37,513::vm::288::vm.Vm::(run)
vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 273, in run
self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 504, in
_startUnderlyingMigration
None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 540, in f
ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
111, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
dom=self)
libvirtError: this function is not supported by the connection driver:
virDomainMigrateToURI2
GuestMonitor-sl63::DEBUG::2013-01-30
10:56:38,235::libvirtvm::307::vm.Vm::(_getDiskLatency)
vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Disk vda latency not available

- DHC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users