Re: [Users] oVirt 3.3.3 - Migration failures

2014-03-13 Thread Roy Golan

On 03/12/2014 03:52 PM, LANGE Thomas wrote:


Hi !

I am testing all the functionalities of oVirt for work and I have some 
issues regarding live migration.


Here's my configuration:

- pc001: CentOS 6.5 minimal with VDSM (4.13.3-3.el6)

- pc002: CentOS 6.5 minimal with VDSM (4.13.3-3.el6) and ovirt-engine (3.3.3-2.el6)

- pc003: CentOS 6.5 with VDSM (4.13.3-3.el6)

All these servers are in the same cluster, and the datacenter uses NFS 
for storage.


Here's my problem: I have one VM (called Debian) running on a 
specific host (pc003.lan). I try to migrate it to another host 
(pc001.lan), but the migration fails. As a result, the VM is still 
running on pc003 and the number of running VMs on this host is still 
the same; but I can see one more VM running on the host pc001. It does 
the same thing with other VMs (mavm1, mavm2, mavm3, Debian2...). In 
the end, the administration portal tells me I have 7 VMs running on 
host pc001, 4 on pc002 and 3 on pc003; I have only 7 VMs, not 14.


Since then, some errors keep showing up in the log (see attachment). It 
says that migrations are still running, but I cannot cancel them in the 
administration portal and they never stop, succeed, or fail. This 
problem appeared some time after I enabled quotas in the Datacenter, 
but I don't know if it's related in any way. If I reboot pc001, the 
number of running VMs is reset to 0, but any attempt to migrate a VM to 
this host does the same thing again. Also, restarting the engine did 
not fix the issue.


Does anyone have an idea about what's happening? I'm new to oVirt 
and I don't know how to solve this.


Thanks for your help !

Regards,

Thomas LANGÉ



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Thomas, please provide vdsm.log from both pc001 and pc003,
and also the whole engine.log.




[Users] oVirt 3.3.3 - Migration failures

2014-03-12 Thread LANGE Thomas
Hi !

I am testing all the functionalities of oVirt for work and I have some issues 
regarding live migration.
Here's my configuration:

- pc001: CentOS 6.5 minimal with VDSM (4.13.3-3.el6)

- pc002: CentOS 6.5 minimal with VDSM (4.13.3-3.el6) and ovirt-engine (3.3.3-2.el6)

- pc003: CentOS 6.5 with VDSM (4.13.3-3.el6)
All these servers are in the same cluster, and the datacenter uses NFS for 
storage.

Here's my problem: I have one VM (called Debian) running on a specific host 
(pc003.lan). I try to migrate it to another host (pc001.lan), but the migration 
fails. As a result, the VM is still running on pc003 and the number of running 
VMs on this host is still the same; but I can see one more VM running on the 
host pc001. It does the same thing with other VMs (mavm1, mavm2, mavm3, 
Debian2...). In the end, the administration portal tells me I have 7 VMs 
running on host pc001, 4 on pc002 and 3 on pc003; I have only 7 VMs, not 14.

Since then, some errors keep showing up in the log (see attachment). It says 
that migrations are still running, but I cannot cancel them in the 
administration portal and they never stop, succeed, or fail. This problem 
appeared some time after I enabled quotas in the Datacenter, but I don't know 
if it's related in any way. If I reboot pc001, the number of running VMs is 
reset to 0, but any attempt to migrate a VM to this host does the same thing 
again. Also, restarting the engine did not fix the issue.

Does anyone have an idea about what's happening? I'm new to oVirt and I 
don't know how to solve this.

Thanks for your help !

Regards,

Thomas LANGÉ

2014-03-12 11:34:34,430 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-97) Correlation ID: null, Call Stack: null, 
Custom Event ID: -1, Message: VM CentOS is down. Exit message: Domain not 
found: no domain with matching uuid 'f31e9b47-a676-4151-bcbd-c39cab759807'.
2014-03-12 11:34:34,432 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) START, DestroyVDSCommand(HostName = 
pc001.lan, HostId = 54a05734-2b23-43a8-a24f-f62f297fa884, 
vmId=4ad4d268-8487-4ad3-a21d-ec9ea284d5c6, force=false, secondsToWait=0, 
gracefully=false), log id: f2bee0a
2014-03-12 11:34:34,441 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) Failed in DestroyVDS method
2014-03-12 11:34:34,442 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) Error code unexpected and error message 
VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = 
Unexpected exception
2014-03-12 11:34:34,443 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand return value 
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=16, 
mMessage=Unexpected exception]]
2014-03-12 11:34:34,443 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) HostName = pc001.lan
2014-03-12 11:34:34,444 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) Command DestroyVDS execution failed. 
Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
DestroyVDS, error = Unexpected exception
2014-03-12 11:34:34,445 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) FINISH, DestroyVDSCommand, log id: f2bee0a
2014-03-12 11:34:34,455 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-97) Correlation ID: null, Call Stack: null, 
Custom Event ID: -1, Message: VM TestImportKVM is down. Exit message: Domain 
not found: no domain with matching uuid '4ad4d268-8487-4ad3-a21d-ec9ea284d5c6'.
2014-03-12 11:34:34,457 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) START, DestroyVDSCommand(HostName = 
pc001.lan, HostId = 54a05734-2b23-43a8-a24f-f62f297fa884, 
vmId=637d6a3a-c10d-4255-ad2d-ccf2970ff201, force=false, secondsToWait=0, 
gracefully=false), log id: 2b28d436
2014-03-12 11:34:34,466 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) Failed in DestroyVDS method
2014-03-12 11:34:34,467 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) Error code unexpected and error message 
VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = 
Unexpected exception
2014-03-12 11:34:34,467 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-97) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand return value 
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc 
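For what it's worth, the engine.log entries above all share one shape (timestamp, level, logger, worker thread, message), so the interesting failures can be pulled out mechanically. Below is a minimal Python sketch run against a few entries re-joined from the wrapped excerpt above; the regexes are my own guess at the layout, not an official parser.

```python
import re

# Three entries re-joined from the wrapped engine.log excerpt above.
LOG = """\
2014-03-12 11:34:34,430 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-97) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM CentOS is down. Exit message: Domain not found: no domain with matching uuid 'f31e9b47-a676-4151-bcbd-c39cab759807'.
2014-03-12 11:34:34,441 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-97) Failed in DestroyVDS method
2014-03-12 11:34:34,455 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-97) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM TestImportKVM is down. Exit message: Domain not found: no domain with matching uuid '4ad4d268-8487-4ad3-a21d-ec9ea284d5c6'.
"""

# Guessed layout: "timestamp LEVEL [logger] (thread) message".
ENTRY = re.compile(r"^(\S+ \S+) (INFO|WARN|ERROR)\s+\[([^\]]+)\] \((\S+)\) (.*)$")
UUID = re.compile(r"no domain with matching uuid '([0-9a-f-]{36})'")

def scan(text):
    """Return (ERROR entries, domain UUIDs the engine could not find)."""
    errors, missing = [], []
    for line in text.splitlines():
        m = ENTRY.match(line)
        if not m:
            continue  # continuation lines of a wrapped entry are skipped
        ts, level, logger, thread, msg = m.groups()
        if level == "ERROR":
            errors.append((ts, logger.rsplit(".", 1)[-1], msg))
        u = UUID.search(msg)
        if u:
            missing.append(u.group(1))
    return errors, missing

errors, missing = scan(LOG)
print(errors)   # the DestroyVDS failure on pc001
print(missing)  # the two domain UUIDs libvirt no longer knows about
```

The "no domain with matching uuid" messages line up with the symptom in the thread: the engine believes VMs exist on pc001 that libvirt there has no record of.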

Re: [Users] oVirt 3.3.3 - Migration failures

2014-03-12 Thread Michal Skrivanek


> On 12 Mar 2014, at 15:52, LANGE Thomas thomas.la...@thalesgroup.com wrote:
>
> Hi !
>
> I am testing all the functionalities of oVirt for work and I have some issues
> regarding live migration.
> Here’s my configuration:
> - pc001: CentOS 6.5 minimal with VDSM (4.13.3-3.el6)
> - pc002: CentOS 6.5 minimal with VDSM (4.13.3-3.el6) and ovirt-engine (3.3.3-2.el6)
> - pc003: CentOS 6.5 with VDSM (4.13.3-3.el6)
> All these servers are in the same cluster, and the datacenter uses NFS for
> storage.
>
> Here’s my problem: I have one VM (called Debian) running on a specific host
> (pc003.lan). I try to migrate it to another host (pc001.lan), but the
> migration fails. As a result, the VM is still running on pc003 and the number
> of running VMs on this host is still the same; but I can see one more VM
> running on the host pc001. It does the same thing with other VMs (mavm1,
> mavm2, mavm3, Debian2…). In the end, the administration portal tells me I
> have 7 VMs running on host pc001, 4 on pc002 and 3 on pc003; I have only
> 7 VMs, not 14.

Hi,
...running, i.e. all 14 of them are shown with the green triangle as if they
were running?

  
> Since then, some errors keep showing up in the log (see attachment). It says
> that migrations are still running, but I cannot cancel them in the
> administration portal and they never stop, succeed, or fail. This problem
> appeared some time after I enabled quotas in the Datacenter, but I don’t know
> if it’s related in any way.

Might be. Does the problem disappear when you disable quota? :)


> If I reboot pc001, the number of running VMs is reset to 0, but any attempt to
> migrate a VM to this host does the same thing again. Also, restarting the
> engine did not fix the issue.
>
> Does anyone have an idea about what’s happening? I’m new to oVirt and I
> don’t know how to solve this.

This looks like a problem creating the VM on the destination. Please reproduce
with one VM and get the logs from both the source and the destination host from
that time.
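One way to grab "the logs from that time" on both hosts is to trim each log to a window around the failed migration's timestamp. A rough Python sketch follows; the timestamp pattern matches the engine.log excerpt in this thread, and you should verify it against vdsm.log's own format before relying on it.

```python
import re
from datetime import datetime, timedelta

# Leading "YYYY-MM-DD HH:MM:SS,mmm" stamp as seen in the engine.log
# excerpt; check this against the vdsm.log format before relying on it.
STAMP = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d{3}")

def window(lines, center, margin=timedelta(minutes=2)):
    """Keep lines stamped within +/- margin of the failure time."""
    kept = []
    for line in lines:
        m = STAMP.search(line)
        if not m:
            continue  # unstamped continuation lines are dropped in this sketch
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        if abs(ts - center) <= margin:
            kept.append(line)
    return kept

# Failure time taken from the engine.log excerpt in this thread.
center = datetime(2014, 3, 12, 11, 34, 34)
sample = [
    "2014-03-12 11:30:00,000 INFO  too early, outside the window",
    "2014-03-12 11:34:34,441 ERROR Failed in DestroyVDS method",
    "2014-03-12 11:36:00,123 INFO  still inside the window",
]
print(window(sample, center))  # keeps only the last two lines
```

Running this over the source and destination logs with the same `center` gives two short, directly comparable slices to attach.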

Thanks,
michal
  
> Thanks for your help !
>
> Regards,
>
> Thomas LANGÉ
>
> log.txt