Odd.

The migration seems to have succeeded on the destination server, but the source reports a problem:

Thread-1484336::DEBUG::2013-01-08 10:41:07,659::BindingXMLRPC::883::vds::(wrapper) client [10.192.42.207]::call vmMigrate with ({'src': '10.192.42.196', 'dst': '10.192.42.165:54321', 'vmId': 'cfb17b98-1476-4fbf-9f
Thread-1484336::DEBUG::2013-01-08 10:41:07,659::API::432::vds::(migrate) {'src': '10.192.42.196', 'dst': '10.192.42.165:54321', 'vmId': 'cfb17b98-1476-4fbf-9fab-7c7f48b60adf', 'method': 'online'}
Thread-1484337::DEBUG::2013-01-08 10:41:07,660::vm::125::vm.Vm::(_setupVdsConnection) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Destination server is: 10.192.42.165:54321
Thread-1484336::DEBUG::2013-01-08 10:41:07,660::BindingXMLRPC::890::vds::(wrapper) return vmMigrate with {'status': {'message': 'Migration process starting', 'code': 0}}
Thread-1484337::DEBUG::2013-01-08 10:41:07,660::vm::127::vm.Vm::(_setupVdsConnection) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Initiating connection with destination
Thread-1484337::DEBUG::2013-01-08 10:41:07,752::libvirtvm::278::vm.Vm::(_getDiskLatency) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Disk vda latency not available
Thread-1484337::DEBUG::2013-01-08 10:41:07,835::vm::173::vm.Vm::(_prepareGuest) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration Process begins
Thread-1484337::DEBUG::2013-01-08 10:41:07,927::vm::237::vm.Vm::(run) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration semaphore acquired
Thread-1484337::DEBUG::2013-01-08 10:41:08,251::libvirtvm::449::vm.Vm::(_startUnderlyingMigration) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::starting migration to qemu+tls://10.192.42.165/system
Thread-1484338::DEBUG::2013-01-08 10:41:08,251::libvirtvm::335::vm.Vm::(run) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration downtime thread started
Thread-1484339::DEBUG::2013-01-08 10:41:08,252::libvirtvm::371::vm.Vm::(run) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::starting migration monitor thread
Thread-1484337::DEBUG::2013-01-08 10:41:09,521::libvirtvm::350::vm.Vm::(cancel) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::canceling migration downtime thread
Thread-1484337::DEBUG::2013-01-08 10:41:09,521::libvirtvm::409::vm.Vm::(stop) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::stopping migration monitor thread
Thread-1484338::DEBUG::2013-01-08 10:41:09,522::libvirtvm::347::vm.Vm::(run) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration downtime thread exiting
Thread-1484337::ERROR::2013-01-08 10:41:09,522::vm::179::vm.Vm::(_recover) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::internal error Process exited while reading console log output: 
Thread-1484340::DEBUG::2013-01-08 10:41:09,544::task::568::TaskManager.Task::(_updateState) Task=`bfebf940-d2a3-4b6c-948b-cac951a686bf`::moving from state init -> state preparing
Thread-1484340::INFO::2013-01-08 10:41:09,544::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1484340::INFO::2013-01-08 10:41:09,544::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'2a1939bd-9fa3-4896-b8a9-46234172aae7': {'delay': '0.00229001045227', 'lastCheck': '
Thread-1484340::DEBUG::2013-01-08 10:41:09,544::task::1151::TaskManager.Task::(prepare) Task=`bfebf940-d2a3-4b6c-948b-cac951a686bf`::finished: {'2a1939bd-9fa3-4896-b8a9-46234172aae7': {'delay': '0.00229001045227',
Thread-1484340::DEBUG::2013-01-08 10:41:09,544::task::568::TaskManager.Task::(_updateState) Task=`bfebf940-d2a3-4b6c-948b-cac951a686bf`::moving from state preparing -> state finished
Thread-1484340::DEBUG::2013-01-08 10:41:09,545::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1484340::DEBUG::2013-01-08 10:41:09,545::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1484340::DEBUG::2013-01-08 10:41:09,545::task::957::TaskManager.Task::(_decref) Task=`bfebf940-d2a3-4b6c-948b-cac951a686bf`::ref 0 aborting False
Thread-1484341::DEBUG::2013-01-08 10:41:09,558::libvirtvm::278::vm.Vm::(_getDiskLatency) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Disk vda latency not available
Thread-1484341::DEBUG::2013-01-08 10:41:09,559::libvirtvm::278::vm.Vm::(_getDiskLatency) vmId=`7b8f725b-0a67-46d4-a3cf-db43daad0c42`::Disk vda latency not available
Thread-1484341::DEBUG::2013-01-08 10:41:09,559::libvirtvm::278::vm.Vm::(_getDiskLatency) vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Disk vda latency not available
Thread-1484341::DEBUG::2013-01-08 10:41:09,559::libvirtvm::278::vm.Vm::(_getDiskLatency) vmId=`e8683e88-f3f2-4fe9-80f7-f4888d8e7a13`::Disk vda latency not available
Thread-1484337::ERROR::2013-01-08 10:41:09,754::vm::258::vm.Vm::(run) vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 245, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 474, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 510, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in migrateToURI2
    if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
libvirtError: internal error Process exited while reading console log output: 
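
For reference, the failing frame is VDSM's call into the libvirt python binding. A minimal standalone sketch of that call, in case anyone wants to try it outside VDSM (the VM name is a placeholder and the flag set is my assumption; VDSM derives both from the VM config):

import libvirt

conn = libvirt.open('qemu:///system')   # run this on the source host
dom = conn.lookupByName('testvm')       # placeholder VM name

# oVirt migrates peer-to-peer; the TLS transport is in the dconnuri below
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER

try:
    dom.migrateToURI2('qemu+tls://10.192.42.165/system',  # dconnuri: control connection
                      None,    # miguri: let libvirt pick the data channel
                      None,    # dxml: no XML rewrite on the destination
                      flags,
                      None,    # dname: keep the same domain name
                      0)       # bandwidth: 0 = unlimited
except libvirt.libvirtError, e:
    # the same "Process exited while reading console log output" error
    # surfaces here if the destination qemu dies right after startup
    print 'migration failed:', e

If the destination qemu exits immediately, the real cause will be in the destination's per-VM qemu log rather than in this traceback.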

Any chance you could attach libvirtd.log and the qemu log (/var/log/libvirt/qemu/<vm-name>.log)?
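
That qemu log is the file libvirtd was reading when it raised the error, so its last lines should show why qemu exited. If it helps, a quick sketch for grabbing just the tail on the destination host (the file is named after the VM name, not the vmId; 'testvm' is a placeholder):

import os

QEMU_LOG_DIR = '/var/log/libvirt/qemu'

def tail_qemu_log(vm_name, lines=50):
    # per-VM qemu log, e.g. /var/log/libvirt/qemu/testvm.log
    path = os.path.join(QEMU_LOG_DIR, vm_name + '.log')
    f = open(path)
    try:
        return ''.join(f.readlines()[-lines:])
    finally:
        f.close()

print tail_qemu_log('testvm')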

Danken - any insights?

----- Original Message -----
> From: "Tom Brown" <t...@ng23.net>
> To: "Roy Golan" <rgo...@redhat.com>
> Cc: "Haim Ateya" <hat...@redhat.com>, users@ovirt.org
> Sent: Tuesday, January 8, 2013 11:50:26 AM
> Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> 
> 
> > can you attach the same snip from the src VDSM 10.192.42.196 as
> > well?
> 
> The log is pretty chatty, so I did another migration attempt and
> snipped the new log from both sides.
> 
> see attached
> 
> 