Are you doing block migration? (What is the exact command used to initiate
live-migration?)
What is the status of the instance? ("nova list" output)
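For reference, the usual invocations look something like this. The instance UUID, target hostname, and exact flag spelling are placeholders/assumptions (check `nova help live-migration` and your nova DB schema for your release):

```shell
# Placeholders: substitute your own instance UUID and target hostname.
# Plain live migration (shared storage, e.g. an NFS export):
nova live-migration <instance-uuid> <target-host>

# Block migration (no shared storage; disks are copied over the network).
# Flag spelling varies by novaclient version (--block_migrate vs --block-migrate):
nova live-migration --block_migrate <instance-uuid> <target-host>

# Instance status as seen by nova:
nova list

# vm_state/task_state straight from the database (table/column names per the
# Essex-era nova schema; adjust credentials to your setup):
mysql -u root -p nova -e \
  "SELECT uuid, vm_state, task_state, host FROM instances WHERE deleted = 0;"
```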
You can also check the status of the instance in the DB, specifically vm_state
and task_state. These might give some clue.

-Mandar

On Mon, Jul 9, 2012 at 10:31 PM, Leander Bessa Beernaert <[email protected]> wrote:

> There is no error, it just doesn't do anything :s.
>
> I've left the instance alone for 3 hours now and it's still stuck on the
> original compute node.
>
> On Mon, Jul 9, 2012 at 5:55 PM, Mandar Vaze / मंदार वझे <[email protected]> wrote:
>
>> I see "pre_live_migration" in the destination compute log, so migration at
>> least started.
>>
>> Since there are no errors in either compute log, is it possible that
>> migration is taking long? (Just a possibility)
>> When you say "migration fails", what error did you get?
>>
>> -Mandar
>>
>> On Mon, Jul 9, 2012 at 7:39 PM, Leander Bessa Beernaert <[email protected]> wrote:
>>
>>> Ok, so i've updated to the test packages from
>>>
>>> The migration still fails, but i see no errors in the logs. I'm trying
>>> to migrate a VM with the m1.tiny flavor from one machine to another. Their
>>> hardware is identical and they have more than enough resources to support
>>> the m1.tiny flavor:
>>>
>>>> cloud35 (total)    4  3867  186
>>>> cloud35 (used_now) 0   312    5
>>>> cloud35 (used_max) 0     0    0
>>>
>>> These are the logs from the origin compute node:
>>> http://paste.openstack.org/show/19319/ and the destination compute
>>> node: http://paste.openstack.org/show/19318/ . The scheduler's log has
>>> no visible errors or stack traces.
>>>
>>> I'm still using nfsv4.
>>>
>>> Any ideas?
>>>
>>> On Fri, Jul 6, 2012 at 7:57 PM, Leander Bessa Beernaert <[email protected]> wrote:
>>>
>>>> Thanks for the tip, it's better than nothing :)
>>>>
>>>> Regards,
>>>> Leander
>>>>
>>>> On Fri, Jul 6, 2012 at 6:32 PM, Mandar Vaze / मंदार वझे <[email protected]> wrote:
>>>>
>>>>> Not sure if you are able to debug this, but a while ago there was a
>>>>> bug where instance.id was passed where instance.uuid was expected.
>>>>> This used to cause some problems.
>>>>> It looks like you are using a distribution package rather than a devstack
>>>>> installation, so it is likely that the issue is now fixed. Can you try the
>>>>> latest packages (and/or try devstack if you can)?
>>>>>
>>>>> I wish I could help more.
>>>>>
>>>>> -Mandar
>>>>>
>>>>> On Fri, Jul 6, 2012 at 3:26 PM, Leander Bessa Beernaert <[email protected]> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I've recently set up a system to test out the live migration feature.
>>>>>> So far i've been able to launch the instances with the shared nfs folder.
>>>>>> However, when i run the live-migration command i encounter this error on
>>>>>> the destination compute node:
>>>>>>
>>>>>>> 2012-07-05 09:33:48 ERROR nova.manager [-] Error during ComputeManager.update_available_resource: Domain not found: no domain with matching id 2
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call last):
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in periodic_tasks
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager     task(self, context)
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409, in update_available_resource
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager     self.driver.update_available_resource(context, self.host)
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1936, in update_available_resource
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager     'vcpus_used': self.get_vcpu_used(),
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1743, in get_vcpu_used
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager     dom = self._conn.lookupByID(dom_id)
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 2363, in lookupByID
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager     if ret is None: raise libvirtError('virDomainLookupByID() failed', conn=self)
>>>>>>> 2012-07-05 09:33:48 TRACE nova.manager libvirtError: Domain not found: no domain with matching id 2
>>>>>>
>>>>>> Any ideas on how to solve this?
>>>>>>
>>>>>> Regards,
>>>>>> Leander
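The libvirtError traceback quoted above comes from get_vcpu_used() looking up a domain ID that vanished between listDomainsID() and lookupByID() — exactly what can happen while a guest is migrating away. A hardened loop simply skips domains that disappear between the two calls. The sketch below is illustrative, not nova's actual code: StubConn and StubDomain are stand-ins for the real libvirt connection so it runs anywhere.

```python
# Sketch of the race the traceback exposes: listDomainsID() returns an ID,
# the domain disappears (e.g. mid-migration), and lookupByID() then raises.

class libvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""

class StubDomain:
    def __init__(self, vcpus):
        self.vcpus = vcpus

class StubConn:
    """Stand-in for a libvirt connection whose domain 2 has just vanished."""
    def __init__(self):
        self._domains = {1: StubDomain(2), 2: None, 3: StubDomain(4)}

    def listDomainsID(self):
        # Snapshot of domain IDs; may be stale by the time we look each one up.
        return list(self._domains)

    def lookupByID(self, dom_id):
        dom = self._domains.get(dom_id)
        if dom is None:
            raise libvirtError('Domain not found: no domain with matching id %d' % dom_id)
        return dom

def get_vcpu_used(conn):
    """Count vCPUs in use, tolerating domains that vanish between list and lookup."""
    total = 0
    for dom_id in conn.listDomainsID():
        try:
            dom = conn.lookupByID(dom_id)
        except libvirtError:
            # Domain went away (e.g. migrated or destroyed); skip it instead of
            # letting the whole periodic task die.
            continue
        total += dom.vcpus
    return total
```

With the try/except, the stub connection yields 2 + 4 = 6 vCPUs instead of aborting on the missing domain 2, which is the behaviour the periodic task needs.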
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

