Hi all,
I have a problem with live migration and block migration using the IBM SVC
plugin for Cinder over Fibre Channel. I have an instance (booted from volume)
running on compute node ch1nod2 and I am trying to migrate it to a second
compute node, ch1nod3. The command returns after about one second, the instance
then sits in the MIGRATING task state, and a new paused instance appears on
ch1nod3. After about six minutes I see the errors below.
[root@ctl-112 ~]# nova list
+--------------------------------------+----------------+--------+------------+-------------+------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks               |
+--------------------------------------+----------------+--------+------------+-------------+------------------------+
| 8a9979f0-8a27-40ee-bc9d-b8cf17dd7265 | WindowsTest    | ACTIVE | -          | Running     | network1=192.168.6.54  |
| 9c5afd75-ab19-44c6-a630-458431ad4eda | centossnap2    | ACTIVE | -          | Running     | network2=192.168.7.252 |
| df725a9f-784a-4207-b7d0-da2a9f34eb9c | next           | ACTIVE | -          | NOSTATE     | network1=192.168.6.253 |
| 2cb09534-5413-4975-8560-b05ff9645c35 | volumesnaphost | ACTIVE | -          | Running     | network2=192.168.7.253 |
+--------------------------------------+----------------+--------+------------+-------------+------------------------+
[root@ctl-112 ~]# nova live-migration 9c5afd75-ab19-44c6-a630-458431ad4eda ch1nod3.12.intra.cloudlab.cz
Log on ch1nod2 (source compute node):
2014-06-30 11:02:17.376 3079 INFO nova.compute.manager [-] [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] During sync_power_state the instance has
a pending task. Skip.
2014-06-30 11:08:11.606 3079 INFO nova.compute.resource_tracker [-]
Compute_service record updated for
ch1nod2.12.intra.cloudlab.cz:ch1nod2.12.intra.cloudlab.cz
2014-06-30 11:09:07.582 3079 INFO nova.compute.manager [-] Lifecycle event 3 on
VM 9c5afd75-ab19-44c6-a630-458431ad4eda
2014-06-30 11:09:07.585 3079 ERROR nova.virt.libvirt.driver [-] [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] Live Migration failure: operation failed:
migration job: unexpectedly failed
2014-06-30 11:09:11.670 3079 AUDIT nova.compute.resource_tracker [-] Auditing
locally available compute resources
2014-06-30 11:09:11.873 3079 AUDIT nova.compute.resource_tracker [-] Free ram
(MB): 189078
2014-06-30 11:09:11.874 3079 AUDIT nova.compute.resource_tracker [-] Free disk
(GB): 156
2014-06-30 11:09:11.874 3079 AUDIT nova.compute.resource_tracker [-] Free
VCPUS: 22
Log on ch1nod3 (destination compute node):
2014-06-30 11:09:07.578 29488 INFO nova.compute.manager [-] Lifecycle event 1
on VM 9c5afd75-ab19-44c6-a630-458431ad4eda
2014-06-30 11:09:07.738 29488 INFO nova.compute.manager [-] [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] During the sync_power process the
instance has moved from host ch1nod3.12.intra.cloudlab.cz to host
ch1nod2.12.intra.cloudlab.cz
2014-06-30 11:09:07.939 29488 AUDIT nova.compute.manager
[req-6748d84f-4f0c-43ef-97b4-12e74a989b57 6836cb1afded478a802e2f28020b2bad
e47d5141f5ac40f8a5fedf76bb40e904] [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] Detach volume
5c0160d7-f9e2-4089-9b0b-d3f3ad46006c from mountpoint vda
2014-06-30 11:09:07.941 29488 WARNING nova.compute.manager
[req-6748d84f-4f0c-43ef-97b4-12e74a989b57 6836cb1afded478a802e2f28020b2bad
e47d5141f5ac40f8a5fedf76bb40e904] [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] Detaching volume from unknown instance
2014-06-30 11:09:07.944 29488 ERROR nova.compute.manager
[req-6748d84f-4f0c-43ef-97b4-12e74a989b57 6836cb1afded478a802e2f28020b2bad
e47d5141f5ac40f8a5fedf76bb40e904] [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] Failed to detach volume
5c0160d7-f9e2-4089-9b0b-d3f3ad46006c from vda
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] Traceback (most recent call last):
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3725, in
_detach_volume
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] encryption=encryption)
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] File
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1202, in
detach_volume
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] virt_dom =
self._lookup_by_name(instance_name)
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] File
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3085, in
_lookup_by_name
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] raise
exception.InstanceNotFound(instance_id=instance_name)
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda] InstanceNotFound: Instance
instance-000009e0 could not be found.
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance:
9c5afd75-ab19-44c6-a630-458431ad4eda]
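The InstanceNotFound traceback above is just the rollback on the destination; the underlying reason for "migration job: unexpectedly failed" usually ends up in the qemu domain log on one of the nodes (for this instance, /var/log/libvirt/qemu/instance-000009e0.log under libvirt's default log location — path assumed). A small helper I use to pull the relevant lines out, as a sketch:

```python
def last_error_lines(path, keywords=("error", "failed"), limit=20):
    """Return up to the last `limit` lines of a log file that
    mention any of the given keywords (case-insensitive)."""
    hits = []
    with open(path) as fh:
        for line in fh:
            low = line.lower()
            if any(k in low for k in keywords):
                hits.append(line.rstrip("\n"))
    return hits[-limit:]

# On either compute node, e.g.:
#   last_error_lines("/var/log/libvirt/qemu/instance-000009e0.log")
```

Whatever qemu printed right before the domain died on the destination is normally far more specific than the libvirt-level "unexpectedly failed".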
[root@ch1nod2 ~]# grep "tls\|tcp" /etc/libvirt/libvirtd.conf | grep -v "^#"
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
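Since the destination was reachable enough to create the paused instance, one thing still worth ruling out is that each node can reach the other's libvirtd TCP listener in both directions (16509 is libvirtd's default plain-TCP port; the port and hostnames below are assumptions for this setup). A minimal reachability check:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return False
    sock.close()
    return True

# Run from ch1nod2 (and the mirror check from ch1nod3), e.g.:
#   print(port_open("ch1nod3.12.intra.cloudlab.cz", 16509))
```

If this fails in either direction, the peer-to-peer migration stream cannot be set up even though the initial handshake worked.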
Relevant settings in nova.conf:
# Migration flags to be set for live migration (string value)
#live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
# Migration flags to be set for block migration (string value)
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC
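One detail to double-check in the actual file: the block_migration_flag value has to be a single line in nova.conf (the wrap above is presumably just the mail client). Nova splits the string on commas, strips whitespace, and ORs the corresponding libvirt constants together, roughly like this sketch (the dict stands in for the real libvirt module; the numeric values match libvirt's virDomainMigrateFlags but verify against your libvirt-python):

```python
from functools import reduce

# Illustrative stand-ins for the libvirt module's constants.
LIBVIRT_FLAGS = {
    "VIR_MIGRATE_LIVE": 1,
    "VIR_MIGRATE_PEER2PEER": 2,
    "VIR_MIGRATE_UNDEFINE_SOURCE": 16,
    "VIR_MIGRATE_NON_SHARED_INC": 128,
}

def parse_migration_flags(flag_string):
    """Split the comma-separated option value and OR the flag bits,
    roughly the way nova's libvirt driver builds its migration flags."""
    names = [name.strip() for name in flag_string.split(",")]
    return reduce(lambda a, b: a | b, (LIBVIRT_FLAGS[n] for n in names))

live = parse_migration_flags(
    "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE")
block = parse_migration_flags(
    "VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC")
print(live, block)  # 19 146
```

So spaces after the commas are harmless, but a literal newline inside the value would likely not parse as intended.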
Can anybody help me with this problem?
Jakub
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack