Hi All,

In my continuing sporadic testing of a Kilo->Mitaka upgrade, I've run into an issue with snapshotting instances.
I'm seeing this failure on the hypervisor:

    2016-09-12 14:14:26.303 24723 ERROR oslo_messaging.rpc.dispatcher libvirtError: internal error: unable to execute QEMU command 'migrate': Migration disabled: failed to allocate shared memory

This happens whether the instance is running, paused, or stopped. I've also tried setting workarounds/disable_libvirt_livesnapshot=False based on https://bugs.launchpad.net/nova/+bug/1334398.

The instance ephemeral disk and the glance store are both RBD-backed and use the same credentials. I am able to upload new glance images, and nova is able to 'shelve' the instances, which (I think) invokes a similar RBD snapshot process:

    2016-09-12 14:18:45.061 24723 DEBUG nova.virt.libvirt.storage.rbd_utils [] creating snapshot(cbd2266da21d41358259a2e664584299) on rbd image(5b32bb8b-13bf-4b11-882d-3e6183cd1010_disk) create_snap /usr/lib/python2.7/dist-packages/nova/virt/libvirt/storage/rbd_utils.py:380
    2016-09-12 14:18:46.344 24723 DEBUG nova.virt.libvirt.storage.rbd_utils [] cloning vms/5b32bb8b-13bf-4b11-882d-3e6183cd1010_disk@cbd2266da21d41358259a2e664584299 to images/2dbedf42-81af-4bf3-b0c4-0a1ba30e3d6a clone /usr/lib/python2.7/dist-packages/nova/virt/libvirt/storage/rbd_utils.py:218
    2016-09-12 14:18:46.531 24723 DEBUG nova.virt.libvirt.storage.rbd_utils [] flattening images/2dbedf42-81af-4bf3-b0c4-0a1ba30e3d6a flatten /usr/lib/python2.7/dist-packages/nova/virt/libvirt/storage/rbd_utils.py:267

So it looks like Nova can do the RBD operations it needs just fine.

Using: Ubuntu 14.04, libvirt-bin 1.3.1, kernel 3.16.0.

http://stackoverflow.com/questions/38922184/live-migration-failure-unable-to-execute-qemu-command-migrate-migration-disa lists a similar error, but one actually related to live migration rather than snapshotting, and suggests a newer kernel as the fix. I'm hoping that isn't required, as it would really complicate my life to have to live-migrate-shuffle everything to upgrade kernels prior to upgrading OpenStack. The answer suggesting that fix is unverified and gives no reason why it works, so I'm hoping someone has a better answer (better for me, anyway)...

-Jon
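P.S. For reference, this is how I have the workaround mentioned above set on the compute node; it lives in the [workarounds] config group in nova.conf, and nova-compute needs a restart to pick it up:

    [workarounds]
    # Setting this to False re-enables libvirt live snapshots, which
    # bug 1334398 caused to be disabled by default.
    disable_libvirt_livesnapshot = False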
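P.P.S. For anyone wanting to poke at the RBD side by hand, the shelve log above corresponds roughly to the following rbd CLI sequence (a sketch using the image/snapshot IDs from my log; I'm assuming the usual protect-before-clone requirement and that the commands run as the same cephx user Nova uses):

    # Snapshot the instance's ephemeral disk in the vms pool
    rbd snap create vms/5b32bb8b-13bf-4b11-882d-3e6183cd1010_disk@cbd2266da21d41358259a2e664584299
    # A snapshot must be protected before it can be cloned
    rbd snap protect vms/5b32bb8b-13bf-4b11-882d-3e6183cd1010_disk@cbd2266da21d41358259a2e664584299
    # Clone the snapshot into the glance images pool
    rbd clone vms/5b32bb8b-13bf-4b11-882d-3e6183cd1010_disk@cbd2266da21d41358259a2e664584299 images/2dbedf42-81af-4bf3-b0c4-0a1ba30e3d6a
    # Flatten so the new image no longer depends on the parent snapshot
    rbd flatten images/2dbedf42-81af-4bf3-b0c4-0a1ba30e3d6a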