Hi,

Because of backing-file issues I ran into years ago with live migration, I have always used:
force_raw_images=False
use_cow_images=False

... on my compute nodes, to get qcow2 instances without a backing file. (I'm using ZFS, so deduplication is done on the filesystem anyway.)

On Ubuntu Xenial / Mitaka I found that this no longer works, because the swap base image is never created:

2016-08-29 10:24:39.053 30377 ERROR nova.compute.manager [req-93c219a5-0650-4121-8626-5ad6a319fb73 2476f1b03720446481b793e25277b871 7b109aa61c794259a4c5d6352a3ecaa2 - - -] [instance: f35abbd1-ef06-4d4d-ab97-b8d79f4ac309] Instance failed to spawn
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2218, in _build_resources
    yield resources
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance
    block_device_info=block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2761, in spawn
    admin_pass=admin_password)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3243, in _create_image
    swap_mb=swap_mb)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 254, in cache
    if size > self.get_disk_size(base):
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 308, in get_disk_size
    return disk.get_disk_size(name)
  File "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 147, in get_disk_size
    return images.qemu_img_info(path).virtual_size
  File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 50, in qemu_img_info
    raise exception.DiskNotFound(location=path)
DiskNotFound: No disk at /var/lib/nova/instances/_base/swap_2048

With the two options unset, _base/swap_2048 is created automatically. Has anyone stumbled upon this before? (Maybe not, since flavors have no swap by default, and the backing-file options in nova.conf are non-default too.)
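Until this is fixed, I suspect one could work around it by pre-creating the base image by hand. An untested sketch (BASE_DIR stands in for /var/lib/nova/instances/_base, and the real SWAP_MB would be 2048 to match the flavor from the traceback; a small size is used here so the sketch is cheap to try):

```shell
# Untested workaround sketch: pre-create the swap base image that Nova
# fails to find. On a real compute node, BASE_DIR would be
# /var/lib/nova/instances/_base (with nova:nova ownership) and SWAP_MB
# would be 2048, so the file name matches _base/swap_2048.
BASE_DIR="$(mktemp -d)"
SWAP_MB=64
IMG="$BASE_DIR/swap_${SWAP_MB}"

# Raw image of the flavor's swap size (same result as
# 'qemu-img create -f raw', but fully written out so it has no holes):
dd if=/dev/zero of="$IMG" bs=1M count="$SWAP_MB" 2>/dev/null

# Write the swap signature so the guest can use the disk as swap:
mkswap "$IMG"
```

Whether Nova then accepts a pre-created file instead of raising DiskNotFound is exactly what I have not verified.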
Regards,
Paul

_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
