Hello all, I am installing an Icehouse setup on 3 nodes. The entire setup runs in a virtual environment.
====== nova-compute.log ======

WARNING nova.virt.libvirt.driver [-] Periodic task is updating the host stat, it is trying to get disk instance-0000000b, but disk file was removed by concurrent operations such as resize
WARNING nova.virt.disk.vfs.guestfs [req-727a676a-ca15-4cb5-8fc4-73dbf307a14f 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19] Failed to close augeas aug_close: call launch before using this function (in guestfish, don't forget to use the 'run' command)
2014-07-07 17:02:23.852 2916 WARNING nova.virt.libvirt.driver [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19] Timeout waiting for vif plugging callback for instance 75515a86-ba63-4c95-8065-1add9da1f314
2014-07-07 17:02:24.693 2916 INFO nova.virt.libvirt.driver [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19] [instance: 75515a86-ba63-4c95-8065-1add9da1f314] Deleting instance files /var/lib/nova/instances/75515a86-ba63-4c95-8065-1add9da1f314
2014-07-07 17:02:24.693 2916 INFO nova.virt.libvirt.driver [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19] [instance: 75515a86-ba63-4c95-8065-1add9da1f314] Deletion of /var/lib/nova/instances/75515a86-ba63-4c95-8065-1add9da1f314 complete
2014-07-07 17:02:24.771 2916 ERROR nova.compute.manager [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19] [instance: 75515a86-ba63-4c95-8065-1add9da1f314] Instance failed to spawn
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314] Traceback (most recent call last):
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720, in _spawn
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314]     block_device_info)
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2253, in spawn
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314]     block_device_info)
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3663, in _create_domain_and_network
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314]     raise exception.VirtualInterfaceCreateException()
2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance: 75515a86-ba63-4c95-8065-1add9da1f314] VirtualInterfaceCreateException: Virtual Interface creation failed

In Horizon I get the error "virtual interface creation failed". Per https://ask.openstack.org/en/question/26985/icehouse-virtual-interface-creation-failed/, if I add the entries below to my nova.conf, I can launch an instance, but I then lose the connection to my host machine and can no longer reach it over SSH either; I have to reboot the host machine to reconnect:

vif_plugging_is_fatal: false
vif_plugging_timeout: 0

Is there a way to fix this properly? Please let me know.

Regards,
Malleshi CN
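[Editor's note: nova.conf is an INI-style file, so the workaround options from the linked answer belong under the [DEFAULT] section with `=` rather than `:`. A minimal sketch of the workaround as it would appear in the file (note this only suppresses the timeout; it does not fix the missing network-vif-plugged notification from Neutron that causes it):]

```ini
# /etc/nova/nova.conf on the compute node -- workaround only, not a fix.
[DEFAULT]
# Do not treat a missing Neutron vif-plugged notification as fatal.
vif_plugging_is_fatal = false
# Do not wait for the notification at all (0 = no timeout).
vif_plugging_timeout = 0
```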
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
