[openstack-dev] Unable to see console using VNC on ESX hypervisor
Hi All,

I have configured OpenStack Grizzly to control an ESX hypervisor. I can successfully launch instances, but I am unable to see their consoles using VNC. My configuration is as follows.

*** Compute node, nova.conf VNC settings:

vnc_enabled = true
novncproxy_base_url=http://public_ip_of_controller:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=management_ip_of_compute
vncserver_listen=0.0.0.0

*** Controller node, nova.conf VNC settings:

novncproxy_base_url=http://public_ip_of_controller:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=management_ip_of_controller
vncserver_listen=0.0.0.0

root@openstk2:~# tail /var/log/nova/nova-consoleauth.log
2013-11-21 18:40:35.228 7570 AUDIT nova.service [-] Starting consoleauth node (version 2013.1.3)
2013-11-21 18:40:35.395 INFO nova.openstack.common.rpc.common [req-179d456d-f306-426f-b65e-242362758f73 None None] Connected to AMQP server on controller_ip:5672
2013-11-21 18:42:34.012 AUDIT nova.consoleauth.manager [req-ebc33f34-f57b-492b-8429-39eb3240e5d7 a8f0e9af6e6b4d08b1729acae0510d54 db63e4a448fc426086562638726f9081] Received Token: 1bcb7408-5c59-466d-a84d-528481af3c37, {'instance_uuid': u'969e49b0-af3f-45bd-8618-1320ba337962', 'internal_access_path': None, 'last_activity_at': 1385039554.012067, 'console_type': u'novnc', 'host': u'ESX_host_IP', 'token': u'1bcb7408-5c59-466d-a84d-528481af3c37', 'port': 6031})
2013-11-21 18:42:34.015 INFO nova.openstack.common.rpc.common [req-ebc33f34-f57b-492b-8429-39eb3240e5d7 a8f0e9af6e6b4d08b1729acae0510d54 db63e4a448fc426086562638726f9081] Connected to AMQP server on controller_ip:5672
2013-11-21 18:42:34.283 AUDIT nova.consoleauth.manager [req-518ed47e-5d68-491d-8c57-16952744a2d8 None None] Checking Token: 1bcb7408-5c59-466d-a84d-528481af3c37, True)
2013-11-21 18:42:35.710 AUDIT nova.consoleauth.manager [req-2d65d8ac-c003-4f4d-9014-9e8995794ad6 None None] Checking Token: 1bcb7408-5c59-466d-a84d-528481af3c37, True)

With the same configuration I can connect to a VM's console on a KVM setup. Is there any other setting required to access the console for an ESX hypervisor? Any help would be highly appreciated.

Thanks in advance,

Regards,
Rajshree

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
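One detail worth noticing in the consoleauth log above: the token's connection info points at the ESX host itself ('host': u'ESX_host_IP', 'port': 6031), not at the compute node, so the node running nova-novncproxy must be able to open a TCP connection directly to the ESX host on that per-VM VNC port. A minimal Python sketch of the check implied by the token (the dictionary values are copied from the log; the helper name and the probe are illustrative, not nova code):

```python
# Connection info carried by the console token, as shown in the
# nova-consoleauth log above (values copied from the log).
connect_info = {
    "console_type": "novnc",
    "host": "ESX_host_IP",   # the ESX host itself, not the compute node
    "port": 6031,            # per-VM VNC port allocated on the ESX host
    "internal_access_path": None,
}

def proxy_target(info):
    """Return the (host, port) pair the novnc proxy will dial for this token."""
    return info["host"], info["port"]

host, port = proxy_target(connect_info)
print("noVNC proxy must be able to reach %s:%d" % (host, port))
# If that TCP connection is blocked (for example by the ESX host's
# firewall, which may not allow the per-VM VNC port range by default),
# the browser console stays blank even though the token checks out,
# which matches the symptom described above. A probe could be e.g.
# socket.create_connection((host, port), timeout=2).
```

On a KVM setup the same token instead points at the compute node, which would explain why the identical nova.conf works there; verifying reachability of ESX_host_IP:6031 from the proxy node is an assumption-level first step, not a confirmed fix.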
Re: [openstack-dev] [Openstack] Openstack XCP with OVS Quantum
On 10/31/2013 2:53 PM, Rajshree Thorat wrote:

Hi All,

I have successfully configured OpenStack Havana with the Xen hypervisor (XCP). Initially, creating and deleting instances from OpenStack worked as expected, but the networking part (Neutron with Open vSwitch) was not working. The steps I performed to make it work are below.

Normal flow to get a DHCP IP:
- The VM boots and asks for an IP through DHCP.
- The Nova compute node has a GRE tunnel to the OpenStack Networking node, where the neutron/Open vSwitch agent provides an IP to the VM.

VM --- Nova Compute Node --- GRE tunnel --- OpenStack Networking node --- DHCP agent

In the case of XCP, when the guest VM boots it sends a DHCP request to dom0 through xenapi, but dom0 is unable to communicate with the OpenStack Networking node over the GRE tunnel. To allow VMs to communicate with the Network node over the GRE tunnel, we can assign one more NIC (eth2), which is part of xapi1 in dom0, to nova-compute, and add eth2 to br-int on nova-compute. xapi1 is an OpenStack network bridge in dom0. Now the packet will traverse as:

VM -- xapi1(dom0) -- eth2(compute) -- br-tun(compute) -- Network-node(over GRE tunnel)

The built-in Open vSwitch controller configures the vswitches to allow only the specific flows that match the rules installed on them. So even after adding eth2 to br-int, we also need to add generic rules to br-tun so that packets received from eth2 can pass from br-int to br-tun and then on to the network node over the GRE tunnel. That's it, you are done!
dump-flows before adding rules:

root@compute:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=3.248s, table=0, n_packets=0, n_bytes=0, idle_age=3, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=2.069s, table=0, n_packets=0, n_bytes=0, idle_age=2, priority=1,in_port=2 actions=resubmit(,2)
 cookie=0x0, duration=3.187s, table=0, n_packets=1, n_bytes=70, idle_age=2, priority=0 actions=drop
 cookie=0x0, duration=3.066s, table=1, n_packets=0, n_bytes=0, idle_age=3, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=3.126s, table=1, n_packets=0, n_bytes=0, idle_age=3, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=3.006s, table=2, n_packets=0, n_bytes=0, idle_age=3, priority=0 actions=drop
 cookie=0x0, duration=2.946s, table=3, n_packets=0, n_bytes=0, idle_age=2, priority=0 actions=drop
 cookie=0x0, duration=2.886s, table=10, n_packets=0, n_bytes=0, idle_age=2, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=2.825s, table=20, n_packets=0, n_bytes=0, idle_age=2, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=2.766s, table=21, n_packets=0, n_bytes=0, idle_age=2, priority=0 actions=drop

Add flows:

root@compute:~# ovs-vsctl add-port br-int eth2 tag=1

Here port eth2 is put into VLAN 1 (tag 1) on br-int, matching the VLAN the neutron-plugin-openvswitch-agent uses for this network.

root@compute:~# ovs-ofctl add-flow br-tun priority=3,in_port=1,dl_vlan=1,actions=set_tunnel:0x1,NORMAL

This is for outgoing traffic from br-int VLAN 1: it sets the GRE key to 0x1 and gives it the NORMAL action.

root@compute:~# ovs-ofctl add-flow br-tun priority=2,tun_id=0x1,actions=mod_vlan_vid:1,NORMAL

This flow accepts incoming traffic with GRE key 0x1 and maps it back onto VLAN 1.
dump-flows after adding rules:

root@compute:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=115.467s, table=0, n_packets=5, n_bytes=958, idle_age=58, priority=2,tun_id=0x1 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=133.203s, table=0, n_packets=5, n_bytes=830, idle_age=61, priority=3,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
 cookie=0x0, duration=343.011s, table=0, n_packets=7, n_bytes=1230, idle_age=186, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=341.832s, table=0, n_packets=0, n_bytes=0, idle_age=341, priority=1,in_port=2 actions=resubmit(,2)
 cookie=0x0, duration=342.95s, table=0, n_packets=4, n_bytes=300, idle_age=334, priority=0 actions=drop
 cookie=0x0, duration=342.829s, table=1, n_packets=7, n_bytes=1230, idle_age=186, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=342.889s, table=1, n_packets=0, n_bytes=0, idle_age=342, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=342.769s, table=2, n_packets=0, n_bytes=0, idle_age=342, priority=0 actions=drop
 cookie=0x0, duration=342.709s, table=3, n_packets=0, n_bytes=0, idle_age=342, priority=0 actions=drop
 cookie=0x0, duration=342.649s, table=10, n_packets=0, n_bytes=0, idle_age=342, priority=1
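The two ovs-ofctl add-flow commands above follow a fixed pattern: one flow stamps traffic from a local VLAN with a GRE tunnel key, and a mirror flow maps that tunnel key back onto the VLAN. A small Python sketch that builds the same command strings for arbitrary parameters (the helper name and defaults are illustrative; the defaults reproduce the exact commands used above):

```python
def br_tun_flows(bridge="br-tun", in_port=1, vlan=1, tun_id=0x1):
    """Build the two ovs-ofctl add-flow commands used in the steps above.

    The first flow matches traffic arriving from br-int on the given VLAN
    and stamps it with the GRE tunnel key (set_tunnel); the second matches
    traffic arriving with that tunnel key and maps it back onto the local
    VLAN (mod_vlan_vid). Both then use the NORMAL action.
    """
    outgoing = ("ovs-ofctl add-flow %s "
                "priority=3,in_port=%d,dl_vlan=%d,actions=set_tunnel:%#x,NORMAL"
                % (bridge, in_port, vlan, tun_id))
    incoming = ("ovs-ofctl add-flow %s "
                "priority=2,tun_id=%#x,actions=mod_vlan_vid:%d,NORMAL"
                % (bridge, tun_id, vlan))
    return [outgoing, incoming]

for cmd in br_tun_flows():
    print(cmd)
```

Parameterizing the commands this way makes it easier to repeat the fix for additional tenant networks, where each network gets its own (vlan, tun_id) pair.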
[openstack-dev] Unable to retrieve guest console log on XCP/XenServer
Hi,

I am trying to use OpenStack Havana to control an XCP hypervisor with the Neutron OVS plugin. I can launch instances normally, but I am unable to retrieve the guest console log. Please see the log below from nova-compute for a clearer picture.

Log:

2013-10-29 14:09:10.203 954 AUDIT nova.compute.manager [req-157fb348-69fb-4b7f-b50c-44ef85e85f11 42ffb12172244726a1a15d044167de86 2c4acdf64eed40f7a9efcf3b7dd13259] [instance: cf5678ec-5284-48cc-a8d6-005eac42118e] Get console output
2013-10-29 14:09:10.296 954 ERROR nova.virt.xenapi.vmops [req-157fb348-69fb-4b7f-b50c-44ef85e85f11 42ffb12172244726a1a15d044167de86 2c4acdf64eed40f7a9efcf3b7dd13259] ['XENAPI_PLUGIN_FAILURE', 'get_console_log', 'IOError', [Errno 2] No such file or directory: '/var/log/xen/guest/console.7']
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops Traceback (most recent call last):
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 1446, in get_console_output
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops     'get_console_log', {'dom_id': dom_id})
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py", line 796, in call_plugin
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops     host, plugin, fn, args)
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py", line 851, in _unwrap_plugin_exceptions
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops     return func(*args, **kwargs)
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 229, in __call__
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops     return self.__send(self.__name, args)
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops     result = _parse_result(getattr(self, methodname)(*full_params))
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops     raise Failure(result['ErrorDescription'])
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops Failure: ['XENAPI_PLUGIN_FAILURE', 'get_console_log', 'IOError', [Errno 2] No such file or directory: '/var/log/xen/guest/console.7']
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops
2013-10-29 14:09:10.358 954 ERROR nova.openstack.common.rpc.amqp [req-157fb348-69fb-4b7f-b50c-44ef85e85f11 42ffb12172244726a1a15d044167de86 2c4acdf64eed40f7a9efcf3b7dd13259] Exception during message handling
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     **args)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     payload)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 271, in decorated_function
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, in decorated_function
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3502, in get_console_output
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     output = self.driver.get_console_output(instance)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py", line 374, in get_console_output
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp     return self._vmops.get_console_output(instance)
2013-10-29
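The root failure in the traceback above is straightforward: the XenAPI get_console_log plugin running in dom0 tries to read /var/log/xen/guest/console.<dom_id> and the file does not exist. A short Python sketch of the path construction and check, mirroring what the error message shows (the directory and naming scheme are copied from the IOError; the helper name is illustrative, not the plugin's actual code):

```python
import os

# Directory named in the IOError above.
CONSOLE_LOG_DIR = "/var/log/xen/guest"

def console_log_path(dom_id):
    """Path the XenAPI get_console_log plugin tries to read in dom0."""
    return os.path.join(CONSOLE_LOG_DIR, "console.%d" % dom_id)

path = console_log_path(7)  # dom_id 7, as in the traceback
if not os.path.exists(path):
    # This is exactly the situation the traceback shows: nothing in dom0
    # is writing a per-domain console log for this dom_id, so the plugin
    # raises IOError and nova surfaces XENAPI_PLUGIN_FAILURE.
    print("missing: %s" % path)
```

In other words, the nova side is only the messenger here; the thing to investigate is why dom0 is not producing console logs for guests, for example whether guest console logging is enabled in dom0 and whether the dom_id nova cached is still current (dom_ids change on every guest reboot). Both are suggested directions, not a confirmed fix.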