Re: [Openstack] Multiple quantum L3 agents in Folsom
Continuing my investigation, I tested the same setup on the latest Devstack and it works fine there. I have also set up a fresh test environment with the latest Folsom from the Ubuntu cloud repo. The setup consists of a single controller and a network node. I still have the same problem as reported before. Here is the network configuration and some output from 'ip netns': http://paste2.org/p/3091769. Is this a bug in Folsom? Can someone confirm a working multi-L3-agent configuration?

On Fri, Mar 8, 2013 at 6:45 PM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

Hello,

I'm trying to run two Quantum L3 agents, where each of them handles a different external network attached to a separate router. Depending on which agent was started last, it clears all interfaces on the router that is handled by the other L3 agent. I have the following configs for the agents:

* first agent:
    gateway_external_network_id = 7dcb378b-f32b-4a42-bdc1-5294cf2ac14a
    handle_internal_only_routers = True
    external_network_bridge = br-ex

* second agent:
    gateway_external_network_id = 7b9063c2-eeb8-436b-bc01-07bfcc5ec8f0
    handle_internal_only_routers = False
    external_network_bridge = br-ex

Is there a problem with the config? I'm wondering whether 'external_network_bridge' can be the same in both agent configs, because the sample in the docs shows each agent config with a different 'external_network_bridge'. I'm running the Quantum setup with VLANs, and all VLANs are trunked through 'br-ex'. What can be the problem here?

Regards,
--janis

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
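For reference, a complete pair of agent config files for this kind of setup might look as follows. This is a sketch: the file paths, and the interface_driver/use_namespaces lines, are assumptions beyond the options quoted in the thread.

```ini
# /etc/quantum/l3_agent_1.ini -- hypothetical file for the first agent
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
gateway_external_network_id = 7dcb378b-f32b-4a42-bdc1-5294cf2ac14a
handle_internal_only_routers = True
external_network_bridge = br-ex

# /etc/quantum/l3_agent_2.ini -- hypothetical file for the second agent
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
gateway_external_network_id = 7b9063c2-eeb8-436b-bc01-07bfcc5ec8f0
handle_internal_only_routers = False
external_network_bridge = br-ex
```

Each agent process would then be started with its own --config-file pointing at one of these files.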
Re: [Openstack] snapshots, backups of running VMs and compute node recovery
On Fri, Nov 9, 2012 at 9:45 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

The libvirt driver has actually gotten quite good at rebuilding all of the data for instances. The only thing it can't do right now is re-download base images from glance. In the current state, if you simply back up the instances directory (usually /var/lib/nova/instances), then you can recover by bringing back the whole directory and doing a 'nova reboot uuid' for each instance.

What about image corruption if I start backing up '/var/lib/nova/instances' while the instances are running? Should they be paused or suspended while doing that?

You could just stick the whole thing on an LVM volume and snapshot it regularly for DR. The _base directory can be regenerated with images from glance, so you could also write a script to regenerate it and not have to worry about backing it up. The code to add to nova to make it automatically re-download the image from glance if it isn't there shouldn't be too bad either, which would mean you could safely ignore the _base directory for backups. Additionally, using qcow images in glance and the config option `force_raw_images=False` will keep this directory much smaller.

If I put everything on LVM, I will not be able to take regular snapshots anymore. Is there a workaround for this? I want to get some understanding and confidence before I start rebuilding my current setup. By image regeneration, did you mean image download from glance ('glance image-download')?

Vish

On Nov 9, 2012, at 2:51 AM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

Hello all,

I would like to know the available solutions used for backing up and/or snapshotting running instances on compute nodes. The documentation does not mention anything related to this. By snapshots I don't mean the current snapshot mechanism, which imports an image of the running VM into glance. I'm using KVM, but this is relevant for any hypervisor. Why is this important?
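Vish's recovery procedure (restore the instances directory, then reboot each instance) could be scripted roughly like this. The backup location is a hypothetical name, and the UUID-per-directory layout is an assumption (older layouts use instance-000000XX-style names); the script only prints the commands it would run.

```shell
#!/bin/sh
INSTANCES_DIR="/var/lib/nova/instances"
BACKUP_DIR="/backups/instances"   # hypothetical backup location

# Bring the whole directory back from the backup.
echo "rsync -a $BACKUP_DIR/ $INSTANCES_DIR/"

# Reboot each instance so libvirt redefines and restarts the domain.
# Assumes one subdirectory per instance, named by UUID; skip _base.
for uuid in $(ls "$BACKUP_DIR" 2>/dev/null | grep -v '^_base$'); do
    echo "nova reboot $uuid"
done
```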
Consider a simple scenario: the hardware on a compute node fails, the node goes down immediately, and it is not recoverable in a reasonable time. The images of the running instances are then lost too. A shared file system is not considered here, as it may cause IO bottlenecks and adds another layer of complexity. There have been a few discussions on the list about this problem, but none have really answered the question. The documentation speaks of disaster recovery after a power loss and of failed compute node recovery from a shared file system, but it doesn't cover the case without a shared file system.

I can think of a few solutions currently (for KVM):

a) using LVM images for VMs and making LVM logical volume snapshots; but then the current nova snapshot mechanism will not work (from the docs: 'current snapshot mechanism in OpenStack Compute works only with instances backed with Qcow2 images');

b) snapshotting machines with the OpenStack snapshotting mechanism; but this doesn't quite fit, because it has a different goal than creating backups, will be slow, and will pollute the glance image space.

Regards,
--janis
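As a concrete sketch of the LVM route Vish suggested, a nightly backup could look something like this. The volume group and LV names, the snapshot size, and the backup host are all assumptions; the script prints the commands instead of executing them, so it is side-effect free.

```shell
#!/bin/sh
# Hypothetical names: VG "nova-vg", LV "instances" mounted at /var/lib/nova/instances.
VG="nova-vg"
LV="instances"
SNAP="instances-snap-$(date +%Y%m%d)"
BACKUP_HOST="backup.example.com"

# Take a copy-on-write snapshot of the live volume, mount it read-only,
# copy it off-node, then drop the snapshot.
echo "lvcreate --snapshot --size 10G --name $SNAP /dev/$VG/$LV"
echo "mount -o ro /dev/$VG/$SNAP /mnt/$SNAP"
echo "rsync -a /mnt/$SNAP/ $BACKUP_HOST:/backups/$SNAP/"
echo "umount /mnt/$SNAP"
echo "lvremove -f /dev/$VG/$SNAP"
```

Note this gives crash-consistent images (as after a power cut), not application-consistent ones; pausing the instances around the lvcreate would give a cleaner state.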
Re: [Openstack] snapshots, backups of running VMs and compute node recovery
Hi,

What do you mean by deleting all the servers? If the node is down, then all the VM data is gone too, or are you talking about DB entries?

On Mon, Nov 12, 2012 at 11:08 AM, Édouard Thuleau thul...@gmail.com wrote:

I'm trying to implement a simple way to automate the backup mechanism (e.g. every day): https://blueprints.launchpad.net/nova/+spec/backup-schedule

And I thought of a solution to respond to your needs: when a node fails (for any reason), I disable it, delete all the servers that were running on it, and restart them from the last available backup.

Édouard.

--janis
[Openstack] snapshots, backups of running VMs and compute node recovery
Hello all,

I would like to know the available solutions used for backing up and/or snapshotting running instances on compute nodes. The documentation does not mention anything related to this. By snapshots I don't mean the current snapshot mechanism, which imports an image of the running VM into glance. I'm using KVM, but this is relevant for any hypervisor.

Why is this important? Consider a simple scenario: the hardware on a compute node fails, the node goes down immediately, and it is not recoverable in a reasonable time. The images of the running instances are then lost too. A shared file system is not considered here, as it may cause IO bottlenecks and adds another layer of complexity. There have been a few discussions on the list about this problem, but none have really answered the question. The documentation speaks of disaster recovery after a power loss and of failed compute node recovery from a shared file system, but it doesn't cover the case without a shared file system.

I can think of a few solutions currently (for KVM):

a) using LVM images for VMs and making LVM logical volume snapshots; but then the current nova snapshot mechanism will not work (from the docs: 'current snapshot mechanism in OpenStack Compute works only with instances backed with Qcow2 images');

b) snapshotting machines with the OpenStack snapshotting mechanism; but this doesn't quite fit, because it has a different goal than creating backups, will be slow, and will pollute the glance image space.

Regards,
--janis
[Openstack] Using GRE tunnels and VLANs together in Quantum
Hello,

Is it possible to use GRE tunneling between compute nodes together with VLANs for external networks? Let me illustrate: looking at the reference picture from the docs, http://docs.openstack.org/trunk/openstack-network/admin/content/connectivity.html, the 'Data network' would use GRE tunnels and the 'External network' would use VLANs for separating public networks. If this is really possible, I'm curious how to configure it properly, so that provider networks are still available to VMs as well. That would mean tunneling VLANs over GRE, am I right?

Regards,
--janis
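For what it's worth, with the Open vSwitch plugin such a mixed setup is usually expressed along these lines in ovs_quantum_plugin.ini. This is a sketch under the assumption of a single physical network named 'physnet1' bridged to br-ex; the local_ip and the ID ranges are placeholders, not values from this thread.

```ini
[OVS]
# GRE tunnels carry the tenant ("Data") networks between nodes.
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
# Placeholder: this node's IP on the data network.
local_ip = 10.0.0.10

# VLANs remain available for provider/external networks.
network_vlan_ranges = physnet1:1500:1599
bridge_mappings = physnet1:br-ex
```

With this, tenant networks are created as GRE by default, while admins can still create VLAN provider networks on physnet1, so nothing needs to be tunneled over GRE for the external side.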
Re: [Openstack] Not able to get IP address for VM
Hi Srikanth,

Can you confirm that the metadata service is working and the VMs are able to access it? Usually when VMs can't get their network settings, it is because the metadata service is inaccessible.

--janis

On Wed, Oct 24, 2012 at 4:00 PM, Srikanth Kumar Lingala srikanthkumar.ling...@gmail.com wrote:

Here are the nova.conf file contents:

[DEFAULT]
# MySQL Connection #
sql_connection=mysql://nova:password@10.232.91.33/nova

# nova-scheduler #
rabbit_host=10.232.91.33
rabbit_userid=guest
rabbit_password=password
#scheduler_driver=nova.scheduler.simple.SimpleScheduler
#scheduler_default_filters=ImagePropertiesFilter
scheduler_driver=nova.scheduler.multi.MultiScheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.standard_filters
scheduler_default_filters=ImagePropertiesFilter

# nova-api #
cc_host=10.232.91.33
auth_strategy=keystone
s3_host=10.232.91.33
ec2_host=10.232.91.33
nova_url=http://10.232.91.33:8774/v1.1/
ec2_url=http://10.232.91.33:8773/services/Cloud
keystone_ec2_url=http://10.232.91.33:5000/v2.0/ec2tokens
api_paste_config=/etc/nova/api-paste.ini
allow_admin_api=true
use_deprecated_auth=false
ec2_private_dns_show_ip=True
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
enabled_apis=ec2,osapi_compute,metadata

# Networking #
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.232.91.33:9696
libvirt_vif_type=ethernet
linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
libvirt_use_virtio_for_bridges=True

# Cinder #
#volume_api_class=cinder.volume.api.API

# Glance #
glance_api_servers=10.232.91.33:9292
image_service=nova.image.glance.GlanceImageService

# novnc #
novnc_enable=true
novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html
vncserver_proxyclient_address=127.0.0.1
vncserver_listen=0.0.0.0

# Misc #
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=true
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
force_dhcp_release=True
iscsi_helper=tgtadm
connection_type=libvirt
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver

Regards,
Srikanth.

On Mon, Oct 22, 2012 at 7:48 AM, gong yong sheng gong...@linux.vnet.ibm.com wrote:

Can you send out the nova.conf file?

On 10/22/2012 07:30 PM, Srikanth Kumar Lingala wrote:

Hi,

I am using the latest devstack. I am trying to create a VM with one Ethernet interface, and I am able to create the VM successfully, but not able to get an IP for the Ethernet interface. I have the OpenStack controller running the following:

- nova-api
- nova-cert
- nova-consoleauth
- nova-scheduler
- quantum-dhcp-agent
- quantum-openvswitch-agent

And I also have an OpenStack host node running the following:

- nova-api
- nova-compute
- quantum-openvswitch-agent

I am not seeing any errors in the logs related to nova or quantum. I observed that when I execute 'dhclient' in the VM, the 'br-int' interface on the OpenStack controller is getting DHCP requests, but not sending replies. Please let me know what I am doing wrong here. Thanks in advance.

--
Srikanth.
Re: [Openstack] Not able to get IP address for VM
To test whether it's running, you can check whether the metadata process is running; you can also use the solution Daniel suggested. To check whether VMs are able to access metadata, I think you have to connect from an address that is registered as a legitimate network in nova. VMs connect to the address 169.254.169.254 to receive the information provided by the metadata service. This is a non-routable address, so you must have iptables NAT rules that rewrite it to the proper destination, the nova-metadata service address (the server IP where the metadata service is running); if things are set up properly, these rules are already in place, added by nova.

If you manage to get inside the VM, you can run this command to check whether you can reach metadata:

curl http://169.254.169.254/latest/meta-data/public-ipv4

As I can see from your config you are using Quantum, so you can also just run:

nc -v 169.254.169.254 80

from the DHCP network namespace of your fixed network. To debug it further, run tcpdump in every namespace involved to find out how far the packets get. And make sure your network topology is set up as in the docs: http://docs.openstack.org/trunk/openstack-network/admin/content/connectivity.html

On Wed, Oct 24, 2012 at 9:40 PM, Daniel Vázquez daniel2d2...@gmail.com wrote:

As the root user:

service openstack-nova-metadata-api status

or:

/etc/init.d/openstack-nova-metadata-api status

bests,

2012/10/24 Srikanth Kumar Lingala srikanthkumar.ling...@gmail.com:

@janis: How can I check that the metadata service is working?

@Salvatore: The DHCP agent is working fine and I am not seeing any ERROR logs. I am able to see the dnsmasq processes, and I can see the MAC entries in the hosts file. The tap interface is created on the host node and attached to br-int.

Regards,
Srikanth.

On Wed, Oct 24, 2012 at 7:24 PM, Salvatore Orlando sorla...@nicira.com wrote:

Srikanth, from your analysis it seems that L2 connectivity between the compute and controller nodes is working as expected. Before looking further, it is maybe worth ruling out the obvious problems. Hence:

1) Is the dhcp-agent service running (or is it stuck in some error state)?
2) Can you see dnsmasq instances running on the controller node? If yes, do you see your VM's MAC in the hosts file for the dnsmasq instance?
3) If dnsmasq instances are running, can you confirm the relevant tap ports are inserted on the Open vSwitch instance br-int?

Salvatore
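Pulling the checks above together, a small diagnostic script along these lines could be run on the network node. The network ID is a placeholder (substitute the real one from 'quantum net-list'), and the script only prints the commands it would execute, so it can be reviewed before running any of them.

```shell
#!/bin/sh
# Hypothetical fixed-network ID; replace with your own.
NET_ID="00000000-0000-0000-0000-000000000000"
NS="qdhcp-$NET_ID"
METADATA_IP="169.254.169.254"

# Print the diagnostic commands rather than executing them (side-effect free sketch).
echo "ip netns exec $NS nc -v $METADATA_IP 80"
echo "ip netns exec $NS curl -s http://$METADATA_IP/latest/meta-data/public-ipv4"
echo "ip netns exec $NS tcpdump -n -i any host $METADATA_IP"
```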
Re: [Openstack] metadata api with Quantum and provider networks
Hi,

Thanks for your patience answering all the questions, Dan; I have solved all the issues and got the thing working. I could possibly add some content to the Quantum docs about my experience; whom should I contact in this regard?

Regards,
--janis
Re: [Openstack] metadata api with Quantum and provider networks
Hi,

I have managed to create the Quantum router so that all the VMs created on the provider network go through it; I simply had to use router-interface-add. But now I have a problem accessing the metadata service. The L3 agent adds a DNAT rule in the router, so the packets are rewritten and arrive at the metadata host, but the return traffic doesn't go back through the router; it goes directly to the source host, resulting in a TCP handshake failure. Could it be because all the ports in br-int related to this provider network are within the same VLAN (they have the same tag in the ovs-vsctl output) and thus in the same broadcast domain?

The above could be solved with an SNAT rule to the router address, but then the metadata service sees the router's address instead of the VM's and returns an HTTP 404 error. Is it possible to fix this with an additional flow in Open vSwitch somehow? The vswitch man pages say it is possible to match against IPs. What other solution is there to fix this?

Thanks,
--janis

On Mon, Oct 8, 2012 at 11:04 PM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

Here is the output, with a few details.
# quantum router-show router_vlan1501
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 3c7383c7-7759-4db6-ba5b-19e754280cb8 |
| name                  | router_vlan1501                      |
| status                | ACTIVE                               |
| tenant_id             | 7246a7e9d61f42b8a644bc1551a2a396     |
+-----------------------+--------------------------------------+

# quantum net-show vlan1501
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | c2161824-a439-40e5-8809-5599f80df2fe |
| name                      | vlan1501                             |
| provider:network_type     | vlan                                 |
| provider:physical_network | default                              |
| provider:segmentation_id  | 1501                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | af6eac1e-dfec-49a0-bfc2-3fbf9a7063b3 |
| tenant_id                 | 7246a7e9d61f42b8a644bc1551a2a396     |
+---------------------------+--------------------------------------+

# quantum router-gateway-set 3c7383c7-7759-4db6-ba5b-19e754280cb8 c2161824-a439-40e5-8809-5599f80df2fe
Bad router request: Network c2161824-a439-40e5-8809-5599f80df2fe is not a valid external network

I assume this happens because 'router:external' is False; when switching it to True, the above command succeeds. But if I then want to switch back later (with no ports attached, no routers, even no subnets), I get this:

# quantum net-update vlan1501 --router:external False
External network e011d68b-6abd-43a4-b6c3-9dbadf5344ee cannot be updated to be made non-external, since it has existing gateway ports

Anyway, I have set 'router:external' to True for the 'vlan1501' network and set the gateway for my newly created quantum router (router_vlan1501) with 'router-gateway-set'. As a result, quantum has created a router namespace with a single interface from my configured provider network 'vlan1501', with the default gateway set to that of the subnet attached to the 'vlan1501' network. But now if I boot a VM with the '--nic=vlan1501_id' option, I get a VM whose default gateway is set to the 'vlan1501' network's gateway (not the freshly created router's gateway; am I missing something in the configs?), which is the same as the created router's gateway.
Another thing: if I switch to the newly created 'router_vlan1501' namespace, I can't actually ping the external gateway that is there as the default gw for the 'vlan1501' net. So my thinking is that I need to change the default gw in the VM to the virtual router?

On Mon, Oct 8, 2012 at 9:32 PM, Dan Wendlandt d...@nicira.com wrote:

On Mon, Oct 8, 2012 at 12:27 PM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

On Mon, Oct 8, 2012 at 6:24 PM, Dan Wendlandt d...@nicira.com wrote:

On Mon, Oct 8, 2012 at 7:52 AM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

Hello,

When using provider networks in Quantum, where must the metadata service rule mapping (e.g. 169.254.169.254:80 - metadata_server:metadata_port) be set? For example, for floating IPs the l3-agent handles this, but for provider networks no router is used. I tried
[Openstack] metadata api with Quantum and provider networks
Hello,

When using provider networks in Quantum, where must the metadata service rule mapping (e.g. 169.254.169.254:80 - metadata_server:metadata_port) be set? For example, for floating IPs the l3-agent handles this, but for provider networks no router is used. I tried to set a custom iptables rule for this, but had a hard time understanding where to set it, as there are Open vSwitch and namespaces involved. I'm using a provider network configuration with VLANs.

Regards,
--janis
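For illustration, the kind of DNAT rule being asked about, rewriting metadata traffic to the real nova-metadata endpoint inside a router namespace, could be added by hand roughly like this. The namespace name, metadata host, and port are placeholders for the sketch, and the command is printed rather than executed.

```shell
#!/bin/sh
# Hypothetical values: router namespace and nova-metadata endpoint.
ROUTER_NS="qrouter-00000000-0000-0000-0000-000000000000"
METADATA_HOST="10.0.0.1"
METADATA_PORT="8775"

# Rewrite traffic for 169.254.169.254:80 to the real metadata service.
RULE="-t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
 -j DNAT --to-destination $METADATA_HOST:$METADATA_PORT"
echo "ip netns exec $ROUTER_NS iptables $RULE"
```

This is essentially what the l3-agent installs for routers it manages; on a gateway it does not manage, an equivalent rule has to be placed wherever the VMs' traffic to 169.254.169.254 actually passes.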
Re: [Openstack] metadata api with Quantum and provider networks
On Mon, Oct 8, 2012 at 6:24 PM, Dan Wendlandt d...@nicira.com wrote:

On Mon, Oct 8, 2012 at 7:52 AM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

Hello,

When using provider networks in Quantum, where must the metadata service rule mapping (e.g. 169.254.169.254:80 - metadata_server:metadata_port) be set? For example, for floating IPs the l3-agent handles this, but for provider networks no router is used. I tried to set a custom iptables rule for this, but had a hard time understanding where to set it, as there are Open vSwitch and namespaces involved. I'm using a provider network configuration with VLANs.

You actually could use the Quantum L3 router as your gateway even if VMs are on a provider network, but I suspect your question is actually more along the lines of: if I want my gateway to be a physical router not managed by Quantum, how does the DNAT rule for metadata get applied? In this case, you need to apply the DNAT rule manually to the physical router, which I believe is the same as if you were using flat networking with Nova and a physical router.

Adding the rule in the physical router is not a good idea, because then the configuration of OpenStack crosses the software/server border into network equipment, which can add to complexity later. I tried to add the provider network to a quantum router, and the quantum CLI rejected it. AFAIK router-interface-add is for internal networks, and router-gateway-set is also failing. Which CLI command should be used for adding a provider network to an existing quantum router?

There may also be a more complex solution achievable via quantum, in which the provider creates a quantum router with an interface on the provider network, VMs are each given a host route to send traffic destined for 169.254.169.254/32 to this quantum router IP rather than the physical default gateway, and this quantum router performs the DNAT. However, it's probably much easier to just apply this rule to your physical router.

No, this is no good.

Dan

Regards,
--janis

--
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~

Regards,
--janis
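The CLI sequence under discussion, marking the provider network external and then attaching it as a router gateway, would look roughly like this (names taken from the thread; the subnet ID is left as a placeholder, and the commands are printed rather than executed):

```shell
#!/bin/sh
NET="vlan1501"
ROUTER="router_vlan1501"

# A network can only be used with router-gateway-set once it is external.
echo "quantum net-update $NET --router:external True"
echo "quantum router-create $ROUTER"
echo "quantum router-gateway-set $ROUTER $NET"
# For an internal (tenant) network, the interface is attached per subnet instead:
echo "quantum router-interface-add $ROUTER <subnet-id>"
```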
Re: [Openstack] metadata api with Quantum and provider networks
Here is the output, with a few details.

# quantum router-show router_vlan1501
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 3c7383c7-7759-4db6-ba5b-19e754280cb8 |
| name                  | router_vlan1501                      |
| status                | ACTIVE                               |
| tenant_id             | 7246a7e9d61f42b8a644bc1551a2a396     |
+-----------------------+--------------------------------------+

# quantum net-show vlan1501
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | c2161824-a439-40e5-8809-5599f80df2fe |
| name                      | vlan1501                             |
| provider:network_type     | vlan                                 |
| provider:physical_network | default                              |
| provider:segmentation_id  | 1501                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | af6eac1e-dfec-49a0-bfc2-3fbf9a7063b3 |
| tenant_id                 | 7246a7e9d61f42b8a644bc1551a2a396     |
+---------------------------+--------------------------------------+

# quantum router-gateway-set 3c7383c7-7759-4db6-ba5b-19e754280cb8 c2161824-a439-40e5-8809-5599f80df2fe
Bad router request: Network c2161824-a439-40e5-8809-5599f80df2fe is not a valid external network

I assume this happens because 'router:external' is False; when switching it to True, the above command succeeds. But if I then want to switch back later (with no ports attached, no routers, even no subnets), I get this:

# quantum net-update vlan1501 --router:external False
External network e011d68b-6abd-43a4-b6c3-9dbadf5344ee cannot be updated to be made non-external, since it has existing gateway ports

Anyway, I have set 'router:external' to True for the 'vlan1501' network and set the gateway for my newly created quantum router (router_vlan1501) with 'router-gateway-set'. As a result, quantum has created a router namespace with a single interface from my configured provider network 'vlan1501', with the default gateway set to that of the subnet attached to the 'vlan1501' network. But now if I boot a VM with the '--nic=vlan1501_id' option, I get a VM whose default gateway is set to the 'vlan1501' network's gateway (not the freshly created router's gateway; am I missing something in the configs?), which is the same as the created router's gateway.
Another thing: if I switch to the newly created 'router_vlan1501' namespace, I can't actually ping the external gateway that is there as the default gw for the 'vlan1501' net. So my thinking is that I need to change the default gw in the VM to the virtual router?

On Mon, Oct 8, 2012 at 9:32 PM, Dan Wendlandt d...@nicira.com wrote:

On Mon, Oct 8, 2012 at 12:27 PM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

On Mon, Oct 8, 2012 at 6:24 PM, Dan Wendlandt d...@nicira.com wrote:

On Mon, Oct 8, 2012 at 7:52 AM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

Hello,

When using provider networks in Quantum, where must the metadata service rule mapping (e.g. 169.254.169.254:80 - metadata_server:metadata_port) be set? For example, for floating IPs the l3-agent handles this, but for provider networks no router is used. I tried to set a custom iptables rule for this, but had a hard time understanding where to set it, as there are Open vSwitch and namespaces involved. I'm using a provider network configuration with VLANs.

You actually could use the Quantum L3 router as your gateway even if VMs are on a provider network, but I suspect your question is actually more along the lines of: if I want my gateway to be a physical router not managed by Quantum, how does the DNAT rule for metadata get applied? In this case, you need to apply the DNAT rule manually to the physical router, which I believe is the same as if you were using flat networking with Nova and a physical router.

Adding the rule in the physical router is not a good idea, because then the configuration of OpenStack crosses the software/server border into network equipment, which can add to complexity later.

Yes, it's hard to have it both ways... if you want everything done automatically via software, I'd suggest using the quantum router as the gateway, not an external physical router.

I tried to add provider network to quantum
Re: [Openstack] Is it possible to have several floating IPs per VM?
Hello, I have asked a similar question about floating IPs before, but the Quantum docs have now been updated with a section about provider networks. Can this (multiple IPs from the same public net) be done when using provider networks? As I understand from the docs, they map directly to VMs without a fixed IP + NAT in the middle, and the user would see the real IP assigned to an interface inside the VM?

On Fri, Oct 5, 2012 at 8:23 PM, Dan Wendlandt d...@nicira.com wrote: On Fri, Oct 5, 2012 at 4:32 AM, Heinonen, Johanna (NSN - FI/Espoo) johanna.heino...@nsn.com wrote:

Hi, I was reading the Quantum admin guide (Folsom release). There was a use case "per-tenant routers with private networks". In this example all floating IPs were from the same subnet (30.0.0.0/22). I was wondering whether it is possible to have several floating IP subnets, and could one VM have a floating IP from each of those? (If I have an application that must be reachable from the internet via two different interfaces with two different IP addresses, can I do it with Quantum?)

Not right now. With Quantum, a router can only uplink to a single external network, and you can have at most one floating IP per external network (otherwise we could not unambiguously apply the policy of SNATing VM-initiated connections to the floating IP). In Grizzly, we're planning on making router uplinks more sophisticated, which will include not only the ability to uplink to multiple networks with floating IPs, but also to uplink to other types of connectivity, such as an external VPN. I'll update the admin guide to make this clear.
Dan

Best regards, Johanna

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp

-- ~~~ Dan Wendlandt Nicira, Inc: www.nicira.com twitter: danwendlandt ~~~

-- --janis
Re: [Openstack] Network IP address setting in nova.conf file
Hello Ahmed,

On Thu, Oct 4, 2012 at 9:08 AM, Ahmed Al-Mehdi ah...@coraid.com wrote: Hi Anne, thank you for the explanation. A few follow-up questions:

1. Is the set of IP addresses given by fixed_range distributed over all the Compute Nodes? E.g., a VM on Compute Node1 would get an IP address from this range, and another VM instance on Compute Node2 would get an IP address from this same range. Is that right?

I think it depends on the configuration, but with the standard setup as described in the docs, it's done exactly as you think.

2. Pardon my ignorance, what does POC stand for?

It might be "Proof of Concept"; depends on the context.

3. The br100 interface can be created on a native Ubuntu server also, not necessarily on a VM in VirtualBox, right?

4. The br100 interface is only applicable to the Compute Node, not the Controller Node, right?

http://docs.openstack.org/essex/openstack-compute/admin/content/libvirt-flat-dhcp-networking.html This image gives a good illustration of the responsibilities assigned to br100. As I understand it, br100 is the interface where the VM network is plugged in, so it makes sense to have it only on compute nodes. But you can assign addresses from different subnets to the bridge, and make the instances accessible from the controller node as well.

5. The command to create the network for compute VMs - 'nova-manage network create …' - is executed on the Compute Node, right?

nova-manage works with the configured database and uses your nova.conf for getting the DB DSN. It is independent from compute nodes. It depends on the network configuration what happens later and where it's executed. I think the nova-network service handles all the network stuff if you are not using quantum, but I might be wrong here.

6. In my setup, one physical host will be the Controller Node, one physical host is the Compute Node, and I might add another physical host to be a Compute Node. In this setup, what is the recommended setup for public_interface?
Or does that depend on the number of NIC ports on the physical host?

The public interface is where the floating IPs are handled. Better to make it separate; although I have read that people have managed to build compute nodes with a single interface, it might be more painful to set up.

Thank you, Ahmed.

On 10/3/12 8:26 PM, Anne Gentle a...@openstack.org wrote: Hi Ahmed - I have logged a doc bug to clear up this mismatch: https://bugs.launchpad.net/openstack-manuals/+bug/1061352. Appreciate you asking! Here's some explanation for each setting.

fixed_range - the fixed block of IP addresses handed out to VMs as they're provisioned. So this could be a 10.0.0.0/4 block if you want, or the 192... block.

public_interface - some say this needs to be a physical NIC, but when you're running a POC on a VM on a laptop in VirtualBox, for example, it can be br100; that's what DevStack defaults to. This setting indicates the interface nova-network uses to send the floating IP traffic on the correct network. So if you have nova-network on each compute node, then yes, it belongs on the compute node.

The Install/Deploy guide walks through Flat DHCP as the network manager. If you use --network_bridge=br100 when you run the nova-manage network command, nova will set up the bridge for you. This example command uses 192.168.100.0/24 for the fixed range of IP addresses: http://docs.openstack.org/trunk/openstack-compute/install/apt/content/compute-create-network.html

Hope this helps. Others on the list, feel free to correct my explanations as needed! Thanks, Anne

On Wed, Oct 3, 2012 at 6:46 PM, Ahmed Al-Mehdi ah...@coraid.com wrote: Hello, I am following the steps in "OpenStack Install and Deploy - Red Hat / Ubuntu (Folsom)" to set up a Controller node. The section "Configuring OpenStack Compute" ( http://docs.openstack.org/trunk/openstack-compute/install/apt/content/compute-minimum-configuration-settings.html ) gives a snippet of an example of the values of parameters in the nova.conf file.
The document first lists some common settings in the nova.conf file. I am highlighting the following two, which I am concerned about:

fixed_range=192.168.100.0/24
public_interface=eth0

Right after that, the document gives the whole content of a sample (usable) nova.conf file for the Controller Node, in which the above settings are set as follows:

fixed_range=10.0.0.0/24
public_interface=br100

I am assuming fixed_range=192.168.100.0/24 is the correct setting. However, can someone please clarify the correct setting for public_interface? And would the same setting value be applicable for the Compute Node? Thank you very much in advance. Regards, Ahmed.
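Pulling the pieces of this thread together, a minimal Flat DHCP nova.conf fragment might look like the sketch below. The interface names are assumptions (eth1 as the VM-traffic NIC bridged into br100, eth0 as the public NIC); substitute whatever your hosts actually have, and the fixed range follows the thread's 192.168.100.0/24 example.

```ini
# nova.conf (fragment) -- Flat DHCP networking; interface names are assumptions
network_manager=nova.network.manager.FlatDHCPManager
fixed_range=192.168.100.0/24      ; pool of fixed IPs handed out to VMs
flat_network_bridge=br100         ; bridge nova-network creates/uses for VM traffic
flat_interface=eth1               ; physical NIC plugged into br100 (assumption)
public_interface=eth0             ; NIC where floating IPs are bound (assumption)
```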
Re: [Openstack] Just a newbie getting some error messages
Hi Daniel,

Have you set up the [filter:authtoken] section in glance-api-paste.ini? You need to replace the placeholder strings containing percent signs with the correct auth credentials. To get more detailed output, use 'glance -d index', which will show additional debug information.

On Thu, Oct 4, 2012 at 4:07 PM, Daniel Oliveira dvalbr...@gmail.com wrote: Hello, I've been trying to install OpenStack on a server by following the manual installation tutorial on openstack.org for Ubuntu Server 12.04 (and that's the OS I'm using, obviously). But when it comes to testing whether Glance was installed successfully ( http://docs.openstack.org/essex/openstack-compute/install/apt/content/images-verifying-install.html ), I get the following error message:

Failed to show index. Got error:
Unexpected responde: 500

The same happens when I try to do 'glance index', which I guess should produce no output instead. I really need some help on this, since I'm not that experienced with Linux. Sorry for any grammar errors, English is not my native language ^_^. Thanks in advance, -- My best regards, Daniel Oliveira.

-- --janis
Re: [Openstack] Just a newbie getting some error messages
When you have a fresh install you will usually have something like this in glance-api.conf:

[filter:authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%

and something similar inside glance-api-paste.ini. So if you have these lines with %SERVICE_USER%, %SERVICE_PASSWORD%, and %SERVICE_TENANT_NAME%, you need to replace them all with the real user credentials that have been configured in Keystone. Here you can see sample config files with the sections configured:

http://docs.openstack.org/trunk/openstack-compute/install/content/glance-api-paste-file.html
http://docs.openstack.org/trunk/openstack-compute/install/content/glance-api-conf-file.html

On Thu, Oct 4, 2012 at 4:36 PM, Daniel Oliveira dvalbr...@gmail.com wrote: Hello Janis, I'm not sure I understood what I should replace. Do you mean replacing, for example, the line 'admin_user = glance' in the [filter:authtoken] section with the line 'admin_user = %glance'? And so on for the other credentials?

2012/10/4 Jānis Ģeņģeris janis.genge...@gmail.com: Hi Daniel, have you set up the [filter:authtoken] section in glance-api-paste.ini? You need to replace the placeholder strings containing percent signs with the correct auth credentials. To get more detailed output, use 'glance -d index', which will show additional debug information.

On Thu, Oct 4, 2012 at 4:07 PM, Daniel Oliveira dvalbr...@gmail.com wrote: Hello, I've been trying to install OpenStack on a server by following the manual installation tutorial on openstack.org for Ubuntu Server 12.04 (and that's the OS I'm using, obviously). But when it comes to testing whether Glance was installed successfully ( http://docs.openstack.org/essex/openstack-compute/install/apt/content/images-verifying-install.html ), I get the following error message: Failed to show index.
Got error: Unexpected responde: 500

The same happens when I try to do 'glance index', which I guess should produce no output instead. I really need some help on this, since I'm not that experienced with Linux. Sorry for any grammar errors, English is not my native language ^_^. Thanks in advance, -- My best regards, Daniel Oliveira.

-- --janis

-- My best regards, Daniel Oliveira.

-- --janis
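To answer Daniel's follow-up concretely: no percent signs remain after the replacement. Assuming a Keystone user named 'glance' in the 'service' tenant (both names are deployment-specific assumptions, but they match common install-guide defaults), the placeholder section would end up looking roughly like:

```ini
[filter:authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = your_glance_password   ; whatever was set for this user in Keystone
```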
Re: [Openstack] Network IP address setting in nova.conf file
On Thu, Oct 4, 2012 at 8:23 PM, Ahmed Al-Mehdi ah...@coraid.com wrote: Hi Janis, thank you very much for your response. I have some questions on it, inlined below. Regards, Ahmed.

From: Jānis Ģeņģeris janis.genge...@gmail.com Date: Thursday, October 4, 2012 12:37 AM To: Ahmed Al-Mehdi ah...@coraid.com Cc: Anne Gentle a...@openstack.org, openstack@lists.launchpad.net Subject: Re: [Openstack] Network IP address setting in nova.conf file

Hello Ahmed, On Thu, Oct 4, 2012 at 9:08 AM, Ahmed Al-Mehdi ah...@coraid.com wrote: Hi Anne, thank you for the explanation. A few follow-up questions:

1. Is the set of IP addresses given by fixed_range distributed over all the Compute Nodes? E.g., a VM on Compute Node1 would get an IP address from this range, and another VM instance on Compute Node2 would get an IP address from this same range. Is that right?

I think it depends on the configuration, but with the standard setup as described in the docs, it's done exactly as you think.

2. Pardon my ignorance, what does POC stand for?

It might be "Proof of Concept"; depends on the context.

3. The br100 interface can be created on a native Ubuntu server also, not necessarily on a VM in VirtualBox, right?

4. The br100 interface is only applicable to the Compute Node, not the Controller Node, right?

http://docs.openstack.org/essex/openstack-compute/admin/content/libvirt-flat-dhcp-networking.html This image gives a good illustration of the responsibilities assigned to br100. As I understand it, br100 is the interface where the VM network is plugged in, so it makes sense to have it only on compute nodes. But you can assign addresses from different subnets to the bridge, and make the instances accessible from the controller node as well.

5. The command to create the network for compute VMs - 'nova-manage network create …' - is executed on the Compute Node, right?

nova-manage works with the configured database and uses your nova.conf for getting the DB DSN.
It is independent from compute nodes. It depends on the network configuration what happens later and where it's executed. I think the nova-network service handles all the network stuff if you are not using quantum, but I might be wrong here.

The reason I posed the question is because, in the "Install and Deploy" document, it seems the nova-manage command will create the br100 interface (on the Compute Node). How does it do that if it is executed on the Controller Node?

I might be wrong, but I guess that when you try to create a new instance, an RPC call is made, and if the bridge is not on the compute node already, it is created. For specific details you should check the source.

6. In my setup, one physical host will be the Controller Node, one physical host is the Compute Node, and I might add another physical host to be a Compute Node. In this setup, what is the recommended setup for public_interface? Or does that depend on the number of NIC ports on the physical host?

The public interface is where the floating IPs are handled. Better to make it separate; although I have read that people have managed to build compute nodes with a single interface, it might be more painful to set up.

Are you recommending that the Compute Node have two interfaces (eth0 and eth1, where eth0 is the public interface connected to the outside world)? The br100 interface on the Compute Node is connected to which physical interface, eth0 or eth1?

I would recommend 3 if you are experimenting a lot: one for the floating IP net, one for the service VM net, and one for the address that you use to ssh into the server. Then br100 is connected with eth1. I think this great blog entry from Mirantis will help you a lot: http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/

Thank you, Ahmed.

On 10/3/12 8:26 PM, Anne Gentle a...@openstack.org wrote: Hi Ahmed - I have logged a doc bug to clear up this mismatch: https://bugs.launchpad.net/openstack-manuals/+bug/1061352. Appreciate you asking!
Here's some explanation for each setting.

fixed_range - the fixed block of IP addresses handed out to VMs as they're provisioned. So this could be a 10.0.0.0/4 block if you want, or the 192... block.

public_interface - some say this needs to be a physical NIC, but when you're running a POC on a VM on a laptop in VirtualBox, for example, it can be br100; that's what DevStack defaults to. This setting indicates the interface nova-network uses to send the floating IP traffic on the correct network. So if you have nova-network on each compute node, then yes, it belongs on the compute node.

The Install/Deploy guide walks through Flat DHCP as the network manager. If you use --network_bridge=br100 when you run the nova-manage network command, nova will set up the bridge for you. This example command uses 192.168.100.0/24 for the fixed range of IP addresses: http
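As a side note on sizing fixed_range: Python's standard ipaddress module can show how many fixed IPs a given CIDR actually yields, using the /24 from the example above. This is only an illustration of the address arithmetic, not anything nova-specific.

```python
import ipaddress

# The fixed_range example used in this thread.
fixed_range = ipaddress.ip_network("192.168.100.0/24")

# hosts() excludes the network (.0) and broadcast (.255) addresses;
# nova-network additionally reserves a few (gateway, DHCP) from this pool.
usable = list(fixed_range.hosts())
print(len(usable))   # 254
print(usable[0])     # 192.168.100.1
```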
Re: [Openstack] Quantum network configurations for individual VMs
Thanks a lot, Dan, for the in-depth explanation. I think the part about fixed/floating IP assignment limitations for a VM should go into the docs; it would help avoid pointless experiments.

On Fri, Sep 28, 2012 at 6:36 PM, Dan Wendlandt d...@nicira.com wrote: Hi Janis, On Thu, Sep 27, 2012 at 3:20 PM, Jānis Ģeņģeris janis.genge...@gmail.com wrote:

Hello, what can and cannot be done with Quantum when it comes to NIC configuration for individual VMs? For example, is it possible to have multiple floating IPs and multiple fixed IPs assigned to the same VM (with the IPs coming from the same and/or different subnets)?

The spec for all core APIs is complete and available here: http://docs.openstack.org/api/openstack-network/2.0/content/index.html . Quantum ports have a list of fixed_ips ( http://docs.openstack.org/api/openstack-network/2.0/content/Show_port.html ), meaning that multiple IPs are supported. The floating IP stuff is actually an extension, not part of the core API for Folsom. We're still adding content for extensions to the guide (I think it's under review right now... it should be available early next week). Right now the code actually limits each port to having a single floating IP, but in reality you should probably be able to have a different floating IP for each fixed_ip on the port, and in fact a different floating IP from each external network for each fixed IP on the port (having multiple floating IPs from the same external network for a single fixed IP would lead to ambiguity when SNATing connections). I've filed this bug to track the appropriate code changes: https://bugs.launchpad.net/quantum/+bug/1057844 . The change is very simple, and so should be easy to pull into a stable/folsom release.

How much does it depend on the chosen hypervisor?

None of the Quantum logic depends on the hypervisor. What really matters is the method by which you choose to inject IP addresses into the VM.
The main methods I'm aware of are filesystem injection by nova, DHCP injection, or using some type of agent. Filesystem injection in particular may be hypervisor-specific.

Quantum documentation is quite recent and is still full of 'TBD', but in the current version there is nothing about the features/limitations Quantum brings to individual VM instance network configuration in comparison to legacy nova-network.

Most of the TBDs were filled in by some commits that landed yesterday, but given the large amount of new functionality in Quantum during Folsom, there will definitely be some doc gaps that we need users to help us identify. I'd encourage you to file doc bugs here (https://bugs.launchpad.net/openstack-manuals) to make sure these gaps are brought to the attention of our great docs team.

Dan

Regards, --janis

-- ~~~ Dan Wendlandt Nicira, Inc: www.nicira.com twitter: danwendlandt ~~~

-- --janis
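Dan's point about SNAT ambiguity can be sketched in a few lines. This is a toy model, not Quantum's actual code: it only encodes the rule that a fixed IP may carry at most one floating IP per external network, while floating IPs from different external networks are fine. All names and addresses below are made up for illustration.

```python
# Toy model of the floating-IP association rule discussed above.
# Key: (fixed_ip, external_network) -> floating_ip. A second floating IP
# from the same external network would make the SNAT target ambiguous.
def associate(assocs, fixed_ip, external_net, floating_ip):
    key = (fixed_ip, external_net)
    if key in assocs:
        raise ValueError("ambiguous SNAT: fixed IP already has a "
                         "floating IP on this external network")
    assocs[key] = floating_ip

assocs = {}
associate(assocs, "10.0.0.5", "ext-net-a", "198.51.100.10")  # first mapping: fine
associate(assocs, "10.0.0.5", "ext-net-b", "203.0.113.20")   # different external net: fine
try:
    associate(assocs, "10.0.0.5", "ext-net-a", "198.51.100.11")
except ValueError as exc:
    print("rejected:", exc)   # second floating IP on the same external net is refused
```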
[Openstack] Quantum network configurations for individual VMs
Hello, what can and cannot be done with Quantum when it comes to NIC configuration for individual VMs? For example, is it possible to have multiple floating IPs and multiple fixed IPs assigned to the same VM (with the IPs coming from the same and/or different subnets)? How much does it depend on the chosen hypervisor? Quantum documentation is quite recent and is still full of 'TBD', but in the current version there is nothing about the features/limitations Quantum brings to individual VM instance network configuration in comparison to legacy nova-network. Regards, --janis
Re: [Openstack] why nova live-migration doesn't work?
What do the compute and scheduler logs say?

On Fri, Aug 10, 2012 at 10:23 AM, 王鹏 breakwin...@gmail.com wrote: Hi everyone, when I test live-migration, strangely there is no error, but the instance doesn't move. Please help me figure out why. The instance log is as follows:

2012-08-10 06:47:24.815+: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-0034 -uuid 6c576b16-aabe-42d8-aae4-ef7d9ca4fbbd -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0034.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/nova/instances/instance-0034/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:5a:6f:fd,bus=pci.0,addr=0x3 -netdev tap,fd=21,id=hostnet1 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=fa:16:3e:76:ca:c6,bus=pci.0,addr=0x4 -chardev file,id=charserial0,path=/var/lib/nova/instances/instance-0034/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb -device usb-tablet,id=input0 -vnc 172.18.32.8:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
Domain id=1 is tainted: high-privileges
char device redirected to /dev/pts/4

It looks like just a start-up log? ... Thanks!
-- --janis