[Yahoo-eng-team] [Bug 1499784] [NEW] ovsdb native implementation calls to ovs-vsctl to open the connection port on each BaseOVS __init__ call
Public bug reported:

This makes the agent spend a lot of time and CPU in rootwrap for no
good reason.

** Affects: neutron
     Importance: Undecided
     Assignee: Ihar Hrachyshka (ihar-hrachyshka)
       Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499784

Status in neutron:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499784/+subscriptions
-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1499787] [NEW] Static routes are attempted to add to SNAT Namespace of DVR routers without checking for Router Gateway.
Public bug reported:

In DVR routers, static routes are now added only to the snat
namespace, but the router is not first checked for the existence of a
gateway before the routes are added there.

** Affects: neutron
     Importance: Undecided
       Status: New

** Tags: l3-dvr-backlog

https://bugs.launchpad.net/bugs/1499787

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499787/+subscriptions
[Yahoo-eng-team] [Bug 1499785] [NEW] Static routes are not added to the qrouter namespace for DVR routers
Public bug reported:

Static routes are not added to the qrouter namespace when routers are
added. Initially the routes were configured in the qrouter namespace
but not in the SNAT namespace. A recent patch,
2bb48eb58ad28a629dd12c434b83680aa3f240a4, caused this regression by
moving the routes from the qrouter namespace to the SNAT namespace.

** Affects: neutron
     Importance: Undecided
       Status: New

** Tags: l3-dvr-backlog

https://bugs.launchpad.net/bugs/1499785

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499785/+subscriptions
[Yahoo-eng-team] [Bug 1499821] [NEW] ovs_lib.OVSBridge.get_ports_attributes returns all ports in case there's no ports on the OVSBridge in question
Public bug reported:

If OVSBridge.get_ports_attributes is executed on an empty bridge,
get_port_name_list will return an empty string. In that case,
ovsdb.db_list is executed with port_names = '', meaning that it will
return all ports instead of no ports.

The implication is that if, for example, br-ex (an ancillary bridge in
the OVS agent) is currently empty, then scan_ancillary_ports will pick
up all ports. All ports on the system will be considered
ancillary_ports, which is unexpected and can result in ports going
DOWN when they shouldn't.

** Affects: neutron
     Importance: Undecided
       Status: New

https://bugs.launchpad.net/bugs/1499821

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499821/+subscriptions
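A sketch of the guard this bug suggests, assuming simplified stand-ins for the real API: an empty port list must short-circuit before reaching db_list, since passing no record names selects every row. `fake_db_list` below is an invented stub, not the real ovsdb interface.

```python
# Illustrative guard (names follow the report; fake_db_list is a stub for
# ovsdb.db_list): an empty port list returns no rows instead of being passed
# through, where it would match every port in the database.

def get_ports_attributes(port_names, db_list):
    if not port_names:        # empty bridge: return no ports, not all ports
        return []
    return db_list("Port", port_names)

all_rows = [{"name": "tap1"}, {"name": "eth0"}]
fake_db_list = lambda table, names: [r for r in all_rows if r["name"] in names]

rows = get_ports_attributes([], fake_db_list)  # empty bridge case
```

With the guard, an empty br-ex yields an empty scan result instead of every port on the system.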
[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 fails
This is either because curtin is not installing the cloud
configuration for MAAS, cloud-init is not reading the correct config,
or cloud-init cannot talk to MAAS. I believe cloud-init changed to
python-oauthlib instead of python-oauth, so that might be the issue.
Going to target both curtin and cloud-init as well, just to make sure
the appropriate eyes see this.

** Also affects: curtin
     Importance: Undecided
       Status: New

** Also affects: cloud-init
     Importance: Undecided
       Status: New

** Changed in: maas
    Milestone: None => 1.9.0

https://bugs.launchpad.net/bugs/1499869

Status in cloud-init: New
Status in curtin: New
Status in MAAS: New

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt.
  I can install wily via netboot without issue and, some time back,
  wily was deployable to this node from MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions
[Yahoo-eng-team] [Bug 1499893] [NEW] Native OVSDB DbSetCommand shows O(n) performance
Public bug reported:

Create 100 tenants, each one with the following setup, where each
router is scheduled to the same legacy node that has the L3 agent
configured to use the native OVSDB interface:

  tenant network --- router --- external network

Reference http://ibin.co/2GuI6plJvngR for a graph of performance
during setup of the 100 routers. In that graph, the y-axis is time in
seconds and the x-axis is passes through _ovs_add_port (two per router
add). DbSetCommand's runtime increases with each router add. To
support scale, this needs to be closer to O(1) and perform
significantly better than using ovs-vsctl via the rootwrap daemon.

** Affects: neutron
     Importance: Medium
       Status: New

** Tags: kilo-backport-potential liberty-rc-potential performance

** Changed in: neutron
   Importance: High => Medium

** Tags added: kilo-backport-potential liberty-rc-potential

https://bugs.launchpad.net/bugs/1499893

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499893/+subscriptions
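One general way to approach this class of problem is to amortize OVSDB writes: queue set operations and commit them in a single transaction rather than paying a round-trip per command. The `FakeTransaction` class below is a simplified stand-in sketched for illustration; it is not the real ovs idl API.

```python
# Illustrative sketch of amortizing OVSDB writes: queue many DB set
# operations and apply them in one commit. FakeTransaction is an invented
# stand-in for a native-OVSDB transaction object, not the real idl API.

class FakeTransaction:
    def __init__(self, db):
        self.db = db
        self.queued = []

    def add(self, table, record, **values):
        """Queue a set operation; nothing touches the DB yet."""
        self.queued.append((table, record, values))

    def commit(self):
        """Apply all queued operations in a single pass (one 'round trip')."""
        for table, record, values in self.queued:
            self.db.setdefault(table, {}).setdefault(record, {}).update(values)
        self.queued = []

db = {}
txn = FakeTransaction(db)
for i in range(100):
    txn.add("Interface", "port%d" % i, ofport=i)
txn.commit()  # 100 sets, a single commit
```

The point is that per-router cost stays flat as the table grows, which is the behavior the bug asks for.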
[Yahoo-eng-team] [Bug 1499658] Re: Consume wsgi module from oslo.service
This is not a bug, at least not for nova. If you want to do some
refactoring to use things from an oslo library, you can open a
specless blueprint in nova and bring it to the nova meeting (under
open discussion we talk about specless blueprints), and then it could
be worked that way.

** Changed in: nova
       Status: New => Invalid

** Changed in: nova
   Importance: Undecided => Wishlist

https://bugs.launchpad.net/bugs/1499658

Status in Cinder: New
Status in neutron: In Progress
Status in OpenStack Compute (nova): Invalid

Bug description:
  Basic WSGI functionality has been moved to oslo.service [1] and now
  OpenStack projects can adopt it.

  [1] https://github.com/openstack/oslo.service/blob/master/oslo_service/wsgi.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1499658/+subscriptions
[Yahoo-eng-team] [Bug 1499751] Re: OpenStack (nova boot exactly) allows only one SSH key.
We don't track RFEs through launchpad bugs. If you'd like to submit a
blueprint and spec to the nova-specs repo then we could review the
idea there, even if only as a limited backlog spec where you don't
necessarily need design/implementation details, just the high-level
problem statement and use cases.

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
       Status: New => Opinion

https://bugs.launchpad.net/bugs/1499751

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  ii  nova-api           1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - API frontend
  ii  nova-cert          1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - certificate management
  ii  nova-common        1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - common files
  ii  nova-conductor     1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - conductor service
  ii  nova-consoleauth   1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy    1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler     1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova        1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute Python libraries
  ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0    all  client library for OpenStack Compute API

  The problem was described at
  https://ask.openstack.org/en/question/82224/is-it-possible-to-create-instance-with-multiple-ssh-keys/

  It looks like OpenStack (nova) allows you to specify only one SSH
  key when an instance is created. I believe an array of strings
  should be supported instead of a single string only, as
  authorized_keys allows for more than one SSH key. Workarounds like
  using a merged key are not always an option, as it scales poorly
  (see the example in the question linked above).
  Wishlist/enhancement request of course.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499751/+subscriptions
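The merged-key workaround mentioned in the report can be sketched as follows: concatenate several public keys into one authorized_keys-shaped payload that can be injected as a single blob. The helper name and the placeholder key strings are invented for illustration.

```python
# Sketch of the "merged key" workaround the reporter mentions: join several
# public keys into authorized_keys format (one key per line). The function
# name and key strings are placeholders, not nova API calls.

def merge_ssh_keys(keys):
    """Join multiple public keys, one per line, with a trailing newline."""
    return "\n".join(k.strip() for k in keys if k.strip()) + "\n"

keys = [
    "ssh-rsa AAAA...alice alice@example",
    "ssh-ed25519 AAAA...bob bob@example",
]
payload = merge_ssh_keys(keys)
```

As the report notes, this scales poorly operationally (every key change means rebuilding and re-injecting the blob), which is why native multi-key support is requested.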
[Yahoo-eng-team] [Bug 1499752] Re: reading a ospfd.vty file take time to read in docker container launched from openstack
What do you mean by "launch the container from controller node" - are
you using the nova-docker virt driver and nova to create these docker
VMs via Nova? Or something else? The nova-docker driver isn't in the
nova project anymore, so if you're using that and have bugs, report
them against the nova-docker project in launchpad.

** Changed in: nova
       Status: New => Invalid

https://bugs.launchpad.net/bugs/1499752

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. OpenStack version

  ii  nova-api           1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - API frontend
  ii  nova-cert          1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - certificate management
  ii  nova-common        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - common files
  ii  nova-conductor     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - conductor service
  ii  nova-consoleauth   1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy    1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute Python libraries
  ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0      all  client library for OpenStack Compute API

  3. Reproduce steps

  * I made a docker container image which has a control application
    like quagga (has zebra and ospf daemons).
  * Then glanced it and launched the container from the controller
    node.
  * When the container is up, I launch zebra and ospf from the
    container using the following commands from the container
    terminal:

    for zebra:
    sudo /usr/local/quagga/sbin/zebra -u root -g root -i /usr/local/quagga/etc/zebra.pid

    for ospf:
    /usr/local/quagga/sbin/ospfd -u root -g root -i /usr/loc/usr/local/quagga/etc/ospfd.pid

  * Then if I immediately want to attach a vtysh terminal to the ospf
    daemon, it cannot connect right away; it takes 3 minutes to
    attach.
  * On debugging I found that a linux read() from a file halts the
    ospf daemon.

  Expected result:
  * vtysh should connect immediately to ospf

  Actual result:
  * It takes around 3 minutes to connect.

  Remark: when I launch the container from my VM and NOT via
  OpenStack, I get the desired result, but not with a container
  launched with OpenStack (that's the reason I am reporting the bug
  here).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499752/+subscriptions
[Yahoo-eng-team] [Bug 1492398] Re: VXLAN Overlay ping issue when Gateway IP is set to one of local NIC's IP address
I've marked the security advisory task "won't fix" but added the
security tag in case this corner case may be of interest to OSSN and
Security Guide editors.

** Tags added: security

** Changed in: ossa
       Status: Incomplete => Won't Fix

https://bugs.launchpad.net/bugs/1492398

Status in neutron: New
Status in OpenStack Security Advisory: Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed
  (private) security vulnerabilities before their coordinated
  publication by the OpenStack Vulnerability Management Team in the
  form of an official OpenStack Security Advisory. This includes
  discussion of the bug or associated fixes in public forums such as
  mailing lists, code review systems and bug trackers. Please also
  avoid private disclosure to other individuals not already approved
  for access to this information, and provide this same reminder to
  those who are made aware of the issue prior to publication. All
  discussion should remain confined to this private bug report, and
  any proposed fixes should be added to the bug as attachments.

  There's an issue when a VXLAN overlay VM tries to ping an overlay IP
  address that is also the same as one of the host machine's local IP
  addresses. In my setup, I've tried pinging the overlay VM's router's
  IP address. Here are the details:

  VXLAN id is 100 (this number is immaterial; what matters is that we
  use VXLAN for tenant traffic).

  Overlay VM:
    IP: 10.0.1.3/24
    GW: 10.0.1.1

  Host info:
    enp21s0f0: 1.1.1.5/24 (this interface is used to contact the
      controller as well as for encapsulated datapath traffic)
    qbr89a962f7-9b: Linux bridge to which the overlay VM connects. No
      IP address on this one.

  brctl show:
    qbr89a962f7-9b  8000.56f6fefb9d5c  no  qvb89a962f7-9b
                                           tap89a962f7-9b

  ifconfig qbr89a962f7-9b:
    qbr89a962f7-9b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::54f6:feff:fefb:9d5c  prefixlen 64  scopeid 0x20<link>
            ether 56:f6:fe:fb:9d:5c  txqueuelen 0  (Ethernet)
            RX packets 916  bytes 27072 (26.4 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 10  bytes 780 (780.0 B)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

  I am using a previously unused NIC named eno1 for this example. When
  eno1 has no IP address, ping from the overlay VM to the router is
  successful, and ARP on the VM shows the correct MAC resolution. When
  I set eno1 to 10.0.1.1, ARP on the overlay VM shows
  qbr89a962f7-9b's MAC address and the ping never succeeds.

  When things work OK, ARP for 10.0.1.1 is fa:16:3e:0c:52:6d. When
  eno1 is set to 10.0.1.1, ARP resolution is incorrect: 10.0.1.1
  resolves to 56:f6:fe:fb:9d:5c and the ping never succeeds. I've
  deleted ARPs to ensure that resolution is triggered. It appears as
  if the OVS br-int never received the ARP request.

  Thanks,
  -Uday

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492398/+subscriptions
[Yahoo-eng-team] [Bug 1499841] [NEW] Install guide is redirecting to "juno" rather than "kilo"
Public bug reported:

http://docs.openstack.org/developer/keystone/installing.html#installing-from-packages-fedora

** Affects: keystone
     Importance: Undecided
     Assignee: venkatamahesh (venkatamaheshkotha)
       Status: In Progress

** Tags: low-hanging-fruit

** Changed in: keystone
     Assignee: (unassigned) => venkatamahesh (venkatamaheshkotha)

** Changed in: keystone
       Status: New => In Progress

https://bugs.launchpad.net/bugs/1499841

Status in Keystone:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1499841/+subscriptions
[Yahoo-eng-team] [Bug 1499864] [NEW] Fullstack infrastructure as a developer multi-node deployment tool
Public bug reported:

The fullstack testing infrastructure today is used purely in a
testing context. This RFE suggests that it could be useful to have
fullstack support another use case: a quick deployment tool for
developers who want to manually test something they're working on, or
who want to learn about Neutron or a specific feature.

Neutron would expose a script that would accept a deployment topology
document describing what is currently here:
https://github.com/openstack/neutron/blob/master/neutron/tests/fullstack/test_l3_agent.py#L61

For example, a .yaml file with:
* How many OVS, L3, DHCP agents
* Global configuration such as the segmentation type,
  l2pop={True,False}, the OVS ARP responder, etc.

The script would then deploy the requested topology and spit out a
credentials file to interact with the API server, information about
the agents it deployed (their host names, state paths, paths to
configuration files, etc.), and perhaps also a plotnetcfg [1] output
of the resulting deployment (an image that shows how the OVS bridges
are connected, the namespaces, and what devices are connected to what
bridges).

After deployment is finished, the script would also allow the
creation of fake VMs (identical to the fake VMs we already create
during fullstack testing). Reminder: these VMs are backed by a
Neutron port, a namespace, and a device with the appropriate IP
address connected to the correct bridge. They can sufficiently
simulate VMs, and resources on external networks (to enable testing
floating IPs and SNAT).
So, for example:

  neutron-fullstack deploy dvr_ha_dhcp.yaml  # Spits out information about the topology
  source
  neutron net-create 1
  neutron net-create 2
  neutron-fullstack create_vm --net_id=1, --binding:host_id=xyz
  neutron-fullstack create_vm --net_id=2, --binding:host_id=abc
  neutron router-create, attach it to both networks
  Test ping from VM 1 to VM 2
  neutron-fullstack destroy  # Possibly accepting a topology ID if we were to support deploying more than a single topology at any given time; I need to think about this further

[1] https://github.com/jbenc/plotnetcfg

** Affects: neutron
     Importance: Undecided
       Status: New

** Tags: fullstack rfe

** Summary changed:

- Fullstack infrastructure as a dev deployment tool
+ Fullstack infrastructure as a developer multi-node deployment tool

https://bugs.launchpad.net/bugs/1499864

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499864/+subscriptions
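The topology document the RFE describes could be sketched as follows. Every field name here is invented for illustration (the RFE does not fix a schema); the dict maps one-to-one onto the proposed .yaml file, and `validate_topology` shows the kind of sanity check a deploy script might run before provisioning.

```python
# Hypothetical topology document for the proposed `neutron-fullstack deploy`
# command, as a plain dict (a .yaml file would map 1:1 onto this). All field
# names are invented for illustration; the RFE does not define a schema.

topology = {
    "agents": {"ovs": 2, "l3": 2, "dhcp": 1},
    "globals": {
        "segmentation": "vxlan",
        "l2pop": True,
        "arp_responder": True,
    },
}

def validate_topology(doc):
    """Minimal sanity check a deploy script might run before provisioning."""
    assert set(doc) == {"agents", "globals"}, "unknown top-level keys"
    assert all(n >= 0 for n in doc["agents"].values()), "negative agent count"
    return sum(doc["agents"].values())

total_agents = validate_topology(topology)
```

The script would then spin up `total_agents` agent processes with generated configuration files, as described above.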
[Yahoo-eng-team] [Bug 1499800] [NEW] lbaas associate floating ip shows up when no vip present
Public bug reported:

The lbaas table action for "associate fip" shows up when there is no
VIP present. This does not make sense: if you don't have a VIP yet,
you don't need a FIP yet. This was just a simple code bug with the
wrong default value.

** Affects: horizon
     Importance: Undecided
     Assignee: Eric Peterson (ericpeterson-l)
       Status: In Progress

** Changed in: horizon
     Assignee: (unassigned) => Eric Peterson (ericpeterson-l)

https://bugs.launchpad.net/bugs/1499800

Status in OpenStack Dashboard (Horizon):
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1499800/+subscriptions
[Yahoo-eng-team] [Bug 1499054] Re: devstack VMs are not booting
** Changed in: neutron
       Status: Fix Committed => Fix Released

https://bugs.launchpad.net/bugs/1499054

Status in Ironic: Invalid
Status in Ironic Inspector: Invalid
Status in neutron: Fix Released

Bug description:
  In devstack, VMs are failing to boot the deploy ramdisk
  consistently. It appears ipxe is failing to configure the NIC, which
  is usually caused by a DHCP timeout, but can also be caused by a bug
  in the PXE ROM that chainloads to ipxe. See also
  http://ipxe.org/err/040ee1

  Console output:

    SeaBIOS (version 1.7.4-20140219_122710-roseapple)
    Machine UUID 37679b90-9a59-4a85-8665-df8267e09a3b

    iPXE (http://ipxe.org) 00:04.0 CA00 PCI2.10 PnP PMM+3FFC2360+3FF22360 CA00
    Booting from ROM...
    iPXE (PCI 00:04.0) starting execution...ok
    iPXE initialising devices...ok

    iPXE 1.0.0+git-2013.c3d1e78-2ubuntu1.1 -- Open Source Network Boot Firmware -- http://ipxe.org
    Features: HTTP HTTPS iSCSI DNS TFTP AoE bzImage ELF MBOOT PXE PXEXT Menu

    net0: 52:54:00:7c:af:9e using 82540em on PCI00:04.0 (open)
      [Link:up, TX:0 TXE:0 RX:0 RXE:0]
    Configuring (net0 52:54:00:7c:af:9e)... Error 0x040ee119 (http://ipxe.org/040ee119)
    No more network devices

    No bootable device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1499054/+subscriptions
[Yahoo-eng-team] [Bug 1499637] [NEW] An L2 agent extension can't access agent methods
Public bug reported: In the networking-bgpvpn project, the reference driver interacts with the openvswitch agent (of the ML2 openvswitch mech driver) with new RPCs to allow setup exchanging information with the BGP VPN implementation running on the compute nodes. We also need the OVS agent to setup specific things between the bridges for MPLS traffic. To extend the agent behavior, we currently create a new agent by mimicking the main() in ovs_neutron_agent.py with a main() instantiating a class that overloads the OVSAgent class with the additional behavior we need [1]. This is really not the ideal way of extending the agent and we would prefer using the L2 agent extension framework [2]. This approach works, but only partially: we are able to register our RPC consumers, but are missing access to some datastructures of the agent that we need to use (setup_entry_for_arp_reply and local_vlan_map, access to the OVSBridge objects to manipulate OVS ports). ** Affects: neutron Importance: Undecided Status: New ** Tags: rfe ** Description changed: In the networking-bgpvpn project, the reference driver interacts with the openvswitch agent (of the ML2 openvswitch mech driver) with new RPCs - to (a) setup the OVS bridges for MPLS VPNs (b) exchange information with - the BGP VPN implementation running on the BGP speaker. + to allow setup exchanging information with the BGP VPN implementation + running on the compute nodes. We also need the OVS agent to setup + specific things between the bridges for MPLS traffic. - We currently create an agent by mimicking the main() in - ovs_neutron_agent.py with a main() instantiating a class that overloads - the OVSAgent class with the additional behavior we need [1]. + To extend the agent behavior, we currently create a new agent by + mimicking the main() in ovs_neutron_agent.py with a main() instantiating + a class that overloads the OVSAgent class with the additional behavior + we need [1]. 
This is really not the ideal way of extending the agent and we would - prefer using the L2 agent extension framework [2] to do better. + prefer using the L2 agent extension framework [2]. - This approach works, but only partially: we are able to register our - RPC consumers, but are missing access to some datastructures of the - agent that we need to use (setup_entry_for_arp_reply and local_vlan_map, + This approach works, but only partially: we are able to register our RPC + consumers, but are missing access to some datastructures of the agent + that we need to use (setup_entry_for_arp_reply and local_vlan_map, access to the OVSBridge objects to manipulate OVS ports). -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1499637 Title: An L2 agent extension can't access agent methods Status in neutron: New Bug description: In the networking-bgpvpn project, the reference driver interacts with the openvswitch agent (of the ML2 openvswitch mech driver) with new RPCs to allow setup exchanging information with the BGP VPN implementation running on the compute nodes. We also need the OVS agent to setup specific things between the bridges for MPLS traffic. To extend the agent behavior, we currently create a new agent by mimicking the main() in ovs_neutron_agent.py with a main() instantiating a class that overloads the OVSAgent class with the additional behavior we need [1]. This is really not the ideal way of extending the agent and we would prefer using the L2 agent extension framework [2]. This approach works, but only partially: we are able to register our RPC consumers, but are missing access to some datastructures of the agent that we need to use (setup_entry_for_arp_reply and local_vlan_map, access to the OVSBridge objects to manipulate OVS ports). 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1499637/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
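To make the limitation concrete, here is a minimal sketch (illustrative class and attribute names, not neutron's actual interfaces) of the kind of hook that would resolve it: the agent hands the extension an API object exposing its internal data structures, so the extension can both register RPC consumers and program the bridges:

```python
class AgentCoreResourceExtension(object):
    """Minimal stand-in for an L2 agent extension base class.

    The consume_api() hook is a hypothetical name for the missing
    piece described in the bug: a channel through which the agent
    passes its internal API (bridges, VLAN map) to extensions.
    """

    def initialize(self, connection, driver_type):
        pass

    def consume_api(self, agent_api):
        self.agent_api = agent_api


class BgpvpnAgentExtension(AgentCoreResourceExtension):
    def initialize(self, connection, driver_type):
        # Registering RPC consumers already works today, per the report.
        self.rpc_ready = True

    def handle_port(self, context, port):
        # With access to the agent's data structures, the extension can
        # look up the local VLAN for the port's network and program the
        # bridges accordingly.
        return self.agent_api.local_vlan_map.get(port['network_id'])
```

Today the framework offers no equivalent of consume_api(), which is exactly what this RFE asks for.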
[Yahoo-eng-team] [Bug 1499647] [NEW] L3 HA: extra L3HARouterAgentPortBinding created for routers
Public bug reported: I have tested L3 HA on an environment with 3 controllers and 1 compute node (Kilo), keepalived v1.2.13. I created 50 nets with 50 subnets and 50 routers, with an interface set for each subnet (note: I've seen the same errors with just one router and net). I got the following errors:

root@node-6:~# neutron l3-agent-list-hosting-router router-1
Request Failed: internal server error while processing your request.

In the neutron-server error log: http://paste.openstack.org/show/473760/

When I fixed _get_agents_dict_for_router to skip None for further testing, I was able to see:

root@node-6:~# neutron l3-agent-list-hosting-router router-1
+--------------------------------------+-------------------+----------------+-------+----------+
| id                                   | host              | admin_state_up | alive | ha_state |
+--------------------------------------+-------------------+----------------+-------+----------+
| f3baba98-ef5d-41f8-8c74-a91b7016ba62 | node-6.domain.tld | True           | :-)   | active   |
| c9159f09-34d4-404f-b46c-a8c18df677f3 | node-7.domain.tld | True           | :-)   | standby  |
| b458ab49-c294-4bdb-91bf-ae375d87ff20 | node-8.domain.tld | True           | :-)   | standby  |
| f3baba98-ef5d-41f8-8c74-a91b7016ba62 | node-6.domain.tld | True           | :-)   | active   |
+--------------------------------------+-------------------+----------------+-------+----------+

root@node-6:~# neutron port-list --device_id=fcf150c0-f690-4265-974d-8db370e345c4
| id                                   | name                                            | mac_address       | fixed_ips                                                                              |
| 0834f8a2-f109-4060-9312-edebac84aba5 |                                                 | fa:16:3e:73:9f:33 | {"subnet_id": "0c7a2cfa-1cfd-4ecc-a196-ab9e97139352", "ip_address": "172.18.161.223"}  |
| 2b5a7a15-98a2-4ff1-9128-67d098fa3439 | HA port tenant aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:b8:f6:35 | {"subnet_id": "1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.149"} |
| 48c887c1-acc3-4804-a993-b99060fa2c75 | HA port tenant aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:e7:70:13 | {"subnet_id": "1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.151"} |
| 82ab62d6-7dd1-4294-a0dc-f5ebfbcbb4ca |                                                 | fa:16:3e:c6:fc:74 | {"subnet_id": "c4cc21c9-3b3a-407c-b4a7-b22f783377e7", "ip_address": "10.0.40.1"}       |
| bbca8575-51f1-4b42-b074-96e15aeda420 | HA port tenant aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:84:4c:fc | {"subnet_id": "1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.150"} |
| bee5c6d4-7e0a-4510-bb19-2ef9d60b9faf | HA port tenant aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:09:a1:ae | {"subnet_id": "1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.193.11"}  |
| f8945a1d-b359-4c36-a8f8-e78c1ba992f0 | HA port tenant aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:c4:54:b5 | {"subnet_id": "1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.193.12"}  |

mysql root@192.168.0.2:neutron> SELECT * FROM ha_router_agent_port_bindings WHERE router_id='fcf150c0-f690-4265-974d-8db370e345c4';
| port_id                              | router_id                            | l3_agent_id                          | state   |
| 2b5a7a15-98a2-4ff1-9128-67d098fa3439 | fcf150c0-f690-4265-974d-8db370e345c4 | c9159f09-34d4-404f-b46c-a8c18df677f3 | standby |
| 48c887c1-acc3-4804-a993-b99060fa2c75 | fcf150c0-f690-4265-974d-8db370e345c4 | b458ab49-c294-4bdb-91bf-ae375d87ff20 | standby |
| bbca8575-51f1-4b42-b074-96e15aeda420 | fcf150c0-f690-4265-974d-8db370e345c4 |                                      | standby |
| bee5c6d4-7e0a-4510-bb19-2ef9d60b9faf | fcf150c0-f690-4265-974d-8db370e345c4 | f3baba98-ef5d-41f8-8c74-a91b7016ba62 | active  |
| f8945a1d-b359-4c36-a8f8-e78c1ba992f0 | fcf150c0-f690-4265-974d-8db370e345c4 | f3baba98-ef5d-41f8-8c74-a91b7016ba62 | active  |
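In miniature, the inconsistency above is easy to spot mechanically. The following sketch (using sqlite3 and shortened UUID prefixes purely for illustration; the real table lives in MySQL) loads the bindings shown and flags both symptoms: two rows bound to the same L3 agent, and one row bound to no agent at all:

```python
import sqlite3

# Simplified reproduction of the ha_router_agent_port_bindings state
# from the report; a router should have at most one binding per agent.
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE ha_router_agent_port_bindings (
    port_id TEXT PRIMARY KEY, router_id TEXT,
    l3_agent_id TEXT, state TEXT)""")
rows = [
    ('2b5a7a15', 'fcf150c0', 'c9159f09', 'standby'),
    ('48c887c1', 'fcf150c0', 'b458ab49', 'standby'),
    ('bbca8575', 'fcf150c0', None,       'standby'),  # no agent bound
    ('bee5c6d4', 'fcf150c0', 'f3baba98', 'active'),
    ('f8945a1d', 'fcf150c0', 'f3baba98', 'active'),   # duplicate agent
]
conn.executemany(
    'INSERT INTO ha_router_agent_port_bindings VALUES (?, ?, ?, ?)', rows)

# Bindings that violate the one-per-agent expectation:
dupes = conn.execute("""
    SELECT router_id, l3_agent_id, COUNT(*)
    FROM ha_router_agent_port_bindings
    WHERE l3_agent_id IS NOT NULL
    GROUP BY router_id, l3_agent_id HAVING COUNT(*) > 1""").fetchall()

# Bindings that never got an agent assigned:
orphans = conn.execute("""
    SELECT port_id FROM ha_router_agent_port_bindings
    WHERE l3_agent_id IS NULL""").fetchall()
```

The duplicate f3baba98 rows are what make _get_agents_dict_for_router trip over the same agent twice (and over None) when building the l3-agent-list output.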
[Yahoo-eng-team] [Bug 1499339] Re: sec_group rule quota usage unreliable
** Changed in: neutron Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1499339 Title: sec_group rule quota usage unreliable Status in neutron: Fix Released Bug description: Security group rules are now being deleted with query.delete; while efficient, this prevents sqlalchemy events from being fired (see http://docs.openstack.org/developer/neutron/devref/quota.html#exceptions-and-caveats). It might be worth fixing this before releasing RC-1, even if the impact of this bug is not really serious. After a delete, the quota tracker is not marked as dirty, and therefore it reports an incorrect, higher usage figure. As a result a tenant might not be allowed to use all of its quota (but just total - 1). This will however be fixed by the next get operation. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1499339/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
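The caveat is easier to see in a toy model. The sketch below (illustrative names only, not neutron's actual quota classes) mimics the mechanism: per-object ORM deletes fire an event that marks the cached usage dirty, while a bulk query.delete() bypasses the events, so the cache keeps reporting the higher, stale count until the next resync:

```python
class TrackedResource:
    """Toy model of event-driven quota usage tracking."""

    def __init__(self):
        self.rows = []
        self.cached_count = 0
        self.dirty = False

    def add(self, row):
        self.rows.append(row)
        self.dirty = True          # ORM 'after_insert' event fires

    def delete_obj(self, row):
        self.rows.remove(row)
        self.dirty = True          # ORM 'after_delete' event fires

    def bulk_delete(self, pred):
        # Emulates query.filter(...).delete(): goes straight to SQL,
        # so no per-object ORM events fire and 'dirty' stays False.
        self.rows = [r for r in self.rows if not pred(r)]

    def count(self):
        if self.dirty:
            self.cached_count = len(self.rows)   # resync from the DB
            self.dirty = False
        return self.cached_count
```

After a bulk delete the tracker still reports the old count although fewer rules remain, matching the "total - 1" effect described above; any subsequent tracked operation resyncs it.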
[Yahoo-eng-team] [Bug 1384379] Re: versions resource uses host_url which may be incorrect
** Changed in: ironic Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1384379 Title: versions resource uses host_url which may be incorrect Status in Ceilometer: Triaged Status in Cinder: Fix Released Status in Glance: Fix Released Status in Glance icehouse series: Triaged Status in Glance juno series: Triaged Status in heat: Triaged Status in Ironic: Fix Released Status in Manila: Fix Released Status in OpenStack Compute (nova): Fix Released Status in Trove: In Progress Bug description: The versions resource constructs the links by using host_url, but the glance api endpoint may be behind a proxy or ssl terminator. This means that host_url may be incorrect. It should have a config option to override host_url like the other services do when constructing versions links. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1384379/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
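The usual remedy, sketched below with an illustrative option name (each project picked its own; several use a public endpoint setting), is to prefer a configured base URL over the request-derived host_url when building version links:

```python
# Hedged sketch: version_href is a hypothetical helper, not any
# project's actual code.  The point is the precedence: a deployer-set
# public endpoint wins over whatever host_url the WSGI stack saw,
# which may be the proxy/SSL terminator's backend address.

def version_href(host_url, public_endpoint=None, version='v2'):
    base = (public_endpoint or host_url).rstrip('/')
    return '%s/%s/' % (base, version)
```

Behind an SSL terminator, host_url might be an internal http address while clients must be handed the public https one; the override makes the links follow the deployer's intent.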
[Yahoo-eng-team] [Bug 1493576] Re: Incorrect usage of python-novaclient
** Changed in: mistral Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1493576 Title: Incorrect usage of python-novaclient Status in Cinder: Fix Released Status in OpenStack Dashboard (Horizon): Fix Committed Status in Manila: Fix Released Status in Mistral: Fix Released Bug description: All projects should use only `novaclient.client` as the entry point. It is designed with version checks and backward compatibility. Direct import of a versioned client object (i.e. novaclient.v2.client) is a way to "shoot yourself in the foot". Python-novaclient's doc: http://docs.openstack.org/developer/python-novaclient/api.html Affected projects:
- Horizon - https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31
- Manila - https://github.com/openstack/manila/blob/473b46f6edc511deaba88b48392b62bfbb979787/manila/compute/nova.py#L23
- Cinder - https://github.com/openstack/cinder/blob/de64f5ad716676b7180365798efc3ea69a4fef0e/cinder/compute/nova.py#L23
- Mistral - https://github.com/openstack/mistral/blob/f42b7f5f5e4bcbce8db7e7340b4cac12de3eec4d/mistral/actions/openstack/actions.py#L23
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1493576/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
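The value of the single entry point can be shown with a stripped-down imitation of the pattern (illustrative code, not novaclient's actual implementation): a top-level factory validates the requested version and dispatches to the versioned class, so callers never import novaclient.v2.client directly:

```python
# Toy factory imitating the novaclient.client entry-point pattern.
# ClientV2 and _VERSIONS are illustrative stand-ins.

class ClientV2:
    api_version = '2'

_VERSIONS = {'2': ClientV2, '2.0': ClientV2}

def Client(version, *args, **kwargs):
    """Dispatch to the right versioned class, with validation."""
    try:
        cls = _VERSIONS[str(version)]
    except KeyError:
        raise ValueError('unsupported client version %s' % version)
    return cls(*args, **kwargs)

# Projects should do the equivalent of:
#     from novaclient import client
#     nova = client.Client('2', ...)
# rather than:
#     from novaclient.v2 import client   # bypasses version checks
```

A direct versioned import skips the validation and any backward-compatibility shims, which is exactly the foot-gun the report warns about.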
[Yahoo-eng-team] [Bug 1367944] Re: tenant usage information api is consuming lot of memory
** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1367944 Title: tenant usage information api is consuming lot of memory Status in OpenStack Compute (nova): Invalid Bug description: I have noticed that when the tenant usage information API is invoked for a particular tenant owning a large number of instances (both active and terminated), there is a sudden increase in nova-api process memory consumption, from 500 MB up to 2.3 GB. It is due to a SQL query retrieving a large number of instance_system_metadata records for those instances through a single IN clause. At the time of getting the tenant usage information, I had approx. 120,000 instances in the db for a particular tenant (a few active, the rest terminated). Also, this plugin unnecessarily gets the following information about the instances from the db, further degrading the performance of the API:
1. metadata
2. info_cache
3. security_groups
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1367944/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
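Regardless of the bug's resolution, the memory pattern described (one query with roughly 120,000 UUIDs in an IN clause) is commonly mitigated by batching. A generic sketch, where fetch_metadata is a hypothetical callable standing in for the DB API call:

```python
# Hedged sketch of the batching mitigation, not nova's actual fix.
# Peak memory becomes proportional to chunk_size instead of the full
# instance set.

def chunked(seq, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def gather_metadata(uuids, fetch_metadata, chunk_size=100):
    """Fetch metadata for many instances in bounded chunks."""
    result = {}
    for batch in chunked(uuids, chunk_size):
        result.update(fetch_metadata(batch))
    return result
```

The trade-off is more round trips to the database, but each round trip materializes a bounded number of rows.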
[Yahoo-eng-team] [Bug 1499670] [NEW] Option "Associate Floating IP" available for instances that can't have interfaces with floating ip
Public bug reported: If we have a network topology like the one attached, the instance "ne2" has the IP 20.0.0.3, but 20.0.0.0/24 isn't an external network. Still, when we go to http://host/dashboard/project/routers// we see the option "Associate Floating IP", and when the pop-up "Manage Floating IP Associations" opens there aren't any ports of instance "ne2" listed. The empty list is correct (the network the instance is attached to isn't an external network), but offering the "Associate Floating IP" option at all is incorrect. HORIZON_BRANCH=master ** Affects: horizon Importance: Undecided Status: New ** Attachment added: "Captura de ecrã 2015-09-25, às 11.14.43.png" https://bugs.launchpad.net/bugs/1499670/+attachment/4474208/+files/Captura%20de%20ecr%C3%A3%202015-09-25%2C%20%C3%A0s%2011.14.43.png ** Description changed: [the description embeds an ASCII topology diagram that did not survive the mailing-list reflow; its recoverable content: the external network 172.24.4.0/24 connects to R1 (router_gateway 172.24.4.2, router_interface 10.0.0.1), which serves the private network 10.0.0.0/24 hosting Instance 1 (10.0.0.3, floating IP 172.24.4.4); R2 (router_interfaces 10.0.0.4 and 20.0.0.1) links 10.0.0.0/24 to the private network 20.0.0.0/24 hosting Instance 2 (20.0.0.3). When we go to /dashboard/project/instances// we have the option "Associate Floating IP", but in the "Manage Floating IP Associations" pop-up the instance's ports are not listed, because this instance can't have floating IPs.]
[Yahoo-eng-team] [Bug 1499658] [NEW] Consume wsgi module from oslo.service
Public bug reported: Basic WSGI functionality has been moved to oslo.service [1] and now OpenStack projects can adopt it. [1] https://github.com/openstack/oslo.service/blob/master/oslo_service/wsgi.py ** Affects: cinder Importance: Undecided Status: New ** Affects: neutron Importance: Undecided Assignee: Elena Ezhova (eezhova) Status: New ** Affects: nova Importance: Undecided Status: New ** Tags: oslo ** Also affects: cinder Importance: Undecided Status: New ** Also affects: nova Importance: Undecided Status: New ** Changed in: neutron Assignee: (unassigned) => Elena Ezhova (eezhova) ** Tags added: oslo -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1499658 Title: Consume wsgi module from oslo.service Status in Cinder: New Status in neutron: New Status in OpenStack Compute (nova): New Bug description: Basic WSGI functionality has been moved to oslo.service [1] and now OpenStack projects can adopt it. [1] https://github.com/openstack/oslo.service/blob/master/oslo_service/wsgi.py To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1499658/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1499664] [NEW] Add pagination support to the volume snapshots and backups tabs in the Dashboard
Public bug reported: Both snapshots and backups API endpoints now support pagination in Cinder API (https://review.openstack.org/#/c/195071/ and https://review.openstack.org/#/c/204493/), it's time to leverage this support in Horizon. This is similar to bug 1316793 (except that it does the same for Volumes tab). ** Affects: horizon Importance: Undecided Assignee: Timur Sufiev (tsufiev-x) Status: New ** Changed in: horizon Assignee: (unassigned) => Timur Sufiev (tsufiev-x) ** Description changed: Both snapshots and backups API endpoints now support pagination in Cinder API (https://review.openstack.org/#/c/195071/ and https://review.openstack.org/#/c/204493/), it's time to leverage this - support in Horizon + support in Horizon. + + This is similar to bug 1316793 (except that it does the same for Volumes + tab). -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1499664 Title: Add pagination support to the volume snapshots and backups tabs in the Dashboard Status in OpenStack Dashboard (Horizon): New Bug description: Both snapshots and backups API endpoints now support pagination in Cinder API (https://review.openstack.org/#/c/195071/ and https://review.openstack.org/#/c/204493/), it's time to leverage this support in Horizon. This is similar to bug 1316793 (except that it does the same for Volumes tab). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1499664/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
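For reference, the marker/limit style of pagination those Cinder changes expose looks like this generically (list_page is a hypothetical callable that returns up to `limit` records sorted by id, starting after `marker`):

```python
# Hedged sketch of marker/limit pagination, not Horizon's actual code.

def iter_paginated(list_page, limit=2):
    """Yield all items by walking pages via the marker parameter."""
    marker = None
    while True:
        page = list_page(marker=marker, limit=limit)
        if not page:
            return
        for item in page:
            yield item
        if len(page) < limit:
            return          # short page means we reached the end
        marker = page[-1]['id']
```

Horizon's table views typically fetch one page at a time rather than draining the iterator, but the marker bookkeeping is the same.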
[Yahoo-eng-team] [Bug 1499751] [NEW] OpenStack (nova boot exactly) allows only one SSH key.
Public bug reported:
ii  nova-api           1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - API frontend
ii  nova-cert          1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - certificate management
ii  nova-common        1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - common files
ii  nova-conductor     1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - conductor service
ii  nova-consoleauth   1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy    1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler     1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - virtual machine scheduler
ii  python-nova        1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute Python libraries
ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0    all  client library for OpenStack Compute API
The problem was described at https://ask.openstack.org/en/question/82224/is-it-possible-to-create-instance-with-multiple-ssh-keys/ It looks like OpenStack (nova) allows specifying only one SSH key when an instance is created. I believe an array of strings should be supported instead of only a single string, as authorized_keys allows more than one SSH key. Workarounds like using a merged key are not always an option, as they scale poorly (see the example in the question linked above). Wishlist/enhancement request, of course. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1499751 Title: OpenStack (nova boot exactly) allows only one SSH key. Status in OpenStack Compute (nova): New Bug description:
ii  nova-api           1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - API frontend
ii  nova-cert          1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - certificate management
ii  nova-common        1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - common files
ii  nova-conductor     1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - conductor service
ii  nova-consoleauth   1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy    1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler     1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute - virtual machine scheduler
ii  python-nova        1:2015.1.1-0ubuntu1~cloud2  all  OpenStack Compute Python libraries
ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0    all  client library for OpenStack Compute API
The problem was described at https://ask.openstack.org/en/question/82224/is-it-possible-to-create-instance-with-multiple-ssh-keys/ It looks like OpenStack (nova) allows specifying only one SSH key when an instance is created. I believe an array of strings should be supported instead of only a single string, as authorized_keys allows more than one SSH key. Workarounds like using a merged key are not always an option, as they scale poorly (see the example in the question linked above). Wishlist/enhancement request, of course. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1499751/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
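For completeness, the multi-key workaround usually takes the form of cloud-init user-data rather than the keypair API. A sketch (key strings are placeholders) that builds a cloud-config document to pass via `nova boot --user-data`:

```python
# Hedged sketch: builds a standard cloud-config user-data document
# using the ssh_authorized_keys directive; the keys shown are
# placeholders, not real public keys.

def make_user_data(keys):
    """Return a #cloud-config document installing several SSH keys."""
    lines = ['#cloud-config', 'ssh_authorized_keys:']
    lines += ['  - %s' % k for k in keys]
    return '\n'.join(lines) + '\n'
```

This works today with cloud-init-enabled images, but as the reporter notes it sidesteps rather than solves the single-key limitation of the keypair API itself.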
[Yahoo-eng-team] [Bug 1499752] [NEW] reading a ospfd.vty file take time to read in docker container launched from openstack
Public bug reported: 1. Openstack version
ii  nova-api           1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - API frontend
ii  nova-cert          1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - certificate management
ii  nova-common        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - common files
ii  nova-conductor     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - conductor service
ii  nova-consoleauth   1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy    1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - virtual machine scheduler
ii  python-nova        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute Python libraries
ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0      all  client library for OpenStack Compute API
3. Reproduce steps
* I made a docker container image which has a control application like quagga (with the zebra and ospf daemons).
* I then uploaded it to glance and launched the container from the controller node.
* When the container is up, I launch zebra and ospfd from the container terminal with the following commands. For zebra: sudo /usr/local/quagga/sbin/zebra -u root -g root -i /usr/local/quagga/etc/zebra.pid For ospf: /usr/local/quagga/sbin/ospfd -u root -g root -i /usr/loc/usr/local/quagga/etc/ospfd.pid
* If I then immediately try to attach a vtysh terminal to the ospfd daemon, it cannot connect right away; it takes about 3 minutes to attach.
* While debugging I found that a Linux read() from a file halts the ospfd daemon.
Expected result: * vtysh should connect to ospfd immediately. Actual result: * it takes around 3 minutes to connect. Remark: when I launch the container from my VM and NOT via OpenStack I get the desired result, but not with a container launched via OpenStack (that's the reason I am reporting the bug here). ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1499752 Title: reading a ospfd.vty file take time to read in docker container launched from openstack Status in OpenStack Compute (nova): New Bug description: 1. Openstack version
ii  nova-api           1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - API frontend
ii  nova-cert          1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - certificate management
ii  nova-common        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - common files
ii  nova-conductor     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - conductor service
ii  nova-consoleauth   1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy    1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - virtual machine scheduler
ii  python-nova        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute Python libraries
ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0      all  client library for OpenStack Compute API
3. Reproduce steps
* I made a docker container image which has a control application like quagga (with the zebra and ospf daemons).
* I then uploaded it to glance and launched the container from the controller node.
* When the container is up, I launch zebra and ospfd from the container terminal with the following commands. For zebra: sudo /usr/local/quagga/sbin/zebra -u root -g root -i /usr/local/quagga/etc/zebra.pid For ospf: /usr/local/quagga/sbin/ospfd -u root -g root -i /usr/loc/usr/local/quagga/etc/ospfd.pid
* If I then immediately try to attach a vtysh terminal to the ospfd daemon, it cannot connect right away; it takes about 3 minutes to attach.
* While debugging I found that a Linux read() from a file halts the ospfd daemon.
Expected result: * vtysh should connect to ospfd immediately. Actual result: * it takes around 3 minutes to connect.
Remark: when I launch the container from my VM and NOT via OpenStack I get the desired result, but not with a container launched via OpenStack (that's the reason I am reporting the bug here). To manage notifications about this bug