[Yahoo-eng-team] [Bug 1355087] [NEW] when a interface is added after router gateway set, external connectivity using snat fails
Public bug reported:

1. Create a network and subnet.
2. Create a DVR and attach the subnet.
3. Create an external network and set the router gateway.
4. Boot a VM in that subnet.
5. Ping the external network: successful.
6. Create a new network and subnet, and attach it to the router created in step 2.
7. Boot a VM on the new subnet and ping the external network: fails.
8. Ping the external network from the VM created in step 4: fails.

Reason:
===
When a new subnet is added, all the sg ports inside the snat namespace are updated with the default gateway of the newly added subnet. Say subnet 4.4.4.0/24 was already attached to the router and its sg port had IP 4.4.4.2. When a new subnet 5.5.5.0/24 is added, the sg port for 4.4.4.0/24 becomes 5.5.5.1, and the sg IP for 5.5.5.0/24 also becomes 5.5.5.1 (even though 5.5.5.1 has device_owner network:router_interface_distributed and 5.5.5.2 has device_owner network:router_centralized_snat).

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: l3-dvr-backlog

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355087

Title:
  when an interface is added after router gateway set, external connectivity using snat fails

Status in OpenStack Neutron (virtual network service):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355087/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
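The device_owner values quoted in the report suggest what correct behaviour would look like: for each internal subnet, the snat namespace should use the IP of that subnet's own network:router_centralized_snat port, not the gateway of the most recently added subnet. A hypothetical sketch of that selection, using port dicts shaped loosely like Neutron's (device_owner, fixed_ips); the helper name is made up:

```python
# Hypothetical sketch, not actual Neutron code: pick the snat port IP
# per subnet by device_owner instead of reusing the last-added
# subnet's gateway for every sg port.

SNAT_OWNER = 'network:router_centralized_snat'

def snat_ip_for_subnet(ports, subnet_id):
    """Return the centralized-snat port IP for one subnet, or None."""
    for port in ports:
        if port['device_owner'] != SNAT_OWNER:
            continue
        for fixed_ip in port['fixed_ips']:
            if fixed_ip['subnet_id'] == subnet_id:
                return fixed_ip['ip_address']
    return None

ports = [
    {'device_owner': 'network:router_interface_distributed',
     'fixed_ips': [{'subnet_id': 'subnet-4', 'ip_address': '4.4.4.1'}]},
    {'device_owner': SNAT_OWNER,
     'fixed_ips': [{'subnet_id': 'subnet-4', 'ip_address': '4.4.4.2'}]},
    {'device_owner': SNAT_OWNER,
     'fixed_ips': [{'subnet_id': 'subnet-5', 'ip_address': '5.5.5.2'}]},
]

print(snat_ip_for_subnet(ports, 'subnet-4'))  # 4.4.4.2, not 5.5.5.1
```

With this lookup, adding subnet-5 cannot change what is returned for subnet-4, which is exactly the invariant the bug describes being violated.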
[Yahoo-eng-team] [Bug 1355100] [NEW] TestSubresourcePlugin functions are double indented
Public bug reported:

The code for the TestSubresourcePlugin class in neutron/tests/unit/test_api_v2.py is double indented, which does not meet pep8 requirements; flake8, however, seems to ignore the problem in automated testing.

** Affects: neutron
   Importance: Undecided
   Assignee: Sam Betts (sambetts)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => Sam Betts (sambetts)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355100

Title:
  TestSubresourcePlugin functions are double indented

Status in OpenStack Neutron (virtual network service):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355100/+subscriptions
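A plausible reason flake8 lets this through (an inference, not stated in the bug; the method name below is made up, not taken from test_api_v2.py): a class body indented eight spaces is still valid Python, and since eight is a multiple of four, the pep8 check E111 ("indentation is not a multiple of four") does not fire:

```python
# Illustrative reconstruction of the style problem: every line of the
# class body uses eight spaces instead of four. This is syntactically
# valid and consistently indented, so it runs, and automated pep8
# checks that only test "multiple of four" do not flag it.
class TestSubresourcePlugin(object):

        def get_dummies(self, context, filters=None):
                return []


plugin = TestSubresourcePlugin()
print(plugin.get_dummies(None))  # runs fine despite the styling
```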
[Yahoo-eng-team] [Bug 1355103] [NEW] InvalidRequestError in l3_agent_scheduler:bind_router
Public bug reported:

Observed in the gate:

ERROR neutron.api.v2.resource [req-359624f3-6dac-49a6-b7fb-ebf932840832 None] add_router_interface failed
TRACE neutron.api.v2.resource Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
    result = method(request=request, **args)
  File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 200, in _handle_action
    return getattr(self._plugin, name)(*arg_list, **kwargs)
  File "/opt/stack/new/neutron/neutron/db/l3_dvr_db.py", line 187, in add_router_interface
    context, router_interface_info, 'add')
  File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 1083, in notify_router_interface_action
    {'subnet_id': router_interface_info['subnet_id']})
  File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 1055, in notify_routers_updated
    context, router_ids, operation, data)
  File "/opt/stack/new/neutron/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py", line 139, in routers_updated
    operation, data)
  File "/opt/stack/new/neutron/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py", line 107, in _notification
    plugin.schedule_routers(adminContext, router_ids, hints=data)
  File "/opt/stack/new/neutron/neutron/db/l3_agentschedulers_db.py", line 355, in schedule_routers
    self.schedule_router(context, router, candidates=None, hints=hints)
  File "/opt/stack/new/neutron/neutron/db/l3_agentschedulers_db.py", line 350, in schedule_router
    self, context, router, candidates=candidates, hints=hints)
  File "/opt/stack/new/neutron/neutron/scheduler/l3_agent_scheduler.py", line 229, in schedule
    plugin, context, router_id, candidates=candidates, hints=hints)
  File "/opt/stack/new/neutron/neutron/scheduler/l3_agent_scheduler.py", line 213, in _schedule_router
    self.bind_router(context, router_id, chosen_agent)
  File "/opt/stack/new/neutron/neutron/scheduler/l3_agent_scheduler.py", line 187, in bind_router
    {'agent_id': chosen_agent.id,
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 316, in __get__
    return self.impl.get(instance_state(instance), dict_)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 611, in get
    value = callable_(state, passive)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 380, in __call__
    self.manager.deferred_scalar_loader(self, toload)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 601, in load_scalar_attributes
    only_load_props=attribute_names)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 226, in load_on_ident
    return q.one()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2310, in one
    ret = list(self)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2353, in __iter__
    return self._execute_and_instances(context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2366, in _execute_and_instances
    close_with_result=True)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2357, in _connection_from_session
    **kw)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 799, in connection
    close_with_result=close_with_result)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 803, in _connection_for_bind
    return self.transaction._connection_for_bind(engine)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 282, in _connection_for_bind
[Yahoo-eng-team] [Bug 1355125] [NEW] keystonemiddleware appears not to hash PKIZ tokens
Public bug reported:

It looks like Keystone hashes only PKI tokens -
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token.py#L1399
- and the test test_verify_signed_token_raises_exception_for_revoked_pkiz_token in
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/tests/test_auth_token_middleware.py#L741
does not take hashing into account (it checks only already-hashed data, not the hashing itself). And that should make token revocation for PKIZ tokens broken.

** Affects: keystone
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355125

Title:
  keystonemiddleware appears not to hash PKIZ tokens

Status in OpenStack Identity (Keystone):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1355125/+subscriptions
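The mechanics behind the report, as an illustrative sketch (this is not the keystonemiddleware source; function and constant names are made up): revocation lists store token hashes, so any long CMS token, PKI or PKIZ, must be hashed before lookup. The reported gap is that only the PKI form gets hashed, so a "PKIZ_"-prefixed token would be compared unhashed and could never match a revoked entry:

```python
# Illustrative sketch of hashing both token flavours before a
# revocation-list lookup. PKI tokens are base64 DER-encoded CMS data
# (which starts with "MII"); PKIZ tokens are the compressed variant
# with a "PKIZ_" prefix. Short UUID tokens are stored as-is.
import hashlib

PKI_ASN1_PREFIX = 'MII'
PKIZ_PREFIX = 'PKIZ_'

def token_cache_key(token, algorithm='md5'):
    """Hash PKI and PKIZ tokens alike; leave short UUID tokens as-is."""
    if token.startswith(PKI_ASN1_PREFIX) or token.startswith(PKIZ_PREFIX):
        return hashlib.new(algorithm, token.encode('utf-8')).hexdigest()
    return token
```

With only the `MII` branch present, the PKIZ case silently falls through to the unhashed return, which is the failure mode the bug describes.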
[Yahoo-eng-team] [Bug 1355131] [NEW] Populate xstatic-core and xstatic-ptl ACL groups for Gerrit
Public bug reported:

The xstatic-core and xstatic-ptl groups were created to manage access to all the stackforge/xstatic-* repositories, which contain packages with JavaScript libraries used by Horizon. The repositories are now created, but the groups are empty. They should be initially populated with the same users as the corresponding horizon-core and horizon-ptl groups.

** Affects: horizon
   Importance: Undecided
   Assignee: Radomir Dopieralski (thesheep)
   Status: New

** Affects: openstack-ci
   Importance: Undecided
   Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Assignee: (unassigned) => Radomir Dopieralski (thesheep)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355131

Title:
  Populate xstatic-core and xstatic-ptl ACL groups for Gerrit

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Core Infrastructure:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355131/+subscriptions
[Yahoo-eng-team] [Bug 1351466] Re: can't copy '.../cisco_cfg_agent.ini': doesn't exist
** Changed in: tripleo
   Status: Triaged => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351466

Title:
  can't copy '.../cisco_cfg_agent.ini': doesn't exist

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Started roughly 1800 UTC this evening

  2014-08-01 19:36:06.878 | error: can't copy 'etc/neutron/plugins/cisco/cisco_cfg_agent.ini': doesn't exist or not a regular file

  http://logs.openstack.org/70/111370/1/check-tripleo/check-tripleo-ironic-undercloud-precise-nonha/3bc75ae/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1351466/+subscriptions
[Yahoo-eng-team] [Bug 1349774] Re: Error parsing _stylesheets.html
** Changed in: tripleo
   Status: Triaged => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1349774

Title:
  Error parsing _stylesheets.html

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  As of about 10 UTC last night, all tripleo jobs are failing

  http://logs.openstack.org/60/104060/2/check-tripleo/check-tripleo-novabm-overcloud-f20-nonha/ebd8814/logs/overcloud-controller0_logs/os-collect-config.txt.gz

  Jul 29 08:12:06 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: dib-run-parts Tue Jul 29 08:12:06 UTC 2014 Running /opt/stack/os-config-refresh/post-configure.d/14-horizon
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: CommandError: An error occured during rendering /opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/templates/_stylesheets.html: Error parsing block:
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 1
  Jul 29 08:12:10 overcloud-controller0-6xxjtao24g7a os-collect-config[788]: 2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1349774/+subscriptions
[Yahoo-eng-team] [Bug 1355131] Re: Populate xstatic-core and xstatic-ptl ACL groups for Gerrit
I've added the horizon-core and horizon-ptl groups to the xstatic-core and xstatic-ptl groups correspondingly.

** Changed in: openstack-ci
   Milestone: None => juno

** Changed in: openstack-ci
   Assignee: (unassigned) => Sergey Lukjanov (slukjanov)

** Changed in: openstack-ci
   Importance: Undecided => Wishlist

** Changed in: openstack-ci
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355131

Title:
  Populate xstatic-core and xstatic-ptl ACL groups for Gerrit

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Core Infrastructure:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355131/+subscriptions
[Yahoo-eng-team] [Bug 1355153] [NEW] Inefficient implementation of Linuxbridge agent get_bridge_for_tap_device
Public bug reported:

The implementation of LinuxBridgeManager.get_bridge_for_tap_device uses an inefficient algorithm that repeatedly lists all bridges and their interfaces on each call. At scale this becomes very expensive, causing the agent to fall behind in reporting interfaces as up to nova, leading to instance creation failures due to a timeout in LibvirtDriver._create_domain_and_network.

This can be replaced with a constant-time implementation using the fact that the tap device's 'brport' directory contains a 'bridge' symlink to the bridge device, which carries the bridge name. The same symlink also exists as 'master' on the same level as 'brport'.

The patch I propose also removes the single use of the methods get_all_neutron_bridges and interface_exists_on_bridge. I left these methods in, as their removal is not necessary for this bugfix.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355153

Title:
  Inefficient implementation of Linuxbridge agent get_bridge_for_tap_device

Status in OpenStack Neutron (virtual network service):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355153/+subscriptions
[Yahoo-eng-team] [Bug 1355131] Re: Populate xstatic-core and xstatic-ptl ACL groups for Gerrit
Awesome, thank you!

** Changed in: horizon
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355131

Title:
  Populate xstatic-core and xstatic-ptl ACL groups for Gerrit

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Core Infrastructure:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355131/+subscriptions
[Yahoo-eng-team] [Bug 1354829] Re: sudo: 3 incorrect password attempts in gate-neutron-python26
After looking through logstash it would appear that this is not specific to Neutron. Several of the Jenkins servers have suffered from this error when testing different projects, including Neutron, Horizon and Tempest.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354829

Title:
  sudo: 3 incorrect password attempts in gate-neutron-python26

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  http://logs.openstack.org/86/110186/1/check/gate-neutron-python26/9dec53b/console.html

  2014-08-10 06:51:57.341 | Started by user anonymous
  2014-08-10 06:51:57.343 | Building remotely on bare-centos6-rax-dfw-1380820 in workspace /home/jenkins/workspace/gate-neutron-python26
  2014-08-10 06:51:57.458 | [gate-neutron-python26] $ /bin/bash -xe /tmp/hudson1812648831707400818.sh
  2014-08-10 06:51:57.540 | + rpm -ql libffi-devel
  2014-08-10 06:51:57.543 | /tmp/hudson1812648831707400818.sh: line 2: rpm: command not found
  2014-08-10 06:51:57.543 | + sudo yum install -y libffi-devel
  2014-08-10 06:51:57.549 | sudo: no tty present and no askpass program specified
  2014-08-10 06:51:57.551 | Sorry, try again.
  2014-08-10 06:51:57.552 | sudo: no tty present and no askpass program specified
  2014-08-10 06:51:57.552 | Sorry, try again.
  2014-08-10 06:51:57.553 | sudo: no tty present and no askpass program specified
  2014-08-10 06:51:57.553 | Sorry, try again.
  2014-08-10 06:51:57.553 | sudo: 3 incorrect password attempts
  2014-08-10 06:51:57.571 | Build step 'Execute shell' marked build as failure

  http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcInN1ZG86IDMgaW5jb3JyZWN0IHBhc3N3b3JkIGF0dGVtcHRzXCIgQU5EIGZpbGVuYW1lOiBcImNvbnNvbGUuaHRtbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA3NjcyNjY4NDc3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1354829/+subscriptions
[Yahoo-eng-team] [Bug 1355169] [NEW] rest client from opendaylight ml2 driver should be reusable
Public bug reported:

The current ML2 plugin for opendaylight is a single class. It would be better if it were split so that the REST client could easily be re-used by additional plugins, e.g. an L3 service plugin and/or LBaaS, FWaaS, etc.

** Affects: neutron
   Importance: Undecided
   Assignee: Dave Tucker (davetucker)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => Dave Tucker (davetucker)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355169

Title:
  rest client from opendaylight ml2 driver should be reusable

Status in OpenStack Neutron (virtual network service):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355169/+subscriptions
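One possible shape for such a split, as a hypothetical sketch (class name, URL layout and method names are illustrative, not the actual driver's API): a transport-only client with no ML2 knowledge, which an L3, LBaaS or FWaaS plugin could instantiate with its own resource paths.

```python
# Hypothetical refactoring sketch: the REST plumbing lives in its own
# class so service plugins other than ML2 can reuse it.
class OpenDaylightRestClient(object):

    def __init__(self, base_url, username, password, timeout=10):
        self.base_url = base_url.rstrip('/')
        self.auth = (username, password)
        self.timeout = timeout

    def url_for(self, resource, resource_id=None):
        """Build the URL for a northbound resource collection or item."""
        url = '%s/%s' % (self.base_url, resource)
        if resource_id is not None:
            url = '%s/%s' % (url, resource_id)
        return url

    def sendjson(self, method, urlpath, obj=None):
        # A real implementation would issue the HTTP request here
        # (urllib2/requests); omitted to keep the sketch offline.
        raise NotImplementedError


client = OpenDaylightRestClient(
    'http://odl.example:8080/controller/nb/v2/neutron', 'admin', 'admin')
print(client.url_for('networks'))
print(client.url_for('ports', '6d430014-faa6-4af5-9701-063f61f4eb40'))
```

The mechanism driver would then hold a client instance rather than inheriting the HTTP code, and a future L3 service plugin could construct its own client against the same controller.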
[Yahoo-eng-team] [Bug 1355171] [NEW] Injected /etc/network/interfaces misses line feed and IP existence checks
Public bug reported:

When I use nova's /etc/network/interfaces file injection via configdisk, set up by cloud-init, I end up with a non-working /etc/network/interfaces like this one (notice the missing line feed and the address None):

  # Injected by Nova on instance boot
  #
  # This file describes the network interfaces available on your system
  # and how to activate them. For more information, see interfaces(5).

  # The loopback network interface
  auto lo
  iface lo inet loopback

  auto eth0
  iface eth0 inet static
      address 10.1.0.14
      netmask 255.255.0.0
      broadcast 10.1.255.255
      gateway 10.1.0.1

  iface eth0 inet6 static
      address None
      netmask Noneauto eth1
  iface eth1 inet static
      address None
      netmask None
      broadcast None

  iface eth1 inet6 static
      address :::::c
      netmask 64

I use two interfaces for two different networks: one v4-only on eth0 and one v6-only on eth1. I have attached a patch that corrects both issues. About the second issue: since the patched version checks for IPv6 address existence, it might be possible to delete the use_ipv6 check.

** Affects: nova
   Importance: Undecided
   Status: New

** Patch added: interfaces.template.patch
   https://bugs.launchpad.net/bugs/1355171/+attachment/4174463/+files/interfaces.template.patch

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355171

Title:
  Injected /etc/network/interfaces misses line feed and IP existence checks

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355171/+subscriptions
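The two corrections can be illustrated with a minimal renderer (this is a hypothetical sketch, not nova's actual Django template or the attached patch; the interface dicts and the 2001:db8::c example address are made up): only emit a stanza when its address actually exists, so "address None" never appears, and terminate every stanza with a newline so fused lines like "netmask Noneauto eth1" cannot occur.

```python
# Hypothetical renderer showing the two fixes:
# (1) skip inet/inet6 stanzas whose address is missing instead of
#     rendering the string "None";
# (2) end every stanza with a blank line so stanzas cannot fuse.
def render_interfaces(interfaces):
    lines = ['auto lo', 'iface lo inet loopback', '']
    for ifc in interfaces:
        lines.append('auto %s' % ifc['name'])
        if ifc.get('address'):
            lines.append('iface %s inet static' % ifc['name'])
            lines.append('    address %s' % ifc['address'])
            lines.append('    netmask %s' % ifc['netmask'])
        if ifc.get('address_v6'):
            lines.append('iface %s inet6 static' % ifc['name'])
            lines.append('    address %s' % ifc['address_v6'])
            lines.append('    netmask %s' % ifc['netmask_v6'])
        lines.append('')
    return '\n'.join(lines)

print(render_interfaces([
    {'name': 'eth0', 'address': '10.1.0.14', 'netmask': '255.255.0.0'},
    {'name': 'eth1', 'address_v6': '2001:db8::c', 'netmask_v6': '64'},
]))
```

For the reporter's v4-only eth0 / v6-only eth1 layout, this yields exactly one stanza per interface and no spurious inet/inet6 blocks.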
[Yahoo-eng-team] [Bug 1355125] Re: keystonemiddleware appears not to hash PKIZ tokens
** Description changed:

- It looks like Keystone hashes only PKI tokens - https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token.py#L1399
- and test test_verify_signed_token_raises_exception_for_revoked_pkiz_token in https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/tests/test_auth_token_middleware.py#L741 does not takes hashing into account (and checks only already hashed data and hot hashing itself)
+ It looks like Keystone hashes only PKI tokens [1] and test test_verify_signed_token_raises_exception_for_revoked_pkiz_token [2] does not take hashing into account (and checks only already hashed data and not hashing itself)

  And that should make token revocation for PKIZ tokens broken.
+
+ [1] https://github.com/openstack/keystonemiddleware/blob/c9036a00ef3f7c4b9475799d5b713db7a2d94961/keystonemiddleware/auth_token.py#L1399
+ [2] https://github.com/openstack/keystonemiddleware/blob/c9036a00ef3f7c4b9475799d5b713db7a2d94961/keystonemiddleware/tests/test_auth_token_middleware.py#L741

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355125

Title:
  keystonemiddleware appears not to hash PKIZ tokens

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Identity (Keystone) Middleware:
  New

Bug description:
  It looks like Keystone hashes only PKI tokens [1] and test
  test_verify_signed_token_raises_exception_for_revoked_pkiz_token [2]
  does not take hashing into account (and checks only already hashed
  data and not hashing itself)

  And that should make token revocation for PKIZ tokens broken.

  [1] https://github.com/openstack/keystonemiddleware/blob/c9036a00ef3f7c4b9475799d5b713db7a2d94961/keystonemiddleware/auth_token.py#L1399
  [2] https://github.com/openstack/keystonemiddleware/blob/c9036a00ef3f7c4b9475799d5b713db7a2d94961/keystonemiddleware/tests/test_auth_token_middleware.py#L741

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1355125/+subscriptions
[Yahoo-eng-team] [Bug 1355183] [NEW] Allow POST of floating_ip_address for L3 extension
Public bug reported:

The documentation for Create Floating IP suggests that the call shall allow the floating_ip_address to be selected by the caller, yet the code does not permit that and will not use the value. From
http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html:

  An address for the floating ip will be automatically allocated, unless the floating_ip_address attribute is specified in the request body. If the requested floating IP address does not fall in the external network's subnet range, a 400 error will be returned. If the requested floating IP address is already in use, a 409 error code will be returned.

A proposed patch will permit the parameter on POST (but not change it via PUT). The policy on setting the fixed_ip of a port applies to the underlying create-port call on the external network, which also makes the necessary checks against out-of-range and duplicate addresses.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355183

Title:
  Allow POST of floating_ip_address for L3 extension

Status in OpenStack Neutron (virtual network service):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355183/+subscriptions
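Per the API documentation quoted above, a request that pre-selects the address would carry a body like the following (the network UUID here is made up for illustration):

```python
# Illustrative body for POST /v2.0/floatingips with a caller-chosen
# address; floating_network_id is a made-up UUID.
import json

body = {
    "floatingip": {
        "floating_network_id": "11111111-2222-3333-4444-555555555555",
        "floating_ip_address": "172.24.4.228",
    }
}
print(json.dumps(body, indent=4))
```

With the patch applied, 172.24.4.228 would be used if it is free and inside the external network's subnet range; out-of-range gives 400 and an in-use address gives 409, as the documentation states.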
[Yahoo-eng-team] [Bug 1355177] [NEW] hyperV: compute service goes off when booting an instance fails
Public bug reported:

I have a 2012 R2 Windows machine where Hyper-V, the OpenStack nova
compute service and the OpenStack neutron Hyper-V agent are installed.
I have a devstack running where neutron is not installed. When I am
trying to boot an instance, it fails with the error:

  ConnectionFailed: Connection to neutron failed:
  HTTPConnectionPool(host='x.x.x.x', port=9696): Max retries exceeded
  with url

and is further re-scheduled to a different host. But in the meantime
the compute service becomes dead on the Hyper-V host.

nova list:

nova-compute  WIN-4EUO2SEHJ92  nova  enabled  XXX  2014-08-11 09:57:55

compute.log:

2014-08-11 10:21:53.653 3924 ERROR nova.compute.manager [req-69c46c41-0aa1-47cc-8abe-7306ec5ef57d None] [instance: 6d430014-faa6-4af5-9701-063f61f4eb40] Instance failed to spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40] Traceback (most recent call last):
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py", line 2108, in _build_resources
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     yield resources
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py", line 1994, in _build_and_run_instance
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     block_device_info=block_device_info)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\driver.py", line 55, in spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     admin_password, network_info, block_device_info)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 87, in wrapper
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     return function(self, *args, **kwds)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 255, in spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     self.destroy(instance)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\openstack\common\excutils.py", line 82, in __exit__
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     six.reraise(self.type_, self.value, self.tb)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 246, in spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     root_vhd_path, eph_vhd_path)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 289, in create_instance
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     for vif in network_info:
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\network\model.py", line 441, in __iter__
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     return self._sync_wrapper(fn, *args, **kwargs)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\network\model.py", line 432, in _sync_wrapper
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     self.wait()
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance:
[Yahoo-eng-team] [Bug 1355186] [NEW] Run the Horizon integration tests on the Horizon gate
Public bug reported:

Horizon is building up a small collection of integration tests and
it's time to add a gate job so that we can run them on the Horizon
gate for the master branch. The tests require Selenium to run.

I have a patch pending for using the recent selenium-headless flag to
the tox job (py27integration) to make the tests run headlessly. I'm
hoping that the addition of the selenium-headless flag removes the
need for special casing on the gate (i.e.
https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/slave_scripts/run-selenium.sh ).
If I'm correct, then separately we can also update the tox job for
the selenium unit tests and remove the need for that script
altogether.

I'm moderately familiar with adding new jobs for unit tests (as in,
it should look something like this:
https://git.openstack.org/cgit/openstack-infra/config/commit/?id=ecf346a668631e4f560949686c2669a0c2d281bd )
but could humbly use additional advice from infra to figure out what
needs to work differently here, since:

- The tests require a running devstack
- The instance running the tests needs to have the xvfb package
  installed, required for running selenium headlessly. Xvfb is
  referenced in run-selenium.sh so I understand it is installed in
  some cases at least?

I'd like to suggest making the job non-voting for the first few weeks
until we figure out how stable it is.

** Affects: horizon
     Importance: High
       Assignee: Julie Pichon (jpichon)
         Status: In Progress

** Affects: openstack-ci
     Importance: Undecided
         Status: New

** Tags: integration-test

** Also affects: openstack-ci
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355186

Title:
  Run the Horizon integration tests on the Horizon gate

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Core Infrastructure:
  New

Bug description:

Horizon is building up a small collection of integration tests and
it's time to add a gate job so that we can run them on the Horizon
gate for the master branch. The tests require Selenium to run.

I have a patch pending for using the recent selenium-headless flag to
the tox job (py27integration) to make the tests run headlessly. I'm
hoping that the addition of the selenium-headless flag removes the
need for special casing on the gate (i.e.
https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/slave_scripts/run-selenium.sh ).
If I'm correct, then separately we can also update the tox job for
the selenium unit tests and remove the need for that script
altogether.

I'm moderately familiar with adding new jobs for unit tests (as in,
it should look something like this:
https://git.openstack.org/cgit/openstack-infra/config/commit/?id=ecf346a668631e4f560949686c2669a0c2d281bd )
but could humbly use additional advice from infra to figure out what
needs to work differently here, since:

- The tests require a running devstack
- The instance running the tests needs to have the xvfb package
  installed, required for running selenium headlessly. Xvfb is
  referenced in run-selenium.sh so I understand it is installed in
  some cases at least?

I'd like to suggest making the job non-voting for the first few weeks
until we figure out how stable it is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355186/+subscriptions
[Yahoo-eng-team] [Bug 1355195] [NEW] ipv6 router does not set ip forwarding
Public bug reported:

When we set up a router between two IPv6-only subnets, the router
fails to forward the traffic from one network to the other. After
investigations in neutron's l3_agent source code (on git), it seems
that you forgot to set ipv6 forwarding to 1 with sysctl after you set
it for ipv4. The attached patch corrects this issue.

** Affects: neutron
     Importance: Undecided
         Status: New

** Attachment added: "l3agent.patch"
   https://bugs.launchpad.net/bugs/1355195/+attachment/4174467/+files/l3agent.patch

** Project changed: nova => neutron

** Summary changed:

- ipv6 router do not set ip forwarding
+ ipv6 router does not set ip forwarding

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355195

Title:
  ipv6 router does not set ip forwarding

Status in OpenStack Neutron (virtual network service):
  New

Bug description:

When we set up a router between two IPv6-only subnets, the router
fails to forward the traffic from one network to the other. After
investigations in neutron's l3_agent source code (on git), it seems
that you forgot to set ipv6 forwarding to 1 with sysctl after you set
it for ipv4. The attached patch corrects this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355195/+subscriptions
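A minimal sketch of what the fix has to do (function and command
layout are illustrative, not the actual l3_agent code, and the real
agent wraps such commands in "ip netns exec <router-namespace>"):
alongside net.ipv4.ip_forward, the agent must also enable the IPv6
forwarding sysctl.

```python
def forwarding_sysctl_cmds(enable_ipv6=True):
    """Build the sysctl commands a router needs for packet forwarding.

    Illustrative only: shows which kernel knobs are involved. The IPv6
    one is the piece this bug reports as missing.
    """
    cmds = [["sysctl", "-w", "net.ipv4.ip_forward=1"]]
    if enable_ipv6:
        # The missing sysctl for IPv6-only subnets:
        cmds.append(["sysctl", "-w", "net.ipv6.conf.all.forwarding=1"])
    return cmds

for cmd in forwarding_sysctl_cmds():
    print(" ".join(cmd))
```

Without the second command, the kernel silently drops transit IPv6
traffic in the router namespace, which matches the reported symptom.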
[Yahoo-eng-team] [Bug 1355177] Re: hyperV: compute service goes off when booting an instance fails
This is the expected behaviour and it's unrelated to the Hyper-V
driver. You can tell Nova not to use Neutron by unsetting
network_api_class in nova.conf on your Hyper-V compute node(s), which
will revert to the default (Nova network). Please note that Nova
network is not supported on Hyper-V beyond the very basic flat
networking.

** Changed in: nova
       Status: New => Invalid

** Tags removed: neutron-agent
** Tags added: hyper-v

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355177

Title:
  hyperV: compute service goes off when booting an instance fails

Status in OpenStack Compute (Nova):
  Invalid

Bug description:

I have a 2012 R2 Windows machine where Hyper-V, the OpenStack nova
compute service and the OpenStack neutron Hyper-V agent are installed.
I have a devstack running where neutron is not installed. When I am
trying to boot an instance, it fails with the error:

  ConnectionFailed: Connection to neutron failed:
  HTTPConnectionPool(host='x.x.x.x', port=9696): Max retries exceeded
  with url

and is further re-scheduled to a different host. But in the meantime
the compute service becomes dead on the Hyper-V host.

nova list:

nova-compute  WIN-4EUO2SEHJ92  nova  enabled  XXX  2014-08-11 09:57:55

compute.log:

2014-08-11 10:21:53.653 3924 ERROR nova.compute.manager [req-69c46c41-0aa1-47cc-8abe-7306ec5ef57d None] [instance: 6d430014-faa6-4af5-9701-063f61f4eb40] Instance failed to spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40] Traceback (most recent call last):
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py", line 2108, in _build_resources
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     yield resources
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py", line 1994, in _build_and_run_instance
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     block_device_info=block_device_info)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\driver.py", line 55, in spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     admin_password, network_info, block_device_info)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 87, in wrapper
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     return function(self, *args, **kwds)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 255, in spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     self.destroy(instance)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\openstack\common\excutils.py", line 82, in __exit__
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     six.reraise(self.type_, self.value, self.tb)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 246, in spawn
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     root_vhd_path, eph_vhd_path)
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]   File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py", line 289, in create_instance
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance: 6d430014-faa6-4af5-9701-063f61f4eb40]     for vif in network_info:
2014-08-11 10:21:53.653 3924 TRACE nova.compute.manager [instance:
[Yahoo-eng-team] [Bug 1278843] Re: Neutron doesn't report using a stale CA certificate
** Also affects: keystonemiddleware
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1278843

Title:
  Neutron doesn't report using a stale CA certificate

Status in OpenStack Identity (Keystone) Middleware:
  New
Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:

It seems that when the CA certificate cached locally by Neutron in
/var/lib/neutron/keystone-signing/ is stale (does not match the
current CA cert used by keystone), it is not possible to start a new
instance. This is understandable. However, the stacktrace error you
get while trying to start an instance in such a situation is hugely
misleading:

  ERROR: Error: unsupported operand type(s) for +: 'NoneType' and 'str'

It's rather tricky to debug the issue. To reproduce, just redo the
pki-setup for keystone on a deployed and otherwise healthy openstack
cluster. This will create a new CA cert for keystone; however,
neutron-server will be completely oblivious to this fact and will
still insist on using its local copy of the cacert.

I'm running Havana on rhel6.4

-- /var/log/nova/compute.log on the compute node when trying to start
a vm

OpenStack (nova:4668) ERROR: Instance failed network setup after 1 attempt(s)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager Traceback (most recent call last):
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1238, in _allocate_network_async
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     dhcp_options=dhcp_options)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/api.py", line 49, in wrapper
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     res = f(self, context, *args, **kwargs)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 358, in allocate_for_instance
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     LOG.exception(msg, port_id)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 323, in allocate_for_instance
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     port_req_body)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 392, in _populate_neutron_extension_values
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     self._refresh_neutron_extensions_cache()
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 376, in _refresh_neutron_extensions_cache
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     extensions_list = neutron.list_extensions()['extensions']
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 108, in with_params
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     ret = self.function(instance, *args, **kwargs)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 286, in list_extensions
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     return self.get(self.extensions_path, params=_params)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1183, in get
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     headers=headers, params=params)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1168, in retry_request
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     headers=headers, params=params)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1103, in do_request
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     resp, replybody = self.httpclient.do_request(action, method, body=body)
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/neutronclient/client.py", line 188, in do_request
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     self.authenticate()
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/neutronclient/client.py", line 224, in authenticate
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager     token_url = self.auth_url + "/tokens"
2014-02-11 13:02:38.429 4668 TRACE nova.compute.manager TypeError: unsupported operand
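The opaque TypeError at the bottom of that trace is plain string
concatenation with an unset auth_url; a minimal reproduction (the
local auth_url variable stands in for the client attribute that ends
up None when authentication against the stale cert fails):

```python
auth_url = None  # what the neutron client is left with after the failed auth

try:
    token_url = auth_url + "/tokens"  # the line from neutronclient/client.py
except TypeError as exc:
    # This generic message is all the operator sees; nothing hints
    # that a stale CA certificate is the real cause.
    message = str(exc)
    print(message)  # unsupported operand type(s) for +: 'NoneType' and 'str'
```

A clearer fix would be for the client to raise an explicit
authentication error before ever reaching the concatenation.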
[Yahoo-eng-team] [Bug 1355251] [NEW] db plugin sort testing function should not use sorted() on assertion
Public bug reported:

In tests/unit/test_db_plugin.py, line 585
(https://github.com/openstack/neutron/blob/master/neutron/tests/unit/test_db_plugin.py#L585),
the sorted() function is used within the assertEqual() call. This
defeats the test's objective, which is to verify sorting: every unit
test using this function (_test_list_with_sort) for sorting tests
will always succeed. sorted() should not be used if the tests are to
produce proper results.

** Affects: neutron
     Importance: Undecided
       Assignee: Evgeny Fedoruk (evgenyf)
         Status: New

** Tags: unittest

** Changed in: neutron
     Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355251

Title:
  db plugin sort testing function should not use sorted() on assertion

Status in OpenStack Neutron (virtual network service):
  New

Bug description:

In tests/unit/test_db_plugin.py, line 585
(https://github.com/openstack/neutron/blob/master/neutron/tests/unit/test_db_plugin.py#L585),
the sorted() function is used within the assertEqual() call. This
defeats the test's objective, which is to verify sorting: every unit
test using this function (_test_list_with_sort) for sorting tests
will always succeed. sorted() should not be used if the tests are to
produce proper results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355251/+subscriptions
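The flaw is easy to demonstrate: sorting both sides before comparing
makes the assertion pass even when the server ignored the requested
ordering (the lists below are illustrative, not taken from the test):

```python
api_result = ["c", "a", "b"]  # order actually returned (sorting broken)
expected = ["a", "b", "c"]    # order the test asked for

# What the test effectively does today -- always passes, because
# sorting both sides erases the very ordering being tested:
assert sorted(api_result) == sorted(expected)

# What it should do -- compare the lists as returned:
print(api_result == expected)  # False: the broken ordering is now caught
```

Comparing the raw lists makes the assertion sensitive to order, which
is the whole point of a sort test.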
[Yahoo-eng-team] [Bug 1354509] Re: Glance requirements.txt for osprofiler too loose
*** This bug is a duplicate of bug 1354500 ***
    https://bugs.launchpad.net/bugs/1354500

Duplicate of https://bugs.launchpad.net/glance/+bug/1354500

** Changed in: glance
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1354509

Title:
  Glance requirements.txt for osprofiler too loose

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in tripleo - openstack on openstack:
  Confirmed

Bug description:

Needs to be set to osprofiler>=0.3.0 after change
https://review.openstack.org/#/c/105635/22

0.3.0: def __init__(self, application, hmac_keys, enabled=False):
https://github.com/stackforge/osprofiler/blob/0.3.0/osprofiler/web.py

0.2.5: def __init__(self, application, hmac_key, enabled=False):
https://github.com/stackforge/osprofiler/blob/0.2.5/osprofiler/web.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1354509/+subscriptions
[Yahoo-eng-team] [Bug 1327959] Re: fwaas:firewall rule doesn't throw error when setting dest. ip address as network and took it as /32
Such source/destination IP addresses may be valid when the network
prefix is shorter than 24 bits (an address like 10.10.2.0 can then be
a legitimate host address). I'd suggest marking this bug as invalid.

** Changed in: neutron
       Status: Confirmed => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327959

Title:
  fwaas:firewall rule doesn't throw error when setting dest. ip
  address as network and took it as /32

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:

When creating a firewall rule, if the destination/source ip address
is given as 10.10.10.0, it doesn't throw an error and takes it as
10.10.10.0/32.

Steps to Reproduce:
Create a firewall rule with destination ip address 10.10.10.0

Actual Results:

root@IGA-OSC:~# fwru re --source-ip-address 10.10.1.0 --destination-ip-address 10.10.2.0
Updated firewall_rule: re
root@IGA-OSC:~# fwrs re
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| action                  | deny                                 |
| description             |                                      |
| destination_ip_address  | 10.10.2.0                            |
| destination_port        |                                      |
| enabled                 | True                                 |
| firewall_policy_id      | 924d41cd-fad1-4ed4-9114-6dd704382bd3 |
| id                      | ed8769fc-e4b7-4306-b8ca-95350c80ca22 |
| ip_version              | 4                                    |
| name                    | re                                   |
| position                | 1                                    |
| protocol                | icmp                                 |
| shared                  | False                                |
| source_ip_address       | 10.10.1.0                            |
| source_port             |                                      |
| tenant_id               | d9481c57a11c46eea62886938b5378a7     |
+-------------------------+--------------------------------------+

In the router's iptables-save output (it got /32 as the subnet for
the network, which is invalid):

-A neutron-vpn-agen-iv47a808890 -s 10.10.1.0/32 -d 10.10.2.0/32 -p icmp -j DROP
-A neutron-vpn-agen-iv47a808890 -d 10.10.10.25/32 -p icmp -j DROP
-A neutron-vpn-agen-iv47a808890 -d 10.10.10.24/32 -p icmp -j DROP
-A neutron-vpn-agen-iv47a808890 -s 192.52.1.3/32 -d 192.52.1.45/32 -p tcp -m tcp --dport 22:23 -j DROP
-A neutron-vpn-agen-iv47a808890 -j ACCEPT
-A neutron-vpn-agen-ov47a808890 -m state --state INVALID -j DROP
-A neutron-vpn-agen-ov47a808890 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-vpn-agen-ov47a808890 -s 10.10.1.0/32 -d 10.10.2.0/32 -p icmp -j DROP
-A neutron-vpn-agen-ov47a808890 -d 10.10.10.25/32 -p icmp -j DROP
-A neutron-vpn-agen-ov47a808890 -d 10.10.10.24/32 -p icmp -j DROP
-A neutron-vpn-agen-ov47a808890 -s 192.52.1.3/32 -d 192.52.1.45/32 -p tcp -m tcp --dport 22:23 -j DROP
-A neutron-vpn-agen-ov47a808890 -j ACCEPT

Expected Results:
It should throw an error specifying that the given ip address is a
network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1327959/+subscriptions
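Python's standard library shows the same defaulting the rule ends up
with (a quick illustration, not the neutron validation code path):

```python
import ipaddress

# A bare address with no prefix is interpreted as a single host:
net = ipaddress.ip_network("10.10.2.0")
print(net)                # 10.10.2.0/32
print(net.num_addresses)  # 1

# ...whereas the original reporter expected the whole subnet:
subnet = ipaddress.ip_network("10.10.2.0/24")
print(subnet.num_addresses)  # 256
```

This is exactly why the choice is ambiguous: 10.10.2.0 is a valid
single host when the enclosing prefix is shorter than /24, so a /32
default is defensible even though it surprised the reporter.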
[Yahoo-eng-team] [Bug 1355285] [NEW] EC2 attachment set is absent for attaching/detaching volumes
Public bug reported:

When a volume is attaching, AWS reports the attachment info:

VOLUME      vol-b6baa9ff  1  us-east-1b  in-use  2014-08-11T15:34:38.090Z
ATTACHMENT  vol-b6baa9ff  i-afcc1f85  /dev/sdd  attaching  2014-08-11T15:41:24.000Z

But Nova EC2 doesn't do it:

VOLUME      vol-0001  1  nova  in-use  2014-08-10T19:51:06.00
ATTACHMENT  vol-0001  None  None  None  None

** Affects: nova
     Importance: Undecided
       Assignee: Feodor Tersin (ftersin)
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355285

Title:
  EC2 attachment set is absent for attaching/detaching volumes

Status in OpenStack Compute (Nova):
  New

Bug description:

When a volume is attaching, AWS reports the attachment info:

VOLUME      vol-b6baa9ff  1  us-east-1b  in-use  2014-08-11T15:34:38.090Z
ATTACHMENT  vol-b6baa9ff  i-afcc1f85  /dev/sdd  attaching  2014-08-11T15:41:24.000Z

But Nova EC2 doesn't do it:

VOLUME      vol-0001  1  nova  in-use  2014-08-10T19:51:06.00
ATTACHMENT  vol-0001  None  None  None  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355285/+subscriptions
[Yahoo-eng-team] [Bug 1355287] [NEW] Introduce constants for volume status values
Public bug reported:

Volume status values ('available', 'error', etc.) are hardcoded in
various places throughout the UI. Constants should be introduced for
these and used consistently throughout horizon.

** Affects: horizon
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355287

Title:
  Introduce constants for volume status values

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

Volume status values ('available', 'error', etc.) are hardcoded in
various places throughout the UI. Constants should be introduced for
these and used consistently throughout horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355287/+subscriptions
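One possible shape for such constants (class and function names are a
suggestion for illustration, not existing horizon code):

```python
class VolumeStatus(object):
    """Single home for the cinder volume status strings the UI checks."""
    AVAILABLE = "available"
    IN_USE = "in-use"
    ERROR = "error"
    CREATING = "creating"
    DELETING = "deleting"

def is_deletable(status):
    # Call sites compare against the constants instead of raw strings,
    # so a typo becomes an AttributeError instead of a silent mismatch.
    return status in (VolumeStatus.AVAILABLE, VolumeStatus.ERROR)

print(is_deletable(VolumeStatus.AVAILABLE))  # True
print(is_deletable(VolumeStatus.IN_USE))     # False
```

Centralizing the strings also gives one obvious place to update if
cinder ever adds or renames a status value.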
[Yahoo-eng-team] [Bug 1309642] Re: horizon angular integration overrides embed tag
** Changed in: horizon
       Status: In Progress => Invalid

** Changed in: horizon
     Assignee: Rob Raymond (rob-raymond) => (unassigned)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1309642

Title:
  horizon angular integration overrides embed tag

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:

The angular integration into horizon overrides the default content
tag {{ }} since it conflicts with the Django template tag. The
problem is that some angular components (ng-grid) cannot be reused
because they assume the default setting. Ideally angular code should
run unchanged whether run inside of horizon or outside. I am thinking
that this override should be removed. Angular templates (aka
partials) should be loaded from static files rather than templates. A
quick search of templates in horizon did not find any use of {$ $},
but this surprises me.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1309642/+subscriptions
[Yahoo-eng-team] [Bug 1355313] [NEW] instance table is flickered in Chrome browser when launching multiple (10) instances at the same time
Public bug reported:

When launching multiple instances (usually more than 10) in parallel,
the instance table flickers in the Chrome browser (on Windows). Some
cells become white temporarily and are then displayed correctly. It
continues until the instance launches are completed. It is quite ugly
and hopefully it can be fixed. When using the Firefox browser (on
Windows), this symptom does not occur. I don't know whether it occurs
in Chrome on MacOSX or Linux.

** Affects: horizon
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355313

Title:
  instance table is flickered in Chrome browser when launching
  multiple (10) instances at the same time

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

When launching multiple instances (usually more than 10) in parallel,
the instance table flickers in the Chrome browser (on Windows). Some
cells become white temporarily and are then displayed correctly. It
continues until the instance launches are completed. It is quite ugly
and hopefully it can be fixed. When using the Firefox browser (on
Windows), this symptom does not occur. I don't know whether it occurs
in Chrome on MacOSX or Linux.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355313/+subscriptions
[Yahoo-eng-team] [Bug 1294682] Re: Hyper-v VHDX resizing not working when using differencing images
Converted into blueprint:
https://blueprints.launchpad.net/nova/+spec/add-differencing-vhdx-resize-support

Merged in Juno

** Changed in: nova
       Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294682

Title:
  Hyper-v VHDX resizing not working when using differencing images

Status in OpenStack Compute (Nova):
  Invalid

Bug description:

When spawning a new VM using cow images, the root image won't get the
size of the flavor. Instead, it will have the size of the base image,
as no resize is attempted. In this case, the proper size should be
specified when creating the differencing image.

If after that one attempts to resize the VM, the
get_internal_vhd_size_by_file_size method will raise an exception as
it can't get this info out of the differencing image. Instead of
raising this exception, this method may recurse by calling itself on
the parent of the differencing image.

This happens on both V1 and V2 namespaces.

Trace: http://paste.openstack.org/show/73825/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294682/+subscriptions
[Yahoo-eng-team] [Bug 1355360] [NEW] Cisco VPNaaS agent doesn't resync with config DB on restart
Public bug reported: When the Cisco VPNaaS agent restarts - whether due to a software error or operator action - it doesn't reconcile the configuration changes that happened in the DB while it was down with the CSR VM configs. This can lead to cases where a) a VPN config (ipsec-connection, et al.) that was deleted in the DB doesn't get deleted in the CSR VM, and b) a VPN config that was updated in the DB causes the agent to reapply the config as 'new' even though it already exists in the CSR VM. While much of the configuration in the CSR software is idempotent, the REST API used to re-apply it will fail with a 'resource already exists' status. This causes the Cisco VPN agent to report a failure.

** Affects: neutron Importance: Undecided Assignee: Sridhar Ramaswamy (srramasw) Status: New ** Tags: vpnaas ** Changed in: neutron Assignee: (unassigned) => Sridhar Ramaswamy (srramasw)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355360
Title: Cisco VPNaaS agent doesn't resync with config DB on restart
Status in OpenStack Neutron (virtual network service): New
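One way to make the re-apply path tolerant after a restart is to treat 'resource already exists' as success. The sketch below is a minimal model with invented names (apply_config, ResourceAlreadyExists), not the Cisco driver's actual API:

```python
# Minimal sketch: treat 'already exists' from the backend as success when
# re-applying the desired config after an agent restart.

class ResourceAlreadyExists(Exception):
    pass

def apply_config(applied, item):
    """Apply 'item'; raise if already present (mimics the REST API)."""
    if item in applied:
        raise ResourceAlreadyExists(item)
    applied.add(item)

def reapply_after_restart(applied, desired):
    """Re-apply every desired item, tolerating pre-existing ones."""
    for item in desired:
        try:
            apply_config(applied, item)
        except ResourceAlreadyExists:
            pass  # applied before the restart; not a failure
    return applied
```

A full resync would also delete backend resources absent from the DB, covering case a) from the report.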
[Yahoo-eng-team] [Bug 1355373] [NEW] [data processing] Image registry lists images twice for admin
Public bug reported: *low priority* If you log in as the admin user and go to Data Processing - Image Registry, you will notice that the image dropdown may list each image twice (if an image is loaded for the admin project and is public, it is shown twice). The following code (data_image_registry/forms.py) builds up a list that may contain duplicates, and the possibility of duplicates is never checked later down the line:

    images = self._get_tenant_images(request)
    if request.user.is_superuser:
        images += self._get_public_images(request)

The image dropdown should prevent images from showing up twice.

** Affects: horizon Importance: Undecided Status: New ** Tags: sahara

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355373
Title: [data processing] Image registry lists images twice for admin
Status in OpenStack Dashboard (Horizon): New
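One possible fix is to de-duplicate the combined list by image id before building the dropdown. In this sketch, SimpleNamespace stands in for the image objects Horizon receives (assumed to expose an `id` attribute); the helper name is invented:

```python
# Sketch: merge the tenant and public image lists, keeping only the first
# occurrence of each image id.

from types import SimpleNamespace

def unique_images(tenant_images, public_images):
    """Return the merged list with duplicate image ids removed."""
    seen = set()
    result = []
    for image in tenant_images + public_images:
        if image.id not in seen:
            seen.add(image.id)
            result.append(image)
    return result

# An image owned by the admin project that is also public shows up in
# both source lists:
shared = SimpleNamespace(id="img-1", name="fedora")
private = SimpleNamespace(id="img-2", name="centos")
```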
[Yahoo-eng-team] [Bug 1355405] [NEW] Exception handling when deleting a non-empty container causes Horizon to crash.
Public bug reported: When using Horizon in French, deleting a non-empty object container causes Horizon to crash in production.

** Affects: horizon Importance: Undecided Assignee: George Peristerakis (george-peristerakis) Status: New ** Changed in: horizon Assignee: (unassigned) => George Peristerakis (george-peristerakis)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355405
Title: Exception handling when deleting a non-empty container causes Horizon to crash.
Status in OpenStack Dashboard (Horizon): New
[Yahoo-eng-team] [Bug 1355409] [NEW] Key error in l3_dvr_db
Public bug reported: The following stack trace was observed in the gate:

ERROR oslo.messaging.rpc.dispatcher [req-110db567-3322-4922-95c8-a54d166c8ead ] Exception during message handling: u'b132e9da-3ee2-47ac-9ed5-f04ca73c01d1'
TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
TRACE oslo.messaging.rpc.dispatcher     incoming.message))
TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/neutron/neutron/db/l3_rpc_base.py", line 57, in sync_routers
TRACE oslo.messaging.rpc.dispatcher     context, host, router_ids)
TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/neutron/neutron/db/l3_agentschedulers_db.py", line 191, in list_active_sync_routers_on_active_l3_agent
TRACE oslo.messaging.rpc.dispatcher     active=True)
TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/neutron/neutron/db/l3_dvr_db.py", line 306, in get_sync_data
TRACE oslo.messaging.rpc.dispatcher     DEVICE_OWNER_DVR_INTERFACE])
TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 1016, in _get_router_info_list
TRACE oslo.messaging.rpc.dispatcher     active=active)
TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 919, in _get_sync_routers
TRACE oslo.messaging.rpc.dispatcher     return self._build_routers_list(context, router_dicts, gw_ports)
TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/neutron/neutron/db/l3_dvr_db.py", line 241, in _build_routers_list
TRACE oslo.messaging.rpc.dispatcher     rtr['gw_port'] = gw_ports[gw_port_id]
TRACE oslo.messaging.rpc.dispatcher KeyError: u'b132e9da-3ee2-47ac-9ed5-f04ca73c01d1'

Link: http://logs.openstack.org/48/112948/3/check/check-tempest-dsvm-neutron-pg/503d619/logs/screen-q-svc.txt.gz?#_2014-08-11_07_19_33_893

** Affects: neutron Importance: High Assignee: Eugene Nikanorov (enikanorov) Status: New ** Tags: l3-dvr-backlog

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355409
Title: Key error in l3_dvr_db
Status in OpenStack Neutron (virtual network service): New
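A defensive shape for the failing lookup can be sketched as below. This is illustrative, not the fix actually merged in Neutron: the gateway port is attached only when it is present in the fetched mapping, so a port deleted between the two queries no longer raises KeyError.

```python
# Sketch: attach gw_port info only when the port id is actually present in
# the gw_ports mapping, instead of indexing unconditionally.

def build_routers_list(router_dicts, gw_ports):
    """Model of _build_routers_list with an existence check added."""
    for rtr in router_dicts:
        gw_port_id = rtr.get("gw_port_id")
        if gw_port_id is not None and gw_port_id in gw_ports:
            rtr["gw_port"] = gw_ports[gw_port_id]
        # else: the gateway port vanished between queries; leave the
        # router without a gw_port rather than crashing the RPC handler.
    return router_dicts
```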
[Yahoo-eng-team] [Bug 1355424] [NEW] NSX: update_lswitch method adds duplicate version tag
Public bug reported: The nsxlib.update_lswitch() method automatically adds the neutron version as a tag to the lswitch. Because of this, it's possible for the same value to get added many times, which exceeds the number of tags NSX allows:

2014-08-11 11:54:20.980 ERROR neutron.plugins.vmware.api_client.client [req-2e1404a4-52dd-4348-bd3e-1bf863c6066b admin c76d95a547294fa3a377f2039136305c] Server Error Message: LogicalForwardingElementConfig.tags: must contain at most 5 items (value is [{'scope': 'os_tid', 'tag': 'c76d95a547294fa3a377f2039136305c'}, {'scope': 'quantum', 'tag': '2014.2.dev196.g18a10fa'}, {'scope': 'quantum', 'tag': '2014.2.dev196.g18a10fa'}, {'scope': 'quantum_net_id', 'tag': '788e5b7c-4cff-4440-b2ea-3519b34229e7'}, {'scope': 'os_tid', 'tag': 'c76d95a547294fa3a377f2039136305c'}, {'scope': 'multi_lswitch', 'tag': 'True'}])
2014-08-11 11:54:20.980 ERROR NeutronPlugin [req-2e1404a4-52dd-4348-bd3e-1bf863c6066b admin c76d95a547294fa3a377f2039136305c] An exception occurred while selecting logical switch for the port

** Affects: neutron Importance: High Assignee: Aaron Rosen (arosen) Status: New ** Tags: icehouse-backport-potential nicira ** Changed in: neutron Importance: Undecided => High ** Changed in: neutron Assignee: (unassigned) => Aaron Rosen (arosen) ** Tags added: icehouse-backport-potential nicira

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355424
Title: NSX: update_lswitch method adds duplicate version tag
Status in OpenStack Neutron (virtual network service): New
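One mitigation is to drop exact (scope, tag) duplicates before issuing the update, so repeated update_lswitch() calls cannot push the list past NSX's 5-item limit. The sketch below is illustrative only (a complete fix would also stop re-appending the version tag in the first place):

```python
# Sketch: remove duplicate {'scope': ..., 'tag': ...} entries from the tag
# list before sending it to the NSX backend, preserving order.

def dedup_tags(tags):
    """Return the tag list with exact (scope, tag) duplicates removed."""
    seen = set()
    unique = []
    for t in tags:
        key = (t["scope"], t["tag"])
        if key not in seen:
            seen.add(key)
            unique.append(t)
    return unique
```

Applied to the six tags in the error message above, this leaves four unique entries, back under the limit.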
[Yahoo-eng-team] [Bug 1355426] [NEW] pip2rpm builds bad specs when packages have tests requirements
Public bug reported: pip2rpm is not correctly parsing test-requirements.txt: it takes the relevant entries and creates a python-(character) requirement for each character in the file, under the tests portion of the repo. This produces a spec file like the following, which makes it impossible to install the tests rpms.

Source: h Requires: python-h # Source: a Requires: python-a # Source: c Requires: python-c # Source: k Requires: python-k # Source: i Requires: python-i # Source: n Requires: python-n # Source: g Requires: python-g # Source: 0 Requires: python-0 # Source: . Requires: python-- # Source: 8 Requires: python-8 # Source: . Requires: python-- # Source: 0 Requires: python-0 # Source: 0 Requires: python-0 # Source: . Requires: python-- # Source: 9 Requires: python-9

This should be a single requirement for python-hacking with its version specifier (hacking>=0.8.0,<0.9); the comparison operators <, >, = are also being stripped out.

** Affects: anvil Importance: Undecided Status: New ** Summary changed: - Build requirements + pip2rpm builds bad specs when packages have tests requirements

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1355426
Title: pip2rpm builds bad specs when packages have tests requirements
Status in ANVIL for forging OpenStack.: New
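The symptom above (one python-X per character) matches a classic Python bug class: iterating over a *string* yields single characters. A sketch of per-line parsing that keeps each requirement intact, operators included (the helper name and regex are illustrative, not anvil's code):

```python
# Sketch: parse requirements-file text line by line, NOT character by
# character, keeping the version specifier attached to each name.

import re

def parse_requirements(text):
    """Return (name, version_specifier) pairs from requirements text."""
    reqs = []
    for line in text.splitlines():          # per line, never per character
        line = line.strip()
        if not line or line.startswith("#"):
            continue                        # skip blanks and comments
        m = re.match(r"([A-Za-z0-9._\-]+)\s*(.*)", line)
        if m:
            reqs.append((m.group(1), m.group(2)))
    return reqs
```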
[Yahoo-eng-team] [Bug 1355439] [NEW] Improve rpm package build requirements for anvil
Public bug reported: Currently it is a major pain to build horizon, because to build it correctly you need all the openstack clients installed on the build host, as well as some other openstack-specific packages. We used to have a hack that removed the requirement of installing all the clients in order to get "python manage.py compressstatic" to run. Under icehouse there is a new requirement to install a package that is not available in standard repos - unless you install/enable RDO, or have access to a previously built set of openstack packages. The specific package currently is python-oslo-config, which is built in the normal package phase and not in the dep phase, and as such is not available for install at horizon build time. A possible fix would be to add a mini-stage before prepare/build, called prepare-build-requirements and build-build-requirements, so that the needed deps are built and placed into a repo before they are included/used.

** Affects: anvil Importance: Undecided Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1355439
Title: Improve rpm package build requirements for anvil
Status in ANVIL for forging OpenStack.: New
[Yahoo-eng-team] [Bug 1355442] [NEW] Need better handling of errors during dependency building
Public bug reported: Currently, when anvil is building dependency packages and a package fails to build, it stops building once it hits that failure. If you re-run the build step, it continues building packages after the build failure. If a package fails to build and you remove the source rpm for that package - so that anvil continues to build the other package dependencies - then once it has finished, only the packages that were built *after* the failure will be in the repo. This is because we remove the rpmbuild directory after a build failure but keep track of where we are in the build process. The result is that successfully built packages are removed and we end up with a dependency repo that contains only some of the successfully built rpms. It would be best not to remove the rpmbuild directory after an error, but to remove it during the first part of the prepare stage - or even at the start of the build stage.

** Affects: anvil Importance: Undecided Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1355442
Title: Need better handling of errors during dependency building
Status in ANVIL for forging OpenStack.: New
[Yahoo-eng-team] [Bug 1354258] Re: nova-api will go wrong if AZ name has space in it when memcach is used
Easily reproducible:

>>> import memcache
>>> mc = memcache.Client(['192.168.1.111:11211'], debug=0)
>>> mc.set("some_key", "Some value")
True
>>> print mc.get("some_key")
Some value
>>> mc.set("some key", "Some value")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/memcache.py", line 584, in set
    return self._set("set", key, val, time, min_compress_len)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/memcache.py", line 804, in _set
    self.check_key(key)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/memcache.py", line 1062, in check_key
    "Control characters not allowed")
memcache.MemcachedKeyCharacterError: Control characters not allowed

** Also affects: oslo Importance: Undecided Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354258
Title: nova-api will go wrong if AZ name has space in it when memcach is used
Status in OpenStack Compute (Nova): New
Status in Oslo - a Library of Common OpenStack Code: New

Bug description: Description:
1. memcache is enabled
2. The AZ name has a space in it, such as "vmware region"

Then nova-api goes wrong:

[root@rs-144-1 init.d]# nova list
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-a26c1fd3-ce08-4875-aacf-f8db8f73b089)

Reason: memcache uses the AZ name as part of the key and checks it. It raises an error if there are unexpected characters in the key.

LOG in /var/log/api.log:
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/availability_zones.py", line 145, in get_instance_availability_zone
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack     az = cache.get(cache_key)
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/memcache.py", line 898, in get
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack     return self._get('get', key)
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/memcache.py", line 847, in _get
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack     self.check_key(key)
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/memcache.py", line 1065, in check_key
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack     "Control characters not allowed")
2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack MemcachedKeyCharacterError: Control characters not allowed
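A common mitigation for this class of problem (not necessarily the fix nova adopted) is to hash any cache key containing characters the memcached protocol forbids, so an AZ name like "vmware region" still yields a valid key:

```python
# Sketch: build a memcached-safe cache key, falling back to a sha1 digest
# whenever the raw key contains whitespace or control characters.

import hashlib

def safe_cache_key(*parts):
    """Join key parts with '/'; hash the result if it is unsafe."""
    raw = "/".join(parts)
    if any(ord(ch) < 33 for ch in raw):     # space (32) and control chars
        return "sha1:" + hashlib.sha1(raw.encode("utf-8")).hexdigest()
    return raw
```

The trade-off is that hashed keys are no longer human-readable when inspecting the cache directly.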
[Yahoo-eng-team] [Bug 1355489] [NEW] ldap binary fields fail when code try to convert to utf8
Public bug reported: When attempting to fetch a token with an LDAP-backed keystone identity backend, users are never able to authenticate.

Setup:
Version: stable/icehouse
LDAP: Active Directory. User entries have many binary fields (i.e. thumbnail_image).
driver=keystone.identity.backends.ldap.Identity

Observed behaviour:
Request: attempt to fetch a token with known-valid creds via: keystone token-get
Response: The request you have made requires authentication. (HTTP 401)

Debugging session: During an IRC #openstack-keystone chat on 8/11 with ayoung, wwriverrat1 and mdorman, it was discovered that the _id_to_dn method calls search without limiting the returned attributes. When the internal search is performed, each attribute returned from LDAP is converted to utf8, including the binary fields. This causes the call to raise an exception and quietly reject the request. If the code prevents these fields from being returned, all is well.

Source (stable/icehouse): https://github.com/openstack/keystone/blob/stable/icehouse/keystone/common/ldap/core.py#L464-L470

Adding a search value for attrlist eliminated the error. Changing line 470 from
    'objclass': self.object_class})
to
    'objclass': self.object_class}, attrlist=[self.id_attr])
resolved the issue. This should be a safe fix because the actual returned attributes are never needed nor returned. NOTE: passing in an empty list did not fix the problem.

** Affects: keystone Importance: Undecided Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355489
Title: ldap binary fields fail when code try to convert to utf8
Status in OpenStack Identity (Keystone): New
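The shape of the fix can be sketched as below. The connection object is duck-typed on python-ldap's search_s(base, scope, filterstr, attrlist) signature; SCOPE_ONELEVEL mirrors the value of ldap.SCOPE_ONELEVEL, and the function and filter values are illustrative, not Keystone's actual code:

```python
# Sketch: resolve a user id to its DN while requesting ONLY the id
# attribute, so binary attributes (e.g. AD thumbnail images) never reach
# the utf8-conversion path.

SCOPE_ONELEVEL = 1  # same numeric value as ldap.SCOPE_ONELEVEL

def id_to_dn(conn, tree_dn, id_attr, object_class, user_id):
    """Return the DN for user_id, fetching only id_attr from the server."""
    filt = "(&(%s=%s)(objectClass=%s))" % (id_attr, user_id, object_class)
    results = conn.search_s(tree_dn, SCOPE_ONELEVEL, filt,
                            attrlist=[id_attr])   # not the full entry
    return results[0][0] if results else None
```

As the report notes, attrlist must name a real attribute; an empty list did not avoid the problem.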
[Yahoo-eng-team] [Bug 1355500] [NEW] unmocked method call in databases obtaining flavor
Public bug reported: Exception while obtaining the flavors list:

Traceback (most recent call last):
  File "/home/david-lyle/horizon/horizon/openstack_dashboard/dashboards/project/databases/workflows/create_instance.py", line 56, in flavors
    return api.trove.flavor_list(request)
  File "/home/david-lyle/horizon/horizon/.venv/local/lib/python2.7/site-packages/mox.py", line 765, in __call__
    return mock_method(*params, **named_params)
  File "/home/david-lyle/horizon/horizon/.venv/local/lib/python2.7/site-packages/mox.py", line 1010, in __call__
    raise expected_method._exception
ClientException: Expected failure.

Problem instantiating action class.
Traceback (most recent call last):
  File "/home/david-lyle/horizon/horizon/horizon/workflows/base.py", line 368, in action
    context)
  File "/home/david-lyle/horizon/horizon/horizon/workflows/base.py", line 138, in __init__
    self._populate_choices(request, context)
  File "/home/david-lyle/horizon/horizon/horizon/workflows/base.py", line 151, in _populate_choices
    bound_field.choices = meth(request, context)
  File "/home/david-lyle/horizon/horizon/openstack_dashboard/dashboards/project/databases/workflows/create_instance.py", line 65, in populate_flavor_choices
    flavor_list = [(f.id, "%s" % f.name) for f in self.flavors(request)]
  File "/home/david-lyle/horizon/horizon/horizon/utils/memoized.py", line 90, in wrapped
    value = cache[key] = func(*args, **kwargs)
  File "/home/david-lyle/horizon/horizon/openstack_dashboard/dashboards/project/databases/workflows/create_instance.py", line 62, in flavors
    redirect=redirect)
  File "/home/david-lyle/horizon/horizon/horizon/exceptions.py", line 326, in handle
    raise Http302(redirect)
Http302

This occurred in a test run.

** Affects: horizon Importance: Undecided Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355500
Title: unmocked method call in databases obtaining flavor
Status in OpenStack Dashboard (Horizon): New
[Yahoo-eng-team] [Bug 1355502] [NEW] NSX - add note in configuration files regarding distributed routers
Public bug reported: In order to leverage distributed routing with the NSX plugin, the replication_mode parameter should be set to 'service'. Otherwise the backend will throw 409 errors, resulting in 500 NSX errors. This should be noted in the configuration files.

** Affects: neutron Importance: Low Assignee: Salvatore Orlando (salvatore-orlando) Status: New ** Tags: vmware

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355502
Title: NSX - add note in configuration files regarding distributed routers
Status in OpenStack Neutron (virtual network service): New
[Yahoo-eng-team] [Bug 1334368] Re: HEAD and GET inconsistencies in Keystone
** Also affects: openstack-api-site
   Importance: Undecided
   Status: New

** Tags added: identity-api

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334368

Title: HEAD and GET inconsistencies in Keystone

Status in OpenStack Identity (Keystone): Fix Released
Status in Keystone icehouse series: Fix Released
Status in OpenStack API documentation site: New
Status in Tempest: Fix Released

Bug description: While trying to convert Keystone to gate/check under mod_wsgi, it was noticed that occasionally a few HEAD calls were returning HTTP 200 where under eventlet they consistently return HTTP 204. This is an inconsistency within Keystone. Based upon the RFC, HEAD should be identical to GET except that there is no body returned. Apache + mod_wsgi in some cases converts a HEAD request to a GET request to the back-end wsgi application, to avoid issues where the headers cannot be built to be sent as part of the response (this can occur when no content is returned from the wsgi app). This situation shows that Keystone should likely never build specific HEAD request methods and should instead have HEAD simply call the controller's GET handler; the wsgi layer should then remove the response body. This will help to simplify Keystone's code as well as make the API responses more consistent.
Example Error in Gate:

2014-06-25 05:20:37.820 | tempest.api.identity.admin.v3.test_trusts.TrustsV3TestJSON.test_trust_expire[gate,smoke]
2014-06-25 05:20:37.820 |
2014-06-25 05:20:37.820 | Captured traceback:
2014-06-25 05:20:37.820 | ~~~
2014-06-25 05:20:37.820 | Traceback (most recent call last):
2014-06-25 05:20:37.820 |   File "tempest/api/identity/admin/v3/test_trusts.py", line 241, in test_trust_expire
2014-06-25 05:20:37.820 |     self.check_trust_roles()
2014-06-25 05:20:37.820 |   File "tempest/api/identity/admin/v3/test_trusts.py", line 173, in check_trust_roles
2014-06-25 05:20:37.821 |     self.assertEqual('204', resp['status'])
2014-06-25 05:20:37.821 |   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 321, in assertEqual
2014-06-25 05:20:37.821 |     self.assertThat(observed, matcher, message)
2014-06-25 05:20:37.821 |   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in assertThat
2014-06-25 05:20:37.821 |     raise mismatch_error
2014-06-25 05:20:37.821 | MismatchError: '204' != '200'

This is likely going to require changes to Keystone, Keystoneclient, Tempest, and possibly services that consume data from keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334368/+subscriptions
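The direction the report proposes, routing HEAD to the same handler as GET and dropping the body at the wsgi layer, can be sketched with plain WSGI. This is a stand-in, not Keystone's actual middleware; `demo_app` is a toy application:

```python
# Hedged sketch: a WSGI wrapper that makes HEAD behave exactly like GET
# (same status and headers) with the body removed, so the two verbs can
# never disagree on the status code.
def strip_body_on_head(app):
    def middleware(environ, start_response):
        body = app(environ, start_response)
        if environ.get("REQUEST_METHOD") == "HEAD":
            return [b""]  # identical status/headers, empty body
        return body
    return middleware


def demo_app(environ, start_response):
    # Toy GET handler; a real controller would build its response here.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"payload"]


wrapped = strip_body_on_head(demo_app)
```

With this shape there is no separate HEAD handler to drift out of sync, which is the consistency the bug asks for.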
[Yahoo-eng-team] [Bug 1355547] [NEW] Catch invalid exception when deleting a nonexistent flavor
Public bug reported:

flavor_get_by_flavor_id() raises a FlavorNotFound exception when trying to delete a nonexistent flavor, and the REST API layer should catch that exception. However, the v2 API catches exception.NotFound instead. This invalid exception catching has already been fixed in the v3 API.

** Affects: nova
   Importance: Undecided
   Assignee: Ken'ichi Ohmichi (oomichi)
   Status: In Progress

** Changed in: nova
   Assignee: (unassigned) => Ken'ichi Ohmichi (oomichi)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355547

Title: Catch invalid exception when deleting a nonexistent flavor

Status in OpenStack Compute (Nova): In Progress

Bug description: flavor_get_by_flavor_id() raises a FlavorNotFound exception when trying to delete a nonexistent flavor, and the REST API layer should catch that exception. However, the v2 API catches exception.NotFound instead. This invalid exception catching has already been fixed in the v3 API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355547/+subscriptions
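A self-contained sketch of the shape of the fix. The exception classes and the toy handler below are stand-ins mimicking the hierarchy named in the report, not nova's actual code:

```python
# FlavorNotFound is what the lookup raises; catching the broad NotFound
# (as v2 did) happens to work because of the subclass relationship, but it
# also swallows unrelated NotFound subclasses, so the fix catches the
# specific type.
class NotFound(Exception):
    pass


class FlavorNotFound(NotFound):
    pass


def flavor_get_by_flavor_id(flavor_id, flavors):
    try:
        return flavors[flavor_id]
    except KeyError:
        raise FlavorNotFound(flavor_id)


def delete_flavor(flavor_id, flavors):
    try:
        flavor_get_by_flavor_id(flavor_id, flavors)
    except FlavorNotFound:
        return 404  # map only this exception to an HTTP 404
    del flavors[flavor_id]
    return 202
```

Catching the precise exception keeps other NotFound conditions (from unrelated lookups in the same handler) from being silently turned into "flavor not found" responses.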
[Yahoo-eng-team] [Bug 1355565] [NEW] config argument to config.setup_logging is unused
Public bug reported:

In neutron/common/config.py, the argument to setup_logging is unused:

    def setup_logging(conf):
        product_name = "neutron"
        logging.setup(product_name)
        LOG.info(_("Logging enabled!"))

As a minor cleanup, we should remove the argument in the interests of simpler code and avoiding confusion.

** Affects: neutron
   Importance: Undecided
   Assignee: Angus Lees (gus)
   Status: In Progress

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355565

Title: config argument to config.setup_logging is unused

Status in OpenStack Neutron (virtual network service): In Progress

Bug description: In neutron/common/config.py, the argument to setup_logging is unused:

    def setup_logging(conf):
        product_name = "neutron"
        logging.setup(product_name)
        LOG.info(_("Logging enabled!"))

As a minor cleanup, we should remove the argument in the interests of simpler code and avoiding confusion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355565/+subscriptions
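The proposed cleanup might look like this. The stdlib logging module stands in for oslo's `logging.setup`, and the `_()` translation wrapper is omitted to keep the sketch self-contained:

```python
import logging

LOG = logging.getLogger("neutron")


def setup_logging():
    # Previously: setup_logging(conf), with `conf` accepted but never used.
    # Dropping the parameter makes the signature honest about that.
    product_name = "neutron"
    logging.basicConfig()  # stand-in for oslo's logging.setup(product_name)
    LOG.info("Logging enabled!")
```

Callers that were passing a conf object simply drop the argument; nothing else changes.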
[Yahoo-eng-team] [Bug 1355601] [NEW] User project can't be listed after the project is created
Public bug reported:

Testing steps:
1: log in as admin
2: create a new user and assign the admin role
3: log in as this new user
4: create a new project
5: search for this new project; the search returns an empty result

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1355601

Title: User project can't be listed after the project is created

Status in OpenStack Dashboard (Horizon): New

Bug description:

Testing steps:
1: log in as admin
2: create a new user and assign the admin role
3: log in as this new user
4: create a new project
5: search for this new project; the search returns an empty result

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1355601/+subscriptions