[Yahoo-eng-team] [Bug 1607625] [NEW] gate-neutron-fwaas-dsvm-tempest is not working properly
Public bug reported:

It's failing to enable q-fwaas.

** Affects: neutron
   Importance: Undecided
   Assignee: YAMAMOTO Takashi (yamamoto)
   Status: In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607625

Title:
  gate-neutron-fwaas-dsvm-tempest is not working properly

Status in neutron:
  In Progress

Bug description:
  It's failing to enable q-fwaas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607625/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1603905] Re: V2 API: enable a user doesn't work
Reviewed: https://review.openstack.org/344057
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=4adf01b03b75af4bb1917f545dc71788fb07d5ea
Submitter: Jenkins
Branch: master

commit 4adf01b03b75af4bb1917f545dc71788fb07d5ea
Author: Dave Chen
Date: Tue Jul 19 14:53:36 2016 +0800

    Add schema for enabling a user

    The schema is added to ensure the payload is correctly assembled.
    The route of the API is defined here:
    https://github.com/openstack/keystone/blob/master/keystone/v2_crud/admin_crud.py#L134

    Partially implements: bp schema-validation-extent

    Change-Id: I79c95be3699cf915fc8542d2e770072970656261
    Closes-Bug: #1603905

** Changed in: keystone
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603905

Title:
  V2 API: enable a user doesn't work

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Enable user
  ===========
  PUT /v2.0/users/{userId}/OS-KSADM/enabled

  The above API doesn't work; there are two issues here.

  1. The API unnecessarily requires a request body:

  curl -g -i -X PUT http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404/OS-KSADM/enabled -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: e2fde9a73eb743e298e3d10aabebe5e0"

  {"error": {"message": "set_user_enabled() takes exactly 4 arguments (3 given)", "code": 400, "title": "Bad Request"}}

  2. If we pass a request body without the 'enabled' property, it
  cannot enable the disabled user:

  $ openstack user show acc163d0efa14fe5b84e1dcc62ff6404
  +--------------------+----------------------------------+
  | Field              | Value                            |
  +--------------------+----------------------------------+
  | default_project_id | e9b5b0575cad498f8fce9e39ef209411 |
  | domain_id          | default                          |
  | enabled            | False                            |
  | id                 | acc163d0efa14fe5b84e1dcc62ff6404 |
  | name               | test_user                        |
  +--------------------+----------------------------------+

  curl -g -i -X PUT http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404/OS-KSADM/enabled -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: e2fde9a73eb743e298e3d10aabebe5e0" -d '{"user": {"name": "test_user"}}'

  {"user": {"username": "test_user", "name": "test_user", "extra": {}, "enabled": false, "id": "acc163d0efa14fe5b84e1dcc62ff6404", "tenantId": "e9b5b0575cad498f8fce9e39ef209411"}}

  Nothing is changed; the user is still disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1603905/+subscriptions
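The committed fix adds a request-body schema. As a rough illustration (the function name and exact schema shape below are assumptions inferred from the report, not copied from the patch), the check amounts to requiring a 'user' object that carries a boolean 'enabled' property:

```python
def validate_enable_user(body):
    """Sketch of the request-body check the fix introduces.

    Hypothetical: the real schema lives in keystone's validation
    module; the shape below ('user' object containing a boolean
    'enabled') is inferred from the bug report.
    """
    if not isinstance(body, dict) or "user" not in body:
        raise ValueError("request body must contain a 'user' object")
    user = body["user"]
    if not isinstance(user.get("enabled"), bool):
        raise ValueError("'user' must carry a boolean 'enabled' property")
    return user["enabled"]


# A body like the failing curl above ({"user": {"name": "test_user"}})
# would now be rejected up front instead of silently doing nothing.
assert validate_enable_user({"user": {"enabled": True}}) is True
```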
[Yahoo-eng-team] [Bug 1586633] Re: Unable to disable tenant_network_types
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586633

Title:
  Unable to disable tenant_network_types

Status in neutron:
  Expired

Bug description:
  Following the Mitaka installation guide for Ubuntu, the instructions
  here [1] state to set "tenant_network_type = " to disable tenant
  networks. It appears, however, that setting the option to nothing is
  ignored and it defaults back to local. On starting the neutron
  service I get:

  2016-05-28 07:15:57.183 16734 INFO neutron.plugins.ml2.managers [-] Configured type driver names: ['flat', 'vlan']
  2016-05-28 07:15:57.188 16734 INFO neutron.plugins.ml2.drivers.type_flat [-] Allowable flat physical_network names: ['provider']
  2016-05-28 07:15:57.194 16734 INFO neutron.plugins.ml2.drivers.type_vlan [-] Network VLAN ranges: {}
  2016-05-28 07:15:57.195 16734 INFO neutron.plugins.ml2.managers [-] Loaded type driver names: ['flat', 'vlan']
  2016-05-28 07:15:57.195 16734 INFO neutron.plugins.ml2.managers [-] Registered types: ['flat', 'vlan']
  2016-05-28 07:15:57.196 16734 ERROR neutron.plugins.ml2.managers [-] No type driver for tenant network_type: local. Service terminated!

  So it is still attempting to use "local" even when overridden in the
  ml2_conf.ini file. I have not tried the latest from master to see if
  this behavior has changed.

  [1] http://docs.openstack.org/mitaka/install-guide-ubuntu/neutron-controller-install-option1.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586633/+subscriptions
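For reference, the configuration the report describes boils down to an ml2_conf.ini fragment along these lines (a sketch assembled from the log output and the bug title; the option name with the trailing 's', tenant_network_types, is the one the ML2 plugin actually reads):

```ini
[ml2]
type_drivers = flat,vlan
# Per the install guide, an empty value should disable tenant
# networks -- but per this report the empty value is ignored and
# the plugin falls back to the built-in default of 'local',
# which then fails because no 'local' type driver is loaded.
tenant_network_types =
```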
[Yahoo-eng-team] [Bug 1607602] [NEW] policy.json ignored for most instance actions
Public bug reported:

I'm trying to allow a certain role to do certain things to any
project's instances through policy.json, and it isn't working as
expected. I've set the following policies to allow my role to do a
"nova show", but with no luck; the same happens with any other
instance action such as start, reboot, etc.

  "compute:get": "rule:default_or_monitoring",
  "compute:get_all": "rule:default_or_monitoring",
  "compute:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:detail:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:index:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:detail": "rule:default_or_monitoring",
  "os_compute_api:servers:index": "rule:default_or_monitoring",
  "os_compute_api:servers:show": "rule:default_or_monitoring",

Looking at the code, I see that in the DB layer the instance_get
function is hard-coded to filter by project if the context isn't
admin; see HEAD (as of writing):
https://github.com/openstack/nova/blob/d0905df10a48212950c0854597a2df923e6ddd0c/nova/db/sqlalchemy/api.py#L1885

If I remove this project=True flag then everything works as expected.
The Nova API otherwise just returns a 404.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607602

Title:
  policy.json ignored for most instance actions

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm trying to allow a certain role to do certain things to any
  project's instances through policy.json, and it isn't working as
  expected. I've set the following policies to allow my role to do a
  "nova show", but with no luck; the same happens with any other
  instance action such as start, reboot, etc.

  "compute:get": "rule:default_or_monitoring",
  "compute:get_all": "rule:default_or_monitoring",
  "compute:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:detail:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:index:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:detail": "rule:default_or_monitoring",
  "os_compute_api:servers:index": "rule:default_or_monitoring",
  "os_compute_api:servers:show": "rule:default_or_monitoring",

  Looking at the code, I see that in the DB layer the instance_get
  function is hard-coded to filter by project if the context isn't
  admin; see HEAD (as of writing):
  https://github.com/openstack/nova/blob/d0905df10a48212950c0854597a2df923e6ddd0c/nova/db/sqlalchemy/api.py#L1885

  If I remove this project=True flag then everything works as expected.
  The Nova API otherwise just returns a 404.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607602/+subscriptions
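The effect the reporter describes can be sketched in a few lines. This is a toy model, not nova's real DB layer: 'context' and 'instances' are stand-in dicts, and the point is only that a non-admin lookup is scoped to the caller's project regardless of what policy.json allows, so a cross-project GET surfaces as "not found" (HTTP 404):

```python
def instance_get(context, uuid, instances):
    """Hypothetical sketch of the hard-coded project scoping.

    Mirrors the behavior described in the report: policy checks run
    at the API layer, but the DB query itself silently adds a
    project_id filter for non-admin contexts.
    """
    for inst in instances:
        if inst["uuid"] != uuid:
            continue
        if context["is_admin"] or inst["project_id"] == context["project_id"]:
            return inst
    # The instance exists, but the filter hid it -> looks like a 404.
    raise LookupError("instance %s not found" % uuid)


instances = [{"uuid": "i-1", "project_id": "p-1"}]
admin = {"is_admin": True, "project_id": "p-2"}
monitor = {"is_admin": False, "project_id": "p-2"}

instance_get(admin, "i-1", instances)    # succeeds
# instance_get(monitor, "i-1", instances)  # raises LookupError (the 404)
```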
[Yahoo-eng-team] [Bug 1607119] Re: TOTP auth not functional in python3
Reviewed: https://review.openstack.org/348081
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=b2cb4c403f94fdf61100d43b59dedec1547c7364
Submitter: Jenkins
Branch: master

commit b2cb4c403f94fdf61100d43b59dedec1547c7364
Author: adriant
Date: Thu Jul 28 11:24:58 2016 +1200

    TOTP auth not functional in python3

    Fixing a byte>str conversion bug in the TOTP passcode generation
    function that was only present in python3 and rendered TOTP auth
    non-functional there. Also adding a test to check that passcode
    generation returns the correct format.

    Closes-Bug: #1607119
    Change-Id: Ie052d559c4eb2577d35caa9f6e240664cf4cf399

** Changed in: keystone
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607119

Title:
  TOTP auth not functional in python3

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Because of how python3 handles byte>str conversion, the passcode
  generation function produces a mangled result in python3. The reason
  the unit tests still pass in python3 is that the tests use the same
  function, so the server and the tests are both sending and expecting
  the same mangled passcode. This means that anyone correctly
  generating the passcode and attempting to authenticate via TOTP
  would fail, because the server is expecting a mangled passcode.

  The fix is to not use six.text_type, as it does the wrong thing, and
  instead use .decode('utf-8'), which produces the correct result in
  both python2 and python3.
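The mangling and the fix can be demonstrated in a few lines of Python 3 without six (under Python 3, six.text_type is simply an alias for str, so calling it on bytes embeds the bytes repr instead of decoding):

```python
passcode = b'123456'

# What six.text_type(passcode) does on Python 3: str() on bytes
# returns the repr, complete with the b'' wrapper.
mangled = str(passcode)
assert mangled == "b'123456'"

# The fix: decode explicitly, which works the same on py2 and py3.
correct = passcode.decode('utf-8')
assert correct == '123456'
```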
  Example of why and how this happens:

  Python2:
  >>> passcode = b'123456'
  >>> print passcode
  123456
  >>> type(passcode)
  <type 'str'>
  >>> import six
  >>> six.text_type(passcode)
  u'123456'
  >>> type(six.text_type(passcode))
  <type 'unicode'>
  >>> otherstring = "openstack"
  >>> otherstring + passcode
  'openstack123456'
  >>> passcode.decode('utf-8')
  u'123456'
  >>> type(passcode.decode('utf-8'))
  <type 'unicode'>

  Python3:
  >>> passcode = b'123456'
  >>> print(passcode)
  b'123456'
  >>> type(passcode)
  <class 'bytes'>
  >>> import six
  >>> six.text_type(passcode)
  "b'123456'"
  >>> type(six.text_type(passcode))
  <class 'str'>
  >>> otherstring = "openstack"
  >>> otherstring + passcode
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: Can't convert 'bytes' object to str implicitly
  >>> otherstring + str(passcode)
  "openstackb'123456'"
  >>> passcode.decode('utf-8')
  '123456'
  >>> type(passcode.decode('utf-8'))
  <class 'str'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607119/+subscriptions
[Yahoo-eng-team] [Bug 1607219] Re: revert-resize doesn't drop new pci devices
** Changed in: nova
   Status: Invalid => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607219

Title:
  revert-resize doesn't drop new pci devices

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This commit, https://review.openstack.org/#/c/307124/, fixes resize
  for pci devices, but in drop_move_claim it always takes the old pci
  device for the migration context. It should get the pci device
  according to prefix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607219/+subscriptions
[Yahoo-eng-team] [Bug 1607585] [NEW] neutron server reports an error when getting load balancer status
Public bug reported:

2014-01-29 06:43:31.602 11340 ERROR neutron.api.v2.resource [req-d7fb09f3-f5b0-433f-8142-70c2515b78bc ] statuses failed
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource Traceback (most recent call last):
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in resource
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 207, in _handle_action
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     return getattr(self._plugin, name)(*arg_list, **kwargs)
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py", line 1010, in statuses
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     self._set_degraded(self, listener_status, lb_status)
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py", line 1044, in _set_degraded
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     obj["operating_status"] = lb_const.DEGRADED
2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource TypeError: 'LoadBalancerPluginv2' object does not support item assignment

** Affects: neutron
   Importance: Undecided
   Assignee: dongjuan (dong-juan1)
   Status: New

** Tags: lbaas

** Changed in: neutron
   Assignee: (unassigned) => dongjuan (dong-juan1)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607585

Title:
  neutron server reports an error when getting load balancer status

Status in neutron:
  New

Bug description:
  2014-01-29 06:43:31.602 11340 ERROR neutron.api.v2.resource [req-d7fb09f3-f5b0-433f-8142-70c2515b78bc ] statuses failed
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in resource
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 207, in _handle_action
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     return getattr(self._plugin, name)(*arg_list, **kwargs)
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py", line 1010, in statuses
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     self._set_degraded(self, listener_status, lb_status)
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py", line 1044, in _set_degraded
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource     obj["operating_status"] = lb_const.DEGRADED
  2014-01-29 06:43:31.602 11340 TRACE neutron.api.v2.resource TypeError: 'LoadBalancerPluginv2' object does not support item assignment

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607585/+subscriptions
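The TypeError in the traceback can be reproduced with a toy class. The call `self._set_degraded(self, ...)` passes self explicitly on an already-bound method, so the plugin object itself ends up among the status objects being mutated (this sketch assumes _set_degraded accepts a variable number of status dicts, which is consistent with the item assignment failing on the plugin rather than an argument-count error being raised):

```python
DEGRADED = "DEGRADED"


class LoadBalancerPluginv2:
    """Toy stand-in for the call pattern in the traceback."""

    def _set_degraded(self, *objects):
        for obj in objects:
            # Works for dicts; blows up for anything without __setitem__.
            obj["operating_status"] = DEGRADED


plugin = LoadBalancerPluginv2()
listener_status = {}

# Correct bound-method call: self is passed implicitly.
plugin._set_degraded(listener_status)
assert listener_status["operating_status"] == DEGRADED

# Buggy call from the traceback: self passed *again*, so the plugin
# object itself lands in 'objects' and item assignment fails.
error = None
try:
    plugin._set_degraded(plugin, listener_status)
except TypeError as exc:
    error = str(exc)
assert error is not None and "does not support item assignment" in error
```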
[Yahoo-eng-team] [Bug 1607574] [NEW] get_mac_by_pci_address() called without pf_interface=True
Public bug reported:

The call to get_mac_by_pci_address() in _populate_pci_mac_address()
omits passing pf_interface=True even though pci_dev.dev_type ==
obj_fields.PciDeviceType.SRIOV_PF. This results in an incorrect
'dev_path' and, in turn, an incorrect 'if_name' inside
get_mac_by_pci_address().

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607574

Title:
  get_mac_by_pci_address() called without pf_interface=True

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The call to get_mac_by_pci_address() in _populate_pci_mac_address()
  omits passing pf_interface=True even though pci_dev.dev_type ==
  obj_fields.PciDeviceType.SRIOV_PF. This results in an incorrect
  'dev_path' and, in turn, an incorrect 'if_name' inside
  get_mac_by_pci_address().

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607574/+subscriptions
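To see why the flag matters for dev_path, here is a sketch of the sysfs path selection involved. The exact layout is an assumption for illustration (nova's real helper lives in its pci utils module): with pf_interface=False the netdev is looked up directly under the device, while pf_interface=True routes the lookup through the 'physfn' link, so the wrong flag means listing a different (or nonexistent) directory and therefore reading the wrong if_name and MAC:

```python
def sysfs_netdev_path(pci_addr, pf_interface=False):
    """Hypothetical sketch of the dev_path selection described above.

    The concrete sysfs layout here is assumed, not quoted from nova.
    """
    if pf_interface:
        # Reach the physical function's netdev via the physfn link.
        return "/sys/bus/pci/devices/%s/physfn/net" % pci_addr
    # Plain lookup: the device's own netdev directory.
    return "/sys/bus/pci/devices/%s/net" % pci_addr
```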
[Yahoo-eng-team] [Bug 1604370] Re: functional: test_legacy_router_ns_rebuild is unstable
Reviewed: https://review.openstack.org/344859
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=11dc21d3a6d765f7bcc95548b55ff13c4397c2e7
Submitter: Jenkins
Branch: master

commit 11dc21d3a6d765f7bcc95548b55ff13c4397c2e7
Author: Terry Wilson
Date: Fri Apr 22 08:55:11 2016 -0500

    Wait for vswitchd to add interfaces in native ovsdb

    ovs-vsctl, unless --no-wait is passed, will wait until ovs-vswitchd
    has reacted to a successful transaction. This patch implements the
    same logic, waiting for next_cfg to be incremented and checking
    that any added interfaces have actually been assigned ofports.

    Closes-Bug: #1604816
    Closes-Bug: #1604370
    Related-Bug: #1604115
    Change-Id: I638b82c13394f150c0bd23301285bd3375e66139

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604370

Title:
  functional: test_legacy_router_ns_rebuild is unstable

Status in neutron:
  Fix Released

Bug description:
  logstash query: http://goo.gl/xfeSlj

  13 hits in last 7 days

  Example of failure:
  http://logs.openstack.org/01/312401/8/check/gate-neutron-dsvm-functional/6aedd17/logs/dsvm-functional-logs/neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_ns_rebuild.txt.gz#_2016-07-19_10_45_28_248

  Traceback:

  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info [req-8e9eaf17-43d7-46e4-8179-84cb18fa106e - - - - -] Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "qg-85bf65ee-a2"
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info Traceback (most recent call last):
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/common/utils.py", line 239, in call
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     return func(*args, **kwargs)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/l3/router_info.py", line 1035, in process
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     self.process_external(agent)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/l3/router_info.py", line 817, in process_external
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     self._process_external_gateway(ex_gw_port, agent.pd)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/l3/router_info.py", line 695, in _process_external_gateway
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     self.external_gateway_added(ex_gw_port, interface_name)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/l3/router_info.py", line 660, in external_gateway_added
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     ex_gw_port, interface_name, self.ns_name, preserve_ips)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/l3/router_info.py", line 606, in _external_gateway_added
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     self._plug_external_gateway(ex_gw_port, interface_name, ns_name)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/l3/router_info.py", line 573, in _plug_external_gateway
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     mtu=ex_gw_port.get('mtu'))
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/linux/interface.py", line 251, in plug
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     bridge, namespace, prefix, mtu)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/linux/interface.py", line 344, in plug_new
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     ns_dev.link.set_address(mac_address)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/linux/ip_lib.py", line 496, in set_address
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     self._as_root([], ('set', self.name, 'address', mac_address))
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/linux/ip_lib.py", line 362, in _as_root
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     use_root_namespace=use_root_namespace)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/linux/ip_lib.py", line 95, in _as_root
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info     log_fail_as_error=self.log_fail_as_error)
  2016-07-19 10:45:28.248 27850 ERROR neutron.agent.l3.router_info   File "neutron/agent/linux/ip_lib.py", line 104, in _execute
  2016-07-19 10:45:28.248 27850 ERROR
[Yahoo-eng-team] [Bug 1604816] Re: native ovsdb seems to return before finishing adding/removing port
Reviewed: https://review.openstack.org/344859
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=11dc21d3a6d765f7bcc95548b55ff13c4397c2e7
Submitter: Jenkins
Branch: master

commit 11dc21d3a6d765f7bcc95548b55ff13c4397c2e7
Author: Terry Wilson
Date: Fri Apr 22 08:55:11 2016 -0500

    Wait for vswitchd to add interfaces in native ovsdb

    ovs-vsctl, unless --no-wait is passed, will wait until ovs-vswitchd
    has reacted to a successful transaction. This patch implements the
    same logic, waiting for next_cfg to be incremented and checking
    that any added interfaces have actually been assigned ofports.

    Closes-Bug: #1604816
    Closes-Bug: #1604370
    Related-Bug: #1604115
    Change-Id: I638b82c13394f150c0bd23301285bd3375e66139

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604816

Title:
  native ovsdb seems to return before finishing adding/removing port

Status in neutron:
  Fix Released

Bug description:
  When using AddPortCommand and DelPortCommand, there seems to be a
  delay between when the DB call returns and when the actual interface
  is visible for use in the system.

  In this failure, we can see an AddPortCommand complete, after which
  an ip link call is made on the tap device; the call fails because
  the device does not exist yet:

  2016-07-20 12:18:18.555 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): AddPortCommand(bridge=test-brca25db6d, port=qr-10df58c6-8a, may_exist=False) do_commit neutron/agent/ovsdb/impl_idl.py:83
  2016-07-20 12:18:18.556 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=1): DbSetCommand(table=Interface, record=qr-10df58c6-8a, col_values=(('type', 'internal'), ('external_ids', {'iface-status': 'active', 'iface-id': '10df58c6-8ad8-4f89-b3d1-36cf26ee5792', 'attached-mac': 'ca:fe:de:ad:be:ef'}))) do_commit neutron/agent/ovsdb/impl_idl.py:83
  2016-07-20 12:18:18.570 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbGetCommand(table=Interface, column=ofport, record=qr-10df58c6-8a) do_commit neutron/agent/ovsdb/impl_idl.py:83
  2016-07-20 12:18:18.570 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Transaction caused no change do_commit neutron/agent/ovsdb/impl_idl.py:111
  2016-07-20 12:18:18.591 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbGetCommand(table=Interface, column=ofport, record=qr-10df58c6-8a) do_commit neutron/agent/ovsdb/impl_idl.py:83
  2016-07-20 12:18:18.592 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Transaction caused no change do_commit neutron/agent/ovsdb/impl_idl.py:111
  2016-07-20 12:18:18.634 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbGetCommand(table=Interface, column=ofport, record=qr-10df58c6-8a) do_commit neutron/agent/ovsdb/impl_idl.py:83
  2016-07-20 12:18:18.634 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Transaction caused no change do_commit neutron/agent/ovsdb/impl_idl.py:111
  2016-07-20 12:18:18.715 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbGetCommand(table=Interface, column=ofport, record=qr-10df58c6-8a) do_commit neutron/agent/ovsdb/impl_idl.py:83
  2016-07-20 12:18:18.716 28212 DEBUG neutron.agent.ovsdb.impl_idl [-] Transaction caused no change do_commit neutron/agent/ovsdb/impl_idl.py:111
  2016-07-20 12:18:18.716 28212 DEBUG neutron.agent.linux.utils [req-34784ee2-5510-47ce-af43-fe7691c9dcda - - - - -] Running command (rootwrap daemon): ['ip', 'link', 'set', 'qr-10df58c6-8a', 'address', 'ca:fe:de:ad:be:ef'] execute_rootwrap_daemon neutron/agent/linux/utils.py:99
  2016-07-20 12:18:18.724 28212 ERROR neutron.agent.linux.utils [req-34784ee2-5510-47ce-af43-fe7691c9dcda - - - - -] Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "qr-10df58c6-8a"

  On examining the syslog, the interface shows up about 500ms later:

  Jul 20 12:18:19 ubuntu-trusty-rax-ord-2672516 kernel: [ 978.400048] device qr-10df58c6-8a entered promiscuous mode

  A similar effect is visible in the DHCP stale cleanup test, where a
  DelPortCommand is issued but the port afterwards remains visible to a
  find via /sys/:
  http://logs.openstack.org/31/344731/2/check/gate-neutron-dsvm-functional/98c6b55/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_dhcp.TestDhcp.test_cleanup_stale_devices.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604816/+subscriptions
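The "check that added interfaces have actually been assigned ofports" part of the fix amounts to a poll loop. This is a simplified sketch, not neutron's code: 'get_ofport' stands in for a DbGetCommand lookup, and ovsdb reports no value (or -1) until ovs-vswitchd has really plumbed the port:

```python
import time


def wait_for_ofport(get_ofport, timeout=5.0, interval=0.05):
    """Poll until ovs-vswitchd assigns a real ofport to the interface.

    'get_ofport' is a hypothetical zero-argument callable wrapping the
    ovsdb lookup; it returns None or a non-positive value until the
    port is usable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ofport = get_ofport()
        if ofport is not None and ofport > 0:
            return ofport
        time.sleep(interval)
    raise TimeoutError("port was added to ovsdb but never got an ofport")
```

With a loop like this, the ip link call that failed with 'Cannot find device' above would only run once the interface is really there (or fail loudly after the timeout instead of racing).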
[Yahoo-eng-team] [Bug 1607567] [NEW] domain configuration API docs missing domain config creation
Public bug reported:

The domain configuration API documentation is missing information on
how to create a domain config. The help for domain config mentions a
PUT request, but the API reference for the PUT is missing:

http://developer.openstack.org/api-ref/identity/v3/index.html?expanded=update-domain-configuration-detail#domain-configuration

I also checked the code, and there is support for creating a domain
config via a PUT request.

** Affects: keystone
   Importance: Undecided
   Assignee: Eric Brown (ericwb)
   Status: In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607567

Title:
  domain configuration API docs missing domain config creation

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  The domain configuration API documentation is missing information on
  how to create a domain config. The help for domain config mentions a
  PUT request, but the API reference for the PUT is missing:

  http://developer.openstack.org/api-ref/identity/v3/index.html?expanded=update-domain-configuration-detail#domain-configuration

  I also checked the code, and there is support for creating a domain
  config via a PUT request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607567/+subscriptions
[Yahoo-eng-team] [Bug 1607553] [NEW] Revocation event caching is broken
Public bug reported:

It seems the caching of revocation events is broken. I have a devstack
stood up with fernet tokens enabled. If I run
tempest.api.identity.admin.v3.test_tokens.TokensV3TestJSON.test_tokens
with revocation event caching and fernet enabled, the test fails
consistently [0]. This is just one example of a failing test; several
of the tests in the tempest.api.identity suite fail with the same
symptoms.

If I disable revocation event caching in keystone.conf
(CONF.revoke.caching=False), the test passes consistently.

[0] http://cdn.pasteraw.com/gx49zwqv5stjmixqbcgoscexcn28w32

** Affects: keystone
   Importance: Undecided
   Status: New

** Tags: fernet revoke

** Tags added: revoke
** Tags added: fernet

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607553

Title:
  Revocation event caching is broken

Status in OpenStack Identity (keystone):
  New

Bug description:
  It seems the caching of revocation events is broken. I have a
  devstack stood up with fernet tokens enabled. If I run
  tempest.api.identity.admin.v3.test_tokens.TokensV3TestJSON.test_tokens
  with revocation event caching and fernet enabled, the test fails
  consistently [0]. This is just one example of a failing test; several
  of the tests in the tempest.api.identity suite fail with the same
  symptoms.

  If I disable revocation event caching in keystone.conf
  (CONF.revoke.caching=False), the test passes consistently.

  [0] http://cdn.pasteraw.com/gx49zwqv5stjmixqbcgoscexcn28w32

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607553/+subscriptions
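The workaround the reporter describes is a one-line keystone.conf change (CONF.revoke.caching maps to the [revoke] section; the stated default of True is an assumption based on the report, since caching had to be explicitly disabled):

```ini
[revoke]
# Work around the tempest failures described above by disabling
# revocation event caching.
caching = False
```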
[Yahoo-eng-team] [Bug 1607534] [NEW] Subnets not visible for admin tenant external network for non-admin user
Public bug reported:

An external network's subnet is not visible to non-admin users. The
external network is created under the "admin" tenant.

As an admin user:

  # neutron net-list
  | 6190b806-a05f-4394-ac2e-b646d307b8a7 | external | 80a88580-9986-47a3-bd3f-68e43b7edf1c 169.54.91.160/27 |

However, as a project member user:

  # neutron net-list
  | 6190b806-a05f-4394-ac2e-b646d307b8a7 | external | 80a88580-9986-47a3-bd3f-68e43b7edf1c |

This breaks scripts that rely on subnet information from the net-list
output. Is this behavior intended?

Also, as per this change,
https://github.com/openstack/neutron/pull/36/commits/391c2327c3b9de0e2b9875bab8d6f6909fa0983a
only external "shared" network subnets are visible. However, if an
external network is made "shared", users can launch VMs directly on
the external network (rather than just getting floating IPs and
gateway IPs for routers).

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607534

Title:
  Subnets not visible for admin tenant external network for non-admin
  user

Status in neutron:
  New

Bug description:
  An external network's subnet is not visible to non-admin users. The
  external network is created under the "admin" tenant.

  As an admin user:

  # neutron net-list
  | 6190b806-a05f-4394-ac2e-b646d307b8a7 | external | 80a88580-9986-47a3-bd3f-68e43b7edf1c 169.54.91.160/27 |

  However, as a project member user:

  # neutron net-list
  | 6190b806-a05f-4394-ac2e-b646d307b8a7 | external | 80a88580-9986-47a3-bd3f-68e43b7edf1c |

  This breaks scripts that rely on subnet information from the net-list
  output. Is this behavior intended?

  Also, as per this change,
  https://github.com/openstack/neutron/pull/36/commits/391c2327c3b9de0e2b9875bab8d6f6909fa0983a
  only external "shared" network subnets are visible. However, if an
  external network is made "shared", users can launch VMs directly on
  the external network (rather than just getting floating IPs and
  gateway IPs for routers).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607534/+subscriptions
[Yahoo-eng-team] [Bug 1597898] Re: OVN L3 service plugin does not need agent RPC
** Also affects: neutron Importance: Undecided Status: New ** Changed in: neutron Assignee: (unassigned) => Richard Theis (rtheis) ** Changed in: neutron Status: New => In Progress ** Changed in: networking-ovn Status: Confirmed => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1597898 Title: OVN L3 service plugin does not need agent RPC Status in networking-ovn: In Progress Status in neutron: In Progress Bug description: The OVN L3 service plugin (OVNL3RouterPlugin) inherits from the neutron ExtraRoute_db_mixin class which eventually inherits from the neutron L3RpcNotifierMixin class. The L3RpcNotifierMixin inheritance isn't necessary and adds unneeded RPC overhead, since the OVN L3 service plugin doesn't use an L3 agent. To manage notifications about this bug go to: https://bugs.launchpad.net/networking-ovn/+bug/1597898/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606995] Re: Nova fails to provision machine but can pull existing machines
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1606995 Title: Nova fails to provision machine but can pull existing machines Status in OpenStack Compute (nova): Invalid Bug description: After switching from Keystone V2.0 to Keystone V3 we can no longer provision machines, we can still see existing machines in Horizon and log in Horizon. Nova config for Keystone: [keystone_authtoken] # # From keystonemiddleware.auth_token # # Complete public Identity API endpoint. (string value) #auth_uri = auth_uri = http://192.168.0.2:5000/ # API version of the admin Identity API endpoint. (string value) #auth_version = # Do not handle authorization requests within the middleware, but delegate the # authorization decision to downstream WSGI components. (boolean value) #delay_auth_decision = false # Request timeout value for communicating with Identity API server. (integer # value) #http_connect_timeout = # How many times are we trying to reconnect when communicating with Identity # API Server. (integer value) #http_request_max_retries = 3 # Env key for the swift cache. (string value) #cache = # Required if identity server requires client certificate (string value) #certfile = # Required if identity server requires client certificate (string value) #keyfile = # A PEM encoded Certificate Authority to use when verifying HTTPs connections. # Defaults to system CAs. (string value) #cafile = # Verify HTTPS connections. (boolean value) #insecure = false # The region in which the identity server can be found. (string value) #region_name = # Directory used to cache files related to PKI tokens. (string value) #signing_dir = signing_dir = /tmp/keystone-signing-nova # Optionally specify a list of memcached server(s) to use for caching. If left # undefined, tokens will instead be cached in-process. 
(list value) # Deprecated group/name - [DEFAULT]/memcache_servers #memcached_servers = # In order to prevent excessive effort spent validating tokens, the middleware # caches previously-seen tokens for a configurable duration (in seconds). Set # to -1 to disable caching completely. (integer value) #token_cache_time = 300 # Determines the frequency at which the list of revoked tokens is retrieved # from the Identity service (in seconds). A high number of revocation events # combined with a low cache duration may significantly reduce performance. # (integer value) #revocation_cache_time = 10 # (Optional) If defined, indicate whether token data should be authenticated or # authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, # token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data # is encrypted and authenticated in the cache. If the value is not one of these # options or empty, auth_token will raise an exception on initialization. # (string value) #memcache_security_strategy = # (Optional, mandatory if memcache_security_strategy is defined) This string is # used for key derivation. (string value) #memcache_secret_key = # (Optional) Number of seconds memcached server is considered dead before it is # tried again. (integer value) #memcache_pool_dead_retry = 300 # (Optional) Maximum total number of open connections to every memcached # server. (integer value) #memcache_pool_maxsize = 10 # (Optional) Socket timeout in seconds for communicating with a memcached # server. (integer value) #memcache_pool_socket_timeout = 3 # (Optional) Number of seconds a connection to memcached is held unused in the # pool before it is closed. (integer value) #memcache_pool_unused_timeout = 60 # (Optional) Number of seconds that an operation will wait to get a memcached # client connection from the pool. (integer value) #memcache_pool_conn_get_timeout = 10 # (Optional) Use the advanced (eventlet safe) memcached client pool. 
The # advanced pool will only work under python 2.x. (boolean value) #memcache_use_advanced_pool = false # (Optional) Indicate whether to set the X-Service-Catalog header. If False, # middleware will not ask for service catalog on token validation and will not # set the X-Service-Catalog header. (boolean value) #include_service_catalog = true # Used to control the use and type of token binding. Can be set to: "disabled" # to not check token binding. "permissive" (default) to validate binding # information if the bind type is of a form known to the server and ignore it # if not. "strict" like "permissive" but if the bind type is unknown the token # will be rejected. "required" any form of token binding is needed to be # allowed. Finally the name of a binding method that must be present in tokens.
[Yahoo-eng-team] [Bug 1607227] Re: [RFE] Enhancement to iptables driver for FWaaS v2
This sounds to me like a stepping stone for [1], and as such you can target that blueprint directly and we can take the discussion over review. [1] https://blueprints.launchpad.net/neutron/+spec/fwaas-api-2.0 ** Changed in: neutron Status: In Progress => Invalid ** Changed in: neutron Assignee: chandan dutta chowdhury (chandanc) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1607227 Title: [RFE] Enhancement to iptables driver for FWaaS v2 Status in neutron: Invalid Bug description: The Iptables manager and firewall driver in Neutron must be enhanced for co-existence of the SecurityGroup and FWaaS v2 APIs. This patch refactors the IPTables driver to enable the FWaaS and SG chains to be interleaved while preserving the ordering of rules. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1607227/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1607461] [NEW] nova-compute hangs while executing a blocking call to librbd
Public bug reported: While executing a call to librbd nova-compute may hang for a while and eventually go down in nova service-list output. strace'ing shows that a process is stuck on acquiring a mutex: root@node-153:~# strace -p 16675 Process 16675 attached futex(0x7fff084ce36c, FUTEX_WAIT_PRIVATE, 1, NULL gdb allows us to see the traceback: http://paste.openstack.org/show/542534/ ^ which basically means calls to librbd (C library) are not monkey-patched and do not allow switching the execution context to another green thread in an eventlet-based process. To avoid blocking of the whole nova-compute process on calls to librbd we should wrap them with tpool.execute() (http://eventlet.net/doc/threading.html#eventlet.tpool.execute) ** Affects: nova Importance: Undecided Assignee: Roman Podoliaka (rpodolyaka) Status: New ** Tags: ceph compute ** Changed in: nova Assignee: (unassigned) => Roman Podoliaka (rpodolyaka) ** Tags added: ceph compute -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1607461 Title: nova-compute hangs while executing a blocking call to librbd Status in OpenStack Compute (nova): New Bug description: While executing a call to librbd nova-compute may hang for a while and eventually go down in nova service-list output. strace'ing shows that a process is stuck on acquiring a mutex: root@node-153:~# strace -p 16675 Process 16675 attached futex(0x7fff084ce36c, FUTEX_WAIT_PRIVATE, 1, NULL gdb allows us to see the traceback: http://paste.openstack.org/show/542534/ ^ which basically means calls to librbd (C library) are not monkey-patched and do not allow switching the execution context to another green thread in an eventlet-based process. 
To avoid blocking of the whole nova-compute process on calls to librbd we should wrap them with tpool.execute() (http://eventlet.net/doc/threading.html#eventlet.tpool.execute) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1607461/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
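The proposed fix wraps librbd calls with eventlet's tpool.execute(), which runs the blocking C call in a native OS thread so green threads keep running. The same idea can be illustrated with the standard library alone; the names below (blocking_rbd_call, rbd_call_offloaded) are stand-ins for illustration, not Nova's actual code:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Stand-in for a blocking librbd (C extension) call that cannot be
# monkey-patched and would therefore freeze every green thread in an
# eventlet-based process if called directly.
def blocking_rbd_call(image_name):
    time.sleep(0.01)  # simulates a C-level block on a mutex/IO
    return "size-of-%s" % image_name

_pool = ThreadPoolExecutor(max_workers=4)

def rbd_call_offloaded(image_name):
    # Run the blocking call in a native OS thread, analogous to
    # eventlet.tpool.execute(blocking_rbd_call, image_name); the main
    # loop (eventlet hub, in Nova's case) keeps scheduling other work.
    return _pool.submit(blocking_rbd_call, image_name).result()

print(rbd_call_offloaded("vm-disk"))  # -> size-of-vm-disk
```

In Nova itself the call sites stay synchronous; only the executor changes, which is why tpool.execute() is a low-risk wrapper for this class of hang.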
[Yahoo-eng-team] [Bug 1598100] Re: Adding FDB population agent extension
Reviewed: https://review.openstack.org/345384 Committed: https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=81507dc5607cd303c4f3214289fe1f2e7c4b Submitter: Jenkins Branch:master commit 81507dc5607cd303c4f3214289fe1f2e7c4b Author: Edan DavidDate: Thu Jul 21 12:22:00 2016 + Add 'FDB L2 extension' section sriov config Change-Id: Ic7bd0110585d759d4e83fa2c73ca9d4939752663 Closes-Bug: #1598100 ** Changed in: openstack-manuals Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1598100 Title: Adding FDB population agent extension Status in neutron: Invalid Status in openstack-manuals: Fix Released Bug description: https://review.openstack.org/320562 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit 2c8f61b816bf531a17a7b45d35a5388e8a2f607a Author: Edan David Date: Tue May 24 11:54:02 2016 -0400 Adding FDB population agent extension The purpose of this extension is updating the FDB table upon changes of normal port instances thus enabling communication between direct port SR-IOV instances and normal port instances. Additionally enabling communication to direct port instances with floating ips. Support for OVS agent and linux bridge. 
DocImpact Change-Id: I61a8aacb1b21b2a6e452389633d7dcccf9964fea Closes-Bug: #1492228 Closes-Bug: #1527991 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1598100/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1605282] Re: Transaction rolled back while creating HA router
** Changed in: neutron Status: New => Opinion ** Changed in: neutron Status: Opinion => Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1605282 Title: Transaction rolled back while creating HA router Status in neutron: Confirmed Bug description: The stacktrace can be found here: http://paste.openstack.org/show/539052/ This was discovered while running the create_and_delete_router rally test with a high (~10) concurrency number. I encountered this on stable/mitaka so it's interesting to see if this reproduces on master. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1605282/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1603592] Re: os-attach-interface returns a 500 when neutron policy forbids port creation
Reviewed: https://review.openstack.org/345223 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ead6597274c088712b399d27faa663c67647 Submitter: Jenkins Branch:master commit ead6597274c088712b399d27faa663c67647 Author: liyingjunDate: Thu Jul 21 15:49:37 2016 +0800 network: handle forbidden exception from neutron Neutron will raise a forbidden exception when the neutron policy is not allowed to for some operation like create_port. The operation is an RPC call from nova-api to nova-compute. The Forbidden from neutron isn't handled in nova-api so we get a 500 back instead of a 403. It should be a 403 in this case. Change-Id: Iea4feaeb7ea6860e892ef57a4443e814a74b1d9e Closes-bug: #1603592 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1603592 Title: os-attach-interface returns a 500 when neutron policy forbids port creation Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) mitaka series: Confirmed Bug description: From a test on our internal CI against mitaka: root@uat-dal09-compute-316:~# nova-manage version 13.0.0 The tempest test failure: http://paste.openstack.org/show/533818/ The attach_interface operation is an RPC call from nova-api to nova- compute. In our case, neutron policy was such that port creation failed: http://paste.openstack.org/show/533819/ The Forbidden from neutron isn't handled in nova-api so we get a 500 back instead of a 403. This is somewhat related to bug 1571722 and patch https://review.openstack.org/#/c/312014/ but that's fixing a 401 and a misconfiguration issue. 
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1603592/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
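The fix pattern described in the commit message, catching the Forbidden raised by neutron and returning a 403 instead of letting it surface as a 500, can be sketched as follows. All class and function names here are illustrative stand-ins, not Nova's actual API code:

```python
class NeutronForbidden(Exception):
    """Stand-in for the Forbidden error raised by the neutron client."""

class HTTPError(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def attach_interface(create_port):
    # Translate a neutron policy denial into a 403 client error instead
    # of letting it escape unhandled (which the caller sees as a 500).
    try:
        return create_port()
    except NeutronForbidden as exc:
        raise HTTPError(403, str(exc))

def denied_create_port():
    raise NeutronForbidden("policy does not allow create_port")

try:
    attach_interface(denied_create_port)
except HTTPError as exc:
    status = exc.code

print(status)  # 403
```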
[Yahoo-eng-team] [Bug 1606573] Re: dev-docs: missing 'ova' container_format
Reviewed: https://review.openstack.org/347717 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=21c79817518295e96130352fd65859dd132e2055 Submitter: Jenkins Branch:master commit 21c79817518295e96130352fd65859dd132e2055 Author: ke-kurokiDate: Wed Jul 27 18:08:30 2016 +0900 Add 'ova' as a container_format in dev-docs Add 'ova' as a container_format in glanceapi.rst because the format is missed. Change-Id: Ifa52d6ca8a6f81fd8c07aa59c171ba47ad8f798c Closes-Bug: #1606573 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1606573 Title: dev-docs: missing 'ova' container_format Status in Glance: Fix Released Bug description: in glanceapi.rst: x-image-meta-container_format is missing 'ova'. Note: the following are correct: formats.rst api-ref To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1606573/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1607412] [NEW] gate-grenade-dsvm-neutron-multinode fails server build on newton side with "Unrecognized attribute(s) 'dns_name'"
Public bug reported: Saw this here in the gate: http://logs.openstack.org/62/324262/3/gate/gate-grenade-dsvm-neutron- multinode/9af98c8/logs/new/screen-n-cond.txt.gz?level=TRACE#_2016-07-25_10_24_50_160 2016-07-25 10:24:50.160 3162 ERROR nova.scheduler.utils [req-3b0678a2 -4d9c-462e-97a5-913dc36fab81 tempest-ServerAddressesTestJSON-491230418 tempest-ServerAddressesTestJSON-491230418] [instance: f0e2b7fc-d88a- 4e2d-9bc8-adcb81efc0dc] Error from last host: ubuntu-trusty-2-node-rax- ord-2845415-108069 (node ubuntu-trusty-2-node-rax-ord-2845415-108069): [u'Traceback (most recent call last):\n', u' File "/opt/stack/old/nova/nova/compute/manager.py", line 1926, in _do_build_and_run_instance\nfilter_properties)\n', u' File "/opt/stack/old/nova/nova/compute/manager.py", line 2116, in _build_and_run_instance\ninstance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance f0e2b7fc-d88a-4e2d-9bc8-adcb81efc0dc was re-scheduled: Unrecognized attribute(s) 'dns_name'\nNeutron server returns request_ids: ['req- d96c687a-3971-4d8d-bb42-08661632f7c8']\n"] There are only 3 hits in 7 days in logstash, but it seems like there should be more than this (we might also be backed up on logstash results): http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22RescheduledException%3A%20Build%20of%20instance%5C%22%20AND%20message%3A%5C%22was %20re- scheduled%3A%20Unrecognized%20attribute(s)%20'dns_name'%5C%22%20AND%20tags%3A%5C%22screen-n-cond.txt%5C%22=7d ** Affects: neutron Importance: Undecided Status: New ** Affects: nova Importance: Undecided Status: New ** Tags: dns neutron ** Also affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1607412 Title: gate-grenade-dsvm-neutron-multinode fails server build on newton side with "Unrecognized attribute(s) 'dns_name'" Status in neutron: New Status in OpenStack Compute (nova): New Bug description: Saw this here in the gate: http://logs.openstack.org/62/324262/3/gate/gate-grenade-dsvm-neutron- multinode/9af98c8/logs/new/screen-n-cond.txt.gz?level=TRACE#_2016-07-25_10_24_50_160 2016-07-25 10:24:50.160 3162 ERROR nova.scheduler.utils [req-3b0678a2 -4d9c-462e-97a5-913dc36fab81 tempest-ServerAddressesTestJSON-491230418 tempest-ServerAddressesTestJSON-491230418] [instance: f0e2b7fc-d88a- 4e2d-9bc8-adcb81efc0dc] Error from last host: ubuntu-trusty-2-node- rax-ord-2845415-108069 (node ubuntu-trusty-2-node-rax- ord-2845415-108069): [u'Traceback (most recent call last):\n', u' File "/opt/stack/old/nova/nova/compute/manager.py", line 1926, in _do_build_and_run_instance\nfilter_properties)\n', u' File "/opt/stack/old/nova/nova/compute/manager.py", line 2116, in _build_and_run_instance\ninstance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance f0e2b7fc-d88a-4e2d-9bc8-adcb81efc0dc was re-scheduled: Unrecognized attribute(s) 'dns_name'\nNeutron server returns request_ids: ['req- d96c687a-3971-4d8d-bb42-08661632f7c8']\n"] There are only 3 hits in 7 days in logstash, but it seems like there should be more than this (we might also be backed up on logstash results): http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22RescheduledException%3A%20Build%20of%20instance%5C%22%20AND%20message%3A%5C%22was %20re- scheduled%3A%20Unrecognized%20attribute(s)%20'dns_name'%5C%22%20AND%20tags%3A%5C%22screen-n-cond.txt%5C%22=7d To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1607412/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team 
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1607400] [NEW] UEFI not supported on SLES
Public bug reported: Launching an image with UEFI bootloader on a SLES 12 SP1 instances gives 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] Traceback (most recent call last): 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in _build_resources 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] yield resources 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] block_device_info=block_device_info) 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2777, in spawn 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] write_to_disk=True) 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4730, in _get_guest_xml 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] context) 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4579, in _get_guest_config 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] root_device_name) 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4401, in _configure_guest_by_virt_type 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] raise exception.UEFINotSupported() 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] UEFINotSupported: UEFI is not supported this is because the function probes for files that are in different locations on SLES, namely it looks for "/usr/share/OVMF/OVMF_CODE.fd" / /usr/share/AAVMF/AAVMF_CODE.fd which are the documented upstream defaults. However the SLES libvirt is compiled to default to different paths, that exist. one possibility would be to introspect domCapabilities from libvirt, which works just fine. An alternative patch is to just add the alternative paths for now. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1607400 Title: UEFI not supported on SLES Status in OpenStack Compute (nova): New Bug description: Launching an image with UEFI bootloader on a SLES 12 SP1 instances gives 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] Traceback (most recent call last): 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in _build_resources 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] yield resources 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] block_device_info=block_device_info) 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2777, in spawn 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] write_to_disk=True) 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4730, in _get_guest_xml 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] context) 2016-07-28 08:23:12.820 3224 ERROR nova.compute.manager [instance: 5289d6f7-f4f5-4f95-bd55-4812ec3ab800] File
[Yahoo-eng-team] [Bug 1607398] [NEW] DVR, Floating IPs are not working. Failed sending gratuitous ARP
Public bug reported: Hello, I'm trying to use DVR and floating IPs, but it does not work. When I'm trying to associate a floating IP with VM, I see on the compute node: 2016-07-28 14:26:31.893 125513 DEBUG neutron.agent.linux.utils [-] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'fip-4f2774d1-dfb8-4833-8374-806e1fc40827', 'arping', '-A', '-I', 'fg-86481da8-4c', '-c', '3', '-w', '4.5', '172.16.48.6'] execute_rootwrap_daemon /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:100 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.utils [-] Exit code: 2; Stdin: ; Stdout: ; Stderr: bind: Cannot assign requested address 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib [-] Failed sending gratuitous ARP to 172.16.48.6 on fg-86481da8-4c in namespace fip-4f2774d1-dfb8-4833-8374-806e1fc40827 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib Traceback (most recent call last): 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 1040, in _arping 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib ip_wrapper.netns.execute(arping_cmd, check_exit_code=True) 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 927, in execute 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib log_fail_as_error=log_fail_as_error, **kwargs) 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 140, in execute 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib raise RuntimeError(msg) 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib RuntimeError: Exit code: 2; Stdin: ; Stdout: ; Stderr: bind: Cannot assign requested address 2016-07-28 14:26:31.912 125513 ERROR neutron.agent.linux.ip_lib 2016-07-28 14:26:31.912 125513 ERROR 
neutron.agent.linux.ip_lib 2016-07-28 14:26:31.948 125513 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 67908aabc9bd446493cd22af8cccbd59 __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:302 2016-07-28 14:26:31.949 125513 DEBUG neutron.agent.linux.utils [-] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'fip-4f2774d1-dfb8-4833-8374-806e1fc40827', 'ip', '-o', 'link', 'show', 'fpr-a5e261f2-9'] execute_rootwrap_daemon /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:100 [root@node13 ~]# sysctl net.ipv4.ip_nonlocal_bind net.ipv4.ip_nonlocal_bind = 1 [root@node13 ~]# ip netns exec fip-4f2774d1-dfb8-4833-8374-806e1fc40827 sysctl net.ipv4.ip_nonlocal_bind net.ipv4.ip_nonlocal_bind = 1 Remote ping is not working: [root@node13 ~]# ping -c2 172.16.48.6 PING 172.16.48.6 (172.16.48.6) 56(84) bytes of data. ^C --- 172.16.48.6 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 999ms Ping into the namespace is working: [root@node13 ~]# ip netns fip-4f2774d1-dfb8-4833-8374-806e1fc40827 qrouter-a5e261f2-991c-497c-adcd-b1e9e1a8a001 [root@node13 ~]# ip netns exec fip-4f2774d1-dfb8-4833-8374-806e1fc40827 ping -c2 172.16.48.6 PING 172.16.48.6 (172.16.48.6) 56(84) bytes of data. 
64 bytes from 172.16.48.6: icmp_seq=1 ttl=63 time=0.290 ms 64 bytes from 172.16.48.6: icmp_seq=2 ttl=63 time=0.260 ms --- 172.16.48.6 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1000ms rtt min/avg/max/mdev = 0.260/0.275/0.290/0.015 ms Additional information: [root@node13 ~]# ip netns exec fip-4f2774d1-dfb8-4833-8374-806e1fc40827 ip addr 1: lo:mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 9: fpr-a5e261f2-9: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether ce:cd:c7:4d:b8:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 169.254.126.51/31 scope global fpr-a5e261f2-9 valid_lft forever preferred_lft forever inet6 fe80::cccd:c7ff:fe4d:b8c2/64 scope link valid_lft forever preferred_lft forever 333: fg-86481da8-4c: mtu 1450 qdisc noqueue state UNKNOWN link/ether fa:16:3e:3b:ac:ac brd ff:ff:ff:ff:ff:ff inet 172.16.48.4/22 brd 172.16.51.255 scope global fg-86481da8-4c valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe3b:acac/64 scope link valid_lft forever preferred_lft forever [root@node13 ~]# ip netns exec fip-4f2774d1-dfb8-4833-8374-806e1fc40827 ip route default via 172.16.51.254 dev fg-86481da8-4c 169.254.126.50/31 dev fpr-a5e261f2-9 proto kernel scope link src 169.254.126.51 172.16.48.0/22 dev fg-86481da8-4c proto kernel scope link src
[Yahoo-eng-team] [Bug 1607395] [NEW] Traceback in dynamic metadata driver: unexpected keyword argument 'extra_md'
Public bug reported:

Using new dynamic metadata driver fails with a traceback:

ERROR nova.api.metadata.handler [req-d4df1623-dc4a-4e9c-b129-1e5dd76c59ac None None] Failed to get metadata for IP 10.0.0.3
TRACE nova.api.metadata.handler Traceback (most recent call last):
TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/handler.py", line 134, in _handle_remote_ip_request
TRACE nova.api.metadata.handler     meta_data = self.get_metadata_by_remote_address(remote_address)
TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/handler.py", line 61, in get_metadata_by_remote_address
TRACE nova.api.metadata.handler     data = base.get_metadata_by_address(address)
TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/base.py", line 660, in get_metadata_by_address
TRACE nova.api.metadata.handler     ctxt)
TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/base.py", line 670, in get_metadata_by_instance_id
TRACE nova.api.metadata.handler     return InstanceMetadata(instance, address)
TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/base.py", line 195, in __init__
TRACE nova.api.metadata.handler     extra_md=extra_md, network_info=network_info)
TRACE nova.api.metadata.handler TypeError: __init__() got an unexpected keyword argument 'extra_md'

This is the configuration:

vendordata_providers = StaticJSON, DynamicJSON
vendordata_dynamic_targets = 'join@http://127.0.0.1:/v1/'
vendordata_driver = nova.api.metadata.vendordata_dynamic.DynamicVendorData
vendordata_dynamic_connect_timeout = 5
vendordata_dynamic_read_timeout = 30
vendordata_jsonfile_path = /etc/nova/cloud-config.json

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607395

Title:
  Traceback in dynamic metadata driver: unexpected keyword argument 'extra_md'

Status in OpenStack Compute (nova):
  New

Bug description:
  Using new dynamic metadata driver fails with a traceback:

  ERROR nova.api.metadata.handler [req-d4df1623-dc4a-4e9c-b129-1e5dd76c59ac None None] Failed to get metadata for IP 10.0.0.3
  TRACE nova.api.metadata.handler Traceback (most recent call last):
  TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/handler.py", line 134, in _handle_remote_ip_request
  TRACE nova.api.metadata.handler     meta_data = self.get_metadata_by_remote_address(remote_address)
  TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/handler.py", line 61, in get_metadata_by_remote_address
  TRACE nova.api.metadata.handler     data = base.get_metadata_by_address(address)
  TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/base.py", line 660, in get_metadata_by_address
  TRACE nova.api.metadata.handler     ctxt)
  TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/base.py", line 670, in get_metadata_by_instance_id
  TRACE nova.api.metadata.handler     return InstanceMetadata(instance, address)
  TRACE nova.api.metadata.handler   File "/home/stack/openstack/nova/nova/api/metadata/base.py", line 195, in __init__
  TRACE nova.api.metadata.handler     extra_md=extra_md, network_info=network_info)
  TRACE nova.api.metadata.handler TypeError: __init__() got an unexpected keyword argument 'extra_md'

  This is the configuration:

  vendordata_providers = StaticJSON, DynamicJSON
  vendordata_dynamic_targets = 'join@http://127.0.0.1:/v1/'
  vendordata_driver = nova.api.metadata.vendordata_dynamic.DynamicVendorData
  vendordata_dynamic_connect_timeout = 5
  vendordata_dynamic_read_timeout = 30
  vendordata_jsonfile_path = /etc/nova/cloud-config.json

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607395/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
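The traceback above is a plain keyword-argument mismatch: the caller in base.py passes extra_md and network_info, but the __init__ that actually runs does not accept them. A minimal sketch of that failure mode, with hypothetical class names (not nova's actual code):

```python
# Hypothetical reproduction of the TypeError: a stale override whose
# signature lags behind the keywords its caller now passes.
class InstanceMetadataBase:
    def __init__(self, instance, address, extra_md=None, network_info=None):
        self.extra_md = extra_md
        self.network_info = network_info

class StaleInstanceMetadata(InstanceMetadataBase):
    # Out-of-date __init__: drops the newer keyword arguments.
    def __init__(self, instance, address):
        super().__init__(instance, address)

try:
    StaleInstanceMetadata("instance", "10.0.0.3", extra_md={"k": "v"})
except TypeError as exc:
    # ... got an unexpected keyword argument 'extra_md'
    assert "extra_md" in str(exc)
```

The fix is to bring the two signatures back in sync, as the eventual nova patch for this bug did.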
[Yahoo-eng-team] [Bug 1605955] Re: router interface status stuck in BUILD
Reviewed:  https://review.openstack.org/346323
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2325e2aea86ddc28bc0e1573d4954518991cad19
Submitter: Jenkins
Branch:    master

commit 2325e2aea86ddc28bc0e1573d4954518991cad19
Author: Kevin Benton
Date:   Sat Jul 23 00:07:17 2016 -0700

    Skip DHCP provisioning block for network ports

    Network ports created via internal core plugin calls (e.g. dhcp ports
    and router interfaces) don't generate DHCP notifications to the DHCP
    agent so the agent never clears the DHCP provisioning block. This
    patch just skips adding DHCP provisioning blocks for network owned
    ports since they don't depend on DHCP anyway.

    Closes-Bug: #1590845
    Closes-Bug: #1605955
    Change-Id: I0111de79d9259ada3b1c06a087d0eaeb8f3cb158

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605955

Title:
  router interface status stuck in BUILD

Status in neutron:
  Fix Released

Bug description:
  After adding a router interface the interface status stays stuck in
  BUILD even though it works okay.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605955/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
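The logic of the committed fix can be paraphrased like this (a sketch with assumed helper names, not the actual neutron code): a DHCP provisioning block is only useful for ports that will eventually receive a DHCP notification, so network-owned ports are skipped.

```python
# Sketch of the fix's decision (assumed names, not neutron's code).
DEVICE_OWNER_NETWORK_PREFIX = "network:"

def needs_dhcp_provisioning_block(port):
    """Network-owned ports (DHCP ports, router interfaces) never trigger
    a DHCP notification, so a provisioning block on them would never be
    cleared and the port would sit in BUILD forever."""
    return not port["device_owner"].startswith(DEVICE_OWNER_NETWORK_PREFIX)

# The router interface from the bug report: no block, so it can go ACTIVE.
assert not needs_dhcp_provisioning_block(
    {"device_owner": "network:router_interface"})
# A normal VM port still gets the block until the DHCP agent reports in.
assert needs_dhcp_provisioning_block({"device_owner": "compute:nova"})
```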
[Yahoo-eng-team] [Bug 1590845] Re: Router interfaces report being in BUILD state - l3ha vrrp+LinuxBridge
Reviewed:  https://review.openstack.org/346323
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2325e2aea86ddc28bc0e1573d4954518991cad19
Submitter: Jenkins
Branch:    master

commit 2325e2aea86ddc28bc0e1573d4954518991cad19
Author: Kevin Benton
Date:   Sat Jul 23 00:07:17 2016 -0700

    Skip DHCP provisioning block for network ports

    Network ports created via internal core plugin calls (e.g. dhcp ports
    and router interfaces) don't generate DHCP notifications to the DHCP
    agent so the agent never clears the DHCP provisioning block. This
    patch just skips adding DHCP provisioning blocks for network owned
    ports since they don't depend on DHCP anyway.

    Closes-Bug: #1590845
    Closes-Bug: #1605955
    Change-Id: I0111de79d9259ada3b1c06a087d0eaeb8f3cb158

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590845

Title:
  Router interfaces report being in BUILD state - l3ha vrrp+LinuxBridge

Status in neutron:
  Fix Released

Bug description:
  I'm running a Liberty environment with two network hosts using the L3HA
  VRRP driver. I also have L2pop on and am using the ML2 LinuxBridge
  driver. When we programmatically attach subnets and/or ports to routers
  (we attach 1 interface every 60 seconds), some report back stuck in the
  BUILD state.

  Take this interface, for example:

  neutron port-show 98b55b89-a002-496f-a5d4-8de598613da8
  +-----------------------+--------------------------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                                        |
  +-----------------------+--------------------------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                                         |
  | allowed_address_pairs |                                                                                                              |
  | binding:host_id       | dn3usoskctl03_neutron_agents_container-e64e37d6                                                              |
  | binding:profile       | {}                                                                                                           |
  | binding:vif_details   | {"port_filter": true}                                                                                        |
  | binding:vif_type      | bridge                                                                                                       |
  | binding:vnic_type     | normal                                                                                                       |
  | device_id             | 5838c5de-e87a-4e5e-b61f-a3f068fa7726                                                                         |
  | device_owner          | network:router_interface                                                                                     |
  | dns_assignment        | {"hostname": "host-10-169-160-1", "ip_address": "10.169.160.1", "fqdn": "host-10-169-160-1.openstacklocal."} |
  | dns_name              |                                                                                                              |
  | extra_dhcp_opts       |                                                                                                              |
  | fixed_ips             | {"subnet_id": "bc3a8d37-6cd7-4d57-b0c9-2b35743b0a0b", "ip_address": "10.169.160.1"}                          |
  | id                    | 98b55b89-a002-496f-a5d4-8de598613da8                                                                         |
  | mac_address           | fa:16:3e:b9:7a:1d                                                                                            |
  | name                  |                                                                                                              |
  | network_id            | 535c3336-202c-4dab-b517-2232c4ce1481                                                                         |
  | security_groups       |                                                                                                              |
  | status                | BUILD                                                                                                        |
  | tenant_id             | 3ccf712795c44edcbc8ffcc331a59853                                                                             |
  +-----------------------+--------------------------------------------------------------------------------------------------------------+

  It's reporting itself in the BUILD
[Yahoo-eng-team] [Bug 1607219] Re: revert-resize doesn't drop new pci devices
Duplicate of https://bugs.launchpad.net/nova/+bug/1594230

** Changed in: nova
   Status: In Progress => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607219

Title:
  revert-resize doesn't drop new pci devices

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This commit https://review.openstack.org/#/c/307124/ fixes the resize
  for pci devices, but in drop_move_claim it always takes the old pci
  device for the migration context. It should get the pci device
  according to the prefix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607219/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1607381] [NEW] HA router in l3 dvr_snat/legacy agent has no ha_port
Public bug reported:

This is a successor to https://bugs.launchpad.net/neutron/+bug/1533441.

An HA router cannot be deleted in the L3 agent after a race between HA
router creation and deletion.

Exceptions:
1. "Unable to process HA router %s without HA port" (HA router initialization)
2. AttributeError: 'NoneType' object has no attribute 'config' (HA router deletion procedure)

http://paste.openstack.org/show/523757/
Infinite loop trace: http://paste.openstack.org/show/528407/

** Affects: neutron
   Importance: Undecided
   Assignee: LIU Yulong (dragon889)
   Status: In Progress

** Summary changed:

- HA router in l3 dvr_snat/legacy agent has not ha_port
+ HA router in l3 dvr_snat/legacy agent has no ha_port

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607381

Title:
  HA router in l3 dvr_snat/legacy agent has no ha_port

Status in neutron:
  In Progress

Bug description:
  This is a successor to https://bugs.launchpad.net/neutron/+bug/1533441.

  An HA router cannot be deleted in the L3 agent after a race between HA
  router creation and deletion.

  Exceptions:
  1. "Unable to process HA router %s without HA port" (HA router initialization)
  2. AttributeError: 'NoneType' object has no attribute 'config' (HA router deletion procedure)

  http://paste.openstack.org/show/523757/
  Infinite loop trace: http://paste.openstack.org/show/528407/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607381/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1607373] [NEW] [Master] Volume shouldn't be mandatory on Horizon while creating instance
Public bug reported:

Setup:
  2 KVM hosts
  2 ESX hosts
  1 Controller
  1 Network node
  Devstack (master)

Steps:
1. I don't have the cinder service configured.
2. Horizon by default imposes 1 GB of volume (can't set it to 0 from the UI). Attached snapshot.
3. VM launch fails, as the Cinder service catalog is empty, with the trace below.

Logs:

stack@runner:~$ openstack catalog list
+-------------+----------------+-----------------------------------------------------------------------------+
| Name        | Type           | Endpoints                                                                   |
+-------------+----------------+-----------------------------------------------------------------------------+
| nova        | compute        | RegionOne                                                                   |
|             |                |   publicURL: http://172.17.1.142:8774/v2.1                                  |
|             |                |   internalURL: http://172.17.1.142:8774/v2.1                                |
|             |                |   adminURL: http://172.17.1.142:8774/v2.1                                   |
|             |                |                                                                             |
| neutron     | network        | RegionOne                                                                   |
|             |                |   publicURL: http://172.17.1.142:9696/                                      |
|             |                |   internalURL: http://172.17.1.142:9696/                                    |
|             |                |   adminURL: http://172.17.1.142:9696/                                       |
|             |                |                                                                             |
| glance      | image          | RegionOne                                                                   |
|             |                |   publicURL: http://172.17.1.142:9292                                       |
|             |                |   internalURL: http://172.17.1.142:9292                                     |
|             |                |   adminURL: http://172.17.1.142:9292                                        |
|             |                |                                                                             |
| nova_legacy | compute_legacy | RegionOne                                                                   |
|             |                |   publicURL: http://172.17.1.142:8774/v2/73d0da4691294dfd927749eda6b12ffd   |
|             |                |   internalURL: http://172.17.1.142:8774/v2/73d0da4691294dfd927749eda6b12ffd |
|             |                |   adminURL: http://172.17.1.142:8774/v2/73d0da4691294dfd927749eda6b12ffd    |
|             |                |                                                                             |
| heat-cfn    | cloudformation | RegionOne                                                                   |
|             |                |   publicURL: http://172.17.1.142:8000/v1                                    |
|             |                |   internalURL: http://172.17.1.142:8000/v1                                  |
|             |                |   adminURL: http://172.17.1.142:8000/v1                                     |
|             |                |                                                                             |
| heat        | orchestration  | RegionOne                                                                   |
|             |                |   publicURL: http://172.17.1.142:8004/v1/73d0da4691294dfd927749eda6b12ffd   |
|             |                |   internalURL: http://172.17.1.142:8004/v1/73d0da4691294dfd927749eda6b12ffd |
|             |                |   adminURL: http://172.17.1.142:8004/v1/73d0da4691294dfd927749eda6b12ffd    |
|             |                |                                                                             |
| keystone    | identity       | RegionOne                                                                   |
|             |                |   publicURL: http://172.17.1.142/identity                                   |
|             |                |   internalURL: http://172.17.1.142/identity                                 |
|             |                |   adminURL: http://172.17.1.142/identity_v2_admin                           |
|             |                |                                                                             |
+-------------+----------------+-----------------------------------------------------------------------------+
stack@runner:~$

Logs:

2016-07-25 15:34:43.194 5679 DEBUG nova.network.neutronv2.api [req-c4d80590-f9ab-4a98-9629-f64ce1723236 admin admin] [instance: 5b6600f2-c6b4-417a-a743-82de065ea7b4] allocate_for_instance() allocate_for_instance
[Yahoo-eng-team] [Bug 1607369] [NEW] In case of PCI-PT the mac address of the port should be flushed when the VM attached to it is deleted
Public bug reported:

1. Brought up a PCI-PT setup.
2. Created a PCI-PT port (it is assigned a MAC starting with fa:).
3. Now boot a VM with the port.
4. On successful boot, the port created in step 2 gets the MAC of the compute host's NIC.
5. Now delete the VM; we see that even though the VM is deleted, the port still contains the MAC of the compute NIC.
6. If we want to boot a new VM on the same compute host, we need to either use the same port or first delete the port created in step 2 and create a new port.

The ideal scenario would be that once the VM is deleted, the MAC
associated with the port (the compute NIC's MAC) is released.

stack@hlm:~$ neutron port-list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                         |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| 6354907d-47bb-4a9f-b68a-1079d7d36a77 |      | 14:02:ec:6d:6e:98 | {"subnet_id": "f77cc897-3168-4be2-a0e3-b36597e77177", "ip_address": "7.7.7.3"}    |
| 7cd5cef5-af68-464b-9fc3-34aa6a0889a2 |      | fa:16:3e:29:52:3a | {"subnet_id": "f77cc897-3168-4be2-a0e3-b36597e77177", "ip_address": "7.7.7.2"}    |
| 8e69dfc9-1f5b-4a9b-8e25-8d941841ae0b |      | 14:02:ec:6d:6e:99 | {"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": "17.17.17.3"} |
| a88d264e-a35a-4027-b975-29631b629232 |      | fa:16:3e:34:c9:cf | {"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": "17.17.17.2"} |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+

stack@hlm:~$ nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                  |
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| 61fe0ea1-6364-469b-ae6f-a2255268f8c5 | VM   | ACTIVE | -          | Running     | n5=7.7.7.3; n6=17.17.17.3 |
+--------------------------------------+------+--------+------------+-------------+---------------------------+

stack@hlm:~$ neutron port-create n6 --vnic-type=direct-physical
Created a new port:
+-----------------------+-----------------------------------------------------------------------------------+
| Field                 | Value                                                                             |
+-----------------------+-----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                              |
| allowed_address_pairs |                                                                                   |
| binding:host_id       |                                                                                   |
| binding:profile       | {}                                                                                |
| binding:vif_details   | {}                                                                                |
| binding:vif_type      | unbound                                                                           |
| binding:vnic_type     | direct-physical                                                                   |
| created_at            | 2016-07-28T06:35:17                                                               |
| description           |                                                                                   |
| device_id             |                                                                                   |
| device_owner          |                                                                                   |
| dns_name              |                                                                                   |
| extra_dhcp_opts       |                                                                                   |
| fixed_ips             | {"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": "17.17.17.4"} |
| id                    | 1769331d-0c5c-46ff-957e-a538a84b5095                                              |
| mac_address           | fa:16:3e:57:ba:4e                                                                 |
| name                  |                                                                                   |
| network_id            | 690be87f-b60b-4a08-9a1b-b147a5b41435                                              |
| security_groups       | 439977a7-50d1-40a7-bf38-6ad493c81e1f                                              |
| status
[Yahoo-eng-team] [Bug 1607350] [NEW] floating-ip info doesn't contain information about instance if associated with nova network
Public bug reported:

[Summary]
floating ip info does not contain information about the associated
instance if nova-network is used. The behaviour changed between 11.05.16
and 21.06.16.

[Topo]
devstack all-in-one node, libvirt+qemu, nova-network

[Description and expected result]
floating ip info contains information about the associated instance, as
in previous releases.

[Reproducible or not]
reproducible

[Recreate Steps]
0) Source any credentials. The result is the same for the demo
   credentials of devstack (user=demo project=demo) and for admin
   credentials.
1) Boot an instance:
   nova boot --image cirros-0.3.4-x86_64-uec --flavor 1 ttt
2) Create a floating ip:
   nova floating-ip-create
3) Associate the floating ip:
   nova floating-ip-associate ttt 172.24.4.1
4) List instances:
   nova list
   +--------------------------------------+------+--------+------------+-------------+------------------------------+
   | ID                                   | Name | Status | Task State | Power State | Networks                     |
   +--------------------------------------+------+--------+------------+-------------+------------------------------+
   | 8ad61db0-f388-4bc7-bfbd-728782a5b505 | ttt  | ACTIVE | -          | Running     | private=10.0.0.4, 172.24.4.1 |
   +--------------------------------------+------+--------+------------+-------------+------------------------------+
5) List floating ips:
   nova floating-ip-list
   +----+------------+-----------+----------+--------+
   | Id | IP         | Server Id | Fixed IP | Pool   |
   +----+------------+-----------+----------+--------+
   | 1  | 172.24.4.1 | -         | -        | public |
   +----+------------+-----------+----------+--------+

[Root cause analysis / debug info]
- the database contains information about the floating ip, and the record has a correct id of the fixed ip
- the database contains information about the fixed ip, and the record has a correct instance uuid

nova's 'os-floating-ips' REST API calls
network_api.get_floating_ips_by_project, which calls
objects.FloatingIPList.get_by_project. It retrieves floating ips from the
DB and calls obj_base.obj_make_list for each record. obj_make_list calls
_from_db_object of the passed type and creates a FloatingIP object.
_from_db_object takes 'fixed_ip' as an expected attribute, but only
FloatingIP.get_by_id passes it.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607350

Title:
  floating-ip info doesn't contain information about instance if
  associated with nova network

Status in OpenStack Compute (nova):
  New

Bug description:
  floating ip info does not contain information about the associated
  instance if nova-network is used; see the summary, reproduction steps
  and root-cause analysis above. _from_db_object takes 'fixed_ip' as an
  expected attribute, but only FloatingIP.get_by_id passes it.
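The root cause described above can be sketched like this (function and field names are illustrative, not nova's actual object code): the list path builds FloatingIP objects without the 'fixed_ip' expected attribute, so the instance association is never loaded.

```python
# Hypothetical sketch of the expected_attrs problem (not nova's code).
def floating_ip_from_db(db_row, expected_attrs=()):
    obj = {"ip": db_row["ip"], "pool": db_row["pool"]}
    if "fixed_ip" in expected_attrs:
        # fixed_ip carries the instance uuid the CLI shows as "Server Id"
        obj["fixed_ip"] = db_row["fixed_ip"]
    return obj

row = {"ip": "172.24.4.1", "pool": "public",
       "fixed_ip": {"instance_uuid": "8ad61db0-f388-4bc7-bfbd-728782a5b505"}}

# get_by_id passes expected_attrs, so the detail view is complete ...
assert "fixed_ip" in floating_ip_from_db(row, expected_attrs=("fixed_ip",))
# ... but the get_by_project/list path does not, hence the "-" columns.
assert "fixed_ip" not in floating_ip_from_db(row)
```

Under this reading, the fix is to pass the same expected attributes on the list path as on the by-id path.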
[Yahoo-eng-team] [Bug 1607345] Re: Collect all logs needed to debug curtin/cloud-init for each deployment
MAAS stores install.log. MAAS doesn't store cloud-init logs because
cloud-init doesn't send them. However, there are rsyslog entries that
carry the cloud-init logs, although I think it may not be logging
correctly (/var/log/maas/rsyslog/).

** Changed in: maas
   Importance: Undecided => Wishlist

** Changed in: maas
   Milestone: None => next

** Changed in: maas
   Status: New => Triaged

** Also affects: cloud-init
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1607345

Title:
  Collect all logs needed to debug curtin/cloud-init for each deployment

Status in cloud-init:
  New
Status in MAAS:
  Triaged

Bug description:
  According to https://bugs.launchpad.net/maas/+bug/1604962/comments/12,
  these logs are needed to debug curtin/cloud-init issues but aren't
  collected automatically by MAAS:

  - /var/log/cloud-init*
  - /run/cloud-init*
  - /var/log/cloud
  - /tmp/install.log

  We need these to be automatically collected by MAAS so we can
  automatically collect them as artifacts in the case of failures in OIL.
  curtin/cloud-init issues can be race conditions that are difficult to
  reproduce manually, so we need to grab the logs required to debug the
  first time it happens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1607345/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1607317] Re: metadata def namespace update CLI is not working as expected for parameter "protected"
Not only this command but also others have this problem; you can try it
with "glance image-update --protected". The reason is that the glance
client parses this kind of parameter to "False" when it's not a valid
boolean.

** Project changed: glance => python-glanceclient

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1607317

Title:
  metadata def namespace update CLI is not working as expected for
  parameter "protected"

Status in python-glanceclient:
  New

Bug description:
  In the v2 glance metadata def namespace update API, it is observed that
  when we update the protected parameter with different invalid values,
  it updates this parameter inappropriately. The following CLI output
  shows this behavior.

  # Create a v2 metadata def namespace using admin credentials:

  stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-create test-ns
  +------------+----------------------------------+
  | Property   | Value                            |
  +------------+----------------------------------+
  | created_at | 2016-07-28T10:51:25Z             |
  | namespace  | test-ns                          |
  | owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
  | protected  | False                            |
  | schema     | /v2/schemas/metadefs/namespace   |
  | updated_at | 2016-07-28T10:51:25Z             |
  | visibility | private                          |
  +------------+----------------------------------+
  stack@ubuntu-VirtualBox:/opt/stack/tempest$

  Now look at the different invalid protected parameters and the behavior:

  stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --debug --os-image-api-version 2 md-namespace-update test-ns --protected True
  +------------+----------------------------------+
  | Property   | Value                            |
  +------------+----------------------------------+
  | created_at | 2016-07-28T10:51:25Z             |
  | namespace  | test-ns                          |
  | owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
  | protected  | True                             |
  | schema     | /v2/schemas/metadefs/namespace   |
  | updated_at | 2016-07-28T10:51:32Z             |
  | visibility | private                          |
  +------------+----------------------------------+
  stack@ubuntu-VirtualBox:/opt/stack/tempest$
  +------------+----------------------------------+
  | Property   | Value                            |
  +------------+----------------------------------+
  | created_at | 2016-07-28T10:51:25Z             |
  | namespace  | test-ns                          |
  | owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
  | protected  | True                             |
  | schema     | /v2/schemas/metadefs/namespace   |
  | updated_at | 2016-07-28T10:51:32Z             |
  | visibility | private                          |
  +------------+----------------------------------+
  stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected True
  +------------+----------------------------------+
  | Property   | Value                            |
  +------------+----------------------------------+
  | created_at | 2016-07-28T10:51:25Z             |
  | namespace  | test-ns                          |
  | owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
  | protected  | True                             |
  | schema     | /v2/schemas/metadefs/namespace   |
  | updated_at | 2016-07-28T10:51:32Z             |
  | visibility | private                          |
  +------------+----------------------------------+
  stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected True
  +------------+----------------------------------+
  | Property   | Value                            |
  +------------+----------------------------------+
  | created_at | 2016-07-28T10:51:25Z             |
  | namespace  | test-ns                          |
  | owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
  | protected  | True                             |
  | schema     | /v2/schemas/metadefs/namespace   |
  | updated_at | 2016-07-28T10:51:32Z             |
  | visibility | private                          |
  +------------+----------------------------------+
  stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected False
  +------------+----------------------------------+
  | Property   | Value                            |
  +------------+----------------------------------+
  | created_at | 2016-07-28T10:51:25Z             |
  | namespace  | test-ns                          |
  | owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
  | protected  | False                            |
  | schema     | /v2/schemas/metadefs/namespace   |
  | updated_at | 2016-07-28T10:53:04Z             |
  | visibility | private                          |
  +------------+----------------------------------+
  stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected "True"
  +------------+----------------------------------+
  | Property   | Value                            |
  +------------+----------------------------------+
  | created_at | 2016-07-28T10:51:25Z             |
  | namespace  | test-ns                          |
  | owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
  | protected  | True                             |
  | schema     | /v2/schemas/metadefs/namespace
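If the client indeed coerces every unrecognized string to False, a stricter parser would reject such values instead of silently flipping the flag. A sketch of that approach (illustrative only, not glanceclient's actual code):

```python
# Strict boolean parsing sketch: unknown strings raise instead of
# silently becoming False (illustrative, not the glanceclient code).
def parse_bool(value):
    mapping = {"true": True, "1": True, "false": False, "0": False}
    try:
        return mapping[value.strip().lower()]
    except KeyError:
        raise ValueError("invalid boolean value: %r" % (value,))

assert parse_bool("True") is True
assert parse_bool("false") is False

# "True_1" from the report would be rejected instead of setting
# protected back to False behind the user's back.
try:
    parse_bool("True_1")
except ValueError:
    pass
```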
[Yahoo-eng-team] [Bug 1607317] [NEW] metadata def namespace update CLI is not working as expected for parameter "protected"
Public bug reported:

In the v2 glance metadata def namespace update API, it is observed that
when we update the protected parameter with different invalid values, it
updates this parameter inappropriately. The following CLI output shows
this behavior.

# Create a v2 metadata def namespace using admin credentials:

stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-create test-ns
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| created_at | 2016-07-28T10:51:25Z             |
| namespace  | test-ns                          |
| owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
| protected  | False                            |
| schema     | /v2/schemas/metadefs/namespace   |
| updated_at | 2016-07-28T10:51:25Z             |
| visibility | private                          |
+------------+----------------------------------+
stack@ubuntu-VirtualBox:/opt/stack/tempest$

Now look at the different invalid protected parameters and the behavior:

stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --debug --os-image-api-version 2 md-namespace-update test-ns --protected True
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| created_at | 2016-07-28T10:51:25Z             |
| namespace  | test-ns                          |
| owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
| protected  | True                             |
| schema     | /v2/schemas/metadefs/namespace   |
| updated_at | 2016-07-28T10:51:32Z             |
| visibility | private                          |
+------------+----------------------------------+
stack@ubuntu-VirtualBox:/opt/stack/tempest$
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| created_at | 2016-07-28T10:51:25Z             |
| namespace  | test-ns                          |
| owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
| protected  | True                             |
| schema     | /v2/schemas/metadefs/namespace   |
| updated_at | 2016-07-28T10:51:32Z             |
| visibility | private                          |
+------------+----------------------------------+
stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected True
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| created_at | 2016-07-28T10:51:25Z             |
| namespace  | test-ns                          |
| owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
| protected  | True                             |
| schema     | /v2/schemas/metadefs/namespace   |
| updated_at | 2016-07-28T10:51:32Z             |
| visibility | private                          |
+------------+----------------------------------+
stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected False
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| created_at | 2016-07-28T10:51:25Z             |
| namespace  | test-ns                          |
| owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
| protected  | False                            |
| schema     | /v2/schemas/metadefs/namespace   |
| updated_at | 2016-07-28T10:53:04Z             |
| visibility | private                          |
+------------+----------------------------------+
stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected "True"
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| created_at | 2016-07-28T10:51:25Z             |
| namespace  | test-ns                          |
| owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
| protected  | True                             |
| schema     | /v2/schemas/metadefs/namespace   |
| updated_at | 2016-07-28T10:53:12Z             |
| visibility | private                          |
+------------+----------------------------------+
stack@ubuntu-VirtualBox:/opt/stack/tempest$ glance --os-image-api-version 2 md-namespace-update test-ns --protected "True_1"
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| created_at | 2016-07-28T10:51:25Z             |
| namespace  | test-ns                          |
| owner      | 93efcc2b00164c61aafaa6f234b99bc3 |
| protected  | False                            |
| schema     | /v2/schemas/metadefs/namespace   |
| updated_at | 2016-07-28T10:53:23Z
[Yahoo-eng-team] [Bug 1607313] [NEW] Inconsistency in data stored in libvirt.xml file
Public bug reported:

Operations involved:
  nova migrate
  nova evacuate
  nova live-migration

The above operations on instances lead to the creation of a new instance
on a new compute host. It has been observed that the 'owner' information
in the libvirt.xml file is populated with the username/projectname
(tenantname) of the user performing any of the above operations.

For instance: there's an instance 'ins-1' in project/tenant 'pro-1',
owned by user 'user01', launched on compute host 'compute-101'. Now an
admin user named 'osadmin' from project 'admin' performs the operation
`nova live-migration asdfghi123xyz compute-102` * AD-123 (ID of ins-1).
This leads to a live migration of ins-1 from compute-101 to compute-102.

Now the file /var/lib/nova/instances/asdfghi123xyz/libvirt.xml on
compute-102 will have

  osadmin
  admin

which ideally should be

  user01
  pro-1

This inconsistency is seen in all the operations mentioned, i.e.
evacuate, migrate, live-migration.

Related commands:
  nova live-migration SERVER HOST_NAME
  nova evacuate EVACUATED_SERVER_NAME HOST_B
  nova migrate VM_ID

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: libvirt nova-manage

** Tags added: libvirt nova-manage

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607313

Title:
  Inconsistency in data stored in libvirt.xml file

Status in OpenStack Compute (nova):
  New

Bug description:
  Operations involved:
    nova migrate
    nova evacuate
    nova live-migration

  The above operations on instances lead to the creation of a new
  instance on a new compute host. It has been observed that the 'owner'
  information in the libvirt.xml file is populated with the
  username/projectname (tenantname) of the user performing any of the
  above operations, instead of the instance's own user and project, as
  described above. This inconsistency is seen in all the operations
  mentioned, i.e. evacuate, migrate, live-migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607313/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
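The expected behaviour can be sketched like this (function and field names assumed, not nova's actual code): the owner metadata written to libvirt.xml should come from the instance's own user/project, not from the request context of the admin who triggered the migration or evacuation.

```python
# Sketch with assumed names (not nova's code): derive libvirt.xml owner
# metadata from the instance, not from the request context.
def owner_metadata(instance, request_context):
    # A buggy variant would read request_context["user"]/["project"] here,
    # producing the osadmin/admin values seen in the report.
    return {"user": instance["user_name"],
            "project": instance["project_name"]}

ins_1 = {"user_name": "user01", "project_name": "pro-1"}
admin_ctx = {"user": "osadmin", "project": "admin"}

assert owner_metadata(ins_1, admin_ctx) == {"user": "user01",
                                            "project": "pro-1"}
```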
[Yahoo-eng-team] [Bug 1564110] Re: OpenStack should support MySQL Cluster (NDB)
** Changed in: nova
   Status: Opinion => Confirmed

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564110

Title:
  OpenStack should support MySQL Cluster (NDB)

Status in Ceilometer:
  Incomplete
Status in Cinder:
  Opinion
Status in Glance:
  New
Status in heat:
  New
Status in Ironic:
  Confirmed
Status in OpenStack Identity (keystone):
  Incomplete
Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.db:
  New

Bug description:
  oslo.db assumes that a MySQL database can only have a storage engine of
  InnoDB. This causes complications for OpenStack to support other MySQL
  storage engines, such as MySQL Cluster (NDB). Oslo.db should have a
  configuration string (i.e. mysql_storage_engine) in the oslo_db
  database group that can be used by SQLalchemy, Alembic, and OpenStack
  to implement the desired support and behavior for alternative MySQL
  storage engines.

  I do have a change-set patch for options.py in oslo_db that will add
  this functionality. I'll post once I'm added to the CLA for OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1564110/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606801] Re: deleting router runs into race condition
*** This bug is a duplicate of bug 1533457 ***
    https://bugs.launchpad.net/bugs/1533457

** This bug is no longer a duplicate of bug 1605546
   Race with deleting HA routers
** This bug has been marked a duplicate of bug 1533457
   Neutron server unable to sync HA info after race between HA router
   creating and deleting

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606801

Title:
  deleting router runs into race condition

Status in neutron:
  New

Bug description:
  After deleting a router, the logfiles of both network nodes fill up
  with:

    RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot open
    network namespace "qrouter-3767

  After I restarted the OpenStack services on the network nodes, no new
  entries appear.

  Reproducible: yes

  Steps:
  * add router via CLI or dashboard
  * delete router via CLI or dashboard
  * logfiles grow

  OpenStack version: Mitaka (this error occurred on Liberty too!)
  OS: CentOS 7, latest updates

  Installed packages on network nodes:
  openstack-neutron-vpnaas-8.0.0-1.el7.noarch
  openstack-neutron-common-8.1.2-1.el7.noarch
  openstack-neutron-metering-agent-8.1.2-1.el7.noarch
  python-neutronclient-4.1.1-2.el7.noarch
  python-neutron-8.1.2-1.el7.noarch
  python-neutron-fwaas-8.0.0-3.el7.noarch
  openstack-neutron-ml2-8.1.2-1.el7.noarch
  openstack-neutron-bgp-dragent-8.1.2-1.el7.noarch
  python-neutron-vpnaas-8.0.0-1.el7.noarch
  openstack-neutron-openvswitch-8.1.2-1.el7.noarch
  openstack-neutron-8.1.2-1.el7.noarch
  python-neutron-lib-0.0.2-1.el7.noarch
  openstack-neutron-fwaas-8.0.0-3.el7.noarch

  Logfile network node:

  2.770 44778 DEBUG neutron.agent.linux.ra [-] radvd disabled for router 37678766-597a-4e33-b83a-65142ca2ced8 disable /usr/lib/python2.7/site-packages/neutron/agent/linux/ra.py:190
  2016-07-27 09:10:02.770 44778 DEBUG neutron.agent.linux.utils [-] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qrouter-37678766-597a-4e33-b83a-65142ca2ced8', 'find', '/sys/class/net', '-maxdepth', '1', '-type', 'l', '-printf', '%f '] execute_rootwrap_daemon /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:100
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.linux.utils [-] Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace "qrouter-37678766-597a-4e33-b83a-65142ca2ced8": No such file or directory
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent [-] Error while deleting router 37678766-597a-4e33-b83a-65142ca2ced8
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent Traceback (most recent call last):
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 359, in _safe_router_removed
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent     self._router_removed(router_id)
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 377, in _router_removed
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent     ri.delete(self)
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 380, in delete
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent     super(HaRouter, self).delete(agent)
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 349, in delete
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent     self.router_namespace.delete()
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/namespaces.py", line 100, in delete
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent     for d in ns_ip.get_devices(exclude_loopback=True):
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 130, in get_devices
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent     log_fail_as_error=self.log_fail_as_error
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 140, in execute
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent     raise RuntimeError(msg)
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace "qrouter-37678766-597a-4e33-b83a-65142ca2ced8": No such file or directory
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent
  2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent

  Attached logfiles of control node and both network nodes.
  At 09:09:00 -> added router
  At 09:10:00 -> deleted router

To manage
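[Editor's note] The traceback above shows the agent raising on a namespace that a concurrent delete has already removed. One plausible direction for a fix is to treat that specific failure as a benign "already gone" condition. A minimal sketch, assuming a hypothetical helper shape; the function names are illustrative and not Neutron's actual code:

```python
def namespace_already_gone(error_text):
    """Detect the 'namespace vanished underneath us' failure mode seen
    in the log: the kernel reports the netns no longer exists."""
    return 'Cannot open network namespace' in error_text


def safe_router_removed(delete_fn, router_id):
    """Call delete_fn(router_id); swallow only the benign race where the
    qrouter namespace was already removed by a concurrent worker.
    Any other RuntimeError is re-raised unchanged."""
    try:
        delete_fn(router_id)
    except RuntimeError as err:
        if namespace_already_gone(str(err)):
            return True  # the racing delete already cleaned up
        raise
    return True
```

The key design point is to match only the namespace-not-found error text and re-raise everything else, so real failures still surface in the logs.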
[Yahoo-eng-team] [Bug 1607229] [NEW] ImageRef for server create/rebuild/rescue etc are accepted as random url
Public bug reported:

Currently, imageRef in the server create, rebuild, and rescue
operations can be an arbitrary URL that merely contains an image UUID;
Nova extracts the UUID from that URL and proceeds. imageRef is meant to
be a bare UUID, validated against Glance. Since the /images proxy APIs
are deprecated, it makes sense to restrict imageRef to a UUID only and
return 400 when a non-UUID (arbitrary URL) is requested.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607229

Title:
  ImageRef for server create/rebuild/rescue etc are accepted as random
  url

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently, imageRef in the server create, rebuild, and rescue
  operations can be an arbitrary URL that merely contains an image
  UUID; Nova extracts the UUID from that URL and proceeds. imageRef is
  meant to be a bare UUID, validated against Glance. Since the /images
  proxy APIs are deprecated, it makes sense to restrict imageRef to a
  UUID only and return 400 when a non-UUID (arbitrary URL) is
  requested.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607229/+subscriptions
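[Editor's note] The tightening proposed above amounts to validating imageRef as a bare UUID before use. A hedged sketch of that check, not Nova's actual schema-validation code; the function name is illustrative:

```python
import uuid


def validate_image_ref(image_ref):
    """Accept only a bare image UUID; reject URLs that merely embed one.
    The ValueError raised here is what an API layer would map to an
    HTTP 400 response. Illustrative sketch only."""
    try:
        # uuid.UUID() raises ValueError for anything that is not a
        # UUID, including URLs such as http://glance/v2/images/<uuid>.
        uuid.UUID(image_ref)
    except (ValueError, AttributeError, TypeError):
        raise ValueError('imageRef must be an image UUID, got %r'
                         % (image_ref,))
    return image_ref
```

Usage: call it at the API boundary, e.g. `validate_image_ref(body['server']['imageRef'])`, before any lookup against Glance.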
[Yahoo-eng-team] [Bug 1607227] [NEW] Enhancement to iptables driver for FWaaS v2
Public bug reported:

The iptables manager and firewall driver in Neutron must be enhanced so
that the SecurityGroup and FWaaS v2 APIs can coexist. This patch
refactors the iptables driver so that the FWaaS and SG chains can be
interleaved while preserving the ordering of rules.

** Affects: neutron
   Importance: Undecided
   Assignee: chandan dutta chowdhury (chandanc)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => chandan dutta chowdhury (chandanc)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607227

Title:
  Enhancement to iptables driver for FWaaS v2

Status in neutron:
  New

Bug description:
  The iptables manager and firewall driver in Neutron must be enhanced
  so that the SecurityGroup and FWaaS v2 APIs can coexist. This patch
  refactors the iptables driver so that the FWaaS and SG chains can be
  interleaved while preserving the ordering of rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607227/+subscriptions
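[Editor's note] One way to picture the interleaving the refactor must guarantee: a per-port dispatch chain jumps to the FWaaS v2 chain first and the security-group chain second, so both rule sets apply in a fixed order. A hypothetical sketch; the chain names and layout are illustrative and do not reflect Neutron's real IptablesManager naming scheme:

```python
def port_chain_rules(port_id, fwaas_chain, sg_chain):
    """Build iptables dispatch rules for one port: evaluate the FWaaS
    policy chain before the security-group chain, ending in a default
    drop. Each chain keeps its own internal rule order."""
    port_chain = 'port-%s' % port_id[:8]
    return [
        '-A %s -j %s' % (port_chain, fwaas_chain),  # FWaaS v2 policy first
        '-A %s -j %s' % (port_chain, sg_chain),     # then SG rules
        '-A %s -j DROP' % port_chain,               # default deny
    ]
```

Because dispatch happens via jump targets rather than by merging the two rule lists, either feature can rebuild its own chain without disturbing the other's ordering, which is the coexistence property the bug asks for.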
[Yahoo-eng-team] [Bug 1607219] [NEW] revert-resize doesn't drop new pci devices
Public bug reported:

This commit, https://review.openstack.org/#/c/307124/, fixes resize for
PCI devices, but in drop_move_claim it always takes the old PCI devices
from the migration context. It should select the PCI devices according
to the prefix.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607219

Title:
  revert-resize doesn't drop new pci devices

Status in OpenStack Compute (nova):
  New

Bug description:
  This commit, https://review.openstack.org/#/c/307124/, fixes resize
  for PCI devices, but in drop_move_claim it always takes the old PCI
  devices from the migration context. It should select the PCI devices
  according to the prefix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607219/+subscriptions
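[Editor's note] The fix direction described above — select PCI devices by the migration-context prefix instead of always taking the old ones — can be sketched as follows. The dict-based context and the helper name are hypothetical, not Nova's actual MigrationContext API:

```python
def pci_devices_to_drop(migration_context, revert):
    """On revert-resize, the *new* PCI devices (claimed on the
    destination) must be dropped; on confirm-resize, the *old* ones.
    The prefix selects which set is read from the migration context."""
    prefix = 'new_' if revert else 'old_'
    return migration_context.get(prefix + 'pci_devices', [])
```

With an always-`'old_'` lookup (the reported bug), reverting a resize would free the wrong devices and leak the newly claimed ones.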