[Yahoo-eng-team] [Bug 1694207] [NEW] Resource count displayed at the above of each table will not be hidden in case resource count changes to zero
Public bug reported:

Horizon tables have two areas that display the current number of resources: one above the table and one below it. If you have only one resource (e.g. one instance) and you remove it, the table looks like the following after "UpdateRow" finishes:

1. The table only displays "No items to display."
2. The resource count below the table is hidden.
3. The resource count above the table is still displayed.

I think the resource counts displayed both above and below the table should be hidden.

** Affects: horizon
     Importance: Undecided
     Assignee: Keiichi Hikita (keiichi-hikita)
     Status: New

** Attachment added: "after_remove_last_resourcde.png"
   https://bugs.launchpad.net/bugs/1694207/+attachment/4885438/+files/after_remove_last_resourcde.png

** Changed in: horizon
     Assignee: (unassigned) => Keiichi Hikita (keiichi-hikita)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1694207

Title: Resource count displayed at the above of each table will not be hidden in case resource count changes to zero
Status in OpenStack Dashboard (Horizon): New
To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1694207/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
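The rule the report asks for can be stated in a few lines. This is only an illustrative sketch (the real fix belongs in Horizon's table-update JavaScript, and the label text is assumed), but it captures the expected behavior: both count labels follow one visibility rule.

```python
# Illustrative sketch, not Horizon's actual code: after a row update, the
# count labels above and below the table should share one visibility rule.

def table_count_labels(row_count):
    """Return (header_text, footer_text); None means the label is hidden."""
    if row_count == 0:
        # Hide both counts when the table is empty -- the reported bug is
        # that only the bottom count is hidden in this case.
        return None, None
    text = "Displaying %d item%s" % (row_count, "" if row_count == 1 else "s")
    return text, text
```

With this rule, removing the last row hides both labels at once instead of leaving the top one showing a stale count.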
[Yahoo-eng-team] [Bug 1694190] [NEW] qos scenario tests should check if rule types are supported
Public bug reported:

The test should be skipped if the functionality is not supported. It has a require_qos_rule_type check, but that check has been broken since Ia00d349625db358ab486802fc0ff2e69eaa3895e.

e.g. http://logs.openstack.org/26/468326/1/check/gate-tempest-dsvm-neutron-scenario-linuxbridge-ubuntu-xenial-nv/54e3dbe/testr_results.html.gz

NOTE: the CI has both the ovs and linuxbridge MDs enabled.

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/tempest/scenario/test_qos.py", line 168, in test_qos
    qos_policy_id=policy_id)
  File "/opt/stack/new/neutron/neutron/tests/tempest/services/network/json/network_client.py", line 154, in _update
    resp, body = self.put(uri, post_data)
  File "tempest/lib/common/rest_client.py", line 334, in put
    return self.request('PUT', url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 659, in request
    self._error_checker(resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 780, in _error_checker
    raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: An object with that identifier already exists
Details: {u'message': u'Rule bandwidth_limit is not supported by port 6f42de3d-43ad-4d27-a523-efc0f60c0944', u'detail': u'', u'type': u'QosRuleNotSupported'}

** Affects: neutron
     Importance: Undecided
     Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694190

Title: qos scenario tests should check if rule types are supported
Status in neutron: New
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694190/+subscriptions
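The shape of the missing guard can be sketched as follows. This is an assumption-laden sketch, not the actual require_qos_rule_type implementation: the helper names and the response shape (`{'rule_types': [{'type': ...}]}`) mirror the Neutron QoS rule-types API by convention, but the real in-tree code may differ.

```python
# Hypothetical sketch of the skip guard the report asks for. The response
# shape follows the Neutron "list QoS rule types" API; names are assumptions.

def supported_rule_types(rule_types_response):
    """Extract the set of rule type names from a rule-types listing."""
    return {entry['type'] for entry in rule_types_response['rule_types']}

def should_skip_qos_test(required, rule_types_response):
    """True when the deployment does not advertise the required rule type."""
    return required not in supported_rule_types(rule_types_response)

# In the scenario test this would translate to something like:
#   if should_skip_qos_test('bandwidth_limit', client.list_qos_rule_types()):
#       raise self.skipException('bandwidth_limit rule type is not supported')
```

With linuxbridge in the mix, a deployment may not advertise bandwidth_limit, so the test would skip instead of failing with QosRuleNotSupported as in the traceback above.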
[Yahoo-eng-team] [Bug 1694183] [NEW] PFs with vlan are not being reported via devices metadata
Public bug reported:

In some cases, physical functions (PFs) with a VLAN are not being reported via the device's metadata, since info_cache shows up as empty. Refreshing the info_cache before metadata creation solves the problem.

** Affects: nova
     Importance: Undecided
     Status: New

** Tags: metadata sriov

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694183

Title: PFs with vlan are not being reported via devices metadata
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1694183/+subscriptions
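The suggested workaround, refreshing the info_cache before building device metadata, can be sketched in miniature. The class and method names below are stand-ins, not nova's actual objects API; in nova the refresh would re-read network info from the database.

```python
# Minimal sketch of the report's suggestion, with illustrative names only:
# refresh the (possibly empty) info_cache before building device metadata,
# so PF/VLAN devices are not silently dropped.

class Instance:
    def __init__(self):
        self.info_cache = None  # stale/empty in the reported failure case

    def refresh_info_cache(self):
        # Stand-in for re-reading network info from the database.
        self.info_cache = {'network_info': ['pf-vlan-device']}

def build_device_metadata(instance):
    # Refresh first when the cache is empty, then build metadata from it.
    if not instance.info_cache:
        instance.refresh_info_cache()
    return instance.info_cache['network_info']
```

Without the refresh, an empty cache would yield empty device metadata even though the PF with its VLAN is actually attached.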
[Yahoo-eng-team] [Bug 1693438] Re: error instances remain in "Build" status and can't delete it
Reviewed: https://review.openstack.org/468401
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=5fac17ae960d36eeb7a642725a37f82e8ca95ec1
Submitter: Jenkins
Branch: master

commit 5fac17ae960d36eeb7a642725a37f82e8ca95ec1
Author: Matt Riedemann
Date: Fri May 26 08:27:07 2017 -0400

    Use targeted context when burying instances in cell0

    After Iccdf6b80f5fc8adcc8a89ce6ece3f37b6cbcaee2 we need to use the
    yielded context which is targeted to the cell when we do DB operations,
    which in this case is creating the instance in the cell0 database and
    then updating its status. There is another place in here where this was
    missed, which is when we're trying to delete a build request which was
    already deleted.

    Closes-Bug: #1693438
    Change-Id: I142f97d691fa55e9824714c9c224f998ad72337e

** Changed in: nova
     Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1693438

Title: error instances remain in "Build" status and can't delete it
Status in OpenStack Compute (nova): Fix Released

Bug description:

Description
===========
When the create server API fails because of no valid host, the instance's status remains BUILD with a "scheduling" task state. Additionally, users can't delete the instance via the delete server API.

Steps to reproduce
==================
1. Create devstack with default configs.
2. Boot instances until nova-scheduler says "no valid host was found".
3. Check the error instance's status: it remains "BUILD" and its task state remains "scheduling".
4. Check nova-conductor's log: it has the following error.
5. The failed instance can't be deleted via the delete instance API.

Expected result
===============
The instance goes to ERROR status and no task state because of "no valid host was found", and the instance can be deleted via the delete instance API.
Environment
===========
- git log:
    Merge: bedcf29 3838d5e
    Author: Jenkins
    Date: Thu May 25 01:15:41 2017 +
    Merge "Handle uuid in HostAPI._find_service"
- hypervisor: KVM

Logs & Configs
==============
- nova-conductor's log:

nova-conductor[28120]: NoValidHost: No valid host was found. There are not enough hosts available.
nova-conductor[28120]:
nova-conductor[28120]: WARNING nova.scheduler.utils [req-7969ec7f-795d-420d-b847-b5c3c6bc8489 admin admin] [instance: e5e9cfe9-49ec-40a4-b763-d8bda68d5e56] Setting instance to ERROR state.
nova-conductor[28120]: ERROR oslo_messaging.rpc.server [req-7969ec7f-795d-420d-b847-b5c3c6bc8489 admin admin] Exception during message handling
nova-conductor[28120]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 157, in _process_incoming
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     build_requests=build_requests)
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/devstack/devstack_data/nova/nova/conductor/manager.py", line 893, in _bury_in_cell0
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     exc, legacy_spec)
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/devstack/devstack_data/nova/nova/conductor/manager.py", line 355, in _set_vm_state_and_notify
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     ex, request_spec)
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/devstack/devstack_data/nova/nova/scheduler/utils.py", line 104, in set_vm_state_and_notify
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     instance.save()
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 226, in wrapper
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     return fn(self, *args, **kwargs)
nova-conductor[28120]: ERROR oslo_messaging.rpc.server   File "/devstack/devstack_data/nova/nova/objects/instance.py", line 781, in save
nova-conductor[28120]: ERROR oslo_messaging.rpc.server     columns_to_join=_expe
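The commit message's point, that DB operations on a cell0-buried instance must use the context yielded by the targeting context manager rather than the original request context, can be sketched abstractly. `target_cell` below is a simplified stand-in for nova.context.target_cell, not its real implementation.

```python
# Illustrative sketch of the fix's idea. target_cell is a stand-in for
# nova.context.target_cell: it yields a copy of the request context that is
# routed at the given cell's database.
from contextlib import contextmanager

@contextmanager
def target_cell(context, cell_mapping):
    # Contexts and mappings are plain dicts here for illustration.
    yield dict(context, cell=cell_mapping['uuid'])

def bury_in_cell0(context, cell0_mapping, create_instance):
    # The bug: instance.create()/instance.save() were called with `context`
    # (untargeted). The fix: use the yielded cell-targeted context instead.
    with target_cell(context, cell0_mapping) as cctxt:
        return create_instance(cctxt)
```

Using the untargeted context meant the instance record never landed in (or was updated in) the cell0 database, leaving the instance stuck in BUILD/scheduling and undeletable, as described in the bug.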
[Yahoo-eng-team] [Bug 1694165] [NEW] Improve Neutron documentation for simpler deployments
Public bug reported:

During a Boston Summit session, an issue was raised that Neutron documentation for simpler deployments should be improved/simplified. A couple of observations were noted:

1) For non-Neutron-savvy users, it is not very intuitive to specify/configure networking requirements.
2) The basic default configuration (as documented) is very OVS-centric. It should discuss non-OVS deployments as well.

Here is the etherpad with the details of the discussion: https://etherpad.openstack.org/p/pike-neutron-making-it-easy

** Affects: neutron
     Importance: Undecided
     Status: New

** Tags: doc rfe

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694165

Title: Improve Neutron documentation for simpler deployments
Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694165/+subscriptions
[Yahoo-eng-team] [Bug 1694127] [NEW] Unauthorized exception in angular users page as a member user.
Public bug reported:

Env: devstack, master branch

Steps to reproduce:
1. Set 'users_panel' to True in settings.py.
2. Log in as a member user.
3. Go to the identity/users panel.

You will be redirected to the login page when you click the users panel.

** Affects: horizon
     Importance: Undecided
     Assignee: wei.ying (wei.yy)
     Status: New

** Changed in: horizon
     Assignee: (unassigned) => wei.ying (wei.yy)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1694127

Title: Unauthorized exception in angular users page as a member user.
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1694127/+subscriptions
[Yahoo-eng-team] [Bug 1694120] [NEW] routed-networks - segments subnet dhcp port bind to wrong segment
Public bug reported:

$ openstack network list -f json
[]

$ openstack network create ctlplane

Network created, and we have 1 default segment:

$ openstack network segment list --network ctlplane -f json
[
  {
    "Network": "89e4aa6c-06f2-4348-9576-bec17c7526d6",
    "Network Type": "local",
    "Segment": null,
    "ID": "46950790-717b-4d31-883a-a8d9d3df2e92",
    "Name": null
  }
]

$ openstack network segment create subnet0 --physical-network ctlplane --network ctlplane --network-type flat

$ openstack subnet create --subnet-range 172.20.0.0/26 --dhcp --gateway 172.20.0.62 --ip-version 4 --network ctlplane --network-segment subnet0 --allocation-pool start=172.20.0.10,end=172.20.0.19 subnet0

(undercloud) [stack@ocataleafs ~]$ sudo ip netns list
qdhcp-89e4aa6c-06f2-4348-9576-bec17c7526d6

(undercloud) [stack@ocataleafs ~]$ sudo ip netns exec qdhcp-89e4aa6c-06f2-4348-9576-bec17c7526d6 ip addr show | grep tap
13: tap0170296c-f2: mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    inet 172.20.0.10/26 brd 172.20.0.63 scope global tap0170296c-f2

(undercloud) [stack@ocataleafs ~]$ sudo ip netns exec qdhcp-89e4aa6c-06f2-4348-9576-bec17c7526d6 ping 172.20.0.62
PING 172.20.0.62 (172.20.0.62) 56(84) bytes of data.
From 172.20.0.10 icmp_seq=1 Destination Host Unreachable
From 172.20.0.10 icmp_seq=2 Destination Host Unreachable

$ openstack network segment list -f json
[
  {
    "Network": "89e4aa6c-06f2-4348-9576-bec17c7526d6",
    "Network Type": "local",
    "Segment": null,
    "ID": "46950790-717b-4d31-883a-a8d9d3df2e92",
    "Name": null
  },
  {
    "Network": "89e4aa6c-06f2-4348-9576-bec17c7526d6",
    "Network Type": "flat",
    "Segment": null,
    "ID": "32a308e9-6f2e-4252-a273-95c7bfbcc94e",
    "Name": "subnet0"
  }
]

(undercloud) [stack@ocataleafs ~]$ openstack network segment delete 46950790-717b-4d31-883a-a8d9d3df2e92
Failed to delete network segment with ID '46950790-717b-4d31-883a-a8d9d3df2e92': HttpException: Conflict (HTTP 409) (Request-ID: req-8b1a5179-c987-47ba-b35a-68c9afdca4bc), Segment '46950790-717b-4d31-883a-a8d9d3df2e92' cannot be deleted: The segment is still bound with port(s) 0170296c-f2e5-4482-843a-3ebb0bf11381.
1 of 1 network segments failed to delete.

The DHCP namespace port did not bind to the segment associated with subnet0; it bound to the default segment that is automatically created when creating a network. If I change the workflow to first delete the default segment and then create segment subnet0 + subnet subnet0, the DHCP namespace is wired correctly.

** Affects: neutron
     Importance: Undecided
     Status: New

** Tags: l3-ipam-dhcp

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694120

Title: routed-networks - segments subnet dhcp port bind to wrong segment
Status in neutron: New
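The binding rule the reporter expects can be stated compactly. The dicts below are simplified stand-ins for Neutron's segment and subnet objects (lowercase keys are illustrative, not the actual DB schema): the DHCP port for a routed-network subnet should land on the segment the subnet is associated with, never the network's auto-created default segment.

```python
# Illustrative sketch of the expected segment selection for the DHCP port.
# Keys and values are simplified stand-ins for Neutron objects.

def segment_for_dhcp_port(subnet, segments):
    """Pick the segment whose id matches the subnet's segment_id, if any."""
    for segment in segments:
        if segment['id'] == subnet.get('segment_id'):
            return segment
    return None
```

Applied to the transcript above, a subnet associated with subnet0 should select the flat segment 32a308e9-..., not the default local segment 46950790-... that the bug shows the port actually bound to.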
[Yahoo-eng-team] [Bug 1684682] Re: DHCP namespace doesn't have IPv6 default route
Reviewed: https://review.openstack.org/461887
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7ad7584ce113bff21999d6ffd155334bf3d05d2f
Submitter: Jenkins
Branch: master

commit 7ad7584ce113bff21999d6ffd155334bf3d05d2f
Author: Brian Haley
Date: Tue May 2 15:25:17 2017 -0400

    Add IPv6 default route to DHCP namespace

    The DHCP namespace used to always have its IPv6 default route
    configured from a received Router Advertisement (RA). A recent
    change [1] disabled receipt of RAs, instead relying on the network
    topology to configure the namespace. Unfortunately the code only
    added an IPv4 default route, which caused a regression with DNS
    resolution in some circumstances where IPv6 was being used.

    A default route is now added for both IP versions.

    [1] https://review.openstack.org/#/c/386687/

    Change-Id: I7c388f64c0aa9feb002f7a2faf76e7ccca30a3e7
    Closes-bug: 1684682

** Changed in: neutron
     Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684682

Title: DHCP namespace doesn't have IPv6 default route
Status in neutron: Fix Released

Bug description:

This is a regression in Ocata; things work fine in Newton. If I create an IPv6 subnet in Ocata, the DHCP namespace gets configured with an IPv6 address but is lacking a default route, so dnsmasq fails to resolve any DNS queries except for the local OpenStack instances. I think there have been some changes in the way the namespace is being set up, removing listening to RAs and instead doing static configuration, that may have caused this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684682/+subscriptions
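The commit's intent, installing a default route for both IP versions in the DHCP namespace, can be sketched as the iproute2 commands it amounts to. This is not neutron's actual ip_lib code; the namespace name and gateways are illustrative, and the sketch only builds the command lists rather than executing them.

```python
# Illustrative sketch: build the per-family default-route commands that the
# fix conceptually adds inside the DHCP namespace. Not neutron's ip_lib code.

def default_route_cmds(namespace, gw_v4=None, gw_v6=None):
    """Return `ip netns exec` argv lists for each configured gateway."""
    cmds = []
    if gw_v4:
        cmds.append(['ip', 'netns', 'exec', namespace,
                     'ip', '-4', 'route', 'replace', 'default', 'via', gw_v4])
    if gw_v6:
        # The regression: this IPv6 half was missing once RAs were disabled.
        cmds.append(['ip', 'netns', 'exec', namespace,
                     'ip', '-6', 'route', 'replace', 'default', 'via', gw_v6])
    return cmds
```

Before the fix, only the IPv4 half existed, so an IPv6-only resolver was unreachable from the namespace and dnsmasq could not forward DNS queries, which matches the reported symptom.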