[Yahoo-eng-team] [Bug 1487668] [NEW] nova evacuate doesn't work with neutron dvr architecture
Public bug reported:

We are seeing an issue with nova evacuate in a neutron DVR architecture. When we run the nova evacuate command and there is no distributed router (qrouter) namespace on the new compute node, the L3 agent does not create one for the evacuated VMs, so the VMs do not get an IP address. When we evacuate and the qrouter namespace already exists on the new node, the evacuated VM still does not get an IP address via DHCP; if you set a fixed IP address it works. We tcpdumped the traffic and saw that the DHCP agent did not notice the migration and kept sending DHCP packets to the old compute node. There are no errors in the openvswitch, l3, dhcp or nova log files.

Packages:
ii neutron-common 1:2015.1.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - common
ii neutron-plugin-ml2 1:2015.1.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - ML2 plugin
ii neutron-server 1:2015.1.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - server
ii python-neutron 1:2015.1.0-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - Python library
ii python-neutron-fwaas 2015.1.0-0ubuntu1~cloud0 all Firewall-as-a-Service driver for OpenStack Neutron
ii python-neutronclient 1:2.3.11-0ubuntu1.1~cloud0 all client - Neutron is a virtual network service for Openstack
ii nova-api 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - API frontend
ii nova-cert 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - certificate management
ii nova-common 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - common files
ii nova-conductor 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - conductor service
ii nova-consoleauth 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - Console Authenticator
ii nova-novncproxy 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - NoVNC proxy
ii nova-scheduler 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - virtual machine scheduler
ii python-nova 1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute Python libraries
ii python-novaclient 1:2.22.0-0ubuntu1~cloud0 all client library for OpenStack Compute API

** Affects: nova Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1487668

Title: nova evacuate doesn't work with neutron dvr architecture
Status in OpenStack Compute (nova): New
[Yahoo-eng-team] [Bug 1487663] [NEW] no testcases to cover the region creation with invalid id
Public bug reported:

Regions use IDs differently from other resources: the user can specify the ID, or it will be generated automatically. The region schema validation requires the region ID to be a string, but there is no test case covering this.

CURL
====
- create region:

curl -g -i -X POST http://127.0.0.1:35357/v3/regions -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 9b7866d88de2408381549e9af55b0c07" -d '{"region": {"enabled": true, "id": 1234}}'

{"error": {"message": "Invalid input for field 'id'. The value is '1234'.", "code": 400, "title": "Bad Request"}}

- create endpoint:

curl -g -i -X POST http://127.0.0.1:35357/v3/endpoints -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: af9a74ea521045b2880e2a364a16b05a" -d '{"endpoint": {"url": "192.168.1.1:78", "interface": "public", "region": 7891, "enabled": true, "service_id": "ce08ac9579fc4de78d0ee17efeca530e"}}'

{"error": {"message": "Invalid input for field 'region'. The value is '7891'.", "code": 400, "title": "Bad Request"}}

** Affects: keystone Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1487663

Title: no testcases to cover the region creation with invalid id
Status in Keystone: New

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1487663/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
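The missing coverage could look something like the sketch below. It is a minimal, self-contained illustration: `validate_region`, `ValidationError`, and the type check are hypothetical stand-ins for keystone's actual JSON-schema validation, not its real API.

```python
# Illustrative sketch: keystone's schema requires a region ID to be a
# string, so a numeric ID must be rejected with a 400-style validation
# error. All names here are hypothetical stand-ins.
import unittest


class ValidationError(Exception):
    """Stand-in for the 400 Bad Request raised by schema validation."""


def validate_region(body):
    region = body.get("region", {})
    if "id" in region and not isinstance(region["id"], str):
        raise ValidationError(
            "Invalid input for field 'id'. The value is %r." % region["id"])
    return region


class RegionCreateInvalidIdTest(unittest.TestCase):
    def test_create_region_with_integer_id_fails(self):
        body = {"region": {"enabled": True, "id": 1234}}
        self.assertRaises(ValidationError, validate_region, body)

    def test_create_region_with_string_id_succeeds(self):
        body = {"region": {"enabled": True, "id": "RegionOne"}}
        self.assertEqual("RegionOne", validate_region(body)["id"])
```

A real test would exercise the /v3/regions controller end to end, as the curl commands above do, rather than a standalone helper.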
[Yahoo-eng-team] [Bug 1487425] [NEW] Ranged filtering by version is not supported
Public bug reported:

Currently filtering version by range is not supported, so requests like ?version=gt:5.0&version=lt:8.0&version=ne:6.0 don't work as expected: only the last parameter is used in those cases.

** Affects: glance Importance: Undecided Status: New
** Tags: artifacts

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1487425

Title: Ranged filtering by version is not supported
Status in Glance: New

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1487425/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
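For illustration, here is a sketch of how repeated version filters in a query string could be combined into one range predicate instead of keeping only the last value. The helper names are hypothetical, not Glance's API; only the stdlib is used.

```python
# Sketch: combine every "version" query filter (gt/lt/ge/le/ne/eq) into
# a single predicate, rather than letting the last one win.
from urllib.parse import parse_qs

OPS = {
    "gt": lambda a, b: a > b,
    "lt": lambda a, b: a < b,
    "ge": lambda a, b: a >= b,
    "le": lambda a, b: a <= b,
    "ne": lambda a, b: a != b,
    "eq": lambda a, b: a == b,
}


def parse_version(text):
    # Compare versions numerically, not lexically ("10.0" > "9.0").
    return tuple(int(part) for part in text.split("."))


def version_predicate(query_string):
    """Build one predicate from every version filter in the query."""
    filters = []
    # parse_qs keeps all repeated values, unlike a plain dict of params.
    for raw in parse_qs(query_string).get("version", []):
        op, _, value = raw.partition(":")
        filters.append((OPS[op], parse_version(value)))
    return lambda v: all(op(parse_version(v), bound) for op, bound in filters)


pred = version_predicate("version=gt:5.0&version=lt:8.0&version=ne:6.0")
# pred("7.0") satisfies all three filters; pred("6.0") fails ne:6.0.
```

The key design point is using `parse_qs`, which returns a list per parameter name, so none of the repeated filters are silently dropped.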
[Yahoo-eng-team] [Bug 1487506] [NEW] compression error
Public bug reported:

While using django-pyscss==2.0.2 and pyScss==1.3.4 I'm getting:

CommandError: An error occured during rendering /builddir/build/BUILD/horizon-2015.1.1/openstack_dashboard/templates/_stylesheets.html: Don't know how to merge conflicting combinators: SimpleSelector: u'+ .btn:not(:first-child)' and SimpleSelector: u' .btn'
Found 'compress' tags in:
/builddir/build/BUILD/horizon-2015.1.1/openstack_dashboard/templates/_stylesheets.html
/builddir/build/BUILD/horizon-2015.1.1/horizon/templates/horizon/_scripts.html
/builddir/build/BUILD/horizon-2015.1.1/openstack_dashboard/dashboards/theme/templates/_stylesheets.html
/builddir/build/BUILD/horizon-2015.1.1/horizon/templates/horizon/_conf.html
Compressing... error: Bad exit status from /var/tmp/rpm-tmp.oZj0k6 (%build)

** Affects: horizon Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487506

Title: compression error
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487506/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1487477] [NEW] Mess in live-migration compute-manager and drivers code
Public bug reported:

There is a _live_migration_cleanup_flags method in compute's manager class which decides whether cleanup is needed after live-migration is done. It accepts 2 params, from the docstring:

:param block_migration: if true, it was a block migration
:param migrate_data: implementation specific data

The problem is that the current compute manager code is libvirt-specific. It operates on values in the migrate_data dictionary that are valid only for the libvirt driver implementation. This doesn't cause any bug yet because the other drivers don't implement a cleanup method at all. When anyone decides to implement one, live-migration starts to fail, and there is no valid CI job to verify that. _live_migration_cleanup_flags should become hypervisor-specific, and we should move it from the compute manager to the drivers.

** Affects: nova Importance: Undecided Assignee: Timofey Durakov (tdurakov) Status: New
** Tags: live-migration

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1487477

Title: Mess in live-migration compute-manager and drivers code
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1487477/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
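A minimal sketch of the proposed refactoring. The class and key names below are illustrative (they mirror, but are not, nova's actual driver API and migrate_data keys): each virt driver makes its own cleanup-flags decision, and the manager just delegates.

```python
# Illustrative sketch of moving the cleanup decision out of the compute
# manager and into each virt driver. Names are hypothetical stand-ins.

class ComputeDriver:
    """Base driver: no cleanup needed unless a driver says otherwise."""

    def live_migration_cleanup_flags(self, block_migration, migrate_data):
        # Returns (do_cleanup, destroy_disks).
        return False, False


class LibvirtDriver(ComputeDriver):
    def live_migration_cleanup_flags(self, block_migration, migrate_data):
        # Only libvirt knows what its own migrate_data keys mean.
        shared_block = migrate_data.get("is_shared_block_storage", False)
        shared_path = migrate_data.get("is_shared_instance_path", False)
        do_cleanup = block_migration or not shared_path
        destroy_disks = not shared_block
        return do_cleanup, destroy_disks


class ComputeManager:
    def __init__(self, driver):
        self.driver = driver

    def _live_migration_cleanup_flags(self, block_migration, migrate_data):
        # The manager no longer inspects driver-specific data itself.
        return self.driver.live_migration_cleanup_flags(
            block_migration, migrate_data)
```

With this shape, adding cleanup support to another hypervisor means overriding one driver method, and the manager never has to know which keys a given driver puts in migrate_data.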
[Yahoo-eng-team] [Bug 1487455] [NEW] Full decomposition of the core Nuage plugin
Public bug reported:

Fully decompose the Nuage core plugin to the vendor repo, so there is no code in the Neutron repo.

** Affects: neutron Importance: Undecided Status: New
** Tags: nuage
** Tags added: nuage

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487455

Title: Full decomposition of the core Nuage plugin
Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487455/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1487457] [NEW] can't get user list for domain if default identity backend driver is ldap
Public bug reported:

root@node-8:/var/log/apache2# openstack --os-token E6Px37E7 --os-url http://192.168.0.2:35357/v3/ --os-identity-api-version 3 user list --quiet --format csv --long --domain d95a7f6e6a6e4ca88703b3fb3e9ebda6
ERROR: openstack Could not find domain: d95a7f6e6a6e4ca88703b3fb3e9ebda6 (HTTP 404) (Request-ID: req-ce60504d-3e3b-4543-be29-61e140c3f59c)

root@node-8:/var/log/apache2# openstack --os-token E6Px37E7 --os-url http://192.168.0.2:35357/v3/ --os-identity-api-version 3 domain show d95a7f6e6a6e4ca88703b3fb3e9ebda6
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | d95a7f6e6a6e4ca88703b3fb3e9ebda6 |
| name    | test_domain_3                    |
+---------+----------------------------------+

keystone.conf https://paste.mirantis.net/show/968/
domains/keystone.heat.conf https://paste.mirantis.net/show/969/

** Affects: keystone Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1487457

Title: can't get user list for domain if default identity backend driver is ldap
Status in Keystone: New

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1487457/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1454074] Re: denial of service via large number of logout page requests
** Changed in: horizon Status: Triaged => Won't Fix

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1454074

Title: denial of service via large number of logout page requests
Status in OpenStack Dashboard (Horizon): Won't Fix
Status in OpenStack Security Advisory: Won't Fix

Bug description: While investigating CVE-2014-8124 (https://bugs.launchpad.net/horizon/+bug/1394370) I think I found another instance of the underlying issue, but with the logout form. I'm on Ubuntu 14.04 LTS, with distro-packaged openstack-dashboard 1:2014.1.4-0ubuntu2. I verified the patch from https://review.openstack.org/140356 is applied to the installed files. I configured horizon to use the mysql datastore, and ran the following command:

while true ; do wget http://localhost/horizon/auth/logout/ ; done

While this command was running I checked the mysql dash database table django_sessions and found it growing without apparent bound:

select * from django_session;
...
231 rows in set (0.00 sec)

Is this an issue? Thanks

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1454074/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1404743] Re: sporadic test failures due to VMs not getting a DHCP lease
Closing for now per Ryan's comments in #19.

** Changed in: neutron Status: In Progress => Invalid
** Changed in: neutron Importance: Critical => High

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1404743

Title: sporadic test failures due to VMs not getting a DHCP lease
Status in neutron: Invalid

Bug description: http://logs.openstack.org/01/141001/4/gate/gate-tempest-dsvm-neutron-full/0fcd5ec/console.html.gz

2014-12-19 14:01:31.371 | SSHTimeout: Connection to the 172.24.4.69 via SSH timed out.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1404743/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1487548] [NEW] fullstack infrastructure tears down processes via kill -9
Public bug reported:

I can't imagine this has good implications. Distros typically kill neutron processes via kill -15, so this should definitely be doable here as well.

** Affects: neutron Importance: Undecided Status: New
** Tags: fullstack
** Tags added: fullstack

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487548

Title: fullstack infrastructure tears down processes via kill -9
Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487548/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
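The graceful teardown the report asks for can be sketched as: send SIGTERM (kill -15) first, and escalate to SIGKILL (kill -9) only if the process does not exit within a grace period. This assumes a POSIX host, and the helper name is illustrative, not fullstack's actual API.

```python
# Sketch: graceful process teardown with a SIGKILL fallback.
import signal
import subprocess


def stop_process(proc, grace=5.0):
    """Send SIGTERM first; escalate to SIGKILL only if the process
    does not exit within the grace period."""
    proc.send_signal(signal.SIGTERM)
    try:
        proc.wait(timeout=grace)
        return "terminated"
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL: the process gets no chance to clean up
        proc.wait()
        return "killed"
```

The point of the grace period is that SIGTERM lets agents run their shutdown hooks (closing sockets, removing namespaces), which SIGKILL never allows.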
[Yahoo-eng-team] [Bug 1487522] [NEW] Objects: obj_reset_changes signature doesn't match
Public bug reported:

If an object contains a Flavor object within it and obj_reset_changes is called with recursive=True, it will fail with the following error. This is because Flavor.obj_reset_changes is missing the recursive param in its signature. The Instance object is also missing this parameter in its method.

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/objects/test_request_spec.py", line 284, in test_save
    req_obj.obj_reset_changes(recursive=True)
  File "nova/objects/base.py", line 224, in obj_reset_changes
    value.obj_reset_changes(recursive=True)
TypeError: obj_reset_changes() got an unexpected keyword argument 'recursive'

** Affects: nova Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1487522

Title: Objects: obj_reset_changes signature doesn't match
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1487522/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
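The mismatch can be reproduced with a minimal, self-contained sketch. The class names below are illustrative stand-ins for nova's object hierarchy: the base class recurses into child objects, so every override must accept the same recursive keyword.

```python
# Illustrative sketch of the signature mismatch. Only the method name
# obj_reset_changes comes from nova; the classes here are hypothetical.

class VersionedObject:
    """Base class: resets its own change list, optionally recursing."""

    def __init__(self, **children):
        self._changed_fields = set(children)
        self._children = children

    def obj_reset_changes(self, fields=None, recursive=False):
        if recursive:
            for child in self._children.values():
                # Raises TypeError if a child's override dropped the
                # `recursive` parameter from its signature.
                child.obj_reset_changes(recursive=True)
        self._changed_fields = set()


class BrokenFlavor(VersionedObject):
    # Missing `recursive`, like the Flavor object in the report.
    def obj_reset_changes(self, fields=None):
        self._changed_fields = set()


class FixedFlavor(VersionedObject):
    # Matching signature: the recursive call now succeeds.
    def obj_reset_changes(self, fields=None, recursive=False):
        super().obj_reset_changes(fields=fields, recursive=recursive)


spec = VersionedObject(flavor=BrokenFlavor())
# spec.obj_reset_changes(recursive=True) raises TypeError here.
```

The fix is simply to keep every override's signature identical to the base class, forwarding recursive via super().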
[Yahoo-eng-team] [Bug 1487582] [NEW] Moving translation to HTML for launch-instance flavor step
Public bug reported:

We should clean out old gettext and move them into HTML files. This bug addresses the move to the launch-instance flavor step.

** Affects: horizon Importance: Undecided Assignee: Cindy Lu (clu-m) Status: New
** Changed in: horizon Assignee: (unassigned) => Cindy Lu (clu-m)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487582

Title: Moving translation to HTML for launch-instance flavor step
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487582/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1487599] [NEW] fwaas - ip_version and IP address conflicts are not raised
Public bug reported:

The FwaaS API currently allows the creation of firewall rules where the IP version is 4 but the source and destination IPs are IPv6 addresses. http://paste.openstack.org/show/412434/

This causes failures when a firewall is created, because iptables is being invoked with IPv6 addresses, which causes an exception in the iptables driver.

** Affects: neutron Importance: Undecided Assignee: Sean M. Collins (scollins) Status: In Progress

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487599

Title: fwaas - ip_version and IP address conflicts are not raised
Status in neutron: In Progress

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487599/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
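A sketch of the kind of check the API layer could apply before the rule ever reaches the iptables driver. The function and field names are illustrative (they mirror FwaaS rule attributes but are not neutron's actual validation code); the stdlib ipaddress module does the version detection.

```python
# Sketch: reject firewall rules whose addresses do not match the
# declared ip_version, instead of failing later in the iptables driver.
import ipaddress


def validate_rule_ip_version(rule):
    """Raise ValueError on an ip_version / address family mismatch."""
    declared = rule.get("ip_version", 4)
    for field in ("source_ip_address", "destination_ip_address"):
        value = rule.get(field)
        if value is None:
            continue
        # ip_network handles both bare addresses and CIDR prefixes.
        actual = ipaddress.ip_network(value, strict=False).version
        if actual != declared:
            raise ValueError(
                "%s is IPv%d but the rule declares ip_version=%d"
                % (field, actual, declared))


validate_rule_ip_version(
    {"ip_version": 4, "source_ip_address": "10.0.0.0/24"})  # accepted
# A rule like {"ip_version": 4, "source_ip_address": "2001:db8::/32"}
# would raise ValueError here instead of breaking iptables later.
```

Failing fast at the API with a 400 gives the user a clear error, rather than a driver exception after the firewall has already been accepted.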
[Yahoo-eng-team] [Bug 1487598] [NEW] Remove vendor AGENT_TYPE_* constants
Public bug reported:

Neutron defines vendor AGENT_TYPE_* constants in neutron.common.constants, but they are only used by current or future out-of-tree code; such constants should be moved to the out-of-tree repos.

** Affects: neutron Importance: Undecided Assignee: Cedric Brandily (cbrandily) Status: New
** Changed in: neutron Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487598

Title: Remove vendor AGENT_TYPE_* constants
Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487598/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1487590] [NEW] IMAGE_CUSTOM_PROPERTY_TITLES is not available for Angular JS
Public bug reported:

The current translations for the image properties come from deep inside Python in local_settings.py. It's also part of the documentation. There's currently no Angular way of fetching those and displaying them. The particular setting name is IMAGE_CUSTOM_PROPERTY_TITLES and it is used in openstack_dashboard/dashboards/project/images/images/tabs.py:29

** Affects: horizon Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487590

Title: IMAGE_CUSTOM_PROPERTY_TITLES is not available for Angular JS
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487590/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1484237] Re: token revocations not always respected when using fernet tokens
I agree this seems like a very impractical/unlikely vulnerability in any real-world deployment, so class C1 in our report taxonomy: https://security.openstack.org/vmt-process.html#incident-report-taxonomy

** Information type changed from Private Security to Public
** Tags added: security
** Changed in: ossa Status: Incomplete => Won't Fix

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1484237

Title: token revocations not always respected when using fernet tokens
Status in Keystone: Confirmed
Status in OpenStack Security Advisory: Won't Fix

Bug description: A simple test that shows that fernet tokens are not always being invalidated. Simple test steps:
1) get a token
2) delete the token
3) try to validate the deleted token

When I run this in production on 10 tokens, I get about a 20% success rate on the token being detected as invalid; 80% of the time, keystone tells me the token is valid. I have validated that the token is showing in the revocation event table. I've tried a 5 second delay between the calls, which did not change the behavior. My current script (below) will look for 204 and 404 to show failure and will wait forever. I've let it wait over 5 minutes; it seems to me that either keystone knows immediately that the token is invalid or not at all. I do not have memcache enabled on these nodes. The same test has a 100% pass rate with UUID tokens.

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1484237/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1212358] Re: django openstack auth is granting permissions for services outside of current region
** Changed in: horizon Status: Confirmed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1212358

Title: django openstack auth is granting permissions for services outside of current region
Status in django-openstack-auth: Fix Released
Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: The roles/permissions for openstack.services.%s type permissions are granted for every service available to the user. When a user is logged in and selects a certain region, not all services might be present in that region. This leads to problems when accessing the various panels like compute/object store and those services not being in the user's current selected region. Those panels look for endpoints that must match the same region as the user's current selection.

To manage notifications about this bug go to: https://bugs.launchpad.net/django-openstack-auth/+bug/1212358/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1487566] [NEW] Moving translation to HTML for launch-instance key pair step
Public bug reported:

We should clean out the old gettext from launch-instance. This bug covers the key pair step.

** Affects: horizon Importance: Undecided Assignee: Paulo Ewerton (pauloewerton) Status: In Progress
** Changed in: horizon Assignee: (unassigned) => Paulo Ewerton (pauloewerton)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487566

Title: Moving translation to HTML for launch-instance key pair step
Status in OpenStack Dashboard (Horizon): In Progress

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487566/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1487570] [NEW] test_list_servers_by_admin_with_all_tenants fails with InstanceNotFound trying to lazy-load flavor
Public bug reported: http://logs.openstack.org/70/215170/1/check/gate-tempest-dsvm-nova-v21-full/3fdc0d6/console.html#_2015-08-21_16_04_53_513

2015-08-21 16:04:53.514 | Captured traceback:
2015-08-21 16:04:53.514 | ~~~
2015-08-21 16:04:53.514 | Traceback (most recent call last):
2015-08-21 16:04:53.514 |   File "tempest/api/compute/admin/test_servers.py", line 81, in test_list_servers_by_admin_with_all_tenants
2015-08-21 16:04:53.514 |     body = self.client.list_servers(detail=True, **params)
2015-08-21 16:04:53.514 |   File "tempest/services/compute/json/servers_client.py", line 159, in list_servers
2015-08-21 16:04:53.514 |     resp, body = self.get(url)
2015-08-21 16:04:53.514 |   File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 271, in get
2015-08-21 16:04:53.514 |     return self.request('GET', url, extra_headers, headers)
2015-08-21 16:04:53.514 |   File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 643, in request
2015-08-21 16:04:53.515 |     resp, resp_body)
2015-08-21 16:04:53.515 |   File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 754, in _error_checker
2015-08-21 16:04:53.515 |     raise exceptions.ServerFault(resp_body, message=message)
2015-08-21 16:04:53.515 | tempest_lib.exceptions.ServerFault: Got server fault
2015-08-21 16:04:53.515 | Details: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
2015-08-21 16:04:53.515 | <class 'nova.exception.InstanceNotFound'>

There is a trace in the n-api logs when trying to lazy-load a flavor on an instance: http://logs.openstack.org/70/215170/1/check/gate-tempest-dsvm-nova-v21-full/3fdc0d6/logs/screen-n-api.txt.gz?level=TRACE#_2015-08-21_15_39_06_148

2015-08-21 15:39:06.148 ERROR nova.api.openstack.extensions [req-5eca1fa9-7948-4a3d-bc80-7e84441bb74e tempest-ServersAdminTestJSON-973647583 tempest-ServersAdminTestJSON-692147874] Unexpected exception in API method
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 264, in detail
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     servers = self._get_servers(req, is_detail=True)
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 389, in _get_servers
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     response = self._view_builder.detail(req, instance_list)
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 126, in detail
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     return self._list_view(self.show, request, instances, coll_name)
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 138, in _list_view
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     server_list = [func(request, server)["server"] for server in servers]
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 266, in show
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     "flavor": self._get_flavor(request, instance),
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 198, in _get_flavor
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     instance_type = instance.get_flavor()
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/objects/instance.py", line 890, in get_flavor
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     return getattr(self, attr)
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 65, in getter
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     self.obj_load_attr(name)
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/objects/instance.py", line 880, in obj_load_attr
2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions     self._load_flavor()
[Yahoo-eng-team] [Bug 1456321] Re: nova-network DHCP server not correct when creating multiple networks
*** This bug is a duplicate of bug 1443970 *** https://bugs.launchpad.net/bugs/1443970 ** This bug has been marked a duplicate of bug 1443970: nova-manage create networks with wrong dhcp_server in DB (nova) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1456321 Title: nova-network DHCP server not correct when creating multiple networks Status in OpenStack Compute (nova): In Progress Bug description: This bug pertains to stable Kilo, package versions below:

ii nova-api                 1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - API frontend
ii nova-cert                1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - certificate management
ii nova-common              1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - common files
ii nova-compute             1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - compute node base
ii nova-compute-kvm         1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt     1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - compute node libvirt support
ii nova-conductor           1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - conductor service
ii nova-consoleauth         1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - Console Authenticator
ii nova-network             1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - Network manager
ii nova-novncproxy          1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - NoVNC proxy
ii nova-scheduler           1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute - virtual machine scheduler
ii python-nova              1:2015.1.0-0ubuntu1~cloud0  all    OpenStack Compute Python libraries
ii python-nova-adminclient  0.1.8-0ubuntu2              amd64  client for administering Openstack Nova
ii python-novaclient        1:2.22.0-0ubuntu1~cloud0    all    client library for OpenStack Compute API

This bug was originally reported at https://github.com/bloomberg/chef-bcpc/issues/573. I triaged it there and am distilling it down here.
In essence, we have a command in a Chef recipe that builds out a number of fixed IP networks using nova-manage during initial setup of an OpenStack cluster. The command looks like this: nova-manage network create --label fixed --fixed_range_v4=1.104.0.0/16 --num_networks=16 --multi_host=T --network_size=128 --vlan_start=1000 --bridge_interface=p2p2 As per the original bug report on GitHub, all subnets after the first one were being created with a DHCP server address of the gateway of the first subnet to be created. I dug in and found the problem code in nova/network/manager.py; if dhcp_server is not provided, the first iteration through the subnets enumeration will set it to the gateway IP. Since dhcp_server is scoped at method level, it sticks around for the entire loop and so every created subnet gets the same DHCP server IP. A little println debugging indicated that this was indeed the case. This causes things to break for us pretty badly. When launching instances in any network other than the first one, launch will fail because Nova tries to launch dnsmasq and bind it to an IP that's already bound by another instance of dnsmasq, which fails. I patched manager.py in the following way, which writes the DHCP server IP to a local variable in the loop. Specifying a DHCP server manually will still override inferring the DHCP server from the gateway address. 
Tests pass after this change (sorry that it is not in Gerrit already, but I haven't been able to sit down and get git review working yet):

diff --git a/nova/network/manager.py b/nova/network/manager.py
index 3e8e8b1..832fd1b 100644
--- a/nova/network/manager.py
+++ b/nova/network/manager.py
@@ -1351,13 +1351,15 @@ class NetworkManager(manager.Manager):
             else:
                 net.gateway = current
                 current += 1
-            if not dhcp_server:
-                dhcp_server = net.gateway
+            if dhcp_server:
+                subnet_dhcp_server = dhcp_server
+            else:
+                subnet_dhcp_server = net.gateway
             net.dhcp_start = current
             current += 1
-            if str(net.dhcp_start) == dhcp_server:
+            if str(net.dhcp_start) == subnet_dhcp_server:
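The leaking-variable bug described above can be sketched in isolation. This is a hedged, self-contained illustration (the `assign_dhcp_servers_*` names are made up for this example, not nova code) of why a method-scoped `dhcp_server` sticks across loop iterations, and how keeping the per-subnet value in a loop-local name fixes it while still letting an explicit `dhcp_server` override every subnet:

```python
def assign_dhcp_servers_buggy(gateways, dhcp_server=None):
    """Buggy shape: dhcp_server is set on the first iteration and then
    sticks, so every subnet inherits the first subnet's gateway."""
    servers = []
    for gw in gateways:
        if not dhcp_server:  # only falsy on the first pass
            dhcp_server = gw
        servers.append(dhcp_server)
    return servers


def assign_dhcp_servers_fixed(gateways, dhcp_server=None):
    """Fixed shape: the per-subnet value lives in a loop-local name, so
    each subnet falls back to its own gateway; an explicitly supplied
    dhcp_server still overrides for all subnets."""
    servers = []
    for gw in gateways:
        subnet_dhcp_server = dhcp_server if dhcp_server else gw
        servers.append(subnet_dhcp_server)
    return servers
```

With two gateways, the buggy version hands both subnets the first gateway, while the fixed version gives each subnet its own; this mirrors why dnsmasq ended up bound to an already-used IP for every network after the first.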
[Yahoo-eng-team] [Bug 1487435] [NEW] don't hardcode tunnel bridge name in setup_tunnel_br
Public bug reported: Since https://review.openstack.org/#/c/182920/ merged, ovs agent functional tests were failing on my machine; the reason is that the name of the tunnel bridge is hard-coded. ** Affects: neutron Importance: Undecided Assignee: Rossella Sblendido (rossella-o) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487435 Title: don't hardcode tunnel bridge name in setup_tunnel_br Status in neutron: In Progress To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487435/+subscriptions
[Yahoo-eng-team] [Bug 1454074] Re: denial of service via large number of logout page requests
** Information type changed from Private Security to Public ** Changed in: ossa Status: Incomplete => Won't Fix ** Tags added: security -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1454074 Title: denial of service via large number of logout page requests Status in OpenStack Dashboard (Horizon): Triaged Status in OpenStack Security Advisory: Won't Fix Bug description: This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments. While investigating CVE-2014-8124 (https://bugs.launchpad.net/horizon/+bug/1394370) I think I found another instance of the underlying issue, but with the logout form. I'm on Ubuntu 14.04 LTS, with distro-packaged openstack-dashboard 1:2014.1.4-0ubuntu2. I verified the patch from https://review.openstack.org/140356 is applied to the installed files. I configured horizon to use mysql datastore, and ran the following command: while true ; do wget http://localhost/horizon/auth/logout/ ; done While this command was running I checked the mysql dash database table django_sessions and found it growing without apparent bound: select * from django_session; ... 231 rows in set (0.00 sec) Is this an issue?
Thanks To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1454074/+subscriptions
[Yahoo-eng-team] [Bug 1463698] Re: XSS
Sounds like escaping of content needs to happen where the content is embedded. Swift just gives you back what you gave it. Adding Horizon, marking invalid for Swift. ** Also affects: horizon Importance: Undecided Status: New ** Changed in: swift Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1463698 Title: XSS Status in OpenStack Dashboard (Horizon): New Status in OpenStack Security Advisory: Won't Fix Status in OpenStack Object Storage (swift): Invalid Bug description: 2.14.2 To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1463698/+subscriptions
[Yahoo-eng-team] [Bug 1487658] [NEW] Modification on name repeated error message of admin create volume type
Public bug reported: When entering a volume type name that already exists, we see the error message in the top right corner of the volume type page. Also, the error message is not specific and clear. Showing a specific, clear error message beside the volume type name field would be better. ** Affects: horizon Importance: Undecided Assignee: qiaomin032 (chen-qiaomin) Status: In Progress ** Attachment added: volume_type_repeat.jpg https://bugs.launchpad.net/bugs/1487658/+attachment/4450655/+files/volume_type_repeat.jpg -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487658 Title: Modification on name repeated error message of admin create volume type Status in OpenStack Dashboard (Horizon): In Progress To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487658/+subscriptions
[Yahoo-eng-team] [Bug 1487335] [NEW] Neutron drivers should throw a specific error instead of a 500 Internal Server Error
Public bug reported: When implementing a new mechanism driver for Neutron, the caller does not get a proper error reply from the Neutron driver code. Error from the implemented mech driver: DBDuplicateEntry: (IntegrityError) MechanismDriverError: create_port_precommit failed. Error thrown: HTTP/1.1 500 Internal Server Error Content-Type: application/json; charset=UTF-8 Content-Length: 108 X-Openstack-Request-Id: req-efd3e486-23e4-4699-ad74-7c28e8b7c6bf Date: Wed, 19 Aug 2015 11:00:30 GMT ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487335 Title: Neutron drivers should throw a specific error instead of a 500 Internal Server Error Status in neutron: New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487335/+subscriptions
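The behaviour the report asks for can be sketched generically: instead of letting every driver failure bubble up as a blanket 500, map well-known root causes to specific HTTP statuses. This is a hedged illustration only; `MechanismDriverError`, `http_status_for`, and the cause-to-status table are hypothetical names for this example, not Neutron's actual API:

```python
class MechanismDriverError(Exception):
    """Wraps a failure raised from a driver callback such as
    create_port_precommit, keeping the root cause attached."""
    def __init__(self, method, cause):
        super().__init__("%s failed" % method)
        self.method = method
        self.cause = cause


# Known root causes and the status they should surface as; anything
# unmapped still falls back to 500 Internal Server Error.
_CAUSE_TO_STATUS = {
    "DBDuplicateEntry": 409,  # the row already exists -> Conflict
}


def http_status_for(exc):
    """Pick a specific HTTP status for a driver error when the root
    cause is recognized, otherwise keep the generic 500."""
    if isinstance(exc, MechanismDriverError):
        return _CAUSE_TO_STATUS.get(type(exc.cause).__name__, 500)
    return 500
```

Under this scheme, the DBDuplicateEntry case from the report would surface as a 409 Conflict rather than the opaque 500 shown above.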
[Yahoo-eng-team] [Bug 1473042] Re: s3 token authentication doesn't support v4 protocol
** Also affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1473042 Title: s3 token authentication doesn't support v4 protocol Status in Keystone: In Progress Status in keystonemiddleware: In Progress Bug description: Amazon has several versions of signature for requests. Currently the s3_token middleware supports only the first s3 signature version. It would be good if the s3_token middleware also supported the v4 version. http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html http://docs.aws.amazon.com/AmazonS3/latest/API/bucket-policy-s3-sigv4-conditions.html The openstack/nova and stackforge/ec2-api projects don't have authentication, so these projects can use keystone middleware if it has v4 auth. Also stackforge/swift3 now uses keystone middleware and has a bug https://bugs.launchpad.net/swift3/+bug/1411078 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1473042/+subscriptions
[Yahoo-eng-team] [Bug 1487338] [NEW] OVS ARP spoofing protection breaks floating IPs without port security extension
Public bug reported: The OVS ARP spoofing protection depends on the port security extension being enabled to disable ARP spoofing protection on router interfaces that have floating IP traffic on them. So if the port security extension is disabled the router interface will get ARP spoofing rules, which don't know about the floating IPs and will drop the ARP requests for them. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487338 Title: OVS ARP spoofing protection breaks floating IPs without port security extension Status in neutron: New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487338/+subscriptions
[Yahoo-eng-team] [Bug 1487391] [NEW] 'CreateVolumeTypeView' class in admin volumes views is redundant and useless
Public bug reported: The 'CreateVolumeTypeView' class in admin volumes views is redundant and useless. Another one also exists in volume_types views, and that one is useful. ** Affects: horizon Importance: Undecided Assignee: qiaomin032 (chen-qiaomin) Status: In Progress ** Description changed: The 'CreateVolumeTypeView' class in admin volumes views is redundant and useless. Another one also exist in volume_types views, but this is - userful. + useful. ** Summary changed: - 'CreateVolumeTypeView' is repetitive in two views + 'CreateVolumeTypeView' class in admin volumes views is redundant and useless -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487391 Title: 'CreateVolumeTypeView' class in admin volumes views is redundant and useless Status in OpenStack Dashboard (Horizon): In Progress To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487391/+subscriptions
[Yahoo-eng-team] [Bug 1487350] [NEW] wrong exception msg of param backlog check
Public bug reported: in file nova/wsgi.py, line 128:

    128        if backlog < 1:
    129            raise exception.InvalidInput(
    130                reason='The backlog must be more than 1')

I think the reason 'The backlog must be more than 1' is wrong, because the condition is [if backlog < 1:]. I think line 130 should change from 'The backlog must be more than 1' to 'The backlog must be more than 0'. ** Affects: nova Importance: Undecided Assignee: liyuanyuan (liyuanyuan-fnst) Status: New ** Changed in: nova Assignee: (unassigned) => liyuanyuan (liyuanyuan-fnst) ** Description changed:

  in file nova/wsgi.py, Line 128
  Line128    if backlog < 1:
- 129    raise exception.InvalidInput(
- 130    reason='The backlog must be more than 1')
+ 129        raise exception.InvalidInput(
+ 130            reason='The backlog must be more than 1')

  I think it wrong for variable reason='The backlog must be more than 1', because the condition is [if backlog < 1:]

  I think Line130 it should change from 'The backlog must be more than 1'
- to or 'The backlog must be more than 0'
+ to 'The backlog must be more than 0'

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1487350 Title: wrong exception msg of param backlog check Status in OpenStack Compute (nova): New To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1487350/+subscriptions
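The arithmetic behind the suggestion is easy to verify in a runnable sketch (a stand-in, not the actual nova/wsgi.py code; `InvalidInput` here is a plain local class, not nova.exception): with the condition `backlog < 1`, every value of 1 or more is accepted, so the smallest valid backlog is 1 and "more than 0" is the accurate message.

```python
class InvalidInput(Exception):
    """Local stand-in for the nova exception."""


def validate_backlog(backlog):
    # Anything below 1 is rejected, so backlog == 1 is valid; the
    # error message should therefore say "more than 0", not "more
    # than 1".
    if backlog < 1:
        raise InvalidInput('The backlog must be more than 0')
    return backlog
```

`validate_backlog(1)` passes and `validate_backlog(0)` raises, which is exactly why the original message claiming a minimum of "more than 1" is misleading.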
[Yahoo-eng-team] [Bug 1487357] [NEW] No PoolInUse Check when creating VIP
Public bug reported: From the lbaasv1 api, it seemed to me that many VIPs could map to the same pool. After reading the code, it turned out not to be the case. Deducing from the code snippet below:

class LoadBalancerPluginDb(loadbalancer.LoadBalancerPluginBase,
                           base_db.CommonDbMixin):
    ...
    def create_vip(self, context, vip):
        ...
        if v['pool_id']:
            # fetching pool again
            pool = self._get_resource(context, Pool, v['pool_id'])
            # (NOTE): we rely on the fact that pool didn't change between
            # above block and here
            vip_db['pool_id'] = v['pool_id']
            pool['vip_id'] = vip_db['id']
            # explicitly flush changes as we're outside any transaction
            context.session.flush()
        ...

(neutron_lbaas/db/loadbalancer/loadbalancer_db.py)

the relationship between vip and pool should be 1:1. If this is the case, create_vip should check whether pool['vip_id'] is null and raise a PoolInUse exception when it is not. Am I missing anything? Thanks, ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1487357 Title: No PoolInUse Check when creating VIP Status in neutron: New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1487357/+subscriptions
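The missing check the report proposes can be sketched in a few lines. This is a simplified, hypothetical illustration (not the actual neutron_lbaas code; `attach_vip` and the dict-based pool are stand-ins for the DB row and session logic) of enforcing the 1:1 VIP-to-pool mapping:

```python
class PoolInUse(Exception):
    """Raised when a pool already has a VIP attached."""


def attach_vip(pool, vip_id):
    # 'pool' is a dict standing in for the DB row; a non-null vip_id
    # means the pool's single VIP slot is already taken, so a second
    # create_vip against the same pool should fail loudly rather than
    # silently overwrite the existing association.
    if pool.get('vip_id') is not None:
        raise PoolInUse("pool %s already has vip %s"
                        % (pool.get('id'), pool['vip_id']))
    pool['vip_id'] = vip_id
    return pool
```

The first attach succeeds; a second attach against the same pool raises `PoolInUse` instead of clobbering the earlier VIP, which is the behaviour the report argues the DB layer should enforce.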
[Yahoo-eng-team] [Bug 1487271] Re: success_url miss 'reverse_lazy' in admin volume type panel
** Changed in: horizon Status: Invalid => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487271 Title: success_url miss 'reverse_lazy' in admin volume type panel Status in OpenStack Dashboard (Horizon): In Progress Bug description: In the admin volume type panel, several success_url values in views miss 'reverse_lazy'; this results in the cancel action redirecting to the wrong URL. For example, right-click the Create Volume Type button, open it in a new page, and click the Cancel button: an error is raised. See the attachment for more detail. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487271/+subscriptions
[Yahoo-eng-team] [Bug 1487372] [NEW] failed to retrieve project list when switching projects in keystone v3
Public bug reported: When switching projects, the dashboard shows an error, but we can still list the projects via the command line. If we log out and log in again, the dashboard works normally. ** Affects: horizon Importance: Undecided Status: New ** Attachment added: log.png https://bugs.launchpad.net/bugs/1487372/+attachment/4450128/+files/log.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1487372 Title: failed to retrieve project list when switching projects in keystone v3 Status in OpenStack Dashboard (Horizon): New To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1487372/+subscriptions
[Yahoo-eng-team] [Bug 1487324] [NEW] failed to list qos rule type due to policy check
Public bug reported:

2015-08-21 13:52:36.212 23375 INFO neutron.wsgi [-] (23375) accepted ('192.168.1.118', 43606)
2015-08-21 13:52:42.711 ERROR neutron.policy [req-ba182095-d12d-4bde-a47e-88507e4c4898 demo demo] Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found
2015-08-21 13:52:42.711 23375 ERROR neutron.policy Traceback (most recent call last):
2015-08-21 13:52:42.711 23375 ERROR neutron.policy   File "/mnt/data3/opt/stack/neutron/neutron/policy.py", line 224, in __call__
2015-08-21 13:52:42.711 23375 ERROR neutron.policy     parent_res, parent_field = do_split(separator)
2015-08-21 13:52:42.711 23375 ERROR neutron.policy   File "/mnt/data3/opt/stack/neutron/neutron/policy.py", line 219, in do_split
2015-08-21 13:52:42.711 23375 ERROR neutron.policy     separator, 1)
2015-08-21 13:52:42.711 23375 ERROR neutron.policy ValueError: need more than 1 value to unpack
2015-08-21 13:52:42.711 23375 ERROR neutron.policy
2015-08-21 13:52:42.714 ERROR neutron.api.v2.resource [req-ba182095-d12d-4bde-a47e-88507e4c4898 demo demo] index failed
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource Traceback (most recent call last):
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 339, in index
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     return self._items(request, True, parent_id)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 279, in _items
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     pluralized=self._collection)]
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/policy.py", line 354, in check
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     pluralized=pluralized)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_policy/policy.py", line 487, in enforce
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     result = rule(target, creds, self)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 238, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     return enforcer.rules[self.match](target, creds, enforcer)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 238, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     return enforcer.rules[self.match](target, creds, enforcer)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 191, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     if rule(target, cred, enforcer):
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/policy.py", line 246, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource     reason=err_reason)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource PolicyCheckError: Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource

$ neutron qos-available-rule-types -v
DEBUG: keystoneclient.session REQ: curl -g -i -X GET http://172.17.42.1:5000/v2.0 -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
DEBUG: keystoneclient.session RESP: [200] Content-Length: 337 Vary: X-Auth-Token Connection: keep-alive Date: Fri, 21 Aug 2015 05:52:35 GMT Content-Type: application/json X-Openstack-Request-Id: req-3ff33f59-d69b-412a-8137-0ce5f6deb868
RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://172.17.42.1:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}
DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = clifftablib.formatters:YamlFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('json = clifftablib.formatters:JsonFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('html = clifftablib.formatters:HtmlFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('table = cliff.formatters.table:TableFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('csv = cliff.formatters.commaseparated:CSVLister')
DEBUG: stevedore.extension found