[Yahoo-eng-team] [Bug 1450370] [NEW] When one image member looks up the details of another image member, 404 is returned instead of 403.

2015-04-30 Thread Deepti Ramakrishna
Public bug reported:

Suppose project1 and project2 are members of a non-public image. When
user1, who belongs to project1, tries to get details of project2, we get
404 Not Found. 403 Forbidden would be more appropriate.

This bug is for the v2 api.

REPRO STEPS:
-
$ export OS_USERNAME=user1
$ export OS_TENANT_NAME=project1
$ openstack token issue // returns 8eb78ce1d12e462fb619b5036dee4086
// project2 id: 6f2aec926def49bebc4c8b71844abc55
// image id: e2846b31-3bb3-4e2f-92da-612804b2ebad
$ curl -g -i -X GET -H 'Content-Type: application/octet-stream' -H 
'Accept-Encoding: gzip, deflate, compress' -H 'Accept: */*' -H 'X-Auth-Token: 
8eb78ce1d12e462fb619b5036dee4086' 
http://localhost:9292/v2/images/e2846b31-3bb3-4e2f-92da-612804b2ebad/members/6f2aec926def49bebc4c8b71844abc55

EXPECTED HTTP RESPONSE CODE: 403 Forbidden

ACTUAL HTTP RESPONSE CODE: 404 Not Found
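A hedged sketch of where such a change could live (hypothetical repository helpers, not the actual Glance v2 controller): when the requested member record is missing but the caller is itself a member of the image, return 403 rather than the generic 404.

    # Hypothetical sketch only; `member_repo` and its helpers are assumptions.
    import webob.exc

    def show_member(context, member_repo, image_id, member_id):
        try:
            return member_repo.get(image_id, member_id)
        except KeyError:  # stand-in for the repository's NotFound
            if member_repo.is_member(image_id, context.tenant):
                # The caller shares the image, so the record's existence is
                # not a secret: refuse access instead of hiding it.
                raise webob.exc.HTTPForbidden(
                    explanation="Not allowed to view other members of this image")
            raise webob.exc.HTTPNotFound()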

** Affects: glance
 Importance: Undecided
 Assignee: Deepti Ramakrishna (dramakri)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Deepti Ramakrishna (dramakri)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1450370

Title:
  When one image member looks up the details of another image member,
  404 is returned instead of 403.

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Suppose project1 and project2 are members of a non-public image. When
  user1, who belongs to project1, tries to get details of project2, we
  get 404 Not Found. 403 Forbidden would be more appropriate.

  This bug is for the v2 api.

  REPRO STEPS:
  -
  $ export OS_USERNAME=user1
  $ export OS_TENANT_NAME=project1
  $ openstack token issue // returns 8eb78ce1d12e462fb619b5036dee4086
  // project2 id: 6f2aec926def49bebc4c8b71844abc55
  // image id: e2846b31-3bb3-4e2f-92da-612804b2ebad
  $ curl -g -i -X GET -H 'Content-Type: application/octet-stream' -H 
'Accept-Encoding: gzip, deflate, compress' -H 'Accept: */*' -H 'X-Auth-Token: 
8eb78ce1d12e462fb619b5036dee4086' 
http://localhost:9292/v2/images/e2846b31-3bb3-4e2f-92da-612804b2ebad/members/6f2aec926def49bebc4c8b71844abc55

  EXPECTED HTTP RESPONSE CODE: 403 Forbidden

  ACTUAL HTTP RESPONSE CODE: 404 Not Found

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1450370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450344] [NEW] Invalid SQL Identity Assertion - Load Config from Database

2015-04-30 Thread Priti Desai
Public bug reported:

I have the default domain pointing to LDAP and a ServiceDomain pointing to
a SQL identity backend. This kind of configuration is supported by enabling
domain-specific drivers. When upgrading to Kilo to leverage the
domain-config-from-database capability, the same configuration is no longer
supported.
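For reference, the kind of setup described above is typically enabled in
keystone.conf roughly as follows (option names as documented around Kilo;
treat them as an assumption and verify against your release):

    [identity]
    # Allow per-domain identity driver configuration.
    domain_specific_drivers_enabled = True
    # Kilo: read per-domain configuration from the database
    # (managed via the /v3/domains/{domain_id}/config API) instead of files.
    domain_configurations_from_database = True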

$ openstack user list --domain 5681226a68de4f7ea8a2bd247d0fc54e
ERROR: openstack Invalid domain specific configuration: Domain specific sql 
drivers are not supported via the Identity API. One is specified in 
/domains/5681226a68de4f7ea8a2bd247d0fc54e/config (Disable debug mode to 
suppress these details.) (HTTP 403) (Request-ID: 
req-7bc89750-60de-45f2-8c80-05777a8469da)

Notice the same domain ID in request and error message.

In identity/core.py:

def _load_config_from_database(self, domain_id, specific_config):

    def _assert_not_sql_driver(domain_id, new_config):
        """Ensure this is not an sql driver.

        Due to multi-threading safety concerns, we do not currently
        support the setting of a specific identity driver to sql via
        the Identity API.

        """
        if new_config['driver'].is_sql:
            reason = _('Domain specific sql drivers are not supported via '
                       'the Identity API. One is specified in '
                       '/domains/%s/config') % domain_id
            raise exception.InvalidDomainConfig(reason=reason)

_assert_not_sql_driver enforces this restriction: every domain with a SQL
identity backend is prohibited, whereas arguably at least one SQL-backed
domain should be allowed.

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  I have a default domain pointing to LDAP and ServiceDomain pointing to
  SQL identity backend. This kind of configuration is supported with
  enabling domain specific drivers. While upgrading to Kilo to leverage
  domain config from database capability, the same configuration is not
  supported.

  $ openstack user list --domain 5681226a68de4f7ea8a2bd247d0fc54e
  ERROR: openstack Invalid domain specific configuration: Domain specific sql
  drivers are not supported via the Identity API. One is specified in
  /domains/5681226a68de4f7ea8a2bd247d0fc54e/config (Disable debug mode to
  suppress these details.) (HTTP 403) (Request-ID:
  req-7bc89750-60de-45f2-8c80-05777a8469da)

  Notice the same domain ID in request and error message.

  In identity/core.py:

  def _load_config_from_database(self, domain_id, specific_config):

- def _assert_not_sql_driver(domain_id, new_config):
- Ensure this is not an sql driver.
+ def _assert_not_sql_driver(domain_id, new_config):
+ Ensure this is not an sql driver.

- Due to multi-threading safety concerns, we do not currently support
- the setting of a specific identity driver to sql via the Identity
- API.
+ Due to multi-threading safety concerns, we do not currently support
+ the setting of a specific identity driver to sql via the Identity
+ API.

-
- if new_config['driver'].is_sql:
- reason = _('Domain specific sql drivers are not supported via '
- 'the Identity API. One is specified in '
- '/domains/%s/config') % domain_id
- raise exception.InvalidDomainConfig(reason=reason)
+
+ if new_config['driver'].is_sql:
+ reason = _('Domain specific sql drivers are not supported via '
+    'the Identity API. One is specified in '
+    '/domains/%s/config') % domain_id
+ raise exception.InvalidDomainConfig(reason=reason)

  _assert_not_sql_driver causes such restriction, any domain with sql
- identity backend is prohibited which restricted to at least one.
+ identity backend is prohibited which should be restricted to at least
+ one.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1450344

Title:
  Invalid SQL Identity Assertion - Load Config from Database

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I have a default domain pointing to LDAP and ServiceDomain pointing to
  SQL identity backend. This kind of configuration is supported with
  enabling domain specific drivers. While upgrading to Kilo to leverage
  domain config from database capability, the same configuration is not
  supported.

  $ openstack user list --domain 5681226a68de4f7ea8a2bd247d0fc54e
  ERROR: openstack Invalid domain specific configuration: Domain specific sql 
drivers are not supported via the Identity API. One is specified in 
/domains/5681226a68de4f7ea8a2bd247d0fc54e/config (Disable debug mode to 
suppress these details.) (HTTP 403) (Request-ID: 

[Yahoo-eng-team] [Bug 1031139] Re: quota-show should return error for invalid tenant id

2015-04-30 Thread Mh Raies
It is a problem/limitation of nova, not of python-novaclient.

What nova does for this API:
nova takes the API request and, if no project id is given, the current
tenant's quotas are displayed.
If we pass the --tenant <tenant id> option, it is ignored as per the
current implementation.

As per the current implementation, nova filters on the basis of the
current tenant and the different users within that tenant.

So, changing the affected project to nova.

This is also a duplicate of
https://bugs.launchpad.net/nova/+bug/1118066
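A hedged sketch of the behaviour the report asks for, with hypothetical
helper names (this is not the actual nova or novaclient code): validate the
project against the identity service before returning quotas, instead of
silently falling back to defaults.

    # Hypothetical sketch; `identity_api` and `quota_api` are assumed interfaces.
    def show_quotas(context, identity_api, quota_api, project_id):
        if not identity_api.project_exists(context, project_id):
            # Surface an error instead of returning the default quota set.
            raise ValueError('Project %s does not exist' % project_id)
        return quota_api.get_project_quotas(context, project_id)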

** Project changed: python-novaclient => nova

** Changed in: nova
 Assignee: Amandeep (rattenpal-amandeep) => Mh Raies (raiesmh08)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1031139

Title:
  quota-show should return error for invalid tenant id

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  quota-show does not handle alternatives for tenant_id as expected

  ENV: Devstack trunk (Folsom) / nova
  d56b5fc3ad6dbfc56e0729174925fb146cef87fa ,  Mon Jul 30 21:59:56 2012
  +

  I'd expect the following command to work as $ env | grep TENANT -
  OS_TENANT_NAME=demo

  $ nova --debug --os_username=admin --os_password=password quota-show 
  usage: nova quota-show tenant_id
  error: too few arguments

  
  I'd also expect the following to work:
  $ nova --debug --os_username=admin --os_password=password quota-show 
--os_tenant_name=demo
  usage: nova quota-show tenant_id
  error: too few arguments

  
  What is more awesome, if in the event that I do provide the wrong tenant_id, 
it proceeds to use OS_TENANT_NAME returning those results:

  $nova --debug --os_username=admin --os_password=password quota-show
  gg

  REQ: curl -i
  http://10.1.11.219:8774/v2/04adebe40d214581b84118bcce264f0e/os-quota-sets/ggg
  -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient"
  -H "Accept: application/json" -H "X-Auth-Token: 10bd3f948df24039b2b88b98771b2b99"

  +-----------------------------+-------+
  | Property                    | Value |
  +-----------------------------+-------+
  | cores                       | 20    |
  | floating_ips                | 10    |
  | gigabytes                   | 1000  |
  | injected_file_content_bytes | 10240 |
  | injected_files              | 5     |
  | instances                   | 10    |
  | metadata_items              | 128   |
  | ram                         | 51200 |
  | volumes                     | 10    |
  +-----------------------------+-------+

  
  I also couldn't figure out how to get the quota-show to work as a member 
(non-admin) of a project.

  Let me know if you want any of these issues broken out in to
  additional bugs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1031139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1031139] [NEW] quota-show should return error for invalid tenant id

2015-04-30 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

quota-show does not handle alternatives for tenant_id as expected

ENV: Devstack trunk (Folsom) / nova
d56b5fc3ad6dbfc56e0729174925fb146cef87fa ,  Mon Jul 30 21:59:56 2012
+

I'd expect the following command to work as $ env | grep TENANT -
OS_TENANT_NAME=demo

$ nova --debug --os_username=admin --os_password=password quota-show 
usage: nova quota-show tenant_id
error: too few arguments


I'd also expect the following to work:
$ nova --debug --os_username=admin --os_password=password quota-show 
--os_tenant_name=demo
usage: nova quota-show tenant_id
error: too few arguments


What is more awesome, if in the event that I do provide the wrong tenant_id, it 
proceeds to use OS_TENANT_NAME returning those results:

$nova --debug --os_username=admin --os_password=password quota-show
gg

REQ: curl -i http://10.1.11.219:8774/v2/04adebe40d214581b84118bcce264f0e/os-quota-sets/ggg
-X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient"
-H "Accept: application/json" -H "X-Auth-Token: 10bd3f948df24039b2b88b98771b2b99"

+-----------------------------+-------+
| Property                    | Value |
+-----------------------------+-------+
| cores                       | 20    |
| floating_ips                | 10    |
| gigabytes                   | 1000  |
| injected_file_content_bytes | 10240 |
| injected_files              | 5     |
| instances                   | 10    |
| metadata_items              | 128   |
| ram                         | 51200 |
| volumes                     | 10    |
+-----------------------------+-------+


I also couldn't figure out how to get the quota-show to work as a member 
(non-admin) of a project.

Let me know if you want any of these issues broken out into additional
bugs.

** Affects: nova
 Importance: Medium
 Assignee: Amandeep (rattenpal-amandeep)
 Status: Confirmed


** Tags: security ux
-- 
quota-show should return error for invalid tenant id
https://bugs.launchpad.net/bugs/1031139
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432995] Re: Termination signals not handled correctly in case of several ProcessLauncher instances in one process

2015-04-30 Thread Thierry Carrez
** Changed in: oslo-incubator
   Status: Fix Committed => Fix Released

** Changed in: oslo-incubator
 Milestone: None => 2015.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432995

Title:
  Termination signals not handled correctly in case of several
  ProcessLauncher instances in one process

Status in OpenStack Neutron (virtual network service):
  Opinion
Status in The Oslo library incubator:
  Fix Released

Bug description:
  Neutron server has api and rpc workers, and when their number is
  configured to be non-zero each worker is launched using
  ProcessLauncher from oslo-incubator's service.py. It is important to
  note that different instances of ProcessLauncher are used for
  launching the api and rpc workers. [1], [2]

  When ProcessLauncher is initialized, it sets up, among other things,
  handlers for termination signals (SIGHUP, SIGTERM and SIGINT) [3]. Only
  one signal handler can be installed per signal, and only the latest
  installed handler will be active. So, if several ProcessLauncher
  instances are initialized in the same process, only the handlers of the
  last instance will be triggered on receiving a signal.

  The consequence is that when neutron-server is running with a non-zero
  number of api and rpc workers, sending the parent process SIGHUP
  results in the reset method being called only for the rpc workers.

  The possible solution is to store all handlers in a class attribute and 
redefine handle_signal so that it calls all handlers one by one.
  An alternative is to inherit from ProcessLauncher in neutron and redefine 
signal handling there.
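  A minimal sketch of the first idea, using only the standard signal module
  (hypothetical class, not the oslo code): keep the registered callbacks in a
  class attribute and install a single dispatching handler per signal.

    import signal

    class SignalDispatcher(object):
        # signum -> list of callbacks; shared by every launcher in the process
        _callbacks = {}

        @classmethod
        def register(cls, signum, callback):
            if signum not in cls._callbacks:
                cls._callbacks[signum] = []
                signal.signal(signum, cls._dispatch)
            cls._callbacks[signum].append(callback)

        @classmethod
        def _dispatch(cls, signum, frame):
            # Fan the signal out to every registered handler, one by one.
            for callback in cls._callbacks.get(signum, []):
                callback(signum, frame)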

  [1] 
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/service.py#L159
  [2] 
https://github.com/openstack/neutron/blob/e933891462408435c580ad42ff737f8bff428fbc/neutron/wsgi.py#L237
  [3] 
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L210

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432995/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-04-30 Thread Thierry Carrez
** Changed in: oslo-incubator
   Status: Fix Committed => Fix Released

** Changed in: oslo-incubator
 Milestone: liberty-1 => 2015.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Committed
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in The Oslo library incubator:
  Fix Released

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is that pkill -g is no longer
  reliably killing off the services:
  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo; it's unclear at this point which one is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255441] Re: annoying Arguments dropped when creating context

2015-04-30 Thread Thierry Carrez
** Changed in: neutron
 Milestone: kilo-rc1 => 2015.1.0

** No longer affects: neutron/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255441

Title:
  annoying Arguments dropped when creating context

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  2013-11-27 16:45:11.379 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:11.593 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:12.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:12.909 5568 WARNING neutron.context 
[req-2d1bd4b5-dcc1-4cfe-b29c-711a938ab74e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:14.719 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:15.383 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:15.594 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:16.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:16.912 5568 WARNING neutron.context 
[req-21f20201-2691-4c13-a77d-3e05c0ad1777 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:18.722 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:19.387 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:19.596 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:20.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:20.914 5568 WARNING neutron.context 
[req-f89f6bde-b6b8-4f32-a897-30164c826bc0 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:22.719 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:23.390 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:23.595 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:24.194 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:24.916 5568 WARNING neutron.context 
[req-2da8f49d-77d1-49bf-9b3d-081075b1c1de None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:26.723 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:27.394 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:27.598 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:28.195 5568 WARNING neutron.context 
[req-9703f175-c339-4a18-88a9-3d3523167e2c None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:28.915 5568 WARNING neutron.context 
[req-9846d3db-f327-4df5-a4b9-ea83e1da0a4b None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:30.725 5568 WARNING neutron.context 
[req-edcf1bf6-39f1-493a-b72f-f7a484a1ba02 None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:31.395 5568 WARNING neutron.context 
[req-fa5e9429-2fc4-4b3f-ab56-cf00e517474f None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:31.599 5568 WARNING neutron.context 
[req-115c640c-5e42-495c-a17f-f771b49b951e None None] Arguments dropped when 
creating context: {'tenant': None}
  2013-11-27 16:45:32.196 5568 WARNING neutron.context 

[Yahoo-eng-team] [Bug 1450414] [NEW] can't get authentication with os-token and os-url

2015-04-30 Thread BaoLiang Cui
Public bug reported:

Hi, I can't get authentication with os-token and os-url on the Juno
python-neutronclient.

On Icehouse, with os-token and os-url, we can get authentication.
[root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 
net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 06c5d426-ec2c-4a19-a5c9-cfd21cfb5a0c | ext-net | 38d87619-9c76-481f-bfe8-b301e05693d9 193.160.15.0/24  |
+--------------------------------------+---------+-------------------------------------------------------+

But on Juno, it failed. The detail :
[root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 
net-list --debug 
ERROR: neutronclient.shell Unable to determine the Keystone version to 
authenticate with using the given auth_url. Identity service may not support 
API version discovery. Please provide a versioned auth_url instead. 
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 666, in run
    self.initialize_app(remainder)
  File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 808, in initialize_app
    self.authenticate_user()
  File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 761, in authenticate_user
    auth_session = self._get_keystone_session()
  File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 904, in _get_keystone_session
    auth_url=self.options.os_auth_url)
  File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 889, in _discover_auth_versions
    raise exc.CommandError(msg)
CommandError: Unable to determine the Keystone version to authenticate with using the given auth_url. Identity service may not support API version discovery. Please provide a versioned auth_url instead.
Unable to determine the Keystone version to authenticate with using the given 
auth_url. Identity service may not support API version discovery. Please 
provide a versioned auth_url instead. 


My solution is this:
In /usr/lib/python2.6/site-packages/neutronclient/shell.py, modify the
authenticate_user(self) method.

Original:
    auth_session = self._get_keystone_session()

Modified:
    auth_session = None
    auth = None
    if not self.options.os_token:
        auth_session = self._get_keystone_session()
        auth = auth_session.auth

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450414

Title:
  can't get authentication with os-token and os-url

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi, I can't get authentication with os-token and os-url on Juno
  pythone-neutronclient.

  On Icehouse, with os-token and os-url, we can get authentication.
  [root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 
net-list
  
+--+--+--+
  | id   | name | subnets   
   |
  
+--+--+--+
  | 06c5d426-ec2c-4a19-a5c9-cfd21cfb5a0c | ext-net  | 
38d87619-9c76-481f-bfe8-b301e05693d9 193.160.15.0/24 |
  
+--+--+--+

  But on Juno, it failed. The detail :
  [root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 
net-list --debug 
  ERROR: neutronclient.shell Unable to determine the Keystone version to 
authenticate with using the given auth_url. Identity service may not support 
API version discovery. Please provide a versioned auth_url instead. 
  Traceback (most recent call last):
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 666, 
in run 
   self.initialize_app(remainder)
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 808, 
in initialize_app
   self.authenticate_user()
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 761, 
in authenticate_user
   auth_session = self._get_keystone_session()
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 904, 
in _get_keystone_session
   auth_url=self.options.os_auth_url)
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 889, 
in _discover_auth_versions
   raise exc.CommandError(msg)
   CommandError: Unable to determine the Keystone version to 

[Yahoo-eng-team] [Bug 1382440] Re: Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device

2015-04-30 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382440

Title:
  Detaching multipath volume doesn't work properly when using different
  targets with same portal for each multipath device

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in Volume discovery and local storage management lib:
  New

Bug description:
  Overview:
  On Icehouse (2014.1.2) with iscsi_use_multipath=true, detaching an iSCSI
  multipath volume doesn't work properly. When we use different targets (IQNs)
  associated with the same portal for each different multipath device, all of
  the targets will be deleted via disconnect_volume().

  This problem is not yet fixed in upstream. However, the attached patch
  fixes this problem.

  Steps to Reproduce:

  We can easily reproduce this issue without any special storage
  system in the following Steps:

1. configure iscsi_use_multipath=True in nova.conf on compute node.
2. configure volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   in cinder.conf on cinder node.
2. create an instance.
3. create 3 volumes and attach them to the instance.
4. detach one of these volumes.
5. check multipath -ll and iscsiadm --mode session.

  Detail:

  This problem was introduced with the following patch which modified
  attaching and detaching volume operations for different targets
  associated with different portals for the same multipath device.

commit 429ac4dedd617f8c1f7c88dd8ece6b7d2f2accd0
Author: Xing Yang xing.y...@emc.com
Date:   Mon Jan 6 17:27:28 2014 -0500

  Fixed a problem in iSCSI multipath

  We found out that:

   # Do a discovery to find all targets.
   # Targets for multiple paths for the same multipath device
   # may not be the same.
   out = self._run_iscsiadm_bare(['-m',
                                  'discovery',
                                  '-t',
                                  'sendtargets',
                                  '-p',
                                  iscsi_properties['target_portal']],
                                 check_exit_code=[0, 255])[0] \
       or ""

   ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
   ...
   # If no other multipath device attached has the same iqn
   # as the current device
   if not in_use:
       # disconnect if no other multipath devices with same iqn
       self._disconnect_mpath(iscsi_properties, ips_iqns)
       return
   elif multipath_device not in devices:
       # delete the devices associated w/ the unused multipath
       self._delete_mpath(iscsi_properties, multipath_device, ips_iqns)

  When we use different targets (IQNs) associated with the same portal for
  each different multipath device, ips_iqns contains all targets on the
  compute node from the result of "iscsiadm -m discovery -t sendtargets -p
  <the same portal>".
  Then, _delete_mpath() deletes all of the targets in ips_iqns
  via /sys/block/sdX/device/delete.
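  A hedged sketch of the underlying idea of a fix (hypothetical helper, not
  the actual Nova patch): when tearing down one volume, only act on the
  portal/IQN pairs whose IQN matches that volume's target, rather than on
  every target that discovery returned for the portal.

    # Hypothetical helper; ips_iqns is a list of (portal_ip, iqn) tuples.
    def targets_for_volume(ips_iqns, target_iqn):
        return [(ip, iqn) for ip, iqn in ips_iqns if iqn == target_iqn]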

  For example, we create an instance and attach 3 volumes to the
  instance:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
# multipath -ll
330030001 dm-7 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 23:0:0:1 sdd 8:48 active ready running
330010001 dm-5 IET,VIRTUAL-DISK
size=2.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 21:0:0:1 sdb 8:16 active ready running
330020001 dm-6 IET,VIRTUAL-DISK
size=3.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 22:0:0:1 sdc 8:32 active ready running

  Then we detach one of these volumes:

# nova volume-detach 95f959cd-d180-4063-ae03-9d21dbd7cc50 5c526ffa-
  ba88-4fe2-a570-9e35c4880d12

  As a result of detaching the volume, 3 iSCSI sessions remain on the compute
  node and the instance fails to access the attached multipath devices:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19] 

[Yahoo-eng-team] [Bug 1450435] [NEW] resource usage calendar is not translated

2015-04-30 Thread Doug Fish
Public bug reported:

On Admin->System->Resource Usage->Stats, when selecting Period/Other, the
calendars are displayed in English

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450435

Title:
  resource usage calendar is not translated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Admin-System-Resource Usage-Stats when selecting Period/Other
  the calendars are displayed in English

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450471] [NEW] live-migration fails on shared storage with Cannot block migrate

2015-04-30 Thread René Gallati
Public bug reported:

Kilo RC from Ubuntu Cloudarchive for trusty. Patches
https://review.openstack.org/174307 and
https://review.openstack.org/174059 are applied manually without which
live-migration fails due to parameter changes.

KVM, libvirt, 4 compute hosts, a 9-host ceph cluster for shared storage. Newly
created instances work fine on all computes. When initiating live-migrate
either from Horizon or from the CLI, the status switches back from MIGRATING
very fast but the VM stays on the original host. The message
"MigrationError: Migration error: Cannot block migrate instance
53d225ac-1915-4cff-8f15-54b5c66c20a3 with mapped volumes"
is found in the log of the compute host.

The block-migration option is explicitly not set; setting it as a test
causes a different error message earlier.

Pkg-Versions:
ii  nova-common           1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - common files
ii  nova-compute          1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - compute node base
ii  nova-compute-kvm      1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt  1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - compute node libvirt support


Full log:


2015-04-30 13:42:46.985 17619 ERROR nova.compute.manager 
[req-ecbf0446-079a-45db-9000-539f06a9e9e4 382e4cb7197e43bf9b11fc1c6fa9d692 
9984ba4fc07c475e84a109967a897e4e - - -] [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] Pre live migration failed at compute04
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] Traceback (most recent call last):
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 5217, in 
live_migration
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] block_migration, disk, dest, 
migrate_data)
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/nova/compute/rpcapi.py, line 621, in 
pre_live_migration
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] disk=disk, migrate_data=migrate_data)
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py, line 156, in 
call
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] retry=self.retry)
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py, line 90, in 
_send
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] timeout=timeout, retry=retry)
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, line 
350, in send
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] retry=retry)
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, line 
341, in _send
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] raise result
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] MigrationError_Remote: Migration error: 
Cannot block migrate instance 53d225ac-1915-4cff-8f15-54b5c66c20a3 with mapped 
volumes
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] Traceback (most recent call last):
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] 
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] executor_callback))
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] 
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1450432] [NEW] Allows continuous failed login attempts

2015-04-30 Thread Raghavendra Kalimisetty
Public bug reported:

A user can make an unlimited number of login attempts towards the dashboard
with incorrect passwords; there is no throttling or lockout.
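One way to address this, sketched with Django's cache framework (hypothetical
helpers, not an existing Horizon mechanism; deployments often solve this in
front of Horizon instead):

    from django.core.cache import cache

    def allow_login_attempt(username, limit=5):
        # Refuse further attempts once `limit` failures have been recorded.
        return cache.get('login-failures-%s' % username, 0) < limit

    def record_login_failure(username, window=300):
        # Remember the failure for `window` seconds.
        key = 'login-failures-%s' % username
        cache.set(key, cache.get(key, 0) + 1, window)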

** Affects: horizon
 Importance: Undecided
 Assignee: Raghavendra Kalimisetty (raghavendra-kalimisetty)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Raghavendra Kalimisetty (raghavendra-kalimisetty)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450432

Title:
  Allows continuous failed login attempts

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A user can make several login attempts towards the dashboard with
  incorrect passwords

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450438] [NEW] loopingcall: if a time drift to the future occurs, all timers will be blocked

2015-04-30 Thread Nikola Đipanov
Public bug reported:

Because loopingcall.py uses time.time for recording wall-clock time,
which is not guaranteed to be monotonic, if a time drift to the future
occurs and then gets corrected, all the timers will get blocked until
the actual time reaches the moment of the original drift.

This can be pretty bad if the interval is not insignificant - in Nova's
case all services use FixedIntervalLoopingCall for their heartbeat
periodic tasks - so if a drift is on the order of magnitude of several
hours, no heartbeats will happen.

DynamicLoopingCall is affected by this as well, because it relies on
eventlet, which also uses the non-monotonic time.time function for its
internal timers.

Solving this will require looping calls to start using a monotonic timer
(for python 2.7 there is a monotonic package).

Also, all services that want to use timers and avoid this issue should
do something like

  import monotonic

  hub = eventlet.get_hub()
  hub.clock = monotonic.monotonic

immediately after calling eventlet.monkey_patch()
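To make the point concrete, here is a minimal, hedged sketch of a
fixed-interval timer driven by a monotonic clock (illustrative only, not the
oslo FixedIntervalLoopingCall implementation):

    import time

    try:
        from monotonic import monotonic as now   # python 2.7 backport package
    except ImportError:
        now = time.monotonic                      # python 3

    def run_every(interval, func):
        # Intervals are measured with a clock that cannot jump, so a
        # corrected wall-clock drift cannot stall the loop.
        next_run = now()
        while True:
            func()
            next_run += interval
            time.sleep(max(0.0, next_run - now()))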

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo-incubator
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450438

Title:
  loopingcall: if a time drift to the future occurs, all timers will be
  blocked

Status in OpenStack Compute (Nova):
  New
Status in The Oslo library incubator:
  New

Bug description:
  Due to the fact that loopingcall.py uses time.time for recording wall-
  clock time which is not guaranteed to be monotonic, if a time drift to
  the future occurs, and then gets corrected, all the timers will get
  blocked until the actual time reaches the moment of the original
  drift.

  This can be pretty bad if the interval is not insignificant - in
  Nova's case - all services uses FixedIntervalLoopingCall for it's
  heartbeat periodic tasks - if a drift is on the order of magnitude of
  several hours, no heartbeats will happen.

  DynamicLoopingCall is affected by this as well but because it relies
  on eventlet which would also use a non-monotonic time.time function
  for it's internal timers.

  Solving this will require looping calls to start using a monotonic
  timer (for python 2.7 there is a monotonic package).

  Also all services that want to use timers and avoid this issue should
  doe something like

import monotonic

hub = eventlet.get_hub()
hub.clock = monotonic.monotonic

  immediately after calling eventlet.monkey_patch()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450432] Re: Allows continuous failed login attempts

2015-04-30 Thread Matthias Runge
I'm sorry, I fail to see a bug.

If you believe, there is a valid concern here, please elaborate.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450432

Title:
  Allows continuous failed login attempts

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  A user can make several login attempts towards the dashboard with
  incorrect passwords

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390124] Re: No validation between client's IdP and Keystone IdP

2015-04-30 Thread Nathan Kinder
This has been published as OSSN-0047:

  https://wiki.openstack.org/wiki/OSSN/OSSN-0047

** Changed in: ossn
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1390124

Title:
  No validation between client's IdP and Keystone IdP

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  With today's configuration there is no strict link between a federated
  assertion issued by a trusted IdP and an IdP configured inside
  Keystone. Hence, a user has the ability to choose a mapping and
  possibly get unauthorized access.

  Proposed solution: set up an IdP identifier included in an assertion
  issued by an IdP and validate that both values are equal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1390124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366890] Re: Setting admin_state_up=False on a HA port causes split brain in HA routers

2015-04-30 Thread Assaf Muller
** Changed in: neutron
   Status: New => Won't Fix

** Changed in: neutron
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366890

Title:
  Setting admin_state_up=False on a HA port causes split brain in HA
  routers

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  1) Create a HA router on a setup with two L3 agents
  2) Find out who the master is
  3) Find out what is the HA port used by the master instance of the router
  4) Set it to admin state down

  The master instance won't be able to send VRRP messages, but since the
  tap device in the router namespace is still up, keepalived doesn't
  transition to backup or fault. It remains in the master state. The
  backup will stop receiving VRRP messages and will transition to master
  as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369266] Re: HA router priority should be according to configurable priority of L3-agent

2015-04-30 Thread Assaf Muller
** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369266

Title:
  HA router priority should be according to configurable priority of
  L3-agent

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Currently all instances have the same priority (hard-coded 50).
  Admins should be able to assign a priority to L3 agents so that the
  Master will be chosen accordingly (suppose you have an agent with less
  bandwidth than the others; you would like it to have the least possible
  number of active (Master) instances).
  This will require extending the L3-agent API.

  This is blocked by bug:
  https://bugs.launchpad.net/neutron/+bug/1365429

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450548] [NEW] Some VMs get a bad metadata route

2015-04-30 Thread Mark Rawlings
Public bug reported:

In a configuration using the dhcp_agent.ini setting

   enable_isolated_metadata = True

When creating a network configuration that is *not* isolated it has been
observed that the dnsmasq processes are being configured with static
routes for the metadata-service (169.254.169.254) that point to the
local dhcp server.

ci-info: +-------+-----------------+------------+-----------------+-----------+-------+
ci-info: | Route |   Destination   |  Gateway   |     Genmask     | Interface | Flags |
ci-info: +-------+-----------------+------------+-----------------+-----------+-------+
ci-info: |   0   |     0.0.0.0     | 71.0.0.161 |     0.0.0.0     |    eth0   |   UG  |
ci-info: |   1   |    71.0.0.160   |  0.0.0.0   | 255.255.255.240 |    eth0   |   U   |
ci-info: |   2   | 169.254.169.254 | 71.0.0.163 | 255.255.255.255 |    eth0   |  UGH  |


However, in this particular scenario the dnsmasq processes have no 
metadata-proxy processes.

When a VM boots it gets the static route via DHCP and is unable to
access the metadata service.

This issue seems to have appeared due to patch #116832 "Don't spawn
metadata-proxy for non-isolated nets".

Is it possible that the basis for that optimisation is flawed?

The optimisation implements checks of whether a subnet is considered isolated. 
These checks include whether a subnet has a neutron router port available. 
However, it appears that decision can change during network construction or 
manipulation. 
That potential change of decision would appear to defeat the previous 
optimisation.
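For illustration, the isolation decision referred to above amounts to
something like this (a hypothetical helper, not the actual DHCP agent code):

    # A subnet counts as "isolated" when no router interface port exists on it.
    def subnet_is_isolated(ports, subnet_id):
        for port in ports:
            if port['device_owner'] != 'network:router_interface':
                continue
            if any(ip['subnet_id'] == subnet_id for ip in port['fixed_ips']):
                return False
        return True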

Once it has been decided that a network is isolated, the static route for the
metadata service may be passed to VMs, at which point we cannot run without
metadata proxies on the dhcp servers, even if a neutron router becomes
available and the network becomes non-isolated.

A proposal would be to remove the optimisation of not launching
metadata-proxy agents on dhcp servers, which means we would return to
carrying the metadata-proxy agent processes.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450548

Title:
  Some VMs get a bad metadata route

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In a configuration using the dhcp_agent.ini setting

 enable_isolated_metadata = True

  When creating a network configuration that is *not* isolated it has
  been observed that the dnsmasq processes are being configured with
  static routes for the metadata-service (169.254.169.254) that point to
  the local dhcp server.

  ci-info: 
+---+-++-+---+---+
  ci-info: | Route |   Destination   |  Gateway   | Genmask | Interface 
| Flags |
  ci-info: 
+---+-++-+---+---+
  ci-info: |   0   | 0.0.0.0 | 71.0.0.161 | 0.0.0.0 |eth0   
|   UG  |
  ci-info: |   1   |71.0.0.160   |  0.0.0.0   | 255.255.255.240 |eth0   
|   U   |
  ci-info: |   2   | 169.254.169.254 | 71.0.0.163 | 255.255.255.255 |eth0   
|  UGH  |

  
  However, in this particular scenario the dnsmasq processes have no 
metadata-proxy processes.

  When a VM boots it gets the static route via DHCP and is unable to
  access the metadata service.

  This issue seems to have appeared due to patch #116832 Don't spawn
  metadata-proxy for non-isolated nets.

  Is it possible that the basis for that optimisation is flawed?

  The optimisation implements checks of whether a subnet is considered 
isolated. These checks include whether a subnet has a neutron router port 
available. However, it appears that decision can change during network 
construction or manipulation. 
  That potential change of decision would appear to defeat the previous 
optimisation.

  Once it has been decided that a network is isolated the static route for 
metadata-service may be passed to VMs. At which point we cannot run without 
metadata-proxies on the dhcp-servers, even if a neutron router becomes 
available and the network become non-isolated.
   
  A proposal would be to remove the optimisation of not launching 
metadata-proxy-agents on dhcp-servers. Which means we will return to carrying 
the metadata-proxy-agents processes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376169] Re: ODL MD can't reconnect to ODL after it restarts

2015-04-30 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376169

Title:
  ODL MD can't reconnect to ODL after it restarts

Status in OpenDaylight backend controller integration with Neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  The ODL mechanism driver (MD) does no processing when it receives a 401
  http error (Unauthorized), which happens after ODL restarts.
  The only way to recover is to restart neutron.

  This induces a strong coupling between restarts of neutron and ODL.

  To reproduce it:
   - start ODL and neutron
   - create a network
   - restart ODL
   - create another network

  The last command raises a 401 http error and any further operations
  will fail.
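  A hedged sketch of the recovery idea (illustrative only, not the
  networking-odl fix): when a request comes back 401, rebuild the
  authenticated session once and retry instead of failing every subsequent
  operation.

    def send_with_reauth(make_session, method, url, **kwargs):
        # `make_session` is an assumed factory returning an authenticated
        # requests.Session against the ODL controller.
        session = make_session()
        resp = session.request(method, url, **kwargs)
        if resp.status_code == 401:
            session = make_session()  # re-authenticate once and retry
            resp = session.request(method, url, **kwargs)
        return resp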

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1376169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450624] [NEW] Nova waits for events from neutron on resize-revert that aren't coming

2015-04-30 Thread Dan Smith
Public bug reported:

On resize-revert, the original host was waiting for plug events from
neutron before restarting the instance. These aren't sent since we don't
ever unplug the vifs. Thus, we'll always fail like this:


2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 134, in _dispatch_and_reply
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 177, in _dispatch
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 123, in _do_dispatch
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     payload)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 298, in decorated_function
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     pass
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 284, in decorated_function
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 348, in decorated_function
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 326, in decorated_function
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 314, in decorated_function
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 3573, in finish_revert_resize
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     block_device_info, power_on)
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File "/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6095, in finish_revert_migration
2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher     block_device_info, power_on)
2015-04-30 19:45:42.602 23513 TRACE

[Yahoo-eng-team] [Bug 1363967] Re: RESTful API to retrieve dvr host mac for ODL

2015-04-30 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363967

Title:
  RESTful API to retrieve dvr host mac for ODL

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Implementing RESTful interface to retrieve DVR host mac.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434172] Re: security group create errors without description

2015-04-30 Thread Dean Troyer
** Changed in: python-openstackclient
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1434172

Title:
  security group create errors without description

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Nova:
  Confirmed
Status in OpenStack Command Line Client:
  Fix Released

Bug description:
  security group create returns an error without --description supplied.
  This appears to be the server rejecting the request so we should set a
  default value rather than sending None.

    $ openstack security group create qaz
    ERROR: openstack Security group description is not a string or unicode (HTTP 400) (Request-ID: req-dee03de3-893a-4d58-bc3d-de87d09c3fb8)

  Sent body:

    {"security_group": {"name": "qaz2", "description": null}}
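
  A rough sketch of the fix idea (hypothetical client-side handling, not
  necessarily how python-openstackclient implements it): default the
  description to an empty string before building the request body, so None is
  never sent:

    def build_secgroup_body(name, description=None):
        # send an empty string instead of None so the server does not reject it
        return {"security_group": {"name": name,
                                   "description": description or ""}}

    # build_secgroup_body("qaz") ==>
    # {"security_group": {"name": "qaz", "description": ""}}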

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1434172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450617] [NEW] Neutron extension to support service chaining

2015-04-30 Thread cathy Hong Zhang
Public bug reported:

Currently Neutron does not support service chaining. To support service 
chaining, Service VMs must be attached to points of the
network and then traffic must be steered between these attachment points.

There are two steps in creating a service chain. First, service VMs
(such as a FW VM) need to be created and connected to a Neutron network
via Neutron ports. After that, selected traffic flows need to be steered
through an ordered sequence of these service VM ports. OpenStack already
supports creating service VMs and attaching them to Neutron network
ports. What is missing is an API to specify the classification rules for
the selected flow and the sequence of service VM ports the flow needs to
go through so that it receives the desired service treatment. The
Neutron API can be extended to fill this gap. This new port chain API
does not need to know the actual services attached to these Neutron
ports, since the service VM creation API already has this information.

In summary, first the  service function is instantiated and connected to
the network through Neutron ports. Once the service function is attached
to Neutron ports, the ports are included in a port chain to allow the
service function to provide treatment to the user's traffic.
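
As an illustration only, a request against such a port chain extension might
look roughly like the following; the resource and field names here are
hypothetical, since defining the actual API is exactly what this RFE proposes:

    POST /v2.0/port_chains
    {
        "port_chain": {
            "name": "fw-then-lb",
            "flow_classifier": {"protocol": "tcp", "destination_port": 80},
            "ports": ["<fw-vm-port-uuid>", "<lb-vm-port-uuid>"]
        }
    }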

** Affects: neutron
 Importance: Undecided
 Assignee: cathy Hong Zhang (cathy-h-zhang)
 Status: In Progress


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) = cathy Hong Zhang (cathy-h-zhang)

** Changed in: neutron
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450617

Title:
  Neutron extension to support service chaining

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Currently Neutron does not support service chaining. To support service 
chaining, Service VMs must be attached to points of the
  network and then traffic must be steered between these attachment points.

  There are two steps in creating a service chain. First, service VMs
  (such as a FW VM) need to be created and connected to a Neutron network
  via Neutron ports. After that, selected traffic flows need to be steered
  through an ordered sequence of these service VM ports. OpenStack already
  supports creating service VMs and attaching them to Neutron network
  ports. What is missing is an API to specify the classification rules for
  the selected flow and the sequence of service VM ports the flow needs to
  go through so that it receives the desired service treatment. The
  Neutron API can be extended to fill this gap. This new port chain API
  does not need to know the actual services attached to these Neutron
  ports, since the service VM creation API already has this information.

  In summary, first the  service function is instantiated and connected
  to the network through Neutron ports. Once the service function is
  attached to Neutron ports, the ports are included in a port chain to
  allow the service function to provide treatment to the user's traffic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450629] [NEW] Ensure that JS files follow JSCS rules

2015-04-30 Thread Matt Borland
Public bug reported:

Some JS files do not follow the established JSCS ruleset.

Modify files so no notices are thrown.

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450629

Title:
  Ensure that JS files follow JSCS rules

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Some JS files do not follow the established JSCS ruleset.

  Modify files so no notices are thrown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450626] [NEW] No JavaScript code style checker

2015-04-30 Thread Matt Borland
Public bug reported:

There is currently no JavaScript code style checker in the development
environment.

JSCS is one typical code style checker that provides many good ways to
establish formal rules.

Ideally, we establish a patch with some basics, then add onto it as
appropriate.
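
For illustration, a minimal .jscsrc along those lines might start with a
handful of rules and be extended later; the selection below is only a possible
starting point, not the ruleset Horizon settled on:

    {
        "requireCurlyBraces": ["if", "else", "for", "while"],
        "validateIndentation": 2,
        "disallowTrailingWhitespace": true,
        "disallowMixedSpacesAndTabs": true,
        "excludeFiles": ["node_modules/**", "**/*.min.js"]
    }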

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450626

Title:
  No JavaScript code style checker

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is currently no JavaScript code style checker in the development
  environment.

  JSCS is one typical code style checker that provides many good ways to
  establish formal rules.

  Ideally, we establish a patch with some basics, then add onto it as
  appropriate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450604] [NEW] Fix DVR multinode upstream CI testing

2015-04-30 Thread Armando Migliaccio
Public bug reported:

This bug should capture any change required to get the DVR multi node
job to run successfully and reliably.

** Affects: neutron
 Importance: Undecided
 Assignee: Swaminathan Vasudevan (swaminathan-vasudevan)
 Status: Confirmed


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) = Swaminathan Vasudevan (swaminathan-vasudevan)

** Tags added: l3-dvr-backlog

** Changed in: neutron
   Status: New = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450604

Title:
  Fix DVR multinode upstream CI testing

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  This bug should capture any change required to get the DVR multi node
  job to run successfully and reliably.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430365] Re: config drive table force_config_drive parameter values do not match Impl

2015-04-30 Thread Tom Fifield
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
Milestone: kilo = liberty

** Tags added: doc-tools

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430365

Title:
  config drive table force_config_drive parameter  values do not match
  Impl

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Manuals:
  Confirmed

Bug description:
  The config drive table [1] describes the following options for the
  force_config_drive parameter:
  Default: None
  Other Options: always

  But looking at the code [2], only the following values are allowed:
  always, True, False

  What does the current default 'None' mean? Is it really a value, or
  does it describe the absence of the parameter? I could not configure
  it as a value for the force_config_drive option. The absence of the
  parameter is the same as 'False', so I guess that's the correct
  default, isn't it?

  The docs should be updated accordingly.

  
  [1] 
https://github.com/openstack/openstack-manuals/blob/master/doc/common/tables/nova-configdrive.xml
  [2] https://github.com/openstack/nova/blob/master/nova/virt/configdrive.py
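
  Based on the values listed above, a nova.conf entry would look roughly like
  the following sketch (the exact semantics of 'always' versus 'True' should be
  confirmed against the code in [2]):

    [DEFAULT]
    # attach a config drive to every instance
    force_config_drive = always
    # omitting the option behaves like 'False' (no forced config drive)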

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450102] Re: neutron uses floating ips for on qrouter

2015-04-30 Thread Kevin Benton
The allocation pool applies to all addresses allocated from a network,
including the router gateway.

What you want will probably be possible after the M release once
pluggable IPAM is in.

** Changed in: neutron
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450102

Title:
  neutron uses floating ips for on qrouter

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Hi all..

  Scenario:

  stack@controller:~$ nova --version
  2.19.0
  stack@network:~$ neutron --version
  2.3.8

  The test tenant has one running test-instance, one test network
  (internal-net) with one internal-subnet, and one router (test-router -
  qrouter-679e3d17-4e4f-42f8-b8c4-d76b38c565f7).
  The test-router has ext-net set as its gateway.

  ext-net and ext-subnet were created as follows:
  neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
  neutron subnet-create ext-net --name ext-subnet --allocation-pool start=A.B.C.147,end=A.B.C.158 --disable-dhcp --gateway A.B.C.146 A.B.C.0/24

  
  Expected: 
  A.B.C.147 floating IP to be allocated to the tenant project.

  What happens:
  A.B.C.148 floating IP is allocated to the tenant project.
  A.B.C.147 was set on test-router 
(qrouter-679e3d17-4e4f-42f8-b8c4-d76b38c565f7)

  root@network:~# ip netns exec qrouter-679e3d17-4e4f-42f8-b8c4-d76b38c565f7 ifconfig qg-8f04a366-ef
  qg-8f04a366-ef Link encap:Ethernet  HWaddr fa:16:3e:6d:45:e2
inet addr:A.B.C.147  Bcast:A.B.C.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe6d:45e2/64 Scope:Link
UP BROADCAST RUNNING  MTU:1500  Metric:1
RX packets:69 errors:0 dropped:0 overruns:0 frame:0
TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6577 (6.5 KB)  TX bytes:5296 (5.2 KB)

  
  I consider this a bug because the router could be set to use another IP from
  the A.B.C.0/24 network that is not in the A.B.C.147-A.B.C.158 range (the
  floating IP range). The floating IP range should be used exclusively by
  virtual machines in cases where the defined CIDR is larger.
  It's also bad because A.B.C.147 is routable, and packets coming to A.B.C.147
  from the outside world will end up on my qrouter ;/
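
  The comment above (that the allocation pool covers the router gateway too)
  can be restated as a concrete sketch using the reporter's addresses:

    # allocation pool A.B.C.147-A.B.C.158 (12 addresses)
    neutron router-gateway-set test-router ext-net   # consumes A.B.C.147
    neutron floatingip-create ext-net                # gets A.B.C.148
    # => 11 addresses remain for floating IPs; there is currently no way to
    #    reserve part of the pool exclusively for floating IPs.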

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450682] [NEW] nova unit tests failing with pbr 0.11

2015-04-30 Thread Joe Gordon
Public bug reported:

test_version_string_with_package_is_good breaks with the release of pbr
0.11

nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good
--

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/test_versions.py", line 33, in test_version_string_with_package_is_good
    version.version_string_with_package())
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 350, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: '5.5.5.5-g9ec3421' != '2015.2.0-g9ec3421'


http://logs.openstack.org/27/169827/8/check/gate-nova-python27/2009c78/console.html

** Affects: nova
 Importance: Critical
 Assignee: Joe Gordon (jogo)
 Status: In Progress

** Changed in: nova
   Status: New = Confirmed

** Changed in: nova
   Importance: Undecided = Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450682

Title:
  nova unit tests failing with pbr 0.11

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  test_version_string_with_package_is_good breaks with the release of
  pbr 0.11

  
nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_versions.py", line 33, in test_version_string_with_package_is_good
      version.version_string_with_package())
    File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 350, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: '5.5.5.5-g9ec3421' != '2015.2.0-g9ec3421'

  
  
http://logs.openstack.org/27/169827/8/check/gate-nova-python27/2009c78/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416496] Re: nova.conf - configuration options icehouse compat flag is not right

2015-04-30 Thread Tom Fifield
** Changed in: openstack-manuals
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416496

Title:
  nova.conf - configuration options icehouse compat flag is not right

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Manuals:
  Fix Released

Bug description:
  Table 2.57. Description of upgrade levels configuration options has
  the wrong information for setting icehouse/juno compat flags during
  upgrades.

  Specifically this section:

  compute = None (StrOpt) Set a version cap for messages sent to compute
  services. If you plan to do a live upgrade from havana to icehouse, you 
should set this option to icehouse-compat
  before beginning the live upgrade procedure

  This should be compute = old release, for example compute = icehouse
  when doing an upgrade from I to J.
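
  In nova.conf terms, the corrected guidance is roughly the following sketch
  (the option lives in the upgrade_levels group):

    [upgrade_levels]
    # cap compute RPC messages at the older release while compute nodes still
    # run Icehouse; remove the cap once every node has been upgraded
    compute = icehouse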

  ---
  Built: 2015-01-29T19:27:05 00:00
  git SHA: 3e80c2419cfe03f86057f3229044cd0d495e0295
  URL: 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/config-reference/compute/section_compute-options-reference.xml
  xml:id: list-of-compute-config-options

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449850] Re: Join multiple criteria together

2015-04-30 Thread OpenStack Infra
** Changed in: keystone
   Status: Opinion = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1449850

Title:
  Join multiple criteria together

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  SQLAlchemy supports joining multiple criteria together. This can be
  used to build the query statement when there are multiple filtering
  criteria, instead of constructing the query one criterion at a time;
  the *assumption* is that SQLAlchemy prefers to be used this way, and
  the code looks cleaner after the refactoring.
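
  As a small illustration (the model and session here are hypothetical, not
  Keystone's actual code), filter() accepts several criteria at once, or they
  can be combined with and_(), instead of chaining one filter() call per
  criterion:

    from sqlalchemy import Boolean, Column, String, and_
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):  # hypothetical model, for illustration only
        __tablename__ = 'user'
        id = Column(String(64), primary_key=True)
        domain_id = Column(String(64))
        enabled = Column(Boolean)

    def list_enabled_users(session, domain_id):
        # one criterion per filter() call
        q = session.query(User).filter(User.domain_id == domain_id)
        q = q.filter(User.enabled == True)  # noqa: E712

        # the same query with the criteria joined together in a single call
        q = session.query(User).filter(and_(User.domain_id == domain_id,
                                            User.enabled == True))  # noqa: E712
        return q.all()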

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1449850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450668] [NEW] need to allow the tenant to config meta data of an image

2015-04-30 Thread Tracy Jones
Public bug reported:


I can configure and update metadata at the project level, but I need
that type of control at the tenant level as well. In talking on IRC
with TravT, it looks like Glance supports this already; we just need to
expose it in Horizon.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450668

Title:
  need to allow the tenant to config meta data of an image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  I can configure and update metadata at the project level, but I need
  that type of control at the tenant level as well. In talking on IRC
  with TravT, it looks like Glance supports this already; we just need to
  expose it in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428321] Re: Crosslink keystone documentation sites

2015-04-30 Thread Morgan Fainberg
** Changed in: keystonemiddleware
Milestone: None = 1.6.1

** Changed in: keystonemiddleware
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1428321

Title:
  Crosslink keystone documentation sites

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Identity  (Keystone) Middleware:
  Fix Released
Status in Python client library for Keystone:
  Fix Released

Bug description:
  Keystone has three formal documentation sites (that I'm aware of):

http://docs.openstack.org/developer/keystone/
http://docs.openstack.org/developer/keystonemiddleware/
http://docs.openstack.org/developer/python-keystoneclient/

  But none of these are cross-linked with each other. All three should
  provide top-level links to the other two, with a brief explanation as
  to how each component fits together with the others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1428321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450658] [NEW] VolumeBackendAPIException during _shutdown_instance are not handled

2015-04-30 Thread Jay Bryant
Public bug reported:

Rally was used for a Cinder boot-from-volume test. After the test completed, a
lot of VMs failed to delete and were left in ERROR status; manually retrying
the delete a few times did not work.
[root@abba-n04 home]#  nova list --all-tenants
+--+---+--+++-+--+
| ID   | Name  | Tenant ID| Status | Task State | Power State | Networks |
+--+---+--+++-+--+
| 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89 | rally_novaserver_aftvfqejftusyflf | 3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  |
| 888aa512-07f0-42ad-ae5e-255bbca6fe34 | rally_novaserver_ajqhjxjxnjojelgm | 7e05339543e743c3b023cee8128e | ERROR  | -  | NOSTATE |  |
| 84b32483-2997-4a2a-8d75-236395b4ef2f | rally_novaserver_arycquunqovvyxnl | 3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  |
| 535f1ce1-63fe-4681-b215-e21d38f77fbb | rally_novaserver_arzjesynagiqfjgt | 3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  |
| 6ef3fa37-e5a9-4a21-8996-d291070c246a | rally_novaserver_aufevkgzvynumwwo | d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  |
| e5d2dc5f-6e86-43e2-8f1f-1a3f6d4951bc | rally_novaserver_ayzjeqcplouwcaht | d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  |
| 2dcb294a-e2cc-4989-9e87-74081f1567db | rally_novaserver_bbphsjrexkcgcjtt | 7e05339543e743c3b023cee8128e | ERROR  | -  | NOSTATE |  |
| 88053991-2fab-4442-86c7-7825ed47ff0a | rally_novaserver_beveqwdokixwdbgi | 3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  |
| 1e109862-34ea-4686-a099-28f5840244cf | rally_novaserver_bimlwsfkndjbrczv | d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  |
| 6143bbb2-d3eb-4635-9554-85b30fbb5aa5 | rally_novaserver_bmycsptyoicymdmb | 3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  |
| 5e9308ae-9736-485f-92b7-e6783c613ba1 | rally_novaserver_bsvcnsqeyawcnbgp | d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  |
| 13fb2413-15b6-41a3-a8a0-a0b775747261 | rally_novaserver_bvfyfnnixwgisbkk | d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  |
| d5ce6fa3-99b5-4d61-ac7c-4b5bc13d1c27 | rally_novaserver_cpbevhulxylepvql | 3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  |
| 23ce9007-ab43-4b57-8686-4aa1ce336f09 | rally_novaserver_cswudaopawepnmms | 7e05339543e743c3b023cee8128e | ERROR  | -  | NOSTATE |  |

[root@abba-n04 home]# nova delete 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89
Request to delete server 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89 has been accepted.
[root@abba-n04 home]# nova list --all | grep 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89
| 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89 | rally_novaserver_aftvfqejftusyflf | 3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  |
[root@abba-n04 home]#

The system is using local LVM volumes attached via iSCSI.  It appears
that something is going wrong when the attempt to detach the volume is
being made:

2015-04-29 07:26:02.680 21775 DEBUG oslo_concurrency.processutils [req-fec5ff89-5465-4e76-b573-6a5afc0d4ee2 a9d0160b91f2412d84972a615b7547dc 828348052e494d76a401f669f85829f3 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator iqn.2010-10.org.openstack:volume-a2907017-dda6-4243-bc50-85fe2164f05f iqn.1994-05.com.redhat:734fb33285d0 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:199
2015-04-29 07:26:02.813 21775 DEBUG oslo_concurrency.processutils [req-fec5ff89-5465-4e76-b573-6a5afc0d4ee2 a9d0160b91f2412d84972a615b7547dc 828348052e494d76a401f669f85829f3 - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator iqn.2010-10.org.openstack:volume-a2907017-dda6-4243-bc50-85fe2164f05f iqn.1994-05.com.redhat:734fb33285d0" returned: 1 in 0.133s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:225
2015-04-29 07:26:02.814 21775 DEBUG oslo_concurrency.processutils [req-fec5ff89-5465-4e76-b573-6a5afc0d4ee2 a9d0160b91f2412d84972a615b7547dc 828348052e494d76a401f669f85829f3 - - -] u'sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator iqn.2010-10.org.openstack:volume-a2907017-dda6-4243-bc50-85fe2164f05f iqn.1994-05.com.redhat:734fb33285d0' failed. Not Retrying. execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:258
2015-04-29 

[Yahoo-eng-team] [Bug 1420192] Re: Nova interface-attach command has optional arguments to add network details. It should be positional arguments otherwise command fails.

2015-04-30 Thread melanie witt
Hi Park, the nova part of this bug is about nova api stack tracing when
it receives interface attachment parameters. It is not yet fixed. The
os-interface api should validate the request or otherwise check return
values and raise an appropriate exception instead.

Excerpt of example request:

http://10.0.2.15:8774/v2/46d0d928a3814510ab6ab8f65380381b/servers/37c084fb-ecea-4a77-86f5-d2ce67be48bd/os-interface -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}0448c6b841247e17e04a460a1479ac16cff64903" -d '{"interfaceAttachment": {}}'

Error in n-api.log:

2015-04-30 22:50:00.014 INFO nova.api.openstack.compute.contrib.attach_interfaces [req-6aac752c-103a-4a49-8d25-32b69bc89cf9 admin admin] [instance: 37c084fb-ecea-4a77-86f5-d2ce67be48bd] Attach interface
2015-04-30 22:50:00.014 ERROR nova.api.openstack.wsgi [req-6aac752c-103a-4a49-8d25-32b69bc89cf9 admin admin] Exception handling resource: 'NoneType' object has no attribute '__getitem__'
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi Traceback (most recent call last):
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi   File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi     action_result = self.dispatch(meth, request, action_args)
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi   File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 911, in dispatch
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi     return method(req=request, **action_args)
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi   File "/opt/stack/nova/nova/api/openstack/compute/contrib/attach_interfaces.py", line 145, in create
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi     return self.show(req, server_id, vif['id'])
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi TypeError: 'NoneType' object has no attribute '__getitem__'
2015-04-30 22:50:00.014 TRACE nova.api.openstack.wsgi
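
A rough, self-contained sketch of the validation idea (hypothetical helper
names, not the actual Nova patch): reject an empty interfaceAttachment body
with a 400, and never index into a VIF that was not returned:

    import webob.exc

    def validate_attach_request(body):
        attachment = (body or {}).get('interfaceAttachment') or {}
        port_id = attachment.get('port_id')
        net_id = attachment.get('net_id')
        fixed_ips = attachment.get('fixed_ips')
        if not (port_id or net_id or fixed_ips):
            # fail fast with a client error instead of a 500 later on
            raise webob.exc.HTTPBadRequest(
                explanation="port_id, net_id or fixed_ips must be specified")
        return port_id, net_id, fixed_ips

    def vif_id_or_error(vif):
        # guard against attach_interface() returning None before doing vif['id']
        if vif is None:
            raise webob.exc.HTTPInternalServerError(
                explanation="Failed to attach interface")
        return vif['id']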


** Changed in: nova
   Status: Fix Released = Confirmed

** Changed in: nova
 Assignee: Park (jianlonghei) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420192

Title:
  Nova interface-attach command has optional arguments to add network
  details. It should be positional arguments otherwise command fails.

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Nova:
  Confirmed

Bug description:
  On execution of nova interface-attach command without optional
  arguments command fails.

  root@ubuntu:~# nova interface-attach vm1
  ERROR (ClientException): Failed to attach interface (HTTP 500) (Request-ID: 
req-ebca9af6-8d2f-4f68-8a80-ad002b03c2fc)
  root@ubuntu:~# 

  To add a network interface, at least one of the optional arguments must
  be provided. Thus, the help message needs to be modified.

  root@ubuntu:~# nova help interface-attach
  usage: nova interface-attach [--port-id port_id] [--net-id net_id]
   [--fixed-ip fixed_ip]
   server

  Attach a network interface to a server.

  Positional arguments:
server   Name or ID of server.

  Optional arguments:
--port-id port_idPort ID.
--net-id net_id  Network ID
--fixed-ip fixed_ip  Requested fixed IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449850] Re: Join multiple criteria together

2015-04-30 Thread Brant Knudson
** Changed in: keystone
   Status: In Progress = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1449850

Title:
  Join multiple criteria together

Status in OpenStack Identity (Keystone):
  Opinion

Bug description:
  SQLAlchemy supports joining multiple criteria together. This can be
  used to build the query statement when there are multiple filtering
  criteria, instead of constructing the query one criterion at a time;
  the *assumption* is that SQLAlchemy prefers to be used this way, and
  the code looks cleaner after the refactoring.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1449850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445202] Re: Bug #1414218 is not fixed on the stable/juno branch

2015-04-30 Thread Stephen Ma
** Changed in: neutron
   Status: New = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445202

Title:
  Bug #1414218 is not fixed on the stable/juno branch

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  On the stable/juno branch, https://review.openstack.org/#/c/164329
  (ChangeId: I3ad7864eeb2f959549ed356a1e34fa18804395cc, fixed bug
  https://bugs.launchpad.net/neutron/+bug/1414218) was merged on April
  1st.  Less than 1 hour before this merge,
  https://review.openstack.org/#/c/153181 was merged.  Both patches
  modified the same function _output_hosts_file() in the same file
  (neutron/agent/linux/dhcp.py).
  https://review.openstack.org/#/c/164329 removed LOG.debug statements
  from the _output_hosts_file while
  https://review.openstack.org/#/c/153181 added LOG.debug statements.
  The end result is that the bad performance problem fixed by
  https://review.openstack.org/#/c/164329 is reverted by
  https://review.openstack.org/#/c/153181 unintentionally.

  The https://review.openstack.org/#/c/164329 fixes bug
  https://bugs.launchpad.net/neutron/+bug/1414218. The root cause is the
  performance overhead due to the LOG.debug statements in the for-loop
  of the _output_hosts_file() function.

  This problem is only found on the stable/juno branch of neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450535] [NEW] [data processing] Create node group and cluster templates can fail

2015-04-30 Thread Chad Roberts
Public bug reported:

* Probably a kilo-backport candidate *

In an environment that uses a rewrite / to /dashboard (or whatever),
trying to create a node group, cluster template or job will fail when we
try to do a urlresolver.resolve on the path.  That operation isn't even
necessary since the required kwargs are already available in
request.resolver_match.kwargs.
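
A small sketch of the difference (hypothetical view code, Django 1.x):
re-resolving the path breaks behind a URL rewrite, while the kwargs for the
matched view are already on the request:

    def get_view_kwargs(request):
        # What the failing code effectively does: resolve request.path again via
        # django.core.urlresolvers.resolve().  Behind a rewrite such as
        # "/ -> /dashboard", that path no longer matches the URLconf and the
        # second resolution fails.
        #
        # What is already available, with no second resolution needed:
        return request.resolver_match.kwargs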

** Affects: horizon
 Importance: Undecided
 Assignee: Chad Roberts (croberts)
 Status: New


** Tags: sahara

** Changed in: horizon
 Assignee: (unassigned) = Chad Roberts (croberts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450535

Title:
  [data processing] Create node group and cluster templates can fail

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  * Probably a kilo-backport candidate *

  In an environment that uses a rewrite / to /dashboard (or whatever),
  trying to create a node group, cluster template or job will fail when
  we try to do a urlresolver.resolve on the path.  That operation isn't
  even necessary since the required kwargs are already available in
  request.resolver_match.kwargs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp