[Yahoo-eng-team] [Bug 1521772] [NEW] List users in a group by name throws HTTP 500 error

2015-12-01 Thread Haneef Ali
Public bug reported:

(keystone.common.wsgi): 2015-12-01 21:53:58,603 INFO wsgi __call__ GET 
http://192.168.245.9:35357/v3/groups/42b6bb3bb70f487cbf9633bf55eb9ddc/users?name=admin
(keystone.common.wsgi): 2015-12-01 21:53:58,610 ERROR wsgi __call__ Entity 
'' has no property 
'name'
Traceback (most recent call last):
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 248, in __call__
result = method(context, **params)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/controller.py",
 line 207, in wrapper
return f(self, context, filters, **kwargs)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/controllers.py",
 line 233, in list_users_in_group
refs = self.identity_api.list_users_in_group(group_id, hints=hints)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/manager.py",
 line 58, in wrapper
return f(self, *args, **kwargs)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/core.py",
 line 433, in wrapper
return f(self, *args, **kwargs)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/core.py",
 line 444, in wrapper
return f(self, *args, **kwargs)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/core.py",
 line 1123, in list_users_in_group
ref_list = driver.list_users_in_group(entity_id, hints)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/backends/sql.py",
 line 226, in list_users_in_group
query = sql.filter_limit_query(User, query, hints)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/sql/core.py",
 line 410, in filter_limit_query
query = _filter(model, query, hints)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/sql/core.py",
 line 362, in _filter
query = query.filter_by(**filter_dict)
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 1345, in filter_by
for key, value in kwargs.items()]
  File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/sqlalchemy/orm/base.py",
 line 383, in _entity_descriptor
(description, key)
InvalidRequestError: Entity '' has no property 'name'
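
The last two frames show the mechanism: SQLAlchemy's filter_by() applies
keyword criteria to the query's primary entity or, after a join(), to the
last entity joined. A minimal sketch of that behavior, with illustrative
models rather than keystone's actual ones:

    # Minimal sketch, not keystone code: after join(), filter_by()
    # targets the membership table, which has no 'name' column, so
    # SQLAlchemy raises the InvalidRequestError seen above.
    from sqlalchemy import Column, ForeignKey, String, create_engine
    from sqlalchemy.exc import InvalidRequestError
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(String(64), primary_key=True)
        name = Column(String(255))

    class UserGroupMembership(Base):
        __tablename__ = 'user_group_membership'
        user_id = Column(String(64), ForeignKey('user.id'),
                         primary_key=True)
        group_id = Column(String(64), primary_key=True)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    query = session.query(User).join(
        UserGroupMembership, User.id == UserGroupMembership.user_id)
    try:
        query.filter_by(name='admin').all()   # binds to the joined entity
    except InvalidRequestError as e:
        print(e)                              # has no property 'name'

    query.filter(User.name == 'admin').all()  # explicit filter is fine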

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1521772

Title:
  List users in a group by name throws  HTTP 500 error

Status in OpenStack Identity (keystone):
  New

Bug description:
  (keystone.common.wsgi): 2015-12-01 21:53:58,603 INFO wsgi __call__ GET 
http://192.168.245.9:35357/v3/groups/42b6bb3bb70f487cbf9633bf55eb9ddc/users?name=admin
  (keystone.common.wsgi): 2015-12-01 21:53:58,610 ERROR wsgi __call__ Entity 
'' has no property 
'name'
  Traceback (most recent call last):
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 248, in __call__
  result = method(context, **params)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/controller.py",
 line 207, in wrapper
  return f(self, context, filters, **kwargs)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/controllers.py",
 line 233, in list_users_in_group
  refs = self.identity_api.list_users_in_group(group_id, hints=hints)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/manager.py",
 line 58, in wrapper
  return f(self, *args, **kwargs)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/core.py",
 line 433, in wrapper
  return f(self, *args, **kwargs)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/core.py",
 line 444, in wrapper
  return f(self, *args, **kwargs)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/core.py",
 line 1123, in list_users_in_group
  ref_list = driver.list_users_in_group(entity_id, hints)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/identity/backends/sql.py",
 line 226, in list_users_in_group
  query = sql.filter_limit_query(User, query, hints)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/sql/core.py",
 line 410, in filter_limit_query
  query = _filter(model, query, hints)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/sql/core.py",
 line 362, in _filter
  query = query.filter_by(**filter_dict)
File 
"/opt/stack/service/keystone/venv/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 1345, in 

[Yahoo-eng-team] [Bug 1521766] [NEW] pylint breakage in neutron and neutron-vpnaas

2015-12-01 Thread Paul Michali
Public bug reported:

The astroid package, used by pylint, was recently updated to version
1.4.1. Both 1.4.0 and 1.4.1 do not work with pylint 1.4.4, which is
being used by LB, VPN, and neutron. astroid 1.3.8 works with pylint
1.4.4.

For neutron:
master and liberty gates use pep8-constraints, which has a pin for astroid
kilo needs pinning (the proposal is to add it to requirements)
juno doesn't use pylint, so it works

Note: The pep8 target can be modified to do the same as pep8-constraints,
so that developers can keep using the command they are used to.

To add users
For LB:
master - Temp workaround was to remove pylint, can pin, if desired.
liberty, kilo? - will need to pin
juno?


For VPN:
master, liberty, and kilo need to pin pylint and astroid
juno does not use pylint

NOTE: Can migrate to using pep8-constraints job and target.


For FW:
does not do pylint for pep8, so no problem seen (but no coverage).
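
A minimal sketch of the pinning discussed above, as it might look in a
test-requirements.txt (the exact pin and its location are the proposal
here, not a merged change):

    astroid<1.4.0  # 1.4.0/1.4.1 break pylint 1.4.4
    pylint==1.4.4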

Because this broke gate, there are several patches already in play for
this.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: neutron neutron-lbaas neutron-vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521766

Title:
  pylint breakage in neutron and neutron-vpnaas

Status in neutron:
  In Progress

Bug description:
  The astroid package, used by pylint, was recently updated to version
  1.4.1. Both 1.4.0 and 1.4.1 do not work with pylint 1.4.4, which is
  being used by LB, VPN, and neutron. astroid 1.3.8 works with pylint
  1.4.4.

  For neutron:
  master and liberty gates use pep8-constraints, which has a pin for astroid
  kilo needs pinning (the proposal is to add it to requirements)
  juno doesn't use pylint, so it works

  Note: The pep8 target can be modified to do the same as pep8-constraints,
  so that developers can keep using the command they are used to.

  To add users
  For LB:
  master - Temp workaround was to remove pylint, can pin, if desired.
  liberty, kilo? - will need to pin
  juno?

  
  For VPN:
  master, liberty, and kilo need to pin pylint and astroid
  juno does not use pylint

  NOTE: Can migrate to using pep8-constraints job and target.

  
  For FW:
  does not do pylint for pep8, so no problem seen (but no coverage).

  Because this broke gate, there are several patches already in play for
  this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521747] [NEW] nova usage call is not accurate for resized images

2015-12-01 Thread Eric Peterson
Public bug reported:

The detailed response from the nova usage call reports incorrect
instance size info if the instance has been resized. This API is used by
Horizon, which is how we found this.

Steps to reproduce:

1)  Create an instance, say small

2) resize the instance, something like large etc

3) Make this call (adjust to hit a recent date etc):
nova --debug usage --start 2015-12-01 --tenant XYZ

Doing that call, we see the VCPU count still reflects the old size of
the instance. This makes Horizon quota and usage calculations display
the wrong info, and prevents users from getting work done.
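
For reference, the same check can be scripted with python-novaclient; the
credentials, endpoint, and tenant id below are placeholders. Per this
bug, the 'vcpus' values still reflect the pre-resize flavor:

    # Hedged sketch using python-novaclient; all identifiers are
    # placeholders. Expected: vcpus matches the new flavor; observed:
    # it still shows the old one.
    import datetime
    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')
    usage = nova.usage.get('XYZ',
                           datetime.datetime(2015, 12, 1),
                           datetime.datetime(2015, 12, 2))
    for server in usage.server_usages:
        print(server['name'], server['flavor'], server['vcpus'])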

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521747

Title:
  nova usage call is not accurate for resized images

Status in OpenStack Compute (nova):
  New

Bug description:
  The detailed response from the nova usage call reports incorrect
  instance size info if the instance has been resized. This API is used
  by Horizon, which is how we found this.

  Steps to reproduce:

  1)  Create an instance, say small

  2) resize the instance, something like large etc

  3) Make this call (adjust to hit a recent date etc):
  nova --debug usage --start 2015-12-01 --tenant XYZ

  Doing that call, we see the VCPU count still reflects the old size of
  the instance. This makes Horizon quota and usage calculations display
  the wrong info, and prevents users from getting work done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521756] [NEW] race/python3 issue

2015-12-01 Thread Robert Collins
Public bug reported:

Victor asked me to have a look at an intermittent failure he was
seeing in https://review.openstack.org/#/c/250083/

It shows up like so:
Traceback (most recent call last):
  File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/eventlet/greenpool.py",
 line 82, in _spawn_n_impl
func(*args, **kwargs)
  File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 737, 
in _upload_and_activate
location_data = self._upload(req, image_meta)
  File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 671, 
in _upload
{'status': 'saving'})
  File "/home/robertc/work/openstack/glance/glance/registry/client/v1/api.py", 
line 174, in update_image_metadata
from_state=from_state)
  File 
"/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 
209, in update_image
headers=headers)
  File 
"/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 
141, in do_request
'exc_name': exc_name})
  File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/oslo_utils/excutils.py",
 line 204, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/six.py",
 line 686, in reraise
raise value
  File 
"/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 
124, in do_request
**kwargs)
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 71, 
in wrapped
return func(self, *args, **kwargs)
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 375, 
in do_request
headers=copy.deepcopy(headers))
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 88, 
in wrapped
return func(self, method, url, body, headers)
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 524, 
in _do_request
raise exception.NotFound(res.read())
glance.common.exception.NotFound: b'Image not found'
==
FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_upload_image_http_nonexistent_location_url
tags: worker-0
--
Traceback (most recent call last):
  File "/home/robertc/work/openstack/glance/glance/tests/unit/v1/test_api.py", 
line 1149, in test_upload_image_http_nonexistent_location_url
self.assertEqual(404, res.status_int)
  File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/testtools/testcase.py",
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/testtools/testcase.py",
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 404 != 201


and through bisection I can reproduce it with 75 tests - I'm working on
shrinking the set, but it takes a couple hundred runs to be sure it's a
false branch, so it's not super fast.

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: "current working set to reproduce"
   
https://bugs.launchpad.net/bugs/1521756/+attachment/4528225/+files/worker-0-l-l-l

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521756

Title:
  race/python3 issue

Status in Glance:
  New

Bug description:
  Victor asked me to have a look at an intermittent failure he was
  seeing in https://review.openstack.org/#/c/250083/

  It shows up like so:
  Traceback (most recent call last):
File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/eventlet/greenpool.py",
 line 82, in _spawn_n_impl
  func(*args, **kwargs)
File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 
737, in _upload_and_activate
  location_data = self._upload(req, image_meta)
File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 
671, in _upload
  {'status': 'saving'})
File 
"/home/robertc/work/openstack/glance/glance/registry/client/v1/api.py", line 
174, in update_image_metadata
  from_state=from_state)
File 
"/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 
209, in update_image
  headers=headers)
File 
"/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 
141, in do_request
  'exc_name': exc_name})
File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/oslo_utils/excutils.py",
 line 204, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/six.py",
 line 686, in reraise
  raise value
File 

[Yahoo-eng-team] [Bug 1521783] [NEW] RfE: Cascading delete for LBaaS Objects

2015-12-01 Thread German Eichberger
Public bug reported:

The LBaaS-Horizon Dashboard people requested a cascading delete in the
LBaaS V2 REST API, so that if you pass an additional parameter (let's
call it force=True) when deleting a load balancer, it also deletes its
listeners, pools, and members. The same should be true for listeners,
pools, etc.

As a first step we should likely just add that to the API, and then in
a next step add it to the CLI.

As a side effect this might help operators clean out accounts
efficiently...
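
A sketch of what the proposed call could look like from a client; 'force'
is the strawman parameter name from this report, not an existing API
flag, and the endpoint, id, and token are placeholders:

    # Hedged sketch of the *proposed* cascading delete.
    import requests

    lb_id = 'LB-UUID'      # placeholder
    token = 'AUTH-TOKEN'   # placeholder
    resp = requests.delete(
        'http://neutron-server:9696/v2.0/lbaas/loadbalancers/%s' % lb_id,
        params={'force': 'True'},
        headers={'X-Auth-Token': token})
    # With cascading enabled, a 204 here would mean the listeners,
    # pools, and members under this load balancer are gone as well.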

** Affects: neutron
 Importance: Undecided
 Assignee: Bharath (bharathm)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521783

Title:
  RfE: Cascading delete for LBaaS Objects

Status in neutron:
  New

Bug description:
  The LBaaS-Horizon Dashboard people requested a cascading delete in the
  LBaaS V2 REST API, so that if you pass an additional parameter (let's
  call it force=True) when deleting a load balancer, it also deletes its
  listeners, pools, and members. The same should be true for listeners,
  pools, etc.

  As a first step we should likely just add that to the API, and then
  in a next step add it to the CLI.

  As a side effect this might help operators clean out accounts
  efficiently...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521809] [NEW] No checkpoint if email address entered for user is in email format or not

2015-12-01 Thread Karan Soni
Public bug reported:

While creating a new user, if the email address entered is not in a
valid email format, no error is raised and the value is accepted as-is.
In case of a typo there is no validation, so the user is not notified
that a malformed email address was entered.

Since this field can accept anything (a phone number, an address,
etc.), it should either not be labeled as email specifically, or be
renamed to something like Remarks/Info.
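
A minimal sketch of the kind of format check being requested; the regex
is illustrative only, not keystone's validation code:

    # Hedged sketch; the regex is deliberately loose.
    import re

    EMAIL_RE = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

    def looks_like_email(value):
        return bool(EMAIL_RE.match(value))

    print(looks_like_email('admin@example.com'))  # True
    print(looks_like_email('not-an-email'))       # False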

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1521809

Title:
  No checkpoint if email address entered for user is in email format or
  not

Status in OpenStack Identity (keystone):
  New

Bug description:
  While creating a new user, if the email address entered is not in a
  valid email format, no error is raised and the value is accepted
  as-is. In case of a typo there is no validation, so the user is not
  notified that a malformed email address was entered.

  Since this field can accept anything (a phone number, an address,
  etc.), it should either not be labeled as email specifically, or be
  renamed to something like Remarks/Info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1521809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521811] [NEW] No restriction on number of characters for email field

2015-12-01 Thread Karan Soni
Public bug reported:

There is no restriction on the number of characters allowed in a
user's email address. A maximum length should be set for the email
field.
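
A minimal sketch of the requested guard; 254 characters is the common
practical limit from the mail RFCs, not necessarily what keystone's
schema or column width would use:

    # Hedged sketch; MAX_EMAIL_LEN is an assumption.
    MAX_EMAIL_LEN = 254

    def validate_email_length(value):
        if len(value) > MAX_EMAIL_LEN:
            raise ValueError('email exceeds %d characters' % MAX_EMAIL_LEN)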

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1521811

Title:
  No restriction on number of characters for email field

Status in OpenStack Identity (keystone):
  New

Bug description:
  There is no restriction on the number of characters allowed in a
  user's email address. A maximum length should be set for the email
  field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1521811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521788] [NEW] nova.network.neutronv2.api.validate_networks could be smarter when listing ports

2015-12-01 Thread Matt Riedemann
Public bug reported:

There are two things we can do to make this more efficient:

https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1182

1. Move the list_ports call after the unlimited quota check - if the
quota is unlimited, we don't need to list ports.

2. Filter the list_ports response to only return the port id, we don't
need the other port details in the response since we don't use those
fields, we're just getting a count.
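
A hedged sketch of both changes together (not the actual nova patch; the
neutron client handle, project id, and quota value are assumed to be in
scope):

    # Sketch only: skip list_ports entirely on unlimited quota, and
    # fetch just the ids when all we need is a count.
    def ports_within_quota(neutron, project_id, port_quota, ports_needed):
        if port_quota == -1:   # unlimited: no list_ports round trip
            return True
        ports = neutron.list_ports(tenant_id=project_id,
                                   fields=['id'])['ports']
        return len(ports) + ports_needed <= port_quota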

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: low-hanging-fruit network neutron performance

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Triaged

** Tags added: low-hanging-fruit

** Description changed:

- There are two things we can to make this more efficient:
+ There are two things we can do to make this more efficient:
  
  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1182
  
  1. Move the list_ports call after the unlimited quota check - if the
  quota is unlimited, we don't need to list ports.
  
  2. Filter the list_ports response to only return the port id, we don't
  need the other port details in the response since we don't use those
  fields, we're just getting a count.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521788

Title:
  nova.network.neutronv2.api.validate_networks could be smarter when
  listing ports

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  There are two things we can do to make this more efficient:

  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1182

  1. Move the list_ports call after the unlimited quota check - if the
  quota is unlimited, we don't need to list ports.

  2. Filter the list_ports response to only return the port id, we don't
  need the other port details in the response since we don't use those
  fields, we're just getting a count.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521797] [NEW] Support for Name field in Members and HMs

2015-12-01 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/245664
commit cb3ae497c0a6349dfea0a41788b962a4cd3ef3eb
Author: Reedip Banerjee 
Date:   Fri Nov 13 12:32:27 2015 +0530

Support for Name field in Members and HMs

This patch adds support to enable naming LBaasV2 Members and Health
Monitors(HMs).

DocImpact

Closes-Bug: #1515506
Change-Id: Ieb66386fac3a5a4dace0112838fe9afde212f055

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521797

Title:
  Support for Name field in Members and HMs

Status in neutron:
  New

Bug description:
  https://review.openstack.org/245664
  commit cb3ae497c0a6349dfea0a41788b962a4cd3ef3eb
  Author: Reedip Banerjee 
  Date:   Fri Nov 13 12:32:27 2015 +0530

  Support for Name field in Members and HMs
  
  This patch adds support to enable naming LBaasV2 Members and Health
  Monitors(HMs).
  
  DocImpact
  
  Closes-Bug: #1515506
  Change-Id: Ieb66386fac3a5a4dace0112838fe9afde212f055

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521812] [NEW] Different Tenants/Projects should allow users with same name

2015-12-01 Thread Karan Soni
Public bug reported:

As per the specification, a Tenant or Project is a container for
isolating and grouping resources. Accordingly, users with the same name
should be allowed to exist in different projects.

For example, in a big data center there are multiple clients segregated
into different projects (say, one per organisation). If two of them
want to create a user with the same name, they won't be able to.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1521812

Title:
  Different Tenants/Projects should allow users with same name

Status in OpenStack Identity (keystone):
  New

Bug description:
  As per the specifications of Tenant or Project they are container for
  isolating and grouping resources. So according to that it should allow
  users with same name to be contained in different projects.

  Example in big data center there are multiple clients which are
  segregated by different projects (let's name of organisation). In case
  they want to create user with same name they won't be able to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1521812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Haiwei Xu
** Also affects: monasca
   Importance: Undecided
   Status: New

** Changed in: monasca
 Assignee: (unassigned) => Haiwei Xu (xu-haiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.
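
  As a sketch, that suppression would sit in tox.ini roughly like this:

      [testenv]
      setenv =
          PYTHONDONTWRITEBYTECODE=1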

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Kenji Yasui
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465086] Re: tox doesn't work under proxy

2015-12-01 Thread Kenji Yasui
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kenji Yasui (k-yasui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465086

Title:
  tox doesn't work under proxy

Status in Barbican:
  In Progress
Status in Ceilometer:
  In Progress
Status in Gnocchi:
  In Progress
Status in Magnum:
  Fix Released
Status in neutron:
  New
Status in python-magnumclient:
  Fix Released

Bug description:
  When a development environment is under a proxy, tox fails like this
  (even if the proxy environment variables are set):

  $ tox -epep8
  pep8 create: /home/system/magnum/.tox/pep8
  pep8 installdeps: -r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt
  ERROR: invocation failed (exit code 1), logfile: 
/home/system/magnum/.tox/pep8/log/pep8-1.log
  ERROR: actionid: pep8
  msg: getenv
  cmdargs: [local('/home/system/magnum/.tox/pep8/bin/pip'), 'install', '-U', 
'-r/home/system/magnum/requirements.txt', 
'-r/home/system/magnum/test-requirements.txt']
  env: {'PATH': 
'/home/system/magnum/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'VIRTUAL_ENV': '/home/system/magnum/.tox/pep8', 'PYTHONHASHSEED': '2857363521'}

  Collecting Babel>=1.3 (from -r /home/system/magnum/requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Could not find a version that satisfies the requirement Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1)) (from versions: )
  No matching distribution found for Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1))

  ERROR: could not install deps [-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
  ___ summary 

  ERROR:   pep8: could not install deps 
[-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
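
  A common fix (a sketch; exact section names depend on the project's
  tox.ini) is to let tox pass the proxy variables through to its
  virtualenvs:

      [testenv]
      passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY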

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1465086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521812] Re: Different Tenants/Projects should allow users with same name

2015-12-01 Thread Steve Martinelli
Different users with the same name should be placed in different
domains.
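
A minimal sketch of that with the keystone v3 API; the endpoint and token
are placeholders:

    # Hedged sketch: v3 user names are unique per domain, so the same
    # name can exist under two domains.
    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN-TOKEN',
                       endpoint='http://keystone:35357/v3')
    org_a = ks.domains.create(name='org-a')
    org_b = ks.domains.create(name='org-b')
    ks.users.create(name='alice', domain=org_a)
    ks.users.create(name='alice', domain=org_b)  # allowed across domains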

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1521812

Title:
  Different Tenants/Projects should allow users with same name

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  As per the specification, a Tenant or Project is a container for
  isolating and grouping resources. Accordingly, users with the same
  name should be allowed to exist in different projects.

  For example, in a big data center there are multiple clients
  segregated into different projects (say, one per organisation). If two
  of them want to create a user with the same name, they won't be able
  to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1521812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521823] [NEW] reboot test fails in gate-grenade-dsvm-multinode with missing disk path

2015-12-01 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/44/245344/4/gate/gate-grenade-dsvm-
multinode/1840523/logs/subnode-2/old/screen-n-cpu.txt.gz?level=TRACE#_2015-12-01_23_58_04_999

2015-12-01 23:58:04.999 ERROR nova.compute.manager 
[req-8e76a5ba-f89f-4bfe-8218-aeaa580a6e13 
tempest-ServerActionsTestJSON-475123373 
tempest-ServerActionsTestJSON-884217194] [instance: 
e50d2ceb-2be4-4c1a-a190-4aa9ab160af9] Cannot reboot instance: [Errno 2] No such 
file or directory: 
'/opt/stack/data/nova/instances/e50d2ceb-2be4-4c1a-a190-4aa9ab160af9/disk'
2015-12-01 23:58:05.513 ERROR oslo_messaging.rpc.dispatcher 
[req-8e76a5ba-f89f-4bfe-8218-aeaa580a6e13 
tempest-ServerActionsTestJSON-475123373 
tempest-ServerActionsTestJSON-884217194] Exception during message handling: 
[Errno 2] No such file or directory: 
'/opt/stack/data/nova/instances/e50d2ceb-2be4-4c1a-a190-4aa9ab160af9/disk'
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/exception.py", line 89, in wrapped
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher payload)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/exception.py", line 72, in wrapped
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 350, in decorated_function
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance=instance)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 323, in decorated_function
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 400, in decorated_function
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 378, in decorated_function
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 366, in decorated_function
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 2954, in reboot_instance
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
self._set_instance_obj_error_state(context, instance)
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-01 

[Yahoo-eng-team] [Bug 1521820] [NEW] Some DVR functional tests leak the FIP namespace

2015-12-01 Thread Assaf Muller
Public bug reported:

The FIP namespace is deleted when the agent receives an RPC message
'fipnamespace_delete_on_ext_net'
(https://review.openstack.org/#/c/230079/). This is simulated in some
DVR tests (thus cleaning up the namespace), but not all. All of the DVR
'lifecycle' tests execute _dvr_router_lifecycle, which in turn executes
_assert_fip_namespace_deleted, which calls
agent.fipnamespace_delete_on_ext_net and asserts it succeeded. Any test
that creates a distributed router but is not a 'lifecycle' test does
not clean up the FIP namespace (see the sketch after this list); these
include:

* test_dvr_router_fips_for_multiple_ext_networks
* test_dvr_router_rem_fips_on_restarted_agent
* test_dvr_router_add_fips_on_restarted_agent
* test_dvr_router_add_internal_network_set_arp_cache
* test_dvr_router_fip_agent_mismatch
* test_dvr_router_fip_late_binding
* test_dvr_router_snat_namespace_with_interface_remove
* test_dvr_ha_router_failover
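
A minimal sketch of the missing step, modeled on what the description
says the 'lifecycle' tests do; self.agent and the external network id
are test fixtures assumed here:

    # Hedged sketch; names are taken from the description above.
    def _cleanup_fip_namespace(self, ext_net_id):
        self.agent.fipnamespace_delete_on_ext_net(
            self.agent.context, ext_net_id)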

** Affects: neutron
 Importance: Low
 Status: New


** Tags: functional-tests l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521820

Title:
  Some DVR functional tests leak the FIP namespace

Status in neutron:
  New

Bug description:
  The FIP namespace is deleted when the agent receives an RPC message
  'fipnamespace_delete_on_ext_net'
  (https://review.openstack.org/#/c/230079/). This is simulated in some
  DVR tests (thus cleaning up the namespace), but not all. All of the
  DVR 'lifecycle' tests execute _dvr_router_lifecycle, which in turn
  executes _assert_fip_namespace_deleted, which calls
  agent.fipnamespace_delete_on_ext_net and asserts it succeeded. Any
  test that creates a distributed router but is not a 'lifecycle' test
  does not clean up the FIP namespace; these include:

  * test_dvr_router_fips_for_multiple_ext_networks
  * test_dvr_router_rem_fips_on_restarted_agent
  * test_dvr_router_add_fips_on_restarted_agent
  * test_dvr_router_add_internal_network_set_arp_cache
  * test_dvr_router_fip_agent_mismatch
  * test_dvr_router_fip_late_binding
  * test_dvr_router_snat_namespace_with_interface_remove
  * test_dvr_ha_router_failover

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

** Changed in: python-neutronclient
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521815] [NEW] DVR functional tests failing intermittently

2015-12-01 Thread Assaf Muller
Public bug reported:

Some console logs:

http://logs.openstack.org/18/248418/3/check/gate-neutron-dsvm-functional/8a6dfcf/console.html
http://logs.openstack.org/00/189500/23/check/gate-neutron-dsvm-functional/d949ce0/console.html
http://logs.openstack.org/72/252072/1/check/gate-neutron-dsvm-functional/aafcd9a/console.html
http://logs.openstack.org/32/192032/26/check/gate-neutron-dsvm-functional/b267f83/console.html
http://logs.openstack.org/02/251502/3/check/gate-neutron-dsvm-functional/b074a96/console.html

Tests seen failing so far (May not be comprehensive):
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_add_fips_on_restarted_agent
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_with_snat_with_fips
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_ha_with_snat_with_fips
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_without_snat_with_fips

The commonality is:
1) DVR tests (with and without HA, with and without SNAT)
2) The tests take thousands of seconds to fail, causing the job to time
out, even though we're supposed to have a per-test timeout of 180
seconds defined in tox.ini. This means (I suspect) that we're not
getting the functional test logs.
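
For reference, the per-test timeout in question is the sort of thing
wired up in tox.ini like this (a sketch; the section name is assumed,
the value comes from the description):

    [testenv:dsvm-functional]
    setenv = OS_TEST_TIMEOUT=180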

** Affects: neutron
 Importance: High
 Status: New


** Tags: functional-tests gate-failure l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521815

Title:
  DVR functional tests failing intermittently

Status in neutron:
  New

Bug description:
  Some console logs:

  
http://logs.openstack.org/18/248418/3/check/gate-neutron-dsvm-functional/8a6dfcf/console.html
  
http://logs.openstack.org/00/189500/23/check/gate-neutron-dsvm-functional/d949ce0/console.html
  
http://logs.openstack.org/72/252072/1/check/gate-neutron-dsvm-functional/aafcd9a/console.html
  
http://logs.openstack.org/32/192032/26/check/gate-neutron-dsvm-functional/b267f83/console.html
  
http://logs.openstack.org/02/251502/3/check/gate-neutron-dsvm-functional/b074a96/console.html

  Tests seen failing so far (May not be comprehensive):
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_add_fips_on_restarted_agent
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_with_snat_with_fips
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_ha_with_snat_with_fips
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_without_snat_with_fips

  The commonality is:
  1) DVR tests (with and without HA, with and without SNAT)
  2) The tests take thousands of seconds to fail, causing the job to
  time out, even though we're supposed to have a per-test timeout of 180
  seconds defined in tox.ini. This means (I suspect) that we're not
  getting the functional test logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465086] Re: tox doesn't work under proxy

2015-12-01 Thread lvdongbing
** Also affects: senlin
   Importance: Undecided
   Status: New

** Changed in: senlin
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465086

Title:
  tox doesn't work under proxy

Status in Barbican:
  In Progress
Status in Ceilometer:
  In Progress
Status in Gnocchi:
  In Progress
Status in Magnum:
  Fix Released
Status in neutron:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in senlin:
  In Progress

Bug description:
  When a development environment is under a proxy, tox fails like this
  (even if the proxy environment variables are set):

  $ tox -epep8
  pep8 create: /home/system/magnum/.tox/pep8
  pep8 installdeps: -r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt
  ERROR: invocation failed (exit code 1), logfile: 
/home/system/magnum/.tox/pep8/log/pep8-1.log
  ERROR: actionid: pep8
  msg: getenv
  cmdargs: [local('/home/system/magnum/.tox/pep8/bin/pip'), 'install', '-U', 
'-r/home/system/magnum/requirements.txt', 
'-r/home/system/magnum/test-requirements.txt']
  env: {'PATH': 
'/home/system/magnum/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'VIRTUAL_ENV': '/home/system/magnum/.tox/pep8', 'PYTHONHASHSEED': '2857363521'}

  Collecting Babel>=1.3 (from -r /home/system/magnum/requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Could not find a version that satisfies the requirement Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1)) (from versions: )
  No matching distribution found for Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1))

  ERROR: could not install deps [-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
  ___ summary 

  ERROR:   pep8: could not install deps 
[-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1465086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Kenji Yasui
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kenji Yasui (k-yasui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262424] Re: Files without code should not contain copyright notices

2015-12-01 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262424

Title:
  Files without code should not contain copyright notices

Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in taskflow:
  Fix Released

Bug description:
  Due to a recent policy change in HACKING
  (http://docs.openstack.org/developer/hacking/#openstack-licensing),
  empty files should no longer contain copyright notices.

To manage notifications about this bug go to:
https://bugs.launchpad.net/manila/+bug/1262424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521692] [NEW] Modal does not return a result when closing

2015-12-01 Thread Justin Pomeroy
Public bug reported:

The angular wizard modal does not return a result when the modal is
closed.  It would be very helpful for the result of the submit() to be
passed on when closing the modal so it can be handled.

** Affects: horizon
 Importance: Undecided
 Assignee: Justin Pomeroy (jpomero)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Justin Pomeroy (jpomero)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1521692

Title:
  Modal does not return a result when closing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The angular wizard modal does not return a result when the modal is
  closed.  It would be very helpful for the result of the submit() to be
  passed on when closing the modal so it can be handled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1521692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508243] Re: Store Private Key Passphrase in Neutron-LBaaS for TLS Terminations

2015-12-01 Thread Doug Wiegley
Isn't this a failure of the global LBaaS creds to Barbican? LBaaS
becomes a trusted source since it has global access, and that seems to
be the security fail, not passwords that we'd then have to store in a
DB (a double security fail).

** Changed in: neutron
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508243

Title:
  Store Private Key Passphrase in Neutron-LBaaS for TLS Terminations

Status in neutron:
  Opinion

Bug description:
  The current workflow for TLS Termination on loadbalancers has a couple
  of interesting security vulnerabilities that need to be addressed
  somehow. The solution I propose is to encourage the use of passphrase
  encryption on private keys, and to store that passphrase in Neutron-
  LBaaS along with the Barbican href, instead of inside Barbican.

  Spec: https://review.openstack.org/237807

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519588] Re: ldap backend for roles is not deprecated

2015-12-01 Thread Steve Martinelli
use bp https://blueprints.launchpad.net/keystone/+spec/deprecated-as-of-
mitaka to track this

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1519588

Title:
  ldap backend for roles is not deprecated

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  With the LDAP backend for assignment and resource potentially being
  removed in Mitaka, we probably don't want the role backend to support
  LDAP either. It currently does, and it is not marked for deprecation
  either:
  
https://github.com/openstack/keystone/blob/master/keystone/assignment/role_backends/ldap.py#L30-L43

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1519588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433765] Re: FileField needs to clean up temp file and get file size without loading file

2015-12-01 Thread zhaoyim
Did NOT get any response, so marking as Invalid. If anyone thinks this
is not right, please change the status and leave a comment. Thanks!

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1433765

Title:
  FileField needs to clean up temp file and get file size without
  loading file

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  
  We are using forms.FileField to upload a client's local file.

  We noticed that if the file is big, for example 7 GB, Horizon dumps a
  temp file in the /tmp dir on the host where the Horizon server runs.
  If the file loaded is different each time, the temp files are left
  behind in /tmp. Over the long run, this could exhaust the disk space.

  We would like to have a way to clean up the tmp file once file loading
  is done.

  The way that we use to get the file handle in the form's clean or
  handle method is:

  f = self.request.FILES['file']

  If we use this to get the file size in the clean method to validate,
  the whole file gets loaded either into memory or into the temp dir; it
  would be nice to get the file size without loading the whole file.
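
  A sketch of the kind of handling being asked for, assuming Django's
  upload API; the names and size limit below are illustrative only:

    import os
    from django.core.files.uploadedfile import TemporaryUploadedFile

    def validate_upload(uploaded, max_bytes=7 * 1024 ** 3):
        # UploadedFile.size is known without reading the payload
        if uploaded.size > max_bytes:
            raise ValueError('file too large')
        try:
            pass  # ... hand the file off to the backend here ...
        finally:
            # explicitly remove the temp file written for large uploads
            if isinstance(uploaded, TemporaryUploadedFile):
                path = uploaded.temporary_file_path()
                uploaded.close()
                if os.path.exists(path):
                    os.unlink(path)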

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1433765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-ceilometerclient
   Importance: Undecided
   Status: New

** Changed in: python-ceilometerclient
   Status: New => In Progress

** Changed in: python-ceilometerclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-ceilometerclient:
  In Progress
Status in python-cinderclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-keystoneclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  In Progress
Status in Trove:
  Fix Committed
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.
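
  A minimal sketch of the tox.ini change the description suggests (the
  exact section layout varies per project):

    [testenv]
    setenv =
        PYTHONDONTWRITEBYTECODE=1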

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521871] [NEW] Networks form validation error message not shown

2015-12-01 Thread Mitali Parthasarathy
Public bug reported:

When you are creating a network with a subnet under project -> network
topology -> create network, if you input a valid IP without a subnet
mask under "network address," Horizon will not throw an error but will
still prevent you from advancing.


Steps to reproduce:
1. Under Project -> Network Topology, open the "+ Create Network" modal.
2. Enter anything for page 1.
3. On page 2, enable "create subnet."
4. For Network Address, enter a valid IP which is not in the subnet format, 
e.g. 10.100.0.0 or 0.0.0.0, etc. 
5. Press "Next"

Horizon will not allow you to progress, but there is no error displayed
on the UI. Based on the code, a /32 subnet is assumed and the error
below should be thrown, but I don't see it on the screen:

_("The subnet in the Network Address is too small (/%s).") %
subnet.prefixlen
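
A quick check with netaddr (assuming that is what the form uses to parse
the field) shows why the /32 branch is taken for a bare IP:

    >>> import netaddr
    >>> netaddr.IPNetwork('10.100.0.0').prefixlen
    32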

** Affects: horizon
 Importance: Undecided
 Status: New

** Project changed: django-openstack-auth => horizon

** Summary changed:

- Form validation error message not shown
+ Networks form validation error message not shown

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1521871

Title:
  Networks form validation error message not shown

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When you are creating a network with a subnet under project -> network
  topology -> create network, if you input a valid IP without a subnet
  mask under "network address," Horizon will not throw an error but will
  still prevent you from advancing.

  
  Steps to reproduce:
  1. Under Project -> Network Topology, open the "+ Create Network" modal.
  2. Enter anything for page 1.
  3. On page 2, enable "create subnet."
  4. For Network Address, enter a valid IP which is not in the subnet format, 
e.g. 10.100.0.0 or 0.0.0.0, etc. 
  5. Press "Next"

  Horizon will not allow you to progress, but there is no error
  displayed on the UI. Based on the code, a /32 subnet is assumed and
  the error below should be thrown, but I don't see it on the screen:

  _("The subnet in the Network Address is too small (/%s).") %
  subnet.prefixlen

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1521871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521868] Re: [RFE] API to fetch published configurations

2015-12-01 Thread Sreekumar S
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521868

Title:
  [RFE] API to fetch published configurations

Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  New

Bug description:
  Neutron clients like Horizon and CLI depend on redundant settings or
  hard coded values for configurations that are specific to the
  functionality of neutron and its sub-agents. For example Horizon has
  settings for 'supported_provider_types', 'segmentation_id_range' and
  'extra_provider_types' (proposed). There have been requests/bugs to
  the neutron team to add APIs to fetch these values, so that API
  consumers don't have to hard code or add their own configurations and
  keep them in sync. In the case of multiple configuration settings, the
  admin must also know about and remember to set both configurations in
  their environment.

  This RFE proposes to add a standard identity/role based configuration
  fetching API for accessing the current exposed configuration. Exposed
  keys and their corresponding namespaces should be published by neutron
  and its sub-modules.

  Setting or modifying configuration (static or dynamic) is not
  achievable through the config APIs. For this the client/admin should
  resort to the individual/specific APIs exposed or modify the config
  files themselves.

  Underlying configuration implementation can also be specific to
  neutron, it could be configdb, *.ini/*.conf file or anything else.

  The neutron config API should expose a unified parent/root namespace
  from which all other namespaces specific to each sub-agent are
  accessible.
  For example...
  /ml2/ml2_type_gre/tunnel_id_ranges -- for ml2 agent
  /l3/interface_driver -- for l3 agent
  /dhcp/dhcp_driver -- for dhcp agent
  /ml2/* -- for ml2 agent

  This RFE proposes a new section in network API for 'Configuration',
  along with 'Networks', 'Subnets', 'Ports', and 'Service providers'.

  Example:
  GET - /v2.0/config -- List all configuration
  GET - /v2.0/config/l3 -- List l3 specific ones.
  GET - /v2.0/config/ml2/ml2_type_gre -- List everything under 
/ml2/ml2_type_gre/*
  GET - /v2.0/config/ml2/type_drivers -- Value for 'type_drivers' key inside 
ml2, returning for eg. "flat,vlan,gre,vxlan"

  Such a configuration-fetching interface could also be adopted by other
  projects, such as nova and cinder, which may have their own
  configuration namespaces to expose.
  It's like an OpenStack equivalent of the /proc file system without the
  set/mod capability.
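
  A hedged sketch of how a client might consume the proposed endpoint;
  the /v2.0/config resource does not exist yet, and the host, port and
  token below are purely illustrative:

    import requests

    token = '...'  # a valid keystone token; placeholder here
    resp = requests.get(
        'http://controller:9696/v2.0/config/ml2/type_drivers',
        headers={'X-Auth-Token': token})
    print(resp.json())  # e.g. "flat,vlan,gre,vxlan"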

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1521868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521868] [NEW] [RFE] API to fetch published configurations

2015-12-01 Thread Sreekumar S
Public bug reported:

Neutron clients like Horizon and CLI depend on redundant settings or
hard coded values for configurations that are specific to the
functionality of neutron and its sub-agents. For example Horizon has
settings for 'supported_provider_types', 'segmentation_id_range' and
'extra_provider_types' (proposed). There have been requests/bugs to
the neutron team to add APIs to fetch these values, so that API
consumers don't have to hard code or add their own configurations and
keep them in sync. In the case of multiple configuration settings, the
admin must also know about and remember to set both configurations in
their environment.

This RFE proposes to add a standard identity/role based configuration
fetching API for accessing the current exposed configuration. Exposed
keys and their corresponding namespaces should be published by neutron
and its sub-modules.

Setting or modifying configuration (static or dynamic) is not achievable
through the config APIs. For this the client/admin should resort to the
individual/specific APIs exposed or modify the config files themselves.

Underlying configuration implementation can also be specific to neutron,
it could be configdb, *.ini/*.conf file or anything else.

The neutron config API should expose a unified parent/root namespace
from which all other namespaces specific to each sub-agent are
accessible.
For example...
/ml2/ml2_type_gre/tunnel_id_ranges -- for ml2 agent
/l3/interface_driver -- for l3 agent
/dhcp/dhcp_driver -- for dhcp agent
/ml2/* -- for ml2 agent

This RFE proposes a new section in network API for 'Configuration',
along with 'Networks', 'Subnets', 'Ports', and 'Service providers'.

Example:
GET - /v2.0/config -- List all configuration
GET - /v2.0/config/l3 -- List l3 specific ones.
GET - /v2.0/config/ml2/ml2_type_gre -- List everything under /ml2/ml2_type_gre/*
GET - /v2.0/config/ml2/type_drivers -- Value for 'type_drivers' key inside ml2, 
returning for eg. "flat,vlan,gre,vxlan"

Such a configuration-fetching interface could also be adopted by other
projects, such as nova and cinder, which may have their own
configuration namespaces to expose.
It's like an OpenStack equivalent of the /proc file system without the
set/mod capability.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521868

Title:
  [RFE] API to fetch published configurations

Status in neutron:
  New

Bug description:
  Neutron clients like Horizon and CLI depend on redundant settings or
  hard coded values for configurations that are specific to the
  functionality of neutron and its sub-agents. For example Horizon has
  settings for 'supported_provider_types', 'segmentation_id_range' and
  'extra_provider_types' (proposed). There have been requests/bugs to
  the neutron team to add APIs to fetch these values, so that API
  consumers don't have to hard code or add their own configurations and
  keep them in sync. In the case of multiple configuration settings, the
  admin must also know about and remember to set both configurations in
  their environment.

  This RFE proposes to add a standard identity/role based configuration
  fetching API for accessing the current exposed configuration. Exposed
  keys and their corresponding namespaces should be published by neutron
  and its sub-modules.

  Setting or modifying configuration (static or dynamic) is not
  achievable through the config APIs. For this the client/admin should
  resort to the individual/specific APIs exposed or modify the config
  files themselves.

  Underlying configuration implementation can also be specific to
  neutron, it could be configdb, *.ini/*.conf file or anything else.

  The neutron config API should expose a unified parent/root namespace
  from which all other namespaces specific to each sub-agent are
  accessible.
  For example...
  /ml2/ml2_type_gre/tunnel_id_ranges -- for ml2 agent
  /l3/interface_driver -- for l3 agent
  /dhcp/dhcp_driver -- for dhcp agent
  /ml2/* -- for ml2 agent

  This RFE proposes a new section in network API for 'Configuration',
  along with 'Networks', 'Subnets', 'Ports', and 'Service providers'.

  Example:
  GET - /v2.0/config -- List all configuration
  GET - /v2.0/config/l3 -- List l3 specific ones.
  GET - /v2.0/config/ml2/ml2_type_gre -- List everything under 
/ml2/ml2_type_gre/*
  GET - /v2.0/config/ml2/type_drivers -- Value for 'type_drivers' key inside 
ml2, returning for eg. "flat,vlan,gre,vxlan"

  Such a configuration-fetching interface could also be adopted by other
  projects, such as nova and cinder, which may have their own
  configuration namespaces to expose.
  It's like an OpenStack equivalent of the /proc file system without the
  set/mod capability.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1521871] [NEW] Form validation error message not shown

2015-12-01 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When you are creating a network with a subnet under project -> network
topology -> create network, if you input a valid IP without a subnet
mask under "network address," Horizon will not throw an error but will
still prevent you from advancing.


Steps to reproduce:
1. Under Project -> Network Topology, open the "+ Create Network" modal.
2. Enter anything for page 1.
3. On page 2, enable "create subnet."
4. For Network Address, enter a valid IP which is not in the subnet format, 
e.g. 10.100.0.0 or 0.0.0.0, etc. 
5. Press "Next"

Horizon will not allow you to progress, but there is no error displayed
on the UI. Based on the code, a /32 subnet is assumed and the error
below should be thrown, but I don't see it on the screen:

_("The subnet in the Network Address is too small (/%s).") %
subnet.prefixlen

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Form validation error message not shown
https://bugs.launchpad.net/bugs/1521871
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-senlinclient
   Importance: Undecided
   Status: New

** Changed in: python-senlinclient
   Status: New => In Progress

** Changed in: python-senlinclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

** No longer affects: python-senlinclient

** Also affects: python-swiftclient
   Importance: Undecided
   Status: New

** Changed in: python-swiftclient
   Status: New => In Progress

** Changed in: python-swiftclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-keystoneclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  In Progress
Status in Trove:
  Fix Committed
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521876] [NEW] Top nav bar does not stretch when table is longer than browser window on certain pages

2015-12-01 Thread Mitali Parthasarathy
Public bug reported:

This issue occurs on both Firefox and Chrome.

Under the following conditions, the top nav bar does not stretch to fill the 
screen:
1) The browser window is horizontally thin enough such that the table invokes a 
horizontal scroll bar
2) You are on one of the following pages:
Project:
  instances
  volumes
Admin:
  hypervisors
  instances
  volumes
  flavors
  networks
Identity:
  projects
  users

Under these two circumstances, the nav bar will not stretch when scrolling 
horizontally. This issue only occurs on the pages above; any other page with a 
table that is wider than the browser window renders properly.
I've attached a screenshot of the problem.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "_thumb_451312.png"
   
https://bugs.launchpad.net/bugs/1521876/+attachment/4528417/+files/_thumb_451312.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1521876

Title:
  Top nav bar does not stretch when table is longer than browser window
  on certain pages

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This issue occurs on both Firefox and Chrome.

  Under the following conditions, the top nav bar does not stretch to fill the 
screen:
  1) The browser window is horizontally thin enough such that the table invokes 
a horizontal scroll bar
  2) You are on one of the following pages:
  Project:
    instances
    volumes
  Admin:
    hypervisors
    instances
    volumes
    flavors
    networks
  Identity:
    projects
    users

  Under these two circumstances, the nav bar will not stretch when scrolling 
horizontally. This issue only occurs on the pages above; any other page with a 
table that is wider than the browser window renders properly.
  I've attached a screenshot of the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1521876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-saharaclient
   Importance: Undecided
   Status: New

** Changed in: python-saharaclient
   Status: New => In Progress

** Changed in: python-saharaclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-keystoneclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread MD NADEEM
** Also affects: zaqar
   Importance: Undecided
   Status: New

** Changed in: zaqar
 Assignee: (unassigned) => MD NADEEM (mail2nadeem92)

** Also affects: python-zaqarclient
   Importance: Undecided
   Status: New

** Changed in: python-zaqarclient
 Assignee: (unassigned) => MD NADEEM (mail2nadeem92)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-keystoneclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  New
Status in Trove:
  Fix Committed
Status in zaqar:
  New

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-12-01 Thread lvdongbing
** Also affects: cloudkitty
   Importance: Undecided
   Status: New

** Changed in: cloudkitty
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in cloudkitty:
  New
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  In Progress
Status in neutron:
  Fix Committed
Status in Sahara:
  Fix Committed
Status in senlin:
  Fix Committed

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when the py27 run precedes the py34 run,
  and can be solved by erasing the .testrepository directory and running
  "tox -e py34" first of all.
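
  The workaround from the description, spelled out as commands (run from
  the repository root):

    $ rm -rf .testrepository
    $ tox -e py34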

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloudkitty/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465086] Re: tox doesn't work under proxy

2015-12-01 Thread lvdongbing
** Also affects: cloudkitty
   Importance: Undecided
   Status: New

** Changed in: cloudkitty
 Assignee: (unassigned) => lvdongbing (dbcocle)

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465086

Title:
  tox doesn't work under proxy

Status in Barbican:
  In Progress
Status in Ceilometer:
  In Progress
Status in cloudkitty:
  New
Status in Gnocchi:
  In Progress
Status in Magnum:
  Fix Released
Status in neutron:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in senlin:
  Fix Committed
Status in tempest:
  New

Bug description:
  When a development environment is behind a proxy, tox fails like this
  (even if the proxy environment variables are set):

  $ tox -epep8
  pep8 create: /home/system/magnum/.tox/pep8
  pep8 installdeps: -r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt
  ERROR: invocation failed (exit code 1), logfile: 
/home/system/magnum/.tox/pep8/log/pep8-1.log
  ERROR: actionid: pep8
  msg: getenv
  cmdargs: [local('/home/system/magnum/.tox/pep8/bin/pip'), 'install', '-U', 
'-r/home/system/magnum/requirements.txt', 
'-r/home/system/magnum/test-requirements.txt']
  env: {'PATH': 
'/home/system/magnum/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'VIRTUAL_ENV': '/home/system/magnum/.tox/pep8', 'PYTHONHASHSEED': '2857363521'}

  Collecting Babel>=1.3 (from -r /home/system/magnum/requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Could not find a version that satisfies the requirement Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1)) (from versions: )
  No matching distribution found for Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1))

  ERROR: could not install deps [-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
  ___ summary 

  ERROR:   pep8: could not install deps 
[-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
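
  One commonly suggested workaround (an assumption here, not something
  this report confirms) is to let tox pass the proxy variables through
  to its virtualenvs:

    [testenv]
    passenv = http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY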

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1465086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465086] Re: tox doesn't work under proxy

2015-12-01 Thread lvdongbing
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465086

Title:
  tox doesn't work under proxy

Status in Barbican:
  In Progress
Status in Ceilometer:
  In Progress
Status in cloudkitty:
  New
Status in Gnocchi:
  In Progress
Status in Magnum:
  Fix Released
Status in neutron:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in senlin:
  Fix Committed

Bug description:
  When a development environment is behind a proxy, tox fails like this
  (even if the proxy environment variables are set):

  $ tox -epep8
  pep8 create: /home/system/magnum/.tox/pep8
  pep8 installdeps: -r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt
  ERROR: invocation failed (exit code 1), logfile: 
/home/system/magnum/.tox/pep8/log/pep8-1.log
  ERROR: actionid: pep8
  msg: getenv
  cmdargs: [local('/home/system/magnum/.tox/pep8/bin/pip'), 'install', '-U', 
'-r/home/system/magnum/requirements.txt', 
'-r/home/system/magnum/test-requirements.txt']
  env: {'PATH': 
'/home/system/magnum/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'VIRTUAL_ENV': '/home/system/magnum/.tox/pep8', 'PYTHONHASHSEED': '2857363521'}

  Collecting Babel>=1.3 (from -r /home/system/magnum/requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Could not find a version that satisfies the requirement Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1)) (from versions: )
  No matching distribution found for Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1))

  ERROR: could not install deps [-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
  ___ summary 

  ERROR:   pep8: could not install deps 
[-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1465086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-12-01 Thread lvdongbing
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in cloudkitty:
  In Progress
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  In Progress
Status in neutron:
  Fix Committed
Status in Sahara:
  Fix Committed
Status in senlin:
  Fix Committed
Status in tempest:
  In Progress

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when the py27 run precedes the py34 run,
  and can be solved by erasing the .testrepository directory and running
  "tox -e py34" first of all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloudkitty/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
   Status: New => In Progress

** Changed in: python-keystoneclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-keystoneclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** Changed in: python-cinderclient
   Status: New => In Progress

** Changed in: python-cinderclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-keystoneclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521846] [NEW] Metering not configured for all-in-one DVR job, failing tests

2015-12-01 Thread Assaf Muller
Public bug reported:

Here's an example:
http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-dvr/62fdece/console.html#_2015-12-02_02_15_59_182

It looks like the metering setUpClass is trying to register metering
resources before checking whether the extension is actually loaded
(which is a separate bug in the tests). Another test fails showing that
metering is not configured:

http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-
dvr/62fdece/console.html#_2015-12-02_02_15_59_167

Metering seems to be configured fine both in the non-DVR job and in the
multinode DVR job.

** Affects: neutron
 Importance: High
 Status: New


** Tags: gate-failure l3-dvr-backlog metering

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521846

Title:
  Metering not configured for all-in-one DVR job, failing tests

Status in neutron:
  New

Bug description:
  Here's an example:
  
http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-dvr/62fdece/console.html#_2015-12-02_02_15_59_182

  It looks like the metering setUpClass is trying to register metering
  resources before checking whether the extension is actually loaded
  (which is a separate bug in the tests). Another test fails showing
  that metering is not configured:

  http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-
  dvr/62fdece/console.html#_2015-12-02_02_15_59_167

  Metering seems to be configured fine both in the non-DVR job and in
  the multinode DVR job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521844] [NEW] pycadf ID validation fails for multi-domain IDs

2015-12-01 Thread Steve Martinelli
Public bug reported:

With the latest pycadf release (2.0.0), there is stricter validation of
the ID fields of various CADF resources; in this case, the initiator is
failing to validate some keystone user IDs.

This only happens when multi-domains are configured. An ID for a user in
a multi-domain setup is in fact two IDs concatenated together.

The code to check for a valid ID / UUID is:
https://github.com/openstack/pycadf/blob/master/pycadf/identifier.py#L50-L60

def is_valid(value):
    """Validation to ensure Identifier is correct.
    """
    if value in ['target', 'initiator', 'observer']:
        return True
    try:
        uuid.UUID(value)
    except ValueError:
        return False
    else:
        return True

A typical userID in a multi domain setup is:
c79a927caef36ade4ed36679cd084fa45df4563f94af6a956fafa936889b4faf

When this is validated in pycadf, it fails:

>>> import uuid
>>> uuid.UUID("c79a927caef36ade4ed36679cd084fa45df4563f94af6a956fafa936889b4faf")
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/uuid.py", line 134, in __init__
raise ValueError('badly formed hexadecimal UUID string')
ValueError: badly formed hexadecimal UUID string

Options: we can revert the change to pycadf and loosen the validation of
IDs, or make keystone use a different value.

This is the part of keystone that fails:
https://github.com/openstack/keystone/blob/master/keystone/notifications.py#L504-L505
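
A sketch of what the "loosen the validation" option might look like; the
regex and function name are illustrative, not an actual patch:

    import re

    # accept one 32-char hex UUID or two concatenated ones (as produced
    # by multi-domain setups), with dashes optional
    _HEX_ID = re.compile(r'^(?:[0-9a-f]{32}){1,2}$')

    def is_valid_loose(value):
        if value in ('target', 'initiator', 'observer'):
            return True
        return bool(_HEX_ID.match(value.replace('-', '').lower()))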

** Affects: keystone
 Importance: Critical
 Status: Triaged

** Affects: pycadf
 Importance: Undecided
 Status: New

** Changed in: keystone
   Importance: Undecided => Critical

** Changed in: keystone
   Status: New => Triaged

** Also affects: pycadf
   Importance: Undecided
   Status: New

** Description changed:

  With the latest pycadf release (2.0.0), there is a more strict
  validation on the ID fields of various CADF resources, in this case, the
  initiator is failing to validate some keystone user IDs.
  
  This only happens when multi-domains are configured. An ID for a user in
  a multi-domain setup is in fact two IDs concatenated together.
  
  The code to check for a valid ID / UUID is:
+ https://github.com/openstack/pycadf/blob/master/pycadf/identifier.py#L50-L60
  
  def is_valid(value):
- """Validation to ensure Identifier is correct.
- """
- if value in ['target', 'initiator', 'observer']:
- return True
- try:
- uuid.UUID(value)
- except ValueError:
- return False
- else:
- return True
+ """Validation to ensure Identifier is correct.
+ """
+ if value in ['target', 'initiator', 'observer']:
+ return True
+ try:
+ uuid.UUID(value)
+ except ValueError:
+ return False
+ else:
+ return True
  
  A typical userID in a multi domain setup is:
  c79a927caef36ade4ed36679cd084fa45df4563f94af6a956fafa936889b4faf
  
  When this is validated in pycadf, it fails:
  
  >>> import uuid
  >>> 
uuid.UUID("c79a927caef36ade4ed36679cd084fa45df4563f94af6a956fafa936889b4faf")
  Traceback (most recent call last):
-   File "", line 1, in 
-   File "/usr/lib/python2.7/uuid.py", line 134, in __init__
- raise ValueError('badly formed hexadecimal UUID string')
+   File "", line 1, in 
+   File "/usr/lib/python2.7/uuid.py", line 134, in __init__
+ raise ValueError('badly formed hexadecimal UUID string')
  ValueError: badly formed hexadecimal UUID string
  
- 
- Options: we can revert the change to pycadf and loosen the validation of IDs, 
or make keystone use a different value.
+ Options: we can revert the change to pycadf and loosen the validation of
+ IDs, or make keystone use a different value.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1521844

Title:
  pycadf ID validation fails for multi-domain IDs

Status in OpenStack Identity (keystone):
  Triaged
Status in pycadf:
  New

Bug description:
  With the latest pycadf release (2.0.0), there is stricter validation
  of the ID fields of various CADF resources; in this case, the
  initiator is failing to validate some keystone user IDs.

  This only happens when multi-domains are configured. An ID for a user
  in a multi-domain setup is in fact two IDs concatenated together.

  The code to check for a valid ID / UUID is:
  https://github.com/openstack/pycadf/blob/master/pycadf/identifier.py#L50-L60

  def is_valid(value):
      """Validation to ensure Identifier is correct.
      """
      if value in ['target', 'initiator', 'observer']:
          return True
      try:
          uuid.UUID(value)
      except ValueError:
          return False
      else:
          return True

  A typical userID in a multi domain setup is:
  c79a927caef36ade4ed36679cd084fa45df4563f94af6a956fafa936889b4faf

  When this is validated in pycadf, it fails:

  >>> 

[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: python-glanceclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

** Changed in: python-glanceclient
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-01 Thread Shu Muto
** Also affects: python-heatclient
   Importance: Undecided
   Status: New

** Changed in: python-heatclient
   Status: New => In Progress

** Changed in: python-heatclient
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Monasca:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521524] [NEW] With DVR enabled instances sometimes fail to get metadata

2015-12-01 Thread Oleg Bondarev
Public bug reported:

A Rally scenario which creates VMs with floating IPs at a high rate
sometimes fails with an SSHTimeout when trying to connect to the VM by
floating IP. At the same time, pings to the VM are fine.

It appeared that VMs may sometimes fail to get the public key from
metadata. That happens because the metadata proxy process was started
after the VM booted.

Further analysis showed that the l3 agent on the compute node was not
notified about the new VM port at the time this port was created.

** Affects: neutron
 Importance: High
 Assignee: Oleg Bondarev (obondarev)
 Status: In Progress


** Tags: l3-dvr-backlog liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521524

Title:
  With DVR enabled instances sometimes fail to get metadata

Status in neutron:
  In Progress

Bug description:
  A Rally scenario which creates VMs with floating IPs at a high rate
  sometimes fails with an SSHTimeout when trying to connect to the VM by
  floating IP. At the same time, pings to the VM are fine.

  It appeared that VMs may sometimes fail to get the public key from
  metadata. That happens because the metadata proxy process was started
  after the VM booted.

  Further analysis showed that the l3 agent on the compute node was not
  notified about the new VM port at the time this port was created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518016] Re: [SRU] Nova kilo requires concurrency 1.8.2 or better

2015-12-01 Thread James Page
python-oslo.concurrency and nova promoted to -updates for Kilo UCA.

** Changed in: cloud-archive/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518016

Title:
  [SRU] Nova kilo requires concurrency 1.8.2 or better

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.concurrency package in Ubuntu:
  Invalid
Status in nova source package in Vivid:
  Fix Released
Status in python-oslo.concurrency source package in Vivid:
  Fix Released

Bug description:
  [Impact]
  Some operations on instances will fail due to missing functions in 
oslo-concurrency 1.8.0 that the latest Nova stable release requires.

  [Test Case]
  Resize or migrate an instance on the latest stable kilo updates

  [Regression Potential]
  Minimal - this is recommended and tested upstream already.

  [Original Bug Report]
  The OpenStack Nova Kilo release requires oslo.concurrency 1.8.2 or
  higher; this is due to the addition of on_execute and on_completion to
  the execute(..) function. The latest Ubuntu OpenStack Kilo packages
  currently have code that depends on this newer release. This results
  in a crash in some operations like resizes or migrations.
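
  A minimal sketch of the newer API surface nova relies on; the command
  and callback bodies below are placeholders, assuming oslo.concurrency
  >= 1.8.2 as the description states:

    from oslo_concurrency import processutils

    processutils.execute(
        'qemu-img', 'info', '/tmp/disk',
        on_execute=lambda proc: None,     # invoked once the process spawns
        on_completion=lambda proc: None)  # invoked when it finishes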

  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] Traceback (most recent call last):
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6459, in 
_error_out_instance_on_exception
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] yield
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4054, in 
resize_instance
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] timeout, retry_interval)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6353, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] shared_storage)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] six.reraise(self.type_, self.value, 
self.tb)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6342, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 329, in 
copy_image
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_execute=on_execute, 
on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 55, in 
execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return utils.execute(*args, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 207, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return processutils.execute(*cmd, 
**kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 174, 
in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] raise UnknownArgumentError(_('Got 
unknown keyword args: %r') % kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] UnknownArgumentError: Got unknown keyword 
args: {'on_execute':  at 
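
  For reference, a minimal sketch of the API change behind the crash
  (assuming oslo.concurrency is installed): with 1.8.2 or later the
  callbacks are accepted, while 1.8.0 raises the UnknownArgumentError seen
  above.

  from oslo_concurrency import processutils

  def on_execute(proc):
      # Called with the Popen object when the child process starts.
      print("process started, pid %s" % proc.pid)

  def on_completion(proc):
      # Called with the Popen object when the child process finishes.
      print("process finished, rc %s" % proc.returncode)

  out, err = processutils.execute('true',
                                  on_execute=on_execute,
                                  on_completion=on_completion)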

[Yahoo-eng-team] [Bug 1512250] Re: nova api crashes on floating ip creation

2015-12-01 Thread Alex Xu
*** This bug is a duplicate of bug 1513879 ***
https://bugs.launchpad.net/bugs/1513879

** This bug is no longer a duplicate of bug 1488537
   Lack of graceful handling of 404 error from neutron
** This bug has been marked a duplicate of bug 1513879
   NeutronClientException: 404 Not Found

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1512250

Title:
  nova api crashes on floating ip creation

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm using Openstack Liberty

  2015-11-02 13:36:26.522 118707 INFO nova.osapi_compute.wsgi.server 
[req-3a1d6c46-a04e-452b-b5b5-240f4d8f7ff7 4fee825c04e04bcc8779fbc0e1c75154 
d8c1e9ee4f6e429199a85389ab64868d - - -] 115.124.106.199 "GET /v2/ HTTP/1.1" 
status: 200 len: 572 time: 0.0321729
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
[req-f37d3666-00cf-4b69-bd07-1764cb5ab663 4fee825c04e04bcc8779fbc0e1c75154 
d8c1e9ee4f6e429199a85389ab64868d - - -] Unexpected exception in API method
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/floating_ips.py", 
line 122, in index
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
floating_ips = self.network_api.get_floating_ips_by_project(context)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1402, in 
get_floating_ips_by_project
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions fips = 
client.list_floatingips(tenant_id=project_id)['floatingips']
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 102, in 
with_params
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions ret = 
self.function(instance, *args, **kwargs)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 731, in 
list_floatingips
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
**_params)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 307, in 
list
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions for r 
in self._pagination(collection, path, **params):
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 320, in 
_pagination
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions res = 
self.get(path, params=params)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 293, in 
get
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
headers=headers, params=params)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 270, in 
retry_request
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
headers=headers, params=params)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 211, in 
do_request
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
self._handle_fault_response(status_code, replybody)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 185, in 
_handle_fault_response
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
exception_handler_v20(status_code, des_error_body)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 83, in 
exception_handler_v20
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
message=message)
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
NeutronClientException: 404 Not Found
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions The 
resource could not be found.
  2015-11-02 13:36:26.695 118707 ERROR nova.api.openstack.extensions 
  

[Yahoo-eng-team] [Bug 1521525] [NEW] 'Project' is not shown when image is in Queued status

2015-12-01 Thread Liyingjun
Public bug reported:

An error in the log:

The attribute tenant_name doesn't exist on 

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: New

** Attachment added: "Screen Shot 2015-12-01 at 4.43.00 PM.png"
   
https://bugs.launchpad.net/bugs/1521525/+attachment/4527859/+files/Screen%20Shot%202015-12-01%20at%204.43.00%20PM.png

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1521525

Title:
  'Project' is not shown when image is in Queued status

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  An error in the log:

  The attribute tenant_name doesn't exist on 

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1521525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521458] Re: ParaVirtualSCSI adapter type string doesn't match the one in the constants.py

2015-12-01 Thread Zhao Zhe
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521458

Title:
  ParaVirtualSCSI adapter type string doesn't match the one in the
  constants.py

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The user is trying to do a single server provision using a basic RHEL
template, but the flavor has an ephemeral disk defined. Provisioning the
same flavor without an ephemeral disk doesn't have a problem.

  The VMWare server is ESXi 5.5.0 2143827, vCenter is 5.5.0.2 Build
  2063318

  Error from vCenter:
  Create virtual disk
  A specified parameter was not correct.  paraVirtualscsi

  
  Based on the API:
  
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.VirtualDiskManager.VirtualDiskAdapterType.html
  There is no paraVirtualscsi Constant, so it is rejected by the API as "A 
specified parameter was not correct".

  in nova/virt/vmwareapi/vm_util.py:

  def get_vmdk_adapter_type(adapter_type):
      """Return the adapter type to be used in vmdk descriptor.

      Adapter type in vmdk descriptor is same for LSI-SAS, LSILogic &
      ParaVirtual because Virtual Disk Manager API does not recognize the
      newer controller types.
      """
      if adapter_type in [constants.ADAPTER_TYPE_LSILOGICSAS,
                          constants.ADAPTER_TYPE_PARAVIRTUAL]:
          vmdk_adapter_type = constants.DEFAULT_ADAPTER_TYPE
      else:
          vmdk_adapter_type = adapter_type
      return vmdk_adapter_type

  in nova/virt/vmwareapi/constants.py
  DEFAULT_ADAPTER_TYPE = "lsiLogic"
  ...
  ADAPTER_TYPE_BUSLOGIC = "busLogic"
  ADAPTER_TYPE_IDE = "ide"
  ADAPTER_TYPE_LSILOGICSAS = "lsiLogicsas"
  ADAPTER_TYPE_PARAVIRTUAL = "paraVirtual"

  
  It appears as though the original authors were well aware that the newer
controller types aren't supported by the CreateVirtualDisk_Task, and attempted
to mitigate the problem by returning the default adapter type instead of
paraVirtualSCSI or LSILogicSAS, which they knew would result in an error.

  The problem is that the value of "adapter_type" is not what they
  expected.

  In this instance adapter_type is being returned as "paraVirtualscsi"
  and this does not match constants.ADAPTER_TYPE_PARAVIRTUAL which is
  defined in constants.py as: ADAPTER_TYPE_PARAVIRTUAL = "paraVirtual"

  The "scsi" at the end has been dropped and therefore no match is made
  and the adapter_type is passed straight through to the vmware API
  resulting in an error.

  The intent seems to be that a match would have been made, and that
  constants.DEFAULT_ADAPTER_TYPE is returned instead which is defined in
  constants.py as: DEFAULT_ADAPTER_TYPE = "lsiLogic"
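
  A hypothetical illustration of the intended fallback (not the upstream
  patch): also matching the spelling actually observed at runtime would
  make the default kick in.

  from nova.virt.vmwareapi import constants

  OBSERVED_PARAVIRTUAL = "paraVirtualscsi"  # the value seen in this report

  def get_vmdk_adapter_type(adapter_type):
      # Fall back to the default for controller types the Virtual Disk
      # Manager API does not recognize, including the observed spelling.
      if adapter_type in (constants.ADAPTER_TYPE_LSILOGICSAS,
                          constants.ADAPTER_TYPE_PARAVIRTUAL,
                          OBSERVED_PARAVIRTUAL):
          return constants.DEFAULT_ADAPTER_TYPE
      return adapter_type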

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521554] [NEW] lock_passwd (default_user) vs lock-passwd (normal user)

2015-12-01 Thread Jan Collijs
Public bug reported:

In our use case we only have one user we create with the cloud-init
configuration file. We need a password set for this user so we
configured him as the default_user as follows:

system_info:
  default_user:
    name: demo
    passwd: PASSWORDHASH
    shell: /bin/bash
    lock-passwd: False

But the password was locked over and over again.

After some troubleshooting I figured out the lock param was wrong.
Apparently for a normal user it's lock-passwd (with a hyphen), but for
the default user it's lock_passwd (with an underscore).

To me this was very confusing and I lost a lot of time on this little
difference. Is there any particular reason why they differ? Wouldn't it
be a better idea to streamline them, using only one of the two options
for both?
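
A minimal working form for this use case, assuming (per the behavior
described above) that the default-user code path reads the underscore
spelling:

system_info:
  default_user:
    name: demo
    passwd: PASSWORDHASH
    shell: /bin/bash
    lock_passwd: False   # underscore, unlike the per-user lock-passwd key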

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1521554

Title:
  lock_passwd  (default_user) vs lock-passwd (normal user)

Status in cloud-init:
  New

Bug description:
  In our use case we only have one user we create with the cloud-init
  configuration file. We need a password set for this user so we
  configured him as the default_user as follows:

  system_info:
    default_user:
      name: demo
      passwd: PASSWORDHASH
      shell: /bin/bash
      lock-passwd: False

  But the password was locked over and over again.

  After some troubleshooting I figured out the lock param was wrong.
  Apparently for a normal user it's lock-passwd (with a hyphen), but for
  the default user it's lock_passwd (with an underscore).

  To me this was very confusing and I lost a lot of time on this little
  difference. Is there any particular reason why they differ? Wouldn't it
  be a better idea to streamline them, using only one of the two options
  for both?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1521554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521553] [NEW] v2 - "type" property not provided at top level for image schema

2015-12-01 Thread Jamie Hannaford
Public bug reported:

If you retrieve the schema for an image by executing this HTTP request:

GET /schemas/image

you get back this JSON schema:

{
"additionalProperties": {
"type": "string"
},
"name": "image",
"links": [
{
"href": "{self}",
"rel": "self"
},
{
"href": "{file}",
"rel": "enclosure"
},
{
"href": "{schema}",
"rel": "describedby"
}
],
"properties": {
"status": {
"enum": [
"queued",
"saving",
"active",
"killed",
"deleted",
"pending_delete",
"deactivated"
],
"type": "string",
"description": "Status of the image (READ-ONLY)"
},
"tags": {
"items": {
"type": "string",
"maxLength": 255
},
"type": "array",
"description": "List of strings related to the image"
},
"kernel_id": {
"pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": [
"null",
"string"
],
"description": "ID of image stored in Glance that should be used as 
the kernel when booting an AMI-style image.",
"is_base": false
},
"container_format": {
"enum": [
null,
"ami",
"ari",
"aki",
"bare",
"ovf",
"ova"
],
"type": [
"null",
"string"
],
"description": "Format of the container"
},
"min_ram": {
"type": "integer",
"description": "Amount of ram (in MB) required to boot image."
},
"ramdisk_id": {
"pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": [
"null",
"string"
],
"description": "ID of image stored in Glance that should be used as 
the ramdisk when booting an AMI-style image.",
"is_base": false
},
"locations": {
"items": {
"required": [
"url",
"metadata"
],
"type": "object",
"properties": {
"url": {
"type": "string",
"maxLength": 255
},
"metadata": {
"type": "object"
}
}
},
"type": "array",
"description": "A set of URLs to access the image file kept in 
external store"
},
"visibility": {
"enum": [
"public",
"private"
],
"type": "string",
"description": "Scope of image accessibility"
},
"updated_at": {
"type": "string",
"description": "Date and time of the last image modification 
(READ-ONLY)"
},
"owner": {
"type": [
"null",
"string"
],
"description": "Owner of the image",
"maxLength": 255
},
"file": {
"type": "string",
"description": "(READ-ONLY)"
},
"min_disk": {
"type": "integer",
"description": "Amount of disk space (in GB) required to boot 
image."
},
"virtual_size": {
"type": [
"null",
"integer"
],
"description": "Virtual size of image in bytes (READ-ONLY)"
},
"id": {
"pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": "string",
"description": "An identifier for the image"
},
"size": {
"type": [
"null",
"integer"
],
"description": "Size of image file in bytes (READ-ONLY)"
},
"instance_uuid": {
"type": "string",
"description": "ID of instance used to create this image.",
"is_base": false
},
"os_distro": {
"type": "string",
"description": "Common name of operating system distribution as 
specified in 
http://docs.openstack.org/trunk/openstack-compute/admin/content/adding-images.html",
"is_base": false
},
"name": {
"type": [
"null",
"string"
],

[Yahoo-eng-team] [Bug 1514480] Re: Bug in nova image-list

2015-12-01 Thread aginwala
Alright, this is a password drift. Check the settings in glance-api.conf and
glance-registry.conf, and check that the password in the db matches. If it
does not, try to reset the glance password. This is a duplicate bug, since
many users have reported it in the past. Please refer to the old bugs and
follow the same steps.
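
An illustrative cross-check (section and option names as in a typical
kilo-era glance-api.conf and glance-registry.conf; the value is a
placeholder):

[keystone_authtoken]
username = glance
password = GLANCE_PASS   # must match the password set for the glance user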

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1514480

Title:
  Bug in nova image-list

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I installed OpenStack using the official Ubuntu documentation. Everything
is good, but in the chapter on installing the compute service, in the verify
section, when I run #nova image-list I receive the following error:

  "ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-d6d93391-b65d-4d0c-b3ba-a8ad51744b74)"

  Meanwhile, I run #nova service-list or #nova flavor-list without any
problem.

  Useful log files are attached to this bug report.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1514480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402959] Re: Support Launching an instance with a port with vnic_type=direct

2015-12-01 Thread Matthias Runge
** Changed in: horizon
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1402959

Title:
  Support Launching an instance with a port with vnic_type=direct

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  To support launching instances with 'SR-IOV' interfaces using the
  dashboard there is a need for:

  1) Adding the ability to specify vnic_type in the 'port create' operation
  2) Adding an option to create a port as a tenant (right now only Admin can
do this)
  3) Adding the ability to launch an instance with a pre-configured port

  Duplicate bugs:
  https://bugs.launchpad.net/horizon/+bug/1399252
  https://bugs.launchpad.net/horizon/+bug/1399254

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1402959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518431] [NEW] Glance failed to upload image to swift storage

2015-12-01 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When glance is configured with the swift backend, and the swift API is
provided via RadosGW, it is unable to upload an image.

Command:
glance --debug image-create --name trusty_ext4 --disk-format raw 
--container-format bare --file trusty-server-cloudimg-amd64.img --visibility 
public --progress
Logs:
http://paste.openstack.org/show/479621/

** Affects: glance
 Importance: Undecided
 Status: Confirmed

-- 
Glance failed to upload image to swift storage
https://bugs.launchpad.net/bugs/1518431
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519269] Re: Release request for networking-fujitsu for stable/liberty

2015-12-01 Thread Kyle Mestery
Version 1.0.1 is on PyPI now [1].

[1] https://pypi.python.org/pypi/networking-fujitsu/1.0.1

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1519269

Title:
  Release request for networking-fujitsu for stable/liberty

Status in networking-fujitsu:
  New
Status in neutron:
  Fix Released

Bug description:
  Branch name:

 stable/liberty

  Tags name:

 1.0.0

  
  The Liberty release of networking-fujitsu

* Mechanism driver for FUJITSU Converged Fabric Switch

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-fujitsu/+bug/1519269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518794] Re: Release request for networking-L2GW for stable/liberty

2015-12-01 Thread Kyle Mestery
Version 1.0.0 of networking-l2gw is now on PyPI:

https://pypi.python.org/pypi/networking-l2gw/1.0.0

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518794

Title:
  Release request for networking-L2GW for stable/liberty

Status in networking-l2gw:
  Confirmed
Status in neutron:
  Fix Released

Bug description:
  Branch:  stable/liberty
  New Tag:  2015.2

  The Liberty release of networking-L2GW contains new features as well
  as enhancements (bug fixes)

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-l2gw/+bug/1518794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521666] [NEW] DHCP agent should release IPv6 leases

2015-12-01 Thread Alexey I. Froloff
Public bug reported:

dhcp_release does not work with IPv6, but IPv6 leases still should be
released.  Example:

1. Start a VM in a dhcpv6-stateful network, make it acquire an IPv6 address.
2. Delete the VM.
3. Start another VM in the same network before the lease expires.

There's a very high chance that the same IPv6 address will be allocated for
both these VMs (the same address will be reused after the first VM was
deleted). On the DHCP agent the hosts file would be changed, but the lease
file would not, so dnsmasq will not give the second VM the address until the
lease expires.  Reducing the lease time is not a good solution here...

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521666

Title:
  DHCP agent should release IPv6 leases

Status in neutron:
  New

Bug description:
  dhcp_release does not work with IPv6, but IPv6 leases still should be
  released.  Example:

  1. Start a VM in a dhcpv6-stateful network, make it acquire an IPv6 address.
  2. Delete the VM.
  3. Start another VM in the same network before the lease expires.

  There's a very high chance that the same IPv6 address will be allocated
  for both these VMs (the same address will be reused after the first VM
  was deleted).  On the DHCP agent the hosts file would be changed, but the
  lease file would not, so dnsmasq will not give the second VM the address
  until the lease expires.  Reducing the lease time is not a good solution
  here...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521314] Re: Changing physical interface mapping may result in multiple physical interfaces in bridge

2015-12-01 Thread Rossella Sblendido
The agent doesn't delete unused bridges. If you want to clean up,
there's a tool for that, it's neutron/cmd/linuxbridge_cleanup.py :)
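
A hypothetical invocation, assuming the usual console-script packaging of
that module (paths are illustrative):

$ neutron-linuxbridge-cleanup --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini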

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521314

Title:
  Changing physical interface mapping may result in multiple physical
  interfaces in bridge

Status in neutron:
  Invalid

Bug description:
  Version: 2015.2 (Liberty)
  Plugin: ML2 w/ LinuxBridge

  While testing various NICs, I found that changing the physical
  interface mapping in the ML2 configuration file and restarting the
  agent resulted in the old physical interface remaining in the bridge.
  This can be observed with the following steps:

  Original configuration:

  [linux_bridge]
  physical_interface_mappings = physnet1:eth2

  racker@compute01:~$ brctl show
  bridge name      bridge id          STP enabled  interfaces
  brqad516357-47   8000.e41d2d5b6213  no           eth2
                                                   tap72e7d2be-24

  Modify the bridge mapping:

  [linux_bridge]
  #physical_interface_mappings = physnet1:eth2
  physical_interface_mappings = physnet1:eth1

  Restart the agent:

  racker@compute01:~$ sudo service neutron-plugin-linuxbridge-agent restart
  neutron-plugin-linuxbridge-agent stop/waiting
  neutron-plugin-linuxbridge-agent start/running, process 12803

  Check the bridge:

  racker@compute01:~$ brctl show
  bridge name      bridge id          STP enabled  interfaces
  brqad516357-47   8000.6805ca37dc39  no           eth1
                                                   eth2
                                                   tap72e7d2be-24

  This behavior was observed with flat or vlan networks, and can result
  in some wonky behavior. Removing the original interface from the
  bridge(s) by hand or restarting the node is a workaround, but I
  suspect LinuxBridge users aren't used to modifying the bridges
  manually as the agent usually handles that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510395] Re: resize vm across azs

2015-12-01 Thread Tardis Xu
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1510395

Title:
  resize vm across azs

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Problem:

  I created instance1 in AZ A; one day the instance was resized and ended
  up in AZ B. This confused me.

  my nova filter config is :
  
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

  1. Exact version of Nova/OpenStack you are running:

  openstack juno

  2. Relevant log files:

  none

  3. Reproduce steps:

  random

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1510395/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521581] [NEW] v2 - "readOnly" key should be used in schemas

2015-12-01 Thread Jamie Hannaford
Public bug reported:

Currently, the way object properties are labelled read-only is through
the description, like so:

"status": {
"enum": [
"queued",
"saving",
"active",
"killed",
"deleted",
"pending_delete",
"deactivated"
],
"type": "string",
"description": "Status of the image (READ-ONLY)"
}


This is not the recommended way to indicate read-only status. The "readOnly" 
property should be used instead:

"status": {
"enum": [
"queued",
"saving",
"active",
"killed",
"deleted",
"pending_delete",
"deactivated"
],
"type": "string",
"readOnly": true,
"description": "Status of the image"
}


Further link for reference: 
http://json-schema.org/latest/json-schema-hypermedia.html#anchor15

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521581

Title:
  v2 - "readOnly" key should be used in schemas

Status in Glance:
  New

Bug description:
  Currently, the way object properties are labelled read-only is through
  the description, like so:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "description": "Status of the image (READ-ONLY)"
  }

  
  This is not the recommended way to indicate read-only status. The "readOnly" 
property should be used instead:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "readOnly": true,
  "description": "Status of the image"
  }

  
  Further link for reference: 
http://json-schema.org/latest/json-schema-hypermedia.html#anchor15

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520336] Re: ovs-vsctl command in neutron agent fails with version 1.4.2

2015-12-01 Thread Rossella Sblendido
Paul sorry but I think we can't help here. You should contact Wheezy
maintainers to update the package. Marking the bug as invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520336

Title:
  ovs-vsctl command in neutron agent fails with version 1.4.2

Status in neutron:
  Invalid

Bug description:
  Wheezy (which I believe is still supported?) comes with version 1.4.2
  of ovs-vsctl.

  It fails to start the neutron agent (which also uses --may-exist) with

  2015-11-26 15:01:07.155 92027 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ovs-vsctl', '--timeout=10', '--oneline', 
'--format=json', '--', 'set', 'Bridge', 'br-int', 'protocols=[OpenFlow10]'] 
execute_rootwrap_daemon 
/home/pcarlton/openstack/neutron/neutron/agent/linux/utils.py:100
  2015-11-26 15:01:07.162 92027 ERROR neutron.agent.ovsdb.impl_vsctl [-] Unable 
to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'set', 'Bridge', 'br-int', 'protocols=[OpenFlow10]']. Exception: Exit code: 1; 
Stdin: ; Stdout: ; Stderr: ovs-vsctl: Bridge does not contain a column whose 
name matches "protocols"

  2015-11-26 15:01:07.163 92027 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Exit code: 
1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: Bridge does not contain a column whose 
name matches "protocols"
   Agent terminated!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1520336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521607] [NEW] v2 - replacing array elements with PATCH results in 400 error

2015-12-01 Thread Jamie Hannaford
Public bug reported:

I have the following image:

{
"status": "active",
"name": "foo",
"tags": [
"1",
"3",
"2"
],
"container_format": "ami",
"created_at": "2015-11-12T14:26:08Z",
"size": 983040,
"disk_format": "ami",
"updated_at": "2015-12-01T12:25:42Z",
"visibility": "public",
"self": "/v2/images/386f0425-3ee8-4688-b73f-272328fe4c71",
"min_disk": 20,
"protected": false,
"id": "386f0425-3ee8-4688-b73f-272328fe4c71",
"architecture": "x86_64",
"file": "/v2/images/386f0425-3ee8-4688-b73f-272328fe4c71/file",
"checksum": "061d01418b94d4743a98ee26d941e87c",
"owner": "057aad9fa85b4e29b23e7888000446ef",
"virtual_size": null,
"min_ram": 0,
"schema": "/v2/schemas/image"
}

When I send this PATCH request to update it:

[{"op":"replace","path":"/tags/0","value":"10"}]

I get back the following 400 error:

"""

 
  400 Bad Request
 
 
  400 Bad Request
  Invalid JSON pointer for this resource: '/tags/0'


 

"""

"/tags/0" is a correct pointer, however, which should be supported in a
"replace" op. Why doesn't it work?

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521607

Title:
  v2 - replacing array elements with PATCH results in 400 error

Status in Glance:
  New

Bug description:
  I have the following image:

  {
  "status": "active",
  "name": "foo",
  "tags": [
  "1",
  "3",
  "2"
  ],
  "container_format": "ami",
  "created_at": "2015-11-12T14:26:08Z",
  "size": 983040,
  "disk_format": "ami",
  "updated_at": "2015-12-01T12:25:42Z",
  "visibility": "public",
  "self": "/v2/images/386f0425-3ee8-4688-b73f-272328fe4c71",
  "min_disk": 20,
  "protected": false,
  "id": "386f0425-3ee8-4688-b73f-272328fe4c71",
  "architecture": "x86_64",
  "file": "/v2/images/386f0425-3ee8-4688-b73f-272328fe4c71/file",
  "checksum": "061d01418b94d4743a98ee26d941e87c",
  "owner": "057aad9fa85b4e29b23e7888000446ef",
  "virtual_size": null,
  "min_ram": 0,
  "schema": "/v2/schemas/image"
  }

  When I send this PATCH request to update it:

  [{"op":"replace","path":"/tags/0","value":"10"}]

  I get back the following 400 error:

  """
  
   
400 Bad Request
   
   
400 Bad Request
Invalid JSON pointer for this resource: '/tags/0'


   
  
  """

  "/tags/0" is a correct pointer, however, which should be supported in
  a "replace" op. Why doesn't it work?

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521599] [NEW] py34 unit tests fail randomly for network.test_neutronv2

2015-12-01 Thread Markus Zoeller (markus_z)
Public bug reported:

Description
===
The following unit tests fail **randomly** in the "gate-nova-python34" check 
queue:
* 
nova.tests.unit.network.test_neutronv2.TestNeutronv2.test_deallocate_for_instance_2_with_requested
* 
nova.tests.unit.network.test_neutronv2.TestNeutronv2.test_deallocate_for_instance_2
At least I don't see the root cause for this.

Steps to reproduce
==
I discovered this with this review https://review.openstack.org/#/c/250907/3

Expected result
===
The "gate-nova-python34" check should pass the neutron tests as the review 
doesn't contain any network related changes.

Actual result
=
b'mox3.mox.ExpectedMethodCallsError: Verify: Expected methods never called:'
b"  0.  Client.delete_port('my_portid1') -> None"

and

b'mox3.mox.UnexpectedMethodCallError: Unexpected method call.  unexpected:-  
expected:+'
b"- Client.delete_port('my_portid1') -> None"
b'?  ^'
b"+ Client.delete_port('my_portid2') -> None"
b'? 

see http://paste.openstack.org/show/480490/

The logstash query shows a few hits since 2015-11-23 (see below)

Environment
===
* Master code (Mitaka cycle)
* gate-nova-python34
* https://review.openstack.org/#/c/250907/3

Logs

* 
http://logs.openstack.org/07/250907/3/check/gate-nova-python34/d73accd/console.html
* 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=+message:%5C%22Client.delete_port('my_portid1')%5C%22%20+project:%5C%22openstack/nova%5C%22

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521599

Title:
  py34 unit tests fail randomly for network.test_neutronv2

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The following unit tests fail **randomly** in the "gate-nova-python34" check 
queue:
  * 
nova.tests.unit.network.test_neutronv2.TestNeutronv2.test_deallocate_for_instance_2_with_requested
  * 
nova.tests.unit.network.test_neutronv2.TestNeutronv2.test_deallocate_for_instance_2
  At least I don't see the root cause for this.

  Steps to reproduce
  ==
  I discovered this with this review https://review.openstack.org/#/c/250907/3

  Expected result
  ===
  The "gate-nova-python34" check should pass the neutron tests as the review 
doesn't contain any network related changes.

  Actual result
  =
  b'mox3.mox.ExpectedMethodCallsError: Verify: Expected methods never called:'
  b"  0.  Client.delete_port('my_portid1') -> None"

  and

  b'mox3.mox.UnexpectedMethodCallError: Unexpected method call.  unexpected:-  
expected:+'
  b"- Client.delete_port('my_portid1') -> None"
  b'?  ^'
  b"+ Client.delete_port('my_portid2') -> None"
  b'? 

  see http://paste.openstack.org/show/480490/

  The logstash query shows a few hits since 2015-11-23 (see below)

  Environment
  ===
  * Master code (Mitaka cycle)
  * gate-nova-python34
  * https://review.openstack.org/#/c/250907/3

  Logs
  
  * 
http://logs.openstack.org/07/250907/3/check/gate-nova-python34/d73accd/console.html
  * 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=+message:%5C%22Client.delete_port('my_portid1')%5C%22%20+project:%5C%22openstack/nova%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518431] Re: Glance failed to upload image to swift storage

2015-12-01 Thread Andrey Shestakov
** Also affects: mos
   Importance: Undecided
   Status: New

** Changed in: mos
   Status: New => Confirmed

** Changed in: mos
 Assignee: (unassigned) => MOS Glance (mos-glance)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1518431

Title:
  Glance failed to upload image to swift storage

Status in Glance:
  Confirmed
Status in Mirantis OpenStack:
  Confirmed

Bug description:
  When glance is configured with the swift backend, and the swift API is
  provided via RadosGW, it is unable to upload an image.

  Command:
  glance --debug image-create --name trusty_ext4 --disk-format raw 
--container-format bare --file trusty-server-cloudimg-amd64.img --visibility 
public --progress
  Logs:
  http://paste.openstack.org/show/479621/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1518431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521591] [NEW] v2 - replacing root document returns a schema error

2015-12-01 Thread Jamie Hannaford
Public bug reported:

According to the JSON Patch spec, you can replace the entire document
using an empty string "" as the JSON pointer. If the original schema
looked like this:

{
   "type": "object",
   "properties": {
  "foo": {"type": "string"},
  "bar": {"type": "string"}
   }
}

the PATCH doc could look something like this:

{
   "op": "replace",
   "path": "",
   "value": {
  "foo": "val1",
  "bar": "val2"
}
}

but when you try to update a Glance image this way, you get back the
following error:

"""

 
  400 Bad Request
 
 
  400 Bad Request
  Pointer `` does not start with /.


 

"""

I have two problems with this:

1. A leading slash is not a requirement for a valid JSON pointer
2. Why is HTML being used as the output format on a JSON API?

This test should demonstrate what I mean:

https://github.com/json-patch/json-patch-
tests/blob/master/tests.json#L180-L183
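
The same spec-level behavior can be exercised with the standalone Python
jsonpatch library (assumed installed), which treats the empty-string
pointer as the whole document, no leading slash required:

import jsonpatch

doc = {"foo": "old", "bar": "old"}
patch = [{"op": "replace", "path": "",
          "value": {"foo": "val1", "bar": "val2"}}]
# The entire document is replaced by the new value:
print(jsonpatch.apply_patch(doc, patch))  # {'foo': 'val1', 'bar': 'val2'}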

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521591

Title:
  v2 - replacing root document returns a schema error

Status in Glance:
  New

Bug description:
  According to the JSON Patch spec, you can replace the entire document
  using an empty string "" as the JSON pointer. If the original schema
  looked like this:

  {
 "type": "object",
 "properties": {
"foo": {"type": "string"},
"bar": {"type": "string"}
 }
  }

  the PATCH doc could look something like this:

  {
 "op": "replace",
 "path": "",
 "value": {
"foo": "val1",
"bar": "val2"
  }
  }

  but when you try to update a Glance image this way, you get back the
following error:

  """
  
   
400 Bad Request
   
   
400 Bad Request
Pointer `` does not start with /.


   
  
  """

  I have two problems with this:

  1. A leading slash is not a requirement for a valid JSON pointer
  2. Why is HTML being used as the output format on a JSON API?

  This test should demonstrate what I mean:

  https://github.com/json-patch/json-patch-
  tests/blob/master/tests.json#L180-L183

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521591/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521595] [NEW] Each page refresh always trigger a get on nova extensions

2015-12-01 Thread Lin Hua Cheng
Public bug reported:

related change:
https://github.com/openstack/horizon/commit/4556bc808635d0f0b77139e6b1f2c25f3f4c1093

We should investigate if this can be cached statically

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1521595

Title:
  Each page refresh always trigger a get on nova extensions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  related change:
  
https://github.com/openstack/horizon/commit/4556bc808635d0f0b77139e6b1f2c25f3f4c1093

  We should investigate if this can be cached statically

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1521595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467560] Re: RFE: add instance uuid field to nova.quota_usages table

2015-12-01 Thread Markus Zoeller (markus_z)
@Dan Yocum:

IIUC in comment #5, there is no longer a need for this bug, as the effort
will be driven by the blueprint mentioned in comment #4. I'm setting this
bug to "invalid"; I don't see a status which fits better.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467560

Title:
  RFE: add instance uuid field to nova.quota_usages table

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In Icehouse, the nova.quota_usages table frequently gets out-of-sync
  with the currently active/stopped instances in a tenant/project,
  specifically, there are times when the instance will be set to
  terminated/deleted in the instances table and the quota_usages table
  will retain the data, counting against the tenant's total quota.  As
  far as I can tell there is no way to correlate instances.uuid with the
  records in nova.quota_usages.

  I propose adding an instance uuid column to make future cleanup of
  this table easier.

  I also propose a housecleaning task that does this clean up
  automatically.

  Thanks,
  Dan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521620] [NEW] Neutron-lbaas gate broken due to pep8

2015-12-01 Thread Gary Kotton
Public bug reported:

2015-12-01 11:39:24.491 | pep8 runtests: PYTHONHASHSEED='3236704285'
2015-12-01 11:39:24.491 | pep8 runtests: commands[0] | flake8
2015-12-01 11:39:24.492 |   /home/jenkins/workspace/gate-neutron-lbaas-pep8$ 
/home/jenkins/workspace/gate-neutron-lbaas-pep8/.tox/pep8/bin/flake8 
2015-12-01 11:39:27.194 | pep8 runtests: commands[1] | pylint 
--rcfile=.pylintrc --output-format=colorized neutron_lbaas
2015-12-01 11:39:27.194 |   /home/jenkins/workspace/gate-neutron-lbaas-pep8$ 
/home/jenkins/workspace/gate-neutron-lbaas-pep8/.tox/pep8/bin/pylint 
--rcfile=.pylintrc --output-format=colorized neutron_lbaas 
2015-12-01 11:39:27.513 | Warning: option ignore-iface-methods is obsolete and 
it is slated for removal in Pylint 1.6.
2015-12-01 11:39:46.623 | * Module 
neutron_lbaas.agent_scheduler
2015-12-01 11:39:46.623 | C: 22, 0: external import "from oslo_log 
import log as logging" comes before "from neutron.db import agents_db" 
(wrong-import-order)
2015-12-01 11:39:46.623 | C: 23, 0: external import "import six" 
comes before "from neutron.db import agents_db" (wrong-import-order)
2015-12-01 11:39:46.624 | C: 24, 0: external import "import 
sqlalchemy as sa" comes before "from neutron.db import agents_db" 
(wrong-import-order)
2015-12-01 11:39:46.624 | C: 25, 0: external import "from 
sqlalchemy import orm" comes before "from neutron.db import agents_db" 
(wrong-import-order)
2015-12-01 11:39:46.624 | C: 26, 0: external import "from 
sqlalchemy.orm import joinedload" comes before "from neutron.db import 
agents_db" (wrong-import-order)
2015-12-01 11:39:46.624 | * Module 
neutron_lbaas.db.loadbalancer.loadbalancer_dbv2
2015-12-01 11:39:46.624 | C: 25, 0: external import "from oslo_db 
import exception" comes before "from neutron.api.v2 import attributes" 
(wrong-import-order)
2015-12-01 11:39:46.624 | C: 26, 0: external import "from oslo_log 
import log as logging" comes before "from neutron.api.v2 import attributes" 
(wrong-import-order)
2015-12-01 11:39:46.624 | C: 27, 0: external import "from 
oslo_utils import excutils" comes before "from neutron.api.v2 import 
attributes" (wrong-import-order)
2015-12-01 11:39:46.624 | C: 28, 0: external import "from 
oslo_utils import uuidutils" comes before "from neutron.api.v2 import 
attributes" (wrong-import-order)
2015-12-01 11:39:46.624 | C: 29, 0: external import "from 
sqlalchemy import orm" comes before "from neutron.api.v2 import attributes" 
(wrong-import-order)
2015-12-01 11:39:46.624 | C: 30, 0: external import "from 
sqlalchemy.orm import exc" comes before "from neutron.api.v2 import 
attributes" (wrong-import-order)
2015-12-01 11:39:46.624 | * Module 
neutron_lbaas.db.loadbalancer.models
2015-12-01 11:39:46.625 | C: 20, 0: external import "import 
sqlalchemy as sa" comes before "from neutron.db import model_base" 
(wrong-import-order)
2015-12-01 11:39:46.625 | C: 21, 0: external import "from 
sqlalchemy.ext import orderinglist" comes before "from neutron.db import 
model_base" (wrong-import-order)
2015-12-01 11:39:46.625 | C: 22, 0: external import "from 
sqlalchemy import orm" comes before "from neutron.db import model_base" 
(wrong-import-order)
2015-12-01 11:39:46.625 | * Module 
neutron_lbaas.db.loadbalancer.loadbalancer_db
2015-12-01 11:39:46.625 | C: 29, 0: external import "from oslo_db 
import exception" comes before "from neutron.api.v2 import attributes" 
(wrong-import-order)
2015-12-01 11:39:46.625 | C: 30, 0: external import "from oslo_log 
import log as logging" comes before "from neutron.api.v2 import attributes" 
(wrong-import-order)
2015-12-01 11:39:46.625 | C: 31, 0: external import "from 
oslo_utils import excutils" comes before "from neutron.api.v2 import 
attributes" (wrong-import-order)
2015-12-01 11:39:46.625 | C: 32, 0: external import "from 
oslo_utils import uuidutils" comes before "from neutron.api.v2 import 
attributes" (wrong-import-order)
2015-12-01 11:39:46.625 | C: 33, 0: external import "import 
sqlalchemy as sa" comes before "from neutron.api.v2 import attributes" 
(wrong-import-order)
2015-12-01 11:39:46.625 | C: 34, 0: external import "from 
sqlalchemy import orm" comes before "from neutron.api.v2 import attributes" 
(wrong-import-order)
2015-12-01 11:39:46.625 | C: 35, 0: external import "from 
sqlalchemy.orm import exc" comes before "from neutron.api.v2 import 
attributes" (wrong-import-order)
2015-12-01 11:39:46.626 | 

[Yahoo-eng-team] [Bug 1518431] Re: Glance failed to upload image to swift storage

2015-12-01 Thread Roman Podoliaka
** No longer affects: mos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1518431

Title:
  Glance failed to upload image to swift storage

Status in Glance:
  Confirmed

Bug description:
  When glance is configured with the swift backend, and the swift API is
  provided via RadosGW, it is unable to upload an image.

  Command:
  glance --debug image-create --name trusty_ext4 --disk-format raw 
--container-format bare --file trusty-server-cloudimg-amd64.img --visibility 
public --progress
  Logs:
  http://paste.openstack.org/show/479621/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1518431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521675] [NEW] n-api-meta handler could be more efficient with db

2015-12-01 Thread Matt Riedemann
Public bug reported:

The nova API metadata handler has some flows like this where it's
getting instance metadata by the fixed IP:

https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/handler.py#L88

That leads to queries to neutron (if using neutron) to list ports by
that fixed IP and then get the instance uuid (via device_id on the port)
for the fixed_ip:

https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L533

And then we get the instance object via that uuid:

https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L544

Note we're only joining on these fields:

expected_attrs=['ec2_ids', 'flavor', 'info_cache']

But when constructing the InstanceMetadata object, we're loading up
security groups separately:

https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L130

Lazy loading 'metadata':

https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L143

And lazy loading system_metadata:

https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L145

https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/password.py#L32

We can load the metadata/system_metadata/security_groups when we get the
instance object from the database the first time, which would avoid
these extra queries to the database, which requires more round trips
through to conductor.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Confirmed


** Tags: api metadata performance

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Importance: Undecided => Medium

** Tags added: api metadata performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521675

Title:
  n-api-meta handler could be more efficient with db

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The nova API metadata handler has some flows like this where it's
  getting instance metadata by the fixed IP:

  
https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/handler.py#L88

  That leads to queries to neutron (if using neutron) to list ports by
  that fixed IP and then get the instance uuid (via device_id on the
  port) for the fixed_ip:

  
https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L533

  And then we get the instance object via that uuid:

  
https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L544

  Note we're only joining on these fields:

  expected_attrs=['ec2_ids', 'flavor', 'info_cache']

  But when constructing the InstanceMetadata object, we're loading up
  security groups separately:

  
https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L130

  Lazy loading 'metadata':

  
https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L143

  And lazy loading system_metadata:

  
https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/base.py#L145

  
https://github.com/openstack/nova/blob/7fc982e19f94a4624d54c3ac113057bed7750ec4/nova/api/metadata/password.py#L32

  We can load the metadata/system_metadata/security_groups when we get
  the instance object from the database the first time, which would
  avoid these extra queries to the database, which requires more round
  trips through to conductor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423972] Re: cloud-init user-data mime conversion fails on base64 encoded data

2015-12-01 Thread Mathew Hodson
** Package changed: ubuntu => cloud-init (Ubuntu)

** Changed in: cloud-init (Ubuntu)
Milestone: ubuntu-15.03 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1423972

Title:
  cloud-init user-data mime conversion fails on base64 encoded data

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  In Progress

Bug description:
  Cloud-init's conversion of user-data to mime fails when the user-data
  is base64 encoded due to the Python2 to Python3 switch.
  base64.b64decode in Python 2 returns a string, whilst Python3 returns
  a byte stream.

  Consider:
    import base64

    hi = "aGkK"
    print(type(base64.b64decode(hi)).__name__)
    if 'hi' in str(base64.b64decode(hi)):
        print("works")
    if 'hi' in base64.b64decode(hi):
        print("works on Py2")

  ben@prongs:~$ python /tmp/proof.py 
  str
  works
  works on Py2
  ben@prongs:~$ python3 /tmp/proof.py 
  bytes
  works
  Traceback (most recent call last):
File "/tmp/proof.py", line 10, in 
  if 'hi' in base64.b64decode(hi):
  TypeError: Type str doesn't support the buffer API
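
  One way to make the membership test portable (an illustrative fix, not
  necessarily the committed one) is to decode the bytes to text first:

    import base64

    hi = "aGkK"
    # b64decode returns bytes on Python 3; decode before substring tests.
    decoded = base64.b64decode(hi).decode('utf-8')
    if 'hi' in decoded:
        print("works on Py2 and Py3")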

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1423972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486388] Re: use timestamp of resources to reduce the agent sync load

2015-12-01 Thread Armando Migliaccio
Based on discussion [1], there's overlapping work [2] that looks more
promising and less prone to issues

[1] 
http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-12-01-15.00.log.html
[2] https://bugs.launchpad.net/neutron/+bug/1516195

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486388

Title:
  use timestamp of resources to reduce the agent sync load

Status in neutron:
  Won't Fix

Bug description:
  Problem Description
  ===

  Agents need to resync with the neutron server for various reasons from
time to time.
  These syncs will consume lots of resources for the neutron server,
database, message queues, etc.

  
  Proposed Change
  ===

  Add an update timestamp to neutron resources, and keep related resources
and their synced timestamp on the agent side.  When a resync is needed, the
agent sends its resync timestamp to the neutron server; the neutron server
compares the timestamp with its resources, and then sends only the newer
resources to the agent.
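
  A minimal sketch of the proposed server-side comparison (all names here
  are assumptions, not neutron code):

  def resources_updated_since(resources, agent_sync_timestamp):
      # Return only the resources that changed after the agent's last
      # sync, so unchanged resources never cross the message queue.
      return [r for r in resources
              if r['updated_at'] > agent_sync_timestamp]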

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508384] Re: QoS proxy functions

2015-12-01 Thread Armando Migliaccio
Based on discussion [1], there does not seem to be much of a desire for
this initiative, especially in light of the cost involved to embrace
this type of framework.

[1]
http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-12-01-15.00.log.html

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508384

Title:
  QoS proxy functions

Status in neutron:
  Won't Fix

Bug description:
  The current QoS API is structured so that rules that are added to the API 
need to be added to the neutron client as well.
  I propose the use of proxy functions in neutron that determine which 
functions to use based on the rule type retrieved using the rule_id or 
specified through the command line. These proxy functions will take the rule_id 
or rule_type, policy_id and a list containing the rest of the command line 
arguments and send them to the corresponding function of that rule.

  This would allow new rules to be added to the QoS API without needing
  to update the neutron client.

  i.e
  replace:
  qos-bandwidth-limit-rule-create 
  with
  qos-rule-create  

  and

  replace:
  qos-bandwidth-limit-rule-update  
  with
  qos-rule-update  

  Further discussion and ideas would be appreciated.
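
  An illustrative sketch of the proposed proxy dispatch (all names here
  are assumptions, not actual neutron client code):

  def create_bandwidth_limit_rule(policy_id, *args):
      print("bandwidth-limit rule on %s: %s" % (policy_id, args))

  RULE_HANDLERS = {'bandwidth_limit': create_bandwidth_limit_rule}

  def qos_rule_create(rule_type, policy_id, *args):
      # Look up the handler for the rule type; a new rule type only needs
      # a new table entry, not a new client command.
      try:
          handler = RULE_HANDLERS[rule_type]
      except KeyError:
          raise ValueError("unknown QoS rule type: %s" % rule_type)
      return handler(policy_id, *args)

  qos_rule_create('bandwidth_limit', 'policy-1', 'max-kbps=1000')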

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp