[Yahoo-eng-team] [Bug 1413610] Re: Nova volume-update leaves volumes stuck in attaching/detaching

2015-10-15 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413610

Title:
  Nova volume-update leaves volumes stuck in attaching/detaching

Status in Cinder:
  Incomplete
Status in OpenStack Compute (nova):
  Expired

Bug description:
  There is a problem with the nova command 'volume-update' that leaves
  cinder volumes stuck in the states 'attaching' and 'detaching'.

  If the nova command 'volume-update' is run by a non-admin user, the
  command fails and the volumes referenced in the command are left in
  the states 'attaching' and 'detaching'.

  
  For example, if a non-admin user runs the command

   $ nova volume-update d39dc7f2-929d-49bb-b22f-56adb3f378c7 f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b 59b0cf66-67c8-4041-a505-78000b9c71f6

   the two volumes will be left stuck like this:

   $ cinder list
   +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
   |                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
   +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
   | 59b0cf66-67c8-4041-a505-78000b9c71f6 | attaching |     vol2     |  1   |     None    |  false   |                                      |
   | f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b | detaching |     vol1     |  1   |     None    |  false   | d39dc7f2-929d-49bb-b22f-56adb3f378c7 |
   +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+

  
  And the following in the cinder-api log:

  
  2015-01-21 11:00:03.969 13588 DEBUG keystonemiddleware.auth_token [-] 
Received request from user: user_id None, project_id None, roles None service: 
user_id None, project_id None, roles None __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:746
  2015-01-21 11:00:03.970 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Matched POST 
/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:100
  2015-01-21 11:00:03.971 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Route path: 
'/{project_id}/volumes/:(id)/action', defaults: {'action': u'action', 
'controller': } 
__call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:102
  2015-01-21 11:00:03.971 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Match dict: {'action': u'action', 
'controller': , 
'project_id': u'd40e3207e34a4b558bf2d58bd3fe268a', 'id': 
u'f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b'} __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:103
  2015-01-21 11:00:03.972 13588 INFO cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] POST 
http://192.0.2.24:8776/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
  2015-01-21 11:00:03.972 13588 DEBUG cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Action body: 
{"os-migrate_volume_completion": {"new_volume": 
"59b0cf66-67c8-4041-a505-78000b9c71f6", "error": false}} get_method 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:1010
  2015-01-21 11:00:03.973 13588 INFO cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] 
http://192.0.2.24:8776/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 returned with HTTP 403
  2015-01-21 11:00:03.975 13588 INFO eventlet.wsgi.server 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] 127.0.0.1 - - [21/Jan/2015 11:00:03] 
"POST 
/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 HTTP/1.1" 403 429 0.123613


  
  The problem is that the nova policy.json file allows a non-admin user to run
  the command 'volume-update', but the cinder policy.json file requires the
  admin role to run the action os-migrate_volume_completion.
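A minimal Python sketch of the failure mode described above (illustrative names and flow, not the actual nova/cinder code): nova flips both volumes into transient states before asking cinder to complete the migration, so cinder's 403 policy rejection leaves them stuck unless the caller rolls the states back.

```python
class Forbidden(Exception):
    """Stands in for the HTTP 403 cinder returns to non-admin callers."""

def migrate_volume_completion(states, old_id, new_id, is_admin):
    # cinder's policy.json requires the admin role for this action
    if not is_admin:
        raise Forbidden("os-migrate_volume_completion requires admin")
    states[old_id], states[new_id] = "available", "in-use"

def volume_update(states, old_id, new_id, is_admin, rollback):
    # nova's policy.json lets any user start the swap...
    states[old_id], states[new_id] = "detaching", "attaching"
    try:
        migrate_volume_completion(states, old_id, new_id, is_admin)
    except Forbidden:
        if rollback:
            # ...so it must undo the transient states when cinder refuses
            states[old_id], states[new_id] = "in-use", "available"
        raise
```

With rollback=False (the reported behaviour) the volumes stay in 'detaching'/'attaching'; with rollback=True they return to their original states.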

[Yahoo-eng-team] [Bug 1506580] Re: Request for first release of networking-calico

2015-10-15 Thread Kyle Mestery
And the release is on pypi [1].

[1] https://pypi.python.org/pypi/networking-calico/1.0.0

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron
Milestone: None => mitaka-1

** Changed in: networking-calico
   Status: New => Fix Released

** Changed in: networking-calico
 Assignee: (unassigned) => Kyle Mestery (mestery)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506580

Title:
  Request for first release of networking-calico

Status in networking-calico:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Please could you tag and release networking-calico, as it currently
  stands at:

  https://git.openstack.org/cgit/openstack/networking-calico

  neil@nj-ubuntu:~/calico/networking-calico$ git log -1
  commit 792f1cf63e63ce4660f837d261e9c2bb66434f7b
  Author: Neil Jerram 
  Date:   Thu Oct 15 16:07:05 2015 +0100

  DevStack plugin doc: mention the bootstrap script
  
  Change-Id: I4ea7622d815be946e2dfec445c071d1db8d8bc07

  This will be the first ever release of networking-calico, so I guess
  it will be 1.0.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-calico/+bug/1506580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506701] [NEW] metadata service security-groups doesn't work with neutron

2015-10-15 Thread Sam Morrison
Public bug reported:

Using the metadata service to get the security groups for an instance,
e.g.

curl http://169.254.169.254/latest/meta-data/security-groups

does not work when you are using neutron. This is because the metadata
server is hard-coded to look for security groups in the nova DB.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506701

Title:
  metadata service security-groups doesn't work with neutron

Status in OpenStack Compute (nova):
  New

Bug description:
  Using the metadata service to get the security groups for an instance,
  e.g.

  curl http://169.254.169.254/latest/meta-data/security-groups

  does not work when you are using neutron. This is because the metadata
  server is hard-coded to look for security groups in the nova DB.
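A minimal sketch of the change the report implies (illustrative names, not the actual nova code): pick the security-group source from configuration instead of always reading nova's own tables.

```python
def metadata_security_groups(instance, security_group_api,
                             nova_db_lookup, neutron_lookup):
    """Return the instance's security group names.

    security_group_api: which backend is configured ("nova" or "neutron").
    nova_db_lookup / neutron_lookup: callables taking the instance and
    returning a list of group names (stand-ins for the real lookups).
    """
    if security_group_api == "neutron":
        # groups are bound to the instance's neutron ports, not nova's DB
        return neutron_lookup(instance)
    # legacy nova-network path: groups live in nova's own tables
    return nova_db_lookup(instance)
```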

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506701/+subscriptions



[Yahoo-eng-team] [Bug 1377841] Re: Instances won't obtain IPv6 address and gateway when using SLAAC provided by OpenStack

2015-10-15 Thread Sean M. Collins
You must attach a router to the network when ipv6_address_mode is set to
"slaac" for instances to get addresses. You specified via the API that
you want OpenStack to use stateless address autoconfiguration, and that
requires a router to transmit RAs.
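For example (Juno-era neutron CLI; the router name is a placeholder), attaching a router interface to the subnet from the report gives it a source of RAs:

```
$ neutron router-create router1
$ neutron router-interface-add router1 usecase1_ipv6_slaac
```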

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377841

Title:
  Instances won't obtain IPv6 address and gateway when using SLAAC
  provided by OpenStack

Status in neutron:
  Invalid

Bug description:
  Description of problem:
  ===
  I Created an IPv6 subnet with:
  1. ipv6_ra_mode: slaac
  2. ipv6_address_mode: slaac

  Version-Release number of selected component (if applicable):
  =
  openstack-neutron-2014.2-0.7.b3

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. create a neutron network
  2. create an IPv6 subnet:
  # neutron subnet-create 2001:db1::/64 --name usecase1_ipv6_slaac --ipv6-address-mode slaac --ipv6-ra-mode slaac --ip-version 6
  3. boot an instance with that network

  Actual results:
  ===
  1. Instance did not obtain IPv6 address
  2. default gw is not set

  Expected results:
  =
  The instance should have an IPv6 address and a default gw configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377841/+subscriptions



[Yahoo-eng-team] [Bug 1401170] Re: 0-size images allow unprivileged user to deplete glance resources

2015-10-15 Thread Nathan Kinder
This has been published as OSSN-0057:

  https://wiki.openstack.org/wiki/OSSN/OSSN-0057

** Changed in: ossn
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1401170

Title:
  0-size images allow unprivileged user to deplete glance resources

Status in Glance:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  Glance allows users to create 0-size images ('glance image-create' without
  parameters). Those images do not consume storage-backend resources and do
  not hit any size limits, but they do take up space in the database.

  A malicious user can cause database resource depletion with an endless
  flood of 'image-create' requests. Because an empty request is small, it
  puts more strain on OpenStack than on the attacker.

  A RateLimit on API requests delays the consequences of the attack, but
  does not prevent it.

  Here is a simple script to run the attack:
  while true; do curl -i -X POST -H 'X-Auth-Token: ***' http://glance-endpoint:9292/v1/images; done

  My estimate for database growth is about 1MB/minute (with an extra-slow
  shell-based attack; a specially crafted script could run at the RateLimit
  speed).
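A hypothetical mitigation sketch (this is not the OSSN-0057 text; names and the limit are illustrative): treat the *number* of image records as a quota, so 0-size registrations are bounded even though they bypass per-image size limits.

```python
class OverLimit(Exception):
    pass

def check_image_count_quota(owner, counts, max_images=100):
    """Raise OverLimit once a tenant already owns max_images image rows.

    counts: mutable mapping of tenant -> current image-row count.
    """
    if counts.get(owner, 0) >= max_images:
        raise OverLimit("image count quota exceeded for tenant %s" % owner)
    # record the new image row against the tenant's quota
    counts[owner] = counts.get(owner, 0) + 1
```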

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1401170/+subscriptions



[Yahoo-eng-team] [Bug 1506653] [NEW] Retrieving either a project's parents or subtree as_list does not work

2015-10-15 Thread Timothy Symanczyk
Public bug reported:

To reproduce this I created five projects ("1", "2", "3", "4", "5") -
with "1" as the top level project, and each subsequent project as a
child of the previous. All four of the following calls were performed
against project "3".

parents_as_list (NON-working) :
$ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?parents_as_list
{
  "project": {
    "name": "3",
    "is_domain": false,
    "description": "",
    "links": {
      "self": "http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
    },
    "enabled": true,
    "id": "b4a6fb7dcc504373a2e1301ab357d248",
    "parent_id": "0b09fce9246f42dda11125d4d32aa013",
    "parents": [],
    "domain_id": "default"
  }
}

parents_as_ids (working) :
$ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?parents_as_ids
{
  "project": {
    "name": "3",
    "is_domain": false,
    "description": "",
    "links": {
      "self": "http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
    },
    "enabled": true,
    "id": "b4a6fb7dcc504373a2e1301ab357d248",
    "parent_id": "0b09fce9246f42dda11125d4d32aa013",
    "parents": {
      "0b09fce9246f42dda11125d4d32aa013": {
        "7092bca4a8d444619bcee53a47585876": null
      }
    },
    "domain_id": "default"
  }
}

subtree_as_list (NON-working) :
$ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?subtree_as_list
{
  "project": {
    "name": "3",
    "is_domain": false,
    "description": "",
    "links": {
      "self": "http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
    },
    "enabled": true,
    "subtree": [],
    "id": "b4a6fb7dcc504373a2e1301ab357d248",
    "parent_id": "0b09fce9246f42dda11125d4d32aa013",
    "domain_id": "default"
  }
}

subtree_as_ids (working) :
$ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?subtree_as_ids
{
  "project": {
    "name": "3",
    "is_domain": false,
    "description": "",
    "links": {
      "self": "http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
    },
    "enabled": true,
    "subtree": {
      "421143ab145e4b278d1b971d6509dd23": {
        "1484a4e8493d4f3eb6a81bef582f455a": null
      }
    },
    "id": "b4a6fb7dcc504373a2e1301ab357d248",
    "parent_id": "0b09fce9246f42dda11125d4d32aa013",
    "domain_id": "default"
  }
}

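For reference, the two query flavors should describe the same hierarchy in different shapes. A small sketch (illustrative, not keystone code) that flattens the nested `*_as_ids` form into the flat list of project ids the `*_as_list` variants are expected to cover:

```python
def ids_tree_to_list(tree):
    """Flatten the nested {project_id: subtree} mapping returned by the
    *_as_ids variants into a flat list of project ids (parents first)."""
    ids = []
    for project_id, subtree in (tree or {}).items():
        ids.append(project_id)
        ids.extend(ids_tree_to_list(subtree))  # recurse into children
    return ids
```

Applied to the non-empty parents_as_ids/subtree_as_ids payloads above, this yields two ids each, which is why the empty `"parents": []` and `"subtree": []` results look inconsistent.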
** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1506653

Title:
  Retrieving either a project's parents or subtree as_list does not work

Status in Keystone:
  New

Bug description:
  To reproduce this I created five projects ("1", "2", "3", "4", "5") -
  with "1" as the top level project, and each subsequent project as a
  child of the previous. All four of the following calls were performed
  against project "3".

  parents_as_list (NON-working) :
  $ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?parents_as_list
  {
    "project": {
      "name": "3",
      "is_domain": false,
      "description": "",
      "links": {
        "self": "http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
      },
      "enabled": true,
      "id": "b4a6fb7dcc504373a2e1301ab357d248",
      "parent_id": "0b09fce9246f42dda11125d4d32aa013",
      "parents": [],
      "domain_id": "default"
    }
  }

  parents_as_ids (working) :
  $ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?parents_as_ids
  {
    "project": {
      "name": "3",
      "is_domain": false,
      "description": "",
      "links": {
        "self": "http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
      },
      "enabled": true,
      "id": "b4a6fb7dcc504373a2e1301ab357d248",
      "parent_id": "0b09fce9246f42dda11125d4d32aa013",
      "parents": {
        "0b09fce9246f42dda11125d4d32aa013": {
          "7092bca4a8d444619bcee53a47585876": null
        }
      },
      "domain_id": "default"
    }
  }

  subtree_as_list (NON-working) :
  $ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" 
http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?subtree_as_list
  {
"project": {
  "name": "3",
  "is_domain": false,
  "description": "",
  "links": {
"self": 
"http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248";
  },
  "enabled": true,
  "subtree": [],
  "id": "b4a6fb7dcc504373a2e1301ab

[Yahoo-eng-team] [Bug 957009] Re: Instance task_state remains in 'deleting' state if Compute server is down

2015-10-15 Thread Sumant Murke
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/957009

Title:
  Instance task_state remains in 'deleting' state if Compute server is
  down

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Scenario: If the Compute server where the instance is running goes down and
  a delete request for the instance is received, the instance remains stuck in
  the 'deleting' task_state.
  Verify the response in the database or dashboard.

  Expected Response: The vm_state of the instance must be 'error'.
  Actual Response: The vm_state remains 'active' and the task_state is
  'deleting'.

  Branch: master

  The Nova API must check whether the Compute server (hosting the instance)
  is up and, if it is down, update the vm_state in the database to 'error'.
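A minimal sketch of the check the reporter proposes (illustrative names, not the actual nova API code): consult service liveness before accepting the delete, and flag the instance instead of leaving it stuck.

```python
def request_delete(instance, service_is_up):
    """instance: dict with 'host', 'vm_state', 'task_state'.
    service_is_up: callable taking a host name and returning bool."""
    if not service_is_up(instance["host"]):
        # host is down: surface the failure rather than leaving the
        # instance with task_state='deleting' forever
        instance["vm_state"] = "error"
        return False
    instance["task_state"] = "deleting"
    return True
```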

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/957009/+subscriptions



[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2015-10-15 Thread Sergey Lukjanov
** Also affects: sahara/liberty
   Importance: Undecided
   Status: New

** Also affects: sahara/mitaka
   Importance: High
 Assignee: Michael McCune (mimccune)
   Status: Fix Committed

** Changed in: sahara/liberty
   Status: New => In Progress

** Changed in: sahara/liberty
   Importance: Undecided => High

** Changed in: sahara/liberty
 Assignee: (unassigned) => Sergey Lukjanov (slukjanov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Committed
Status in Ironic:
  Fix Committed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Committed
Status in Sahara liberty series:
  In Progress
Status in Sahara mitaka series:
  Fix Committed

Bug description:
  As of https://review.openstack.org/#/c/217347/ oslo.db no longer has
  testresources or testscenarios in its requirements, so the next release of
  oslo.db will break several projects. Projects that use fixtures from
  oslo.db should add these packages to their own requirements if they need
  them.

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in 
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423
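The fix on the consuming projects' side is a small requirements change, e.g. (version pins illustrative; the exact pins should follow global-requirements):

```
# test-requirements.txt
testresources>=0.2.4   # needed by oslo.db test fixtures
testscenarios>=0.4
```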

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions



[Yahoo-eng-team] [Bug 1506594] [NEW] Keystone endpoint can not resolve DNS

2015-10-15 Thread Alfred Shen
Public bug reported:

Keystone does not seem to be able to resolve DNS if endpoints are
configured with a hostname instead of an IP.

MariaDB [keystone]> select * from endpoint where url like '%500%' or '%3535%';;
+----------------------------------+----------------------------------+-----------+----------------------------------+---------------------------+-------+---------+-----------+
| id                               | legacy_endpoint_id               | interface | service_id                       | url                       | extra | enabled | region_id |
+----------------------------------+----------------------------------+-----------+----------------------------------+---------------------------+-------+---------+-----------+
| 39cf5bb25fef4f01a1bc2b83f76ce8dd | 99c316ae7cbd449ebccfd6efe1c5d03c | admin     | 439e97af7e4b43f9a3b0ee82e33751fe |  http://vrrp01:35357/v2.0 | {}    |       1 | RegionOne |
| 8a75236613354776b50b56c527fe3a75 | 99c316ae7cbd449ebccfd6efe1c5d03c | public    | 439e97af7e4b43f9a3b0ee82e33751fe |  http://vrrp01:5000/v2.0  | {}    |       1 | RegionOne |
+----------------------------------+----------------------------------+-----------+----------------------------------+---------------------------+-------+---------+-----------+

root@ctl10:/var/log/apache2# keystone --debug user-list
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://vrrp01:35357/v2.0/tokens
INFO:urllib3.connectionpool:Starting new HTTP connection (1): vrrp01
DEBUG:urllib3.connectionpool:Setting read timeout to 600.0
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1699
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET  
http://vrrp01:35357/v2.0/users -H "User-Agent: python-keystoneclient" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}cfd888a32a64ee77e42524a2c15cb4547ab9d534"
No connection adapters were found for ' http://vrrp01:35357/v2.0/users'
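The "No connection adapters were found" message above comes from python-requests, which raises it when a URL's scheme is not recognized; note that the quoted URL begins with a space (' http://...'), suggesting the endpoint URL stored in the catalog carries leading whitespace. Whatever the root cause, a defensive normalization sketch (illustrative, not keystone code):

```python
def normalize_endpoint_url(url):
    """Strip stray whitespace and verify the scheme before handing the
    catalog URL to an HTTP client."""
    url = url.strip()
    if not url.startswith(("http://", "https://")):
        raise ValueError("unsupported endpoint URL: %r" % url)
    return url
```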


Changing the endpoints to use IPs makes keystone work normally.


MariaDB [keystone]> select * from endpoint where url like '%500%' or '%3535%';
+----------------------------------+----------------------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
| id                               | legacy_endpoint_id               | interface | service_id                       | url                        | extra | enabled | region_id |
+----------------------------------+----------------------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+
| 8a75236613354776b50b56c527fe3a75 | 99c316ae7cbd449ebccfd6efe1c5d03c | public    | 439e97af7e4b43f9a3b0ee82e33751fe | http://10.11.3.4:5000/v2.0 | {}    |       1 | RegionOne |
| c4e486c0ebc241659f76c37b3917eaec | 99c316ae7cbd449ebccfd6efe1c5d03c | internal  | 439e97af7e4b43f9a3b0ee82e33751fe | http://10.11.3.4:5000/v2.0 | {}    |       1 | RegionOne |
+----------------------------------+----------------------------------+-----------+----------------------------------+----------------------------+-------+---------+-----------+

root@ctl10:/var/log/apache2# keystone --debug token-get
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://vrrp01:35357/v2.0/tokens
INFO:urllib3.connectionpool:Starting new HTTP connection (1): vrrp01
DEBUG:urllib3.connectionpool:Setting read timeout to 600.0
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1699
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-15T19:36:25Z       |
|     id    | 25789066641c4caf80be4173a96ae0b8 |
| tenant_id | 02f8e769e5e3430ca1e77582ba0d73e0 |
|  user_id  | 0815385ac2d044a6b524d8e05839b824 |

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1506594

Title:
  Keystone endpoint can not resolve DNS

Status in Keystone:
  New

Bug description:
  Keystone does not seem to be able to resolve DNS if endpoints are
  configured with a hostname instead of an IP.

  MariaDB [keystone]> select * from endpoint where url like '%500%' or 
'%3535%';;
  
+--+--+---+--+--+---+-+---+
  | id   | legacy_endpoint_id   | 
interface | service_id   | url  
| extra | enabled | region_id |
  
+--+--+---+--+--+---+-+---

[Yahoo-eng-team] [Bug 1506580] [NEW] Request for first release of networking-calico

2015-10-15 Thread Neil Jerram
Public bug reported:

Please could you tag and release networking-calico, as it currently
stands at:

https://git.openstack.org/cgit/openstack/networking-calico

neil@nj-ubuntu:~/calico/networking-calico$ git log -1
commit 792f1cf63e63ce4660f837d261e9c2bb66434f7b
Author: Neil Jerram 
Date:   Thu Oct 15 16:07:05 2015 +0100

DevStack plugin doc: mention the bootstrap script

Change-Id: I4ea7622d815be946e2dfec445c071d1db8d8bc07

This will be the first ever release of networking-calico, so I guess it
will be 1.0.0.

** Affects: networking-calico
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506580

Title:
  Request for first release of networking-calico

Status in networking-calico:
  New
Status in neutron:
  New

Bug description:
  Please could you tag and release networking-calico, as it currently
  stands at:

  https://git.openstack.org/cgit/openstack/networking-calico

  neil@nj-ubuntu:~/calico/networking-calico$ git log -1
  commit 792f1cf63e63ce4660f837d261e9c2bb66434f7b
  Author: Neil Jerram 
  Date:   Thu Oct 15 16:07:05 2015 +0100

  DevStack plugin doc: mention the bootstrap script
  
  Change-Id: I4ea7622d815be946e2dfec445c071d1db8d8bc07

  This will be the first ever release of networking-calico, so I guess
  it will be 1.0.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-calico/+bug/1506580/+subscriptions



[Yahoo-eng-team] [Bug 1495701] Re: Sometimes Cinder volumes fail to attach with error "The device is not writable: Permission denied"

2015-10-15 Thread Patrick East
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495701

Title:
  Sometimes Cinder volumes fail to attach with error "The device is not
  writable: Permission denied"

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New

Bug description:
  This is happening on the latest master branch in CI systems. It
  happens very rarely in the gate:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImxpYnZpcnRFcnJvcjogb3BlcmF0aW9uIGZhaWxlZDogb3BlbiBkaXNrIGltYWdlIGZpbGUgZmFpbGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDIyNjY3MDU1NzZ9

  And on some third party CI systems (not included in the logstash
  results):

  http://ec2-52-8-200-217.us-
  west-1.compute.amazonaws.com/28/216728/5/check/PureFCDriver-tempest-
  dsvm-volume-
  multipath/bd3618d/logs/libvirt/libvirtd.txt.gz#_2015-09-14_09_00_44_829

  When the error occurs there is a stack trace in the n-cpu log like
  this:

  http://logs.openstack.org/22/222922/2/check/gate-tempest-dsvm-full-
  lio/550be5e/logs/screen-n-cpu.txt.gz?level=DEBUG#_2015-09-13_17_34_07_787

  2015-09-13 17:34:07.787 ERROR nova.virt.libvirt.driver 
[req-4ac04f97-f468-466a-9fb2-02d1df3a5633 
tempest-TestEncryptedCinderVolumes-1564844141 
tempest-TestEncryptedCinderVolumes-804461249] [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] Failed to attach volume at mountpoint: 
/dev/vdb
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] Traceback (most recent call last):
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1115, in attach_volume
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] guest.attach_device(conf, 
persistent=True, live=live)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 233, in attach_device
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] 
self._domain.attachDeviceFlags(conf.to_xml(), flags=flags)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] rv = execute(f, *args, **kwargs)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] six.reraise(c, e, tb)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] rv = meth(*args, **kwargs)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in 
attachDeviceFlags
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6] libvirtError: operation failed: open disk 
image file failed
  2015-09-13 17:34:07.787 22300 ERROR nova.virt.libvirt.driver [instance: 
82f33247-d8be-49c7-9f89-f02602de5ef6]

  and a corresponding error in the libvirt log such as this:

  http://logs.openstack.org/22/222922/2/check/gate-tempest-dsvm-full-
  lio/550be5e/logs/libvirt/libvirtd.txt.gz#_2015-09-13_17_34_07_499

  2015-09-13 17:34:07.496+: 16871: debug : qemuMonitorJSONCommandWithFd:264 
: Send command 
'{"execute":"human-monitor-command","argume

[Yahoo-eng-team] [Bug 1503712] Re: Error while deleting tenant in openstack Juno

2015-10-15 Thread Matt Kassawara
Thanks, closing for openstack-manuals.

** Changed in: openstack-manuals
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1503712

Title:
  Error while deleting tenant in openstack Juno

Status in Keystone:
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  Hi,

  When I'm trying to delete project with keystone:

  keystone tenant-delete radomirProject

  I get this error in keystone.log

  2015-10-07 16:28:49.132 2465 INFO eventlet.wsgi.server [-] 10.0.2.60 - - 
[07/Oct/2015 16:28:49] "POST /v2.0/tokens HTTP/1.1" 200 2494 0.091314
  2015-10-07 16:28:49.154 2455 INFO eventlet.wsgi.server [-] 10.0.2.60 - - 
[07/Oct/2015 16:28:49] "GET /v2.0/tenants/12a876bf668240de8bff9d9869bb4334 
HTTP/1.1" 200 263 0.011250
  2015-10-07 16:28:49.182 2455 ERROR keystone.common.wsgi [-] 'Revoke' object 
has no attribute 'list_trusts_for_trustee'
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 223, in 
__call__
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi result = 
method(context, **params)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/assignment/controllers.py", line 
135, in delete_project
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi 
self.assignment_api.delete_project(tenant_id)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/notifications.py", line 112, in 
wrapper
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi result = f(*args, 
**kwargs)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/assignment/core.py", line 150, in 
delete_project
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi 
self._emit_invalidate_user_project_tokens_notification(payload)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/notifications.py", line 124, in 
wrapper
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi 
public=self.public)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/notifications.py", line 254, in 
_send_notification
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi 
notify_event_callbacks(service, resource_type, operation, payload)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/notifications.py", line 204, in 
notify_event_callbacks
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi cb(service, 
resource_type, operation, payload)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/token/provider.py", line 516, in 
_delete_user_project_tokens_callback
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi 
project_id=project_id)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/token/persistence/core.py", line 
167, in delete_tokens_for_user
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi for trust in 
self.trust_api.list_trusts_for_trustee(user_id):
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 74, in 
__getattr__
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi f = 
getattr(self.driver, name)
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi AttributeError: 
'Revoke' object has no attribute 'list_trusts_for_trustee'
  2015-10-07 16:28:49.182 2455 TRACE keystone.common.wsgi

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1503712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378783] Re: IPv6 namespaces are not updated upon router interface deletion

2015-10-15 Thread Nandini
Could not reproduce the bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378783

Title:
  IPv6 namespaces are not updated upon router interface deletion

Status in neutron:
  Invalid

Bug description:
  Description of problem:
  ===
  In case the namespace contains both IPv4 and IPv6 interfaces, they will not
be deleted when the interfaces are detached from the router.

  Version-Release number of selected component (if applicable):
  =
  openstack-neutron-2014.2-0.7.b3

  How reproducible:
  =

  Steps to Reproduce:
  ===
  1. Create a neutron Router
  2. Attach an IPv6 interface
  3. Attach an IPv4 interface
  4. Delete both interfaces
  5. Check if interfaces were deleted from the router namespace:
 # ip netns exec qrouter- ifconfig | grep inet

  Actual results:
  ===
  Interfaces were not deleted.

  Expected results:
  =
  Interfaces should be deleted.

  Additional info:
  
  Tested with RHEL7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378783/+subscriptions



[Yahoo-eng-team] [Bug 1506571] [NEW] hzPasswordMatch directive does not do bi-directional check

2015-10-15 Thread Thai Tran
Public bug reported:

Currently, the hzPasswordMatch directive does not do a bi-directional
check: the confirmation password input triggers a check on the password
input, but not vice versa. To replicate, follow these steps:

1. change password to 'abc'
2. change confirmation password to 'abc'
3. change password to 'abcd'

** Affects: horizon
 Importance: High
 Assignee: Thai Tran (tqtran)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506571

Title:
  hzPasswordMatch directive does not do bi-directional check

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently, the hzPasswordMatch directive does not do a bi-directional
  check: the confirmation password input triggers a check on the
  password input, but not vice versa. To replicate, follow these steps:

  1. change password to 'abc'
  2. change confirmation password to 'abc'
  3. change password to 'abcd'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506571/+subscriptions



[Yahoo-eng-team] [Bug 1506569] [NEW] Add correct license for hzPasswordMatch

2015-10-15 Thread Thai Tran
Public bug reported:

Somewhere along the way, the license for hzPasswordMatch directive was
mishandled. This should be corrected.

** Affects: horizon
 Importance: Low
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506569

Title:
  Add correct license for hzPasswordMatch

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Somewhere along the way, the license for hzPasswordMatch directive was
  mishandled. This should be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506569/+subscriptions



[Yahoo-eng-team] [Bug 1506567] [NEW] No information from Neutron Metering agent

2015-10-15 Thread Sergey Kolekonov
Public bug reported:

I deployed OpenStack cloud with stable/kilo code - a controller/network node 
and a compute node (Ubuntu 14.04). I have Metering agent enabled and configured 
 as follows:
driver = 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

When I try to check information from it in Ceilometer, I always get
zero, e.g.:

root@node-1:~# ceilometer sample-list -m bandwidth -l10
+--------------------------------------+-----------+-------+--------+------+----------------------------+
| Resource ID                          | Name      | Type  | Volume | Unit | Timestamp                  |
+--------------------------------------+-----------+-------+--------+------+----------------------------+
| 66fab4a9-aefe-4534-b8a3-b0c0db9edf82 | bandwidth | delta | 0.0    | B    | 2015-10-15T16:55:26.766000 |
+--------------------------------------+-----------+-------+--------+------+----------------------------+

Is this expected? I spawned two VMs and tried to pass some traffic
between them using iperf, but still got no results.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506567

Title:
  No information from Neutron Metering agent

Status in neutron:
  New

Bug description:
  I deployed OpenStack cloud with stable/kilo code - a controller/network node 
and a compute node (Ubuntu 14.04). I have Metering agent enabled and configured 
 as follows:
  driver = 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

  When I try to check information from it in Ceilometer, I always get
  zero, e.g.:

  root@node-1:~# ceilometer sample-list -m bandwidth -l10
  +--------------------------------------+-----------+-------+--------+------+----------------------------+
  | Resource ID                          | Name      | Type  | Volume | Unit | Timestamp                  |
  +--------------------------------------+-----------+-------+--------+------+----------------------------+
  | 66fab4a9-aefe-4534-b8a3-b0c0db9edf82 | bandwidth | delta | 0.0    | B    | 2015-10-15T16:55:26.766000 |
  +--------------------------------------+-----------+-------+--------+------+----------------------------+

  Is this expected? I spawned two VMs and tried to pass some traffic
  between them using iperf, but still got no results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506567/+subscriptions



[Yahoo-eng-team] [Bug 1506566] [NEW] no notifications from Neutron in Ceilometer

2015-10-15 Thread Sergey Kolekonov
Public bug reported:

I've deployed devstack using the following local.conf (with Ceilometer)
- http://paste.openstack.org/show/476407/ (master branch)

After deployment I tried to check Ceilometer notifications and get
nothing from Neutron. For example, I created a router and then executed
ceilometer sample-list -m router.create  -l10 - got nothing.

Is something misconfigured? Also there's no information from Neutron
metering agent regarding bandwidth.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506566

Title:
  no notifications from Neutron in Ceilometer

Status in neutron:
  New

Bug description:
  I've deployed devstack using the following local.conf (with
  Ceilometer) - http://paste.openstack.org/show/476407/ (master branch)

  After deployment I tried to check Ceilometer notifications and get
  nothing from Neutron. For example, I created a router and then
  executed ceilometer sample-list -m router.create  -l10 - got nothing.

  Is something misconfigured? Also there's no information from Neutron
  metering agent regarding bandwidth.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506566/+subscriptions



[Yahoo-eng-team] [Bug 1506565] [NEW] Add correct license for simple modal

2015-10-15 Thread Thai Tran
Public bug reported:

Somewhere along the way, the license for simple modal was mishandled.
This should be corrected.

** Affects: horizon
 Importance: Low
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506565

Title:
  Add correct license for simple modal

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Somewhere along the way, the license for simple modal was mishandled.
  This should be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506565/+subscriptions



[Yahoo-eng-team] [Bug 1506492] Re: Deprecate new= argument from create_connection function

2015-10-15 Thread YAMAMOTO Takashi
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506492

Title:
  Deprecate new= argument from create_connection function

Status in networking-midonet:
  New
Status in neutron:
  In Progress

Bug description:
  It's not used since we switched to oslo.messaging in Juno. It's time
  to deprecate and eventually remove it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1506492/+subscriptions



[Yahoo-eng-team] [Bug 1453419] Re: Remove workaround for db2 dialect primary key issue

2015-10-15 Thread Brant Knudson
Marking this as invalid since we never shipped migration 73 with the
problem; it existed only in review.

** Changed in: keystone
   Status: In Progress => Fix Released

** Changed in: keystone
   Status: Fix Released => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1453419

Title:
  Remove workaround for db2 dialect primary key issue

Status in Keystone:
  Invalid

Bug description:
  
  Migration 73 has a workaround for ibm_db_sa issue 173 ( 
https://code.google.com/p/ibm-db/issues/detail?id=173 ). Once that issue is 
fixed we can remove the workaround.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1453419/+subscriptions



[Yahoo-eng-team] [Bug 1479962] Re: Use extras for deployment-specific package requirements

2015-10-15 Thread Brant Knudson
This was released in keystone liberty.

** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1479962

Title:
  Use extras for deployment-specific package requirements

Status in devstack:
  In Progress
Status in Keystone:
  Fix Released

Bug description:
  
  Keystone should use "extras" in setup.cfg for deployment-specific package 
requirements.

  One example is ldap.

  With this change, deployers can do something like `pip install
  keystone[ldap]` to install the packages required for ldap.
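
  With pbr-based projects, such optional dependency groups live in an
  `[extras]` section of setup.cfg. A minimal sketch of the shape (the
  package names and versions below are illustrative, not Keystone's
  actual ldap dependency list):

  [extras]
  # installed only via `pip install keystone[ldap]`, on top of the
  # base requirements (package names here are illustrative)
  ldap =
      python-ldap>=2.4
      ldappool>=1.0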

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1479962/+subscriptions



[Yahoo-eng-team] [Bug 1506503] [NEW] OVS agents periodically fail to start in fullstack

2015-10-15 Thread John Schwarz
Public bug reported:

Changeset [1] introduced a validation that the local_ip specified for
tunneling is actually used by one of the devices on the machine running
an OVS agent.

In Fullstack, multiple tests may run concurrently, which can cause a
race condition: suppose an ovs agent starts running as part of test A.
It retrieves the list of all devices on the host and starts a sequential
loop on them. In the meantime, some *other* fullstack test (test B)
completes and deletes the devices it created. The agent still has the
deleted devices in its list, and when it reaches one of them it finds
that it no longer exists and crashes.

[1]: https://review.openstack.org/#/c/154043/
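
The race above can be sketched in a few lines; `get_ips` and the device
names are hypothetical stand-ins for the agent's device inspection, not
the actual neutron code. A loop that tolerates devices vanishing
mid-iteration avoids the crash:

```python
def find_device_with_ip(devices, local_ip, get_ips):
    """Return the first device carrying local_ip.

    get_ips(device) may raise RuntimeError if the device was deleted
    (e.g. by a concurrently finishing test) after the snapshot of
    devices was taken; such devices are simply skipped.
    """
    for device in devices:
        try:
            ips = get_ips(device)
        except RuntimeError:
            # Device disappeared between listing and inspection.
            continue
        if local_ip in ips:
            return device
    return None
```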

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: In Progress


** Tags: fullstack

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506503

Title:
  OVS agents periodically fail to start in fullstack

Status in neutron:
  In Progress

Bug description:
  Changeset [1] introduced a validation that the local_ip specified for
  tunneling is actually used by one of the devices on the machine
  running an OVS agent.

  In Fullstack, multiple tests may run concurrently, which can cause a
  race condition: suppose an ovs agent starts running as part of test A.
  It retrieves the list of all devices on the host and starts a
  sequential loop on them. In the meantime, some *other* fullstack test
  (test B) completes and deletes the devices it created. The agent still
  has the deleted devices in its list, and when it reaches one of them
  it finds that it no longer exists and crashes.

  [1]: https://review.openstack.org/#/c/154043/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506503/+subscriptions



[Yahoo-eng-team] [Bug 1506492] [NEW] Deprecate new= argument from create_connection function

2015-10-15 Thread Ihar Hrachyshka
Public bug reported:

It's not used since we switched to oslo.messaging in Juno. It's time to
deprecate and eventually remove it.

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506492

Title:
  Deprecate new= argument from create_connection function

Status in neutron:
  In Progress

Bug description:
  It's not used since we switched to oslo.messaging in Juno. It's time
  to deprecate and eventually remove it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506492/+subscriptions



[Yahoo-eng-team] [Bug 1506488] [NEW] raise 'Flavor' object is not iterable while resize flavor

2015-10-15 Thread Lawrance
Public bug reported:

steps to reproduce:

1. the demo user can only access two flavors (m1.tiny and m1.small)
2. launch instance001 with flavor m1.tiny
3. try to resize from m1.tiny; Horizon raises "'Flavor' object is not
iterable"

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506488

Title:
  raise 'Flavor' object is not iterable while resize flavor

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  steps to reproduce:

  1. the demo user can only access two flavors (m1.tiny and m1.small)
  2. launch instance001 with flavor m1.tiny
  3. try to resize from m1.tiny; Horizon raises "'Flavor' object is not
  iterable"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506488/+subscriptions



[Yahoo-eng-team] [Bug 1506482] [NEW] neutron_vpnaas.tests.unit.db.vpn.test_vpn_db.TestVpnDatabase.test_list_endpoint_groups fails due to ordering issue

2015-10-15 Thread Ihar Hrachyshka
Public bug reported:

Sometimes the test fails with the following traceback:

Traceback (most recent call last):
  File "neutron_vpnaas/tests/unit/db/vpn/test_vpn_db.py", line 1900, in 
test_list_endpoint_groups
group_id2 = self.helper_create_endpoint_group(info2)
  File "neutron_vpnaas/tests/unit/db/vpn/test_vpn_db.py", line 1830, in 
helper_create_endpoint_group
self.assertDictSupersetOf(info['endpoint_group'], actual)
  File 
"/home/vagrant/git/neutron-vpnaas/.tox/py27/src/neutron/neutron/tests/base.py", 
line 231, in assertDictSupersetOf
{'key': k, 'exp': v, 'act': actual_superset[k]})
  File 
"/home/vagrant/git/neutron-vpnaas/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/vagrant/git/neutron-vpnaas/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = ['a00562c1-9340-4a36-8111-2885273499e6',
 '1f23e0ab-ddbd-4788-979a-ccbc1f1c787a']
actual= ['1f23e0ab-ddbd-4788-979a-ccbc1f1c787a',
 u'a00562c1-9340-4a36-8111-2885273499e6']
: Key endpoints expected: ['a00562c1-9340-4a36-8111-2885273499e6', 
'1f23e0ab-ddbd-4788-979a-ccbc1f1c787a'], actual 
['1f23e0ab-ddbd-4788-979a-ccbc1f1c787a', 
u'a00562c1-9340-4a36-8111-2885273499e6']
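
The mismatch above is purely an ordering (and str/unicode) difference:
both lists hold the same two endpoint IDs. A sketch of an
order-insensitive comparison; the helper name is illustrative, not the
actual vpnaas test code:

```python
def assert_same_items(expected, actual):
    """Compare two ID lists while ignoring order and str/unicode mixing."""
    def norm(seq):
        return sorted(str(item) for item in seq)
    if norm(expected) != norm(actual):
        raise AssertionError("expected %r, got %r" % (expected, actual))
```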

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506482

Title:
  
neutron_vpnaas.tests.unit.db.vpn.test_vpn_db.TestVpnDatabase.test_list_endpoint_groups
  fails due to ordering issue

Status in neutron:
  New

Bug description:
  Sometimes the test fails with the following traceback:

  Traceback (most recent call last):
File "neutron_vpnaas/tests/unit/db/vpn/test_vpn_db.py", line 1900, in 
test_list_endpoint_groups
  group_id2 = self.helper_create_endpoint_group(info2)
File "neutron_vpnaas/tests/unit/db/vpn/test_vpn_db.py", line 1830, in 
helper_create_endpoint_group
  self.assertDictSupersetOf(info['endpoint_group'], actual)
File 
"/home/vagrant/git/neutron-vpnaas/.tox/py27/src/neutron/neutron/tests/base.py", 
line 231, in assertDictSupersetOf
  {'key': k, 'exp': v, 'act': actual_superset[k]})
File 
"/home/vagrant/git/neutron-vpnaas/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/vagrant/git/neutron-vpnaas/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = ['a00562c1-9340-4a36-8111-2885273499e6',
   '1f23e0ab-ddbd-4788-979a-ccbc1f1c787a']
  actual= ['1f23e0ab-ddbd-4788-979a-ccbc1f1c787a',
   u'a00562c1-9340-4a36-8111-2885273499e6']
  : Key endpoints expected: ['a00562c1-9340-4a36-8111-2885273499e6', 
'1f23e0ab-ddbd-4788-979a-ccbc1f1c787a'], actual 
['1f23e0ab-ddbd-4788-979a-ccbc1f1c787a', 
u'a00562c1-9340-4a36-8111-2885273499e6']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506482/+subscriptions



[Yahoo-eng-team] [Bug 1469817] Re: Glance doesn't handle exceptions from glance_store

2015-10-15 Thread Thierry Carrez
** Changed in: glance
Milestone: liberty-2 => 11.0.0

** No longer affects: glance/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1469817

Title:
  Glance doesn't handle exceptions from glance_store

Status in Glance:
  Fix Released
Status in Glance juno series:
  Confirmed
Status in Glance kilo series:
  Confirmed

Bug description:
  Server API expects to catch exception declared at
  glance/common/exception.py, but actually risen exceptions have the
  same names but are declared at different module,
  glance_store/exceptions.py and thus are never caught.

  For example, If exception is raised here:
  
https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/_drivers/rbd.py#L316
  , it will never be caught here
  
https://github.com/openstack/glance/blob/stable/kilo/glance/api/v1/images.py#L1107
  , because first one is instance of
  
https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/exceptions.py#L198
  , but Glance waits for
  
https://github.com/openstack/glance/blob/stable/kilo/glance/common/exception.py#L293

  There are many cases of that issue. The investigation continues.
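
  The pattern in the report can be reproduced with two stand-in classes:
  identically named exceptions from different modules are unrelated
  types, so an `except` clause for one never catches the other. The
  class and function names below are illustrative, not Glance's actual
  code:

```python
class GlanceNotFound(Exception):
    """Stands in for glance.common.exception.NotFound."""

class StoreNotFound(Exception):
    """Stands in for glance_store.exceptions.NotFound."""

def upload_broken(add_to_store):
    try:
        return add_to_store()
    except GlanceNotFound:  # never matches StoreNotFound
        return "handled"

def upload_fixed(add_to_store):
    try:
        return add_to_store()
    except (GlanceNotFound, StoreNotFound):  # catch both hierarchies
        return "handled"
```

  With a backend that raises `StoreNotFound`, the first handler lets the
  exception escape while the second returns "handled".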

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1469817/+subscriptions



[Yahoo-eng-team] [Bug 1467982] Re: Profiler raises an error when it is enabled

2015-10-15 Thread Thierry Carrez
** Changed in: glance
Milestone: liberty-2 => 11.0.0

** No longer affects: glance/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1467982

Title:
  Profiler raises an error when it is enabled

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released

Bug description:
  Description:

  When OSProfiler is enabled in Glance API and Registry, they raise the
  following exception:

  CRITICAL glance [-] AttributeError: 'module' object has no attribute 
'messaging'
  TRACE glance Traceback (most recent call last):
  TRACE glance   File "/usr/bin/glance-api", line 10, in <module>
  TRACE glance sys.exit(main())
  TRACE glance   File "/usr/lib/python2.7/site-packages/glance/cmd/api.py", 
line 80, in main
  TRACE glance notifier.messaging, {},
  TRACE glance AttributeError: 'module' object has no attribute 'messaging'
  TRACE glance

  
  Steps to reproduce:
  1. Enable profiler in glance-api.conf and glance-registry.conf

  [profiler]
  enabled = True

  2. Restart API and Registry Services

  
  Expected Behavior:
  Start services with profiler enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1467982/+subscriptions



[Yahoo-eng-team] [Bug 1445827] Re: unit test failures: Glance insist on ordereddict

2015-10-15 Thread Thierry Carrez
** Changed in: glance
Milestone: liberty-1 => 11.0.0

** No longer affects: glance/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1445827

Title:
  unit test failures: Glance insist on ordereddict

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in taskflow:
  Fix Released

Bug description:
  There's no python-ordereddict package anymore in Debian, as this is
  normally included in Python 2.7. I have therefore patched
  requirements.txt to remove ordereddict. However, even after this, I
  get some bad unit test errors about it. This must be fixed upstream,
  because there's no way (modern) downstream distributions can fix it
  (as the ordereddict Python package will *not* come back).

  Below is the tracebacks for the 4 failed unit tests.

  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_api_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 143, in 
test_list_api_opts
  expected_opt_groups, expected_opt_names)
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in 
_test_entry_point
  list_fn = ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
  self.require(env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_cache_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 288, in 
test_list_cache_opts
  expected_opt_groups, expected_opt_names)
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in 
_test_entry_point
  list_fn = ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
  self.require(env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_manage_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 301, in 
test_list_manage_opts
  expected_opt_groups, expected_opt_names)
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in 
_test_entry_point
  list_fn = ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
  self.require(env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_registry_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 192, in 
test_list_registry_opts
  expected_opt_groups, expected_opt_names)
File "/«PKGBUILDDIR»/glance/tests/unit/test_opts.py", line 45, in 
_test_entry_point
  list_fn = ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2188, in load
  self.require(env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_scrubber_opts
  

[Yahoo-eng-team] [Bug 1496138] Re: logging a warning when someone accesses / seems unnecessary and wasteful

2015-10-15 Thread Thierry Carrez
** Changed in: glance
Milestone: liberty-rc1 => 11.0.0

** No longer affects: glance/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1496138

Title:
  logging a warning when someone accesses / seems unnecessary and
  wasteful

Status in Glance:
  Fix Released
Status in Glance kilo series:
  New

Bug description:
  Our load balancer health checks (and other folks too) just load the
  main glance URL and look for an http status of 300 to determine if
  glance is okay. Starting I think in Kilo, glance changed and now logs
  a warning. This is highly unnecessary and ends up generating gigs of
  useless logs which make diagnosing real issues more difficult.

  At the least this should be an INFO, but ideally there's no point in
  logging this at all.

  2015-08-04 17:42:43.058 24075 WARNING glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices.
  2015-08-04 17:42:43.577 24071 WARNING glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices.
  2015-08-04 17:42:45.083 24076 WARNING glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices.
  2015-08-04 17:42:45.317 24064 WARNING glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices.
  2015-08-04 17:42:47.092 24074 WARNING glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1496138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506473] [NEW] EC2 controller should not get credentials other than the ones whose type is "ec2"

2015-10-15 Thread Tony Wang
Public bug reported:

The EC2 extension interacts with credential APIs. New credentials
created by the EC2 extension carry the "type" property with the value
"ec2", but when credentials are queried there is no filter on "type",
so all credentials matching the other criteria (in this case,
"user_id") are returned.

** Affects: keystone
 Importance: Undecided
 Assignee: Tony Wang (muyu)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Tony Wang (muyu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1506473

Title:
  EC2 controller should not get credentials other than the ones whose
  type is "ec2"

Status in Keystone:
  In Progress

Bug description:
  The EC2 extension interacts with credential APIs. New credentials
  created by the EC2 extension carry the "type" property with the value
  "ec2", but when credentials are queried there is no filter on "type",
  so all credentials matching the other criteria (in this case,
  "user_id") are returned.
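  A minimal sketch of the missing filter (the helper name and the credential shape are illustrative, not Keystone's actual API):

```python
CRED_TYPE_EC2 = 'ec2'


def filter_ec2_credentials(credentials):
    """Keep only credentials whose type is 'ec2'.

    Without a filter like this, a lookup by user_id alone returns
    credentials of every type, which is what the bug describes.
    """
    return [c for c in credentials if c.get('type') == CRED_TYPE_EC2]
```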

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1506473/+subscriptions



[Yahoo-eng-team] [Bug 1506465] [NEW] VMware: unable to deploy debian image with liberty

2015-10-15 Thread Gary Kotton
Public bug reported:

A Debian image loaded and used in Kilo cannot be used in Liberty.

nicira@htb-1n-eng-dhcp8:~$ nova image-show  debian-2.6.32-i686 
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size        | 1073741824                           |
| created                     | 2015-10-15T11:19:33Z                 |
| id                          | 34d8aa13-fbad-44ad-90c9-4d259abb0a2a |
| metadata hw_vif_model       | e1000                                |
| metadata vmware_adaptertype |                                      |
| metadata vmware_disktype    | preallocated                         |
| minDisk                     | 0                                    |
| minRam                      | 0                                    |
| name                        | debian-2.6.32-i686                   |
| progress                    | 100                                  |
| status                      | ACTIVE                               |
| updated                     | 2015-10-15T11:19:49Z                 |
+-----------------------------+--------------------------------------+

The trace from the API is:

2015-10-15 11:25:48.118 ERROR nova.api.openstack.extensions [req-ca854e8c-68fe-47f6-9ce0-c2a5083b417d demo demo] Unexpected exception in API method
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/servers.py", line 602, in create
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     **create_kwargs)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/hooks.py", line 149, in inner
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     rv = f(*args, **kwargs)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 1562, in create
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 1162, in _create_instance
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     auto_disk_config, reservation_id, max_count)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 921, in _validate_and_build_base_options
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     image_meta = objects.ImageMeta.from_dict(boot_meta)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/objects/image_meta.py", line 107, in from_dict
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     image_meta.get("properties", {}))
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/objects/image_meta.py", line 459, in from_dict
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     obj._set_attr_from_legacy_names(image_props)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/objects/image_meta.py", line 397, in _set_attr_from_legacy_names
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     setattr(self, "hw_scsi_model", vmware_adaptertype)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 71, in setter
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     field_value = field.coerce(self, name, value)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 189, in coerce
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     return self._type.coerce(obj, attr, value)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/objects/fields.py", line 272, in coerce
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions     return super(SCSIModel, self).coerce(o

[Yahoo-eng-team] [Bug 1449492] Re: Cinder not working with IPv6 ISCSI

2015-10-15 Thread Lukas Bezdicka
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449492

Title:
  Cinder not working with IPv6 ISCSI

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in os-brick:
  New

Bug description:
  Testing configuring Openstack completely with IPv6

  Found that IP address parsing broke in a lot of cases because of the
  need to have '[]' encasing the address in some contexts (URLs) but not
  in others, such as the parsing done by some user-space third-party C
  binaries - iscsiadm for example. Most of the others are best handled
  by using a name set to the IPv6 address in the /etc/hosts file; for
  iSCSI, though, that is not possible.

  Got Cinder working by setting iscsi_ip_address
  (/etc/cinder/cinder.conf) to '[$my_ip]' where my_ip is an IPv6 address
  like 2001:db08::1 (not an RFC documentation address?) and changing one
  line of Python in the nova virt/libvirt/volume.py code:

  
  --- nova/virt/libvirt/volume.py.orig 2015-04-27 23:00:00.208075644 +1200
  +++ nova/virt/libvirt/volume.py      2015-04-27 22:38:08.938643636 +1200
  @@ -833,7 +833,7 @@
       def _get_host_device(self, transport_properties):
           """Find device path in devtemfs."""
           device = ("ip-%s-iscsi-%s-lun-%s" %
  -                  (transport_properties['target_portal'],
  +                  (transport_properties['target_portal'].replace('[', '').replace(']', ''),
                     transport_properties['target_iqn'],
                     transport_properties.get('target_lun', 0)))
           if self._get_transport() == "default":

  Nova-compute was looking for
  '/dev/disk/by-path/ip-[2001:db08::1]:3260-iscsi-iqn.2010-10.org.openstack:*'
  when there were no '[]' in the udev generated path

  This one can't be worked around by using the /etc/hosts file. iscsiadm
  and tgt need the IPv6 address wrapped in '[]', and iscsiadm uses it in
  output.  The above patch could be matched with a bit in the cinder
  code that puts '[]' around iscsi_ip_address if it is not supplied.
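  The two conversions involved can be sketched as plain helpers (hypothetical names; nova and cinder do not expose these as standalone functions): stripping the brackets for the udev by-path name, and adding them for tools like iscsiadm and tgt.

```python
def strip_brackets(addr):
    """Remove RFC 3986-style brackets from an address string, e.g. when
    building the udev /dev/disk/by-path name, which omits them."""
    return addr.replace('[', '').replace(']', '')


def add_brackets(addr):
    """Wrap a bare IPv6 literal in brackets, as iscsiadm and tgt expect.

    IPv4 addresses, hostnames, and already-bracketed values are returned
    unchanged."""
    if ':' in addr and not addr.startswith('['):
        return '[%s]' % addr
    return addr
```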

  More work is obviously needed on a convention for writing IPv6
  addresses in the OpenStack configuration files, and there will be a
  lot of places where code will need to be tweaked.

  Let's start by fixing this blooper/low-hanging one first though, as it
  makes it possible to get Cinder working in a pure IPv6 environment.
  The above may be a bit of a hack, but only one code path needs
  adjustment...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1449492/+subscriptions



[Yahoo-eng-team] [Bug 1287757] Re: Optimization: Don't prune events on every get

2015-10-15 Thread Thierry Carrez
** Changed in: keystone
Milestone: liberty-2 => 8.0.0

** No longer affects: keystone/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1287757

Title:
  Optimization:  Don't prune events on every get

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  _prune_expired_events_and_get always locks the backend. Store the time
  of the oldest event so that the prune process can be skipped if none
  of the events have timed out.

  (decided at keystone midcycle - 2015/07/17) -- MorganFainberg
  The easiest solution is to do the prune on issuance of a new revocation event
  instead of on the get.
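  A minimal sketch of the skip-the-prune idea under assumed names (RevocationStore and its methods are illustrative, not Keystone's actual classes): remember the oldest event's timestamp, and only prune (and take the backend lock that implies) once that event could actually have expired.

```python
import time


class RevocationStore:
    """Toy revocation-event store illustrating the optimization."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.events = []   # (issued_at, payload), oldest first
        self.oldest = None  # issue time of the oldest stored event

    def add(self, payload, now=None):
        now = time.time() if now is None else now
        self.events.append((now, payload))
        if self.oldest is None:
            self.oldest = now

    def get(self, now=None):
        now = time.time() if now is None else now
        # Fast path: if even the oldest event is younger than the TTL,
        # nothing can have expired, so skip the prune entirely (and the
        # backend lock the prune would take).
        if self.oldest is not None and now - self.oldest >= self.ttl:
            self.events = [(t, p) for t, p in self.events
                           if now - t < self.ttl]
            self.oldest = self.events[0][0] if self.events else None
        return [p for _, p in self.events]
```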

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1287757/+subscriptions



[Yahoo-eng-team] [Bug 1506432] [NEW] Detach Interface Action should not be shown if no interfaces are attached

2015-10-15 Thread Saravanan KR
Public bug reported:

When the Instance is not associated with any interface, Detach Interface
Action should not be shown in the instance panel

** Affects: horizon
 Importance: Undecided
 Assignee: Saravanan KR (skramaja)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Saravanan KR (skramaja)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506432

Title:
  Detach Interface Action should not be shown if no interfaces are
  attached

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the Instance is not associated with any interface, Detach
  Interface Action should not be shown in the instance panel

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506432/+subscriptions



[Yahoo-eng-team] [Bug 1469563] Re: Fernet tokens do not maintain expires time across rescope (V2 tokens)

2015-10-15 Thread Thierry Carrez
** Changed in: keystone
Milestone: liberty-3 => 8.0.0

** No longer affects: keystone/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469563

Title:
  Fernet tokens do not maintain expires time across rescope (V2 tokens)

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  Fernet tokens do not maintain the expiration time when rescoping
  tokens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1469563/+subscriptions



[Yahoo-eng-team] [Bug 1506429] [NEW] IP Address is not removed in Instance Panel UI after Detach Interface (working after refreshing manually)

2015-10-15 Thread Saravanan KR
Public bug reported:

The IP Address is not removed in the UI after Detach Interface; it
requires a manual page refresh on the client side.

** Affects: horizon
 Importance: Undecided
 Assignee: Saravanan KR (skramaja)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Saravanan KR (skramaja)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506429

Title:
  IP Address is not removed in Instance Panel UI after Detach Interface
  (working after refreshing manually)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The IP Address is not removed in the UI after Detach Interface; it
  requires a manual page refresh on the client side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506429/+subscriptions



[Yahoo-eng-team] [Bug 1506390] [NEW] LXC instances cannot reboot (reboot from container)

2015-10-15 Thread Bertrand NOEL
Public bug reported:

I have an LXC compute node. I can create LXC containers, and they work fine.
When I try to reboot a container (reboot initiated from inside the container),
the container goes into the "SHUTOFF" status / "Shutdown" power state, and does
not come back.

If I do a "nova start", the container comes back to "RUNNING", but with the 
following exception in logs:
--
ERROR nova.virt.disk.api [req-63630337-923f-4994-8960-83368c6a192e admin admin] Failed to teardown container filesystem
TRACE nova.virt.disk.api Traceback (most recent call last):
TRACE nova.virt.disk.api   File "/opt/stack/nova/nova/virt/disk/api.py", line 472, in teardown_container
TRACE nova.virt.disk.api     run_as_root=True, attempts=3)
TRACE nova.virt.disk.api   File "/opt/stack/nova/nova/utils.py", line 389, in execute
TRACE nova.virt.disk.api     return RootwrapProcessHelper().execute(*cmd, **kwargs)
TRACE nova.virt.disk.api   File "/opt/stack/nova/nova/utils.py", line 272, in execute
TRACE nova.virt.disk.api     return processutils.execute(*cmd, **kwargs)
TRACE nova.virt.disk.api   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 275, in execute
TRACE nova.virt.disk.api     cmd=sanitized_cmd)
TRACE nova.virt.disk.api ProcessExecutionError: Unexpected error while running command.
TRACE nova.virt.disk.api Command: sudo nova-rootwrap /etc/nova/rootwrap.conf losetup --detach /dev/loop0
TRACE nova.virt.disk.api Exit code: 1
TRACE nova.virt.disk.api Stdout: u''
TRACE nova.virt.disk.api Stderr: u"loop: can't delete device /dev/loop0: No such device or address\n"
TRACE nova.virt.disk.api 
--


Tested on Juno/Kilo/Liberty/master, on an Ubuntu 14.04
(note that in Juno, nova start does not even work)


Below is my Devstack recipe if needed:
-
sudo mkdir -p /opt/stack
sudo chown $USER /opt/stack
git clone -b stable/liberty https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack

cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]

VIRT_DRIVER=libvirt
LIBVIRT_TYPE=lxc

disable_service heat h-api h-api-cfn h-api-cw h-eng
disable_service horizon
disable_service tempest
disable_service c-sch c-api c-vol
disable_service s-proxy s-object s-container s-account
disable_service q-svc q-agt q-dhcp q-l3 q-meta neutron
disable_service tempest

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
END

cd /opt/stack/devstack/
./stack.sh
-

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: lxc reboot

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506390

Title:
  LXC instances cannot reboot (reboot from container)

Status in OpenStack Compute (nova):
  New

Bug description:
  I have an LXC compute node. I can create LXC containers, and they work fine.
  When I try to reboot a container (reboot initiated from inside the container),
  the container goes into the "SHUTOFF" status / "Shutdown" power state, and
  does not come back.

  If I do a "nova start", the container comes back to "RUNNING", but with the 
following exception in logs:
  --
  ERROR nova.virt.disk.api [req-63630337-923f-4994-8960-83368c6a192e admin admin] Failed to teardown container filesystem
  TRACE nova.virt.disk.api Traceback (most recent call last):
  TRACE nova.virt.disk.api   File "/opt/stack/nova/nova/virt/disk/api.py", line 472, in teardown_container
  TRACE nova.virt.disk.api     run_as_root=True, attempts=3)
  TRACE nova.virt.disk.api   File "/opt/stack/nova/nova/utils.py", line 389, in execute
  TRACE nova.virt.disk.api     return RootwrapProcessHelper().execute(*cmd, **kwargs)
  TRACE nova.virt.disk.api   File "/opt/stack/nova/nova/utils.py", line 272, in execute
  TRACE nova.virt.disk.api     return processutils.execute(*cmd, **kwargs)
  TRACE nova.virt.disk.api   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 275, in execute
  TRACE nova.virt.disk.api     cmd=sanitized_cmd)
  TRACE nova.virt.disk.api ProcessExecutionError: Unexpected error while running command.
  TRACE nova.virt.disk.api Command: sudo nova-rootwrap /etc/nova/rootwrap.conf losetup --detach /dev/loop0
  TRACE nova.virt.disk.api Exit code: 1
  TRACE nova.virt.disk.api Stdout: u''
  TRACE nova.virt.disk.api Stderr: u"loop: can't delete device /dev/loop0: No such device or address\n"
  TRACE nova.virt.disk.api 
  --

  
  Tested on Juno/Kilo/Liberty/master, on an Ubuntu 14.04
  (note that in Juno, nova start does not even work)


  Below is my Devstack recipe if needed:
  -
  sudo mkdir -p /opt/stack
  sudo chown $USER /opt/stack
  git clone -b stable/liberty https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack

  cat > /opt/stack/devstack/local.conf << END
  [[local|localrc]]

  VIRT_DRIVER=libvirt
  LIBVIRT_TYPE

[Yahoo-eng-team] [Bug 1392527] Re: [OSSA 2015-017] Deleting instance while resize instance is running leads to unuseable compute nodes (CVE-2015-3280)

2015-10-15 Thread Thierry Carrez
** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392527

Title:
  [OSSA 2015-017] Deleting instance while resize instance is running
  leads to unuseable compute nodes (CVE-2015-3280)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Steps to reproduce:
  1) Create a new instance, waiting until its status goes to the ACTIVE state
  2) Call resize API
  3) Delete the instance immediately after the task_state is “resize_migrated” 
or vm_state is “resized”
  4) Repeat 1 through 3 in a loop

  I have kept the attached program running for 4 hours; all instances
  created are deleted (nova list returns an empty list) but I noticed
  instance directories with the name “_resize” are not
  deleted from the instance path of the compute nodes (mainly from the
  source compute nodes where the instance was running before resize). If
  I keep this program running for a couple more hours (depending on the
  number of compute nodes), it completely uses the entire disk of
  the compute nodes (based on the disk_allocation_ratio parameter
  value). Later, the nova scheduler doesn't select these compute nodes
  for launching new vms and starts reporting the error "No valid hosts
  found".

  Note: Even the periodic tasks don't clean up these orphan instance
  directories from the instance path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392527/+subscriptions



[Yahoo-eng-team] [Bug 1483022] Re: Missing string substitution results in ugly exception

2015-10-15 Thread Thierry Carrez
** No longer affects: nova/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483022

Title:
  Missing string substitution results in ugly exception

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  nova/network/neutronv2/api.py's _create_port method can raise an
  InvalidInput exception.  The issue is that the msg_fmt of InvalidInput
  is:

  class InvalidInput(Invalid):
  msg_fmt = _("Invalid input received: %(reason)s")

  within api.py:
  msg = (_('Fixed IP %(ip)s is not a valid ip address for '
'network %(network_id)s.'),
{'ip': fixed_ip, 'network_id': network_id})
  raise exception.InvalidInput(reason=msg)

  This results in the reason not being properly created, which results
  in:

 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 670, in allocate_for_instance
   self._delete_ports(neutron, instance, created_port_ids)
 File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in __exit__
   six.reraise(self.type_, self.value, self.tb)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 662, in allocate_for_instance
   security_group_ids, available_macs, dhcp_opts)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 306, in _create_port
   raise exception.InvalidInput(reason=msg)
  InvalidInput: Invalid input received: (u'Fixed IP %(ip)s is not a valid ip address for network %(network_id)s.', {'network_id': u'7692e1e6-3404-4a56-9aec-3588dbd75275', 'ip': '8.8.8.8'})

  Simply substituting the kwargs into msg before raising the exception
  fixes the issue.

  within api.py:
  msg = (_('Fixed IP %(ip)s is not a valid ip address for '
'network %(network_id)s.') %
{'ip': fixed_ip, 'network_id': network_id})
  raise exception.InvalidInput(reason=msg)
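  The difference is easy to demonstrate in isolation: with a trailing comma, msg is a (template, dict) tuple, while the % operator actually performs the substitution (the network id here is illustrative):

```python
template = ('Fixed IP %(ip)s is not a valid ip address for '
            'network %(network_id)s.')
params = {'ip': '8.8.8.8', 'network_id': 'net-1'}

# The comma builds a tuple -- this is what the buggy code produced,
# and it is what ended up verbatim in the exception message.
broken = (template, params)

# The % operator substitutes the dict into the format string,
# producing the intended human-readable reason.
fixed = template % params
```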

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483022/+subscriptions



[Yahoo-eng-team] [Bug 1506356] [NEW] There is no "[vnc]" option group in nova.conf.sample

2015-10-15 Thread shunya kitada
Public bug reported:

To generate the sample nova.conf file, I run the following.
$ tox -egenconfig

But there is no "[vnc]" option group in nova.conf.sample.

"[vnc]" option group is defined in "vnc/__init__.py",
but "nova.vnc" namespace is not defined in 
"etc/nova/nova-config-generator.conf".

vnc/__init__.py
```
vnc_opts = [
 cfg.StrOpt('novncproxy_base_url',
default='http://127.0.0.1:6080/vnc_auto.html',
help='Location of VNC console proxy, in the form '
 '"http://127.0.0.1:6080/vnc_auto.html";',
deprecated_group='DEFAULT',
deprecated_name='novncproxy_base_url'),
...
]

CONF = cfg.CONF
CONF.register_opts(vnc_opts, group='vnc')
```


I resolved this with the following 3 steps.
Not sure if this is the correct fix or not.

1. Define "nova.vnc" namespace in "etc/nova/nova-config-generator.conf",
```
   [DEFAULT]
   output_file = etc/nova/nova.conf.sample
   ...
   namespace = nova.virt
 > namespace = nova.vnc
   namespace = nova.openstack.common.memorycache
   ...
```


2. Define "nova.vnc" entry_point in setup.cfg.

```
   [entry_points]
   oslo.config.opts =
   nova = nova.opts:list_opts
   nova.api = nova.api.opts:list_opts
   nova.cells = nova.cells.opts:list_opts
   nova.compute = nova.compute.opts:list_opts
   nova.network = nova.network.opts:list_opts
   nova.network.neutronv2 = nova.network.neutronv2.api:list_opts
   nova.scheduler = nova.scheduler.opts:list_opts
   nova.virt = nova.virt.opts:list_opts
 > nova.vnc = nova.vnc.opts:list_opts
 ...
```


3. Create "nova/vnc/opts.py".

```
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import nova.vnc


def list_opts():
return [
('vnc', nova.vnc.vnc_opts),
]
```

** Affects: nova
 Importance: Undecided
 Assignee: shunya kitada (skitada)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => shunya kitada (skitada)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506356

Title:
  There is no "[vnc]" option group in nova.conf.sample

Status in OpenStack Compute (nova):
  New

Bug description:
  To generate the sample nova.conf file, I run the following.
  $ tox -egenconfig

  But there is no "[vnc]" option group in nova.conf.sample.

  "[vnc]" option group is defined in "vnc/__init__.py",
  but "nova.vnc" namespace is not defined in 
"etc/nova/nova-config-generator.conf".

  vnc/__init__.py
  ```
  vnc_opts = [
   cfg.StrOpt('novncproxy_base_url',
  default='http://127.0.0.1:6080/vnc_auto.html',
  help='Location of VNC console proxy, in the form '
   '"http://127.0.0.1:6080/vnc_auto.html";',
  deprecated_group='DEFAULT',
  deprecated_name='novncproxy_base_url'),
  ...
  ]

  CONF = cfg.CONF
  CONF.register_opts(vnc_opts, group='vnc')
  ```

  
  I resolved this with the following 3 steps.
  Not sure if this is the correct fix or not.

  1. Define "nova.vnc" namespace in "etc/nova/nova-config-generator.conf",
  ```
 [DEFAULT]
 output_file = etc/nova/nova.conf.sample
 ...
 namespace = nova.virt
   > namespace = nova.vnc
 namespace = nova.openstack.common.memorycache
 ...
  ```

  
  2. Define "nova.vnc" entry_point in setup.cfg.

  ```
 [entry_points]
 oslo.config.opts =
 nova = nova.opts:list_opts
 nova.api = nova.api.opts:list_opts
 nova.cells = nova.cells.opts:list_opts
 nova.compute = nova.compute.opts:list_opts
 nova.network = nova.network.opts:list_opts
 nova.network.neutronv2 = nova.network.neutronv2.api:list_opts
 nova.scheduler = nova.scheduler.opts:list_opts
 nova.virt = nova.virt.opts:list_opts
   > nova.vnc = nova.vnc.opts:list_opts
   ...
  ```

  
  3. Create "nova/vnc/opts.py".

  ```
  # Licensed under the Apache License, Version 2.0 (the "License"); you may not
  # use this file except in compliance with the License. You may obtain a copy
  # of the License at
  #
  # http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing perm

[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-10-15 Thread Angus Salkeld
** Changed in: heat/kilo
   Status: Fix Committed => Fix Released

** Changed in: heat/kilo
Milestone: 2015.1.2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in heat kilo series:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone juno series:
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Won't Fix
Status in Sahara:
  In Progress

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  The following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API services using
  wsgi+eventlet).

  Note: I have explicitly set the wsgi_default_pool_size default value to 10
  in nova/wsgi.py in order to reproduce this problem.
  After you run the below program, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # During this sleep time, check if the client socket connection
          # is released or not on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
  keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
  is sent and read successfully by the client, you simply have to set keepalive
  to False when you create a wsgi server.
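  A minimal sketch of that setting with eventlet's wsgi server (the app and serve helpers are illustrative; keepalive and max_size are real eventlet.wsgi.server parameters). The eventlet import is deferred into serve() so the sketch can be loaded without eventlet installed:

```python
def app(environ, start_response):
    """Trivial WSGI app standing in for an OpenStack API service."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']


def serve(bind=('127.0.0.1', 8774), pool_size=10):
    """Serve app without persistent connections.

    keepalive=False makes eventlet close each client socket as soon as
    the response is sent, so the green thread returns to the pool
    immediately instead of waiting for the client to hang up.
    """
    import eventlet          # deferred: only needed when actually serving
    import eventlet.wsgi
    sock = eventlet.listen(bind)
    eventlet.wsgi.server(sock, app, keepalive=False, max_size=pool_size)
```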

  Additional information: By default eventlet passes "Connection: keepalive"
  if keepalive is set to True when a response is sent to the client. But it
  doesn't have the capability to set the timeout and max parameters.
  For example:
  Keep-Alive: timeout=10, max=5

  Note: After we have disabled keepalive in all the OpenStack API
  services using the wsgi library, it might impact all existing
  applications built on the assumption that OpenStack API services
  use persistent connections. They might need to modify their
  applications if reconnection logic is not in place, and they might
  also find that performance has slowed down, as every request will
  need to re-establish its HTTP connection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions
