[Yahoo-eng-team] [Bug 1605894] Re: some test_l3 unit test failures

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/346585
Committed: 
https://git.openstack.org/cgit/openstack/networking-midonet/commit/?id=0fa57a25a683bb1c85a839dba4bb253aa9551ad0
Submitter: Jenkins
Branch:master

commit 0fa57a25a683bb1c85a839dba4bb253aa9551ad0
Author: YAMAMOTO Takashi 
Date:   Mon Jul 25 13:37:29 2016 +0900

l3: Avoid breaking transaction in _validate_router_port_info

The recent Neutron change [1] broke plugins that have a surrounding
transaction for the method.  This commit works around the breakage by
avoiding breaking the whole transaction in _validate_router_port_info.

[1] I797df266dafc41843408dc95a6ce9f986db5c21c

Closes-Bug: #1605894
Change-Id: Ib2fff32642013af2523b159a48c7e2bc8c854131


** Changed in: networking-midonet
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605894

Title:
  some test_l3 unit test failures

Status in networking-midonet:
  Fix Released
Status in neutron:
  In Progress

Bug description:
  The networking-midonet gate fails due to a recent l3_db change. [1]
  Any plugin that has a surrounding transaction around add_router_interface
  would be affected in the same way.

  [1] I797df266dafc41843408dc95a6ce9f986db5c21c
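
  The "surrounding transaction" problem can be illustrated in isolation
(a minimal sketch using sqlite3 savepoints, not the actual neutron code):
validation that rolls back only its own nested savepoint leaves the
plugin's outer transaction usable, whereas breaking the whole transaction
aborts it and turns an expected 400 into a 500.

```python
import sqlite3

# Sketch only: a validation step that rolls back just its SAVEPOINT keeps
# the plugin's surrounding transaction intact, which is what the fix
# restores for _validate_router_port_info.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE router_ports (id INTEGER)")

cur.execute("BEGIN")                      # plugin's surrounding transaction
cur.execute("INSERT INTO router_ports VALUES (1)")
cur.execute("SAVEPOINT validate")         # nested scope for the validation
try:
    cur.execute("INSERT INTO router_ports VALUES (2)")
    raise ValueError("duplicate subnet")  # validation failure -> HTTP 400
except ValueError:
    cur.execute("ROLLBACK TO SAVEPOINT validate")  # undo only nested work

cur.execute("COMMIT")                     # outer transaction still commits
rows = cur.execute("SELECT id FROM router_ports").fetchall()
print(rows)  # only the outer insert survives
```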

  http://logs.openstack.org/00/344100/2/check/gate-networking-midonet-
  python27/f55680d/console.html

  2016-07-23 10:56:05.220890 | 
==
  2016-07-23 10:56:05.220921 | FAIL: 
midonet.neutron.tests.unit.test_midonet_plugin.TestMidonetL3NatDBIntTest.test_router_add_interface_dup_subnet2_returns_400
  2016-07-23 10:56:05.220941 | 
--
  2016-07-23 10:56:05.220953 | Traceback (most recent call last):
  2016-07-23 10:56:05.220984 |   File 
"/tmp/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", line 1425, 
in test_router_add_interface_dup_subnet2_returns_400
  2016-07-23 10:56:05.221003 | expected_code=exc.HTTPBadRequest.code)
  2016-07-23 10:56:05.221029 |   File 
"/tmp/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", line 403, in 
_router_interface_action
  2016-07-23 10:56:05.221046 | self.assertEqual(expected_code, 
res.status_int, msg)
  2016-07-23 10:56:05.221081 |   File 
"/home/jenkins/workspace/gate-networking-midonet-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2016-07-23 10:56:05.221101 | self.assertThat(observed, matcher, message)
  2016-07-23 10:56:05.221136 |   File 
"/home/jenkins/workspace/gate-networking-midonet-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2016-07-23 10:56:05.221146 | raise mismatch_error
  2016-07-23 10:56:05.221160 | testtools.matchers._impl.MismatchError: 400 != 
500
  2016-07-23 10:56:05.221178 | 
==
  2016-07-23 10:56:05.221211 | FAIL: 
midonet.neutron.tests.unit.test_midonet_plugin.TestMidonetL3NatDBIntTest.test_router_add_interface_ipv6_port_existing_network_returns_400
  2016-07-23 10:56:05.221235 | 
--
  2016-07-23 10:56:05.221247 | Traceback (most recent call last):
  2016-07-23 10:56:05.221281 |   File 
"/tmp/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", line 1316, 
in test_router_add_interface_ipv6_port_existing_network_returns_400
  2016-07-23 10:56:05.221297 | expected_code=exp_code)
  2016-07-23 10:56:05.221324 |   File 
"/tmp/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", line 403, in 
_router_interface_action
  2016-07-23 10:56:05.221349 | self.assertEqual(expected_code, 
res.status_int, msg)
  2016-07-23 10:56:05.221386 |   File 
"/home/jenkins/workspace/gate-networking-midonet-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2016-07-23 10:56:05.221400 | self.assertThat(observed, matcher, message)
  2016-07-23 10:56:05.221435 |   File 
"/home/jenkins/workspace/gate-networking-midonet-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2016-07-23 10:56:05.221445 | raise mismatch_error
  2016-07-23 10:56:05.221459 | testtools.matchers._impl.MismatchError: 400 != 
500
  2016-07-23 10:56:05.221477 | 
==
  2016-07-23 10:56:05.221509 | FAIL: 
midonet.neutron.tests.unit.test_midonet_plugin.TestMidonetL3NatDBIntTest.test_router_add_interface_multiple_ipv4_subnet_port_returns_400
  2016-07-23 10:56:05.221528 | 
--
  2016-07-23 10:56:05.221540 | Traceback (most recent call last):
  2016-07-23 

[Yahoo-eng-team] [Bug 1606431] [NEW] Flavor create and update with service_profiles is not working properly

2016-07-25 Thread Rahmad Ade Putra
Public bug reported:

1. Creating a new Flavor with service_profiles does not work properly:
I entered the correct UUID of the service_profiles, but the response
shows an empty service_profiles list.

2. Another problem happens when I try to update the existing Flavor by
inserting the UUID of the service_profiles: a 500 Internal Server Error
occurs.

Here are my logs, first from creating a new Flavor with service_profiles,
and then from updating the existing Flavor by inserting service_profiles.


Creating a new Flavor with service_profiles

Request Command :
vagrant@ubuntu:~$ curl -g -i -X POST http://192.168.122.139:9696/v2.0/flavors 
-H "X-Auth-Token: $TOKEN" -d '{"flavor": 
{"service_type":"LOADBALANCER","enabled":"true"n$
HTTP/1.1 201 Created
Content-Type: application/json
Content-Length: 173
X-Openstack-Request-Id: req-6f3047a4-07e9-4dbe-b22a-b61ba167f705
Date: Mon, 25 Jul 2016 16:12:41 GMT

{"flavor": {"description": "", "enabled": true, "service_profiles": [],
"service_type": "LOADBALANCER", "id":
"79eaa203-5913-41b0-92c5-d6c2a0211a9c", "name": "flavor-test"}}


-
Update Existing Flavor By Inserting service_profiles

Request Command :
vagrant@ubuntu:~$ curl -g -i -X PUT 
http://192.168.122.139:9696/v2.0/flavors/79eaa203-5913-41b0-92c5-d6c2a0211a9c 
-H "X-Auth-Token: $TOKEN" -d '{"flavor": {"enabled":"$
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 150
X-Openstack-Request-Id: req-d8581b95-a798-4d83-9980-414892553cd3
Date: Mon, 25 Jul 2016 17:18:56 GMT

2016-07-25 17:18:56.650 24207 DEBUG neutron.api.v2.base 
[req-b42a4171-1c3d-4e67-a375-c5ce7c08546b e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de ---] Request body: {u'flavor': 
{u'service_profiles': [u'8e843ed6-cbd0-4ede-b765-d98e765f1135'], u'enabled': 
u'false'}} prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:649
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource 
[req-b42a4171-1c3d-4e67-a375-c5ce7c08546b e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de- - -] update failed: No details.
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 571, in update
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 617, in _update
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/flavors_db.py", line 142, in update_flavor
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource fl_db.update(fl)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/models.py", line 94, 
in update
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource setattr(self, 
k, v)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
224, in __set__
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource 
instance_dict(instance), value, None)
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
1027, in set
2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource lambda adapter, 
i: 

[Yahoo-eng-team] [Bug 1606430] [NEW] Implied Role not supported

2016-07-25 Thread Kenji Ishii
Public bug reported:

Keystone v3 supports a feature about Implied Role in Mitaka.
 - 
http://specs.openstack.org/openstack/keystone-specs/specs/backlog/implied-roles.html
 - https://blueprints.launchpad.net/keystone/+spec/implied-roles

An implied role defines a relation between roles.
If role_A implies role_B and a user has role_A, that user can perform the 
actions allowed to holders of both role_A and role_B. Details are on the sites 
above.
In practice, an OpenStack deployment may need many kinds of roles (not only 
'admin' and 'member').
As deployments grow, the number of roles needed to divide rights also 
increases, and this feature helps operators manage those assignments easily.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606430

Title:
  Implied Role not supported

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Keystone v3 supports a feature about Implied Role in Mitaka.
   - 
http://specs.openstack.org/openstack/keystone-specs/specs/backlog/implied-roles.html
   - https://blueprints.launchpad.net/keystone/+spec/implied-roles

  An implied role defines a relation between roles.
  If role_A implies role_B and a user has role_A, that user can perform the 
actions allowed to holders of both role_A and role_B. Details are on the sites 
above.
  In practice, an OpenStack deployment may need many kinds of roles (not only 
'admin' and 'member').
  As deployments grow, the number of roles needed to divide rights also 
increases, and this feature helps operators manage those assignments easily.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226344] Re: instance-gc doesn't work on baremetal

2016-07-25 Thread Launchpad Bug Tracker
[Expired for tripleo because there has been no activity for 60 days.]

** Changed in: tripleo
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226344

Title:
  instance-gc doesn't work on baremetal

Status in OpenStack Compute (nova):
  Won't Fix
Status in tripleo:
  Expired

Bug description:
  Per bug 1226342 it's possible to get into a situation where a nova bm
  node has an instance uuid associated with it that doesn't exist
  anymore (it may be marked 'DELETED' or just completely gone).

  This should be detected and should result in the node being forced off
  and disassociated from the instance uuid, in the same way a VM
  hypervisor kills rogue VMs, but for some reason it's not working.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606426] [NEW] Upgrading to Mitaka causes significant slowdown on user-list

2016-07-25 Thread Sam Morrison
Public bug reported:

With Kilo, doing a user-list on v2 or v3 would take approx. 2-4 seconds.

In Mitaka it takes 19-22 seconds. This is a significant slowdown.

We have ~9,000 users.

We also changed from running under eventlet to Apache WSGI.

We have ~10,000 projects, and that API (project-list) hasn't slowed down,
so I think this is something specific to the user-list API.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1606426

Title:
  Upgrading to Mitaka causes significant slowdown on user-list

Status in OpenStack Identity (keystone):
  New

Bug description:
  With Kilo, doing a user-list on v2 or v3 would take approx. 2-4 seconds.

  In Mitaka it takes 19-22 seconds. This is a significant slowdown.

  We have ~9,000 users.

  We also changed from running under eventlet to Apache WSGI.

  We have ~10,000 projects, and that API (project-list) hasn't slowed down,
  so I think this is something specific to the user-list API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1606426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533380] Re: Creating multiple instances with a single request when using cells creates wrong instance names

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319091
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=00dc082699a6ca1e3e841bb092352e4f29a3b080
Submitter: Jenkins
Branch:master

commit 00dc082699a6ca1e3e841bb092352e4f29a3b080
Author: Andrey Volkov 
Date:   Thu May 19 18:20:17 2016 +0300

Skip instance name templating in API cell

There is a template called multi_instance_display_name_template
which is used when a user creates multiple instances in one request.
The template is used to assign a display name to each instance and
by default looks like "%(name)s-%(count)d".

If cells are enabled, the template was applied two or more times (in the
API cell and in the child cell).

This change skips template application in the API cell.
Tests are also changed to check the display name in both the default
environment and the cells environment.

Change-Id: Ib6059dc17665a540916885b8d71d63bffeb6fca6
Co-Authored-By: Andrew Laski 
Closes-Bug: #1533380


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533380

Title:
  Creating multiple instances with a single request when using cells
  creates wrong instance names

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When creating multiple instances with a single request the instance name has 
the format defined in the "multi_instance_display_name_template" option.
  By default: multi_instance_display_name_template=%(name)s-%(count)d
  When booting two instances (num-instances=2) with name=test, the expected 
instance names are:
  test-1
  test-2

  However, if using cells (only considering 2 levels), we get the following 
names:
  test-1-1
  test-1-2

  Increasing the number of cell levels adds more hops in the instance name.
  Changing the "multi_instance_display_name_template" to uuids has the same 
problem.
  For example: (consider  a random uuid)
  test--
  test--
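
  The double templating described above is easy to reproduce in isolation
(a sketch of the symptom only, not nova's code):

```python
# The display-name template from the bug report, applied once per cell level.
template = "%(name)s-%(count)d"

# Child cell alone (expected behaviour): test-1, test-2
once = [template % {"name": "test", "count": i} for i in (1, 2)]

# API cell re-applies the template to an already-templated name (the bug),
# turning "test-1" into "test-1-1" and "test-1-2".
twice = [template % {"name": once[0], "count": i} for i in (1, 2)]
print(once, twice)
```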

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1533380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605444] Re: Should check null key when set key-value pair in property

2016-07-25 Thread QiangTang
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: python-glanceclient
 Assignee: (unassigned) => QiangTang (qtang)

** Changed in: glance
   Status: New => Invalid

** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1605444

Title:
  Should check null key when set key-value pair in property

Status in python-glanceclient:
  New

Bug description:
  root@controller01:~# glance image-update ubuntu-14.04-server-amd64 --property 
=
  +---+--+
  | Property  | Value|
  +---+--+
  | Property ''   | None |
  | Property 'vmware_adaptertype' | lsiLogic |
  | Property 'vmware_disktype'| streamOptimized  |
  | checksum  | 7b26c526f6f7c8ab874ad55b2698dee0 |
  | container_format  | bare |
  | created_at| 2016-07-21T07:37:42.00   |
  | deleted   | False|
  | deleted_at| None |
  | disk_format   | vmdk |
  | id| 2fb0b8a1-1634-499d-95ee-2451ec23893e |
  | is_public | True |
  | min_disk  | 5|
  | min_ram   | 512  |
  | name  | ubuntu-14.04-server-amd64|
  | owner | 5b53ee384ebc4212b4086241e1075dea |
  | protected | False|
  | size  | 971606528|
  | status| active   |
  | updated_at| 2016-07-22T02:25:46.00   |
  | virtual_size  | None |
  +---+--+

  The null key is set successfully, as the result shows.
  But a null key is meaningless to the user, and it is treated as invalid 
input in other components, like cinder and nova flavor, as below:

  root@controller01:~# nova flavor-key m1.tiny set =
  ERROR (CommandError): Invalid key: "". Keys may only contain letters, 
numbers, spaces, underscores, periods, colons and hyphens.

  So, for consistent OpenStack user experience and error handling, the
  glance CLI should also add a null-key check.
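
  Such a client-side check could mirror nova's rule quoted above (a sketch;
the exact character set glance would adopt is an assumption):

```python
import re

# Keys may only contain letters, numbers, spaces, underscores, periods,
# colons and hyphens -- mirroring the nova flavor-key rule quoted above.
_VALID_KEY = re.compile(r"^[A-Za-z0-9 _.:-]+$")

def validate_property_key(key):
    """Reject empty or otherwise invalid property keys before the API call."""
    if not key or not _VALID_KEY.match(key):
        raise ValueError('Invalid key: "%s"' % key)
    return key
```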

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1605444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606020] Re: UnhashableKeyWarning in logs

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/346910
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=14f7b015eaf888458833aee0c77fc70fb4749818
Submitter: Jenkins
Branch:master

commit 14f7b015eaf888458833aee0c77fc70fb4749818
Author: Luis Daniel Castellanos 
Date:   Mon Jul 25 10:57:30 2016 -0500

Fix for cinder api memoize issue

The cinder API (api/cinder.py) cinderclient call raises an
UnhashableKeyWarning for the client api_version param.

This patch fixes the bug by removing the api_version param from the
memoized-with-request function.

Change-Id: I11f3ca8642fb746bd725f059f089f9b3900c6f66
Closes-Bug: #1606020


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606020

Title:
  UnhashableKeyWarning in logs

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  UnhashableKeyWarning: The key ((({'client': ,
 'version': 2}, u'admin', 'b6240f0c5a36459cb14062b31293cf52', 
u'837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101:8776/v2/837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101/identity'),), ()) is not hashable and cannot be 
memoized.
  WARNING:py.warnings:UnhashableKeyWarning: The key ((({'client': ,
 'version': 2}, u'admin', 'b6240f0c5a36459cb14062b31293cf52', 
u'837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101:8776/v2/837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101/identity'),), ()) is not hashable and cannot be 
memoized.

  
  This might be related to https://review.openstack.org/#/c/314750/
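
  The warning boils down to a dict appearing among the memoized function's
arguments; dicts are unhashable and so cannot be cache keys. A minimal
sketch of the failure mode (using functools.lru_cache for illustration,
not horizon's own memoized decorator):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def cached_client(version, token):
    # Stand-in for an expensive client construction.
    return ("client", version, token)

# Hashable arguments memoize fine: the second call returns the cached object.
assert cached_client(2, "tok") is cached_client(2, "tok")

# A dict argument (like the api_version param in the bug) is unhashable,
# so the cache lookup itself fails with TypeError.
try:
    cached_client({"version": 2}, "tok")
except TypeError as exc:
    print("cannot memoize:", exc)
```

The fix in the patch above follows the same logic: drop the unhashable
parameter from the memoized signature.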

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606406] [NEW] Debian hosts template has wrong path

2016-07-25 Thread Dan Peschman
Public bug reported:

templates/hosts.debian.tmpl says
a.) make changes to the master file in /etc/cloud/templates/hosts.tmpl

but should say
a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1606406

Title:
  Debian hosts template has wrong path

Status in cloud-init:
  New

Bug description:
  templates/hosts.debian.tmpl says
  a.) make changes to the master file in /etc/cloud/templates/hosts.tmpl

  but should say
  a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1606406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603736] Re: Remove already deprecated ec2 api code

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/344967
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cbc34d78b895cb131469c4dd67f2296a909437b7
Submitter: Jenkins
Branch:master

commit cbc34d78b895cb131469c4dd67f2296a909437b7
Author: yatinkarel 
Date:   Wed Jul 20 23:32:15 2016 +0530

Remove unnecessary code added for ec2 deprecation

The code that handled the ec2 deprecation is removed
in Newton.

Change-Id: Ia9dd7790199b9db3ea901d6e8b4ba1e44c9129dc
Closes-Bug: #1603736


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603736

Title:
  Remove already deprecated ec2 api code

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As per Sean Dague's below comment in review:-
  https://review.openstack.org/#/c/279721 the unnecessary code is to be
  removed:-

  # NOTE(sdague): this whole file is safe to remove in Newton. We just
  # needed a release cycle for it.

  File: nova/api/ec2/__init__.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604438] Re: Neutron lib - migration tool fails without bc

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/344268
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=cf874bf5603b3f0c40588c32d999431d1d95de7e
Submitter: Jenkins
Branch:master

commit cf874bf5603b3f0c40588c32d999431d1d95de7e
Author: Gary Kotton 
Date:   Tue Jul 19 07:21:04 2016 -0700

Migration report: validate that bc is installed

Without this the tool reports:

gkotton@ubuntu:~/neutron-lib$ ./tools/migration_report.sh ../vmware-nsx/
You have 2517 total imports
You imported Neutron 477 times
You imported Neutron-Lib 108 times
./tools/migration_report.sh: line 30: bc: command not found
./tools/migration_report.sh: line 31: bc: command not found
./tools/migration_report.sh: line 33: [: : integer expression expected

Closes-bug: #1604438

Change-Id: Ib8236ae214d423c993d9e22035f70af1821af944


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604438

Title:
  Neutron lib - migration tool fails without bc

Status in neutron:
  Fix Released

Bug description:
  gkotton@ubuntu:~/neutron-lib$ ./tools/migration_report.sh ../vmware-nsx/
  You have 2517 total imports
  You imported Neutron 477 times
  You imported Neutron-Lib 108 times
  ./tools/migration_report.sh: line 30: bc: command not found
  ./tools/migration_report.sh: line 31: bc: command not found
  ./tools/migration_report.sh: line 33: [: : integer expression expected
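
  A dependency guard at the top of the script would avoid the mid-run
failure shown above (a sketch; the `require` helper name is illustrative,
and the numbers are the ones from the report):

```shell
#!/bin/sh
# Fail fast if a required external command is missing, instead of hitting
# "bc: command not found" halfway through the report.
require() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "required command '$1' not found" >&2
    return 1
  }
}

if require bc; then
  # percentage of neutron-lib imports, as migration_report.sh computes it
  echo "scale=2; 108 / 2517 * 100" | bc
fi
```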

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414559] Re: OVS drops RARP packets by QEMU upon live-migration - VM temporarily disconnected

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246898
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b7c303ee0a16a05c1fdb476dc7f4c7ca623a3f58
Submitter: Jenkins
Branch:master

commit b7c303ee0a16a05c1fdb476dc7f4c7ca623a3f58
Author: Oleg Bondarev 
Date:   Wed Nov 18 12:15:09 2015 +0300

Notify nova with network-vif-plugged in case of live migration

 - during live migration on pre migration step nova plugs instance
   vif device on the destination compute node;
 - L2 agent on destination host detects new device and requests device
   info from server;
 - server does not change port status since port is bound to another
   host (source host);
 - L2 agent processes device and sends update_device_up to server;
 - again server does not update status as port is bound to another host;

Nova notifications are sent only in case port status change so in this case
no notifications are sent.

The fix is to explicitly notify nova if agent reports device up from a host
other than port's current host.

This is the fix on neutron side, the actual fix of the bug is on nova side:
change-id Ib1cb9c2f6eb2f5ce6280c685ae44a691665b4e98

Closes-Bug: #1414559
Change-Id: Ifa919a9076a3cc2696688af3feadf8d7fa9e6fc2
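
The "device up from a different host" condition the fix keys on can be
sketched as follows (hypothetical function and parameter names, not
neutron's actual signatures):

```python
def should_notify_nova(port_binding_host, reporting_agent_host):
    """Decide whether to notify nova explicitly on update_device_up.

    During live migration the destination agent reports the device up while
    the port is still bound to the source host; the port status never
    changes, so no status-driven notification would fire. Notify explicitly
    whenever the reporting host differs from the port's bound host.
    """
    return reporting_agent_host != port_binding_host

# Destination-host report during live migration: notify nova even though
# the port status stays unchanged on the still-bound source host.
print(should_notify_nova("src-host", "dst-host"))  # True
```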


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414559

Title:
  OVS drops RARP packets by QEMU upon live-migration - VM temporarily
  disconnected

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When live-migrating a VM, QEMU sends five RARP packets in order to allow 
re-learning of the new location of the VM's MAC address.
  However, the VIF creation scheme between nova-compute and neutron-ovs-agent 
drops these RARPs:
  1. nova creates a port on OVS, but without the internal tag.
  2. At this stage, all packets that come out of the VM (or the QEMU process 
it runs in) are dropped.
  3. QEMU sends five RARP packets in order to allow MAC learning. These 
packets are dropped as described in #2.
  4. Meanwhile, neutron-ovs-agent loops every POLLING_INTERVAL and scans 
for new ports. Once it detects a new port, it reads the new port's properties 
and assigns the correct internal tag, which allows connection of the VM.

  The flow above suggests that:
  1. RARP packets are dropped, so MAC learning takes much longer and depends on 
internal traffic and advertising by the VM.
  2. The VM is disconnected from the network for a mean period of 
POLLING_INTERVAL/2.

  Seems like this could be solved by direct messages between nova vif
  driver and neutron-ovs-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605832] Re: no 8021q support

2016-07-25 Thread Armando Migliaccio
Adding Neutron to keep an eye on the Cirros issue.


** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605832

Title:
  no 8021q support

Status in CirrOS:
  New
Status in neutron:
  New

Bug description:
  Apologies if this is not the right place for a feature request.

  In OpenStack Neutron we are developing a feature to allow VMs to send
  VLAN-tagged traffic, and we would like end-to-end testing support for it
  (all of which is currently based on CirrOS). However, CirrOS doesn't
  appear to support creating VLAN interfaces:

  $ sudo ip link add link eth0 name eth0.99 type vlan id 99
  ip: RTNETLINK answers: Operation not supported

  
  Is it possible to have the 8021q kernel module loaded into cirros, or would 
that require too much space?

  
  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1605832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584204] Re: VersionsCallbackNotFound exception when using QoS

2016-07-25 Thread Ihar Hrachyshka
I presume it's fixed now.

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: In Progress => Fix Released

** Tags removed: neutron-proactive-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584204

Title:
  VersionsCallbackNotFound exception when using QoS

Status in networking-ovn:
  Confirmed
Status in neutron:
  Fix Released

Bug description:
  VersionsCallbackNotFound exception occurred in neutron-server running
  networking-ovn when trying to enable QoS with the following commands:

  $ neutron qos-policy-create bw-limiter

  $ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000
  --max-burst-kbps 300

  Note:  This exception occurred when running core plugin or ML2 mech
  driver.

  
  2016-05-20 09:41:36.789 27596 DEBUG oslo_policy.policy 
[req-0fe76c74-76a6-43b3-8f5b-4d85a65aec7b admin -] Reloaded policy file: 
/etc/neutron/policy.json _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:520
  2016-05-20 09:41:36.954 27596 INFO neutron.wsgi 
[req-0fe76c74-76a6-43b3-8f5b-4d85a65aec7b admin -] 192.168.56.10 - - 
[20/May/2016 09:41:36] "GET /v2.0/qos/policies.json?fields=id=bw-limiter 
HTTP/1.1" 200 260 0.368297
  2016-05-20 09:41:37.031 27596 DEBUG neutron.api.v2.base 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] Request body: 
{u'bandwidth_limit_rule': {u'max_kbps': u'3000', u'max_burst_kbps': u'300'}} 
prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:658
  2016-05-20 09:41:37.031 27596 DEBUG neutron.api.v2.base 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] Unknown quota resources 
['bandwidth_limit_rule']. _create /opt/stack/neutron/neutron/api/v2/base.py:460
  2016-05-20 09:41:37.056 27596 DEBUG neutron.api.rpc.handlers.resources_rpc 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] 
neutron.api.rpc.handlers.resources_rpc.ResourcesPushRpcApi method push called 
with arguments (, 
QosPolicy(description='',id=dbee9581-44a5-4889-bd06-9193eb08c10d,name='bw-limiter',rules=[QosRule(7317f86e-bacb-4c6c-9221-66e2f9d9309d)],shared=False,tenant_id=7c291c3d9d1a45dd89c8c80c7f5f12b0),
 'updated') {} wrapper 
/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py:47
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] create failed
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 412, in create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 523, in _create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource obj = 
do_create(body)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 505, in do_create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-05-20 09:41:37.056 27596 ERROR 

[Yahoo-eng-team] [Bug 1605066] Re: [Neutron][VPNaaS] Failed to create ipsec site connection

2016-07-25 Thread Miguel Lavalle
Based on comment from Dongcan Ye, setting this bug to invalid since it
is caused by a libreswan bug

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605066

Title:
  [Neutron][VPNaaS] Failed to create ipsec site connection

Status in neutron:
  Invalid

Bug description:
  Code repo: neutron-vpnaas master
  OS: Centos7
  ipsec device driver: libreswan-3.15-5.el7_1.x86_64

  In /etc/neutron/vpn_agent.ini, vpn_device_driver is
  neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver.

  Before running neutron-vpn-agent I checked the ipsec status, and it seemed normal:
  # ipsec verify
  Verifying installed system and configuration files

  Version check and ipsec on-path   [OK]
  Libreswan 3.15 (netkey) on 3.10.0-123.el7.x86_64
  Checking for IPsec support in kernel  [OK]
   NETKEY: Testing XFRM related proc values
   ICMP default/send_redirects  [OK]
   ICMP default/accept_redirects[OK]
   XFRM larval drop [OK]
  Pluto ipsec.conf syntax   [OK]
  Hardware random device[N/A]
  Two or more interfaces found, checking IP forwarding  [OK]
  Checking rp_filter[OK]
  Checking that pluto is running[OK]
   Pluto listening for IKE on udp 500   [OK]
   Pluto listening for IKE/NAT-T on udp 4500[OK]
   Pluto ipsec.secret syntax[OK]
  Checking 'ip' command [OK]
  Checking 'iptables' command   [OK]
  Checking 'prelink' command does not interfere with FIPS
  Checking for obsolete ipsec.conf options  [OK]
  Opportunistic Encryption  [DISABLED]

  After creating the ikepolicy, ipsecpolicy and vpn service, creating an
  ipsec-site-connection failed; the ipsec whack --ctlbase status exit code
  recorded in vpn-agent.log is 1, which means pluto is not running.

  Tracing the code, I think the problem is in the function enable(): the
  call to self.ensure_configs() [1] may fail.
  ensure_configs() [2] is overridden in libreswan_ipsec.py; I have not
  confirmed that the root cause is ipsec checknss (which creates the nssdb).
  If the call to self.ensure_configs() fails, we can't start the ipsec
  pluto daemon.

  
  Here is the running ipsec process:
  # ps aux |grep ipsec
  root 3  0.0  0.0   9648  1368 pts/17   S+   12:59   0:00 /bin/sh 
/sbin/ipsec checknss 
/opt/stack/data/neutron/ipsec/f75151f6-ef01-4a68-9747-eb52f4e629f5/etc
  root 4  0.0  0.0  37400  3300 pts/17   S+   12:59   0:00 certutil -N 
-d sql:/etc/ipsec.d --empty-password
  root 25893  0.0  0.0   9040   668 pts/0S+   13:40   0:00 grep 
--color=auto ipsec
  root 26396  0.0  0.1 335268  4588 ?Ssl  08:58   0:00 
/usr/libexec/ipsec/pluto --config /etc/ipsec.conf --nofork

  [1] 
https://github.com/openstack/neutron-vpnaas/blob/master/neutron_vpnaas/services/vpn/device_drivers/ipsec.py#L304
  [2] 
https://github.com/openstack/neutron-vpnaas/blob/master/neutron_vpnaas/services/vpn/device_drivers/libreswan_ipsec.py#L59

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605066/+subscriptions



[Yahoo-eng-team] [Bug 1598370] Re: Got AttributeError when launching instance in Aarch64

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/336781
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a221eb4cc6ea6bd3459495be9d2a443e9808fb1a
Submitter: Jenkins
Branch:master

commit a221eb4cc6ea6bd3459495be9d2a443e9808fb1a
Author: Kevin Zhao 
Date:   Sat Jul 2 09:13:58 2016 +

libvirt: Modify the interface address object assignment

On some architectures such as AArch64, the device interface has the
address type "virtio-mmio" rather than "PCI" or "Drive", so assign the
default "LibvirtConfigGuestDeviceAddress" object to the interface. Also
fix the problem for s390x, which has the address type "ccw".

Closes-Bug: #1598370

Change-Id: I10004c2060ff8f5a60e065b237311fe5b9fd1876
Signed-off-by: Kevin Zhao 


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598370

Title:
  Got AttributeError  when launching instance in Aarch64

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  Using nova to create an instance on AArch64 finally fails with
  AttributeError "'NoneType' object has no attribute 'parse_dom'" after
  "_get_guest_xml".
  Steps to reproduce
  ==
  1. Using devstack to deploy openstack, with the default local.conf.

  2. Upload the aarch64 images with glance.
  $ source ~/devstack/openrc admin admin
  $ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
  $ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
  $ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd --progress
  $ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
  $ IMAGE_KERNEL_UUID=$(glance image-list | grep image-arm64.vmlinuz | awk '{ 
print $2 }')
  $ IMAGE_INITRD_UUID=$(glance image-list | grep image-arm64.initrd | awk '{ 
print $2 }')
  $ glance image-update --kernel-id ${IMAGE_KERNEL_UUID} --ramdisk-id 
${IMAGE_INITRD_UUID} ${IMAGE_UUID}
  3. Set the SCSI model:
  $ glance image-update --property hw_disk_bus --property 
hw_scsi_model=virtio-scsi ${IMAGE_UUID}

  4. Add a nova keypair:
  $ nova keypair-add default --pub-key ~/.ssh/id_rsa.pub

  5. Launch the instance:
  $ image=$(nova image-list | egrep "image-arm64.img"'[^-]' | awk '{ print $2 
}')
  $ nova boot --flavor m1.small --image ${image} --key-name default test-arm64

  6. See the n-cpu log to get the error information.

  Expected result
  ===
  Spawning guest successfully.

  Actual result
  =
  Got the error log information as below:
  2016-07-02 06:57:08.645 ERROR nova.compute.manager 
[req-c8805971-7d8a-4775-ae95-7ac62b284487 admin admin] [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] Instance failed to spawn
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] Traceback (most recent call last):
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2063, in _build_resources
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] yield resources
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1907, in _build_and_run_instance
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] block_device_info=block_device_info)
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2665, in spawn
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] post_xml_callback=gen_confdrive)
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4860, in 
_create_domain_and_network
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] post_xml_callback=post_xml_callback)
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4784, in _create_domain
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] post_xml_callback()
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1606328] [NEW] UX: Resource Types table with unnecessary checkboxes

2016-07-25 Thread Eddie Ramirez
Public bug reported:

1. Go to Project->Orchestration->Resource Types

Looks like there are no table actions (batch actions) attached to this
table, making the checkboxes useless.

See attached screenshot.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: heat ux

** Attachment added: "Screenshot"
   
https://bugs.launchpad.net/bugs/1606328/+attachment/4707257/+files/resourcetypes.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606328

Title:
  UX: Resource Types table with unnecessary checkboxes

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Go to Project->Orchestration->Resource Types

  Looks like there are no table actions (batch actions) attached to this
  table, making the checkboxes useless.

  See attached screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606328/+subscriptions



[Yahoo-eng-team] [Bug 1570535] Re: UX: Resource Types table needs filter

2016-07-25 Thread David Lyle
*** This bug is a duplicate of bug 1569681 ***
https://bugs.launchpad.net/bugs/1569681

** This bug has been marked a duplicate of bug 1569681
   It's hard to find the resource_type in Orchestration panel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1570535

Title:
  UX: Resource Types table needs filter

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Resource Types table could REALLY use a filter of some kind, to match
  other tables: https://i.imgur.com/9hu6ZqN.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1570535/+subscriptions



[Yahoo-eng-team] [Bug 1570535] Re: UX: Resource Types table needs filter

2016-07-25 Thread Eddie Ramirez
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1570535

Title:
  UX: Resource Types table needs filter

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Resource Types table could REALLY use a filter of some kind, to match
  other tables: https://i.imgur.com/9hu6ZqN.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1570535/+subscriptions



[Yahoo-eng-team] [Bug 1597644] Re: Quobyte: Permission denied on console.log during instance startup

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/337174
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d4e6bd8b2d0236b44baa9e20f301e88934e1450c
Submitter: Jenkins
Branch:master

commit d4e6bd8b2d0236b44baa9e20f301e88934e1450c
Author: Silvan Kaiser 
Date:   Mon Jul 4 11:58:00 2016 +

Revert "Remove manual creation of console.log"

The change being reverted removed a manual console.log creation that was
thought unnecessary. However, due to libvirt behaviour, permission issues
arise if console.log is not created by nova, causing the Quobyte CI to
fail.
Closes-Bug: #1597644
This reverts commit ea3904b168e4360b6c68464dbcd0585987b27e91.

Change-Id: I36d1faf90f9b97ae756b8bcda3beb47e66f9e587


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597644

Title:
  Quobyte: Permission denied on console.log during instance startup

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  A range of tempest volume tests fails when starting Quobyte volume
  based instances. This issue might be hit by other filesystem based
  volume drivers. The error shows the following trace in the nova-
  compute log:

  2016-06-30 07:01:42.770 1391 ERROR nova.virt.libvirt.guest 
[req-b128a930-5eaa-4c20-a26c-0bf5168f2232 
tempest-AttachVolumeShelveTestJSON-88210136 
tempest-AttachVolumeShelveTestJSON-88210136] Error launching a defined domain 
with XML: 
  [libvirt domain XML for instance 5f1d3712-641e-4cbe-a801-b103d75f59e3
  elided: the markup was stripped by the mail archive, leaving only stray
  text fragments (instance name, UUID, memory/vcpu counts, sysinfo, kernel
  and ramdisk paths, and device entries)]

  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager 
[req-b128a930-5eaa-4c20-a26c-0bf5168f2232 
tempest-AttachVolumeShelveTestJSON-88210136 
tempest-AttachVolumeShelveTestJSON-88210136] [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3] Instance failed to spawn
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3] Traceback (most recent call last):
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2063, in _build_resources
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3] yield resources
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1907, in _build_and_run_instance
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3] block_device_info=block_device_info)
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2665, in spawn
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3] post_xml_callback=gen_confdrive)
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4860, in 
_create_domain_and_network
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3] post_xml_callback=post_xml_callback)
  2016-06-30 07:01:42.770 1391 ERROR nova.compute.manager [instance: 
5f1d3712-641e-4cbe-a801-b103d75f59e3]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4789, in 

[Yahoo-eng-team] [Bug 1571722] Re: Neutronclient failures throw unhelpful "Unexpected Exception"

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/312014
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=80d39a65062a424c9efe42257a38a09c6af2f3e9
Submitter: Jenkins
Branch:master

commit 80d39a65062a424c9efe42257a38a09c6af2f3e9
Author: Sahid Orentino Ferdjaoui 
Date:   Tue May 3 06:54:21 2016 -0400

network: handle unauthorized exception from neutron

Neutron can raise an unauthorized exception for an expired or invalid
token. In the case of an admin context, we log a message to inform
operators that the Neutron admin credentials are not well configured, then
return a 500 to the user. In the case of a user context, we return a 401.

Change-Id: I87c8b86373967639eb55b4cc3b7d6cbd9780f3ac
Closes-Bug: #1571722


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1571722

Title:
  Neutronclient failures throw unhelpful "Unexpected Exception"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We are seeing some bug reports coming in where users experience an
  "Unexpected Exception" error when neutronclient fails. This is not
  particularly helpful for anyone experiencing this issue and it would
  be nice to have something more specific to aid in troubleshooting.

  We could arguably blanket wrap all connection failures from
  neutronclient in nova.network.neutronv2.api and raise those up as
  something more specific. See cinderclient usage in nova, for example.

  like https://github.com/openstack/python-
  neutronclient/blob/master/neutronclient/common/exceptions.py#L204 or
  https://github.com/openstack/python-
  neutronclient/blob/master/neutronclient/common/exceptions.py#L109
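
  As a rough illustration of the kind of wrapping suggested above (every
  class and function name below is a stand-in defined here, not nova's or
  neutronclient's actual API):

```python
# Stand-in sketch of translating a low-level client failure into a more
# specific, user-facing exception; none of these names come from nova.

class ClientConnectionFailed(Exception):
    """Stand-in for a neutronclient connection error."""

class NeutronConnectionFailed(Exception):
    """Hypothetical nova-side exception with a clearer message."""

def call_neutron(func, *args, **kwargs):
    """Invoke a client call, translating connection failures."""
    try:
        return func(*args, **kwargs)
    except ClientConnectionFailed as exc:
        raise NeutronConnectionFailed(
            "Connection to neutron failed: %s" % exc) from exc

def flaky_list_ports():
    """Simulated client call that always fails to connect."""
    raise ClientConnectionFailed("Connection refused")

try:
    call_neutron(flaky_list_ports)
except NeutronConnectionFailed as exc:
    print(exc)  # Connection to neutron failed: Connection refused
```

  The caller now sees one specific exception type instead of a generic
  "Unexpected Exception", while the original error is preserved as the
  cause.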

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1571722/+subscriptions



[Yahoo-eng-team] [Bug 1572062] Re: nova-consoleauth doesn't play well with memcached

2016-07-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307718
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=59192cfbf9482fe3798cdcd4c2044d8c85f66e95
Submitter: Jenkins
Branch:master

commit 59192cfbf9482fe3798cdcd4c2044d8c85f66e95
Author: Jens Rosenboom 
Date:   Tue Apr 19 13:18:52 2016 +0200

Warn when using null cache backend

The default backend for oslo_cache is dogpile.cache.null, leading to a
broken setup when following the current release notes or the deprecation
message verbatim. So we should at least emit a warning when the config
does not specify a different backend.

Note: I'm not sure whether it is possible to amend the release note like
this or whether there needs a new note to be added, please advise.

Change-Id: I16647e424d8382dae98f13cb1f73a7e0c0aebaf5
Closes-Bug: 1572062


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572062

Title:
  nova-consoleauth doesn't play well with memcached

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When running with the Ubuntu Xenial Mitaka packages, I'm seeing the
  following behaviour:

  Following the release notes at
  http://docs.openstack.org/releasenotes/nova/mitaka.html I remove the
  old option "memcached_servers" and set up a cache section with

  [cache]
  enabled = true
  memcache_servers = host1:11211,host2:11211,host3:11211

  The result is that there are errors logged when creating a token:

  2016-04-19 10:17:33.501 15952 WARNING nova.consoleauth.manager 
[req-ad043de9-616d-4cbb-a93f-fc5e345f1f53 1b0106d4f339406bb7012a83162ba5f2 
e3c253d3e8344a8796e70bc4f96b6166 - - -] Token: 
dd0e7bbc-c593-421f-a56b-52578aaec20e failed to save into memcached.
  2016-04-19 10:17:33.503 15952 WARNING nova.consoleauth.manager 
[req-ad043de9-616d-4cbb-a93f-fc5e345f1f53 1b0106d4f339406bb7012a83162ba5f2 
e3c253d3e8344a8796e70bc4f96b6166 - - -] Instance: 
d3267abb-258d-4b5d-a22e-8e3b0a39905c failed to save into memcached

  Only after a lot of debugging it turns out that the default backend
  for oslo_cache is dogpile.cache.null, implying that no values get
  cached and token validation always fails. Only after adding

  [cache]
  backend = oslo_cache.memcache_pool

  does console authentication start working. Strangely though, the above
  warning messages are still being logged in the working setup, which
  made debugging this even more difficult.

  So I suggest the following fixes:

  1. Change the text of the warnings from "failed to save into memcached" to 
"failed to save into cache", as with the change to using oslo_cache, there may 
be other backends in use instead of memcached.
  2. Either override the default of using the null backend or refuse to run 
with it or at the very least give a big fat warning that the configuration can 
not work.
  3. Stop generating the warning messages when the data got in fact saved into 
cache properly.
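
  A minimal sketch of suggested fix 2 (a plain dict stands in for the real
  oslo.config [cache] section, so the names here are illustrative only):

```python
# Illustrative guard: warn when caching is enabled but the backend is
# still dogpile.cache.null, which silently caches nothing.
import warnings

NULL_BACKEND = "dogpile.cache.null"

def check_cache_config(cache_conf):
    """Emit a warning for an enabled-but-null cache configuration."""
    if cache_conf.get("enabled") and \
            cache_conf.get("backend", NULL_BACKEND) == NULL_BACKEND:
        warnings.warn("Cache enabled with backend %s; nothing will "
                      "actually be cached." % NULL_BACKEND)

# The configuration from this report: enabled, but no backend set.
check_cache_config({"enabled": True,
                    "memcache_servers": "host1:11211,host2:11211"})
```

  Setting backend = oslo_cache.memcache_pool would make the check pass
  silently, matching the working configuration described above.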

  Package versions for reference:
  # dpkg -l | grep nova
  ii  nova-api-metadata2:13.0.0-0ubuntu2   all  
OpenStack Compute - metadata API frontend
  ii  nova-api-os-compute  2:13.0.0-0ubuntu2   all  
OpenStack Compute - OpenStack Compute API frontend
  ii  nova-cert2:13.0.0-0ubuntu2   all  
OpenStack Compute - certificate management
  ii  nova-common  2:13.0.0-0ubuntu2   all  
OpenStack Compute - common files
  ii  nova-conductor   2:13.0.0-0ubuntu2   all  
OpenStack Compute - conductor service
  ii  nova-consoleauth 2:13.0.0-0ubuntu2   all  
OpenStack Compute - Console Authenticator
  ii  nova-scheduler   2:13.0.0-0ubuntu2   all  
OpenStack Compute - virtual machine scheduler
  ii  nova-spiceproxy  2:13.0.0-0ubuntu2   all  
OpenStack Compute - spice html5 proxy
  ii  python-nova  2:13.0.0-0ubuntu2   all  
OpenStack Compute Python libraries
  ii  python-novaclient2:3.3.1-2   all  
client library for OpenStack Compute API - Python 2.7
  # dpkg -l | grep oslo
  ii  python-oslo.cache1.6.0-2 all  
cache storage for Openstack projects - Python 2.7
  ii  python-oslo.concurrency  3.7.0-2 all  
concurrency and locks for OpenStack projects - Python 2.x
  ii  python-oslo.config   1:3.9.0-3   all  
Common code for Openstack Projects (configuration API) - Python 2.x
  ii  

[Yahoo-eng-team] [Bug 1606294] [NEW] Error 404 is poorly styled

2016-07-25 Thread Eddie Ramirez
Public bug reported:

How to reproduce:

1. Make the web server respond with an HTTP 404 status code (with
DEBUG=False).

The 404 template file at openstack_dashboard/templates/404.html renders
a poorly styled web page with no margins and a hyphen at the
beginning of the .

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot"
   
https://bugs.launchpad.net/bugs/1606294/+attachment/4707221/+files/error404.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606294

Title:
  Error 404 is poorly styled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:

  1. Make the web server respond with an HTTP 404 status code (with
  DEBUG=False).

  The 404 template file at openstack_dashboard/templates/404.html renders
  a poorly styled web page with no margins and a hyphen at the
  beginning of the .

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606294/+subscriptions



[Yahoo-eng-team] [Bug 1606269] [NEW] Incorrect flavor in request_spec on resize

2016-07-25 Thread Charles Volzka
Public bug reported:

On resizes, the RequestSpec object sent to the scheduler contains the
instance's original flavor.  This is causing an issue in our scheduler
because it does not see a new flavor for the resize tasking.

The issue appears to be that at
https://github.com/openstack/nova/blob/76dfb4ba9fa0fed1350021591956c4e8143b1ce9/nova/conductor/tasks/migrate.py#L52
the RequestSpec is hydrated with self.instance.flavor rather than the new
flavor, which is self.flavor.

Issue discovered in Newton nova; it appeared after
https://github.com/openstack/nova/commit/76dfb4ba9fa0fed1350021591956c4e8143b1ce9?diff=split#diff-b839034e35c154b8c3a1c65bf7791eefL42
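
A toy reproduction of the mix-up (dataclasses standing in for nova
objects; nothing here is nova's real code):

```python
# Toy model: on resize the spec should carry the task's *new* flavor
# (task.flavor), not the instance's current one (task.instance.flavor).
from dataclasses import dataclass

@dataclass
class Flavor:
    name: str

@dataclass
class Instance:
    flavor: Flavor          # flavor the instance currently runs with

@dataclass
class MigrationTask:
    instance: Instance
    flavor: Flavor          # flavor requested for the resize

    def request_spec_flavor(self):
        # The buggy code hydrated the spec with self.instance.flavor;
        # the fix described in this report is to use self.flavor.
        return self.flavor

task = MigrationTask(instance=Instance(flavor=Flavor("m1.small")),
                     flavor=Flavor("m1.large"))
print(task.request_spec_flavor().name)  # m1.large
```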

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606269

Title:
  Incorrect flavor in request_spec on resize

Status in OpenStack Compute (nova):
  New

Bug description:
  On resizes, the RequestSpec object sent to the scheduler contains the
  instance's original flavor.  This is causing an issue in our scheduler
  because it does not see a new flavor for the resize tasking.

  The issue appears to be that at
  https://github.com/openstack/nova/blob/76dfb4ba9fa0fed1350021591956c4e8143b1ce9/nova/conductor/tasks/migrate.py#L52
  the RequestSpec is hydrated with self.instance.flavor rather than the new
  flavor, which is self.flavor.

  Issue discovered in Newton nova; it appeared after
  https://github.com/openstack/nova/commit/76dfb4ba9fa0fed1350021591956c4e8143b1ce9?diff=split#diff-b839034e35c154b8c3a1c65bf7791eefL42

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1606269/+subscriptions



[Yahoo-eng-team] [Bug 1606252] Re: refresh tasks schemas for api-ref

2016-07-25 Thread Brian Rosmaita
Nothing to do.  The sample v2/schemas/task and v2/schemas/tasks
responses are accurate as of the 13.0.0.0b2 release.

** Changed in: glance
   Status: In Progress => Invalid

** Changed in: glance
 Assignee: Brian Rosmaita (brian-rosmaita) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1606252

Title:
  refresh tasks schemas for api-ref

Status in Glance:
  Invalid

Bug description:
  Review the tasks schemas in the api-ref and refresh if necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1606252/+subscriptions



[Yahoo-eng-team] [Bug 1606252] [NEW] refresh tasks schemas for api-ref

2016-07-25 Thread Brian Rosmaita
Public bug reported:

Review the tasks schemas in the api-ref and refresh if necessary.

** Affects: glance
 Importance: High
 Assignee: Brian Rosmaita (brian-rosmaita)
 Status: In Progress


** Tags: api-ref

** Changed in: glance
   Status: New => In Progress

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1606252

Title:
  refresh tasks schemas for api-ref

Status in Glance:
  In Progress

Bug description:
  Review the tasks schemas in the api-ref and refresh if necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1606252/+subscriptions



[Yahoo-eng-team] [Bug 1606253] [NEW] replace image-update response example in api-ref

2016-07-25 Thread Brian Rosmaita
Public bug reported:

Not only is the owner null, but the image is active and has a non-null
file with null disk_format and container_format.  Replace with a more
typical response.

** Affects: glance
 Importance: High
 Assignee: Brian Rosmaita (brian-rosmaita)
 Status: In Progress


** Tags: api-ref

** Changed in: glance
   Status: New => In Progress

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1606253

Title:
  replace image-update response example in api-ref

Status in Glance:
  In Progress

Bug description:
  Not only is the owner null, but the image is active and has a non-null
  file with null disk_format and container_format.  Replace with a more
  typical response.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1606253/+subscriptions



[Yahoo-eng-team] [Bug 1606250] [NEW] refresh images schemas for api-ref

2016-07-25 Thread Brian Rosmaita
Public bug reported:

The api-ref got merged with at least one outdated image-related schema.
Refresh them all.

** Affects: glance
 Importance: High
 Assignee: Brian Rosmaita (brian-rosmaita)
 Status: In Progress


** Tags: api-ref

** Changed in: glance
   Status: New => In Progress

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1606250

Title:
  refresh images schemas for api-ref

Status in Glance:
  In Progress

Bug description:
  The api-ref got merged with at least one outdated image-related
  schema.  Refresh them all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1606250/+subscriptions



[Yahoo-eng-team] [Bug 1606229] [NEW] vif_port_id of ironic port is not updating after neutron port-delete

2016-07-25 Thread Andrey Shestakov
Public bug reported:

Steps to reproduce:
1. Get list of attached ports of instance:
nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
2. Show the ironic port. It has vif_port_id in extra with the id of the neutron port:
ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
+-----------------------+-----------------------------------------------------------+
| Property              | Value                                                     |
+-----------------------+-----------------------------------------------------------+
| address               | 52:54:00:85:19:89                                         |
| created_at            | 2016-07-20T13:15:23+00:00                                 |
| extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
| local_link_connection |                                                           |
| node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
| pxe_enabled           |                                                           |
| updated_at            | 2016-07-22T13:31:29+00:00                                 |
| uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
+-----------------------+-----------------------------------------------------------+
3. Delete neutron port:
neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
4. The port is gone from the interface list:
nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
+------------+---------+--------+--------------+----------+
| Port State | Port ID | Net ID | IP addresses | MAC Addr |
+------------+---------+--------+--------------+----------+
+------------+---------+--------+--------------+----------+
5. The ironic port still has vif_port_id with the neutron port's id:
ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
+-----------------------+-----------------------------------------------------------+
| Property              | Value                                                     |
+-----------------------+-----------------------------------------------------------+
| address               | 52:54:00:85:19:89                                         |
| created_at            | 2016-07-20T13:15:23+00:00                                 |
| extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
| local_link_connection |                                                           |
| node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
| pxe_enabled           |                                                           |
| updated_at            | 2016-07-22T13:31:29+00:00                                 |
| uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
+-----------------------+-----------------------------------------------------------+

This can confuse a user who wants to get the list of unused ports of an ironic node.
vif_port_id should be removed after neutron port-delete.
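As a minimal sketch of the cleanup the report asks for (the function name and dict shapes are illustrative, not ironic's actual code), the stale entry could be dropped from the port's extra field once the matching neutron port is deleted:

```python
def clear_vif_port_id(port_extra, deleted_neutron_port_id):
    """Return a copy of an ironic port's 'extra' dict without the
    vif_port_id entry, if it still points at the deleted neutron port."""
    extra = dict(port_extra)
    if extra.get('vif_port_id') == deleted_neutron_port_id:
        del extra['vif_port_id']
    return extra


# The stale state shown in step 5 above:
extra = {'vif_port_id': '512e6c8e-3829-4bbd-8731-c03e5d7f7639'}
cleaned = clear_vif_port_id(extra, '512e6c8e-3829-4bbd-8731-c03e5d7f7639')
```

Until that happens, the field can presumably be cleared by hand with `ironic port-update 735fcaf5-145d-4125-8701-365c58c6b796 remove extra/vif_port_id` (assuming the ironic CLI's add/replace/remove update syntax).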

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606229

Title:
  vif_port_id of ironic port is not updating after neutron port-delete

Status in neutron:
  New

Bug description:
  Steps to reproduce:
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
++--+--+---+---+
  | Port State | Port ID  | Net ID  
 | IP addresses  | MAC Addr 
 |
  
++--+--+---+---+
  | ACTIVE | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | 
ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 
10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 

[Yahoo-eng-team] [Bug 1604798] Re: Use DDT library to reduce code duplication

2016-07-25 Thread Dinesh Bhor
As per the discussion, this is not a real bug, so I am marking it as Invalid.

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
 Assignee: Dinesh Bhor (dinesh-bhor) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604798

Title:
  Use DDT library to reduce code duplication

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Use DDT library to reduce code duplication

  DDT can ease error tracing, and it auto-generates tests from different
  input data: one test case is run with each data set and appears as
  multiple test cases. This helps reduce code duplication.

  Please refer to the example usage:
  http://ddt.readthedocs.io/en/latest/example.html

  Currently DDT is implemented in openstack/cinder and openstack/rally.
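For readers without the ddt package, the same duplication-reducing idea can be sketched with stdlib unittest alone (this is an illustrative analogue, not DDT's actual API; see the linked example page for the real @ddt.ddt/@ddt.data decorators):

```python
import unittest


class TestLargerThanTwo(unittest.TestCase):
    # One test body, four data sets: each value shows up as its own
    # subtest, mirroring what DDT's @data decorator does by generating
    # one test method per data item.
    def test_larger_than_two(self):
        for value in (3, 4, 12, 23):
            with self.subTest(value=value):
                self.assertGreater(value, 2)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLargerThanTwo)
result = unittest.TestResult()
suite.run(result)
```

With DDT the four values would instead become four separately named test methods, which is what makes individual failures easier to trace.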

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604798/+subscriptions



[Yahoo-eng-team] [Bug 1604798] Re: Use DDT library to reduce code duplication

2016-07-25 Thread OpenStack Infra
** Changed in: nova
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604798

Title:
  Use DDT library to reduce code duplication

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Use DDT library to reduce code duplication

  DDT can ease error tracing, and it auto-generates tests from different
  input data: one test case is run with each data set and appears as
  multiple test cases. This helps reduce code duplication.

  Please refer to the example usage:
  http://ddt.readthedocs.io/en/latest/example.html

  Currently DDT is implemented in openstack/cinder and openstack/rally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604798/+subscriptions



[Yahoo-eng-team] [Bug 1268955] Re: OVS agent updates the wrong port when using Xen + Neutron with HVM or PVHVM

2016-07-25 Thread Bob Ball
** Changed in: neutron
   Status: Incomplete => Fix Committed

** Project changed: neutron => nova

** Summary changed:

- OVS agent updates the wrong port when using Xen + Neutron with HVM or PVHVM
+ OVS agent updates the wrong port when using XenAPI + Neutron with HVM or PVHVM

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268955

Title:
  OVS agent updates the wrong port when using XenAPI + Neutron with HVM
  or PVHVM

Status in OpenStack Compute (nova):
  Fix Committed

Bug description:
  Environment
  ==
  - Xen Server 6.2
  - OpenStack Havana installed with Packstack
  - Neutron OVS agent using VLAN

  From time to time, when an instance is started, it fails to get
  network connectivity. As a result the instance cannot get its IP
  address from DHCP and it remains unreachable.

  After further investigation, it appears that the OVS agent running on
  the compute node is updating the wrong OVS port because on startup, 2
  ports exist for the same instance: vifX.0 and tapX.0. The agent
  updates whatever port is returned in first position (see logs below).
  Note that the tapX.0 port is only transient and disappears after a few
  seconds.
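  A minimal sketch of the selection the agent would need (the record shape
  is modeled on the ovs-vsctl output in the logs; this is an illustration,
  not the fix that was actually committed):

```python
def pick_vif_record(records):
    """Pick the persistent vifX.Y interface record over the transient
    tapX.Y one.  Filtering on iface-status would not help here: in the
    logs the transient tap port is the 'active' one during startup."""
    vifs = [r for r in records if r['name'].startswith('vif')]
    return vifs[0] if vifs else records[0]


# The two records ovs-vsctl returns for the same iface-id at boot:
records = [
    {'name': 'tap29.0', 'iface-status': 'active', 'ofport': 52},
    {'name': 'vif29.0', 'iface-status': 'inactive', 'ofport': 51},
]
chosen = pick_vif_record(records)
```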

  Workaround
  ==

  Manually update the OVS port on dom0:

  $ ovs-vsctl set Port vif17.0 tag=1

  OVS Agent logs
  

  2014-01-14 14:15:11.382 18268 DEBUG neutron.agent.linux.utils [-] Running 
command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--columns=external_ids,name,ofport', 'find', 
'Interface', 'external_ids:iface-id="98679ab6-b879-4b1b-a524-01696959d468"'] 
execute /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
  2014-01-14 14:15:11.403 18268 DEBUG qpid.messaging.io.raw [-] SENT[3350c68]: 
'\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x81'
 writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
  2014-01-14 14:15:11.649 18268 DEBUG neutron.agent.linux.utils [-]
  Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--columns=external_ids,name,ofport', 'find', 
'Interface', 'external_ids:iface-id="98679ab6-b879-4b1b-a524-01696959d468"']
  Exit code: 0
  Stdout: 'external_ids: {attached-mac="fa:16:3e:46:1e:91", 
iface-id="98679ab6-b879-4b1b-a524-01696959d468", iface-status=active, 
xs-network-uuid="b2bf90df-be17-a4ff-5c1e-3d69851f508a", 
xs-vif-uuid="2d2718d8-6064-e734-2737-cdcb4e06efc4", 
xs-vm-uuid="7f7f1918-3773-d97c-673a-37843797f70a"}\nname: 
"tap29.0"\nofport  : 52\n\nexternal_ids: 
{attached-mac="fa:16:3e:46:1e:91", 
iface-id="98679ab6-b879-4b1b-a524-01696959d468", iface-status=inactive, 
xs-network-uuid="b2bf90df-be17-a4ff-5c1e-3d69851f508a", 
xs-vif-uuid="2d2718d8-6064-e734-2737-cdcb4e06efc4", 
xs-vm-uuid="7f7f1918-3773-d97c-673a-37843797f70a"}\nname: 
"vif29.0"\nofport  : 51\n\n'
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60
  2014-01-14 14:15:11.650 18268 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Port 
98679ab6-b879-4b1b-a524-01696959d468 updated. Details: {u'admin_state_up': 
True, u'network_id': u'ad37f107-074b-4c58-8f36-4705533afb8d', 
u'segmentation_id': 100, u'physical_network': u'default', u'device': 
u'98679ab6-b879-4b1b-a524-01696959d468', u'port_id': 
u'98679ab6-b879-4b1b-a524-01696959d468', u'network_type': u'vlan'}
  2014-01-14 14:15:11.650 18268 DEBUG neutron.agent.linux.utils [-] Running 
command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', 'set', 'Port', 'tap29.0', 'tag=1'] execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
  2014-01-14 14:15:11.913 18268 DEBUG neutron.agent.linux.utils [-]
  Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', 'set', 'Port', 'tap29.0', 'tag=1']
  Exit code: 0
  Stdout: '\n'
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268955/+subscriptions



[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies

2016-07-25 Thread Dinesh Bhor
** Also affects: murano
   Importance: Undecided
   Status: New

** Changed in: murano
 Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1214176

Title:
  Fix copyright headers to be compliant with Foundation policies

Status in Ceilometer:
  Fix Released
Status in devstack:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  Correct the copyright headers to be consistent with the policies
  outlined by the OpenStack Foundation at http://www.openstack.org/brand
  /openstack-trademark-policy/

  Remove references to OpenStack LLC, replace with OpenStack Foundation

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions



[Yahoo-eng-team] [Bug 1605421] Re: Unable to add classless-static-route in extra_dhcp_opt extension

2016-07-25 Thread Jakub Libosvar
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605421

Title:
  Unable to add classless-static-route in extra_dhcp_opt extension

Status in python-neutronclient:
  New

Bug description:
  When adding classless-static-route in extra_dhcp_opt for a port, the
  neutron client complains about a syntax error. For example,

  $ neutron port-update --extra-dhcp-opt 
opt_name="classless-static-route",opt_value="169.254.169.254/32,20.20.20.1" 
port1
  usage: neutron port-update [-h] [--request-format {json}] [--name NAME]
     [--description DESCRIPTION]
     [--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR]
     [--device-id DEVICE_ID]
     [--device-owner DEVICE_OWNER]
     [--admin-state-up {True,False}]
     [--security-group SECURITY_GROUP | 
--no-security-groups]
     [--extra-dhcp-opt EXTRA_DHCP_OPTS]
     [--qos-policy QOS_POLICY | --no-qos-policy]
     [--allowed-address-pair 
ip_address=IP_ADDR[,mac_address=MAC_ADDR]
     | --no-allowed-address-pairs]
     [--dns-name DNS_NAME | --no-dns-name]
     PORT
  neutron port-update: error: argument --extra-dhcp-opt: invalid key-value 
'20.20.20.1', expected format: key=value
  Try 'neutron help port-update' for more information.

  The reason is neutron client interprets the "," inside the opt_value as
  a delimiter of key-value pairs for --extra-dhcp-opt.

  The comma in the opt_value for classless-static-route is required because
  the format of DHCP options in the opts file for dnsmasq is like:

  tag:,option:classless-static-route,169.254.169.254/32,20.20.20.1
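  A rough illustration of the parsing problem (this is not neutronclient's
  actual parser, and the function names are made up): splitting on every
  comma orphans '20.20.20.1', while splitting only on commas that start a
  new key=value pair keeps the route intact:

```python
import re


def naive_split(optstr):
    # Split on every comma: '20.20.20.1' comes out as a token with no
    # '=' in it, which is exactly the "invalid key-value" error above.
    return [token.split('=', 1) for token in optstr.split(',')]


def split_on_key_commas(optstr):
    # Only split on a comma that is followed by another key= pair, so
    # commas inside opt_value survive.
    parts = re.split(r',(?=[^=,]+=)', optstr)
    return dict(part.split('=', 1) for part in parts)


opts = ('opt_name=classless-static-route,'
        'opt_value=169.254.169.254/32,20.20.20.1')
pairs = split_on_key_commas(opts)
```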

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1605421/+subscriptions



[Yahoo-eng-team] [Bug 1606136] [NEW] cannot detach_volume while VM is in error state

2016-07-25 Thread Silvan Kaiser
Public bug reported:

This error occurs randomly with the Quobyte CI on arbitrary changes.
Nova tries to detach a volume from a VM that is in vm_state error.

CI run examples can be found at [1][2]

Example output:
==
Failed 1 tests - output below:
==

tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume[id-f56e465b-fe10-48bf-b75d-646cda3a8bc9,negative,volume]
---

Captured traceback:
~~~
Traceback (most recent call last):
  File "tempest/api/compute/servers/test_server_rescue_negative.py", line 
80, in _unrescue
server_id, 'ACTIVE')
  File "tempest/common/waiters.py", line 77, in wait_for_server_status
server_id=server_id)
tempest.exceptions.BuildErrorException: Server 
35a2d53a-d80d-46b2-87a6-a7a82f9d5244 failed to build and is in ERROR status
Details: {u'code': 500, u'message': u"Failed to open file 
'/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-351db2c5-9724-410f-b1d8-8680065c0788':
 No such file or directory", u'created': u'2016-07-23T13:03:20Z'}


Captured traceback-2:
~
Traceback (most recent call last):
  File "tempest/api/compute/base.py", line 346, in delete_volume
cls._delete_volume(cls.volumes_extensions_client, volume_id)
  File "tempest/api/compute/base.py", line 277, in _delete_volume
volumes_client.delete_volume(volume_id)
  File "tempest/lib/services/compute/volumes_client.py", line 63, in 
delete_volume
resp, body = self.delete("os-volumes/%s" % volume_id)
  File "tempest/lib/common/rest_client.py", line 301, in delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "tempest/lib/services/compute/base_compute_client.py", line 48, in 
request
method, url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 664, in request
resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 828, in _error_checker
message=message)
tempest.lib.exceptions.ServerFault: Got server fault
Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.



Captured traceback-1:
~
Traceback (most recent call last):
  File "tempest/api/compute/servers/test_server_rescue_negative.py", line 
73, in _detach
self.servers_client.detach_volume(server_id, volume_id)
  File "tempest/lib/services/compute/servers_client.py", line 342, in 
detach_volume
(server_id, volume_id))
  File "tempest/lib/common/rest_client.py", line 301, in delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "tempest/lib/services/compute/base_compute_client.py", line 48, in 
request
method, url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 664, in request
resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 777, in _error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: An object with that identifier already 
exists
Details: {u'code': 409, u'message': u"Cannot 'detach_volume' instance 
35a2d53a-d80d-46b2-87a6-a7a82f9d5244 while it is in vm_state error"}


[1] http://78.46.57.153:8081/refs-changes-38-346438-3/
[2] http://78.46.57.153:8081/refs-changes-58-346358-1/
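The 409 in traceback-1 comes from nova's instance-state guard; as a simplified sketch of that pattern (the names and the allowed-state set are illustrative, not nova's real code):

```python
class InstanceInvalidState(Exception):
    pass


def check_instance_state(vm_state, allowed=('active', 'stopped', 'rescued')):
    # The API refuses an action when the instance's vm_state is not in
    # the allowed set for that action -- hence the 409 once the server
    # has fallen into vm_state 'error'.
    if vm_state not in allowed:
        raise InstanceInvalidState(
            "Cannot 'detach_volume' instance while it is in vm_state %s"
            % vm_state)


check_instance_state('active')  # allowed state, no exception
```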

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606136

Title:
  cannot detach_volume while VM is in error state

Status in OpenStack Compute (nova):
  New

Bug description:
  This error occurs randomly with the Quobyte CI on arbitrary changes.
  Nova tries to detach a volume from a VM that is in vm_state error.

  CI run examples can be found at [1][2]

  Example output:
  ==
  Failed 1 tests - output below:
  ==

  
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume[id-f56e465b-fe10-48bf-b75d-646cda3a8bc9,negative,volume]
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "tempest/api/compute/servers/test_server_rescue_negative.py", line 
80, in _unrescue
  server_id, 'ACTIVE')
File "tempest/common/waiters.py", line 

[Yahoo-eng-team] [Bug 1603979] Re: gate: context tests failed because missing parameter "is_admin_project" (oslo.context 2.6.0)

2016-07-25 Thread ChangBo Guo(gcb)
Nova hasn't accepted the g-r update yet; maybe the CI tests skip
oslo.context 2.6.0.

We have another fix in https://review.openstack.org/#/c/343694/

** Changed in: nova
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603979

Title:
  gate: context tests failed because missing parameter
  "is_admin_project" (oslo.context 2.6.0)

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The following 3 tests failed:
  1. 
nova.tests.unit.test_context.ContextTestCase.test_convert_from_dict_then_to_dict
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 230, in 
test_convert_from_dict_then_to_dict
  self.assertEqual(values, values2)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {
   ..
   'is_admin': True,
   ..}
  actual= {
   ..
   'is_admin': True,
   'is_admin_project': True,
   ..}

  2. nova.tests.unit.test_context.ContextTestCase.test_convert_from_rc_to_dict
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 203, in 
test_convert_from_rc_to_dict
  self.assertEqual(expected_values, values2)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {
   ..
   'is_admin': True,
   ..}
  actual= {
   ..
   'is_admin': True,
   'is_admin_project': True,
   ..}

  3. nova.tests.unit.test_context.ContextTestCase.test_to_dict_from_dict_no_log
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 144, in 
test_to_dict_from_dict_no_log
  self.assertEqual(0, len(warns), warns)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 0 != 1: ["Arguments dropped when 
creating context: {'is_admin_project': True}"]

  Steps to reproduce
  ==
  Just run the context tests:
  tox -e py27 test_context

  This is because we missed to pass "is_admin_project" parameter to
  __init__() of  oslo.context.ResourceContext when initializing a nova
  ResourceContext object.

  In nova/context.py

  @enginefacade.transaction_context_provider
  class RequestContext(context.RequestContext):
  """Security context and request information.

  Represents the user taking a given action within the system.

  """

  def __init__(self, user_id=None, project_id=None,
   is_admin=None, read_deleted="no",
   roles=None, remote_address=None, timestamp=None,
   request_id=None, auth_token=None, overwrite=True,
   quota_class=None, user_name=None, project_name=None,
   service_catalog=None, instance_lock_checked=False,
   user_auth_plugin=None, **kwargs):
  ..
  super(RequestContext, self).__init__(
  ..
  is_admin=is_admin,
  ..)

  But in oslo_context/context.py,

  class RequestContext(object):

  ..

  def __init__(..
   is_admin=False,
   ..
   is_admin_project=True):
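  A simplified sketch of the mismatch and the fix (both classes are
  stand-ins, not the real oslo.context/nova code): the subclass must
  forward the new keyword argument, e.g. via **kwargs, instead of
  dropping it on the floor:

```python
class BaseContext(object):
    # Stand-in for oslo_context.context.RequestContext 2.6.0, which
    # grew a new is_admin_project keyword argument.
    def __init__(self, is_admin=False, is_admin_project=True, **kwargs):
        self.is_admin = is_admin
        self.is_admin_project = is_admin_project


class FixedRequestContext(BaseContext):
    # Forwarding **kwargs to the parent keeps is_admin_project (and any
    # future additions) from being silently dropped.
    def __init__(self, user_id=None, is_admin=None, **kwargs):
        self.user_id = user_id
        super(FixedRequestContext, self).__init__(is_admin=is_admin, **kwargs)


ctxt = FixedRequestContext(user_id='u1', is_admin=True, is_admin_project=False)
```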

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603979/+subscriptions



[Yahoo-eng-team] [Bug 1606123] [NEW] Connection Error in Resource Usage Panel

2016-07-25 Thread Sudheer Kalla
Public bug reported:

The Resource Usage panel sometimes throws the following error:

ConnectionError: HTTPSConnectionPool(host='public.fuel.local',
port=8777): Max retries exceeded with url:
/v2/meters/disk.write.bytes/statistics?q.field=project_id=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=eq=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le
 
pe==ddf0b064e5b045c4b064ec7344100f81=2016-07-21+05%3A46%3A26.965056%2B00%3A00=2016-07-22+05%3A46%3A26.965056%2B00%3A00=2016-07-21+05%3A56%3A08.129089%2B00%3A00=2016-07-22+05%3A56%3A08.129089%2B00%3A00=2016-07-21+05%3A57%3A02.799849%2B00%3A00=2016-07-22+05%3A57%3A02.799849%2B00%3A00=2016-07-15+05%3A57%3A30.502598%2B00%3A00=2016-07-22+05%3A57%3A30.502598%2B00%3A00=2016-07-15+05%3A58%3A12.655724%2B00%3A00=2016-07-22+05%3A58%3A12.655724%2B00%3A00=2016-07-15+05%3A58%3A26.069241%2B00%3A00=2016-07-22+05%3A58%3A26.069241%2B00%3A00=2016-07-15+05%3A58%3A57.687257%2B00%3A00=2016-07-22+05%3A58%3A57.687257%2B00%3A00=2016-07-15+05%3A59%3A11.241494%2B00%3A00=2016-07-22+05%3A59%3A11.241494%2B00%3A00=
 
2016-07-15+05%3A59%3A41.254137%2B00%3A00=2016-07-22+05%3A59%3A41.254137%2B00%3A00=2016-07-15+06%3A02%3A30.960500%2B00%3A00=2016-07-22+06%3A02%3A30.960500%2B00%3A00=2016-07-21+06%3A24%3A35.926004%2B00%3A00=2016-07-22+06%3A24%3A35.926004%2B00%3A00=2016-07-15+06%3A24%3A55.701449%2B00%3A00=2016-07-22+06%3A24%3A55.701449%2B00%3A00=2016-07-24+05%3A13%3A02.620998%2B00%3A00=2016-07-25+05%3A13%3A02.620998%2B00%3A00=2016-07-18+05%3A16%3A35.441166%2B00%3A00=2016-07-25+05%3A16%3A35.441166%2B00%3A00=2016-07-18+05%3A17%3A05.966066%2B00%3A00=2016-07-25+05%3A17%3A05.966066%2B00%3A00=2016-07-24+06%3A13%3A42.435985%2B00%3A00=2016-07-25+06%3A13%3A42.435985%2B00%3A00=86400
(Caused by : [Errno 110] Connection timed out)


Although the CLI command "ceilometer meter-list" works fine, most of the time
this error leads to the "something went wrong" page. This error will most
commonly be seen when there is a lot of data to be fetched.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  The Resource-Usage panel some times throws the following error
  
  ConnectionError: HTTPSConnectionPool(host='public.fuel.local',
  port=8777): Max retries exceeded with url:
  
/v2/meters/disk.write.bytes/statistics?q.field=project_id=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=timestamp=eq=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le=ge=le
 
type==ddf0b064e5b045c4b064ec7344100f81=2016-07-21+05%3A46%3A26.965056%2B00%3A00=2016-07-22+05%3A46%3A26.965056%2B00%3A00=2016-07-21+05%3A56%3A08.129089%2B00%3A00=2016-07-22+05%3A56%3A08.129089%2B00%3A00=2016-07-21+05%3A57%3A02.799849%2B00%3A00=2016-07-22+05%3A57%3A02.799849%2B00%3A00=2016-07-15+05%3A57%3A30.502598%2B00%3A00=2016-07-22+05%3A57%3A30.502598%2B00%3A00=2016-07-15+05%3A58%3A12.655724%2B00%3A00=2016-07-22+05%3A58%3A12.655724%2B00%3A00=2016-07-15+05%3A58%3A26.069241%2B00%3A00=2016-07-22+05%3A58%3A26.069241%2B00%3A00=2016-07-15+05%3A58%3A57.687257%2B00%3A00=2016-07-22+05%3A58%3A57.687257%2B00%3A00=2016-07-15+05%3A59%3A11.241494%2B00%3A00=2016-07-22+05%3A59%3A11.241494%2B00%3A00
 
e=2016-07-15+05%3A59%3A41.254137%2B00%3A00=2016-07-22+05%3A59%3A41.254137%2B00%3A00=2016-07-15+06%3A02%3A30.960500%2B00%3A00=2016-07-22+06%3A02%3A30.960500%2B00%3A00=2016-07-21+06%3A24%3A35.926004%2B00%3A00=2016-07-22+06%3A24%3A35.926004%2B00%3A00=2016-07-15+06%3A24%3A55.701449%2B00%3A00=2016-07-22+06%3A24%3A55.701449%2B00%3A00=2016-07-24+05%3A13%3A02.620998%2B00%3A00=2016-07-25+05%3A13%3A02.620998%2B00%3A00=2016-07-18+05%3A16%3A35.441166%2B00%3A00=2016-07-25+05%3A16%3A35.441166%2B00%3A00=2016-07-18+05%3A17%3A05.966066%2B00%3A00=2016-07-25+05%3A17%3A05.966066%2B00%3A00=2016-07-24+06%3A13%3A42.435985%2B00%3A00=2016-07-25+06%3A13%3A42.435985%2B00%3A00=86400
  (Caused by : [Errno 110] Connection timed out)
+ 
+ 
+ All though the cli commands ceilometer meter-list works fine , and most of 
the time this error leads to something went wrong page.And i think this error 
will be
+ most commonly seen when there is a lot of data to be fetched

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606123

Title:
  Connection Error in Resource Usage 

[Yahoo-eng-team] [Bug 1605421] Re: Unable to add classless-static-route in extra_dhcp_opt extension

2016-07-25 Thread KATO Tomoyuki
** Project changed: openstack-manuals => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605421

Title:
  Unable to add classless-static-route in extra_dhcp_opt extension

Status in neutron:
  New

Bug description:
  When adding classless-static-route in extra_dhcp_opt for a port, the
  neutron client complains about a syntax error. For example,

  $ neutron port-update --extra-dhcp-opt 
opt_name="classless-static-route",opt_value="169.254.169.254/32,20.20.20.1" 
port1
  usage: neutron port-update [-h] [--request-format {json}] [--name NAME]
     [--description DESCRIPTION]
     [--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR]
     [--device-id DEVICE_ID]
     [--device-owner DEVICE_OWNER]
     [--admin-state-up {True,False}]
     [--security-group SECURITY_GROUP | 
--no-security-groups]
     [--extra-dhcp-opt EXTRA_DHCP_OPTS]
     [--qos-policy QOS_POLICY | --no-qos-policy]
     [--allowed-address-pair 
ip_address=IP_ADDR[,mac_address=MAC_ADDR]
     | --no-allowed-address-pairs]
     [--dns-name DNS_NAME | --no-dns-name]
     PORT
  neutron port-update: error: argument --extra-dhcp-opt: invalid key-value 
'20.20.20.1', expected format: key=value
  Try 'neutron help port-update' for more information.

  The reason is that the neutron client interprets the "," inside the
  opt_value as a delimiter between key-value pairs for --extra-dhcp-opt.
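  A minimal sketch of that parsing behavior (this is an illustrative
  reconstruction, not the actual python-neutronclient code; the function
  name is hypothetical):

  ```python
  def parse_extra_dhcp_opt(arg):
      """Naive comma-split of an --extra-dhcp-opt argument.

      Splitting on every comma is roughly why the comma inside
      opt_value is misread as a pair delimiter.
      """
      pairs = {}
      for token in arg.split(','):
          if '=' not in token:
              raise ValueError(
                  "invalid key-value '%s', expected format: key=value"
                  % token)
          key, value = token.split('=', 1)
          pairs[key] = value
      return pairs

  arg = ('opt_name=classless-static-route,'
         'opt_value=169.254.169.254/32,20.20.20.1')
  try:
      parse_extra_dhcp_opt(arg)
  except ValueError as e:
      print(e)  # invalid key-value '20.20.20.1', expected format: key=value
  ```

  The trailing token '20.20.20.1' contains no '=', which reproduces the
  error message shown above.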

  The comma in the opt_value for classless-static-route is required because
  the format of DHCP options in the opts file for dnsmasq looks like:

  tag:,option:classless-static-route,169.254.169.254/32,20.20.20.1
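  A sketch of how such an opts-file line is composed (hedged: the helper
  name and the tag value "subnet-1" are illustrative, not taken from the
  neutron DHCP agent), showing that the commas are part of the dnsmasq
  syntax itself:

  ```python
  def dnsmasq_static_route_line(tag, routes):
      """Build a dnsmasq opts-file line for classless static routes.

      routes: list of (cidr, gateway) tuples. The commas joining them
      belong to the dnsmasq option syntax, which is why the option
      value necessarily contains commas.
      """
      hops = ','.join('%s,%s' % (cidr, gw) for cidr, gw in routes)
      return 'tag:%s,option:classless-static-route,%s' % (tag, hops)

  print(dnsmasq_static_route_line(
      'subnet-1', [('169.254.169.254/32', '20.20.20.1')]))
  # tag:subnet-1,option:classless-static-route,169.254.169.254/32,20.20.20.1
  ```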

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605421] [NEW] Unable to add classless-static-route in extra_dhcp_opt extension

2016-07-25 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:


** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Unable to add classless-static-route in extra_dhcp_opt extension
https://bugs.launchpad.net/bugs/1605421
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp