[Yahoo-eng-team] [Bug 1507521] Re: Nova Resize is failing for shared storage between Compute node

2016-04-05 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1507521

Title:
  Nova Resize is failing for shared storage between Compute node

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Nova Version: 2.22.0

  I have shared NFS storage mounting /var/lib/nova between two
  compute nodes. When I try to resize an instance using the nova resize
  command, it fails; below is the relevant log output:

  2015-10-19 05:13:15.582 14325 ERROR oslo_messaging.rpc.dispatcher 
[req-5cb16661-74ec-4faf-93cd-044e597cc9de d4209dcd86b84fc584f8b3b72bee0c64 
da6c9fa9be0046dda47e9bd6caf3908a - - -] Exception during message handling: 
Resize error: not able to execute ssh command: Unexpected error while running 
command.
  Command: ssh 20.20.20.3 mkdir -p 
/var/lib/nova/instances/744d6341-023f-49cd-9d93-7bae7eb32653
  Exit code: 255
  Stdout: u''
  Stderr: u'Host key verification failed.\r\n'
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6748, in 
resize_instance
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
clean_shutdown=clean_shutdown)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher payload)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, in 
decorated_function
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 298, in 
decorated_function
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 377, in 
decorated_function
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 286, in 
decorated_function
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
migration.instance_uuid, exc_info=True)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-10-19 05:13:15.582 14325 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 269, in 
decorated_function
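
  The failure is the SSH host key check between the compute nodes during the
  resize. A minimal sketch (host and instance path taken from the log above;
  the helper function is illustrative, not nova's code) of the remote call and
  why it exits with 255:

    import subprocess

    def remote_mkdir(host, instance_dir):
        # nova effectively runs "ssh <host> mkdir -p <dir>" during resize; if the
        # destination host's key is not already in known_hosts, ssh aborts with
        # exit code 255 and "Host key verification failed." on stderr.
        return subprocess.run(["ssh", host, "mkdir", "-p", instance_dir],
                              capture_output=True, text=True, check=False)

    result = remote_mkdir("20.20.20.3",
                          "/var/lib/nova/instances/744d6341-023f-49cd-9d93-7bae7eb32653")
    print(result.returncode, result.stderr)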
  

[Yahoo-eng-team] [Bug 1566622] [NEW] live migration fails with xenapi virt driver and SRs with old-style naming convention

2016-04-05 Thread Corey Wright
Public bug reported:

version: commit ce5a2fb419f999bec0fb2c67413387c8b67a691a

1. create a boot-from-volume instance prior to deploying commit 
5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
2. upgrade nova to commit 5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
3. live-migrate instance
4. observe live-migrate action fail

based on my analysis of logs and code:
1. destination uses new-style SR naming convention in sr_uuid_map.
2. source tries to use new-style SR naming convention in talking to XenAPI (in 
nova.virt.xenapi.vmops.py:VMOps.live_migrate() -> _call_live_migrate_command())
3. xenapi throws a XenAPI.Failure exception ("Got exception 
UUID_INVALID") because it only knows the SR by the old-style naming convention

example destination nova-compute, source nova-compute, and xenapi logs
from a live-migrate request to follow.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: live-migrate nova xenapi

** Attachment added: "destination nova-compute, source nova-compute, and xenapi 
logs from a live-migrate request"
   
https://bugs.launchpad.net/bugs/1566622/+attachment/4625560/+files/live-migrate_fails_with_old_and_new-style_sr_naming_convention.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566622

Title:
  live migration fails with xenapi virt driver and SRs with old-style
  naming convention

Status in OpenStack Compute (nova):
  New

Bug description:
  version: commit ce5a2fb419f999bec0fb2c67413387c8b67a691a

  1. create a boot-from-volume instance prior to deploying commit 
5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  2. upgrade nova to commit 5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  3. live-migrate instance
  4. observe live-migrate action fail

  based on my analysis of logs and code:
  1. destination uses new-style SR naming convention in sr_uuid_map.
  2. source tries to use new-style SR naming convention in talking to XenAPI 
(in nova.virt.xenapi.vmops.py:VMOps.live_migrate() -> 
_call_live_migrate_command())
  3. xenapi throws a XenAPI.Failure exception ("Got exception 
UUID_INVALID") because it only knows the SR by the old-style naming convention

  example destination nova-compute, source nova-compute, and xenapi logs
  from a live-migrate request to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1566622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562914] Re: vendor-data not working with ConfigDrive in 0.7.5

2016-04-05 Thread Matt Dorn
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: cloud-init (Ubuntu)

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1562914

Title:
  vendor-data not working with ConfigDrive in 0.7.5

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  New

Bug description:
  vendor-data is not read with the ConfigDrive datasource in 0.7.5.

  Mar 28 14:54:01 hi [CLOUDINIT] stages.py[DEBUG]: no vendordata from
  datasource

  Works properly with NoCloud and OpenStack datasources.

  DistroRelease: Ubuntu 14.04
  Package: 0.7.5-0ubuntu1.16
  Uname: 3.13.0-79-generic x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1562914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558819] Re: Fullstack linux bridge agent sometimes refuses to die during test clean up, failing the test

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294798
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fd93e19f2a415b3803700fc491749daba01a4390
Submitter: Jenkins
Branch: master

commit fd93e19f2a415b3803700fc491749daba01a4390
Author: Assaf Muller 
Date:   Fri Mar 18 16:29:26 2016 -0400

Change get_root_helper_child_pid to stop when it finds cmd

get_root_helper_child_pid recursively finds the child of pid,
until it can no longer find a child. However, the intention is
not to find the deepest child, but to strip away root helpers.
For example 'sudo neutron-rootwrap x' is supposed to find the
pid of x. However, in cases where 'x' spawned short-lived children of
its own (for example: ip / brctl / ovs invocations),
get_root_helper_child_pid returned those pids if called at
the wrong time.

Change-Id: I582aa5c931c8bfe57f49df6899445698270bb33e
Closes-Bug: #1558819
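
A rough sketch of the fixed behaviour described above (using psutil for
illustration; this is not neutron's actual implementation): walk down the
process tree from the root helper, but stop as soon as a process's command
line matches the command that was launched, instead of always descending to
the deepest child.

    import psutil

    def root_helper_child_pid(root_pid, cmd):
        # cmd is the command launched via the root helper, e.g.
        # ['neutron-linuxbridge-agent', '--config-file', ...]
        proc = psutil.Process(root_pid)
        while True:
            if proc.cmdline()[:len(cmd)] == cmd:
                # found the real command; do not descend into its short-lived
                # children such as ip / brctl / ovs invocations
                return proc.pid
            children = proc.children()
            if not children:
                return None
            proc = children[0]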


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558819

Title:
  Fullstack linux bridge agent sometimes refuses to die during test
  clean up, failing the test

Status in neutron:
  Fix Released

Bug description:
  Paste of failure:
  http://paste.openstack.org/show/491014/

  When looking at the LB agent logs, you start seeing RPC errors as
  neutron-server is unable to access the DB. What's happening is that
  fullstack times out trying to kill the LB agent and moves on to other
  clean ups. It deletes the DB for the test, but the agents and neutron-
  server live on, resulting in errors trying to access the DB. The DB
errors are essentially unrelated; the root cause is that the agent
  refuses to die for an unknown reason.

  The code that tries to stop the agent is AsyncProcess.stop(block=True, 
signal=9).
  Another detail that might be relevant is that the agent lives in a namespace.

  To reproduce locally, go to the VM running the fullstack tests and load all 
CPUs to 100%, then run:
  tox -e dsvm-fullstack TestLinuxBridgeConnectivitySameNetwork

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566569] [NEW] OVS functional tests no longer output OVS logs after the change to compile OVS from source

2016-04-05 Thread Assaf Muller
Public bug reported:

Since https://review.openstack.org/#/c/266423/ merged we compile OVS
from source for the functional job. This had a side effect of not
providing OVS logs. Here's the openvswitch logs dir for the patch in
question:

http://logs.openstack.org/23/266423/26/check/gate-neutron-dsvm-
functional/fde6d9e/logs/openvswitch/

You can see it only has the ovs-ctl log, which was created by the OVS-
from-package binary before it was shut down to make room for the OVS-
from-source binary.

Here is the OVS logs dir for the parent change:

http://logs.openstack.org/60/265460/4/check/gate-neutron-dsvm-
functional/6134795/logs/openvswitch/

It contains a lot of nice, juicy logs useful for all sorts of amazing
things.

** Affects: neutron
 Importance: Low
 Status: New


** Tags: functional-tests mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566569

Title:
  OVS functional tests no longer output OVS logs after the change to
  compile OVS from source

Status in neutron:
  New

Bug description:
  Since https://review.openstack.org/#/c/266423/ merged we compile OVS
  from source for the functional job. This had a side effect of not
  providing OVS logs. Here's the openvswitch logs dir for the patch in
  question:

  http://logs.openstack.org/23/266423/26/check/gate-neutron-dsvm-
  functional/fde6d9e/logs/openvswitch/

  You can see it only has the ovs-ctl log, which was created by the OVS-
  from-package binary before it was shut down to make room for the OVS-
  from-source binary.

  Here is the OVS logs dir for the parent change:

  http://logs.openstack.org/60/265460/4/check/gate-neutron-dsvm-
  functional/6134795/logs/openvswitch/

  It contains a lot of nice, juicy logs useful for all sorts of
  amazing things.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566569/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564870] Re: Not supported error message is incorrect

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300435
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ab1ec3f94a3a39137b0339c94f6db6f95e38cab1
Submitter: Jenkins
Branch: master

commit ab1ec3f94a3a39137b0339c94f6db6f95e38cab1
Author: Yuiko Takada 
Date:   Fri Apr 1 20:59:54 2016 +0900

Fix not supported error message

When execute "nova baremetal-node-create" command,
below error message is shown:
 ERROR (BadRequest): Command Not supported.
 Please use Ironic command port-create to perform this action. (HTTP 400)

The Ironic command that corresponds to nova baremetal-node-create is
not port-create but node-create.
This patch set fixes this bug.

Change-Id: Id41b832f2b01c1848c224515fa70efec8a316ae9
Closes-bug: #1564870


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1564870

Title:
  Not supported error message is incorrect

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When execute "nova baremetal-node-create" command, above error message is 
shown.
  ERROR (BadRequest): Command Not supported. Please use Ironic command 
port-create to perform this action. (HTTP 400) 

  port-create is incorrect, node-create is correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1564870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566282] Re: Returning federated user fails to authenticate with HTTP 500

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301795
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=7ad4f8728cce354617b5facefe5076d65af311c6
Submitter: Jenkins
Branch: master

commit 7ad4f8728cce354617b5facefe5076d65af311c6
Author: Boris Bobrov 
Date:   Tue Apr 5 18:50:48 2016 +0300

Update federated user display name with shadow_users_api

When a user comes to the cloud for the first time, a shadow user is
created. When the user authenticates again, this shadow user is
fetched and returned. Before it is returned, its display name should
be updated. But the call to update the display name fails because
neither identity manager nor identity drivers have the required
method. However, the required method exists in shadow_users_api.

The issue was hidden because method shadow_federated_user was
cached and while the cache lived, the user could authenticate.

Use the method of shadow_users_api instead of identity_api to update
federated user display name.

Change-Id: I58e65bdf3a953f3ded485003939b81f908738e1e
Closes-Bug: 1566282
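
A minimal sketch of the direction of the fix (the function and object names
here are illustrative, not keystone's exact code): route the display-name
update through the shadow-users backend, which implements the method, instead
of identity_api, whose __getattr__ falls through to a driver that lacks it.

    def refresh_shadow_user(shadow_users_api, idp_id, protocol_id, unique_id,
                            display_name):
        user = shadow_users_api.get_federated_user(idp_id, protocol_id, unique_id)
        # broken path: identity_api.update_federated_user_display_name(...)
        # raised AttributeError because neither the manager nor the identity
        # driver defines that method
        shadow_users_api.update_federated_user_display_name(
            idp_id, protocol_id, unique_id, display_name)
        return user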


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1566282

Title:
  Returning federated user fails to authenticate with HTTP 500

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  In Progress

Bug description:
  I've set up stable/mitaka keystone with AD FS and it worked. After
  some time, i decided to test the set up again and after trying to
  authenicate i've got HTTP 500.

  In keystone logs, there is this:
  http://paste.openstack.org/show/492968/ (the logs are the same as
  below).

  This happens because self.update_federated_user_display_name is
  called in identity_api.shadow_federated_user. Since no
  update_federated_user_display_name is defined in identity_api,
  __getattr__ tries to look up the name in the driver. The driver used
  for identity_api has no update_federated_user_display_name, and
  AttributeError is raised.

  The issue seems to exist on both stable/mitaka and master (6f9f390).

  2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] direct_maps: 
 
_update_local_mapping /opt/stack/keystone/keystone/federation/utils.py:691
  2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] local: {u'id': 
u'f7567142a8024543ab678de7be553dbf'} _update_local_mapping 
/opt/stack/keystone/keystone/federation/utils.py:692
  2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] identity_values: 
[{u'user': {u'domain': {u'name': u'Default'}, u'name': u'bre...@winad.org'}}, 
{u'group': {u'id': u'f7567142a8024543ab678de7be553dbf'}}] proc
  ess /opt/stack/keystone/keystone/federation/utils.py:535
  2016-04-05 11:53:56.174 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] mapped_properties: 
{'group_ids': [u'f7567142a8024543ab678de7be553dbf'], 'user': {u'domain': {'id': 
'Federated'}, 'type': 'ephemeral', u'name': u'breton@winad
  .org'}, 'group_names': []} process 
/opt/stack/keystone/keystone/federation/utils.py:537
  2016-04-05 11:53:56.273 2100 ERROR keystone.common.wsgi 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] 'Identity' object has no 
attribute 'update_federated_user_display_name'
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 249, in __call__
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi result = 
method(context, **params)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/federation/controllers.py", line 320, in 
federated_sso_auth
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi protocol_id)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/federation/controllers.py", line 302, in 
federated_authentication
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi return 
self.authenticate_for_token(context, auth=auth)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 396, in 
authenticate_for_token
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi 
self.authenticate(context, auth_info, auth_context)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 520, in authenticate

[Yahoo-eng-team] [Bug 1566327] Re: Creating a security group rule with no protocol fails with KeyError

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301749
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5a41caa47a080fdbc1801e2771163734b9790c57
Submitter: Jenkins
Branch: master

commit 5a41caa47a080fdbc1801e2771163734b9790c57
Author: Ihar Hrachyshka 
Date:   Tue Apr 5 16:56:16 2016 +0200

Don't drop 'protocol' from client supplied security_group_rule dict

If protocol was present in the dict, but was None, then it was never
re-instantiated after being popped out of the dict. This later resulted
in KeyError when trying to access the key on the dict.

Change-Id: I4985e7b54117bee3241d7365cb438197a09b9b86
Closes-Bug: #1566327
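
A small, self-contained sketch of the pattern the commit message describes
(the dict is illustrative): popping an optional key and only restoring it when
it is truthy silently drops an explicit None, so a later rule_dict['protocol']
lookup raises KeyError. The fix is to always put the key back:

    def normalize_rule(rule):
        protocol = rule.pop('protocol', None)
        # buggy variant: `if protocol: rule['protocol'] = protocol` loses None
        rule['protocol'] = protocol  # fixed: re-instate the key even when None
        return rule

    print(normalize_rule({'direction': 'ingress', 'protocol': None}))
    # {'direction': 'ingress', 'protocol': None} -- no KeyError downstream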


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566327

Title:
  Creating a security group rule with no protocol fails with KeyError

Status in neutron:
  Fix Released

Bug description:
  neutron security-group-rule-create --direction ingress default

  results in:

  
  2016-04-05 15:50:56.772 ERROR neutron.api.v2.resource 
[req-67736b7a-6a4c-442c-9536-890ccf5c8d19 admin 
3dc1eb0373d34ba9b2edfb41ee98149c] create failed
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 410, in create
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource self.force_reraise()
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 521, in _create
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource obj = 
do_create(body)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 503, in do_create
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource request.context, 
reservation.reservation_id)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource self.force_reraise()
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 496, in do_create
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return 
obj_creator(request.context, **kwargs)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_rpc_base.py", line 74, in 
create_security_group_rule
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource security_group_rule)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 374, in 
create_security_group_rule
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return 
self._create_security_group_rule(context, security_group_rule)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 399, in 
_create_security_group_rule
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource 
protocol=rule_dict['protocol'],
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource KeyError: 'protocol'
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource

  This is a regression, since it was working before.

To manage 

[Yahoo-eng-team] [Bug 1566524] [NEW] create security group raises HTTPForbidden on SecurityGroupLimitExceeded

2016-04-05 Thread sajuptpm
Public bug reported:

Creating a security group raises HTTPForbidden on a SecurityGroupLimitExceeded
exception from the nova-api server.
Because of that, novaclient raises Forbidden instead of an OverLimit exception.
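
For reference, a tiny illustration of the status-code distinction at issue
(the webob classes are real; mapping SecurityGroupLimitExceeded to 413 is what
the report implies should happen, not a description of nova's current code):

    import webob.exc

    # behaviour described in the report: the limit error surfaces as 403,
    # so novaclient raises Forbidden
    print(webob.exc.HTTPForbidden.code)              # 403
    # an over-quota condition surfaced as 413 would map to OverLimit instead
    print(webob.exc.HTTPRequestEntityTooLarge.code)  # 413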

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566524

Title:
  create security group raises HTTPForbidden on
  SecurityGroupLimitExceeded

Status in OpenStack Compute (nova):
  New

Bug description:
  Creating a security group raises HTTPForbidden on a SecurityGroupLimitExceeded
exception from the nova-api server.
  Because of that, novaclient raises Forbidden instead of an OverLimit exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1566524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566007] Re: l3 iptables floating IP rules don't match iptables rules

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301335
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b8d520ffe2afbffe26b554bff55165531e36e758
Submitter: Jenkins
Branch: master

commit b8d520ffe2afbffe26b554bff55165531e36e758
Author: Kevin Benton 
Date:   Fri Apr 1 02:42:54 2016 -0700

L3 agent: match format used by iptables

This fixes the iptables rules generated by the L3 agent
(SNAT, DNAT, set-mark and metadata), and the DHCP agent
(checksum-fill) to match the format that will be returned
by iptables-save to prevent excessive extra replacement
work done by the iptables manager.

It also fixes the iptables test that was not passing the
expected arguments (-p PROTO -m PROTO) for block rules.

A simple test was added to the L3 agent to ensure that the
rules have converged during the normal lifecycle tests.

Closes-Bug: #1566007
Change-Id: I5e8e27cdbf0d0448011881614671efe53bb1b6a1
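
An illustrative (non-neutron) sketch of why mismatched rule text causes the
excessive replacement work: if the text the agent generates differs from what
iptables-save echoes back, a naive set difference treats an already-present
rule as missing on every sync.

    # rule as the L3 agent might generate it vs. the same rule as printed
    # by iptables-save (note the extra '-m tcp' match)
    desired = {'-A neutron-l3-agent-PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 9697'}
    current = {'-A neutron-l3-agent-PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697'}

    to_add = desired - current      # non-empty although the rule already exists
    to_remove = current - desired   # the existing rule gets removed and re-added
    print(to_add, to_remove)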


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566007

Title:
  l3 iptables floating IP rules don't match iptables rules

Status in neutron:
  Fix Released

Bug description:
  The floating IP translation rules generated by the l3 agent do not
  match the format in which they are returned by iptables. This causes
  the iptables diffing code to think they are different and replace
  every one of them on an iptables apply call, which is very expensive.

  See https://gist.github.com/busterswt/479e4e5484df7e91017da48b38fa5814
  for an example diff.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566520] [NEW] Upgrade controllers with no API downtime

2016-04-05 Thread Ihar Hrachyshka
Public bug reported:

Currently, pretty much every major upgrade requires a full shutdown of all
neutron-server instances while the upgrade process is running. The
downtime is due to the need to run alembic scripts that modify the
schema and transform data. Neutron-server instances are currently not
resilient to working with an older schema. We also don't make an effort
to avoid 'contract' migrations.

The goal of the RFE is to allow upgrading controller services one by
one, without a full shutdown of all of them in an HA setup. This will
make it possible to avoid a public API outage during rolling upgrades.

The RFE involves:
- adopting object facades for all interaction with database models;
- forbidding contract migrations in alembic;
- implementing new contract migrations in a backwards-compatible way at runtime.

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: rfe

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566520

Title:
  Upgrade controllers with no API downtime

Status in neutron:
  New

Bug description:
  Currently, pretty much every major upgrade requires a full shutdown of
  all neutron-server instances while the upgrade process is running. The
  downtime is due to the need to run alembic scripts that modify the
  schema and transform data. Neutron-server instances are currently not
  resilient to working with an older schema. We also don't make an effort
  to avoid 'contract' migrations.

  The goal of the RFE is to allow upgrading controller services one by
  one, without a full shutdown of all of them in an HA setup. This will
  make it possible to avoid a public API outage during rolling upgrades.

  The RFE involves:
  - adopting object facades for all interaction with database models;
  - forbidding contract migrations in alembic;
  - implementing new contract migrations in a backwards-compatible way at runtime.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566514] [NEW] [RFE] Enable sorting and pagination by default

2016-04-05 Thread Ihar Hrachyshka
Public bug reported:

Currently those features are controlled by configuration options:
allow_sorting, allow_pagination, and they are disabled by default. There
are multiple issues with that:

- those useful API features are not available in a default installation of 
neutron;
- it's not great when API behaviour is not consistent, depending on local 
configuration;
- we don't have a way of detecting whether those features are enabled.

The base controller already supports both native and generic implementations
for those features: if a plugin claims native support, then plugin calls
are populated with corresponding sorting/pagination parameters;
otherwise the base controller 'emulates' those features for the plugin.
It seems that this fallback approach already covers all cases, and we
should be safe to enable those features for all setups.

We need to make sure that testing coverage for the features is adequate
(API tests) and that we test them in the gate; then we should consider enabling
the features by default, deprecating those options and eventually
removing them.
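
A rough sketch of the fallback mentioned above (names are hypothetical, not
neutron's API layer): when the plugin does not claim native support, the
controller emulates sorting and pagination over the full result set.

    def list_with_sorting(plugin, filters, sorts, limit):
        if getattr(plugin, 'supports_native_sorting', False):
            # native support: push sorting/pagination down to the plugin/DB layer
            return plugin.get_items(filters=filters, sorts=sorts, limit=limit)
        # emulated support: fetch everything, then sort and slice here
        items = plugin.get_items(filters=filters)
        for key, ascending in reversed(sorts):
            items.sort(key=lambda item: item[key], reverse=not ascending)
        return items[:limit]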

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: rfe

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566514

Title:
  [RFE] Enable sorting and pagination by default

Status in neutron:
  New

Bug description:
  Currently those features are controlled by configuration options:
  allow_sorting, allow_pagination, and they are disabled by default.
  There are multiple issues with that:

  - those useful API features are not available in a default installation of 
neutron;
  - it's not great when API behaviour is not consistent, depending on local 
configuration;
  - we don't have a way of detecting whether those features are enabled.

  The base controller already supports both native and generic
  implementations for those features: if a plugin claims native support,
  then plugin calls are populated with corresponding sorting/pagination
  parameters; otherwise the base controller 'emulates' those features
  for the plugin. It seems that this fallback approach already covers
  all cases, and we should be safe to enable those features for all
  setups.

  We need to make sure that testing coverage for the features is
  adequate (API tests) and that we test them in the gate; then we should consider
  enabling the features by default, deprecating those options and
  eventually removing them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563513] Re: Unexpected Exception in API Error when creating instance with Neutron

2016-04-05 Thread Matt Riedemann
Please link to the install guide you were following, the specific page.

This seems correct:

http://docs.openstack.org/liberty/install-guide-ubuntu/neutron-compute-
install.html

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563513

Title:
  Unexpected Exception in API Error when creating instance with Neutron

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I went through the Liberty install guide using networking Option 2. I
  created a public and a private network in the admin project (logged on
  as the admin user), then attempted to launch an instance from the
  Horizon GUI.

  In the GUI, I get two error popups: "Error: Unexpected API Error.
  Please report this at http://bugs.launchpad.net/nova/ and attach the
  Nova API log if possible.  (HTTP 500)
  (Request-ID: req-2cb42761-a3d0-4c9a-8fe6-10ae746d60e7)"  and "Error:
  Unable to launch instance named "TestInstance"."

  On the controller node, the nova-api.log has:

  2016-03-29 13:35:37.263 2044 INFO nova.osapi_compute.wsgi.server 
[req-1963eedc-3a0a-4ee7-bab3-fbcada58d0ec 7e4ea1507e684f068ab68ec1e2ce77c4 
138c2905912e4277b9845e5811614301 - - -] 10.0.0.19 "GET 
/v2/138c2905912e4277b9845e5811614301/os-availability-zone HTTP/1.1" status: 200 
len: 293 time: 0.0253379
  2016-03-29 13:35:37.280 2044 INFO nova.osapi_compute.wsgi.server 
[req-b4a0637c-3f9a-4c36-a4e9-b33071c7eb68 7e4ea1507e684f068ab68ec1e2ce77c4 
138c2905912e4277b9845e5811614301 - - -] 10.0.0.19 "GET 
/v2/138c2905912e4277b9845e5811614301/flavors/detail HTTP/1.1" status: 200 len: 
2227 time: 0.0146720
  2016-03-29 13:35:37.524 2042 INFO nova.osapi_compute.wsgi.server 
[req-acb0e2f2-17b6-4920-bafb-dd5cebb0a8f8 7e4ea1507e684f068ab68ec1e2ce77c4 
138c2905912e4277b9845e5811614301 - - -] 10.0.0.19 "GET 
/v2/138c2905912e4277b9845e5811614301/os-quota-sets/138c2905912e4277b9845e5811614301
 HTTP/1.1" status: 200 len: 568 time: 0.0112181
  2016-03-29 13:35:37.584 2043 INFO nova.osapi_compute.wsgi.server 
[req-926bc8fa-d927-4560-9488-1517ee97c713 7e4ea1507e684f068ab68ec1e2ce77c4 
138c2905912e4277b9845e5811614301 - - -] 10.0.0.19 "GET 
/v2/138c2905912e4277b9845e5811614301/servers/detail?all_tenants=True_id=138c2905912e4277b9845e5811614301
 HTTP/1.1" status: 200 len: 211 time: 0.0455759
  2016-03-29 13:35:37.815 2041 INFO nova.osapi_compute.wsgi.server 
[req-3a615414-417b-4640-bbe1-37462f8ee9c5 7e4ea1507e684f068ab68ec1e2ce77c4 
138c2905912e4277b9845e5811614301 - - -] 10.0.0.19 "GET 
/v2/138c2905912e4277b9845e5811614301/os-keypairs HTTP/1.1" status: 200 len: 212 
time: 0.0068719
  2016-03-29 13:35:37.957 2040 INFO nova.osapi_compute.wsgi.server 
[req-f6aa1fa3-08ff-426d-b217-5fb8488e1dca 7e4ea1507e684f068ab68ec1e2ce77c4 
138c2905912e4277b9845e5811614301 - - -] 10.0.0.19 "GET 
/v2/138c2905912e4277b9845e5811614301/extensions HTTP/1.1" status: 200 len: 
21880 time: 0.0278120
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions 
[req-2cb42761-a3d0-4c9a-8fe6-10ae746d60e7 7e4ea1507e684f068ab68ec1e2ce77c4 
138c2905912e4277b9845e5811614301 - - -] Unexpected exception in API method
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
611, in create
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions 
**create_kwargs)
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1581, in create
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2016-03-29 13:35:38.212 2040 ERROR nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1486767] Re: when i create a public image the form defaults to the project image list after submit

2016-04-05 Thread Daniel Castellanos
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1486767

Title:
  when i create a public image the form defaults to the project image
  list after submit

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I create an image (which is public) and submit the form, the
  image list defaults back to the project image list rather than the
  public image list. It should stay on the public image list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1486767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566496] Re: DeprecationWarning message occurs in functional db testing

2016-04-05 Thread John Perkins
I must have been on an old branch, this was fixed here:
https://github.com/openstack/neutron/commit/13fe6af8a2174e188cda9ffc5748df7c8b03d464

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: John Perkins (john-d-perkins) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566496

Title:
  DeprecationWarning message occurs in functional db testing

Status in neutron:
  Invalid

Bug description:
  The following warning is generated while running functional tests:

  
neutron/neutron/db/migration/alembic_migrations/versions/mitaka/expand/1df244e556f5_add_unique_ha_router_agent_port_bindings.py:44:
 DeprecationWarning: Using function/method 'instance.ugettext()' is deprecated: 
Builtin _ translation function is deprecated in OpenStack; use the function 
from 
  _i18n module for your project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489905] Re: Ajax errors returning 500 error + html

2016-04-05 Thread Diana Whitten
** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489905

Title:
  Ajax errors returning 500 error + html

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Currently, for any ajax call that hits an error other than NotAuthorized
  or NotAuthenticated, the end response returned will likely be a 500
  error with all the HTML of our ISE page included.

  Richard Jones's API code already provides a way to handle these
  issues, so we should use that!

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493140] Re: create network form duplicates errors

2016-04-05 Thread Diana Whitten
This doesn't seem to happen anymore, or I am unable to recreate it.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493140

Title:
  create network form duplicates errors

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When trying to create a network with the "create network" button on the
  page "/horizon/project/networks/", there is a modal form with
  subforms. It has a second section named "Subnet". If you just click
  the "Next" button there, you will get the error 'Specify "Network Address"
  or clear "Create Subnet" checkbox.'; if you click it one more time,
  this error appears again. The same thing happens with any field errors on
  this form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495960] Re: linuxbridge with vlan crashes when long device names used

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246954
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7ececa3a20e19985b7ebcca2c629126a3900c090
Submitter: Jenkins
Branch: master

commit 7ececa3a20e19985b7ebcca2c629126a3900c090
Author: Andreas Scheuring 
Date:   Wed Nov 18 15:33:03 2015 +0100

lb: interface name hashing for too long vlan interface names

The linuxbridge agent creates vlan subinterfaces for each vlan
neutron network. The name for this vlan subinterface is
".". Todays code crashes if the
physical interface name is too long. Therefore this hashing is
being introduced.

Change-Id: Ieb3c0a7282b28eed556236ead4993ab83a29a12f
Closes-Bug: #1495960
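
A self-contained sketch of the general idea (the hash function and truncation
are assumptions, not neutron's exact scheme): when "<device>.<vlan>" would
exceed the 15-character Linux interface name limit, replace the device part
with a shortened hash so the result always fits.

    import hashlib

    IFNAMSIZ = 15  # maximum length of a Linux network device name

    def vlan_interface_name(device, vlan_id):
        name = '%s.%s' % (device, vlan_id)
        if len(name) <= IFNAMSIZ:
            return name
        digest = hashlib.sha1(device.encode()).hexdigest()
        prefix_len = IFNAMSIZ - len(str(vlan_id)) - 1
        return '%s.%s' % (digest[:prefix_len], vlan_id)

    print(vlan_interface_name('eth0', 1000))              # eth0.1000
    print(vlan_interface_name('long-device-name', 1007))  # hashed prefix, 15 chars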


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495960

Title:
  linuxbridge with vlan crashes when long device names used

Status in neutron:
  Fix Released

Bug description:
  The linuxbridge agent creates a linux vlan-device for each openstack vlan 
network that has been defined. Therefore the code uses the following naming 
scheme: <eth-dev-name>.<vlan-id>
  Example: eth-dev-name: eth0, vlan-id: 1000 --> eth0.1000

  This works fine, if eth-dev-name is a short name like "eth0". If there
  is a long device name (e.g. long-device-name) this will cause trouble,
  as the vlan device name "long-device-name.1000" exceeds the max length
  of a linux network device, which is 15 chars.

  Today the linuxbridge agent fails with

  Command: ['ip', 'link', 'add', 'link', 'too_long_name', 'name', 
'too_long_name.1007', 'type', 'vlan', 'id', 1007]
  Exit code: 255
  Stderr: Error: argument "too_long_name.1007" is wrong: "name" too long

  The same problem needs to be solved for the new macvtap agent that is
  currently under development [1] as well

  
  [1] https://bugs.launchpad.net/neutron/+bug/1480979

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566282] Re: Returning federated user fails to authenticate with HTTP 500

2016-04-05 Thread Steve Martinelli
** Also affects: keystone/mitaka
   Importance: Undecided
 Assignee: Dolph Mathews (dolph)
   Status: In Progress

** Also affects: keystone/newton
   Importance: Undecided
   Status: New

** Changed in: keystone/newton
 Assignee: (unassigned) => Boris Bobrov (bbobrov)

** Changed in: keystone/mitaka
 Assignee: Dolph Mathews (dolph) => (unassigned)

** Changed in: keystone/mitaka
 Assignee: (unassigned) => Steve Martinelli (stevemar)

** Changed in: keystone/mitaka
 Assignee: Steve Martinelli (stevemar) => Boris Bobrov (bbobrov)

** Changed in: keystone/newton
   Importance: Undecided => High

** Changed in: keystone/mitaka
   Importance: Undecided => Critical

** Changed in: keystone/newton
   Importance: High => Critical

** Changed in: keystone/newton
   Status: New => In Progress

** Changed in: keystone/newton
Milestone: None => newton-1

** Changed in: keystone/mitaka
Milestone: None => mitaka-rc3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1566282

Title:
  Returning federated user fails to authenticate with HTTP 500

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Identity (keystone) mitaka series:
  In Progress
Status in OpenStack Identity (keystone) newton series:
  In Progress

Bug description:
  I've set up stable/mitaka keystone with AD FS and it worked. After
  some time, i decided to test the set up again and after trying to
  authenicate i've got HTTP 500.

  In keystone logs, there is this:
  http://paste.openstack.org/show/492968/ (the logs are the same as
  below).

  This happens because self.update_federated_user_display_name is
  called in identity_api.shadow_federated_user. Since no
  update_federated_user_display_name is defined in identity_api,
  __getattr__ tries to look up the name in the driver. The driver used
  for identity_api has no update_federated_user_display_name, and
  AttributeError is raised.

  The issue seems to exist on both stable/mitaka and master (6f9f390).

  2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] direct_maps: 
 
_update_local_mapping /opt/stack/keystone/keystone/federation/utils.py:691
  2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] local: {u'id': 
u'f7567142a8024543ab678de7be553dbf'} _update_local_mapping 
/opt/stack/keystone/keystone/federation/utils.py:692
  2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] identity_values: 
[{u'user': {u'domain': {u'name': u'Default'}, u'name': u'bre...@winad.org'}}, 
{u'group': {u'id': u'f7567142a8024543ab678de7be553dbf'}}] proc
  ess /opt/stack/keystone/keystone/federation/utils.py:535
  2016-04-05 11:53:56.174 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] mapped_properties: 
{'group_ids': [u'f7567142a8024543ab678de7be553dbf'], 'user': {u'domain': {'id': 
'Federated'}, 'type': 'ephemeral', u'name': u'breton@winad
  .org'}, 'group_names': []} process 
/opt/stack/keystone/keystone/federation/utils.py:537
  2016-04-05 11:53:56.273 2100 ERROR keystone.common.wsgi 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] 'Identity' object has no 
attribute 'update_federated_user_display_name'
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 249, in __call__
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi result = 
method(context, **params)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/federation/controllers.py", line 320, in 
federated_sso_auth
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi protocol_id)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/federation/controllers.py", line 302, in 
federated_authentication
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi return 
self.authenticate_for_token(context, auth=auth)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 396, in 
authenticate_for_token
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi 
self.authenticate(context, auth_info, auth_context)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 520, in authenticate
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi auth_context)
  2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/plugins/mapped.py", line 65, in authenticate
  2016-04-05 11:53:56.273 2100 TRACE 

[Yahoo-eng-team] [Bug 1566496] [NEW] DeprecationWarning message occurs in functional db testing

2016-04-05 Thread John Perkins
Public bug reported:

The following warning is generated while running functional tests:

neutron/neutron/db/migration/alembic_migrations/versions/mitaka/expand/1df244e556f5_add_unique_ha_router_agent_port_bindings.py:44:
 DeprecationWarning: Using function/method 'instance.ugettext()' is deprecated: 
Builtin _ translation function is deprecated in OpenStack; use the function 
from 
_i18n module for your project.
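
The warning asks the migration script to stop relying on the deprecated
builtin _ and import the translation function from the project's _i18n module
instead; a minimal sketch of the change (the message string is illustrative):

    from neutron._i18n import _  # instead of the deprecated builtin _

    message = _('HA router agent port bindings must be unique')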

** Affects: neutron
 Importance: Undecided
 Assignee: John Perkins (john-d-perkins)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => John Perkins (john-d-perkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566496

Title:
  DeprecationWarning message occurs in functional db testing

Status in neutron:
  New

Bug description:
  The following warning is generated while running functional tests:

  
neutron/neutron/db/migration/alembic_migrations/versions/mitaka/expand/1df244e556f5_add_unique_ha_router_agent_port_bindings.py:44:
 DeprecationWarning: Using function/method 'instance.ugettext()' is deprecated: 
Builtin _ translation function is deprecated in OpenStack; use the function 
from 
  _i18n module for your project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565705] Re: iptables duplicate rule warning on ports with multiple security groups

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301029
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=142b68f0757ab036d56bc9b4563b7a4481527deb
Submitter: Jenkins
Branch: master

commit 142b68f0757ab036d56bc9b4563b7a4481527deb
Author: Kevin Benton 
Date:   Fri Apr 1 01:53:10 2016 -0700

De-dup user-defined SG rules before iptables call

A port may be a member of multiple security groups. These
security groups may have dupilcate rules between them
(e.g. they both allow all EGRESS traffic). If the iptables
manager is called with duplicated rules, it emits a warning
of a possible bug in the rule generation code because it
doesn't expect there to be duplicated rules.

This patch fixes this by de-duplicating user-defined security group
rules before dispatching the calls to the iptables_manager.

Change-Id: I98dbe60df1bcf68b9922deee63dd0328c4c10dd0
Closes-Bug: #1565705
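
A minimal sketch of the de-duplication step the commit describes (the rule
dicts are illustrative): collapse identical rules coming from a port's
multiple security groups before handing them to the iptables manager, while
preserving their order.

    def dedup_rules(rules):
        seen = set()
        unique = []
        for rule in rules:
            key = tuple(sorted(rule.items()))
            if key not in seen:
                seen.add(key)
                unique.append(rule)
        return unique

    rules = [{'direction': 'egress', 'ethertype': 'IPv4'},
             {'direction': 'egress', 'ethertype': 'IPv4'}]  # same rule from two groups
    print(dedup_rules(rules))                               # only one copy remains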


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565705

Title:
  iptables duplicate rule warning on ports with multiple security groups

Status in neutron:
  Fix Released

Bug description:
  If ports are members of multiple security groups, there may be
  duplicate rules when it comes time to convert them to iptables rules
  (e.g. both groups have a rule to allow TCP port 80). This results in
  warnings from the iptables manager detecting duplicate rules that hint
  that there may be a bug.

  For example:

  WARNING neutron.agent.linux.iptables_manager [req-
  944a9996-062b-4588-9536-d5df779da344 - -] Duplicate iptables rule
  detected. This may indicate a bug in the the iptables rule generation
  code. Line: -A neutron-openvswi-oe4186b39-0 -j RETURN

  
  This warning resulted from a port that was a member of two security groups 
that both allowed all EGRESS traffic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1565705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566494] [NEW] Federated user's name is not updated if changed in idp

2016-04-05 Thread Boris Bobrov
Public bug reported:

If username changes in identity provider, shadow user's display_name is
not updated.

** Affects: keystone
 Importance: Undecided
 Assignee: Ron De Rose (ronald-de-rose)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1566494

Title:
  Federated user's name is not updated if changed in idp

Status in OpenStack Identity (keystone):
  New

Bug description:
  If username changes in identity provider, shadow user's display_name
  is not updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1566494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372359] Re: Security Groups: Add Rule dialog does not specify the option to create an IPv6 rule.

2016-04-05 Thread Daniel Castellanos
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372359

Title:
  Security Groups: Add Rule dialog does not specify the option to create
  an IPv6 rule.

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Description of problem:
  ===
  The Add Rule dialog does not allow you to specify the 'Ether Type' for the
  rule. Instead, it auto-detects whether the CIDR is IPv4 or IPv6 and creates
  the rule accordingly. Given that approach, I would suggest that the
  IPv4/IPv6 auto-detection be better reflected to the user.

  Currently:
  a. The default CIDR is: 0.0.0.0/0
  b. The CIDR field help: Classless Inter-Domain Routing (e.g. 192.168.0.0/24)
  c. IPv6 is not described as valid input in the dialog description.

  Steps to Reproduce:
  ===
  See the dialog at: 
http:///project/access_and_security/security_groups//add_rule/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456930] Re: remove "admin" user from new created project cause error

2016-04-05 Thread Daniel Castellanos
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1456930

Title:
  remove "admin" user from new created project cause error

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Step 1: Log on to Horizon as admin.
  Step 2: Create a new project, for example named "test-proj".
  Step 3: Add the "admin" user to the newly created project with just the
  "_member_" role and save.
  Step 4: Remove the "admin" user from the member list of project "test-proj"
  and try to save.
  Horizon will show an error and the project list will be empty.
  Please refer to the attached snapshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1456930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464652] Re: loss of privileges of current admin user

2016-04-05 Thread Daniel Castellanos
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1464652

Title:
  loss of privileges of current admin user

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  While logged into OpenStack as the "admin" user in a project, I created a
  new project, added the "admin" user to it, and saved. I then removed the
  "admin" user from the same project and saved. At that point the "admin"
  user loses all admin privileges for the current session: no projects or
  users are visible, and you have to log in again to regain the admin
  privileges.

  In detail:
  1. Log in as user "admin"
  2. Select "Current Project = admin", "View = admin"
  3. Click "Projects"
  4. Click "+ Create Project"
  5. Name: "test", Description: "", Enabled: checked
  6. Project Members: click the "+" for "admin", now the admin user is added
 with the "_member_" role
  7. Click "Create Project" to close the dialog window
  8. In the column of project "test", click on "Modify Users"
  9. Click on the "-" for the "admin" user to remove her
  10. Click "Save"
  11. An error pops up, "Error: Unauthorized: Unable to retrieve project list."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482307] Re: [Launch Instance NG] Key Pair download missing

2016-04-05 Thread Daniel Castellanos
Can't reproduce the bug on Mitaka; the fix was probably already released.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482307

Title:
  [Launch Instance NG] Key Pair download missing

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In the new launch instance wizard, users can create a keypair and copy
  paste it, but they can't download it.  Downloading was mocked, but
  wasn't implemented due to kilo time constraints.  In a usability
  study, this was identified as a problem and is a regression in
  functionality for old launch instance wizard.

  https://openstack.invisionapp.com/d/main#/console/2472307/66353180/preview

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448867] Re: Subfolder_path is None when upload object.

2016-04-05 Thread Rob Cresswell
Given that the swift UI has been replaced, this is low priority, and it's
unlikely this will be fixed unless somebody turns up with code.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1448867

Title:
  Subfolder_path is None when upload object.

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Subfolder_path is None when uploading an object.

  Steps:
  1. Create 50 containers, Container_1 ~ 50.
  2. Click Container_1, then create pseudo-folder folder1.
  3. Click folder1 and then upload object test1.
  4. Repeatedly click the sidebar "Containers" entry to reload the table.
  5. Repeat step 3 and subfolder_path will be None; viewing the page source
     shows that the button path has lost 'folder1'. The line in question is:

   subfolders = self.table.kwargs.get('subfolder_path', '')

  def get_link_url(self, datum=None):
      # Usable for both the container and object tables
      if getattr(datum, 'container', datum):
          # This is a container
          container_name = http.urlquote(datum.name)
      else:
          # This is a table action, and we already have the container name
          container_name = self.table.kwargs['container_name']
      subfolders = self.table.kwargs.get('subfolder_path', '')
      args = (http.urlquote(bit) for bit in
              (container_name, subfolders) if bit)
      return reverse(self.url, args=args)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1448867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483595] Re: Navigating Horizon UI hieroglyphs appears near buttons

2016-04-05 Thread Diana Whitten
Cannot reproduce this bug.  Is this still a problem for you?

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483595

Title:
  Navigating Horizon UI hieroglyphs appears near buttons

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Reproduced on stable kilo.

  Steps to reproduce:

  1. Navigate to horizon

  Actual result:
  Symbols that look like hieroglyphs appear near buttons (see attachment).
  Reproduced in Firefox, Chrome, and Vivaldi browsers.

  Expected result:
  No symbols like hieroglyphs near buttons

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482499] Re: error page found in url: /dashboard/admin/aggregates/

2016-04-05 Thread Diana Whitten
I can't seem to reproduce this bug.  Can you verify this is still an
issue?

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482499

Title:
  error page found in url:  /dashboard/admin/aggregates/

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Create a host aggregate with an availability zone and a host, then
  disable the only host in this zone. When you open
  /dashboard/admin/aggregates/, the page shows the error: "An unexpected
  error has occurred. Try refreshing the page. If that doesn't help,
  contact your local administrator."

  In  /var/log/horizon/horizon.log ,it shows: 
  2015-08-07 05:15:50,766 26634 ERROR django.request Internal Server Error: 
/dashboard/admin/aggregates/
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
137, in get_response
  response = response.render()
File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
105, in render
  self.content = self.rendered_content
File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
82, in rendered_content
  content = template.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 140, 
in render
  return self._render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 134, 
in _render
  return self.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 840, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 854, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", 
line 123, in render
  return compiled_parent._render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 134, 
in _render
  return self.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 840, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 854, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", 
line 62, in render
  result = block.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 840, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 854, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", 
line 62, in render
  result = block.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 840, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 854, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 891, 
in render
  output = self.filter_expression.resolve(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 585, 
in resolve
  obj = self.var.resolve(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 735, 
in resolve
  value = self._resolve_lookup(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 789, 
in _resolve_lookup
  current = current()
File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1149, 
in render
  return table_template.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 140, 
in render
  return self._render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 134, 
in _render
  return self.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 840, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 854, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", 
line 506, in render
  output = self.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 840, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 854, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", 
line 504, in render
  six.iteritems(self.extra_context)])
File 

[Yahoo-eng-team] [Bug 1566455] [NEW] Using V3 Auth throws error TypeError at /auth/login/ __init__() got an unexpected keyword argument 'unscoped'

2016-04-05 Thread Mitali Parthasarathy
Public bug reported:

PasswordPlugin class creates a plugin with 'unscoped' parameter. This is
throwing the following error:

TypeError at /auth/login/
__init__() got an unexpected keyword argument 'unscoped'

LOG.debug('Attempting to authenticate for %s', username)
if utils.get_keystone_version() >= 3:
    return v3_auth.Password(auth_url=auth_url,
                            username=username,
                            password=password,
                            user_domain_name=user_domain_name,
                            unscoped=True)  # <-- deleting this line removes
                                            #     the error and authenticates
                                            #     successfully
else:
    return v2_auth.Password(auth_url=auth_url,
                            username=username,
                            password=password)

I have V3 API and URL configured in Horizon settings. Using Horizon Kilo
version.

Is there some other setting that is needed?

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1566455

Title:
  Using V3 Auth throws error TypeError at /auth/login/ __init__() got an
  unexpected keyword argument 'unscoped'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  PasswordPlugin class creates a plugin with 'unscoped' parameter. This
  is throwing the following error:

  TypeError at /auth/login/
  __init__() got an unexpected keyword argument 'unscoped'

  LOG.debug('Attempting to authenticate for %s', username)
  if utils.get_keystone_version() >= 3:
      return v3_auth.Password(auth_url=auth_url,
                              username=username,
                              password=password,
                              user_domain_name=user_domain_name,
                              unscoped=True)  # <-- deleting this line removes
                                              #     the error and authenticates
                                              #     successfully
  else:
      return v2_auth.Password(auth_url=auth_url,
                              username=username,
                              password=password)

  I have V3 API and URL configured in Horizon settings. Using Horizon
  Kilo version.

  Is there some other setting that is needed?
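
  Purely as an illustration of a defensive approach (a sketch, assuming the
  installed keystoneclient's v3 Password plugin does not accept the
  'unscoped' keyword), the call could fall back to omitting the argument:

  from keystoneclient.auth.identity import v3 as v3_auth

  def build_v3_password_plugin(auth_url, username, password,
                               user_domain_name):
      # Hypothetical helper: try the 'unscoped' keyword first and fall back
      # to omitting it for older plugin versions that reject it.
      kwargs = dict(auth_url=auth_url,
                    username=username,
                    password=password,
                    user_domain_name=user_domain_name)
      try:
          return v3_auth.Password(unscoped=True, **kwargs)
      except TypeError:
          # Older v3 Password plugins do not know the 'unscoped' argument;
          # retry without it.
          return v3_auth.Password(**kwargs)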

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1566455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1201266] Re: 'is_public' filter should be handled when nova calls glance via V2

2016-04-05 Thread Matt Riedemann
The nova part of this needs to be fixed in the glance v2 support
blueprint:

https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1201266

Title:
  'is_public' filter should be handled when nova calls glance via V2

Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  During an image-list call via Nova, it appends an 'is_public:None' to
  the filters, to ensure that private images are not filtered out. In the
  glance v2 API, this value should be parsed into something useful, say
  returning True and preserving the default behaviour of returning all
  public images (as is done in v1). Currently an image-list to v2 via Nova
  returns an empty list.
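
  As a rough sketch of the kind of translation being asked for (a
  hypothetical helper, not the actual nova or glance code), a v1-style
  'is_public' filter could be mapped onto the v2 'visibility' filter:

  def translate_is_public_filter(filters):
      # Convert a v1-style 'is_public' filter into a v2-style 'visibility'
      # filter; is_public=None means "do not filter on visibility".
      filters = dict(filters)
      is_public = filters.pop('is_public', None)
      if is_public is True:
          filters['visibility'] = 'public'
      elif is_public is False:
          filters['visibility'] = 'private'
      return filters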

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1201266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562914] Re: vendor-data not working with ConfigDrive in 0.7.5

2016-04-05 Thread Scott Moser
Hi, I've just verified this is functional in xenial by putting:
$ cat /var/lib/cloud/seed/nocloud-net/vendor-data 
#cloud-config
runcmd:
 - [ sh, -xc, 'echo "$(date): hi world" | tee /run/hello.txt' ]


It was fixed in trunk at revno 1142, which is in 0.7.6.
If you want or need this fixed in an Ubuntu release, you'll need to use 'Also
affects distribution/package' and then target it for a specific release.


** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1562914

Title:
  vendor-data not working with ConfigDrive in 0.7.5

Status in cloud-init:
  Fix Released

Bug description:
  vendor-data is not read with the ConfigDrive datasource in 0.7.5.

  Mar 28 14:54:01 hi [CLOUDINIT] stages.py[DEBUG]: no vendordata from
  datasource

  Works properly with NoCloud and OpenStack datasources.

  DistroRelease: Ubuntu 14.04
  Package: 0.7.5-0ubuntu1.16
  Uname: 3.13.0-79-generic x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1562914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479799] Re: Ajax update of table rows not working in Horizon

2016-04-05 Thread Diana Whitten
Unable to replicate this bug.  If it's still a problem, please attach
more information so we can try to recreate it.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1479799

Title:
  Ajax update of table rows not working in Horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Hi, I am using Kilo OpenStack. My problem is that when I create a new
  instance through Horizon, the request is submitted and the list is updated
  with the new instance, but the status of the instance is not refreshed: it
  stays in "Spawning" unless I reload the page, after which the instance
  shows as active. In other words, the Ajax update of the status row is not
  happening. The same applies to image creation, volume creation, and
  deletion.

  But when I comment out the following lines in the function
  "def process_request(self, request):" in horizon/middleware.py:

  if request.is_ajax():
      return None

  
  Then auto-polling happens and the rows are automatically updated without
  any refresh.

  Please help me ???

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1479799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468243] Re: The behavior of "Boot from image (creates a new volume)" is maybe confusing.

2016-04-05 Thread Diana Whitten
** Changed in: horizon
   Status: New => Opinion

** Changed in: horizon
Milestone: None => next

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1468243

Title:
  The behavior of "Boot from image (creates a new volume)" is maybe
  confusing.

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  When you want to boot an instance using Horizon, the option "Boot from
  image (creates a new volume)" may be confusing.

  The option could mean "boot from the image, create a new volume, and
  attach the volume to the instance", or it could mean "copy the image to a
  volume and boot from that volume".

  The real behavior in Horizon is the second case: copy the image to a
  volume and boot from the volume.

  Maybe we could use more appropriate wording to describe the option
  instead of the text "Boot from image (creates a new volume)".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1468243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404331] Re: ironic driver logs incorrect error message when node in unexpected state

2016-04-05 Thread Matt Riedemann
This is superseded by https://review.openstack.org/#/c/211097/

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404331

Title:
  ironic driver logs incorrect error message when node in unexpected
  state

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When an Ironic node is not in the expected state (eg, it somehow is
  out of sync with the nova driver), an incorrect error message is
  logged in Nova.

  This showed up while testing changes to Ironic's state machine (so the
  node being in the wrong state is not Nova's fault; I broke something
  in Ironic to cause that). Regardless of the cause of the InvalidState
  error, our Nova driver should handle it better.

  Here is a copy of the trace from this test run:
  
http://logs.openstack.org/83/140883/6/check/check-tempest-dsvm-ironic-pxe_ssh/369aebc/logs/screen-n-cpu.txt.gz?#_2014-12-19_16_52_57_030

  
  2014-12-19 16:52:57.030 WARNING ironicclient.common.http 
[req-7059788d-3695-4b22-851a-bec30922e823 demo demo] Request returned failure 
status.
  2014-12-19 16:52:57.030 WARNING nova.virt.ironic.client_wrapper 
[req-7059788d-3695-4b22-851a-bec30922e823 demo demo] Error contacting Ironic 
server for 'node.update'. Attempt 59 of 60
  ...
  {"error_message": "{\"debuginfo\": null, \"faultcode\": \"Client\", 
\"faultstring\": \"Node 07a3ce7c-0726-4fc2-a94b-a707d0450b5a can not be updated 
while a state transition is in progress.\"}"}
   log_http_response 
/usr/local/lib/python2.7/dist-packages/ironicclient/common/http.py:119
  2014-12-19 16:52:59.196 WARNING ironicclient.common.http 
[req-7059788d-3695-4b22-851a-bec30922e823 demo demo] Request returned failure 
status.
  2014-12-19 16:52:59.196 ERROR nova.virt.ironic.client_wrapper 
[req-7059788d-3695-4b22-851a-bec30922e823 demo demo] Error contacting Ironic 
server for 'node.update'. Attempt 60 of 60
  2014-12-19 16:52:59.197 ERROR nova.compute.manager 
[req-7059788d-3695-4b22-851a-bec30922e823 demo demo] [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] Setting instance vm_state to ERROR
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] Traceback (most recent call last):
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6148, in 
_error_out_instance_on_exception
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] yield
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2865, in rebuild_instance
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] self.driver.rebuild(**kwargs)
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18]   File 
"/opt/stack/new/nova/nova/virt/ironic/driver.py", line 1007, in rebuild
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] preserve_ephemeral)
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18]   File 
"/opt/stack/new/nova/nova/virt/ironic/driver.py", line 297, in 
_add_driver_fields
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] ironicclient.call('node.update', 
node.uuid, patch)
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18]   File 
"/opt/stack/new/nova/nova/virt/ironic/client_wrapper.py", line 118, in call
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] raise exception.NovaException(msg)
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18] NovaException: Error contacting Ironic 
server for 'node.update'. Attempt 60 of 60
  2014-12-19 16:52:59.197 31679 TRACE nova.compute.manager [instance: 
604b621c-2103-4343-85f4-acaef2b0eb18]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541691] Re: server boot with leading and trailing white spaces in name, displays weird error message

2016-04-05 Thread Markus Zoeller (markus_z)
This is fixed. Re-tested with:

$ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec "test "
ERROR (BadRequest): An invalid 'name' value was provided. 
The name must be: printable characters. Can not start or end with 
whitespace. (HTTP 400) 
(Request-ID: req-6e4cefdf-eca8-4878-98c9-85a6387b6d8f)

Test environment:

$ ./tools/info.sh 
os|distro=trusty
os|vendor=Ubuntu
os|release=14.04
git|cinder|master[b4b0c96]
git|devstack|master[be11ae7]
git|glance|master[6e13a71]
git|horizon|master[3666072]
git|keystone|master[6f9f390]
git|noVNC|master[b403cb9]
git|nova|[386d45e]

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1541691

Title:
  server boot with leading and trailing white spaces in name, displays
  weird error message

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If you pass only whitespace, or a name with leading or trailing
  whitespace, while creating an instance, it raises a 400 Bad Request
  exception, but the error message is weird and very long.

  Steps to reproduce:
  ---  

  Create instance using boot command with leading or trailing white spaces.
  $ nova boot --flavor  --image  "test  "

  
  ERROR (BadRequest): Invalid input for field/attribute name. Value: test  
. u'test  ' does not match u'^(?![\\ 
\\\xa0\\\u1680\\\u180e\\\u2000\\\u2001\\\u2002\\\u2003\\\u2004\\\u2005\\\u2006\\\u2007\\\u2008\\\u2009\\\u200a\\\u202f\\\u205f\\\u3000])[\\
 
\\!\\"\\#\\$\\%\\&\\\'\\(\\)\\*\\+\\,\\-\\.\\/0123456789\\:\\;\\<\\=\\>\\?\\@ABCDEFGHIJKLMNOPQRSTUVWXYZ\\[\\]\\^\\_\\`abcdefghijklmnopqrstuvwxyz\\{\\|\\}\\~\\\xa0\\\xa1\\\xa2\\\xa3\\\xa4\\\xa5\\\xa6\\\xa7\\\xa8\\\xa9\\\xaa\\\xab\\\xac\\\xae\\\xaf\\\xb0\\\xb1\\\xb2\\\xb3\\\xb4\\\xb5\\\xb6\\\xb7\\\xb8\\\xb9\\\xba\\\xbb\\\xbc\\\xbd\\\xbe\\\xbf\\\xc0\\\xc1\\\xc2\\\xc3\\\xc4\\\xc5\\\xc6\\\xc7\\\xc8\\\xc9\\\xca\\\xcb\\\xcc\\\xcd\\\xce\\\xcf\\\xd0\\\xd1\\\xd2\\\xd3\\\xd4\\\xd5\\\xd6\\\xd7\\\xd8\\\xd9\\\xda\\\xdb\\\xdc\\\xdd\\\xde\\\xdf\\\xe0\\\xe1\\\xe2\\\xe3\\\xe4\\\xe5\\\xe6\\\xe7\\\xe8\\\xe9\\\xea\\\xeb\\\xec\\\xed\\\xee\\\xef\\\xf0\\\xf1\\\xf2\\\xf3\\\xf4\\\xf5\\\xf6\\\xf7\\\xf8\\\xf9\\\xfa\\\xfb\\\xfc\\\xfd\\\xfe\\\xff\\\u0100
 
\\\u0101\\\u0102\\\u0103\\\u0104\\\u0105\\\u0106\\\u0107\\\u0108\\\u0109\\\u010a\\\u010b\\\u010c\\\u010d\\\u010e\\\u010f\\\u0110\\\u0111\\\u0112\\\u0113\\\u0114\\\u0115\\\u0116\\\u0117\\\u0118\\\u0119\\\u011a\\\u011b\\\u011c\\\u011d\\\u011e\\\u011f\\\u0120\\\u0121\\\u0122\\\u0123\\\u0124\\\u0125\\\u0126\\\u0127\\\u0128\\\u0129\\\u012a\\\u012b\\\u012c\\\u012d\\\u012e\\\u012f\\\u0130\\\u0131\\\u0132\\\u0133\\\u0134\\\u0135\\\u0136\\\u0137\\\u0138\\\u0139\\\u013a\\\u013b\\\u013c\\\u013d\\\u013e\\\u013f\\\u0140\\\u0141\\\u0142\\\u0143\\\u0144\\\u0145\\\u0146\\\u0147\\\u0148\\\u0149\\\u014a\\\u014b\\\u014c\\\u014d\\\u014e\\\u014f\\\u0150\\\u0151\\\u0152\\\u0153\\\u0154\\\u0155\\\u0156\\\u0157\\\u0158\\\u0159\\\u015a\\\u015b\\\u015c\\\u015d\\\u015e\\\u015f\\\u0160\\\u0161\\\u0162\\\u0163\\\u0164\\\u0165\\\u0166\\\u0167\\\u0168\\\u0169\\\u016a\\\u016b\\\u016c\\\u016d\\\u016e\\\u016f\\\u0170\\\u0171\\\u0172\\\u0173\\\u0174\\\u0175\\\u0176\\\u0177\\\u0178\\\u0179\\\u017a\\\u017b\\\u017c\\\u0
 
17d\\\u017e\\\u017f\\\u0180\\\u0181\\\u0182\\\u0183\\\u0184\\\u0185\\\u0186\\\u0187\\\u0188\\\u0189\\\u018a\\\u018b\\\u018c\\\u018d\\\u018e\\\u018f\\\u0190\\\u0191\\\u0192\\\u0193\\\u0194\\\u0195\\\u0196\\\u0197\\\u0198\\\u0199\\\u019a\\\u019b\\\u019c\\\u019d\\\u019e\\\u019f\\\u01a0\\\u01a1\\\u01a2\\\u01a3\\\u01a4\\\u01a5\\\u01a6\\\u01a7\\\u01a8\\\u01a9\\\u01aa\\\u01ab\\\u01ac\\\u01ad\\\u01ae\\\u01af\\\u01b0\\\u01b1\\\u01b2\\\u01b3\\\u01b4\\\u01b5\\\u01b6\\\u01b7\\\u01b8\\\u01b9\\\u01ba\\\u01bb\\\u01bc\\\u01bd\\\u01be\\\u01bf\\\u01c0\\\u01c1\\\u01c2\\\u01c3\\\u01c4\\\u01c5\\\u01c6\\\u01c7\\\u01c8\\\u01c9\\\u01ca\\\u01cb\\\u01cc\\\u01cd\\\u01ce\\\u01cf\\\u01d0\\\u01d1\\\u01d2\\\u01d3\\\u01d4\\\u01d5\\\u01d6\\\u01d7\\\u01d8\\\u01d9\\\u01da\\\u01db\\\u01dc\\\u01dd\\\u01de\\\u01df\\\u01e0\\\u01e1\\\u01e2\\\u01e3\\\u01e4\\\u01e5\\\u01e6\\\u01e7\\\u01e8\\\u01e9\\\u01ea\\\u01eb\\\u01ec\\\u01ed\\\u01ee\\\u01ef\\\u01f0\\\u01f1\\\u01f2\\\u01f3\\\u01f4\\\u01f5\\\u01f6\\\u01f7\\\u01f8\\\u01f9\\
 

[Yahoo-eng-team] [Bug 1455317] Re: Dashboard crashed graphics which look like a text browser

2016-04-05 Thread Diana Whitten
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1455317

Title:
  Dashboard crashed graphics which look like a text browser

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Using CentOS Linux release 7.0.1406 (Core)

  Name: openstack-dashboard
  Arch: noarch
  Version : 2014.2.2
  Release : 1.el7
  Size: 20 M
  Repo: installed
  From repo   : openstack-juno
  Summary : Openstack web user interface reference implementation
  URL : http://horizon.openstack.org/
  License : ASL 2.0 and BSD
  Description : Openstack Dashboard is a web user interface for Openstack. The 
package
  : provides a reference implementation using the Django Horizon 
project,
  : mostly consisting of JavaScript and CSS to tie it altogether as 
a standalone
  : site.

  This is the 3rd time I have encountered this problem, but so far all
  other services are running in the background. I don't really know what
  the cause of this is.

  Based on the log, I found this error:
  2015-05-07 04:09:24,014 13219 INFO horizon.tables.actions 
: "Windows7_Access"
  2015-05-07 04:42:53,182 13221 ERROR horizon.exceptions Not Found: Instance 
could not be found (HTTP 404) (Request-ID: 
req-b01e5808-08c8-4129-9186-aae0be1720ea)
  Traceback (most recent call last):
    File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 268, in get_data
  instance = api.nova.server_get(self.request, instance_id)
    File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py",
 line 559, in server_get
  return Server(novaclient(request).servers.get(instance_id), request)
    File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 
563, in get
  return self._get("/servers/%s" % base.getid(server), "server")
    File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 93, in _get
  _resp, body = self.api.client.get(url)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 487, in 
get
  return self._cs_request(url, 'GET', **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 465, in 
_cs_request
  resp, body = self._time_request(url, method, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 439, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 433, in 
request
  raise exceptions.from_response(resp, body, url, method)
  NotFound: Instance could not be found (HTTP 404) (Request-ID: 
req-b01e5808-08c8-4129-9186-aae0be1720ea)
  2015-05-07 04:54:31,672 13221 INFO openstack_auth.views Logging out user 
"lumad".
  2015-05-07 04:54:31,680 13221 INFO openstack_auth.views Could not delete token
  2015-05-07 22:50:51,319 13217 INFO openstack_auth.forms Login successful for 
user "lumad".
  2015-05-07 22:53:05,866 13216 INFO horizon.tables.actions 
: "Windows7_Access"
  2015-05-07 23:03:43,950 13220 INFO openstack_auth.views Logging out user "".
  2015-05-07 23:03:45,930 13220 INFO openstack_auth.forms Login successful for 
user "admin".
  2015-05-07 23:04:02,510 13221 INFO horizon.tables.actions 
: 
"Windows2k7_Office"

  What I did before was to remove openstack-dashboard, reinstall it, and
  restore local_settings from the backup.
  I am now hesitant to upgrade, as Kilo has now been released.

  Hope someone can enlighten.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1455317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531988] Re: py34 tests that use the stubbed out fake image service race fail a lot

2016-04-05 Thread Markus Zoeller (markus_z)
This no longer has any hits in logstash. I am closing this bug report
and removing the bug signature in elastic-recheck.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531988

Title:
  py34 tests that use the stubbed out fake image service race fail a lot

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Failures like this:

  http://logs.openstack.org/26/224726/16/gate/gate-nova-
  python34/1dda5ee/console.html#_2016-01-07_18_17_49_116

  2016-01-07 18:17:49.115 | 
nova.tests.unit.virt.vmwareapi.test_configdrive.ConfigDriveTestCase.test_create_vm_without_config_drive
  2016-01-07 18:17:49.115 | 
---
  2016-01-07 18:17:49.115 | 
  2016-01-07 18:17:49.115 | Captured traceback:
  2016-01-07 18:17:49.115 | ~~~
  2016-01-07 18:17:49.115 | b'Traceback (most recent call last):'
  2016-01-07 18:17:49.115 | b'  File 
"/home/jenkins/workspace/gate-nova-python34/.tox/py34/lib/python3.4/site-packages/mock/mock.py",
 line 1305, in patched'
  2016-01-07 18:17:49.115 | b'return func(*args, **keywargs)'
  2016-01-07 18:17:49.115 | b'  File 
"/home/jenkins/workspace/gate-nova-python34/nova/tests/unit/virt/vmwareapi/test_configdrive.py",
 line 89, in setUp'
  2016-01-07 18:17:49.116 | b'metadata = image_service.show(context, 
image_id)'
  2016-01-07 18:17:49.116 | b'  File 
"/home/jenkins/workspace/gate-nova-python34/nova/tests/unit/image/fake.py", 
line 184, in show'
  2016-01-07 18:17:49.116 | b'raise 
exception.ImageNotFound(image_id=image_id)'
  2016-01-07 18:17:49.116 | b'nova.exception.ImageNotFound: Image 
70a599e0-31e7-49b7-b260-868f441e862b could not be found.'
  2016-01-07 18:17:49.116 | b''
  2016-01-07 18:17:49.116 | 
  2016-01-07 18:17:49.116 | Captured pythonlogging:
  2016-01-07 18:17:49.116 | ~~~
  2016-01-07 18:17:49.116 | b'2016-01-07 18:02:26,578 INFO 
[oslo_vmware.api] Successfully established new session; session ID is 3818e.'
  2016-01-07 18:17:49.116 | b'2016-01-07 18:02:26,579 INFO 
[nova.virt.vmwareapi.driver] VMware vCenter version: 5.1.0'
  2016-01-07 18:17:49.116 | b'2016-01-07 18:02:26,587 WARNING 
[nova.tests.unit.image.fake] Unable to find image id 
70a599e0-31e7-49b7-b260-868f441e862b.  Have images: {}'
  2016-01-07 18:17:49.116 | b''

  Have been showing up a lot recently in the py34 job for nova:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22b'%20%20%20%20raise%20exception.ImageNotFound(image_id%3Dimage_id)'%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20build_name%3A%5C
  %22gate-nova-python34%5C%22

  A lot of them were in the vmwareapi driver tests which dims
  blacklisted for py34 yesterday:

  https://review.openstack.org/#/c/264368/

  But we're still hitting them.

  I have a change up to stop using the stubs.Set (mox) calls with the
  fake image service stub out code here:

  https://review.openstack.org/#/c/264393/
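
  For context, the general direction of that change (replacing mox-style
  stubs.Set() monkey patching with mock patching that is cleaned up per
  test) looks roughly like this; an illustrative, self-contained sketch,
  not the actual patch:

  import unittest
  from unittest import mock

  class FakeImageService(object):
      # Stand-in for the fake image service being stubbed (placeholder).
      def show(self, context, image_id):
          return {'id': image_id}

  image_service = FakeImageService()

  class ConfigDriveTestCase(unittest.TestCase):
      def setUp(self):
          super(ConfigDriveTestCase, self).setUp()
          # Instead of mox-style self.stubs.Set(image_service, 'show', ...),
          # patch the attribute and register an automatic cleanup so the
          # stub cannot leak into tests run later in the same process.
          patcher = mock.patch.object(image_service, 'show',
                                      return_value={'id': 'fake-image'})
          self.mock_show = patcher.start()
          self.addCleanup(patcher.stop)

      def test_show_is_stubbed(self):
          self.assertEqual('fake-image',
                           image_service.show(None, 'anything')['id'])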

  This bug is for tracking those failures in elastic-recheck so we can
  get them off the uncategorized bugs page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1531988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470443] Re: ICMP rules not getting deleted on the hyperv network adapter extended acl set

2016-04-05 Thread Claudiu Belu
** Changed in: networking-hyperv
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470443

Title:
  ICMP rules not getting deleted on the hyperv network adapter extended
  acl set

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  1. Create a security group with an ICMP rule.
  2. Spawn a VM with the above security group rule.
  3. Ping works from the DHCP namespace.
  4. Delete the rule from the security group, which triggers a port-update.
  5. However, the rule is still present on the compute node for the VM even
  after the port-update.

  Root cause: the ICMP rule is created with an empty ('') local port, but
  during remove_security_rule the rule is matched against port "ANY", which
  does not match any rule, so the rule is not deleted.
  Solution: introduce a check that matches an empty local port when deleting
  ICMP rules.
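
  As a rough illustration of that check (a hypothetical helper, not the
  actual networking-hyperv code), the expected local port could be derived
  from the protocol before matching:

  def expected_local_port(protocol, local_port):
      # ICMP rules are created with an empty local port, so match against
      # '' instead of the usual 'ANY' wildcard when removing them.
      if protocol.lower() == 'icmp':
          return ''
      return local_port or 'ANY'

  def rule_matches(existing_rule, protocol, local_port):
      # 'existing_rule' is assumed to expose Protocol and LocalPort
      # attributes, roughly like the Hyper-V extended ACL objects.
      return (existing_rule.Protocol == protocol and
              existing_rule.LocalPort == expected_local_port(protocol,
                                                             local_port))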

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1470443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466547] Re: Hyper-V: Cannot add ICMPv6 security group rule

2016-04-05 Thread Claudiu Belu
** Changed in: networking-hyperv
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466547

Title:
  Hyper-V: Cannot add ICMPv6 security group rule

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  Security Group rules created with ethertype 'IPv6' and protocol 'icmp'
  cannot be added by the Hyper-V Security Groups Driver, as it cannot
  add rules with the protocol 'icmpv6'.

  This can be easily fixed by having the Hyper-V Security Groups Driver
  create rules with protocol '58' instead. [1] These rules will also
  have to be stateless, as ICMP rules cannot be stateful on Hyper-V.
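
  As a rough sketch of that mapping (a hypothetical helper, not the actual
  driver code), the protocol could be normalized before the ACL rule is
  created:

  def normalize_protocol(ethertype, protocol):
      # Hyper-V does not accept 'icmpv6', so map the neutron 'icmp'
      # protocol to IANA protocol number 58 for IPv6 rules.
      if protocol == 'icmp' and ethertype.lower() == 'ipv6':
          return '58'
      return protocol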

  This bug is causing the test
  tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os
  to fail on Hyper-V.

  [1] http://www.iana.org/assignments/protocol-numbers/protocol-
  numbers.xhtml

  Log: http://paste.openstack.org/show/301866/

  Security Groups: http://paste.openstack.org/show/301870/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1466547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520054] Re: HyperV third-party code still present in neutron tree

2016-04-05 Thread Claudiu Belu
** Changed in: networking-hyperv
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520054

Title:
  HyperV third-party code still present in neutron tree

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  The HyperV ML2 mechanism driver and agent should be completely removed
  from the neutron tree. All HyperV code and artifacts should be
  contained in the networking-hyperv sub-project.

  See http://docs.openstack.org/developer/neutron/devref/contribute.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1520054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1016633] Re: Bad performance problem with nova.virt.firewall

2016-04-05 Thread Markus Zoeller (markus_z)
@Hans Lindgren: Thanks for the feedback. This bug report is really old
and I doubt that the current "medium" importance is still valid. I am
closing this as Fix Released; thanks for your patch.

@David Kranz (+ other stakeholders): Please double-check whether the
issue is fixed from your perspective. Use the current master (Newton)
code for that. If it is not fixed, please reopen and provide some
information on how you tested this.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1016633

Title:
  Bad performance problem with nova.virt.firewall

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I was trying to figure out why creating 1, 2, or 4 servers in parallel on
  an 8-core machine did not show any speedup. I found a problem, shown in
  the log snippet below for 4 servers. The pair of calls producing the
  debug messages is separated by only a single call.

  def prepare_instance_filter(self, instance, network_info):
      # make sure this is legacy nw_info
      network_info = self._handle_network_info_model(network_info)

      self.instances[instance['id']] = instance
      self.network_infos[instance['id']] = network_info
      self.add_filters_for_instance(instance)
      LOG.debug(_('Filters added to instance'), instance=instance)
      self.refresh_provider_fw_rules()
      LOG.debug(_('Provider Firewall Rules refreshed'), instance=instance)
      self.iptables.apply()

  Note the interleaving of the last two calls in this log snippet and
  how long they take:

  
  Jun 22 10:52:09 xg06eth0 2012-06-22 10:52:09 DEBUG nova.virt.firewall 
[req-14689766-cc17-4d8d-85bb-c4c19a2fc88d demo demo] [instance: 
4c5a43af-04fd-4aa0-818e-8e0c5384b279] Filters added to instance from 
(pid=15704) prepare_instance_filter /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:10 xg06eth0 2012-06-22 10:52:10 DEBUG
  nova.virt.firewall [req-14689766-cc17-4d8d-85bb-c4c19a2fc88d demo
  demo] [instance: 4c5a43af-04fd-4aa0-818e-8e0c5384b279] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

  Jun 22 10:52:18 xg06eth0 2012-06-22 10:52:18 DEBUG
  nova.virt.firewall [req-c9ed42e0-1eed-418a-ba37-132bcc26735c demo
  demo] [instance: df15e7d6-657e-4fd7-a4eb-6aab1bd63d5b] Filters added
  to instance from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:19 xg06eth0 2012-06-22 10:52:19 DEBUG
  nova.virt.firewall [req-c9ed42e0-1eed-418a-ba37-132bcc26735c demo
  demo] [instance: df15e7d6-657e-4fd7-a4eb-6aab1bd63d5b] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

  Jun 22 10:52:19 xg06eth0 2012-06-22 10:52:19 DEBUG
  nova.virt.firewall [req-2daf4cb8-73c5-487a-9bf6-bea08125b461 demo
  demo] [instance: 765212a6-cc23-4d5a-b252-5fa6b5f8331e] Filters added
  to instance from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:25 xg06eth0 2012-06-22 10:52:25 DEBUG
  nova.virt.firewall [req-5618e93e-3af1-4c65-b826-9d38850a215d demo
  demo] [instance: fa6423ac-82b8-419b-a077-f2d44d081771] Filters added
  to instance from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:38 xg06eth0 2012-06-22 10:52:38 DEBUG
  nova.virt.firewall [req-2daf4cb8-73c5-487a-9bf6-bea08125b461 demo
  demo] [instance: 765212a6-cc23-4d5a-b252-5fa6b5f8331e] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

  Jun 22 10:52:52 xg06eth0 2012-06-22 10:52:52 DEBUG
  nova.virt.firewall [req-5618e93e-3af1-4c65-b826-9d38850a215d demo
  demo] [instance: fa6423ac-82b8-419b-a077-f2d44d081771] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1016633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565711] Re: vlan configuration/unconfigured interfaces creates slow boot time

2016-04-05 Thread Blake Rouse
So the configuration that MAAS emits and curtin generates looks correct.
Cloud-init just waits for the signal so I actually think the issue is
with ifupdown. I have targeted that package as well, will leave the
others for now just to track.

** Also affects: ifupdown
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1565711

Title:
  vlan configuration/unconfigured interfaces creates slow boot time

Status in cloud-init:
  New
Status in curtin:
  New
Status in ifupdown:
  New
Status in MAAS:
  New

Bug description:
  maas: 1.9.1+bzr4543-0ubuntu1~trusty1 (from proposed PPA)

  Deploying juju bootstrap node on Ubuntu 14.04 with the following
  network configuration:

  eth0
  static assigned IP address, default VLAN (no trunking)

  eth1
 static assigned IP address, secondary VLAN

 eth1.2667
 static assigned IP address, VLAN 2667

 eth1.2668
 static assigned IP address, VLAN 2668

 eth1.2669
 static assigned IP address, VLAN 2669

 eth1.2670
 static assigned IP address, VLAN 2670

  eth2
unconfigured

  eth3
unconfigured

  
  MAAS generates a /e/n/i with auto stanzas for the VLAN devices and the
  unconfigured network interfaces; the upstart process which checks that
  network configuration is complete waits for /var/run/ifup. to exist for
  all auto interfaces; these will never appear for either the VLAN
  interfaces or the unconfigured network interfaces.

  As a result, boot time is very long, as cloud-init and networking both
  take 2 minutes to time out waiting for network interfaces that will
  never appear to be configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1565711/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291069] Re: Modifying flavor access list modifies the flavor info

2016-04-05 Thread Matt Borland
Similar to https://bugs.launchpad.net/horizon/+bug/1311561, which Cindy
designated as Won't Fix.  I agree.

** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291069

Title:
  Modifying flavor access list modifies the flavor info

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  When the user modifies the flavor access list, the flavor is removed
  and created again even when the flavor information hasn't changed, so
  there was no need to update the flavor information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566405] [NEW] Create Image Alert Left Padding

2016-04-05 Thread Diana Whitten
Public bug reported:

The left padding on a Create Image error is too much.  See attached
picture

** Affects: horizon
 Importance: Low
 Status: Triaged


** Tags: style

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1566405

Title:
  Create Image Alert Left Padding

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  The left padding on a Create Image error is too much.  See attached
  picture

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1566405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible for python 3

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/267084
Committed: 
https://git.openstack.org/cgit/openstack/python-troveclient/commit/?id=8ed42da06b267c41b1c429d1002435dac0f6a0d5
Submitter: Jenkins
Branch: master

commit 8ed42da06b267c41b1c429d1002435dac0f6a0d5
Author: LiuNanke 
Date:   Thu Jan 14 02:36:30 2016 +0800

Keep py3.X compatibility for urllib

Use six.moves.urllib.parse instead of urllib

Change-Id: Ia728005ee0af307a7df042f23f0276f922926465
Closes-Bug: #1280105
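
For reference, the Python 2/3-compatible pattern referred to above looks
roughly like this (a minimal sketch):

    from six.moves.urllib import parse

    # Works unchanged on Python 2 and Python 3, unlike importing
    # urllib/urllib2 directly.
    query = parse.urlencode({'name': 'test', 'limit': 10})
    url = 'https://example.com/v1/instances?' + query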


** Changed in: python-troveclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280105

Title:
  urllib/urllib2  is incompatible for python 3

Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in Fuel for OpenStack:
  Fix Committed
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Committed
Status in neutron:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in refstack:
  Fix Released
Status in Sahara:
  Fix Released
Status in tacker:
  In Progress
Status in tempest:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in Zuul:
  In Progress

Bug description:
  urllib/urllib2  is incompatible for python 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563626] Re: When a VM is deleted, ipallocation table is not cleaned up

2016-04-05 Thread Sridhar Venkat
I am cancelling this bug; the problem was root-caused to a database issue
with the foreign key setup between the ports and ipallocations tables.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563626

Title:
  When a VM is deleted, ipallocation table is not cleaned up

Status in neutron:
  Invalid

Bug description:
  This problem is related to the latest Mitaka driver. When a VM is
  deployed, I see records added to the ports and ipallocations tables. When
  the VM is deleted, the record is deleted only from the ports table; the
  record in the ipallocations table stays. Because of this, the same IP
  cannot be reused for a subsequent deploy.
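
  For illustration only, this is the kind of foreign-key definition that
  makes IP allocation rows follow their port on delete (a hypothetical
  SQLAlchemy sketch, not the actual neutron model):

  import sqlalchemy as sa

  ipallocations = sa.Table(
      'ipallocations', sa.MetaData(),
      sa.Column('ip_address', sa.String(64), primary_key=True),
      sa.Column('port_id', sa.String(36),
                # ON DELETE CASCADE removes the allocation row when the
                # referenced port row is deleted.
                sa.ForeignKey('ports.id', ondelete='CASCADE')),
  )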

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566379] Re: enable_v2_api = True is not working

2016-04-05 Thread Ian Cordasco
You appear to be using glanceclient 0.14.2. That version of the client
defaults to using v1 of the API. You can manually override it or upgrade
to a new version that uses the v2 API by default.
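
For example, the API version can be pinned explicitly when constructing the
client (a sketch with placeholder endpoint and token values):

    from glanceclient import Client

    # Passing '2' selects the Images v2 API regardless of the client's
    # default; the endpoint and token below are placeholders.
    glance = Client('2', 'http://controller:9292', token='<token>')
    images = list(glance.images.list())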

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1566379

Title:
  enable_v2_api = True is not working

Status in Glance:
  Invalid

Bug description:
  I have set "enable_v2_api=True" in glance-api.conf and restarted the
  glance-api service.

  After that, I expected glance to use the v2 API, but it still uses the
  v1 API.

  The above has been observed with glance version 0.14.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1566379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546158] Re: Table actions with non-unique names cause horizon to incorrectly bind them to a table

2016-04-05 Thread Rob Cresswell
This isn't a bug. The name acts as a data slug or identifier; it's
never displayed to the user, so there's no reason not to use something
unique.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546158

Title:
  Table actions with non-unique names cause horizon to incorrectly bind
  them to a table

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  If you define two different actions with the same name and place one of them 
into table_actions and the other into row_action, horizon will place them 
non-deterministically,  because horizon.tables.base.DataTable class relies on 
the Action.name attribute 
https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L1389
  https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L1124
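
  As an illustration, a minimal sketch of the ambiguous setup (assuming the
  standard horizon.tables API; the action classes and names here are
  hypothetical, not taken from the bug):

      from horizon import tables

      class RebootRow(tables.Action):
          name = "reboot"            # same Action.name ...
          verbose_name = "Reboot"

      class RebootAll(tables.Action):
          name = "reboot"            # ... as this one
          verbose_name = "Reboot All"

      class InstancesTable(tables.DataTable):
          class Meta(object):
              name = "instances"
              table_actions = (RebootAll,)
              row_actions = (RebootRow,)

  Because DataTable looks actions up by Action.name, which of the two classes
  ends up bound where is not deterministic; giving each action a unique name
  avoids the problem.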

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562945] Re: Change nova's devstack blacklist to use test uuids

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/298437
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b113cb13e95b4d9305f093342ad4d34378aee8bb
Submitter: Jenkins
Branch:master

commit b113cb13e95b4d9305f093342ad4d34378aee8bb
Author: Chuck Carmack 
Date:   Mon Mar 28 19:16:35 2016 +

Change the nova tempest blacklist to use the idempotent ids

Currently, nova blacklists tests by name.  If a name changes,
the regex will not match, and the test will be attempted.
If the test idempotent ids are used instead, the test
will always be skipped.

Change-Id: Iaf189c42c342b4c2d7c77555980ed49914210bf6
Related-bug: 1562323
Closes-bug: 1562945


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562945

Title:
  Change nova's devstack blacklist to use test uuids

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  See https://bugs.launchpad.net/nova/+bug/1562323 for the background.
  A temporary workaround was made for that bug.  This bug will be used
  to implement changing the nova devstack blacklist file to use the test
  idempotent ids and not the test names.
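
  For illustration, the difference in a blacklist file (one regular
  expression per line, matched against the full test id; the test path and
  uuid below are made up, not real nova entries):

      # name-based entry: stops matching if the test is ever renamed
      ^tempest\.api\.compute\.servers\.test_example\.ExampleTest\.test_resize
      # id-based entry: keeps matching after a rename, because the
      # idempotent id is embedded in the test id as [id-...]
      2f3a1b84-0a70-4c56-9abc-def012345678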

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566159] Re: ERROR nova.api.openstack.extensions HTTPInternalServerError: 500

2016-04-05 Thread Markus Zoeller (markus_z)
*** This bug is a duplicate of bug 1527925 ***
https://bugs.launchpad.net/bugs/1527925

This looks like a (popular) configuration issue:
* https://bugs.launchpad.net/nova/+bug/1514480
* https://bugs.launchpad.net/nova/+bug/1523889
* https://bugs.launchpad.net/nova/+bug/1523224
* https://bugs.launchpad.net/nova/+bug/1525819

Please double-check the passwords in the glance config files and
the nova config file.

** This bug has been marked a duplicate of bug 1527925
   glanceclient.exc.HTTPInternalServerError when running nova image-list

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566159

Title:
  ERROR nova.api.openstack.extensions HTTPInternalServerError: 500

Status in OpenStack Compute (nova):
  New

Bug description:
  2016-04-05 10:54:42.673 989 INFO oslo_service.service [-] Child 1100 exited 
with status 0
  2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1096 exited 
with status 0
  2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1076 exited 
with status 0
  2016-04-05 10:54:42.680 989 INFO oslo_service.service [-] Child 1090 exited 
with status 0
  2016-04-05 10:54:42.681 989 INFO oslo_service.service [-] Child 1077 exited 
with status 0
  2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1091 exited 
with status 0
  2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1079 exited 
with status 0
  2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1094 exited 
with status 0
  2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1075 exited 
with status 0
  2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1071 exited 
with status 0
  2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1097 exited 
with status 0
  2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1092 exited 
with status 0
  2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1098 exited 
with status 0
  2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1099 exited 
with status 0
  2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1073 exited 
with status 0
  2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1067 exited 
with status 0
  2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1068 exited 
with status 0
  2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1069 exited 
with status 0
  2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1072 exited 
with status 0
  2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1066 exited 
with status 0
  2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1074 exited 
with status 0
  2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1078 exited 
with status 0
  2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1080 exited 
with status 0
  2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1081 exited 
with status 0
  2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1093 killed 
by signal 15
  2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1095 killed 
by signal 15
  2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1101 exited 
with status 0
  2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1102 exited 
with status 0
  2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1103 exited 
with status 0
  2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1104 exited 
with status 0
  2016-04-05 10:54:42.713 989 INFO oslo_service.service [-] Child 1105 exited 
with status 0
  2016-04-05 10:54:46.299 1259 INFO oslo_service.periodic_task [-] Skipping 
periodic task _periodic_update_dns because its interval is negative
  2016-04-05 10:54:46.529 1259 INFO nova.api.openstack [-] Loaded extensions: 
['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 
'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 
'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 
'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 
'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 
'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 
'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 
'os-evacuate', 'os-extended-availability-zone', 
'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 
'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 
'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 
'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 
'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 
'os-instance-actions', 'os-instance-usage-audi
 t-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 
'os-multinic', 

[Yahoo-eng-team] [Bug 1421017] Re: Flavor/image size checks are insufficient and untested

2016-04-05 Thread Matt Borland
*** This bug is a duplicate of bug 1401101 ***
https://bugs.launchpad.net/bugs/1401101

** This bug has been marked a duplicate of bug 1401101
   Nova launch instance from snapshot ignores --min-disk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421017

Title:
  Flavor/image size checks are insufficient and untested

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When launching an instance, Horizon does some checks to ensure that
  the flavor the user has chosen is large enough to support the image
  they have chosen. Unfortunately:

  1. These checks are broken. For disk size, they only check the
  `min_disk` property of the image, not the image size or virtual size;
  as a result, the image disk size is only checked in practice if the
  user has bothered to set the `min_disk` property.

  2. The unit tests for the checks are broken. They modify local
  versions of the glance image data, but then run the tests by passing
  image IDs, which means that the original unmodified test images are
  used. The tests happen to pass, accidentally, but they don't test what
  we think they're testing.
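
  A minimal sketch of the kind of disk-size check point 1 argues for
  (function and field names here are illustrative, not Horizon's actual
  code): take the larger of min_disk and the image's real/virtual size,
  rounded up to whole GiB, before accepting a flavor:

      def flavor_disk_is_large_enough(flavor_disk_gb, image):
          # min_disk is in GiB; size/virtual_size are in bytes and may be
          # missing or None, hence the defensive defaults
          size_bytes = max(image.get('size') or 0,
                           image.get('virtual_size') or 0)
          size_gb = -(-size_bytes // 1024 ** 3)   # ceiling division
          required_gb = max(image.get('min_disk') or 0, size_gb)
          return flavor_disk_gb >= required_gb

      # e.g. a 3 GiB image with min_disk unset still needs a >= 3 GiB flavor
      print(flavor_disk_is_large_enough(2, {'size': 3 * 1024 ** 3}))  # False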

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291579
Committed: 
https://git.openstack.org/cgit/openstack/searchlight/commit/?id=d5c3b4679647233ca023ec48c7ae2f1a022d8664
Submitter: Jenkins
Branch:master

commit d5c3b4679647233ca023ec48c7ae2f1a022d8664
Author: Swapnil Kulkarni (coolsvap) 
Date:   Fri Mar 11 13:00:50 2016 +0530

Replace deprecated LOG.warn with LOG.warning

LOG.warn is deprecated. It is still used in a few places.
Updated to the non-deprecated LOG.warning.

Change-Id: I6c5c39c033b06c7f2cf5b806b9b3cd8ff17485ad
Closes-Bug:#1508442


** Changed in: searchlight
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Committed
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  In Progress
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in shaker:
  In Progress
Status in Solum:
  Fix Released
Status in tempest:
  In Progress
Status in tripleo:
  In Progress
Status in trove-dashboard:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
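
  For illustration, the change is a mechanical rename; with the stdlib
  logger (and, per [2], the oslo.log logger accepts both spellings) it
  looks like:

      import logging

      logging.basicConfig()
      LOG = logging.getLogger(__name__)

      LOG.warn("deprecated alias, kept only for backwards compatibility")
      LOG.warning("preferred, non-deprecated spelling")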

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566383] [NEW] The creation fip does not endure restarting of l3-agent

2016-04-05 Thread QingchuanHao
Public bug reported:

When creating the first floating IP of a router, the veth pair (fpr and
rfp) will be created. But if the l3-agent is restarted accidentally before
the locally allocated IP is assigned to the fpr or rfp, the gateway will not
be added successfully and an error is raised.

** Affects: neutron
 Importance: Undecided
 Assignee: QingchuanHao (haoqingchuan-28)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => QingchuanHao (haoqingchuan-28)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566383

Title:
  The creation fip does not endure restarting of l3-agent

Status in neutron:
  New

Bug description:
  When creating the first floating IP of a router, the veth pair (fpr and
  rfp) will be created. But if the l3-agent is restarted accidentally before
  the locally allocated IP is assigned to the fpr or rfp, the gateway will
  not be added successfully and an error is raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414118] Re: Horizon shows stacktrace instead of 401: unauthorized

2016-04-05 Thread Matt Borland
I think that the current behavior is appropriate; if for some reason
we're calling getVolumes when the user doesn't have access, there should
be some sort of stack trace in the server logs. The bug is more about
whether we should be checking something to help prevent that call from
being made, not about changing the getVolumes behavior.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1414118

Title:
  Horizon shows stacktrace instead of 401: unauthorized

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  In Icehouse, if a user is unauthorized to access volumes, horizon shows
  a popup upon login as well as when accessing volumes. In Juno, horizon
  shows no warning upon login, and as one accesses volumes it shows a
  stacktrace.

  One way to reproduce it is to edit on the cinder host the /etc/cinder
  /api-paste.ini file, and in the "filter:authtoken" section configure
  an auth_host that is not keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1414118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566379] [NEW] enable_v2_api = True is not working

2016-04-05 Thread Swami Reddy
Public bug reported:

I have set the "eanble_v2_api=True" in the glance-api.conf and restarted
the glance-api service.

After the above, I expected that glance would use the V2 APIs. But it
still uses the V1 APIs.

The above has been observed with glance version 0.14.1

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: glance

** Description changed:

  I have set the "eanble_v2_api=True" in the glance-api.conf and restarted
  the glance-api service.
  
  After the above, expected that glance will use the V2 apis. But still it
  uses V1 APIs.
+ 
+ The above has been observed with glance version 0.14.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1566379

Title:
  enable_v2_api = True is not working

Status in Glance:
  New

Bug description:
  I have set the "eanble_v2_api=True" in the glance-api.conf and
  restarted the glance-api service.

  After the above, I expected that glance would use the V2 APIs. But it
  still uses the V1 APIs.

  The above has been observed with glance version 0.14.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1566379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450150] Re: Refactor angular cloud services utilities

2016-04-05 Thread Travis Tripp
This is done. It was primarily completed when I refactored to the hz-if
directive, which hz-if-services, hz-if-policies, etc. were all based on.

** Changed in: horizon
   Status: Incomplete => Fix Released

** Changed in: horizon
 Assignee: Thai Tran (tqtran) => Travis Tripp (travis-tripp)

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450150

Title:
  Refactor angular cloud services utilities

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The nova-extensions and settings-service services and directives were
  implemented at the very end of Kilo. The directives created need to be
  able to be attribute based and should be investigated for how they might
  be combined with other conditional-behavior-based directives.

  The Nova Extension check also needs to make use of the Service Catalog
  (check whether the service is even enabled before checking extensions).
  This should include that works as well.

  See here:
  
https://review.openstack.org/#/c/164359/12/horizon/static/horizon/js/angular/services/hz.api.keystone.js

  See line 12 here for example of directive using extensions that we
  want to enhance using the service catalog.

  
https://review.openstack.org/#/c/166708/22/openstack_dashboard/static/dashboard
  /launch-instance/configuration/configuration.html

  See line 110 here:
  
https://review.openstack.org/#/c/171418/13/openstack_dashboard/static/dashboard/launch-instance/source/source.html

  See also:
  https://review.openstack.org/#/c/170351/

  This all introduced a new cloud services.js which created some unwanted extra 
dependency injection. This can be refactored to its own new utility.
  Support attributes vs elements.
  Review naming.
  Consider using lazy loading via the compile function
  https://www.youtube.com/watch?v=UMkd0nYmLzY=24m30s

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543459] Re: login available before cloud-init completed

2016-04-05 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Low

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1543459

Title:
  login available before cloud-init completed

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Serial console login is available before cloud-init has finished setting
  up the user (e.g. password).

  This is with nocloud provider, in libvirt / qemu.

  console log is here:

  https://pastebin.canonical.com/149365/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1543459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566353] [NEW] glance: use ostestr instead of testr

2016-04-05 Thread Danny Al-Gaaf
Public bug reported:

Glance should use ostestr instead of testr. ostestr is more powerful and
provides much prettier output than testr. Other projects like cinder or
neutron already use the testr wrapper for OpenStack projects.

see: http://docs.openstack.org/developer/os-testr/readme.html#

** Affects: glance
 Importance: Undecided
 Assignee: Danny Al-Gaaf (danny-al-gaaf)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Danny Al-Gaaf (danny-al-gaaf)

** Description changed:

  Glance should use ostestr instead of testr. ostestr is more powerful and
  provide much prettier output than testr. Other projects like cinder or
  neutron already uses the testr wrapper for openstack projects.
+ 
+ see: http://docs.openstack.org/developer/os-testr/readme.html#

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1566353

Title:
  glance: use ostestr instead of testr

Status in Glance:
  New

Bug description:
  Glance should use ostestr instead of testr. ostestr is more powerful
  and provides much prettier output than testr. Other projects like
  cinder or neutron already use the testr wrapper for OpenStack
  projects.

  see: http://docs.openstack.org/developer/os-testr/readme.html#

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1566353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566354] [NEW] glance_store: use ostestr instead of testr

2016-04-05 Thread Danny Al-Gaaf
Public bug reported:

glance_store should use ostestr instead of testr. ostestr is more
powerful and provides much prettier output than testr. Other projects
like cinder or neutron already use the testr wrapper for OpenStack
projects.

see: http://docs.openstack.org/developer/os-testr/readme.html#

** Affects: glance
 Importance: Undecided
 Assignee: Danny Al-Gaaf (danny-al-gaaf)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Danny Al-Gaaf (danny-al-gaaf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1566354

Title:
  glance_store: use ostestr instead of testr

Status in Glance:
  New

Bug description:
  glance_store should use ostestr instead of testr. ostestr is more
  powerful and provides much prettier output than testr. Other projects
  like cinder or neutron already use the testr wrapper for OpenStack
  projects.

  see: http://docs.openstack.org/developer/os-testr/readme.html#

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1566354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566327] [NEW] Creating a security group rule with no protocol fails with KeyError

2016-04-05 Thread Miguel Angel Ajo
Public bug reported:

neutron security-group-rule-create --direction ingress default

results in:


2016-04-05 15:50:56.772 ERROR neutron.api.v2.resource 
[req-67736b7a-6a4c-442c-9536-890ccf5c8d19 admin 
3dc1eb0373d34ba9b2edfb41ee98149c] create failed
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 410, in create
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return 
self._create(request, body, **kwargs)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource self.force_reraise()
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 521, in _create
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource obj = do_create(body)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 503, in do_create
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource request.context, 
reservation.reservation_id)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource self.force_reraise()
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 496, in do_create
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return 
obj_creator(request.context, **kwargs)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_rpc_base.py", line 74, in 
create_security_group_rule
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource security_group_rule)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 374, in 
create_security_group_rule
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return 
self._create_security_group_rule(context, security_group_rule)
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 399, in 
_create_security_group_rule
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource 
protocol=rule_dict['protocol'],
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource KeyError: 'protocol'
2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource

This is a regression, since it was working before.
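
A minimal illustration of the failing lookup (the dict contents are
hypothetical, standing in for the parsed request body): a rule created with
no protocol simply has no 'protocol' key, so plain indexing raises the
KeyError seen above, while a .get() lookup falls back to None ("any
protocol"):

    rule_dict = {'direction': 'ingress', 'security_group_id': 'default'}

    # rule_dict['protocol']        -> KeyError: 'protocol'
    protocol = rule_dict.get('protocol')
    print(protocol)                # -> None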

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566327

Title:
  Creating a security group rule with no protocol fails with KeyError

Status in neutron:
  New

Bug description:
  neutron security-group-rule-create --direction ingress default

  results in:

  
  2016-04-05 15:50:56.772 ERROR neutron.api.v2.resource 
[req-67736b7a-6a4c-442c-9536-890ccf5c8d19 admin 
3dc1eb0373d34ba9b2edfb41ee98149c] create failed
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 410, in create
  2016-04-05 15:50:56.772 TRACE neutron.api.v2.resource return 

[Yahoo-eng-team] [Bug 1563233] Re: Invalid mock name in test_ovs_neutron_agent

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/298612
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a5822ca0d01fb6d53f16047b0cdddff99a31dc2e
Submitter: Jenkins
Branch:master

commit a5822ca0d01fb6d53f16047b0cdddff99a31dc2e
Author: Hynek Mlnarik 
Date:   Tue Mar 29 11:09:14 2016 +0200

Fix invalid mock name in test_ovs_neutron_agent

The test_ovs_neutron_agent mocks on reset_tunnel_br while the patch [1]
renamed this method to setup_tunnel_br.

[1] https://review.openstack.org/#/c/182920/

Closes-Bug: 1563233
Change-Id: I273f05f441f72863077e639775a9483c20a9cc5f


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563233

Title:
  Invalid mock name in test_ovs_neutron_agent

Status in neutron:
  Fix Released

Bug description:
  The test_ovs_neutron_agent mocks on reset_tunnel_br while the patch
  [1] renamed this method to  setup_tunnel_br [2].

  [1] https://review.openstack.org/#/c/182920/
  [2] 
https://github.com/openstack/neutron/commit/73673beacd75a2d9f51f15b284f1b458d32e992e#diff-9cca2f63ca397a7e93909a7119fdd16fL915

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564947] Re: ovs-firewall doesn't work with tunneling and vlan tagging

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300542
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0f9ec7b72a8ca173b760f20323f90bffefa91681
Submitter: Jenkins
Branch:master

commit 0f9ec7b72a8ca173b760f20323f90bffefa91681
Author: Jakub Libosvar 
Date:   Fri Apr 1 14:53:03 2016 +

ovsfw: Remove vlan tag before injecting packets to port

Open vSwitch takes care of vlan tagging in case normal switching is
used. When ingress traffic packets are accepted, the
actions=output: is used but we need to explicitly take care
of stripping out the vlan tags.

Closes-Bug: 1564947
Change-Id: If3fc44c9fd1ac0f7bc9dfe9dc48e76352e981f8e


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564947

Title:
  ovs-firewall doesn't work with tunneling and vlan tagging

Status in neutron:
  Fix Released

Bug description:
  As the firewall uses actions=output: which doesn't handle vlan tags,
  accepted ingress traffic gets packets that are still tagged. Normal
  actions take care of vlan tags according to the tags on ports, so those
  are fine. We should use strip_vlan for all actions using
  output:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1564947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566291] [NEW] L3 agent: at some point an agent becomes unable to handle new routers

2016-04-05 Thread Oleg Bondarev
Public bug reported:

Following seen in l3 agent logs:

2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent [-] Failed to 
process compatible router 'e341e0e2-5089-46e9-91f9-2099a156b27f'
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 497, in 
_process_router_update
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 434, in 
_process_router_if_compatible
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 
self._process_added_router(router)
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 439, in 
_process_added_router
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 
self._router_added(router['id'], router)
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 340, in 
_router_added
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent ri = 
self._create_router(router_id, router)
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 337, in 
_create_router
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent return 
legacy_router.LegacyRouter(*args, **kwargs)
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 61, in 
__init__
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 
DEFAULT_ADDRESS_SCOPE: ADDRESS_SCOPE_MARK_IDS.pop()}
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent KeyError: 'pop from 
an empty set'
2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent
2016-04-05 09:30:09.034 24216 DEBUG neutron.agent.l3.agent [-] Starting router 
update for e341e0e2-5089-46e9-91f9-2099a156b27f, action None, priority 1 
_process_router_update 
/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py:463
2016-04-05 09:30:09.035 24216 DEBUG oslo_messaging._drivers.amqpdriver [-] CALL 
msg_id: 6295fbe9cf2040d79c68f5c5f8b1e963 exchange 'neutron' topic 'q-l3-plugin' 
_send /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:454
2016-04-05 09:30:09.417 24216 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received reply msg_id: 6295fbe9cf2040d79c68f5c5f8b1e963 __call__ 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:302
2016-04-05 09:30:09.418 24216 ERROR neutron.agent.l3.agent [-] Failed to 
process compatible router 'e341e0e2-5089-46e9-91f9-2099a156b27f'

So the agent is constantly resyncing (causing load on the neutron server)
and is unable to handle new routers.

I believe the set "ADDRESS_SCOPE_MARK_IDS = set(range(1024, 2048))" from
router_info.py should not be agent-global; ADDRESS_SCOPE_MARK_IDS should be
per router. Or at least the values need to be returned to the set when a
router is deleted.
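
A minimal sketch of the suggestion (simplified names, not the actual neutron
code): treat the mark ids as a pool and hand the id back when the router is
removed, so a long-running agent never drains the set:

    ADDRESS_SCOPE_MARK_IDS = set(range(1024, 2048))

    class RouterInfo(object):
        def __init__(self):
            # raises KeyError('pop from an empty set') once the pool is empty
            self._address_scope_mark = ADDRESS_SCOPE_MARK_IDS.pop()

        def delete(self):
            # return the mark so it can be reused by a future router
            ADDRESS_SCOPE_MARK_IDS.add(self._address_scope_mark)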

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566291

Title:
  L3 agent: at some point an agent becomes unable to handle new routers

Status in neutron:
  New

Bug description:
  Following seen in l3 agent logs:

  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent [-] Failed to 
process compatible router 'e341e0e2-5089-46e9-91f9-2099a156b27f'
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 497, in 
_process_router_update
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 434, in 
_process_router_if_compatible
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 
self._process_added_router(router)
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 439, in 
_process_added_router
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 
self._router_added(router['id'], router)
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 340, in 
_router_added
  2016-04-05 09:30:09.033 24216 ERROR neutron.agent.l3.agent 

[Yahoo-eng-team] [Bug 1561152] Re: neutron-sanity-check generates invalid bridge names

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/296700
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8ad9c902c20397f5ee643990ef1847e7f2ab799b
Submitter: Jenkins
Branch:master

commit 8ad9c902c20397f5ee643990ef1847e7f2ab799b
Author: Terry Wilson 
Date:   Fri Mar 18 02:16:01 2016 -0500

Ensure bridge names are shorter than max device name len

Use test.base's get_rand_name function to ensure that bridge name
lengths don't exceed the maximum device name length. For uses that
require multiple bridges/ports with the same random characters
appended, add get_related_rand_names and get_related_rand_device_names
functions.

Change-Id: Ib03653f3ca2d2c3d2ea7be1dff4ab0e4e77df51e
Closes-Bug: #1561152


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561152

Title:
  neutron-sanity-check generates invalid bridge names

Status in neutron:
  Fix Released

Bug description:
  Instead of using neutron.tests.base.get_rand_device_name(), sanity
  check tests have been generating their own prefix name and appending a
  random string with utils.get_rand_name(). Many of the strings
  generated were too long to be device names, so ovs-vswitchd would fail
  to create the devices.

  For example:

  2016-03-18T05:40:41.950Z|07166|dpif|WARN|system@ovs-system: failed to query 
port patchtest-b76adc: Invalid argument
  2016-03-18T05:40:41.950Z|07167|dpif|WARN|system@ovs-system: failed to add 
patchtest-b76adc as port: Invalid argument
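
  A minimal sketch of the bounded name generation the fix describes (the
  helper name comes from the report, the implementation here is hypothetical):

      import random
      import string

      MAX_DEV_LEN = 15  # Linux IFNAMSIZ is 16, minus the trailing NUL

      def get_rand_device_name(prefix='test'):
          suffix_len = MAX_DEV_LEN - len(prefix)
          suffix = ''.join(random.choice(string.ascii_lowercase + string.digits)
                           for _ in range(suffix_len))
          return prefix + suffix

      print(get_rand_device_name('patchtest-'))  # -> 15 characters here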

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565753] Re: notification_format config option is not part of the sample config

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281942
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f8506618356816c3a01dff5dd99ba7c8a48f483c
Submitter: Jenkins
Branch:master

commit f8506618356816c3a01dff5dd99ba7c8a48f483c
Author: Balazs Gibizer 
Date:   Thu Feb 18 17:35:01 2016 +0100

config options: Centralize 'nova.rpc' options

The single option in nova.rpc is moved to the central place.

As this module contains a single option this patch also enhances
the help text of it.

Implements: bp centralize-config-options-newton
Closes-bug: #1565753
Change-Id: Ib6030c67b315a3b8d0d4854b4b6f1b969be6c00c


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565753

Title:
  notification_format config option is not part of the sample config

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The generated sample config [1]  does not include the
  notification_format option introduced in Mitaka.

  [1] http://docs.openstack.org/developer/nova/sample_config.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1565753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566282] [NEW] Returning federated user fails to authenticate with HTTP 500

2016-04-05 Thread Boris Bobrov
Public bug reported:

I've set up stable/mitaka keystone with AD FS and it worked. After some
time, I decided to test the setup again, and after trying to authenticate
I got HTTP 500.

In keystone logs, there is this: http://paste.openstack.org/show/492968/
(the logs are the same as below).

This happens because self.update_federated_user_display_name is called
in identity_api.shadow_federated_user. Since no
update_federated_user_display_name is defined in identity_api,
__getattr__ tries to look up the name in the driver. The driver used for
identity_api does not have update_federated_user_display_name, and an
AttributeError is raised.

The issue seems to exist on both stable/mitaka and master (6f9f390).
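
A stripped-down sketch of the delegation pattern involved (class names
simplified, not keystone's actual code): the manager's __getattr__ only runs
for names the manager itself lacks, and it forwards them to the driver, so a
method that exists on neither surfaces as an AttributeError on the driver:

    class Driver(object):
        def get_user(self, user_id):
            return {'id': user_id}

    class Manager(object):
        driver = Driver()

        def __getattr__(self, name):
            return getattr(self.driver, name)

    identity_api = Manager()
    identity_api.get_user('abc')                            # ok, delegated
    identity_api.update_federated_user_display_name('x')    # AttributeError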

2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] direct_maps: 
 
_update_local_mapping /opt/stack/keystone/keystone/federation/utils.py:691
2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] local: {u'id': 
u'f7567142a8024543ab678de7be553dbf'} _update_local_mapping 
/opt/stack/keystone/keystone/federation/utils.py:692
2016-04-05 11:53:56.173 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] identity_values: 
[{u'user': {u'domain': {u'name': u'Default'}, u'name': u'bre...@winad.org'}}, 
{u'group': {u'id': u'f7567142a8024543ab678de7be553dbf'}}] proc
ess /opt/stack/keystone/keystone/federation/utils.py:535
2016-04-05 11:53:56.174 2100 DEBUG keystone.federation.utils 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] mapped_properties: 
{'group_ids': [u'f7567142a8024543ab678de7be553dbf'], 'user': {u'domain': {'id': 
'Federated'}, 'type': 'ephemeral', u'name': u'breton@winad
.org'}, 'group_names': []} process 
/opt/stack/keystone/keystone/federation/utils.py:537
2016-04-05 11:53:56.273 2100 ERROR keystone.common.wsgi 
[req-fe431d33-f850-4a49-87b6-abad9290e638 - - - - -] 'Identity' object has no 
attribute 'update_federated_user_display_name'
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 249, in __call__
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi result = 
method(context, **params)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/federation/controllers.py", line 320, in 
federated_sso_auth
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi protocol_id)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/federation/controllers.py", line 302, in 
federated_authentication
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi return 
self.authenticate_for_token(context, auth=auth)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 396, in 
authenticate_for_token
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi 
self.authenticate(context, auth_info, auth_context)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 520, in authenticate
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi auth_context)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/plugins/mapped.py", line 65, in authenticate
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi self.identity_api)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/plugins/mapped.py", line 153, in 
handle_unscoped_token
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi display_name)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/manager.py", line 124, in wrapped
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053, in 
decorate
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi should_cache_fn)
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657, in 
get_or_create
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi async_creator) as 
value:
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi return self._enter()
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, in 
_enter
2016-04-05 11:53:56.273 2100 TRACE keystone.common.wsgi generated = 
self._enter_create(createdtime)

[Yahoo-eng-team] [Bug 1565824] Re: config option generation doesn't work with itertools.chain generator

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301166
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ee53631886fff8c7e9d09b19b3456d0e80c5de88
Submitter: Jenkins
Branch:master

commit ee53631886fff8c7e9d09b19b3456d0e80c5de88
Author: Markus Zoeller 
Date:   Mon Apr 4 16:42:25 2016 +0200

config option generation doesn't work with a generator

The config options won't get emitted into "sample.config" when the
"itertools.chain" method is used to combine multiple lists. The reason
is that the generator created by "itertools.chain" doesn't get reset
after getting used in "register_opts". A simple complete example:

import itertools

a = [1, 2]
b = [3, 4]

ab = itertools.chain(a, b)

print("printing 'ab' for the first time")
for i in ab:
print(i)

print("printing 'ab' for the second time")
for i in ab:
print(i)

The combined list 'ab' won't get printed a second time. The same thing
happens when the "oslo.config" generator wants to print the file
"sample.config". The method "register_opts" gets called first and
sets the cursor of the generator to the end, which means the same
generator in "list_opts" is already at its end and iterates over
nothing.

This change creates a list with the generator. This list can be used
multiple times, first by "register_opts" and then by "list_opts".
The options get emitted into the "sample.config" file again.

Closes bug 1565824
Change-Id: Ib1bad2d76f34c5557b089f225511adfc0259fdb6


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565824

Title:
  config option generation doesn't work with itertools.chain generator

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Config options code like this doesn't generate output in the
  sample.config file:

  ALL_OPTS = itertools.chain(
 compute_opts,
 resource_tracker_opts,
 allocation_ratio_opts
 )

  
  def register_opts(conf):
  conf.register_opts(ALL_OPTS)

  
  def list_opts():
  return {'DEFAULT': ALL_OPTS}

  The reason is that the generator created by "itertools.chain" doesn't
  get reset after getting used in "register_opts". A simple complete
  example:

  import itertools

  a = [1, 2]
  b = [3, 4]

  ab = itertools.chain(a, b)

  print("printing 'ab' for the first time")
  for i in ab:
print(i)

  print("printing 'ab' for the second time")
  for i in ab:
print(i)

  The combined list 'ab' won't get printed a second time. The same thing
  happens when the oslo.config generator wants to print the
  sample.config file. This means we should use either:

  ab = list(itertools.chain(a, b))

  or

  ab = a + b

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1565824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501116] Re: "Image Registry" window contains a text box which overlaps with buttons (Japanese and German)

2016-04-05 Thread Rob Cresswell
Horizon no longer contains the Sahara content; it lives in the sahara-
dashboard repo. Moving bug to Sahara tracker.

** Also affects: sahara
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501116

Title:
  "Image Registry" window contains a text box which overlaps with
  buttons (Japanese and German)

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Sahara:
  New

Bug description:
  Project > Data Processing > Image Registry > Register Image

  The "Register Image" window, "Image Registry tool" text box becomes
  larger when it contains translated text (only confirmed in German and
  Japanese, but could be affecting more languages). It overlaps with
  buttons and other text.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331537] Re: nova service-list shows nova-compute as down and is required to be restarted frequently in order to provision new vms

2016-04-05 Thread Daniel Berrange
** Changed in: nova
   Status: Won't Fix => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1331537

Title:
  nova service-list shows nova-compute as down and is required to be
  restarted frequently in order to provision new vms

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Nova compute services in OpenStack Havana go down frequently as listed
  by "nova service-list" and require restarting very frequently,
  multiple times every day. All the compute nodes have their NTP times in
  sync.

  When a node shows as down, it is not possible to use that compute node
  for launching new VMs and we quickly run out of compute resources. Hence
  our workaround is to restart the compute nodes on those servers
  hourly.

  In the nova-compute node I've found the following error and they did match 
with the "Updated_at" field from nova service-list.
  2014-06-07 00:21:15.690 511340 ERROR nova.servicegroup.drivers.db [-] model 
server went away
  2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db Traceback 
(most recent call last):
  2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db File 
"/usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py", l ine 92, 
in _report_state
  5804 2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db 
report_count = service.service_ref['report_count'] + 1
  5805 2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db 
TypeError: 'NoneType' object has no attribute '__getitem__'
  5806 2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db

  It looks like the ones that are shown as down haven't been able to update
  the database with the latest status, and they did match the Traceback seen
  above (2014-06-07 00:21:15.690) on at least two compute nodes that I have
  seen.

  +--------------+-------+--------+---------+-------+------------------------+-----------------+
  | Binary       | Host  | Zone   | Status  | State | Updated_at             | Disabled Reason |
  +--------------+-------+--------+---------+-------+------------------------+-----------------+
  | nova-compute | nova1 | blabla | enabled | up    | 2014-06-07T00:37:42.00 | None            |
  | nova-compute | nova2 | blabla | enabled | down  | 2014-06-07T00:21:05.00 | None            |
  +--------------+-------+--------+---------+-------+------------------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1331537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520570] Re: Unable to retrieve limits on unlimited quota

2016-04-05 Thread Rob Cresswell
Given that Kilo is out of support and this bug doesn't exist on master,
this is too low a priority to invest time in.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1520570

Title:
  Unable to retrieve limits on unlimited quota

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Release: Kilo

  I got some compute quota set to -1 (tested with keypairs and cores) or some 
network quota set to -1 (tested with floating ip).
  When I click the (new in Kilo) launch instance button, I get the error 
'Unable to retrieve limits' and I'm not able to launch an instance, because all 
flavors are disabled.

  Seems to be a reincarnation of
  https://bugs.launchpad.net/horizon/+bug/1098480, the 'old' launch
  instance button still works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1520570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555644] Re: VMware: Extending virtual disk failed with error: capacity

2016-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291203
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=589660af7c4ac3903f209c912e89795b8d62a122
Submitter: Jenkins
Branch:master

commit 589660af7c4ac3903f209c912e89795b8d62a122
Author: Dongcan Ye 
Date:   Thu Mar 10 22:09:24 2016 +0800

VMware: Always update image size for sparse image

In some situations, if the image cache folder already exists, we may not
update the image size. This will cause extending the root disk to fail when
using a sparse image and use_linked_clone.

This patch updates the image size for sparse images whether the image cache
folder exists or not.

Change-Id: I017194e7314458a493ccc942bef34209a902809e
Closes-Bug: #1555644


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1555644

Title:
  VMware: Extending virtual disk failed with error: capacity

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Scenario A:
  1. image disk type: sparse
  2. image size(2.3G)
  3. flavor1(root_disk: 5G)
  4. use_linked_clone

  Scenario B:
  1. image disk type: sparse
  2. image size(2.3G)
  3. flavor2(root_disk: 6G)
  4. use_linked_clone

  I boot an instance with a sparse image disk; the image size is 2.3G and the
  Nova flavor root disk is 5G, and everything goes well (Scenario A).

  Then I boot another instance with a new flavor whose root disk is 6G
(Scenario B), and it raises an error:
  2016-03-10 17:31:56.350 3211 ERROR nova.compute.manager 
[req-a3e93241-5f54-485a-a7f0-2e1e0ebad92d 4412e38ec9814b96a03e63097ec51f1a 
8f75187cd29f4715881f450646fc6e08 - - -] [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] Instance failed to spawn
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] Traceback (most recent call last):
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2461, in 
_build_resources
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] yield resources
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2333, in 
_build_and_run_instance
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] block_device_info=block_device_info)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/driver.py", line 480, in 
spawn
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] admin_password, network_info, 
block_device_info)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 636, in 
spawn
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] 
self._use_disk_image_as_linked_clone(vm_ref, vi)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 1747, in 
_use_disk_image_as_linked_clone
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] vi.dc_info, vi.ii, vi.instance, 
str(sized_disk_ds_loc))
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 224, in 
_extend_if_required
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] root_vmdk_path, dc_info.ref)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 202, in 
_extend_virtual_disk
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] self._delete_datastore_file(ds_path, 
dc_ref)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 
22ea4a58-2296-40fc-b69b-a511a3b6d925] six.reraise(self.type_, self.value, 
self.tb)
  2016-03-10 17:31:56.350 3211 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1547415] Re: eslint warnings should have a treshold

2016-04-05 Thread Itxaka Serrano
Not needed

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1547415

Title:
  eslint warnings should have a treshold

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  We should give eslint warnings a threshold to avoid adding more and
  more warnings to our JavaScript code.

  Right now we are at 465 warnings and the count does not seem to decrease
  over time. A limit should be put in place to keep the number of warnings
  from growing over time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1547415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561252] Re: Removing 'force_gateway_on_subnet' option

2016-04-05 Thread KATO Tomoyuki
** Changed in: openstack-manuals
   Status: In Progress => Fix Released

** Changed in: openstack-manuals
Milestone: None => mitaka

** Changed in: openstack-manuals
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561252

Title:
  Removing 'force_gateway_on_subnet' option

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/295843
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 7215168b119c11a973fbdff56c007f6eb157d257
  Author: Sreekumar S 
  Date:   Tue Mar 22 19:17:54 2016 +0530

  Removing 'force_gateway_on_subnet' option
  
  With this fix the 'force_gateway_on_subnet' configuration
  option is removed, and a gateway outside the subnet is
  always allowed. The gateway cannot be forced onto the
  subnet range.
  
  DocImpact: All references of 'force_gateway_on_subnet'
  configuration option and its description should be
  removed from the docs.
  
  Change-Id: I1a676f35828e46fcedf339235ef7be388341f91e
  Closes-Bug: #1548193

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563255] Re: Can't see Public Images in Angular Launch Instance

2016-04-05 Thread Rob Cresswell
** Changed in: horizon
   Status: New => Invalid

** Tags removed: angularjs mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1563255

Title:
  Can't see Public Images in Angular Launch Instance

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  A user in IRC reported being unable to see public images in the new
  launch instance workflow, while they show fine in the Python launch
  instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1563255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551907] Re: Add API extension for reporting IP availability usage statistics

2016-04-05 Thread Rob Cresswell
I've looked through the patches and comments, and can't see why this is
linked to Horizon. Feel free to adjust status if I've made a mistake.

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
 Assignee: Ankur (ankur-gupta-f) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551907

Title:
  Add API extension for reporting IP availability usage statistics

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in neutron:
  Fix Released
Status in openstack-api-site:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/212955
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2f741ca5f9545c388270ddab774e9e030b006d8a
  Author: Mike Dorman 
  Date:   Thu Aug 13 21:24:58 2015 -0600

  Add API extension for reporting IP availability usage statistics
  
  Implements an API extension for reporting availability of IP
  addresses on Neutron networks/subnets based on the blueprint
  proposed at https://review.openstack.org/#/c/180803/
  
  This provides an easy way for operators to count the number of
  used and total IP addresses on any or all networks and/or
  subnets.
  
  Co-Authored-By: David Bingham 
  Co-Authored-By: Craig Jellick 
  
  APIImpact
  DocImpact: As a new API, will need all new docs. See devref for details.
  
  Implements: blueprint network-ip-usage-api
  Closes-Bug: 1457986
  Change-Id: I81406054d46b2c0e0ffcd56e898e329f943ba46f
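
Once the extension is enabled, the data is exposed as read-only resources under
/v2.0/network-ip-availabilities. A hedged example of querying it over plain
HTTP (the endpoint and token below are placeholders; field names follow the
extension's documented response):

    import json
    import urllib2  # Python 2, matching the deployments of this era

    NEUTRON_URL = 'http://controller:9696'   # placeholder Neutron endpoint
    TOKEN = 'ADMIN_TOKEN'                    # placeholder keystone token

    req = urllib2.Request(NEUTRON_URL + '/v2.0/network-ip-availabilities',
                          headers={'X-Auth-Token': TOKEN})
    body = json.load(urllib2.urlopen(req))
    for net in body.get('network_ip_availabilities', []):
        print('%s: %s of %s IPs used' % (net['network_name'],
                                         net['used_ips'], net['total_ips']))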

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465885] Re: API reorg retained some framework/dashboard crossover

2016-04-05 Thread Rob Cresswell
Marked invalid, as the comment indicates it was fixed.

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
 Assignee: Richard Jones (r1chardj0n3s) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1465885

Title:
  API reorg retained some framework/dashboard crossover

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The recent patch landed for the API ngReorg
  (https://review.openstack.org/#/c/184543/) has links from the
  framework over to the dashboard in the framework karma.conf.js
  (https://review.openstack.org/#/c/184543/28/horizon/karma.conf.js,cm).

  Either the framework is a separate thing, or it doesn't depend on
  dashboard components.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1465885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468366] Re: (Operator-only) Logging API for security group rules

2016-04-05 Thread Nguyen Phuong An
** Description changed:

- [Existing problem]
- - Logging is currently a missing feature in security-groups, it is
-   necessary for operators (Cloud admins, developers etc) to
-   auditing easier.
- - Tenant also needs to make sure their security-groups works as
-   expected, and to assess what kinds of events/packets went
-   through their security-groups or were dropped.
+ Learning what happened on traffic flows is necessary for cloud
+ administrator to tackle a problem related to network.
  
- [Main purpose of this feature]
- * Enable to configure logs for security-group-rules.
+ Problem Description
+ ===
+ - When *operator* (including cloud administrator and developer) has an issue 
related to network (e.g network security issue). Gathering all events related 
to security groups is necessary for troubleshooting process.
  
- * In order to assess what kinds of events/packets went
-   through their security-groups or were dropped.
+ - When tenant or operator deploys a security groups for number of VMs.
+ They want to make sure security group rules work as expected and to
+ assess what kinds of packets went through their security-groups or were
+ dropped.
  
- [What is the enhancement?]
- - Proposes to create new generic logging API for security-group-rules
-   in order to make the trouble shooting process easier for operators
-   (or Cloud admins, developers etc)..
- - Introduce layout the logging api model for future API and model
-   extension for log driver types(rsyslog, ...).
+ Currently, we don't have a way to perform that. In other word, logging
+ is a missing feature in security groups.
  
- Specification: https://review.openstack.org/#/c/203509
+ Proposed Change
+ ===
+ - To improve the situation, we'd like to propose a logging API [1]_ to 
collect all events related to security group rules when they occurred.
+ 
+ - Only *operator* will be allowed to execute logging API.
+ 
+ [1] https://review.openstack.org/#/c/203509/

** Tags removed: rfe-approved
** Tags added: rfe

** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468366

Title:
  (Operator-only) Logging API for security group rules

Status in neutron:
  New

Bug description:
  Learning what happened to traffic flows is necessary for a cloud
  administrator to tackle network-related problems.

  Problem Description
  ===
  - When an *operator* (including cloud administrators and developers) has a
network-related issue (e.g. a network security issue), gathering all events
related to security groups is necessary for the troubleshooting process.

  - When a tenant or operator deploys security groups for a number of VMs,
  they want to make sure the security group rules work as expected and to
  assess what kinds of packets went through their security groups or
  were dropped.

  Currently, we don't have a way to do that. In other words, logging
  is a missing feature in security groups.

  Proposed Change
  ===
  - To improve the situation, we'd like to propose a logging API [1]_ to
collect all events related to security group rules when they occur.

  - Only an *operator* will be allowed to execute the logging API.

  [1] https://review.openstack.org/#/c/203509/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477292] Re: launch instance step scss references ng controller

2016-04-05 Thread Rob Cresswell
This is no longer a bug following the Launch Instance cleanup

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1477292

Title:
  launch instance step scss references ng controller

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  For example, security-groups.scss contains a selector:
  [ng-controller="LaunchInstanceSecurityGroupsController as ctrl"] {

  This is very fragile and will break styling if the step is refactored
  to become a directive, or if the controller name is changed, or if the
  angular template for that page is overridden.

  Instead, use a class in the HTML template such as
  class="security-groups-controller"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1477292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566191] [NEW] Allow multiple networks with FIP range to be associated with Tenant router

2016-04-05 Thread Irena Berezovsky
Public bug reported:

This requirement came out during the Manila-Neutron integration discussion, to
provide a solution for multi-tenant environments to work with a file share store.
The way to solve it is as follows:
A dedicated NAT-based network connection should be established between a
tenant's private network (where their VMs reside) and a data center local storage
network. Sticking to IP-based authorization, as used by Manila, the NAT-assigned
floating IPs in the storage network are used to check authorization in the
storage backend, as well as to deal with possible overlapping IP ranges in the
private networks of different tenants. A dedicated NAT, and not the public FIP,
is suggested since public FIPs are usually limited resources.
In order to orchestrate the above use case, it should be possible to associate
more than one 'FIP'-range subnet with the router (via a router interface) and
to enable NAT based on the destination subnet.
This behaviour was possible in Mitaka and worked for the MidoNet plugin, but
due to https://bugs.launchpad.net/neutron/+bug/1556884 it won't be possible any
more.

Related bug for security use case that can benefit from the proposed
behavior is described here
https://bugs.launchpad.net/neutron/+bug/1250105

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: rfe

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566191

Title:
  Allow multiple networks with FIP range to be associated with Tenant
  router

Status in neutron:
  Confirmed

Bug description:
  This requirement came out during the Manila-Neutron integration discussion, to
provide a solution for multi-tenant environments to work with a file share store.
  The way to solve it is as follows:
  A dedicated NAT-based network connection should be established between a
tenant's private network (where their VMs reside) and a data center local storage
network. Sticking to IP-based authorization, as used by Manila, the NAT-assigned
floating IPs in the storage network are used to check authorization in the
storage backend, as well as to deal with possible overlapping IP ranges in the
private networks of different tenants. A dedicated NAT, and not the public FIP,
is suggested since public FIPs are usually limited resources.
  In order to orchestrate the above use case, it should be possible to associate
more than one 'FIP'-range subnet with the router (via a router interface) and
to enable NAT based on the destination subnet.
  This behaviour was possible in Mitaka and worked for the MidoNet plugin, but
due to https://bugs.launchpad.net/neutron/+bug/1556884 it won't be possible any
more.

  Related bug for security use case that can benefit from the proposed
  behavior is described here
  https://bugs.launchpad.net/neutron/+bug/1250105

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566194] [NEW] Make sure resources for HA router exists before the router creation

2016-04-05 Thread venkata anil
Public bug reported:

Before an HA router is used by an agent,
1) the HA network should be created
2) a vr_id has to be allocated
3) the HA router should be able to create a sufficient number of ports on the
HA network

If the scheduler (from an RPC worker) processes the HA router (as the router
is available in the DB) before these resources are created, then the following
races (between the API and RPC workers) can happen:
1) a race for creating the HA network
2) the vr_id is not available for the agent, so it can't spawn the HA proxy
process
3) if creating the router ports in the API worker fails, the router is
deleted, so the RPC worker will hit races as the router is deleted while it is
binding the router's HA ports to the agent.


To avoid this, the L3 scheduler should skip this router (while syncing for the
agent) if the above resources are not yet created.

To facilitate this, a new status ("ALLOCATING") is proposed for HA routers in
https://review.openstack.org/#/c/257059/
In this patch, the router is created first with its status set to ALLOCATING,
and once all the above resources are created, its status is changed back to
ACTIVE. Proper checks are added (in the code) to skip using the router while
its status is ALLOCATING.
So with this patch we:
1) create a new router status,
2) carefully identify where the router can be accessed before its resources
are created, and
3) define how the code behaves (when accessing the router) as the status
transitions from ALLOCATING to ACTIVE.
Alternatively, if we are able to create the HA router's resources before the
HA router itself, we can avoid a new status and new checks while keeping the
same functionality as https://review.openstack.org/#/c/257059/.
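
As a rough illustration of the scheduler-side guard (hypothetical helper; the
real change lives in the review above), the sync path can simply filter out
routers that are still ALLOCATING:

    ALLOCATING = 'ALLOCATING'

    def routers_ready_for_agent(routers):
        # Return only routers whose HA resources are fully created.
        ready = []
        for router in routers:
            if router.get('status') == ALLOCATING:
                # HA network / vr_id / HA ports may not exist yet; the agent
                # will pick this router up on a later sync once it is ACTIVE.
                continue
            ready.append(router)
        return ready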

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566194

Title:
  Make sure resources for HA router exists before the router creation

Status in neutron:
  New

Bug description:
  Before an HA router is used by an agent,
  1) the HA network should be created
  2) a vr_id has to be allocated
  3) the HA router should be able to create a sufficient number of ports on the
HA network

  If the scheduler (from an RPC worker) processes the HA router (as the router
is available in the DB) before these resources are created, then the following
races (between the API and RPC workers) can happen:
  1) a race for creating the HA network
  2) the vr_id is not available for the agent, so it can't spawn the HA proxy
process
  3) if creating the router ports in the API worker fails, the router is
deleted, so the RPC worker will hit races as the router is deleted while it is
binding the router's HA ports to the agent.


  To avoid this, the L3 scheduler should skip this router (while syncing for
the agent) if the above resources are not yet created.

  To facilitate this, a new status ("ALLOCATING") is proposed for HA routers in
https://review.openstack.org/#/c/257059/
  In this patch, the router is created first with its status set to ALLOCATING,
and once all the above resources are created, its status is changed back to
ACTIVE. Proper checks are added (in the code) to skip using the router while
its status is ALLOCATING.
  So with this patch we:
  1) create a new router status,
  2) carefully identify where the router can be accessed before its resources
are created, and
  3) define how the code behaves (when accessing the router) as the status
transitions from ALLOCATING to ACTIVE.
  Alternatively, if we are able to create the HA router's resources before the
HA router itself, we can avoid a new status and new checks while keeping the
same functionality as https://review.openstack.org/#/c/257059/.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566188] [NEW] keystone client reports 500 error if database service is not running

2016-04-05 Thread Sheel Rana
Public bug reported:

When running a keystone command to authenticate from cinderclient,
keystone reports a 500 internal server error.

> /usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py(116)get_token()
-> return self.session.get_token(auth or self.auth)
(Pdb) 
InternalServerError: Internal...24d54)',)
> /usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py(116)get_token()
-> return self.session.get_token(auth or self.auth)
(Pdb) locals()
{'self': , 
'__exception__': (, 
InternalServerError(u'An unexpected error prevented the server from fulfilling 
your request. (HTTP 500) (Request-ID: 
req-e4fbb478-79f4-4529-b061-512b29324d54)',)), 'auth': None}

It should report a 400 error with proper details.


Steps to reproduce:

1. Stop mysql.
2. Run cinder list or any other command (it internally calls the keystone
client to authenticate the request).
3. Check the output.
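
For what it's worth, a minimal reproduction outside cinderclient, assuming
python-keystoneclient's session/auth-plugin API and placeholder credentials,
shows the same InternalServerError surfacing to the caller:

    from keystoneclient import exceptions as ks_exc
    from keystoneclient import session as ks_session
    from keystoneclient.auth.identity import v3

    auth = v3.Password(auth_url='http://controller:5000/v3',  # placeholder
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = ks_session.Session(auth=auth)

    try:
        print(sess.get_token())
    except ks_exc.InternalServerError as exc:
        # With MySQL stopped, keystone answers HTTP 500 instead of a clearer
        # client-facing error, which is what this bug is about.
        print('Identity service failed: %s' % exc)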

** Affects: keystone
 Importance: Undecided
 Assignee: Mark (rocky-asdf)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1566188

Title:
  keystone client reports 500 error if database service is not running

Status in OpenStack Identity (keystone):
  New

Bug description:
  When running a keystone command to authenticate from cinderclient,
  keystone reports a 500 internal server error.

  > 
/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py(116)get_token()
  -> return self.session.get_token(auth or self.auth)
  (Pdb) 
  InternalServerError: Internal...24d54)',)
  > 
/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py(116)get_token()
  -> return self.session.get_token(auth or self.auth)
  (Pdb) locals()
  {'self': , 
'__exception__': (, 
InternalServerError(u'An unexpected error prevented the server from fulfilling 
your request. (HTTP 500) (Request-ID: 
req-e4fbb478-79f4-4529-b061-512b29324d54)',)), 'auth': None}

  It should report a 400 error with proper details.

  
  Steps to reproduce:

  1. Stop mysql.
  2. Run cinder list or any other command (it internally calls the keystone
client to authenticate the request).
  3. Check the output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1566188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566178] [NEW] Lbaasv2 healthmonitor is not deleted

2016-04-05 Thread Alex Stafeyev
Public bug reported:

I was running neutron_lbaas tempest
neutron_lbaas/tests/tempest/v2/scenario/test_healthmonitor_basic.py  (
https://github.com/openstack/neutron-
lbaas/tree/master/neutron_lbaas/tests/tempest/v2/scenario )

I stopped the test once all objects were created, in order to manually check
the LB.
I created and attached a new health monitor to the pool (a PING HM).
After validation I continued the test run, which should delete all objects
related to the test. I saw that the newly created HM was not deleted.


logs: 
http://pastebin.com/KdZ6mY4X

More specific reproduction steps will be added asap
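
Until the tempest cleanup is fixed, leftover monitors can be spotted and
removed by hand; a hedged sketch using python-neutronclient's LBaaS v2
bindings (the auth values below are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    monitors = neutron.list_lbaas_healthmonitors()['healthmonitors']
    for hm in monitors:
        print('healthmonitor %s type=%s' % (hm['id'], hm['type']))
        # Uncomment to actually remove a leftover monitor:
        # neutron.delete_lbaas_healthmonitor(hm['id'])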

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566178

Title:
  Lbaasv2 healthmonitor is not deleted

Status in neutron:
  New

Bug description:
  I was running neutron_lbaas tempest
  neutron_lbaas/tests/tempest/v2/scenario/test_healthmonitor_basic.py  (
  https://github.com/openstack/neutron-
  lbaas/tree/master/neutron_lbaas/tests/tempest/v2/scenario )

  I stopped the test once all objects were created, in order to manually check
the LB.
  I created and attached a new health monitor to the pool (a PING HM).
  After validation I continued the test run, which should delete all objects
related to the test. I saw that the newly created HM was not deleted.

  
  logs: 
  http://pastebin.com/KdZ6mY4X

  More specific reproduction steps will be added asap

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566159] [NEW] ERROR nova.api.openstack.extensions HTTPInternalServerError: 500

2016-04-05 Thread michelvaillant
Public bug reported:

2016-04-05 10:54:42.673 989 INFO oslo_service.service [-] Child 1100 exited 
with status 0
2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1096 exited 
with status 0
2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1076 exited 
with status 0
2016-04-05 10:54:42.680 989 INFO oslo_service.service [-] Child 1090 exited 
with status 0
2016-04-05 10:54:42.681 989 INFO oslo_service.service [-] Child 1077 exited 
with status 0
2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1091 exited 
with status 0
2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1079 exited 
with status 0
2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1094 exited 
with status 0
2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1075 exited 
with status 0
2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1071 exited 
with status 0
2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1097 exited 
with status 0
2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1092 exited 
with status 0
2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1098 exited 
with status 0
2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1099 exited 
with status 0
2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1073 exited 
with status 0
2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1067 exited 
with status 0
2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1068 exited 
with status 0
2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1069 exited 
with status 0
2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1072 exited 
with status 0
2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1066 exited 
with status 0
2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1074 exited 
with status 0
2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1078 exited 
with status 0
2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1080 exited 
with status 0
2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1081 exited 
with status 0
2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1093 killed by 
signal 15
2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1095 killed by 
signal 15
2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1101 exited 
with status 0
2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1102 exited 
with status 0
2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1103 exited 
with status 0
2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1104 exited 
with status 0
2016-04-05 10:54:42.713 989 INFO oslo_service.service [-] Child 1105 exited 
with status 0
2016-04-05 10:54:46.299 1259 INFO oslo_service.periodic_task [-] Skipping 
periodic task _periodic_update_dns because its interval is negative
2016-04-05 10:54:46.529 1259 INFO nova.api.openstack [-] Loaded extensions: 
['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 
'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 
'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 
'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 
'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 
'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 
'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 
'os-evacuate', 'os-extended-availability-zone', 
'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 
'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 
'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 
'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 
'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 
'os-instance-actions', 'os-instance-usage-audit-
 log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 
'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 
'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 
'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 
'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 
'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 
'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 
'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 
'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 
'server-metadata', 'servers', 'versions']
2016-04-05 10:54:46.533 1259 WARNING oslo_config.cfg [-] Option "username" from 
group "keystone_authtoken" is deprecated. Use option "user-name" from group 
"keystone_authtoken".
2016-04-05 10:54:46.703 1259 INFO nova.api.openstack [-] Loaded extensions: 
['extensions', 'flavors',