[Yahoo-eng-team] [Bug 1525775] Re: When ovs-agent is restarted, flows created by components other than the ovs-agent are deleted.

2016-02-16 Thread YAMAMOTO Takashi
** Also affects: tap-as-a-service
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525775

Title:
  When ovs-agent is restarted, flows created by components other than the
  ovs-agent are deleted.

Status in neutron:
  In Progress
Status in tap-as-a-service:
  New

Bug description:
  When ovs-agent is restarted, the cleanup logic drops flow entries
  unless they are stamped by agent_uuid (recorded as a cookie).

  Reference:
  
https://git.openstack.org/cgit/openstack/neutron/commit/?id=73673beacd75a2d9f51f15b284f1b458d32e992e

  Not only old flows, but also flows created by components other than the
  ovs-agent (flows without a stamp) are deleted.

  Version: Liberty
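The stamping scheme the fix relies on can be sketched as follows. This is an illustrative model, not neutron's actual code: the agent records its agent_uuid as a flow cookie, and a safe cleanup deletes only flows carrying that stamp, leaving unstamped flows from other applications alone.

```python
# Sketch of cookie-stamped flow cleanup (illustrative; not neutron's code).
# Each flow dict carries the cookie it was installed with, or None.

def cleanup_stale_flows(flows, agent_cookie):
    """Return the flows to keep: remove only flows stamped with the old
    agent_cookie; never touch flows installed by other applications."""
    return [f for f in flows if f.get("cookie") != agent_cookie]

flows = [
    {"id": 1, "cookie": 0xABC},   # stamped by the (old) ovs-agent
    {"id": 2, "cookie": 0xDEF},   # installed by another application
    {"id": 3, "cookie": None},    # unstamped flow
]
remaining = cleanup_stale_flows(flows, 0xABC)
```

The bug is precisely that the restart cleanup behaved as if every flow without the new stamp were stale, so flows 2 and 3 above would also be dropped.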

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545761] Re: admin_token_auth 'deprecation' actually removes it from the pipelines

2016-02-16 Thread Steve Martinelli
We've reverted the change, marking this as released

** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1545761

Title:
  admin_token_auth 'deprecation' actually removes it from the pipelines

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The admin_token_auth filter was meant to be deprecated in this
  commit.[1] However, instead it was removed from the pipelines in
  etc/keystone-paste.ini. This makes it a breaking change for consumers of
  the puppet modules (and probably others) that rely on the
  admin_token_auth filter for initial endpoint setup.

  We need to leave the admin_token_auth filter in those pipelines until
  the deprecation period is over in the O release.

  [1]
  
https://github.com/openstack/keystone/commit/5286b4a297b5a94895a311a9e564aa87cb54dbfd
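
For reference, keeping the filter in place looks roughly like this in etc/keystone-paste.ini (an abridged, illustrative pipeline; the real file lists more filters):

```ini
[pipeline:public_api]
# admin_token_auth stays in the pipeline until the deprecation
# period ends in the O release (pipeline abridged for illustration)
pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth json_body public_service
```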

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1545761/+subscriptions



[Yahoo-eng-team] [Bug 1546396] [NEW] DBAPIError exception while trying to delete instances from kvm

2016-02-16 Thread Prashant Shetty
Public bug reported:


The system was running 2k cirrOS VMs on 100 KVM hypervisors, and the DB
exception below appeared while trying to delete instances via the nova API.


Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-379addbc-c4e5-43b4-bf37-f64436e13750)

stack@controller:/opt/stack/nova$ git log -1
commit 5aee67a80a30725a7d2b95533baf8bfb73476ef1
Merge: 2e28de7 0ecc870
Author: Jenkins 
Date:   Mon Feb 15 21:56:09 2016 +

Merge "Move Disk allocation ratio to ResourceTracker"
stack@controller:/opt/stack/nova$ 

Have attached nova-api logs to bug.

Logs:

2016-02-16 20:47:29.186 DEBUG nova.api.openstack.wsgi 
[req-eff95987-035d-48fe-8c3b-5b947167e72c admin admin] Calling method '>' from (pid=29444) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:699
2016-02-16 20:47:29.187 DEBUG nova.compute.api 
[req-eff95987-035d-48fe-8c3b-5b947167e72c admin admin] Searching by: 
{'deleted': False, 'project_id': u'3122784921764f0c8e2ca9feb5fc7424', u'name': 
u'|'} from (pid=29444) get_all /opt/stack/nova/nova/compute/api.py:2001
2016-02-16 20:47:29.225 ERROR oslo_db.sqlalchemy.exc_filters 
[req-eff95987-035d-48fe-8c3b-5b947167e72c admin admin] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (1139, u"Got error 'empty 
(sub)expression' from regexp") [SQL: u'SELECT anon_1.instances_created_at AS 
anon_1_instances_created_at, anon_1.instances_updated_at AS 
anon_1_instances_updated_at, anon_1.instances_deleted_at AS anon_1_
instances_deleted_at, anon_1.instances_deleted AS anon_1_instances_deleted, 
anon_1.instances_id AS anon_1_instances_id, anon_1.instances_user_id AS 
anon_1_instances_user_id, anon_1.instances_project_id AS
 anon_1_instances_project_id, anon_1.instances_image_ref AS 
anon_1_instances_image_ref, anon_1.instances_kernel_id AS 
anon_1_instances_kernel_id, anon_1.instances_ramdisk_id AS 
anon_1_instances_ramdisk_id
, anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS anon_1_instances_launch_index, 
anon_1.instances_key_name AS anon_1_instances_key_name, 
anon_1.instances_key_data 
AS anon_1_instances_key_data, anon_1.instances_power_state AS 
anon_1_instances_power_state, anon_1.instances_vm_state AS 
anon_1_instances_vm_state, anon_1.instances_task_state AS 
anon_1_instances_task_sta
te, anon_1.instances_memory_mb AS anon_1_instances_memory_mb, 
anon_1.instances_vcpus AS anon_1_instances_vcpus, anon_1.instances_root_gb AS 
anon_1_instances_root_gb, anon_1.instances_ephemeral_gb AS anon_
1_instances_ephemeral_gb, anon_1.instances_ephemeral_key_uuid AS 
anon_1_instances_ephemeral_key_uuid, anon_1.instances_host AS 
anon_1_instances_host, anon_1.instances_node AS anon_1_instances_node, anon_1
.instances_instance_type_id AS anon_1_instances_instance_type_id, 
anon_1.instances_user_data AS anon_1_instances_user_data, 
anon_1.instances_reservation_id AS anon_1_instances_reservation_id, anon_1.insta
nces_launched_at AS anon_1_instances_launched_at, 
anon_1.instances_terminated_at AS anon_1_instances_terminated_at, 
anon_1.instances_availability_zone AS anon_1_instances_availability_zone, 
anon_1.instanc
es_display_name AS anon_1_instances_display_name, 
anon_1.instances_display_description AS anon_1_instances_display_description, 
anon_1.instances_launched_on AS anon_1_instances_launched_on, anon_1.instanc
es_locked AS anon_1_instances_locked, anon_1.instances_locked_by AS 
anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS anon_1_instances_arch
itecture, anon_1.instances_vm_mode AS anon_1_instances_vm_mode, 
anon_1.instances_uuid AS anon_1_instances_uuid, 
anon_1.instances_root_device_name AS anon_1_instances_root_device_name, 
anon_1.instances_def
ault_ephemeral_device AS anon_1_instances_default_ephemeral_device, 
anon_1.instances_default_swap_device AS anon_1_instances_default_swap_device, 
anon_1.instances_config_drive AS anon_1_instances_config_d
rive, anon_1.instances_access_ip_v4 AS anon_1_instances_access_ip_v4, 
anon_1.instances_access_ip_v6 AS anon_1_instances_access_ip_v6, 
anon_1.instances_auto_disk_config AS anon_1_instances_auto_disk_config
, anon_1.instances_progress AS anon_1_instances_progress, 
anon_1.instances_shutdown_terminate AS anon_1_instances_shutdown_terminate, 
anon_1.instances_disable_terminate AS anon_1_instances_disable_termina
te, anon_1.instances_cell_name AS anon_1_instances_cell_name, 
anon_1.instances_internal_id AS anon_1_instances_internal_id, 
anon_1.instances_cleaned AS anon_1_instances_cleaned, instance_info_caches_1.cre
ated_at AS instance_info_caches_1_created_at, instance_info_caches_1.updated_at 
AS instance_info_caches_1_updated_at, instance_info_caches_1.deleted_at AS 
instance_info_caches_1_deleted_at, instance_info_
caches_1.deleted AS instance_info_caches_1_deleted, instance_info_caches_1.id 
AS 
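
The failure above comes from the name filter u'|' being passed into MySQL's REGEXP operator, where a bare '|' is an empty (sub)expression. A defensive sketch (illustrative, not the nova fix) is to escape user-supplied search text so it is matched literally rather than as a regex:

```python
import re

def safe_name_filter(user_input):
    """Treat user-supplied search text as a literal, not a regex.

    The log above shows the name filter u'|' going straight into a
    MySQL REGEXP, which rejects empty (sub)expressions. Escaping the
    input sidesteps that class of failure. (Sketch only.)
    """
    return re.escape(user_input)

escaped = safe_name_filter("|")   # the pipe is now a literal match
```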

[Yahoo-eng-team] [Bug 1546383] [NEW] Branding: Horizon close buttons use 'x' instead of icon

2016-02-16 Thread Diana Whitten
Public bug reported:

To increase brandability, Horizon close buttons should use an icon font.

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

** Changed in: horizon
   Status: New => Confirmed

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
Milestone: None => mitaka-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546383

Title:
  Branding: Horizon close buttons use 'x' instead of icon

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  To increase brandability, Horizon close buttons should use an icon
  font.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546383/+subscriptions



[Yahoo-eng-team] [Bug 1468366] Re: (Operator-only) Logging API for security group rules

2016-02-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468366

Title:
  (Operator-only) Logging API for security group rules

Status in neutron:
  Expired

Bug description:
  [Existing problem]
  - Logging is currently a missing feature in security-groups; it is
    needed by operators (cloud admins, developers, etc.) to make
    auditing easier.
  - Tenants also need to make sure their security-groups work as
    expected, and to assess what kinds of events/packets went
    through their security-groups or were dropped.

  [Main purpose of this feature]
  * Enable to configure logs for security-group-rules.

  * In order to assess what kinds of events/packets went
    through their security-groups or were dropped.

  [What is the enhancement?]
  - Proposes creating a new generic logging API for security-group-rules
    in order to make the troubleshooting process easier for operators
    (cloud admins, developers, etc.).
  - Lays out the logging API model for future API and model
    extensions for log driver types (rsyslog, ...).

  Specification: https://review.openstack.org/#/c/203509

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468366/+subscriptions



[Yahoo-eng-team] [Bug 1545729] Re: Use 4 byte unicode for entity names in mysql

2016-02-16 Thread Sheel Rana
The BP implementation will require considerable time because of
migration-related changes. The spec for it is already under discussion;
please refer below:

https://review.openstack.org/#/c/280371/

So, as a short-term fix, I created this bug to track returning a better
message to the user when 4-byte unicode is used in entity names.

I will update the issue details and reopen it for the short-term fix.

Thanks!!

** Changed in: nova
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545729

Title:
  Use 4 byte unicode for entity names in mysql

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The MySQL database does not support 4-byte unicode in its utf8
  character set.

  If any operation is executed with a 4-byte unicode name, it returns a 500 
error without any proper error message to the user.
  This is confusing, as the user gets no information about why the 
issue occurred.

  Please refer below for details:

  nova secgroup-create sheel 
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-a4eef1d6-11fa-4188-b116-ffdf728e04f4)

  
  The bug can be reproduced simply by using 4-byte unicode characters in the 
name of a security group.

  This is 100% reproducible.
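
A quick way to detect the offending input on the client side (a validation sketch, not nova's fix): characters outside the Basic Multilingual Plane need 4 bytes in UTF-8, which MySQL's legacy utf8 (utf8mb3) charset cannot store.

```python
def contains_4byte_utf8(name):
    """True if any character in `name` needs 4 bytes in UTF-8, i.e. lies
    outside the Basic Multilingual Plane (code point above U+FFFF).
    MySQL's legacy 'utf8' (utf8mb3) charset cannot store such characters."""
    return any(ord(ch) > 0xFFFF for ch in name)

contains_4byte_utf8("sheel\U0001F600")   # emoji needs 4 bytes -> True
contains_4byte_utf8("sheel")             # plain ASCII -> False
```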

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1545729/+subscriptions



[Yahoo-eng-team] [Bug 1532004] Re: gateway update restriction should apply only to router interfaces

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/264996
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=30dab936e602cba7e35806e8a558b53eb8936f48
Submitter: Jenkins
Branch: master

commit 30dab936e602cba7e35806e8a558b53eb8936f48
Author: Kevin Benton 
Date:   Thu Jan 7 14:28:24 2016 -0800

Only restrict gateway_ip change for router ports

The subnet update code was restricting gateway_ip changes if the
existing gateway IP belonged to a Neutron port. This was implemented
because changing the gateway will break all floating IP addresses if
the gateway is a Neutron router. However, this restriction makes it
possible to get a subnet stuck to an IP address that belongs to another
port (e.g. a compute port) so the user has to either delete the port
or change its IP, both of which are disruptive.

This patch just changes the restriction so it only prevents gateway
IP changes if the current gateway IP belongs to a router. This
preserves the intent of the original change while allowing the subnet
to be updated off of IP addresses that belong to normal ports.

Change-Id: I4691505ef2fad6019e0d2fd80ff1b9e157662a29
Closes-bug: #1532004
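
The relaxed restriction can be sketched as follows (illustrative; the `device_owner` values and helper are simplified stand-ins for the neutron logic):

```python
# Ports with these owners belong to a router (values mirror neutron's
# conventions but are used here only for illustration).
ROUTER_PORT_OWNERS = ("network:router_interface", "network:router_gateway")

def gateway_change_blocked(ports_using_gateway_ip):
    """Sketch of the relaxed check: block a subnet gateway_ip update
    only when the current gateway IP is held by a router port, not by
    an ordinary port such as a compute port."""
    return any(p.get("device_owner") in ROUTER_PORT_OWNERS
               for p in ports_using_gateway_ip)
```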


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1532004

Title:
  gateway update restriction should apply only to router interfaces

Status in neutron:
  Fix Released

Bug description:
  The restriction that prevents a subnet's gateway IP from being updated
  if it points to an IP in use by a port can get the subnet in a stuck
  state without messing with a port if the wrong gateway_ip is set.

  Take the following example:

  administrator@13:35:25:~/code/neutron$ neutron subnet-create bojangles 
10.0.0.0/24 --name=bojangles --allocation-pool start=10.0.0.3,end=10.0.0.250
  Created a new subnet:
  +-------------------+--------------------------------------------+
  | Field             | Value                                      |
  +-------------------+--------------------------------------------+
  | allocation_pools  | {"start": "10.0.0.3", "end": "10.0.0.250"} |
  | cidr              | 10.0.0.0/24                                |
  | dns_nameservers   |                                            |
  | enable_dhcp       | True                                       |
  | gateway_ip        | 10.0.0.1                                   |
  | host_routes       |                                            |
  | id                | 21c9a4b3-a1d0-402f-8e1e-b463236cc612       |
  | ip_version        | 4                                          |
  | ipv6_address_mode |                                            |
  | ipv6_ra_mode      |                                            |
  | name              | bojangles                                  |
  | network_id        | 3c6ca69c-7662-441e-abc3-7a104aa603a1       |
  | subnetpool_id     |                                            |
  | tenant_id         | de56db175c1d48b0bbe72f09a24a3b66           |
  +-------------------+--------------------------------------------+

  administrator@13:35:58:~/code/neutron$ neutron port-create bojangles 
--fixed-ip ip_address=10.0.0.2
  Created a new port:
  
  +-----------------------+--------------------------------+
  | Field                 | Value                          |
  +-----------------------+--------------------------------+
  | admin_state_up        | True                           |
  | allowed_address_pairs |                                |
  | binding:host_id       |                                |
  | binding:profile       | {}                             |
  | binding:vif_details   | {}                             |
  | binding:vif_type      | unbound                        |
  | binding:vnic_type     | normal                         |
  | device_id             |                                |
  | device_owner          |                                |
  | dns_assignment        | {"hostname": "host-10-0-0-2", 

[Yahoo-eng-team] [Bug 1544548] Re: DHCP: no indication in API that DHCP service is not running

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279081
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=80cfec66259e05e6914abf0ee443b7c280de21a9
Submitter: Jenkins
Branch: master

commit 80cfec66259e05e6914abf0ee443b7c280de21a9
Author: Gary Kotton 
Date:   Thu Feb 11 05:28:45 2016 -0800

DHCP: release DHCP port if not enough memory

When the DHCP agent fails to create a namespace for the DHCP
service we will release the DHCP port instead of failing silently.

This will at least give the user an indication that there is no DHCP
service. No DHCP port will exist.

Change-Id: I59af745d3991e6deb424ecd9b916b03f146c246a
Closes-bug: #1544548
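
The shape of the fix is the classic release-on-failure pattern (a sketch with hypothetical callables, not the agent's actual code):

```python
def enable_dhcp(create_port, plug_namespace, release_port):
    """Create the DHCP port; if plugging its namespace fails (e.g. not
    enough memory), release the port instead of failing silently, so
    the API reflects that no DHCP service exists."""
    port = create_port()
    try:
        plug_namespace(port)
    except Exception:
        release_port(port)   # give the user an indication via the API
        raise
    return port
```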


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544548

Title:
  DHCP: no indication in API that DHCP service is not running

Status in neutron:
  Fix Released

Bug description:
  Even if DHCP namespace creation fails at the network node due to some
  reason, neutron API still returns success to the user.

  2016-01-18 02:51:12.661 DEBUG neutron.agent.dhcp.agent
  [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb
  ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0
  bbaa10b4eb2749b3a09b375682b6cb6e] Calling driver for network:
  351d9017-6e92-4310-ae6d-cf1d0bce0b14 action: enable
  from (pid=26547) call_driver
  /opt/stack/neutron/neutron/agent/dhcp/agent.py:104

  2016-01-18 02:51:12.662 DEBUG neutron.agent.linux.dhcp
  [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb
  ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0
  bbaa10b4eb2749b3a09b375682b6cb6e] DHCP port
  dhcpa382383f-19b6-5ca7-94ec-5ec1e62dc705-351d9017-6e92-4310-ae6d-cf1d0bce0b14
  on network 351d9017-6e92-4310-ae6d-cf1d0bce0b14 does not yet exist. Checking
  for a reserved port. from (pid=26547) _setup_reserved_dhcp_port
  /opt/stack/neutron/neutron/agent/linux/dhcp.py:1098
  2016-01-18 02:51:12.663 DEBUG neutron.agent.linux.dhcp
  [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb
  ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0
  bbaa10b4eb2749b3a09b375682b6cb6e] DHCP port
  dhcpa382383f-19b6-5ca7-94ec-5ec1e62dc705-351d9017-6e92-4310-ae6d-cf1d0bce0b14
  on network 351d9017-6e92-4310-ae6d-cf1d0bce0b14 does not yet exist. Creating
  new one. from (pid=26547) _setup_new_dhcp_port
  /opt/stack/neutron/neutron/agent/linux/dhcp.py:1119

  2016-01-18 02:51:13.000 ERROR neutron.agent.dhcp.agent
  [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb
  ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0
  bbaa10b4eb2749b3a09b375682b6cb6e] Unable to enable
  dhcp for 351d9017-6e92-4310-ae6d-cf1d0bce0b14.
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 113, in call_driver
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 206, in enable
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     interface_name = self.device_manager.setup(self.network)
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1206, in setup
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     namespace=network.namespace)
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 243, in plug
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     bridge, namespace, prefix)
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 311, in plug_new
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     self.check_bridge_exists(bridge)
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 220, in check_bridge_exists
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     if not ip_lib.device_exists(bridge):
  2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File 

[Yahoo-eng-team] [Bug 1381961] Re: Keystone API GET 5000/v3 returns wrong endpoint URL in response body

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/226464
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=40c3942c12d1dd2c826d836987616838a73a64a1
Submitter: Jenkins
Branch: master

commit 40c3942c12d1dd2c826d836987616838a73a64a1
Author: Julien Danjou 
Date:   Mon Sep 21 17:27:07 2015 +0200

wsgi: fix base_url finding

The current wsgi.Application.base_url() function does not work correctly
if Keystone runs on something like "http://1.2.3.4/identity", which is now
a default in devstack.

This patch fixes that by using wsgiref.util to parse environment
variable set in WSGI mode to find the real base url and returns the
correct URL. The following environment variables will be used to
produce the effective base url:

  HTTP_HOST
  SERVER_NAME
  SERVER_PORT
  SCRIPT_NAME

Closes-Bug: #1381961
Change-Id: I111c206a8a751ed117c6869f55f8236b29ab88a2
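
The stdlib already knows how to combine those environment variables; a minimal demonstration with wsgiref (the environ values mirror the devstack example above):

```python
from wsgiref.util import application_uri

# Environ as a WSGI server would set it for keystone mounted under the
# /identity script name.
environ = {
    "wsgi.url_scheme": "http",
    "HTTP_HOST": "1.2.3.4",     # takes precedence over SERVER_NAME/PORT
    "SERVER_NAME": "unused",
    "SERVER_PORT": "80",
    "SCRIPT_NAME": "/identity",
}
base_url = application_uri(environ)  # -> "http://1.2.3.4/identity"
```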


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1381961

Title:
  Keystone API GET 5000/v3 returns wrong endpoint URL in response body

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When I was invoking a GET request to  public endpoint of Keystone, I found 
the admin endpoint URL in response body, I assume it should be the public 
endpoint URL:
  GET https://192.168.101.10:5000/v3

  {
"version": {
  "status": "stable",
  "updated": "2013-03-06T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v3+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v3+xml"
}
  ],
  "id": "v3.0",
  "links": [
{
  "href": "https://172.20.14.10:35357/v3/;,
  "rel": "self"
}
  ]
}
  }

  ===
  Btw, I can get the right URL for public endpoint in the response body of the 
versionless API call:
  GET https://192.168.101.10:5000

  {
"versions": {
  "values": [
{
  "status": "stable",
  "updated": "2013-03-06T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v3+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v3+xml"
}
  ],
  "id": "v3.0",
  "links": [
{
  "href": "https://192.168.101.10:5000/v3/;,
  "rel": "self"
}
  ]
},
{
  "status": "stable",
  "updated": "2014-04-17T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v2.0+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v2.0+xml"
}
  ],
  "id": "v2.0",
  "links": [
{
  "href": "https://192.168.101.10:5000/v2.0/;,
  "rel": "self"
},
{
  "href": 
"http://docs.openstack.org/api/openstack-identity-service/2.0/content/;,
  "type": "text/html",
  "rel": "describedby"
},
{
  "href": 
"http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf;,
  "type": "application/pdf",
  "rel": "describedby"
}
  ]
}
  ]
}
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1381961/+subscriptions



[Yahoo-eng-team] [Bug 1546218] Re: Ignore local folder

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280840
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=142b3027455f0ec5b2cabd0c7d6155fae1f6a16d
Submitter: Jenkins
Branch: master

commit 142b3027455f0ec5b2cabd0c7d6155fae1f6a16d
Author: Thai Tran 
Date:   Tue Feb 16 09:28:21 2016 -0800

Ignore local folder

We currently ignore example files in dashboard/local. We should just ignore
the entire folder and exempt those files. This would allow developers and
operators to add content to the folder without accidentally committing it
upstream.

Change-Id: I2fe40af96ee62d85dd01e279485ce7f392a50a03
Closes-Bug: #1546218


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546218

Title:
  Ignore local folder

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We currently ignore example files in dashboard/local. We should just
  ignore the entire folder and exempt those files. This would allow
  developers and operators to add content to the folder without
  accidentally committing it upstream.

  For example, many developers would like to enable angular panels that
  are disabled by default, and sometimes they forget to disable them
  again before committing. They can instead just copy and modify the
  enabled files in their own local folder. Furthermore, this allows
  operators to add their own enabled files without accidentally
  committing them.
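
The described approach maps to a .gitignore pattern along these lines (paths are illustrative of Horizon's layout, not the exact committed rules):

```gitignore
# Ignore everything under the local folder...
openstack_dashboard/local/*
# ...but keep the shipped example files tracked.
!openstack_dashboard/local/*.example
```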

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546218/+subscriptions



[Yahoo-eng-team] [Bug 1528510] Re: Pecan: startup assumes controllers specify plugins

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/260439
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5273c6cc86c403f6f787c5d5d357a28462c9b868
Submitter: Jenkins
Branch: master

commit 5273c6cc86c403f6f787c5d5d357a28462c9b868
Author: Salvatore Orlando 
Date:   Mon Dec 21 16:00:27 2015 -0800

Pecan: Always associate plugins with resource

with this patch, the logic for associating a resource with a plugin
is now executed for every resource. This will avoid requiring the
method get_pecan_controllers in extension descriptors to deal with
this. This item of work is required for a speedy "Pecanization" of
existing extensions, in particular the 'router' extension.

The routine for finding a plugin for a resource has been modified to
allow special treatment of the 'quotas' extension. This extension
indeed is declared as supported by plugins (usually the core one),
but the plugin does not implement relevant methods as quota management
is performed by a distinct driver.

Further, NeutronPecanController's plugin attribute has been
converted into a property which loads the value from NeutronManager
if not yet defined. Indeed in some cases the plugin might be
instantiated after the controller instance is created.

Closes-Bug: #1528510

Change-Id: Ibbfec8fd53855641bd21dec8ef824d5741dfebea


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528510

Title:
  Pecan: startup assumes controllers specify plugins

Status in neutron:
  Fix Released

Bug description:
  At startup, the Pecan API server associates a plugin (core or service) to 
every Neutron resource.
  With this association, every Pecan controller gets a plugin where calls 
should be dispatched.

  However, this association is not performed for 'pecanized extensions'
  [1]. A 'pecanized' extension is a Neutron API extension which is able
  to return Pecan controllers. The plugin association is instead
  currently performed only for those extensions for which a controller
  is generated on-the-fly using the generic CollectionController and
  ItemController.

  This approach has the drawback that the API extension descriptor should have 
the logic to identify a plugin for the API itself.
  While this is not a bad idea, it requires extension descriptors to identify 
a plugin, thus duplicating, in a way, what's already done by the extension 
manager.

  For this reason it is advisable to perform plugin association for all
  extensions during Pecan startup, at least until the Pecan framework no
  longer relies on the home-grown extension manager.


  [1]
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/pecan_wsgi/startup.py#n86
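
The lazy plugin lookup that the commit message describes for NeutronPecanController can be sketched like this (names are illustrative, not neutron's exact code):

```python
class LazyPluginController:
    """Resolve the plugin on first access rather than at construction
    time, since the plugin may be instantiated after the controller."""

    def __init__(self, resource, plugin_lookup, plugin=None):
        self.resource = resource
        self._lookup = plugin_lookup   # e.g. queries the NeutronManager
        self._plugin = plugin

    @property
    def plugin(self):
        if self._plugin is None:
            self._plugin = self._lookup(self.resource)
        return self._plugin
```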

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528510/+subscriptions



[Yahoo-eng-team] [Bug 1499664] Re: Add pagination support to the volume snapshots and backups tabs in the Dashboard

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/251427
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f90b374ad748bc4f3648a7f187ced0d34d34594b
Submitter: Jenkins
Branch: master

commit f90b374ad748bc4f3648a7f187ced0d34d34594b
Author: Timur Sufiev 
Date:   Mon Nov 30 18:07:39 2015 +0300

Add pagination to volume snapshots and backups pages

Do it for both Project (both Snapshots and Backups tabs) and Admin
(only Snapshots tab) dashboards.

To test: set 'Items Per Page' in the UI Settings page to a low number.

Change-Id: I9b16cf31c726055da0afad347e033f3918af3049
Closes-Bug: #1499664


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1499664

Title:
  Add pagination support to the volume snapshots and backups tabs in the
  Dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Both snapshots and backups API endpoints now support pagination in
  Cinder API (https://review.openstack.org/#/c/195071/ and
https://review.openstack.org/#/c/204493/); it's time to leverage this
  support in Horizon.

  This is similar to bug 1316793 (except that it does the same for
  Volumes tab).
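
Marker-based pagination of the kind Cinder exposes can be consumed with a loop like this (a generic sketch, not Horizon's code; `fetch_page` stands in for the API call):

```python
def iterate_pages(fetch_page, limit):
    """fetch_page(marker, limit) returns up to `limit` items; a page
    shorter than `limit` signals the end. Yields every item across
    all pages, using the last item of each page as the next marker."""
    marker = None
    while True:
        page = fetch_page(marker, limit)
        for item in page:
            yield item
        if len(page) < limit:
            return
        marker = page[-1]   # last item becomes the next marker
```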

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1499664/+subscriptions



[Yahoo-eng-team] [Bug 1545370] Re: pycryptodome breaks nova/barbican/glance/kite

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280008
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=b5ffb569e0687b0016ea962348d8454c1517dde4
Submitter: Jenkins
Branch: master

commit b5ffb569e0687b0016ea962348d8454c1517dde4
Author: Davanum Srinivas 
Date:   Sun Feb 14 12:44:39 2016 -0500

Tolerate installation of pycryptodome

Newer versions of pysaml2 use pycryptodome, so if this library
accidentally gets installed, Glance breaks.

paramiko folks are working on this:
https://github.com/paramiko/paramiko/issues/637

In the meanwhile, we should tolerate if either pycrypto
or pycryptodome is installed.

Closes-Bug: #1545370
Change-Id: I8969382b380aa843a0826eded4b694251dd27922
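The "tolerate either library" idea can be sketched as an import fallback.
Note this is a generic illustration, not the actual Glance fix: pycrypto
and pycryptodome both install as ``Crypto``, so the real patch has to
probe behavior rather than module names, and the names below are
deliberately illustrative.

```python
import importlib

def load_first_available(module_names):
    """Return the first importable module from ``module_names``.

    Generic tolerance pattern: try each candidate in order and fall
    back to the next on ImportError.
    """
    for name in module_names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (module_names,))

# Falls through the missing (made-up) name to a stdlib module that exists.
mod = load_first_available(["definitely_not_installed_xyz", "hashlib"])
```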


** Changed in: glance
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545370

Title:
  pycryptodome breaks nova/barbican/glance/kite

Status in Barbican:
  New
Status in Glance:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  pysaml2===4.0.3 drags in pycryptodome===3.4, which breaks Nova in
  both unit tests and grenade.

  nova.tests.unit.test_crypto.KeyPairTest.test_generate_key_pair_1024_bits
  

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/test_crypto.py", line 352, in 
test_generate_key_pair_1024_bits
  (private_key, public_key, fingerprint) = 
crypto.generate_key_pair(bits)
File "nova/crypto.py", line 165, in generate_key_pair
  key = paramiko.RSAKey.generate(bits)
File 
"/Users/dims/openstack/openstack/nova/.tox/py27/lib/python2.7/site-packages/paramiko/rsakey.py",
 line 146, in generate
  rsa = RSA.generate(bits, os.urandom, progress_func)
File 
"/Users/dims/openstack/openstack/nova/.tox/py27/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py",
 line 436, in generate
  if e % 2 == 0 or e < 3:
  TypeError: unsupported operand type(s) for %: 'NoneType' and 'int'

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1545370/+subscriptions



[Yahoo-eng-team] [Bug 1545620] Re: Remove NEC plugin tables from neutron db migration

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280116
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3d244c3cba68fb1f0147b723049e2ab80496fd98
Submitter: Jenkins
Branch: master

commit 3d244c3cba68fb1f0147b723049e2ab80496fd98
Author: Akihiro Motoki 
Date:   Mon Feb 15 18:56:59 2016 +0900

Remove NEC plugin tables

The initial NEC plugin was marked as deprecated in Liberty
and has been replaced by a new implementation in Mitaka.
This commit drops NEC plugin tables.

Closes-Bug: #1545620
Change-Id: Ifa8dad589ff6c47ba76452e8996a910f8732739b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1545620

Title:
  Remove NEC plugin tables from neutron db migration

Status in neutron:
  Fix Released

Bug description:
  The initial NEC plugin was marked as deprecated in Liberty and has been 
replaced by a new implementation in Mitaka.
  There is no need for the in-tree neutron DB migration to populate the 
existing tables related to the NEC plugin.
  It is time to remove the NEC plugin tables from the db migration script in 
the main neutron tree.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1545620/+subscriptions



[Yahoo-eng-team] [Bug 1545210] Re: overlapping IPv6 cidrs leads to a masking failure in auto_allocate extension

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279837
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3785dc31a465e0ac9d5de23339daddfbc346ef7c
Submitter: Jenkins
Branch: master

commit 3785dc31a465e0ac9d5de23339daddfbc346ef7c
Author: Armando Migliaccio 
Date:   Fri Feb 12 16:46:13 2016 -0800

Address masking issue during auto-allocation failure

If an interface could not successfully be plugged into a router
during the provisioning of external connectivity, a failure
is triggered and a subsequent cleanup is performed.

This patch ensures that the cleanup accounts only for the
subnets that were uplinked successfully, thus fixing both the
incomplete cleanup and the masking of the user error that
resulted otherwise.

Closes-bug: #1545210

Change-Id: I061a1407abf6ed1a39056b1f72fd039919e2fc79
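The "account only for the subnets that were uplinked successfully" idea
reduces to a standard record-then-rollback pattern. The sketch below uses
hypothetical names, not neutron's actual code: each subnet is recorded
only after its uplink succeeds, so a mid-sequence failure rolls back
exactly the work that was done and re-raises the original error instead
of masking it.

```python
def plug_subnets(router, subnets, plug, unplug):
    plugged = []
    try:
        for subnet in subnets:
            plug(router, subnet)
            plugged.append(subnet)        # record only successful uplinks
    except Exception:
        for subnet in reversed(plugged):
            unplug(router, subnet)        # clean up what actually succeeded
        raise                             # surface the real error, unmasked
    return plugged


# Exercise the pattern with a plug that fails on the third subnet.
events = []

def plug(router, subnet):
    if subnet == "s3":
        raise RuntimeError("uplink failed")
    events.append(("plug", subnet))

def unplug(router, subnet):
    events.append(("unplug", subnet))

try:
    plug_subnets("r1", ["s1", "s2", "s3"], plug, unplug)
except RuntimeError:
    pass
```

  After the failed run, only the two subnets that were actually plugged
  get unplugged, in reverse order.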


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1545210

Title:
  overlapping IPv6 cidrs leads to a masking failure in auto_allocate
  extension

Status in neutron:
  Fix Released

Bug description:
  Preconditions:

  1) one default, external network with cidr: 2001:db8::/64
  2) one default, subnet pool with cidr: 2001:db8::/64

  Steps to repro:

  1) neutron auto-allocated-topology-show

  Actual output:

  Router b1ecf10a-49f9-439e-9c8c-c8f22665b950 has no interface on subnet
  7c4f7247-87b7-4ca9-a157-e508c52ee88b

  Expected output:

  Deployment error: Unable to provide external connectivity.

  The log should be revealing as to why this configuration error led to
  this failure mode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1545210/+subscriptions



[Yahoo-eng-team] [Bug 1493350] Re: Attach Volume fail : Cinder - ISCSI device symlinks under /dev/disk/by-path in hex.

2016-02-16 Thread Walt Boring
** Changed in: nova
   Status: Opinion => Fix Released

** Changed in: cinder
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493350

Title:
  Attach Volume fail : Cinder - ISCSI device symlinks under /dev/disk
  /by-path in hex.

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  As part of a POC for an enterprise storage backend we have implemented an 
ISCSIDriver.
  Volume operations work as expected.

  We are facing an issue with attaching a volume. Request your help.

  1. If the volume lun id in the backend storage is less than 255, volume 
attach works fine. The symlinks in /dev/disk/by-path are as below:
  lrwxrwxrwx 1 root root   9 Jun 18 15:18 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xyz:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-169
 -> ../../sde
  lrwxrwxrwx 1 root root   9 Aug 26 14:47 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xyz:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-172
 -> ../../sdi

  2. If the volume lun id is more than 255, the lun id in /dev/disk/by-path is 
a hexadecimal number and hence volume attach fails with the message "Volume 
path not found". The symlinks in /dev/disk/by-path are as below [hexadecimal 
lun id according to the SCSI standard (REPORT LUNS)]:
  lrwxrwxrwx 1 root root   9 Aug 26 14:47 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xxx:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-0x016b
 -> ../../sdh
  lrwxrwxrwx 1 root root   9 Jun 18 15:18 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xxx:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-0x0200
 -> ../../sdc

  Please provide your suggestion.

  I would suggest the cinder utility check /dev/disk/by-path for both the
  normal decimal lun-id form and the hexadecimal form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1493350/+subscriptions



[Yahoo-eng-team] [Bug 1538006] Re: useless 'u' in the return info of "openstack aggregate show"

2016-02-16 Thread Pushkar
** Changed in: python-novaclient
   Status: Triaged => Invalid

** Changed in: python-openstackclient
 Assignee: (unassigned) => Pushkar (push7joshi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538006

Title:
  useless 'u' in the return info of "openstack aggregate show"

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Invalid
Status in python-openstackclient:
  Confirmed

Bug description:
  [Summary]
  useless 'u' in the return info of "openstack aggregate show"

  [Topo]
  devstack all-in-one node

  [Description and expect result]
  no useless 'u' in the return info of "openstack aggregate show"

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) useless 'u' for metadata when creating an aggregate:
  root@45-59:/opt/stack/devstack# openstack aggregate create 
  --zone bbb --property abc=1  agg3
  +---+--+
  | Field | Value|
  +---+--+
  | availability_zone | bbb  |
  | created_at| 2016-01-26T11:07:16.00   |
  | deleted   | False|
  | deleted_at| None |
  | hosts | []   |
  | id| 3|
  | metadata  | {u'abc': u'1', u'availability_zone': u'bbb'} |  
  | name  | agg3 |
  | updated_at| None |
  +---+--+
  root@45-59:/opt/stack/devstack# 

  2) useless 'u' for properties when showing an aggregate:
  root@45-59:/opt/stack/devstack# openstack aggregate show agg3
  +---++
  | Field | Value  |
  +---++
  | availability_zone | bbb|
  | created_at| 2016-01-26T11:07:16.00 |
  | deleted   | False  |
  | deleted_at| None   |
  | hosts | [] |
  | id| 3  |
  | name  | agg3   |
  | properties| {u'abc': u'1'} |
  | updated_at| None   |
  +---++
  root@45-59:/opt/stack/devstack# 

  3) useless 'u' for hosts when adding a host to an aggregate:
  root@45-59:~# openstack aggregate add host agg3 45-59
  +---+--+
  | Field | Value|
  +---+--+
  | availability_zone | bbb  |
  | created_at| 2016-01-26T11:07:16.00   |
  | deleted   | False|
  | deleted_at| None |
  | hosts | [u'45-59']   |
  | id| 3|
  | metadata  | {u'abc': u'1', u'availability_zone': u'bbb'} |
  | name  | agg3 |
  | updated_at| None |
  +---+--+
  root@45-59:~# 

  [Configuration]
  reproducible bug, no need

  [Logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None
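  The stray u'' prefixes are simply the raw Python 2 ``repr()`` of
  unicode dicts and lists reaching the output formatter. A formatter that
  serializes the value instead produces clean text on both Python 2 and 3;
  ``json.dumps`` below is a generic illustration of that client-side fix,
  not openstackclient's actual code:

```python
import json

# Metadata as the aggregate API returns it (unicode keys and values,
# which repr() shows with u'' prefixes on Python 2).
metadata = {u"abc": u"1", u"availability_zone": u"bbb"}

# Serializing for display avoids the repr() artifacts entirely.
clean = json.dumps(metadata, sort_keys=True)
```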

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538006/+subscriptions



[Yahoo-eng-team] [Bug 1532809] Re: Gate failures when DHCP lease cannot be acquired

2016-02-16 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532809

Title:
  Gate failures when DHCP lease cannot be acquired

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  In Progress
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  Example from:
  
http://logs.openstack.org/97/265697/1/check/gate-grenade-dsvm/6eeced7/console.html#_2016-01-11_07_42_30_838

  Logstash query:
  message:"No lease, failing" AND voting:1

  dhcp_release for an ip/mac does not seem to reach dnsmasq (or it fails
  to act on it - "unknown lease"), as I don't see entries in syslog for
  it.

  Logs from nova network:
  dims@dims-mac:~/junk/6eeced7$ grep dhcp_release old/screen-n-net.txt.gz | 
grep 10.1.0.3 | grep CMD
  2016-01-11 07:25:35.548 DEBUG oslo_concurrency.processutils 
[req-62aaa0b9-093e-4f28-805d-d4bf3008bfe6 tempest-ServersTestJSON-1206086292 
tempest-ServersTestJSON-1551541405] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:32:51:c3" 
returned: 0 in 0.117s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
  2016-01-11 07:25:51.259 DEBUG oslo_concurrency.processutils 
[req-31115ffa-8d43-4621-bb2e-351d6cd4bcef 
tempest-ServerActionsTestJSON-357128318 
tempest-ServerActionsTestJSON-854742699] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:a4:f0:11" 
returned: 0 in 0.108s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
  2016-01-11 07:26:35.357 DEBUG oslo_concurrency.processutils 
[req-c32a216e-d909-41a0-a0bc-d5eb7a21c048 
tempest-TestVolumeBootPattern-46217374 
tempest-TestVolumeBootPattern-1056816637] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:ed:de:f6" 
returned: 0 in 0.110s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297

  Logs from syslog:
  dims@dims-mac:~/junk$ grep 10.1.0.3 syslog.txt.gz
  Jan 11 07:25:35 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:32:51:c3 unknown lease
  Jan 11 07:25:51 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:a4:f0:11 unknown lease
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPOFFER(br100) 10.1.0.3 fa:16:3e:ed:de:f6
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPREQUEST(br100) 10.1.0.3 fa:16:3e:ed:de:f6
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPACK(br100) 10.1.0.3 fa:16:3e:ed:de:f6 tempest
  Jan 11 07:27:34 devstack-trusty-rax-iad-7090830 object-auditor: Object audit 
(ALL). Since Mon Jan 11 07:27:34 2016: Locally: 1 passed, 0 quarantined, 0 
errors files/sec: 2.03 , bytes/sec: 10119063.16, Total time: 0.49, Auditing 
time: 0.00, Rate: 0.00
  Jan 11 07:39:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:40:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:41:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:42:26 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:fe:1f:36 unknown lease

  Net: the test that runs ssh against the VM fails with "No lease,
  failing" in its serial console.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532809/+subscriptions



[Yahoo-eng-team] [Bug 1493350] Re: Attach Volume fail : Cinder - ISCSI device symlinks under /dev/disk/by-path in hex.

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277606
Committed: 
https://git.openstack.org/cgit/openstack/os-brick/commit/?id=82cdb40f870c958da6ab6ef447b35be2eaec5c8d
Submitter: Jenkins
Branch: master

commit 82cdb40f870c958da6ab6ef447b35be2eaec5c8d
Author: Jenkins 
Date:   Thu Feb 11 17:22:19 2016 +

Lun id's > 255 should be converted to hex

This patch addresses the issue where lun id's values are larger
than 255 and are being kept as integers. This causes the volume and
search paths to be malformed and volumes can't be found. This patch
adds two functions to linuxscsi.py to process the lun id's; they can
process both a single lun id and a list of them. If a lun id has a
value larger than 255 it is converted to hex. This patch also modifies
the necessary unit tests and adds ones to cover the new main function.

Change-Id: Ib0b2f239a8152275de9ea66fa99a286dfbe53d57
Closes-bug: #1493350
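The conversion the patch describes can be sketched as follows. The helper
name and the exact padding are assumptions for illustration (os-brick's
real functions in linuxscsi.py may differ): lun ids up to 255 stay
decimal, while larger ones appear in /dev/disk/by-path as a 0x-prefixed,
zero-padded hex suffix, matching the symlinks quoted in the bug report.

```python
def format_lun_id(lun_id):
    """Format a lun id the way it appears in /dev/disk/by-path symlinks."""
    if lun_id < 256:
        return str(lun_id)          # small luns keep the decimal form
    return "0x%04x" % lun_id        # larger luns use zero-padded hex

# Build a search path for lun 0x016b (decimal 363), as in the bug report.
path = ("ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xyz:"
        "storage.ttsp.2ec7fdda.ff5c4a16.0-lun-%s" % format_lun_id(0x016b))
```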


** Changed in: os-brick
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493350

Title:
  Attach Volume fail : Cinder - ISCSI device symlinks under /dev/disk
  /by-path in hex.

Status in Cinder:
  Confirmed
Status in OpenStack Compute (nova):
  Opinion
Status in os-brick:
  Fix Released

Bug description:
  As part of a POC for an enterprise storage backend we have implemented an 
ISCSIDriver.
  Volume operations work as expected.

  We are facing an issue with attaching a volume. Request your help.

  1. If the volume lun id in the backend storage is less than 255, volume 
attach works fine. The symlinks in /dev/disk/by-path are as below:
  lrwxrwxrwx 1 root root   9 Jun 18 15:18 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xyz:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-169
 -> ../../sde
  lrwxrwxrwx 1 root root   9 Aug 26 14:47 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xyz:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-172
 -> ../../sdi

  2. If the volume lun id is more than 255, the lun id in /dev/disk/by-path is 
a hexadecimal number and hence volume attach fails with the message "Volume 
path not found". The symlinks in /dev/disk/by-path are as below [hexadecimal 
lun id according to the SCSI standard (REPORT LUNS)]:
  lrwxrwxrwx 1 root root   9 Aug 26 14:47 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xxx:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-0x016b
 -> ../../sdh
  lrwxrwxrwx 1 root root   9 Jun 18 15:18 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xxx:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-0x0200
 -> ../../sdc

  Please provide your suggestion.

  I would suggest the cinder utility check /dev/disk/by-path for both the
  normal decimal lun-id form and the hexadecimal form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1493350/+subscriptions



[Yahoo-eng-team] [Bug 1534763] Re: Sensitive location_data information exposed in debug message

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280778
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=093af94c315dc4105ea060936dce37bd91a6e9a2
Submitter: Jenkins
Branch:master

commit 093af94c315dc4105ea060936dce37bd91a6e9a2
Author: Cyril Roelandt 
Date:   Tue Feb 16 16:46:25 2016 +0100

Do not log sensitive data

The content of the "location_data" was leaked in the logs.

Change-Id: I90b1b8b5be1f9ca9ecd9be62e46531d3c50df777
Closes-Bug: #1534763


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1534763

Title:
  Sensitive location_data information exposed in debug message

Status in Glance:
  Fix Released

Bug description:
  When creating an image with the swift backend, the swift object URL
  (including password) is logged at debug level in the registry log.
  The locations field is currently censored, but location_data is not.

  Example:
  # glance image-create --name test --disk-format raw --container-format bare < 
init.sh
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | 463dafb5b048669f108dd1bb1545c5b6 |
  | container_format | bare |
  | created_at   | 2016-01-15T17:27:18.00   |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | raw  |
  | id   | c4d1a9fe-0ee8-4df6-81f4-7dc74a96b010 |
  | is_public| False|
  | min_disk | 0|
  | min_ram  | 0|
  | name | test |
  | owner| b426c75b76de448481322f4a0bd5dbbe |
  | protected| False|
  | size | 153  |
  | status   | active   |
  | updated_at   | 2016-01-15T17:27:19.00   |
  | virtual_size | None |
  +--+--+
  # grep -rn 6TWxXyb5L2qenL4uAZTB /var/log/glance/
  /var/log/glance/glance-registry.log:967:2016-01-15 17:27:19.321 18032 DEBUG 
glance.registry.api.v1.images [req-5207a920-90c3-4d84-b572-127b56d10fc1 
3604171c33684cc9a4c11d5506cc3c34 b426c75b76de448481322f4a0bd5dbbe - - -] 
Updating image c4d1a9fe-0ee8-4df6-81f4-7dc74a96b010 with metadata: {u'status': 
u'active', u'location_data': [{u'url': 
u'swift+http://service%3Aglance:6TWxXyb5L2qenL4uAZTB@10.142.0.1:5000/v2.0/images/c4d1a9fe-0ee8-4df6-81f4-7dc74a96b010',
 u'status': u'active', u'metadata': {}}]} update 
/usr/lib/python2.7/site-packages/glance/registry/api/v1/images.py:469

  Adding 'location_data' to the filtered fields in
  
https://github.com/openstack/glance/blob/master/glance/registry/api/v1/images.py#L461
  fixed this issue.

  Seen on stable/kilo, but the censoring code does not appear to have
  changed since.
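  The censoring approach the reporter describes (adding the field to a
  filtered list) can be sketched as masking configured keys before the
  metadata dict reaches a debug log. The helper below is illustrative,
  not Glance's actual code; the field names mirror the bug report:

```python
SENSITIVE_FIELDS = ("locations", "location_data")

def censor(metadata):
    # Mask sensitive fields wholesale before the dict is logged.
    return {k: ("***" if k in SENSITIVE_FIELDS else v)
            for k, v in metadata.items()}

safe = censor({
    "status": "active",
    "location_data": [{"url": "swift+http://service:secret@10.0.0.1/img"}],
})
```

  The credential-bearing URL never appears in the censored copy, while
  non-sensitive keys pass through unchanged.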

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1534763/+subscriptions



[Yahoo-eng-team] [Bug 1538006] Re: useless 'u' in the return info of "openstack aggregate show"

2016-02-16 Thread Neetu Jain
It's in openstackclient, not in novaclient.

** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: python-openstackclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538006

Title:
  useless 'u' in the return info of "openstack aggregate show"

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Triaged
Status in python-openstackclient:
  Confirmed

Bug description:
  [Summary]
  useless 'u' in the return info of "openstack aggregate show"

  [Topo]
  devstack all-in-one node

  [Description and expect result]
  no useless 'u' in the return info of "openstack aggregate show"

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) useless 'u' for metadata when creating an aggregate:
  root@45-59:/opt/stack/devstack# openstack aggregate create 
  --zone bbb --property abc=1  agg3
  +---+--+
  | Field | Value|
  +---+--+
  | availability_zone | bbb  |
  | created_at| 2016-01-26T11:07:16.00   |
  | deleted   | False|
  | deleted_at| None |
  | hosts | []   |
  | id| 3|
  | metadata  | {u'abc': u'1', u'availability_zone': u'bbb'} |  
  | name  | agg3 |
  | updated_at| None |
  +---+--+
  root@45-59:/opt/stack/devstack# 

  2) useless 'u' for properties when showing an aggregate:
  root@45-59:/opt/stack/devstack# openstack aggregate show agg3
  +---++
  | Field | Value  |
  +---++
  | availability_zone | bbb|
  | created_at| 2016-01-26T11:07:16.00 |
  | deleted   | False  |
  | deleted_at| None   |
  | hosts | [] |
  | id| 3  |
  | name  | agg3   |
  | properties| {u'abc': u'1'} |
  | updated_at| None   |
  +---++
  root@45-59:/opt/stack/devstack# 

  3) useless 'u' for hosts when adding a host to an aggregate:
  root@45-59:~# openstack aggregate add host agg3 45-59
  +---+--+
  | Field | Value|
  +---+--+
  | availability_zone | bbb  |
  | created_at| 2016-01-26T11:07:16.00   |
  | deleted   | False|
  | deleted_at| None |
  | hosts | [u'45-59']   |
  | id| 3|
  | metadata  | {u'abc': u'1', u'availability_zone': u'bbb'} |
  | name  | agg3 |
  | updated_at| None |
  +---+--+
  root@45-59:~# 

  [Configuration]
  reproducible bug, no need

  [Logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538006/+subscriptions



[Yahoo-eng-team] [Bug 1479452] Re: Changing resource's domain_id should not be possible

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/207218
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=27c4cbc9f7565ee978525de0053a1ae5f15de633
Submitter: Jenkins
Branch:master

commit 27c4cbc9f7565ee978525de0053a1ae5f15de633
Author: henriquetruta 
Date:   Wed Jul 29 17:49:32 2015 -0300

Restricting domain_id update

Restricts the update of a domain_id for a project, (even with the
'domain_id_immutable' property set to False), allowing it only for
root projects that have no children of its own. The update of the
domain_id of a project that has the is_domain field set True is not
allowed either. The update of this property may cause project hierarchy
inconsistency and security issues.
This patch also sets the 'domain_id_immutable' as deprecated and emits
a WARN in case it is set False, when updating the domain_id of
users, groups or projects.

Closes-bug: 1479452
Related-bug: 1502157

Change-Id: Ib53f2173d4e4694d7ed2ecd330878664f8199371
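The rule the patch enforces can be expressed as a simple predicate. The
sketch below uses hypothetical names and a plain dict rather than
keystone's actual project model: a project's domain_id may change only
for a root project with no children that is not itself acting as a
domain.

```python
def can_update_domain_id(project):
    """Illustrative check: only childless, non-domain root projects qualify."""
    return (project["parent_id"] is None      # must be a root project
            and not project["has_children"]   # must have no children
            and not project["is_domain"])     # must not act as a domain

ok = can_update_domain_id(
    {"parent_id": None, "has_children": False, "is_domain": False})
blocked = can_update_domain_id(
    {"parent_id": None, "has_children": True, "is_domain": False})
```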


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1479452

Title:
  Changing resource's domain_id should not be possible

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Changing a resource's domain_id, specially a project, is not something
  we want, as discussed at the last topic of:
  
http://eavesdrop.openstack.org/meetings/keystone/2015/keystone.2015-07-21-18.01.log.html

  This could cause some security problems as well as hierarchy's
  inconsistency, once it'll require the whole hierarchy to be changed,
  when changing a parent project's domain_id.

  We shall deprecate the 'domain_id_immutable' property
  
(https://github.com/openstack/keystone/blob/master/etc/keystone.conf.sample#L66)
  to remove it in the future and for now,  show a warning if it is set
  false.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1479452/+subscriptions



[Yahoo-eng-team] [Bug 1277316] Re: Disconnecting a volume with multipath generates excessive multipath calls

2016-02-16 Thread Matt Riedemann
*** This bug is a duplicate of bug 1456480 ***
https://bugs.launchpad.net/bugs/1456480

** Tags added: multipath

** This bug has been marked a duplicate of bug 1456480
   Performance issue during volume detachment in ISCSIConnector when iSCSI 
multipath is used

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277316

Title:
  Disconnecting a volume with multipath generates excessive multipath
  calls

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  I have a compute node with 20 volumes attached using iscsi and multipath.
  Each multipath device has 4 iscsi devices.

  When I disconnect a volume, it generates 779 'multipath -ll' calls.

  
  iscsiadm -m node --rescan
  iscsiadm -m session --rescan
  multipath -r

  multipath -ll /dev/sdch
  multipath -ll /dev/sdcg
  multipath -ll /dev/sdcf
  multipath -ll /dev/sdce
  multipath -ll /dev/sdcd
  multipath -ll /dev/sdcc
  multipath -ll /dev/sdcb
  multipath -ll /dev/sdca
  multipath -ll /dev/sdbz
  multipath -ll /dev/sdby
  multipath -ll /dev/sdbx
  multipath -ll /dev/sdbw
  multipath -ll /dev/sdbv
  multipath -ll /dev/sdbu
  multipath -ll /dev/sdbt
  multipath -ll /dev/sdbs
  multipath -ll /dev/sdbr
  multipath -ll /dev/sdbq
  multipath -ll /dev/sdbp
  multipath -ll /dev/sdbo
  multipath -ll /dev/sdbn
  multipath -ll /dev/sdbm
  multipath -ll /dev/sdbl
  multipath -ll /dev/sdbk
  multipath -ll /dev/sdbj
  multipath -ll /dev/sdbi
  multipath -ll /dev/sdbh
  multipath -ll /dev/sdbg
  multipath -ll /dev/sdbf
  multipath -ll /dev/sdbe
  multipath -ll /dev/sdbd
  multipath -ll /dev/sdbc
  multipath -ll /dev/sdbb
  multipath -ll /dev/sdba
  
  .. And so on for 779 times
  cp /dev/stdin /sys/block/sdcd/device/delete
  cp /dev/stdin /sys/block/sdcc/device/delete
  cp /dev/stdin /sys/block/sdcb/device/delete
  cp /dev/stdin /sys/block/sdca/device/delete
  multipath -r

  
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277316/+subscriptions



[Yahoo-eng-team] [Bug 1545202] Re: Domain info is not shown in federated token validation response

2016-02-16 Thread Sam Leong
The response shows the actual group id, but the domain shows 'Federated',
which made me unsure whether that is the intended behavior. Since
it's documented, I'm cool with that ;-)

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1545202

Title:
  Domain info is not shown in federated token validation response

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  When validating a federated token, the group id info shows in the
  response but the domain info does not show, only shows 'Federated' for
  both domain id and domain name.

  HTTP/1.1 200 OK
  Date: Fri, 12 Feb 2016 21:24:16 GMT
  Server: Apache/2.4.10 (Debian)
  X-Subject-Token: b6c115ce0aed425baf3e8ed104da945d
  Vary: X-Auth-Token
  x-openstack-request-id: req-6c1a9a92-f3f9-48a2-8767-61001c77cadd
  Content-Length: 419
  Content-Type: application/json

  {"token": {"methods": ["saml2"], "expires_at":
  "2016-02-13T01:20:20.037092Z", "extras": {}, "user": {"OS-FEDERATION":
  {"identity_provider": {"id": "ks_fed1_idp"}, "protocol": {"id":
  "saml2"}, "groups": [{"id": "357f50fed4cc4f00804cd8da821426ea"}]},
  "domain": {"id": "Federated", "name": "Federated"}, "id": "admin",
  "name": "admin"}, "audit_ids": ["gCNZNyOAQfughh3tPMyEhQ"],
  "issued_at": "2016-02-12T21:20:20.037127Z"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1545202/+subscriptions



[Yahoo-eng-team] [Bug 1433309] Re: Libvirt: Detaching volume from instance on host with many attached volumes is very slow

2016-02-16 Thread Matt Riedemann
*** This bug is a duplicate of bug 1456480 ***
https://bugs.launchpad.net/bugs/1456480

** Tags added: libvirt netapp

** Changed in: cinder
   Status: Incomplete => Invalid

** This bug has been marked a duplicate of bug 1456480
   Performance issue during volume detachment in ISCSIConnector when iSCSI 
multipath is used

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433309

Title:
  Libvirt: Detaching volume from instance on host with many attached
  volumes is very slow

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Expired

Bug description:
  When many volumes are attached to instances on the same compute host
  (with multipath enabled), volume detach is very slow and gets slower as
  more volumes are attached.

  For example:
  1. compute1 is a compute node with instance1 and instance2. 
  2. instance1 has 10 volumes attached while instance2 has a single volume 
attached. 
  3. Issue a detach for the volume attached to instance2
  4. Nova spends >20 minutes executing the 'multipath -ll' command for every 
device on the hypervisor
  5. Finally the detach completes successfully

  The following log is output in n-cpu many, many times during the detach call. 
Repeated many times for each volume device.
  http://paste.openstack.org/show/192981/
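
  The scaling problem can be seen with a toy model (illustrative only, not
  Nova code): shelling out to 'multipath -ll' once per attached device makes
  each detach cost one subprocess call per device on the hypervisor, whereas a
  single scan whose output is parsed once would be constant per detach.

```python
def detach_calls_naive(attached_devices):
    # One "multipath -ll" invocation per device on the hypervisor,
    # as described in the bug report above.
    return attached_devices


def detach_calls_cached(attached_devices):
    # One scan whose parsed output is reused for every device.
    return 1


# With 11 attached volumes (instance1's 10 plus instance2's 1), a single
# detach triggers 11 scans naively but only 1 with a cached scan.
assert detach_calls_naive(11) == 11
assert detach_calls_cached(11) == 1
```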

  
  Environment details:
  nova.conf virt driver
  [libvirt]
  iscsi_use_multipath = True
  vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  inject_partition = -2
  live_migration_uri = qemu+ssh://ameade@%s/system
  use_usb_tablet = False
  cpu_mode = none
  virt_type = kvm

  cinder.conf backend
  [eseries]
  volume_backend_name = eseries
  volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
  netapp_storage_family = eseries
  netapp_storage_protocol = iscsi
  netapp_server_hostname = localhost
  netapp_server_port = 8081
  netapp_webservice_path = /devmgr/v2
  netapp_controller_ips = 10.78.152.114,10.78.152.115
  netapp_login = rw
  netapp_password = xx
  netapp_storage_pools = DDP
  use_multipath_for_image_xfer = True
  netapp_sa_password = password
  netapp_enable_multi_attach=True

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1433309/+subscriptions


[Yahoo-eng-team] [Bug 1439869] Re: encrypted iSCSI volume attach fails when iscsi_use_multipath is enabled

2016-02-16 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439869

Title:
  encrypted iSCSI volume attach fails when iscsi_use_multipath is
  enabled

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  When attempting to attach an encrypted iSCSI volume to an instance
  with iscsi_use_multipath set to True in nova.conf an error occurs in
  n-cpu.

  The devstack system being used had the following nova version:

  commit ab25f5f34b6ee37e495aa338aeb90b914f622b9d
  Merge "instance termination with update_dns_entries set fails"

  The following error occurs in n-cpu:

  Stack Trace:

  2015-04-02 13:46:22.641 ERROR nova.virt.block_device 
[req-61f49ff8-b814-42c0-8cf8-ffe7b6a3561c admin admin] [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Driver failed to attach volume 
4778e71c-a1b5-4d
  b5-b677-1d8191468e87 at /dev/vdb
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Traceback (most recent call last):
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 251, in attach
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] device_type=self['device_type'], 
encryption=encryption)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1064, in attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] 
self._disconnect_volume(connection_info, disk_dev)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] six.reraise(self.type_, self.value, 
self.tb)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1051, in attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] encryptor.attach_volume(context, 
**encryption)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 93, in 
attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] self._open_volume(passphrase, 
**kwargs)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 78, in _open_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] check_exit_code=True, 
run_as_root=True)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/utils.py", 
line 206, in execute
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] return processutils.execute(*cmd, 
**kwargs)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
233, in execute
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] cmd=sanitized_cmd)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] ProcessExecutionError: Unexpected error 
while running command.
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf cryptsetup create --key-file=- 36000eb37601bcf0200
  00036c /dev/mapper/36000eb37601bcf02036c
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Exit code: 1
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stdout: u''
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stderr: u''
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]

  multipath-tools was installed
  iscsi_use_multipath = True was set under the [libvirt] entry in nova.conf

  To 

[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280760
Committed: 
https://git.openstack.org/cgit/openstack/octavia/commit/?id=15fdc0ea7f5b86a6022449214c9bdd284476
Submitter: Jenkins
Branch: master

commit 15fdc0ea7f5b86a6022449214c9bdd284476
Author: Bo Wang 
Date:   Tue Feb 16 23:06:09 2016 +0800

Fix hacking rule of assert_equal_or_not_none

The rule was not robust enough, allowing erroneous code to remain.
Fix the code and the rule.

Change-Id: I68934d4931a6e7857a824d0af5ed571a9c3e6480
Closes-bug: #1280522


** Changed in: octavia
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in bifrost:
  Fix Committed
Status in Blazar:
  In Progress
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in dox:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in ooi:
  In Progress
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  In Progress
Status in OpenStack SDK:
  In Progress
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in refstack:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  New
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  clearer messages in case of failure.
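
  A minimal unittest sketch (the test class and value are hypothetical)
  illustrating why the replacement yields clearer failure messages:

```python
import unittest


class ExampleTest(unittest.TestCase):
    """Hypothetical test showing the assertion swap."""

    def test_value_is_none(self):
        result = None
        # Before: self.assertEqual(None, result) -- on failure the message
        # reads "None != <value>", which hides which side was expected.
        # After: assertIsNone fails with "<value> is not None", which is
        # unambiguous.
        self.assertIsNone(result)
```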

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions


[Yahoo-eng-team] [Bug 1546237] [NEW] Typo in alembic_migrations.rst

2016-02-16 Thread James Arendt
Public bug reported:

"Indepedent" should be "Independent" in header "1. Indepedent Sub-
Project Tables"

Externally visible at
http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html.

** Affects: neutron
 Importance: Undecided
 Assignee: James Arendt (james-arendt-7)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => James Arendt (james-arendt-7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546237

Title:
  Typo in alembic_migrations.rst

Status in neutron:
  In Progress

Bug description:
  "Indepedent" should be "Independent" in header "1. Indepedent Sub-
  Project Tables"

  Externally visible at
  http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546237/+subscriptions


[Yahoo-eng-team] [Bug 1545729] Re: 4 byte unicode handling for entity names

2016-02-16 Thread Augustina Ragwitz
This is a feature request. If you'd like to move forward with using
4-byte unicode characters on the backend, please submit a spec for doing
so. If the issue is that the returned error is confusing and you'd like
that improved, that is a valid bug and this issue can be reopened with
that as the bug description.

** Tags added: compute

** Summary changed:

- 4 byte unicode handling for entity names
+ Nova should use 4 byte unicode for entity names

** Summary changed:

- Nova should use 4 byte unicode for entity names
+ Use 4 byte unicode for entity names in mysql

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545729

Title:
  Use 4 byte unicode for entity names in mysql

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  mysql database does not support 4 byte unicode due to its utf8
  character set.

  If any operation is executed with a 4-byte unicode name, it reports a 500
error without any proper error message to the user.
  This is confusing for the user, as no information is given about why the
issue occurred.

  Please refer below for details:

  nova secgroup-create sheel 
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-a4eef1d6-11fa-4188-b116-ffdf728e04f4)

  
  The bug can be reproduced simply by using 4-byte unicode characters in the
name of a security group.

  This is 100% reproducible.
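
  The failure mode can be demonstrated outside Nova: characters outside the
  Basic Multilingual Plane need four bytes in UTF-8, and MySQL's legacy utf8
  charset stores at most three bytes per character, so such an INSERT fails
  (surfacing as Nova's HTTP 500). MySQL's utf8mb4 charset accepts them. A
  small sketch (the example characters are illustrative):

```python
# U+1F600 (an emoji) lies outside the BMP and needs 4 bytes in UTF-8;
# MySQL's legacy "utf8" charset stores at most 3 bytes per character,
# so a name containing it is rejected. "utf8mb4" accepts it.
bmp_char = u"\u00e9"        # e-acute: 2 bytes, fits in legacy utf8
emoji = u"\U0001F600"       # 4 bytes, rejected by legacy utf8

assert len(bmp_char.encode("utf-8")) <= 3
assert len(emoji.encode("utf-8")) == 4
```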

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1545729/+subscriptions


[Yahoo-eng-team] [Bug 1536715] Re: Duplicated code

2016-02-16 Thread Béla Vancsics
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536715

Title:
  Duplicated code

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  It's duplicated code in the code (nova/compute/manager.py)
  http://openqa.sed.hu/dashboard/index/4?did=1 , in 39~CloneClass

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1536715/+subscriptions


[Yahoo-eng-team] [Bug 1546218] [NEW] Ignore local folder

2016-02-16 Thread Thai Tran
Public bug reported:

We currently ignore example files in dashboard/local. We should just
ignore the entire folder and exempt those files. This would allow
developers and operators to add content to the folder without
accidentally committing it upstream.

For example, many developers would like to enable angular panels that
are disabled by default. Sometimes, they forget to disable it before
committing. They can instead just copy and modify it in their own local
folder. Furthermore, this will allow operators to add their own enabled
files without accidentally committing it.

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546218

Title:
  Ignore local folder

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We currently ignore example files in dashboard/local. We should just
  ignore the entire folder and exempt those files. This would allow
  developers and operators to add content to the folder without
  accidentally committing it upstream.

  For example, many developers would like to enable angular panels that
  are disabled by default. Sometimes, they forget to disable it before
  committing. They can instead just copy and modify it in their own
  local folder. Furthermore, this will allow operators to add their own
  enabled files without accidentally committing it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546218/+subscriptions


[Yahoo-eng-team] [Bug 1516566] Re: TestServerMultinode.test_schedule_to_all_nodes fails in gate-tempest-dsvm-cells for Liberty

2016-02-16 Thread Sylvain Bauza
Yeah the backport was merged too.

https://review.openstack.org/#/q/Icc71e36f4ecb015dff9e806caacd31262f7e17f7,n,z

** Changed in: nova
   Status: Incomplete => Fix Released

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Importance: Low => High

** Changed in: nova
   Status: Fix Released => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516566

Title:
  TestServerMultinode.test_schedule_to_all_nodes fails in gate-tempest-
  dsvm-cells for Liberty

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  It seems that all gate-tempest-dsvm-cells runs against the liberty
  branch fail because of

  2015-11-16 04:50:15.334 | ==
  2015-11-16 04:50:15.334 | Failed 1 tests - output below:
  2015-11-16 04:50:15.334 | ==
  2015-11-16 04:50:15.334 |
  2015-11-16 04:50:15.334 | 
tempest.scenario.test_server_multinode.TestServerMultinode.test_schedule_to_all_nodes[compute,id-9cecbe35-b9d4-48da-a37e-7ce70aa43d30,network,smoke]
  2015-11-16 04:50:15.334 | 

  2015-11-16 04:50:15.335 |
  2015-11-16 04:50:15.335 | Captured traceback:
  2015-11-16 04:50:15.335 | ~~~
  2015-11-16 04:50:15.335 | Traceback (most recent call last):
  2015-11-16 04:50:15.335 |   File "tempest/test.py", line 127, in wrapper
  2015-11-16 04:50:15.335 | return f(self, *func_args, **func_kwargs)
  2015-11-16 04:50:15.335 |   File 
"tempest/scenario/test_server_multinode.py", line 75, in 
test_schedule_to_all_nodes
  2015-11-16 04:50:15.335 | create_kwargs=create_kwargs)
  2015-11-16 04:50:15.335 |   File "tempest/scenario/manager.py", line 200, 
in create_server
  2015-11-16 04:50:15.335 | status='ACTIVE')
  2015-11-16 04:50:15.336 |   File "tempest/common/waiters.py", line 75, in 
wait_for_server_status
  2015-11-16 04:50:15.336 | server_id=server_id)
  2015-11-16 04:50:15.336 | tempest.exceptions.BuildErrorException: Server 
7b801bb9-c957-45cb-ab1e-0d802c9374c2 failed to build and is in ERROR status
  2015-11-16 04:50:15.336 | Details: {u'message': u'No valid host was 
found. There are not enough hosts available.', u'code': 500, u'details': u'  
File "/opt/stack/new/nova/nova/conductor/manager.py", line 739, in 
build_instances\nrequest_spec, filter_properties)\n  File 
"/opt/stack/new/nova/nova/scheduler/utils.py", line 343, in wrapped\nreturn 
func(*args, **kwargs)\n  File 
"/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 52, in 
select_destinations\ncontext, request_spec, filter_properties)\n  File 
"/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 37, in 
__run_method\nreturn getattr(self.instance, __name)(*args, **kwargs)\n  
File "/opt/stack/new/nova/nova/scheduler/client/query.py", line 34, in 
select_destinations\ncontext, request_spec, filter_properties)\n  File 
"/opt/stack/new/nova/nova/scheduler/rpcapi.py", line 120, in 
select_destinations\nrequest_spec=request_spec, 
filter_properties=filter_properties)\n  File "/usr/local/lib
 /python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call\n
retry=self.retry)\n  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send\ntimeout=timeout, retry=retry)\n  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 431, in send\nretry=retry)\n  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 422, in _send\nraise result\n', u'created': u'2015-11-16T04:41:59Z'}
  2015-11-16 04:50:15.336 |
  2015-11-16 04:50:15.336 |
  2015-11-16 04:50:15.336 | Captured traceback-1:
  2015-11-16 04:50:15.336 | ~
  2015-11-16 04:50:15.336 | Traceback (most recent call last):
  2015-11-16 04:50:15.336 |   File "tempest/common/waiters.py", line 111, 
in wait_for_server_termination
  2015-11-16 04:50:15.336 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2015-11-16 04:50:15.337 | tempest.exceptions.BuildErrorException: Server 
7b801bb9-c957-45cb-ab1e-0d802c9374c2 failed to build and is in ERROR status
  2015-11-16 04:50:15.337 |
  2015-11-16 04:50:15.337 |
  2015-11-16 04:50:15.337 | Captured traceback-2:
  2015-11-16 04:50:15.337 | ~
  2015-11-16 04:50:15.337 | Traceback (most recent call last):
  2015-11-16 04:50:15.337 |   File "tempest/scenario/manager.py", line 143, 
in _wait_for_cleanups
  2015-11-16 04:50:15.337 | waiter_callable(**wait)
  2015-11-16 04:50:15.337 |   File "tempest/common/waiters.py", line 111, 
in wait_for_server_termination
  2015-11-16 

[Yahoo-eng-team] [Bug 1546039] Re: If one trustor role is removed, the trust cannot be used

2016-02-16 Thread Adam Young
It's a feature.  A trust is assumed to be the smallest chunk of delegated
roles possible to perform an action.  If a user does not have all those
roles, the trustor should be informed immediately that the trust is no
longer viable.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546039

Title:
  If one trustor role is removed, the trust cannot be used

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  If a trust is created with a list of roles, when the trust is used by
  the trustee to obtain a token, we first make sure that the trustor
  still has all the delegated roles. However, the way the code is
  written, if any have been removed, we immediately fail the token
  creation, rather than, instead, grant the token with the remaining
  roles. The current exception comment suggests that this was not our
  intention.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546039/+subscriptions


[Yahoo-eng-team] [Bug 1546189] [NEW] Add driver details in architecture doc

2016-02-16 Thread maestropandy
Public bug reported:

This bug tracks fixing the issue referred to in the comment below, from
https://review.openstack.org/#/c/209524

...
Lance Bragstad
Sep 3 12:04 AM
Patch Set 21:
(1 comment)
keystone/resource/core.py
Line 1367:
Does this one need to be added to the architecture doc?


** Affects: keystone
 Importance: Undecided
 Assignee: maestropandy (maestropandy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546189

Title:
  Add driver details in architecture doc

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  This bug tracks fixing the issue referred to in the comment below, from
https://review.openstack.org/#/c/209524
  
  ...
  Lance Bragstad
  Sep 3 12:04 AM
  Patch Set 21:
  (1 comment)
  keystone/resource/core.py
  Line 1367:
  Does this one need to be added to the architecture doc?
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546189/+subscriptions


[Yahoo-eng-team] [Bug 1546158] [NEW] Table actions with non-unique names cause horizon to incorrectly bind them to a table

2016-02-16 Thread a.zhukov
Public bug reported:

If you define two different actions with the same name and place one of them 
into table_actions and the other into row_actions, horizon will place them 
non-deterministically, because the horizon.tables.base.DataTable class relies on 
the Action.name attribute:
https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L1389
https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L1124

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546158

Title:
  Table actions with non-unique names cause horizon to incorrectly bind
  them to a table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If you define two different actions with the same name and place one of them 
into table_actions and the other into row_actions, horizon will place them 
non-deterministically, because the horizon.tables.base.DataTable class relies on 
the Action.name attribute:
https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L1389
  https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L1124

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546158/+subscriptions


[Yahoo-eng-team] [Bug 1546152] [NEW] openstack adding a role to an openldap user failed

2016-02-16 Thread Alexandre Carnal
Public bug reported:

When issuing "openstack role add  --domain  --user
  --user-domain  member" command on a domain
associated with OpenLDAP, the keystone logs report that the domain and
the role member could not be found though the openstack role show member
displays the member role and openstack domain show  displays the
domain as active.

OpenLDAP is running on a CentOS 7 host.
Openstack keystone release is Liberty running on a CentOS 7 host.
OpenLDAP version: OpenLDAP: slapd 2.4.39 (Sep 29 2015 13:31:12)
openstack v: 1.7.2

This "bug" could be probably related to the two other bugs I reported
before: #1546040 and #1546136

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546152

Title:
  openstack adding a role to an openldap user failed

Status in OpenStack Identity (keystone):
  New

Bug description:
  When issuing "openstack role add  --domain  --user
--user-domain  member" command on a domain
  associated with OpenLDAP, the keystone logs report that the domain and
  the role member could not be found though the openstack role show
  member displays the member role and openstack domain show 
  displays the domain as active.

  OpenLDAP is running on a CentOS 7 host.
  Openstack keystone release is Liberty running on a CentOS 7 host.
  OpenLDAP version: OpenLDAP: slapd 2.4.39 (Sep 29 2015 13:31:12)
  openstack v: 1.7.2

  This "bug" could be probably related to the two other bugs I reported
  before: #1546040 and #1546136

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546152/+subscriptions


[Yahoo-eng-team] [Bug 1541657] Re: Scoped OS-FEDERATION token not working

2016-02-16 Thread Steve Martinelli
thanks for confirming bogdan, we'll get this into the next kilo
scheduled release

** Changed in: keystone
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541657

Title:
  Scoped OS-FEDERATION token not working

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Identity (keystone) kilo series:
  In Progress

Bug description:
  I have implemented Keystone Federation scenario with Kilo against a
  non-Keystone IdP.

  Following the flow described at https://specs.openstack.org/openstack
  /keystone-specs/api/v3/identity-api-v3-os-federation-ext.html I
  successfully went through SAML2 authentication and I ended up with an
  unscoped token which is working just fine.

  When I then request a scoped token out of the unscoped token I get a token 
which differs from the documentation:
  docs says that user will have groups:

  "user": {
  "domain": {
  "id": "Federated"
  },
  "id": "username%40example.com",
  "name": "usern...@example.com",
  "OS-FEDERATION": {
  "identity_provider": "ACME",
  "protocol": "SAML",
  "groups": [
  {"id": "abc123"},
  {"id": "bcd234"}
  ]
  }
  }

  while in my implementation I get user with no groups (in contrast my unscoped 
token has the groups in user) :
  "user": {
"domain": {
"id": "Federated",
"name": "Federated"
},
"id": "myUser",
"name": "myUser"
"OS-FEDERATION": {
"identity_provider": {
"id": "myIdP"
},
"protocol": {"id": "saml2"}
  }
  }

  If I try to use the scoped token I get the error message:
  # openstack --os-token 3e68789050944e9296f1e366f63a31a8 --os-auth-url 
https://host:5000/v3 --os-identity-api-version 3 --os-cacert 
/etc/pki/trust/anchors/ca.pem --os-project-name Project1 server list
  ERROR: openstack Unable to find valid groups while using mapping saml_mapping 
(Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: 
req-eb23e61c-6f1f-4259-8ff0-92063f60b5f0)

  And this is no surprise if we debug the code for token creation and
  see that **_handle_mapped_tokens** in /usr/lib/python2.7/site-
  packages/keystone/token/providers/common.py says:

  if project_id or domain_id:
      roles = self.v3_token_data_helper._populate_roles_for_groups(
          group_ids, project_id, domain_id, user_id)
      token_data.update({'roles': roles})
  else:
      token_data['user'][federation.FEDERATION].update({
          'groups': [{'id': x} for x in group_ids]
      })
  return token_data

  So, the only way to get our groups added to the scoped token is to NOT
  use domain or project scoping, but if we do not scope the token for
  domain or project then we will simply get yet another unscoped token
  ;).
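
  A simplified, self-contained sketch of that branch (not Keystone's actual
  helper; the function and names are illustrative reductions of the snippet
  quoted above) makes the trade-off visible: group ids survive only on the
  unscoped path, while scoping consumes them to derive roles:

```python
def handle_mapped_token(token_data, group_ids, project_id=None, domain_id=None):
    """Illustrative reduction of _handle_mapped_tokens' branching."""
    if project_id or domain_id:
        # Scoped: groups are consumed to derive roles and never emitted.
        token_data["roles"] = ["role-for-%s" % g for g in group_ids]
    else:
        # Unscoped: raw federation groups are kept on the user.
        token_data["user"]["OS-FEDERATION"]["groups"] = [
            {"id": g} for g in group_ids]
    return token_data


scoped = handle_mapped_token(
    {"user": {"OS-FEDERATION": {}}}, ["abc123"], project_id="p1")
# The scoped token carries roles but no federation groups, matching the
# behaviour the reporter observes.
assert "groups" not in scoped["user"]["OS-FEDERATION"]
assert scoped["roles"] == ["role-for-abc123"]
```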

  
  What am I missing? How am I supposed to create a scoped token which works?

  Thanks in advance!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1541657/+subscriptions


[Yahoo-eng-team] [Bug 1546149] [NEW] Launch from new volume fails with flavor.disksize = 0

2016-02-16 Thread Martin Millnert
Public bug reported:

Environment:
 - I'm using Liberty (on RDO)
 - I'm using the angularjs launch instance.
 - I have a flavor with disk = 0 GB.
 - I use Ceph RBD backend and always create volume on instantiation (If I 
don't, the actual image size should be used when creating temporary volume etc 
of RBD)

Work flow:
 - I go to project / instances and open "Launch Instance" panel
 - I click Yes to Create volume
 - I pick an image with min_disk 1,
 - I see warning that my volume size must be minimum 2 (don't mind the GB/GiB 
inconsistency)
 - I up the volume size to 2 GB
 - I continue to selecting flavor dialogue panel
 - Any flavor with disk size = 0 GB is grayed out, hitting the warnings at 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/flavor/flavor.controller.js#L302

Expected behaviour:
 - since I am creating a volume and defining its size,
 - and since that the size of the volume I'm creating has already been verified,
 - i expect that disksize = 0 GB in a volume should be allowed in the flavor 
checking (code line referenced above)

Disk size = 0 GB is a special value; see for instance: 
http://docs.openstack.org/openstack-ops/content/flavors.html .
And it makes sense (to me at least) that I should be able to decouple disk size 
information from flavors when using Ceph / RBD, where I as the user always 
configure the volume.

And if I do not set the volume size, and e.g. boot from image, Nova,
Glance and Ceph should take care of instantiating a snap or similar of
the original image to its predefined size already - "no user input
required".

I.e. flavor disk size = 0 GB should mean that either the source image or
the volume size applies for the volume to be created (or not, in the
boot from image case).
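The check argued for above can be sketched as a small helper (hypothetical names; the actual Horizon validation is in the AngularJS flavor controller linked above):

```python
def flavor_allowed(flavor_disk_gb, image_min_disk_gb, creating_volume=False):
    """Hypothetical sketch of the flavor check the report asks for.

    disk = 0 GB is the special "unbounded" value: the root disk size is
    taken from the source image or the user-defined volume. When a volume
    is being created, its size has already been validated against the
    image's min_disk, so any flavor -- including disk = 0 -- should pass.
    """
    if creating_volume:
        # Volume size was validated separately against min_disk;
        # the flavor's disk size is irrelevant, so don't grey it out.
        return True
    if flavor_disk_gb == 0:
        return True  # special value: size comes from the source image
    return flavor_disk_gb >= image_min_disk_gb
```

With this logic a disk = 0 flavor is never greyed out when the user creates the volume, matching the expected behaviour above.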

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: launch-instance workflow

** Description changed:

  Environment:
-  - I'm using Liberty (on RDO)
-  - I'm using the angularjs launch instance.
-  - I have a flavor with disk = 0 GB.
-  - I use Ceph RBD backend and always create volume on instantiation (If I 
don't, the actual image size should be used when creating temporary volume etc 
of RBD)
+  - I'm using Liberty (on RDO)
+  - I'm using the angularjs launch instance.
+  - I have a flavor with disk = 0 GB.
+  - I use Ceph RBD backend and always create volume on instantiation (If I 
don't, the actual image size should be used when creating temporary volume etc 
of RBD)
  
  Work flow:
-  - I click Yes to Create volume
-  - I pick an image with min_disk 1, 
-  - I see warning that my volume size must be minimum 2 (don't mind the GB/GiB 
inconsistency)
-  - I up the volume size to 2 GB
-  - I continue to selecting flavor dialogue panel
-  - Any flavor with disk size = 0 GB is grayed out, hitting the warnings at 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/flavor/flavor.controller.js#L302
+  - I go to project / instances and open "Launch Instance" panel
+  - I click Yes to Create volume
+  - I pick an image with min_disk 1,
+  - I see warning that my volume size must be minimum 2 (don't mind the GB/GiB 
inconsistency)
+  - I up the volume size to 2 GB
+  - I continue to selecting flavor dialogue panel
+  - Any flavor with disk size = 0 GB is grayed out, hitting the warnings at 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/flavor/flavor.controller.js#L302
  
  Expected behaviour:
-  - since I am creating a volume and defining its size,
-  - and since that the size of the volume I'm creating has already been 
verified,
-  - i expect that disksize = 0 GB in a volume should be allowed in the flavor 
checking (code line referenced above)
+  - since I am creating a volume and defining its size,
+  - and since that the size of the volume I'm creating has already been 
verified,
+  - i expect that disksize = 0 GB in a volume should be allowed in the flavor 
checking (code line referenced above)
  
  Disk size = 0 GB is a special value, according to for instance: 
http://docs.openstack.org/openstack-ops/content/flavors.html .
  And it makes sense (to me at least) that I should be able to detach disk size 
information from flavors when using Ceph / RBD and I as the user always 
configure the volume.
  
  And if I do not set the volume size, and eg boot from image, Nova,
  Glance and Ceph should take care of instantiating a snap or similar of
  the original image to its predefined size already - "no user input
  required".
  
  I.e. flavor disk size = 0 GB should mean that either the source image or
  the volume size applies for the volume to be created (or not, in the
  boot from image case).

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed 

[Yahoo-eng-team] [Bug 1546146] [NEW] Error messages are behind the modal backdrop

2016-02-16 Thread Rob Cresswell
Public bug reported:

Error messages appear to be rendering behind the modal backdrop.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546146

Title:
  Error messages are behind the modal backdrop

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Error messages appear to be rendering behind the modal backdrop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546146/+subscriptions



[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-02-16 Thread Wang Bo
** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  Fix Released
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in Sahara:
  Fix Released

Bug description:
  The mock.Mock class has no "called_once_with" method; only
  "assert_called_once_with" exists. Currently there are still some places
  where the called_once_with method is used, and we should correct them.

  NOTE: called_once_with() does nothing because it's a mock object.
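The pitfall is easy to demonstrate with the standard-library mock (the same applies to the external mock package):

```python
from unittest import mock

m = mock.Mock()
m(1, 2)

# Pitfall: "called_once_with" is not a Mock assertion method. Attribute
# access on a Mock fabricates a child mock, so this line silently does
# nothing -- it "passes" even with completely wrong arguments.
m.called_once_with(999)

# The real assertion method verifies the recorded call and raises
# AssertionError on any mismatch.
m.assert_called_once_with(1, 2)   # ok: m was called once with (1, 2)
try:
    m.assert_called_once_with(999)
except AssertionError:
    print("mismatch detected")
```

Tests using called_once_with therefore assert nothing at all, which is why the fix matters even though the tests currently "pass".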

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions



[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-02-16 Thread Wang Bo
** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in bifrost:
  Fix Committed
Status in Blazar:
  In Progress
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in dox:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  In Progress
Status in ooi:
  In Progress
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  In Progress
Status in OpenStack SDK:
  In Progress
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in refstack:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  New
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to get
  clearer messages in case of failure.
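The difference in failure messages is easy to see in a minimal sketch:

```python
import unittest


class _Probe(unittest.TestCase):
    # Minimal TestCase so the assertion helpers can be called directly.
    def runTest(self):
        pass


tc = _Probe()
value = "oops"

# assertEqual(None, value) fails with a generic equality diff, and it
# relies on __eq__, which a custom class could define oddly against None.
try:
    tc.assertEqual(None, value)
except AssertionError as exc:
    print("assertEqual message:", exc)

# assertIsNone fails with an explicit "... is not None" message and
# checks identity, which is what the test actually means.
try:
    tc.assertIsNone(value)
except AssertionError as exc:
    print("assertIsNone message:", exc)
```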

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions



[Yahoo-eng-team] [Bug 1546136] [NEW] openstack user group lookup returns nothing

2016-02-16 Thread Alexandre Carnal
Public bug reported:

When issuing "openstack group list --user  --user-domain
" command on a domain associated with OpenLDAP, an incorrect
LDAP query is composed and openstack-keystone returns nothing.

OpenLDAP is running on a CentOS 7 host.
Openstack keystone release is Liberty running on a CentOS 7 host.
OpenLDAP version: OpenLDAP: slapd 2.4.39 (Sep 29 2015 13:31:12)
openstack v: 1.7.2

Keystone log when issuing the command:

LDAP search: base=ou=Group,dc=gvadc,dc=localdomain scope=2
filterstr=(&(memberUid=cn=,ou=People,dc=,dc=localdomain)(objectClass=posixGroup)(cn=*))
attrs=['cn', 'description'] attrsonly=0 search_s /usr/lib/python2.7
/site-packages/keystone/common/ldap/core.py:934

When the query is translated to ldapsearch it returns no results, because the 
filterstr uses memberUid=cn=first_name last_name instead of the user id.
ldapsearch -H ldap:// -D cn=Manager,dc=,dc=localdomain 
-W -x -b ou=Group,dc=,dc=localdomain "(&(memberUid=cn=l,ou=People,dc=,dc=localdomain)(objectClass=posixGroup)(cn=*))"

With the correct filter, the search is successful:
ldapsearch -H ldap:// -D cn=Manager,dc=,dc=localdomain 
-W -x -b ou=Group,dc=,dc=localdomain 
"(&(memberUid=

[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog

2016-02-16 Thread Timur Nurlygayanov
** Changed in: fuel/8.0.x
   Status: Fix Committed => Fix Released

** Changed in: fuel
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1356053

Title:
  Doesn't properly get keystone endpoint when Keystone is configured to
  use templated catalog

Status in devstack:
  Invalid
Status in Fuel for OpenStack:
  Fix Released
Status in Fuel for OpenStack 8.0.x series:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Python client library for Sahara:
  Fix Released
Status in Sahara:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  When using the keystone static catalog file to register endpoints 
(http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog),
 an endpoint registered (correctly) as catalog.region.data_processing gets 
read as "data-processing" by keystone.
  Thus, when Sahara looks for an endpoint, it is unable to find one for 
data_processing.

  This causes a problem with the commandline interface and the
  dashboard.

  Keystone seems to be converting underscores to dashes here:
  
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47

  modifying this line not to perform the replacement seems to work fine
  for me, but may have unintended consequences.
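The behaviour in question amounts to something like this simplified sketch (the real parsing lives in keystone's templated catalog backend; the key layout here is illustrative):

```python
def parse_catalog_key(key):
    """Split a templated-catalog config key, mimicking the rewrite above.

    Simplified sketch: a key such as
    'catalog.RegionOne.data_processing.publicURL' is split on '.', and
    underscores in the service-type component are rewritten to dashes --
    so a service registered as 'data_processing' comes back as
    'data-processing', and a client looking up 'data_processing' finds
    nothing.
    """
    _, region, service, attr = key.split(".")
    return region, service.replace("_", "-"), attr
```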

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions



[Yahoo-eng-team] [Bug 1542486] Re: nova-compute stack traces with BadRequest: Specifying 'tenant_id' other than authenticated tenant in request requires admin privileges

2016-02-16 Thread Emilien Macchi
Jamie, we fixed it in puppet-nova:
https://review.openstack.org/#/c/276932/

** Changed in: puppet-nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542486

Title:
  nova-compute stack traces with BadRequest: Specifying 'tenant_id'
  other than authenticated tenant in request requires admin privileges

Status in OpenStack Identity (keystone):
  Invalid
Status in keystonemiddleware:
  New
Status in OpenStack Compute (nova):
  Incomplete
Status in puppet-nova:
  Fix Released

Bug description:
  The puppet-openstack-integration tests (rebased on
  https://review.openstack.org/#/c/276773/ ) currently fail on the
  latest version of RDO Mitaka (delorean current) due to what seems to
  be a problem with the neutron configuration.

  Everything installs fine but tempest fails:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/console.html#_2016-02-05_20_26_35_569

  And there are stack traces in nova-compute.log:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/logs/nova/nova-compute.txt.gz#_2016-02-05_20_22_16_151

  I talked with #openstack-nova and they pointed out a difference between what 
devstack yields as a [neutron] configuration versus what puppet-nova configures:
  
  # puppet-nova via puppet-openstack-integration
  
  [neutron]
  service_metadata_proxy=True
  metadata_proxy_shared_secret =a_big_secret
  url=http://127.0.0.1:9696
  region_name=RegionOne
  ovs_bridge=br-int
  extension_sync_interval=600
  auth_url=http://127.0.0.1:35357
  password=a_big_secret
  tenant_name=services
  timeout=30
  username=neutron
  auth_plugin=password
  default_tenant_id=default

  
  # Well, it worked in devstack™
  
  [neutron]
  service_metadata_proxy = True
  url = http://127.0.0.1:9696
  region_name = RegionOne
  auth_url = http://127.0.0.1:35357/v3
  password = secretservice
  auth_strategy = keystone
  project_domain_name = Default
  project_name = service
  user_domain_name = Default
  username = neutron
  auth_plugin = v3password

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542486/+subscriptions



[Yahoo-eng-team] [Bug 1546110] [NEW] DB error causes router rescheduling loop to fail

2016-02-16 Thread Oleg Bondarev
Public bug reported:

In the router rescheduling looping task, the DB call that fetches down
bindings is made outside the try/except block, which can cause the task to
fail (see traceback below). The DB operation needs to move inside the
try/except.
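The suggested fix can be sketched as follows (hypothetical names; the real code is neutron's reschedule_routers_from_down_agents):

```python
import logging

LOG = logging.getLogger(__name__)


def reschedule_from_down_agents(get_down_bindings, reschedule_router,
                                context, cutoff):
    """One iteration of the rescheduling loop, sketched with the DB call
    inside the error handling, so a transient DB failure (e.g. a dropped
    connection) is logged and retried on the next cycle instead of
    killing the FixedIntervalLoopingCall."""
    try:
        down_bindings = get_down_bindings(context, cutoff)
    except Exception:
        LOG.exception("Failed to fetch down L3 agent bindings; "
                      "will retry on the next cycle")
        return
    for binding in down_bindings:
        try:
            reschedule_router(context, binding)
        except Exception:
            LOG.exception("Failed to reschedule binding %s", binding)
```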

2016-02-15T10:44:44.259995+00:00 err: 2016-02-15 10:44:44.250 15419 ERROR 
oslo.service.loopingcall [req-79bce4c3-2e81-446c-8b37-6d30e3a964e2 - - - - -] 
Fixed interval looping call 
'neutron.services.l3_router.l3_router_plugin.L3RouterPlugin.reschedule_routers_from_down_agents'
 failed
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 113, in 
_run_loop
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 
101, in reschedule_routers_from_down_agents
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall down_bindings = 
self._get_down_bindings(context, cutoff)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_dvrscheduler_db.py", line 460, 
in _get_down_bindings
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall context, cutoff)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 
149, in _get_down_bindings
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return query.all()
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2399, in all
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return list(self)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2516, in 
__iter__
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return 
self._execute_and_instances(context)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2529, in 
_execute_and_instances
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
close_with_result=True)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2520, in 
_connection_from_session
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall **kw)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 882, in 
connection
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
execution_options=execution_options)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 889, in 
_connection_for_bind
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall conn = 
engine.contextual_connect(**kw)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2039, in 
contextual_connect
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
self._wrap_pool_connect(self.pool.connect, None),
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2078, in 
_wrap_pool_connect
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall e, dialect, self)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1401, in 
_handle_dbapi_exception_noconnection
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
util.raise_from_cause(newraise, exc_info)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 199, in 
raise_from_cause
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
reraise(type(exception), exception, tb=exc_tb)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2074, in 
_wrap_pool_connect
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return fn()
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 376, in connect
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return 
_ConnectionFairy._checkout(self)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 713, in _checkout
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall fairy = 
_ConnectionRecord.checkout(pool)
2016-02-15 10:44:44.250 15419 ERROR 

[Yahoo-eng-team] [Bug 1442024] Re: AvailabilityZoneFilter does not filter when doing live migration

2016-02-16 Thread Roman Dobosz
I have performed a test which I hoped would shed some light on this 
(potential) behaviour; however, it turned out it did not.

The idea was to prepare two AZs which separate the two groups of computes 
(in my case simply a 3-node devstack), so that the first AZ would have one 
compute and the second AZ would have the other. There is also one host 
aggregate which contains all the computes. With this approach the host 
aggregate might take precedence over the AZ.

The actors:

1. ctrl (controller node)
2. Alter the nova.conf:
   scheduler_available_filters=nova.scheduler.filters.all_filters
   
scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter
3. cpu1 and cpu2 (compute nodes)
4. availability zone az1 which include cpu1 and have metadata set to
   some.hw=true
5. availability zone az2 which include cpu2
6. host aggregate aggr3 which includes cpu1 and cpu2
7. flavor aztest with the extra spec set to some.hw=true

The action:

Create the VMs with aztest - all of them should be spawned on cpu1. Note that
cirrosXXX has to be available; I've used an i386 image to be able to 
successfully perform live migration on my devstack setup.

$ nova boot --flavor aztest --image cirrosXXX --min-count 4 vm
$ nova list --fields host,name,status
+--+--+--++
| ID   | Host | Name | Status |
+--+--+--++
| 1569be1a-1289-4d52-b3d1-c3008f7c865f | cpu1 | vm-4 | ACTIVE |
| 217cb74e-74c6-4e46-abbc-3582d7e5fb4d | cpu1 | vm-3 | ACTIVE |
| 7dc98646-db5a-4433-b000-fd0ae671f3c7 | cpu1 | vm-2 | ACTIVE |
| a6ddd4d8-d05f-45c3-9e6a-4c9fa33da2ea | cpu1 | vm-1 | ACTIVE |
+--+--+--++

Now, try live migrate the vm-1:

$ nova live-migration --block-migrate vm-1
ERROR (BadRequest): No valid host was found. There are not enough hosts 
available. (HTTP 400) (Request-ID: req-2b1cd8d2-2316-40f2-8600-98c748ae565d)

After adding another compute to the cluster and adding it to az1, live 
migration works as expected:

$ nova aggregate-add-host aggr1 cpu3
$ nova live-migration --block-migrate vm-1

So I have failed to reproduce the reported behaviour, which might be the 
result of insufficient data in the report, or a configuration issue in 
production.


** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442024

Title:
  AvailabilityZoneFilter does not  filter when doing live migration

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  last night our ops team live-migrated (nova live-migration --block-
  migrate $vm) a group of VMs to do hardware maintenance.

  the VMs ended up in a different AZ, making them unusable (we have different 
upstream network connectivity in each AZ)
  it never happened before, i tested 

  
  of course, I have set up the AZ filter

  
  scheduler_available_filters=nova.scheduler.filters.all_filters
  
scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter

  I'm using Icehouse 2014.1.2-0ubuntu1.1~cloud0

  I will clean and upload logs right away

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442024/+subscriptions



[Yahoo-eng-team] [Bug 1546102] [NEW] Improve flavor list logic to handle multi sort keys

2016-02-16 Thread xiexs
Public bug reported:

The nova flavor API currently supports only a single sort key and sort
direction; we should improve it to handle multiple sort keys and directions.

** Affects: nova
 Importance: Undecided
 Assignee: xiexs (xiexs)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => xiexs (xiexs)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546102

Title:
  Improve flavor list logic to handle multi sort keys

Status in OpenStack Compute (nova):
  New

Bug description:
  The nova flavor API currently supports only a single sort key and sort
  direction; we should improve it to handle multiple sort keys and
  directions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546102/+subscriptions



[Yahoo-eng-team] [Bug 1544857] Re: Powered-off VMs still get 'cpu_util' metrics.

2016-02-16 Thread jichenjc
is this a ceilometer problem?

** Project changed: nova => ceilometer

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544857

Title:
  Powered-off VMs still get 'cpu_util' metrics.

Status in Ceilometer:
  New

Bug description:
  Powered-off VMs still get 'cpu_util' metrics with zero values. There
  should be a distinction between no values and zero values. There is no
  such problem with, say, the 'memory.usage' metric.

  I tested it on Kilo.

  [root@controller ~(keystone_admin)]# nova show 
2648fd92-b84d-4309-a3cb-34e3b5ceea74
  
+--+-+
  | Property | Value
   |
  
+--+-+
  | OS-DCF:diskConfig| AUTO 
   |
  | OS-EXT-AZ:availability_zone  | nova 
   |
  | OS-EXT-SRV-ATTR:host | kvm3.openstack5.lan  
   |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | kvm3.openstack5.lan  
   |
  | OS-EXT-SRV-ATTR:instance_name| instance-009f
   |
  | OS-EXT-STS:power_state   | 4
   |
  | OS-EXT-STS:task_state| -
   |
  | OS-EXT-STS:vm_state  | stopped  
   |
  | OS-SRV-USG:launched_at   | 2015-09-24T12:40:04.00   
   |
  | OS-SRV-USG:terminated_at | -
   |
  | accessIPv4   |  
   |
  | accessIPv6   |  
   |
  | config_drive |  
   |
  | created  | 2015-09-24T12:39:33Z 
   |
  | ext_net network  | 192.168.185.21   
   |
  | flavor   | m1.tiny (1)  
   |
  | hostId   | 
a2f6da6c2121d7aefdb19ecae81cb18248a8ed9775d1151131f6ed41|
  | id   | 2648fd92-b84d-4309-a3cb-34e3b5ceea74 
   |
  | image| olegn-3-cirros-0.3.4-x86_64-disk 
(744f81b2-aa16-42dd-ade0-05b050d4f17b) |
  | key_name | -
   |
  | metadata | {}   
   |
  | name | test_cirros034--1
   |
  | os-extended-volumes:volumes_attached | []   
   |
  | qa_net network   | 10.0.3.22
   |
  | security_groups  | default  
   |
  | status   | SHUTOFF  
   |
  | tenant_id| 86f1bbb7f7054997a67239680b69aaaf 
   |
  | updated  | 2015-11-19T12:26:32Z 
   |
  | user_id  | 8c60c96e20e6417bb19701677afb6a2f 
   |
  
+--+-+
  [root@controller ~(keystone_admin)]# ceilometer sample-list -m cpu_util -l 10 
-q "resource=2648fd92-b84d-4309-a3cb-34e3b5ceea74"
  
+--+--+---++--+-+
  | Resource ID  | Name | Type  | Volume | Unit | 
Timestamp   |
  
+--+--+---++--+-+
  | 

[Yahoo-eng-team] [Bug 1546039] [NEW] If one trustor role is removed, the trust cannot be used

2016-02-16 Thread Henry Nash
Public bug reported:

If a trust is created with a list of roles, when the trust is used by
the trustee to obtain a token, we first make sure that the trustor still
has all the delegated roles. However, the way the code is written, if
any have been removed, we immediately fail the token creation rather
than granting the token with the remaining roles. The current
exception comment suggests that this was not our intention.
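The behaviour the description argues for — granting the remaining roles instead of failing outright — could look like this sketch (hypothetical helper, not keystone's actual implementation):

```python
def effective_trust_roles(delegated_roles, trustor_current_roles):
    """Compute the roles a trust-scoped token should carry.

    Hypothetical sketch: keep the intersection of the delegated roles and
    the roles the trustor still holds, and fail only when nothing is
    left, rather than failing as soon as any single delegated role has
    been revoked.
    """
    still_held = set(trustor_current_roles)
    remaining = [r for r in delegated_roles if r in still_held]
    if not remaining:
        raise PermissionError("trustor no longer holds any delegated role")
    return remaining
```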

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546039

Title:
  If one trustor role is removed, the trust cannot be used

Status in OpenStack Identity (keystone):
  New

Bug description:
  If a trust is created with a list of roles, when the trust is used by
  the trustee to obtain a token, we first make sure that the trustor
  still has all the delegated roles. However, the way the code is
  written, if any have been removed, we immediately fail the token
  creation rather than granting the token with the remaining
  roles. The current exception comment suggests that this was not our
  intention.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546039/+subscriptions



[Yahoo-eng-team] [Bug 1546040] [NEW] Group membership lookup failed with error HTTP 500

2016-02-16 Thread Alexandre Carnal
Public bug reported:

When issuing "openstack user list --group  --domain
" command on a domain associated with OpenLDAP, an incorrect
LDAP query is composed and openstack-keystone reports an HTTP 500 error.

OpenLDAP is running on a CentOS 7 host.
Openstack keystone release is Liberty running on a CentOS 7 host.
OpenLDAP version: OpenLDAP: slapd 2.4.39 (Sep 29 2015 13:31:12)
openstack v: 1.7.2

Keystone log when issuing the command:
LDAP search: base=cn=Cloudmembers,ou=Group,dc=,dc=localdomain scope=0 
filterstr=(objectClass=posixGroup) attrs=['memberUid'] attrsonly=0 search_s 
/usr/lib/python2.7/site-packages/keystone/common/ldap/core.py:934

When the query is translated to ldapsearch, it returns no results
ldapsearch -H ldap:// -D cn=Manager,dc=,dc=localdomain 
-s one -W -x -b cn=Cloudmembers,ou=Group,dc=,dc=localdomain 
"(objectClass=posixGroup)"

But with the scope option set to subtree, it works fine
ldapsearch -H ldap:// -D cn=Manager,dc=,dc=localdomain 
-s sub -W -x -b cn=Cloudmembers,ou=Group,dc=,dc=localdomain 
"(objectClass=posixGroup)"

So the bug is that keystone uses scope=0 even though the query_scope
option in the domain config file is set to sub.
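The mapping that appears to go wrong can be sketched as follows (the integer constants match python-ldap's SCOPE_* values; the helper name is hypothetical):

```python
# python-ldap scope constants: ldap.SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE
SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE = 0, 1, 2


def scope_from_config(query_scope):
    """Map a 'query_scope' config string to an LDAP scope constant.

    The log line above shows scope=0 (SCOPE_BASE: search only the base
    entry itself), while a config of 'sub' should yield SCOPE_SUBTREE
    (2), matching the working ldapsearch invocation with '-s sub'.
    """
    mapping = {"base": SCOPE_BASE,
               "one": SCOPE_ONELEVEL,
               "sub": SCOPE_SUBTREE}
    if query_scope not in mapping:
        raise ValueError("invalid query_scope: %r" % query_scope)
    return mapping[query_scope]
```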

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: keystone liberty openldap

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546040

Title:
  Group membership lookup failed with error HTTP 500

Status in OpenStack Identity (keystone):
  New

Bug description:
  When issuing "openstack user list --group  --domain
  " command on a domain associated with OpenLDAP, an incorrect
  LDAP query is composed and openstack-keystone reports an HTTP 500 error.

  OpenLDAP is running on a CentOS 7 host.
  Openstack keystone release is Liberty running on a CentOS 7 host.
  OpenLDAP version: OpenLDAP: slapd 2.4.39 (Sep 29 2015 13:31:12)
  openstack v: 1.7.2

  Keystone log when issuing the command:
  LDAP search: base=cn=Cloudmembers,ou=Group,dc=,dc=localdomain scope=0 
filterstr=(objectClass=posixGroup) attrs=['memberUid'] attrsonly=0 search_s 
/usr/lib/python2.7/site-packages/keystone/common/ldap/core.py:934

  When the query is translated to ldapsearch, it returns no results
  ldapsearch -H ldap:// -D 
cn=Manager,dc=,dc=localdomain -s one -W -x -b 
cn=Cloudmembers,ou=Group,dc=,dc=localdomain "(objectClass=posixGroup)"

  But with the scope option set to subtree, it works fine
  ldapsearch -H ldap:// -D 
cn=Manager,dc=,dc=localdomain -s sub -W -x -b 
cn=Cloudmembers,ou=Group,dc=,dc=localdomain "(objectClass=posixGroup)"

  So the bug is that keystone uses scope=0 even though the query_scope
  option in the domain config file is set to sub.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546010] [NEW] Deprecate ARP spoofing protection option

2016-02-16 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/280336
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 7bbacd49a46a6c15b306d1a8f512cdfd078736c3
Author: Kevin Benton 
Date:   Mon Feb 15 09:27:42 2016 -0800

Deprecate ARP spoofing protection option

This protection should always be enabled unless it's explicitly
shut off through the port security extension via the API. The primary
reason it was a config option was because it was merged at the end
of Kilo development so it wasn't considered stable. Now that it
has been enabled by default for all of Liberty and the development
of Mitaka, it's a good idea to just get rid of the option completely.

DocImpact: Remove references to prevent_arp_spoofing and replace
   with pointer to port security extension for disabling
   security features.
Change-Id: Ib63ba8ae7050465a0786ea3d50c65f413f4ebe38

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546010

Title:
  Deprecate ARP spoofing protection option

Status in neutron:
  New

Bug description:
  https://review.openstack.org/280336
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 7bbacd49a46a6c15b306d1a8f512cdfd078736c3
  Author: Kevin Benton 
  Date:   Mon Feb 15 09:27:42 2016 -0800

  Deprecate ARP spoofing protection option
  
  This protection should always be enabled unless it's explicitly
  shut off through the port security extension via the API. The primary
  reason it was a config option was because it was merged at the end
  of Kilo development so it wasn't considered stable. Now that it
  has been enabled by default for all of Liberty and the development
  of Mitaka, it's a good idea to just get rid of the option completely.
  
  DocImpact: Remove references to prevent_arp_spoofing and replace
 with pointer to port security extension for disabling
 security features.
  Change-Id: Ib63ba8ae7050465a0786ea3d50c65f413f4ebe38
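
  The effect of the deprecation can be sketched without oslo.config: a
  helper that warns when the old key is still set explicitly and
  otherwise keeps protection on by default (illustrative only; the
  actual change uses oslo.config's deprecation machinery):

```python
import warnings


def resolve_prevent_arp_spoofing(conf):
    """Return the effective setting, warning on the deprecated key.

    Illustrative helper: `conf` is a plain dict standing in for the
    parsed agent configuration.
    """
    if 'prevent_arp_spoofing' in conf:
        warnings.warn(
            'prevent_arp_spoofing is deprecated; use the port security '
            'extension to disable ARP protection per port.',
            DeprecationWarning)
        return conf['prevent_arp_spoofing']
    return True  # protection stays on by default
```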

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546010/+subscriptions



[Yahoo-eng-team] [Bug 1541094] Re: Volume list does not work if User does not have right to list transfers

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277304
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=64499a730d215c60e36d5d8438b15770755c22e3
Submitter: Jenkins
Branch:master

commit 64499a730d215c60e36d5d8438b15770755c22e3
Author: Itxaka 
Date:   Mon Feb 8 08:39:50 2016 +0100

Protect cinder list against permission issues

When listing all cinder volumes the user may not have
permissions to see the volume transfers.
This patch protects the call to transfer_list
against a Forbidden error so the user can still
see the volume list.

Change-Id: I575ffebcd5084165e72f6e100ed43b4d3f358e98
Closes-Bug: #1541094


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541094

Title:
  Volume list does not work if User does not have right to list
  transfers

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  If a user does not have the right to list transfers but does have the
  right to list volumes, no volumes will be listed on the Project
  Compute Volume dashboard.

  The volume_list_paged method in
  horizon/openstack_dashboard/api/cinder.py checks each volume's
  transfer status; the transfer lookup throws an exception, so no
  volumes are listed.
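
  The fix amounts to catching the authorization failure around the
  transfer lookup so the volume list still renders. A minimal sketch,
  with plain callables standing in for the cinderclient calls and
  PermissionError standing in for cinderclient's Forbidden:

```python
def list_volumes_with_transfers(list_volumes, list_transfers):
    """List volumes, tolerating a permission failure on transfers.

    `list_volumes` and `list_transfers` are stand-ins for the
    cinderclient calls; PermissionError stands in for Forbidden.
    """
    volumes = list_volumes()
    try:
        transfers = {t['volume_id']: t for t in list_transfers()}
    except PermissionError:
        transfers = {}  # user may not see transfers; still show volumes
    return volumes, transfers
```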

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541094/+subscriptions



[Yahoo-eng-team] [Bug 1545999] [NEW] [vmware] missing os types: suseGuest64/suseGuest

2016-02-16 Thread xhzhf
Public bug reported:

According to http://pubs.vmware.com/vsphere-60/index.jsp, the list of vSphere 
OS types in OpenStack is incomplete.
Missing:
suse64Guest Suse Linux (64 bit)
suseGuest   Suse Linux

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: guestos vmware

** Tags added: guestos vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545999

Title:
  [vmware] missing os types: suseGuest64/suseGuest

Status in OpenStack Compute (nova):
  New

Bug description:
  According to http://pubs.vmware.com/vsphere-60/index.jsp, the list of vSphere 
OS types in OpenStack is incomplete.
  Missing:
  suse64Guest   Suse Linux (64 bit)
  suseGuest Suse Linux
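
  A hypothetical excerpt of the kind of guestId table the bug asks to
  extend (keys are vSphere guestId values from the vSphere docs; the
  table name and helper are purely illustrative):

```python
# Hypothetical guestId-to-description table; only the two entries the
# bug reports as missing are shown.
VSPHERE_GUEST_OS = {
    'suse64Guest': 'Suse Linux (64 bit)',
    'suseGuest': 'Suse Linux',
}


def describe_guest(guest_id):
    """Return a human-readable name for a vSphere guestId, if known."""
    return VSPHERE_GUEST_OS.get(guest_id, 'Unknown guest OS: %s' % guest_id)
```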

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1545999/+subscriptions



[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279934
Committed: 
https://git.openstack.org/cgit/openstack/networking-ofagent/commit/?id=14184ef7cb5ce5a6b61e0fcec89412f6b2630cb5
Submitter: Jenkins
Branch:master

commit 14184ef7cb5ce5a6b61e0fcec89412f6b2630cb5
Author: Ayush Garg 
Date:   Sun Feb 14 14:04:46 2016 +0530

Run py34 first in default tox run

This avoids a py34 failure in the default tox run when testrepository is
not yet initialized (e.g. a fresh repo clone).

Change-Id: I23ec5be0ea60ecfc5344c705bb08fc91e08d271f
Closes-Bug: #1489059


** Changed in: networking-ofagent
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Bareon:
  Fix Released
Status in cloudkitty:
  Fix Committed
Status in Fuel for OpenStack:
  In Progress
Status in Glance:
  Fix Committed
Status in glance_store:
  Fix Committed
Status in hacking:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in kolla:
  Fix Released
Status in Manila:
  Fix Released
Status in Murano:
  Fix Committed
Status in networking-midonet:
  Fix Released
Status in networking-ofagent:
  Fix Released
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-muranoclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in tap-as-a-service:
  Fix Released
Status in tempest:
  Fix Released
Status in zaqar:
  Fix Released
Status in python-ironicclient package in Ubuntu:
  Fix Committed

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when the py27 run precedes py34; it can
  be solved by erasing the .testrepository directory and running
  "tox -e py34" first.
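
  The usual fix is to list py34 ahead of py27 in tox's envlist, so a
  fresh clone initializes .testrepository under python3 first. An
  illustrative tox.ini fragment (the exact envlist contents vary per
  project):

```ini
[tox]
# py34 before py27: whichever env runs first creates .testrepository,
# and it must be created under python3 for the py34 env to work.
envlist = py34,py27,pep8
```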

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions



[Yahoo-eng-team] [Bug 1528258] Re: secure_proxy_ssl_header should default to HTTP_X_FORWARDED_PROTO

2016-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280435
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=eb104714f2488bd8281fbc656c5d0e470939231e
Submitter: Jenkins
Branch:master

commit eb104714f2488bd8281fbc656c5d0e470939231e
Author: Steve Martinelli 
Date:   Mon Feb 15 17:37:56 2016 -0500

sensible default for secure_proxy_ssl_header

there is only one sensible default for secure_proxy_ssl_header,
so let's use it, one less step for deployers to configure.

Change-Id: I0cee5d6051b2c91bc87dc7eabcec57dd4852184c
Closes-Bug: 1528258


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1528258

Title:
  secure_proxy_ssl_header should default to HTTP_X_FORWARDED_PROTO

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  https://bugs.launchpad.net/keystone/+bug/1370022 resulted in
  https://review.openstack.org/132235 which added
  secure_proxy_ssl_header option being added to keystone. It works if
  it's correctly set, but there is no valid reason why you would not
  want to enable this feature by default. It adds an extra burden to
  configuration managers when there's exactly 1 ideal default value
  (even specified in the comment for the option).

  I propose that we have default/secure_proxy_ssl_header =
  "HTTP_X_FORWARDED_PROTO" instead of default/secure_proxy_ssl_header =
   as the default in the package.
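
  What the option controls can be sketched as a small WSGI helper: when
  the configured header is present in the environ, trust it for the
  request scheme (illustrative sketch of the behaviour, not keystone's
  actual code):

```python
def effective_scheme(environ,
                     secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO'):
    """Return the request scheme for a WSGI environ.

    Trusts the configured proxy header (set by a TLS-terminating proxy
    via X-Forwarded-Proto) when present; falls back to the scheme the
    server itself saw.
    """
    forwarded = environ.get(secure_proxy_ssl_header)
    if forwarded:
        return forwarded
    return environ.get('wsgi.url_scheme', 'http')
```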

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1528258/+subscriptions
