[Yahoo-eng-team] [Bug 1709801] Re: Domain scope auth fails when using endpoint filter

2017-08-10 Thread Lance Bragstad
The EndpointFilter catalog was removed in Pike [0] and the issue isn't
reproducible in stable/ocata.

[0]
https://github.com/openstack/keystone/commit/d35f36916e109f0d2557bb778424e7aee3bc6b31

** Also affects: keystone/newton
   Importance: Undecided
   Status: New

** Also affects: keystone/ocata
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

** Changed in: keystone/newton
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1709801

Title:
  Domain scope auth fails when using endpoint filter

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) newton series:
  Invalid
Status in OpenStack Identity (keystone) ocata series:
  New

Bug description:
  When using the endpoint_filter.sql catalog driver in Newton and
  authenticating with domain scope, we fail to receive any endpoints. All
  endpoints should be returned instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1709801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1701346] Re: Trust mechanism is broken

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/479047
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=7024b3f4ea5b7c53e240239e39d5ab1d44aa3242
Submitter: Jenkins
Branch: master

commit 7024b3f4ea5b7c53e240239e39d5ab1d44aa3242
Author: Mike Fedosin 
Date:   Thu Jun 29 22:30:19 2017 +0300

Fix trust auth mechanism

This code fixes creation of trusts by following
the recommended usage techniques of keystoneauth1.

Change-Id: I233883dc6a37f282eb8e024c059c6a12ebb7e9f1
Closes-bug: #1701346


** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1701346

Title:
  Trust mechanism is broken

Status in Glance:
  Fix Released

Bug description:
  Because of various changes in the keystoneauth1 module, the current
  trust mechanism (glance.common.trust_auth) cannot create a trust and
  fails with an error:

  [None req-b7ac5edd-2104-4cab-b85e-ddae7c205261 admin admin] Unable to
  create trust: 'NoneType' object has no attribute 'endswith' Use the
  existing user token.
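
  As a rough illustration of the "recommended usage" of keystoneauth1 that
  the fix refers to, a trust-scoped authentication looks like this (a
  minimal sketch with placeholder values, not the actual glance code):

      from keystoneauth1.identity import v3
      from keystoneauth1 import session

      # Authenticate as the trustee user, scoped to an existing trust.
      auth = v3.Password(auth_url='http://controller:5000/v3',
                         username='glance',
                         password='secret',
                         user_domain_name='Default',
                         trust_id='TRUST_ID')
      sess = session.Session(auth=auth)
      token = sess.get_token()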

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1701346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709902] [NEW] source host allocation not cleaned up in placement after evacuation

2017-08-10 Thread Balazs Gibizer
Public bug reported:

1) boot a server
2) kill the compute (optionally force-down it)
3) evacuate the server
4) start up the original compute
5) check the allocations in placement

We expect that the allocation on the original compute is removed when that
compute starts up (init_host) after the evacuation, but it isn't.
The compute host's periodic resource healing also skips this case; see
https://review.openstack.org/#/c/491850/4/nova/compute/resource_tracker.py@1084

Here is a patch to reproduce the problem in the functional test env: 
https://review.openstack.org/#/c/492548/ 
Here is the debug log for that run: https://pastebin.com/hzb33Awu
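
One way to inspect the leftover allocation in step 5 is to query the
placement API directly with a keystoneauth1 session (a minimal sketch; the
credentials and the resource provider UUID are placeholders):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       user_domain_name='Default',
                       project_name='admin', project_domain_name='Default')
    sess = session.Session(auth=auth)

    # List the allocations held against the source compute node's resource
    # provider; after the evacuation the evacuated server's consumer UUID
    # should no longer appear here, but it does.
    resp = sess.get('/resource_providers/RP_UUID/allocations',
                    endpoint_filter={'service_type': 'placement'})
    print(resp.json()['allocations'])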

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: evacuate openstack-version.pike placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709902

Title:
  source host allocation not cleaned up in placement after evacuation

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  1) boot a server
  2) kill the compute (optionally force-down it)
  3) evacuate the server
  4) start up the original compute
  5) check the allocations in placement

  We expect that the allocation on the original compute is removed when
  that compute starts up (init_host) after the evacuation, but it isn't.
  The compute host's periodic resource healing also skips this case; see
  https://review.openstack.org/#/c/491850/4/nova/compute/resource_tracker.py@1084

  Here is a patch to reproduce the problem in the functional test env: 
https://review.openstack.org/#/c/492548/ 
  Here is the debug log for that run: https://pastebin.com/hzb33Awu

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1709902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709882] [NEW] In the Instances table, some filters have "=", others don't

2017-08-10 Thread Yuko Katabami
Public bug reported:

Project > Compute > Instances

At the top of the table, there is a drop-down list for selecting a filter.
Some filters have the "=" symbol appended at the end, others don't.
It would be better to make them consistent, unless there is a specific
reason for the difference.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "ProjectComputeInstancesFiltering.png"
   
https://bugs.launchpad.net/bugs/1709882/+attachment/4930031/+files/ProjectComputeInstancesFiltering.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1709882

Title:
  In the Instances table, some filters have "=", others don't

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Project > Compute > Instances

  At the top of the table, there is a drop-down list for selecting a filter.
  Some filters have the "=" symbol appended at the end, others don't.
  It would be better to make them consistent, unless there is a specific
  reason for the difference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1709882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709236] Re: Live migration failed in openstack on xenserver

2017-08-10 Thread Matt Riedemann
** Changed in: nova
   Status: New => Confirmed

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/ocata
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709236

Title:
  Live migration failed in openstack on xenserver using microversion
  2.34

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  XenServer live migration is broken with the latest Nova API
  microversion (2.34). It works well when using microversion 2.27.

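  A client can pin the working microversion explicitly (a minimal sketch
  with placeholder values):

      from keystoneauth1.identity import v3
      from keystoneauth1 import session
      from novaclient import client

      auth = v3.Password(auth_url='http://controller:5000/v3',
                         username='admin', password='secret',
                         user_domain_name='Default',
                         project_name='admin', project_domain_name='Default')
      sess = session.Session(auth=auth)

      # Pinning microversion 2.27 avoids the failure; the newer
      # microversion (2.34 here) triggers it.
      nova = client.Client('2.27', session=sess)
      nova.servers.live_migrate('SERVER_UUID', host=None,
                                block_migration='auto')
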
  See the following trace, which reports the error "Action for request_id
  req-a06c9561-0458-43c6-b767-08bf67e38b07 on instance
  8a8ed9ad-2fb8-46d4-bee2-9d947e2d3e58 not found":

  
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server [None req-a06c9561-0458-43c6-b767-08bf67e38b07 admin 
admin] Exception during message handling: InstanceActionNotFound: Action for 
request_id req-a06c9561-0458-43c6-b767-08bf67e38b07 on instance 
8a8ed9ad-2fb8-46d4-bee2-9d947e2d3e58 not found
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server Traceback (most recent call last):
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
213, in dispatch
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, 
args)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _do_dispatch
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server result = func(ctxt, **new_args)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/utils.py", line 
863, in decorated_function
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server with EventReporter(context, event_name, 
instance_uuid):
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/utils.py", line 
834, in __enter__
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server self.context, uuid, self.event_name, 
want_result=False)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server result = fn(cls, context, *args, **kwargs)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/objects/instance_action.py", line 169, in event_start
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server db_event = db.action_event_start(context, values)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/db/api.py", line 1958, 
in action_event_start
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server return IMPL.action_event_start(context, values)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", 
line 250, in wrapped
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server return f(context, *args, **kwargs)
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", 
line 6155, in action_event_start
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server instance_uuid=values['instance_uuid'])
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 
oslo_messaging.rpc.server InstanceActionNotFound: Action for request_id 
req-a06c9561-0458-43c6-b767-08bf67e38b07 on instance 
8a8ed9ad-2fb8-46d4-bee2-9d947e2d3e58 not found
  Jul 27 01:57:12 DevStackOSDomU nova-conductor[2134]: ERROR 

[Yahoo-eng-team] [Bug 1516706] Re: Glance v1 API makes requests to the v2 registry

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/431709
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=c74e6bb8ddee8ad1ad2479f3fcfd8396dedef55b
Submitter: Jenkins
Branch: master

commit c74e6bb8ddee8ad1ad2479f3fcfd8396dedef55b
Author: Dharini Chandrasekar 
Date:   Thu Feb 9 18:34:02 2017 +

Prevent v1_api from making requests to v2_registry

In glance v2, when one opts to use v2_registry, it is required that
'data_api' is set to 'glance.db.registry.api'. This is returned by
method 'get_api()' which currently simply returns whatever is provided
to 'data_api'. This is suitable for v2. But when using v1, this same
method is used to fetch the db api. This returns 'glance.db.registry.api'
which in turn relies on the registry rpc client (v2).
To prevent this, this patch proposes to change what get_api()
will return based on whether it is serving v1 api or v2 api.

Change-Id: Ifef36859b3f7692769a6991364b6063c9f7cc451
Closes-Bug: 1516706


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1516706

Title:
  Glance v1 API makes requests to the v2 registry

Status in Glance:
  Fix Released

Bug description:
  If I configure storage quotas with:

  
  user_storage_quota = 6  

  And I enable the v2 registry:

  data_api = glance.db.registry.api

  Then a v1 image create:

  $ glance --os-image-api-version 1 image-create --name x3 --disk-format raw 
--container-format bare --file /etc/fstab
  413 Request Entity Too Large: Denying attempt to upload image because it 
exceeds the quota: The size of the data 145 will exceed the limit. -5794 bytes 
remaining. (HTTP 413)

  Generates the following request to the v2 registry:

  POST /rpc HTTP/1.1.
  Host: 0.0.0.0:9191.
  Accept-Encoding: identity.
  Content-Length: 151.
  x-auth-token: bee70651417c474dac02d6e4e4a5b9fc.
  .
  [{"command": "user_get_storage_usage", "kwargs": {"image_id": 
"c4252759-9c2f-4858-b23a-1b4c87f7b155", "owner_id": 
"411423405e10431fb9c47ac5b2446557"}}]

  Amusingly, this works.

  But I'm pretty sure it's not what we intended.
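
  The direction of the fix, per the commit message above, is to make the
  db api lookup version-aware; schematically it amounts to something like
  this (a hypothetical sketch, not the merged code):

      # Schematic glance.db get_api(): v1 must never be handed the
      # registry-backed db api, which relies on the v2 registry rpc client.
      import importlib

      from oslo_config import cfg

      CONF = cfg.CONF

      def get_api(v1_mode=False):
          if v1_mode and CONF.data_api == 'glance.db.registry.api':
              return importlib.import_module('glance.db.sqlalchemy.api')
          return importlib.import_module(CONF.data_api)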

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1516706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708637] Re: nova does not properly claim resources when server resized to a too big flavor

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/491491
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6b0ab40e4233a480c9bdcca657f594d7e90fadc8
Submitter: Jenkins
Branch: master

commit 6b0ab40e4233a480c9bdcca657f594d7e90fadc8
Author: Balazs Gibizer 
Date:   Mon Aug 7 15:12:25 2017 +0200

Raise NoValidHost if no allocation candidates

Placement took over the role of the CoreFilter, RamFilter and DiskFilter
from the FilterScheduler. Therefore if placement returns no allocation
candidates for a request then scheduling should be stopped as this means
there is not enough VCPU, MEMORY_MB, or DISK_GB available in any compute
node for the request.

Change-Id: If20a20e5cce7ab490998643e32556a1016646b07
Closes-Bug: #1708637


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708637

Title:
  nova does not properly claim resources when server resized to a too
  big flavor

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Problematic scenario: boot a server, then try to resize it to a flavor
  that requests more vCPUs than are available on any compute host.

  If the CoreFilter is enabled, the resize fails with NoValidHost and the
  resource allocation is OK.

  However, if the CoreFilter is not enabled in the FilterScheduler, the
  resize is accepted but the placement API is not updated with the actual
  resource (over)allocation.

  In this case I don't know which would be the expected behavior:
  Option A: No valid host shall be raised
  Option B: Resize is accepted and the resources state are updated properly

  There is a patch proposed with functional tests that reproduces the
  problem https://review.openstack.org/#/c/490814
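
  The merged fix (see the commit message above) stops scheduling as soon
  as placement returns an empty candidate list; schematically (a
  simplified sketch, not the exact nova code):

      # Placement has taken over the CoreFilter/RamFilter/DiskFilter
      # roles, so no allocation candidates means no host can satisfy the
      # requested VCPU/MEMORY_MB/DISK_GB.
      alloc_reqs, provider_summaries = report_client.get_allocation_candidates(
          resources)
      if not alloc_reqs:
          raise exception.NoValidHost(
              reason='Got no allocation candidates from the Placement API.')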

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709788] [NEW] AllocationCandidates returns duplicated allocation requests when no sharing rp

2017-08-10 Thread Alex Xu
Public bug reported:

The case is: computenode1 and computenode2 share storage from
sharedstorage1, while computenode3 has local disk. The expected result
from AllocationCandidates is (computenode1, sharedstorage1),
(computenode2, sharedstorage1) and (computenode3). But the current return
is (computenode1, sharedstorage1), (computenode2, sharedstorage1),
(computenode3) and (computenode3); a duplicated (computenode3) is returned.
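
The expected behavior amounts to deduplicating the candidate list by the
set of providers and resources it touches (an illustrative sketch only,
not the placement code; the input format is assumed to be
{rp_uuid: {resource_class: amount}} per request):

    def dedup_allocation_requests(allocation_requests):
        # Key each request by the frozenset of (provider, class, amount)
        # triples it contains, so (computenode3) only appears once.
        seen = set()
        unique = []
        for req in allocation_requests:
            key = frozenset(
                (rp, rc, amount)
                for rp, resources in req.items()
                for rc, amount in resources.items())
            if key not in seen:
                seen.add(key)
                unique.append(req)
        return unique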

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New


** Tags: placement

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

** Tags added: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709788

Title:
  AllocationCandidates returns duplicated allocation requests when no
  sharing rp

Status in OpenStack Compute (nova):
  New

Bug description:
  The case is: computenode1 and computenode2 share storage from
  sharedstorage1, while computenode3 has local disk. The expected result
  from AllocationCandidates is (computenode1, sharedstorage1),
  (computenode2, sharedstorage1) and (computenode3). But the current
  return is (computenode1, sharedstorage1), (computenode2, sharedstorage1),
  (computenode3) and (computenode3); a duplicated (computenode3) is
  returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1709788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709779] [NEW] rebooting the ovs service loses the dhcp port in the dhcp namespace

2017-08-10 Thread jk
Public bug reported:

a. I installed Ocata on two hosts; all agents work well, as shown below:
[root@controller openstack]# openstack network agent list

  ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary
  -------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------
  1296f653-7e28-47dc-b0c7-73e9fabb695f | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent
  47bd5b59-feb7-47a6-864e-0cf7ed90ab8e | Open vSwitch agent | compute    | None              | True  | UP    | neutron-openvswitch-agent
  9d8f5a9d-2fd4-4c6f-b6d6-1730843738e3 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent
  c420da8e-7028-4589-bd2f-9d25756e08f2 | Open vSwitch agent | controller | None              | True  | UP    | neutron-openvswitch-agent
  f79bf249-874b-422a-9d21-949786fbf367 | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent

[root@controller openstack]# openstack compute service list

  ID | Binary           | Host       | Zone     | Status  | State | Updated At
  ---+------------------+------------+----------+---------+-------+------------------------
   1 | nova-consoleauth | controller | internal | enabled | up    | 2017-08-10T05:47:04.00
   3 | nova-conductor   | controller | internal | enabled | up    | 2017-08-10T05:47:04.00
   7 | nova-scheduler   | controller | internal | enabled | up    | 2017-08-10T05:47:05.00
  10 | nova-compute     | controller | nova     | enabled | up    | 2017-08-10T05:47:06.00
  11 | nova-compute     | compute    | nova     | enabled | up    | 2017-08-10T05:47:09.00


b. Create a tenant network in VLAN mode:
[root@controller openstack]# ip netns
qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6
[root@controller openstack]# ip netns exec 
qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6 ifconfig
lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 1  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapbfe934a3-9d: flags=323  mtu 1500
inet 1.2.3.4  netmask 255.255.255.0  broadcast 1.2.3.255
inet6 fe80::f816:3eff:feed:ea19  prefixlen 64  scopeid 0x20
ether fa:16:3e:ed:ea:19  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 5  bytes 438 (438.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

c. When the OVS service is restarted, port tapbfe934a3-9d in the DHCP namespace is lost:
[root@controller openstack]#  systemctl restart openvswitch
[root@controller openstack]# ip netns exec 
qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6 ifconfig -a
lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 1  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

d. When the DHCP agent is restarted, the port appears again:
[root@controller openstack]# systemctl restart neutron-dhcp-agent.service
[root@controller openstack]# ip netns exec 
qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6 ifconfig -a
lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 1  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapbfe934a3-9d: flags=323  mtu 1500
inet 1.2.3.4  netmask 255.255.255.0  broadcast 1.2.3.255
inet6 fe80::f816:3eff:feed:ea19  prefixlen 64  scopeid 0x20
ether fa:16:3e:ed:ea:19  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 5  bytes 438 (438.0 B)
   

[Yahoo-eng-team] [Bug 1709801] [NEW] Domain scope auth fails when using endpoint filter

2017-08-10 Thread Martins Jakubovics
Public bug reported:

When using the endpoint_filter.sql catalog driver in Newton and
authenticating with domain scope, we fail to receive any endpoints. All
endpoints should be returned instead.
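
For reference, the failing request is a domain-scoped authentication; with
keystoneauth1 it can be reproduced roughly like this (a minimal sketch with
placeholder values):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Scope the token to a domain instead of a project.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin',
                       password='secret',
                       user_domain_name='Default',
                       domain_name='Default')
    sess = session.Session(auth=auth)

    # With the endpoint_filter.sql catalog driver, the catalog in the
    # resulting token comes back empty; all endpoints are expected instead.
    token = sess.get_token()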

** Affects: keystone
 Importance: Undecided
 Assignee: Martins Jakubovics (martins-k)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Martins Jakubovics (martins-k)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1709801

Title:
  Domain scope auth fails when using endpoint filter

Status in OpenStack Identity (keystone):
  New

Bug description:
  When using the endpoint_filter.sql catalog driver in Newton and
  authenticating with domain scope, we fail to receive any endpoints. All
  endpoints should be returned instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1709801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover

2017-08-10 Thread James Page
This bug was fixed in the package neutron - 2:8.4.0-0ubuntu4~cloud0
---

 neutron (2:8.4.0-0ubuntu4~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:8.4.0-0ubuntu4) xenial; urgency=medium
 .
   * d/p/Update-the-host_id-for-network-router_gateway-interf.patch:
 keep the router's gateway interface updated when keepalived
 fails over (LP: #1694337).


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694337

Title:
  Port information (binding:host_id) not updated for
  network:router_gateway after qRouter failover

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Won't Fix
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Impact]

  When using L3 HA and a router fails over to another agent, the port
  holding the network:router_gateway interface does not get its
  binding:host_id property updated to reflect where keepalived moved the
  router.

  [Steps to reproduce]

  0) Deploy a cloud with l3ha enabled
    - If familiar with juju, it's possible to use this bundle 
http://paste.ubuntu.com/24707730/ , but the deployment tool is not relevant

  1) Once it's deployed, configure it and create a router (see
  https://docs.openstack.org/mitaka/networking-guide/deploy-lb-ha-vrrp.html)
    - This is the script used during the troubleshooting
  -8<--
  #!/bin/bash -x

  source novarc  # admin

  neutron net-create ext-net --router:external True
  --provider:physical_network physnet1 --provider:network_type flat

  neutron subnet-create ext-net 10.5.0.0/16 --name ext-subnet
  --allocation-pool start=10.5.254.100,end=10.5.254.199 --disable-dhcp
  --gateway 10.5.0.1 --dns-nameserver 10.5.0.3

  keystone tenant-create --name demo 2>/dev/null
  keystone user-role-add --user admin --tenant demo --role Admin 2>/dev/null

  export TENANT_ID_DEMO=$(keystone tenant-list | grep demo | awk -F'|'
  '{print $2}' | tr -d ' ' 2>/dev/null )

  neutron net-create demo-net --tenant-id ${TENANT_ID_DEMO}
  --provider:network_type vxlan

  env OS_TENANT_NAME=demo neutron subnet-create demo-net 192.168.1.0/24 --name 
demo-subnet --gateway 192.168.1.1
  env OS_TENANT_NAME=demo neutron router-create demo-router
  env OS_TENANT_NAME=demo neutron router-interface-add demo-router demo-subnet
  env OS_TENANT_NAME=demo neutron router-gateway-set demo-router ext-net

  # verification
  neutron net-list
  neutron l3-agent-list-hosting-router demo-router
  neutron router-port-list demo-router
  - 8< ---

  2) Kill the associated master keepalived process for the router
  ps aux | grep keepalived | grep $ROUTER_ID
  kill $PID

  3) Wait until "neutron l3-agent-list-hosting-router demo-router" shows the 
other host as active
  4) Check the binding:host_id property for the interfaces of the router
  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head 
-n -1| awk -F' ' '{print $2}' `; do neutron port-show $ID ; done

  Expected results:

  The interface where the device_owner is network:router_gateway has its
  property binding:host_id set to where the keepalived process is master

  Actual result:

  The binding:host_id is never updated; it stays set to the value
  obtained during the creation of the port.
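
  The check in step 4 can also be scripted with neutronclient (a minimal
  sketch; the router id is a placeholder and sess is an authenticated
  keystoneauth1 session):

      from neutronclient.v2_0 import client

      neutron = client.Client(session=sess)
      ports = neutron.list_ports(device_id='ROUTER_ID')['ports']
      for port in ports:
          # After a failover, the network:router_gateway port keeps a
          # stale binding:host_id instead of the new master's host.
          print(port['device_owner'], port['binding:host_id'])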

  [Regression Potential]
  - This patch changes the UPDATE query for the port bindings in the
  database; a possible regression would manifest as failures in the query
  or an outdated binding:host_id property.

  [Other Info]

  The patches for zesty and yakkety are a direct backport from
  stable/ocata and stable/newton respectively. The patch for xenial is
  NOT merged in stable/xenial because it's already EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709803] [NEW] use of undefined variable hm_status

2017-08-10 Thread sumitjami
Public bug reported:

Traceback (most recent call last):  
 
  File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 93, 
in resource   
result = method(request=request, **args)
 
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in 
wrapped 
setattr(e, '_RETRY_EXCEEDED', True) 
 
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__  
self.force_reraise()
 
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise 
six.reraise(self.type_, self.value, self.tb)
 
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in 
wrapped 
return f(*args, **kwargs)   
 
  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper  
 
ectxt.value = e.inner_exc   
 
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__  
self.force_reraise()
 
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise 
six.reraise(self.type_, self.value, self.tb)
 
  File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper  
 
return f(*args, **kwargs)   
 
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in 
wrapped
traceback.format_exc()) 
 
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__  
self.force_reraise()
 
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise 
six.reraise(self.type_, self.value, self.tb)
 
  File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 119, in 
wrapped
return f(*dup_args, **dup_kwargs)   
 
  File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 251, in 
_handle_action
ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
 
  File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 1223, in statuses  
pool_status["healthmonitor"] = hm_status
 
UnboundLocalError: local variable 'hm_status' referenced before assignment
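
The failure pattern and the obvious remedy look like this (a schematic
sketch; _build_hm_status and the surrounding code are hypothetical, not
the actual neutron_lbaas source):

    # Buggy pattern: hm_status is only assigned inside a conditional, so a
    # pool without a health monitor raises UnboundLocalError.
    def pool_statuses_buggy(pool):
        if pool.healthmonitor:
            hm_status = _build_hm_status(pool.healthmonitor)
        return {"healthmonitor": hm_status}  # may be unbound here

    # Fixed: give the variable a defined default before the conditional.
    def pool_statuses_fixed(pool):
        hm_status = None
        if pool.healthmonitor:
            hm_status = _build_hm_status(pool.healthmonitor)
        return {"healthmonitor": hm_status}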

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709803

Title:
  use of undefined variable hm_status

Status in neutron:
  New

Bug description:
  Traceback (most recent call last):
   
File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 
93, in resource   
  result = method(request=request, **args)  
   
File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in 
wrapped 
  setattr(e, '_RETRY_EXCEEDED', True)   
   
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, 
in __exit__  
  self.force_reraise()  
   
File 

[Yahoo-eng-team] [Bug 1709808] [NEW] Admin user should not get the stacks created by other project user

2017-08-10 Thread qiaomin032
Public bug reported:

Reproduce:
1. Log in as the demo user, a normal user that belongs to the demo
project.
2. Switch to the Project/Orchestration/Stacks page and create a stack
named 'test'; it will be displayed on the page.
3. Log out of demo and log in as an admin user that belongs to the admin
project. The 'test' stack is displayed, which is not expected: it should
not be shown because it belongs to another project.

** Affects: horizon
 Importance: Undecided
 Assignee: qiaomin032 (chen-qiaomin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1709808

Title:
  Admin user should not get the stacks created by other project user

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Reproduce:
  1. Log in as the demo user, a normal user that belongs to the demo
  project.
  2. Switch to the Project/Orchestration/Stacks page and create a stack
  named 'test'; it will be displayed on the page.
  3. Log out of demo and log in as an admin user that belongs to the admin
  project. The 'test' stack is displayed, which is not expected: it should
  not be shown because it belongs to another project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1709808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655182] Re: keystone-manage mapping_engine tester problems

2017-08-10 Thread James Page
This bug was fixed in the package keystone - 2:10.0.2-0ubuntu1~cloud0
---

 keystone (2:10.0.2-0ubuntu1~cloud0) xenial-newton; urgency=medium
 .
   [ Frode Nordahl ]
   * d/p/keystone_manage_mapping_engine_fix.patch: Fix keystone-manage
 mapping_engine usability issues (LP: #1655182).
 .
   [ James Page ]
   * New upstream point release for OpenStack Newton (LP: #1705176).


** Changed in: cloud-archive/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1655182

Title:
  keystone-manage mapping_engine tester problems

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released
Status in keystone source package in Xenial:
  Fix Released
Status in keystone source package in Yakkety:
  Won't Fix

Bug description:
  [Impact]

   * A bug in the keystone-manage tool prevents use of the
  mapping_engine command for testing federation rules.

   * Users of Keystone Federation will not be able to verify their
  mapping rules before pushing these to production.

   * Not being able to test rules before pushing to production is a
  major operational challenge for our users.

   * The proposed upload fixes this by backporting a fix for this issue
  from upstream stable/ocata.

  [Test Case]

   * Deploy keystone using Juju with this bundle:
 http://pastebin.ubuntu.com/24855409/

   * ssh to keystone unit, grab artifacts and run command:
 - mapping.json: http://pastebin.ubuntu.com/24855419/
 - input.txt: http://pastebin.ubuntu.com/24855420/
 - command:
 'keystone-manage mapping_engine --rules mapping.json --input input.txt'

   * Observe that command provides no output and that a Python Traceback
  is printed in /var/log/keystone/keystone.log

   * Install the proposed package, repeat the above steps and observe
  that the command now outputs its interpretation and effect of the
  rules.
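
  For orientation, a minimal federation mapping file of the kind the
  tester consumes can be generated like this (a generic illustration, not
  the pastebin contents referenced above):

      import json

      # Map the REMOTE_USER assertion attribute onto a local user name.
      rules = [{
          "local": [{"user": {"name": "{0}"}}],
          "remote": [{"type": "REMOTE_USER"}],
      }]
      with open('mapping.json', 'w') as f:
          json.dump(rules, f, indent=2)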

  [Regression Potential]

   * keystone-manage mapping_engine is a operational test tool and is
  solely used by the operator to test their rules.

   * The distributed version of this command in Xenial and Yakkety does
  currently not work at all.

   * The change will make the command work as our users expect it to.

  [Original bug description]
  There are several problems with keystone-manage mapping_engine

  * It aborts with a backtrace because of wrong number of arguments
    passed to the RuleProcessor

  * The --engine-debug option does not work.

  * Error messages related to input data are cryptic and imprecise.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1655182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover

2017-08-10 Thread James Page
This bug was fixed in the package neutron - 2:9.4.0-0ubuntu1.1~cloud0
---

 neutron (2:9.4.0-0ubuntu1.1~cloud0) xenial-newton; urgency=medium
 .
   * d/p/Update-the-host_id-for-network-router_gateway-interf.patch:
 keep the router's gateway interface updated when keepalived fails over
 (LP: #1694337).


** Changed in: cloud-archive/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694337

Title:
  Port information (binding:host_id) not updated for
  network:router_gateway after qRouter failover

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Won't Fix
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Impact]

  When using L3 HA and a router fails over to another agent, the port
  holding the network:router_gateway interface does not get its
  binding:host_id property updated to reflect where keepalived moved the
  router.

  [Steps to reproduce]

  0) Deploy a cloud with l3ha enabled
    - If familiar with juju, it's possible to use this bundle 
http://paste.ubuntu.com/24707730/ , but the deployment tool is not relevant

  1) Once it's deployed, configure it and create a router (see
  https://docs.openstack.org/mitaka/networking-guide/deploy-lb-ha-vrrp.html)
    - This is the script used during the troubleshooting
  -8<--
  #!/bin/bash -x

  source novarc  # admin

  neutron net-create ext-net --router:external True
  --provider:physical_network physnet1 --provider:network_type flat

  neutron subnet-create ext-net 10.5.0.0/16 --name ext-subnet
  --allocation-pool start=10.5.254.100,end=10.5.254.199 --disable-dhcp
  --gateway 10.5.0.1 --dns-nameserver 10.5.0.3

  keystone tenant-create --name demo 2>/dev/null
  keystone user-role-add --user admin --tenant demo --role Admin 2>/dev/null

  export TENANT_ID_DEMO=$(keystone tenant-list | grep demo | awk -F'|'
  '{print $2}' | tr -d ' ' 2>/dev/null )

  neutron net-create demo-net --tenant-id ${TENANT_ID_DEMO}
  --provider:network_type vxlan

  env OS_TENANT_NAME=demo neutron subnet-create demo-net 192.168.1.0/24 --name 
demo-subnet --gateway 192.168.1.1
  env OS_TENANT_NAME=demo neutron router-create demo-router
  env OS_TENANT_NAME=demo neutron router-interface-add demo-router demo-subnet
  env OS_TENANT_NAME=demo neutron router-gateway-set demo-router ext-net

  # verification
  neutron net-list
  neutron l3-agent-list-hosting-router demo-router
  neutron router-port-list demo-router
  - 8< ---

  2) Kill the associated master keepalived process for the router
  ps aux | grep keepalived | grep $ROUTER_ID
  kill $PID

  3) Wait until "neutron l3-agent-list-hosting-router demo-router" shows the 
other host as active
  4) Check the binding:host_id property for the interfaces of the router
  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head 
-n -1| awk -F' ' '{print $2}' `; do neutron port-show $ID ; done

  Expected results:

  The interface where the device_owner is network:router_gateway has its
  property binding:host_id set to where the keepalived process is master

  Actual result:

  The binding:host_id is never updated; it stays set to the value
  obtained during the creation of the port.

  [Regression Potential]
  - This patch changes the UPDATE query for the port bindings in the
  database; a possible regression would manifest as failures in the query
  or an outdated binding:host_id property.

  [Other Info]

  The patches for zesty and yakkety are a direct backport from
  stable/ocata and stable/newton respectively. The patch for xenial is
  NOT merged in stable/xenial because it's already EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover

2017-08-10 Thread James Page
This bug was fixed in the package neutron - 2:10.0.2-0ubuntu1.1~cloud0
---

 neutron (2:10.0.2-0ubuntu1.1~cloud0) xenial-ocata; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:10.0.2-0ubuntu1.1) zesty; urgency=medium
 .
   * d/p/Update-the-host_id-for-network-router_gateway-interf.patch:
 keep the router's gateway interface updated when keepalived fails
 over (LP: #1694337).


** Changed in: cloud-archive/ocata
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694337

Title:
  Port information (binding:host_id) not updated for
  network:router_gateway after qRouter failover

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Won't Fix
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Impact]

  When using L3 HA and a router fails over to another agent, the port
  holding the network:router_gateway interface does not get its
  binding:host_id property updated to reflect where keepalived moved the
  router.

  [Steps to reproduce]

  0) Deploy a cloud with l3ha enabled
    - If familiar with juju, it's possible to use this bundle 
http://paste.ubuntu.com/24707730/ , but the deployment tool is not relevant

  1) Once it's deployed, configure it and create a router (see
  https://docs.openstack.org/mitaka/networking-guide/deploy-lb-ha-vrrp.html)
    - This is the script used during the troubleshooting
  -8<--
  #!/bin/bash -x

  source novarc  # admin

  neutron net-create ext-net --router:external True
  --provider:physical_network physnet1 --provider:network_type flat

  neutron subnet-create ext-net 10.5.0.0/16 --name ext-subnet
  --allocation-pool start=10.5.254.100,end=10.5.254.199 --disable-dhcp
  --gateway 10.5.0.1 --dns-nameserver 10.5.0.3

  keystone tenant-create --name demo 2>/dev/null
  keystone user-role-add --user admin --tenant demo --role Admin 2>/dev/null

  export TENANT_ID_DEMO=$(keystone tenant-list | grep demo | awk -F'|'
  '{print $2}' | tr -d ' ' 2>/dev/null )

  neutron net-create demo-net --tenant-id ${TENANT_ID_DEMO}
  --provider:network_type vxlan

  env OS_TENANT_NAME=demo neutron subnet-create demo-net 192.168.1.0/24 --name 
demo-subnet --gateway 192.168.1.1
  env OS_TENANT_NAME=demo neutron router-create demo-router
  env OS_TENANT_NAME=demo neutron router-interface-add demo-router demo-subnet
  env OS_TENANT_NAME=demo neutron router-gateway-set demo-router ext-net

  # verification
  neutron net-list
  neutron l3-agent-list-hosting-router demo-router
  neutron router-port-list demo-router
  - 8< ---

  2) Kill the associated master keepalived process for the router
  ps aux | grep keepalived | grep $ROUTER_ID
  kill $PID

  3) Wait until "neutron l3-agent-list-hosting-router demo-router" shows the 
other host as active
  4) Check the binding:host_id property for the interfaces of the router
  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head 
-n -1| awk -F' ' '{print $2}' `; do neutron port-show $ID ; done

  Expected results:

  The interface where the device_owner is network:router_gateway has its
  property binding:host_id set to where the keepalived process is master

  Actual result:

  The binding:host_id is never updated; it stays set to the value
  obtained during the creation of the port.

  [Regression Potential]
  - This patch changes the UPDATE query for the port bindings in the
  database; a possible regression would manifest as failures in the query
  or an outdated binding:host_id property.

  [Other Info]

  The patches for zesty and yakkety are a direct backport from
  stable/ocata and stable/newton respectively. The patch for xenial is
  NOT merged in stable/xenial because it's already EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709931] [NEW] Windows: exec calls stdout trimmed

2017-08-10 Thread Lucian Petrut
Public bug reported:

At some point, we've switched to an alternative process launcher that
uses named pipes to communicate with the child processes. This
implementation has some issues, truncating the process output in some
situations.
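
The general failure mode is reading from the pipe only once instead of
draining it to EOF; illustratively (a sketch of the pattern, not the
actual launcher code):

    import os

    def read_all(fd, bufsize=4096):
        # Drain a pipe until EOF: a single os.read() may return only part
        # of the child's output, which trims stdout.
        chunks = []
        while True:
            chunk = os.read(fd, bufsize)
            if not chunk:  # EOF: the writer closed its end
                break
            chunks.append(chunk)
        return b''.join(chunks)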

Trace:
http://paste.openstack.org/show/616053/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709931

Title:
  Windows: exec calls stdout trimmed

Status in neutron:
  New

Bug description:
  At some point, we've switched to an alternative process launcher that
  uses named pipes to communicate with the child processes. This
  implementation has some issues, truncating the process output in some
  situations.

  Trace:
  http://paste.openstack.org/show/616053/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709946] [NEW] ServersAdminTestJSON.test_create_server_with_scheduling_hint randomly fails SameHostFilter in cells v1 job

2017-08-10 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/10/488510/33/gate/gate-tempest-dsvm-cells-
ubuntu-xenial/5b1240f/console.html#_2017-08-10_16_13_05_368008

http://logs.openstack.org/10/488510/33/gate/gate-tempest-dsvm-cells-
ubuntu-xenial/5b1240f/logs/screen-n-sch.txt.gz#_Aug_10_15_20_12_743452

Aug 10 15:20:12.743452 ubuntu-xenial-infracloud-vanilla-10374272 nova-
scheduler[18597]: INFO nova.filters [None req-
3faa8b1f-4186-405f-b308-9f65c36657ef tempest-
ServersAdminTestJSON-339468122 tempest-ServersAdminTestJSON-339468122]
Filter SameHostFilter returned 0 hosts

This is the server create request using the same_host scheduler hint:

2017-08-10 16:13:05.372164 | 2017-08-10 15:20:12,433 27140 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_create_server_with_scheduling_hint): 202 POST 
http://15.184.65.159/compute/v2.1/servers 0.590s
2017-08-10 16:13:05.372286 | 2017-08-10 15:20:12,434 27140 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2017-08-10 16:13:05.372461 | Body: {"server": {"flavorRef": "42", 
"name": "tempest-ServersAdminTestJSON-server-1290239585", "imageRef": 
"6e666a3a-86f9-42a1-aaff-de1e0aea4d92"}, "os:scheduler_hints": {"same_host": 
"39ffdbdb-0c34-42bd-b45e-900a9b36b309"}}
2017-08-10 16:13:05.372818 | Response - Headers: {u'content-length': 
'384', u'location': 
'http://15.184.65.159/compute/v2.1/servers/9498ca79-452b-48a4-84d5-0e597208be33',
 u'x-openstack-request-id': 'req-3faa8b1f-4186-405f-b308-9f65c36657ef', 
u'server': 'Apache/2.4.18 (Ubuntu)', u'x-openstack-nova-api-version': '2.1', 
u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', u'date': 'Thu, 
10 Aug 2017 15:20:11 GMT', u'connection': 'close', 'content-location': 
'http://15.184.65.159/compute/v2.1/servers', u'content-type': 
'application/json', u'x-compute-request-id': 
'req-3faa8b1f-4186-405f-b308-9f65c36657ef', u'openstack-api-version': 'compute 
2.1', 'status': '202'}

This is probably a latent cells v1 bug: the timing of syncing information
from the computes up to the scheduler is off, so the SameHostFilter check
fails.
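
The equivalent request with novaclient, for reference (a minimal sketch;
the UUIDs are placeholders and sess is an authenticated keystoneauth1
session):

    from novaclient import client

    nova = client.Client('2.1', session=sess)
    # Ask the scheduler to place the new server on the same host as an
    # existing server; SameHostFilter rejects hosts not running it.
    nova.servers.create(
        name='same-host-server',
        image='IMAGE_UUID',
        flavor='42',
        scheduler_hints={'same_host': 'SERVER_UUID'})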

** Affects: nova
 Importance: Low
 Status: Confirmed


** Tags: cellsv1 same-host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709946

Title:
  ServersAdminTestJSON.test_create_server_with_scheduling_hint randomly
  fails SameHostFilter in cells v1 job

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/10/488510/33/gate/gate-tempest-dsvm-cells-
  ubuntu-xenial/5b1240f/console.html#_2017-08-10_16_13_05_368008

  http://logs.openstack.org/10/488510/33/gate/gate-tempest-dsvm-cells-
  ubuntu-xenial/5b1240f/logs/screen-n-sch.txt.gz#_Aug_10_15_20_12_743452

  Aug 10 15:20:12.743452 ubuntu-xenial-infracloud-vanilla-10374272 nova-
  scheduler[18597]: INFO nova.filters [None req-
  3faa8b1f-4186-405f-b308-9f65c36657ef tempest-
  ServersAdminTestJSON-339468122 tempest-ServersAdminTestJSON-339468122]
  Filter SameHostFilter returned 0 hosts

  This is the server create request using the same_host scheduler hint:

  2017-08-10 16:13:05.372164 | 2017-08-10 15:20:12,433 27140 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_create_server_with_scheduling_hint): 202 POST 
http://15.184.65.159/compute/v2.1/servers 0.590s
  2017-08-10 16:13:05.372286 | 2017-08-10 15:20:12,434 27140 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-08-10 16:13:05.372461 | Body: {"server": {"flavorRef": "42", 
"name": "tempest-ServersAdminTestJSON-server-1290239585", "imageRef": 
"6e666a3a-86f9-42a1-aaff-de1e0aea4d92"}, "os:scheduler_hints": {"same_host": 
"39ffdbdb-0c34-42bd-b45e-900a9b36b309"}}
  2017-08-10 16:13:05.372818 | Response - Headers: {u'content-length': 
'384', u'location': 
'http://15.184.65.159/compute/v2.1/servers/9498ca79-452b-48a4-84d5-0e597208be33',
 u'x-openstack-request-id': 'req-3faa8b1f-4186-405f-b308-9f65c36657ef', 
u'server': 'Apache/2.4.18 (Ubuntu)', u'x-openstack-nova-api-version': '2.1', 
u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', u'date': 'Thu, 
10 Aug 2017 15:20:11 GMT', u'connection': 'close', 'content-location': 
'http://15.184.65.159/compute/v2.1/servers', u'content-type': 
'application/json', u'x-compute-request-id': 
'req-3faa8b1f-4186-405f-b308-9f65c36657ef', u'openstack-api-version': 'compute 
2.1', 'status': '202'}

  This is probably a latent cells v1 bug: the timing of syncing
  information from the computes up to the scheduler is off, so the
  SameHostFilter check fails.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1709938] [NEW] DefaultSubnetPoolsTest is racy

2017-08-10 Thread Jakub Libosvar
Public bug reported:

A default subnetpool can exist only once per IP family in a cloud. Two
tests create a default subnetpool and one test updates a subnetpool to be
the default, and they run in parallel while the check for an existing
default is done at the class level. So it happens that:

 1) the class checks for a default subnetpool; it's not there
 2) test1 creates a default subnetpool -> fine, we now have our unique resource
 3) test2 creates a default subnetpool -> the error we see, because test1
already holds the default

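One way to de-race this is to do the existence check per test rather than
once per class (a hypothetical sketch of the idea; the client and method
names are assumed, not the eventual fix):

    def setUp(self):
        super(DefaultSubnetPoolsTest, self).setUp()
        # Re-check inside each test: a sibling test running in parallel may
        # have created the one allowed default subnetpool since setUpClass.
        body = self.admin_client.list_subnetpools(is_default=True)
        if body['subnetpools']:
            raise self.skipException('a default subnetpool already exists')
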
From the tempest logs:

Step1:
2017-08-10 07:03:12.341 3008 INFO tempest.lib.common.rest_client 
[req-07271c2f-6725-4f77-b676-a55a95adbf7b ] Request 
(DefaultSubnetPoolsTest:setUpClass): 200 GET 
http://10.0.0.103:9696/v2.0/subnetpools 0.418s
Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '200', u'content-length': '18', 
'content-location': 'http://10.0.0.103:9696/v2.0/subnetpools', u'date': 'Thu, 
10 Aug 2017 11:03:12 GMT', u'content-type': 'application/json', u'connection': 
'close', u'x-openstack-request-id': 'req-07271c2f-6725-4f77-b676-a55a95adbf7b'}
Body: {"subnetpools":[]} _log_request_full 
/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py:425

Step2:
2017-08-10 07:03:12.998 3008 INFO tempest.lib.common.rest_client 
[req-6322524f-d1d8-4c7c-abd4-f08862bcec60 ] Request 
(DefaultSubnetPoolsTest:test_admin_create_default_subnetpool): 201 POST 
http://10.0.0.103:9696/v2.0/subnetpools 0.655s
2017-08-10 07:03:12.998 3008 DEBUG tempest.lib.common.rest_client 
[req-6322524f-d1d8-4c7c-abd4-f08862bcec60 ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
Body: {"subnetpool": {"is_default": true, "prefixes": 
["10.11.12.0/24"], "name": "tempest-smoke-subnetpool-2026337716", 
"min_prefixlen": "29"}}
Response - Headers: {'status': '201', u'content-length': '508', 
'content-location': 'http://10.0.0.103:9696/v2.0/subnetpools', u'date': 'Thu, 
10 Aug 2017 11:03:12 GMT', u'content-type': 'application/json', u'connection': 
'close', u'x-openstack-request-id': 'req-6322524f-d1d8-4c7c-abd4-f08862bcec60'}
Body: 
{"subnetpool":{"is_default":true,"description":"","default_quota":null,"tenant_id":"542c5acbca3f49a0bc89d0903eb5c7e5","created_at":"2017-08-10T11:03:12Z","tags":[],"updated_at":"2017-08-10T11:03:12Z","prefixes":["10.11.12.0/24"],"min_prefixlen":"29","max_prefixlen":"32","address_scope_id":null,"revision_number":0,"ip_version":4,"shared":false,"default_prefixlen":"29","project_id":"542c5acbca3f49a0bc89d0903eb5c7e5","id":"dd1b15f4-0dc1-4582-9435-394a5b2bdea9","name":"tempest-smoke-subnetpool-2026337716"}}
 _log_request_full 
/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py:425

Step3:
2017-08-10 07:03:15.667 3008 INFO tempest.lib.common.rest_client 
[req-24e48bfe-473f-44e0-aaf4-8f2debf81a0e ] Request 
(DefaultSubnetPoolsTest:test_convert_subnetpool_to_default_subnetpool): 400 PUT 
http://10.0.0.103:9696/v2.0/subnetpools/fb199e24-a9e2-443f-81cc-3c07c3bd7a20 
0.842s
2017-08-10 07:03:15.668 3008 DEBUG tempest.lib.common.rest_client 
[req-24e48bfe-473f-44e0-aaf4-8f2debf81a0e ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
Body: {"subnetpool": {"is_default": true}}
Response - Headers: {'status': '400', u'content-length': '203', 
'content-location': 
'http://10.0.0.103:9696/v2.0/subnetpools/fb199e24-a9e2-443f-81cc-3c07c3bd7a20', 
u'date': 'Thu, 10 Aug 2017 11:03:15 GMT', u'content-type': 'application/json', 
u'connection': 'close', u'x-openstack-request-id': 
'req-24e48bfe-473f-44e0-aaf4-8f2debf81a0e'}
Body: {"NeutronError": {"message": "Invalid input for operation: A 
default subnetpool for this IP family has already been set. Only one default 
may exist per IP family.", "type": "InvalidInput", "detail": ""}} 
_log_request_full 
/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py:425

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: New


** Tags: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709938

Title:
  DefaultSubnetPoolsTest is racy

Status in neutron:
  New

Bug description:
  A default subnetpool can exist only once per IP family in a cloud. Two
  tests create a default subnetpool and one test updates a subnetpool to
  be the default, and they run in parallel while the check for an existing
  default is done at the class level. So it happens that:

   1) the class checks for a default subnetpool; it's not there
   2) test1 creates a default subnetpool -> fine, we now have our unique
resource
   3) test2 creates a default subnetpool -> the error we see, because
test1 already holds the default

  From the tempest logs:

  Step1:
  2017-08-10 07:03:12.341 3008 INFO tempest.lib.common.rest_client 
[req-07271c2f-6725-4f77-b676-a55a95adbf7b ] Request 

[Yahoo-eng-team] [Bug 1709985] [NEW] test_rebuild_server_in_error_state randomly times out waiting for rebuilding instance to be active

2017-08-10 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/12/491012/12/check/gate-tempest-dsvm-cells-
ubuntu-xenial/4aa3da8/console.html#_2017-08-10_18_58_35_158151

2017-08-10 18:58:35.158151 | 
tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_rebuild_server_in_error_state[id-682cb127-e5bb-4f53-87ce-cb9003604442]
2017-08-10 18:58:35.158207 | 
---
2017-08-10 18:58:35.158221 | 
2017-08-10 18:58:35.158239 | Captured traceback:
2017-08-10 18:58:35.158258 | ~~~
2017-08-10 18:58:35.158281 | Traceback (most recent call last):
2017-08-10 18:58:35.158323 |   File 
"tempest/api/compute/admin/test_servers.py", line 188, in 
test_rebuild_server_in_error_state
2017-08-10 18:58:35.158346 | raise_on_error=False)
2017-08-10 18:58:35.158381 |   File "tempest/common/waiters.py", line 96, 
in wait_for_server_status
2017-08-10 18:58:35.158407 | raise lib_exc.TimeoutException(message)
2017-08-10 18:58:35.158436 | tempest.lib.exceptions.TimeoutException: 
Request timed out
2017-08-10 18:58:35.158525 | Details: 
(ServersAdminTestJSON:test_rebuild_server_in_error_state) Server 
e57c5e75-9a8b-436d-aa53-a545e32c308a failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: REBUILD. Current 
task state: rebuild_spawning.

Looks like this mostly shows up in cells v1 jobs, which wouldn't be
surprising if we missed some state change due to the instance sync to
the top level cell, but it's also happening sometimes in non-cells jobs.
Could be a duplicate bug where we miss or don't get a network change
/ vif plug notification from neutron so we just wait forever.
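
A hedged, simplified sketch of the waiter pattern that is timing out here,
modeled loosely on tempest.common.waiters.wait_for_server_status (the
show_server callable is an assumption):

    import time

    def wait_for_status(show_server, server_id, status, timeout=196):
        start = time.time()
        while time.time() - start < timeout:
            body = show_server(server_id)['server']
            # Done only when the status matches and no task is in flight.
            if (body['status'] == status and
                    body.get('OS-EXT-STS:task_state') is None):
                return body
            time.sleep(1)
        raise Exception('Server %s failed to reach %s within %ss'
                        % (server_id, status, timeout))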

** Affects: nova
 Importance: Low
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709985

Title:
  test_rebuild_server_in_error_state randomly times out waiting for
  rebuilding instance to be active

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/12/491012/12/check/gate-tempest-dsvm-cells-
  ubuntu-xenial/4aa3da8/console.html#_2017-08-10_18_58_35_158151

  2017-08-10 18:58:35.158151 | 
tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_rebuild_server_in_error_state[id-682cb127-e5bb-4f53-87ce-cb9003604442]
  2017-08-10 18:58:35.158207 | 
---
  2017-08-10 18:58:35.158221 | 
  2017-08-10 18:58:35.158239 | Captured traceback:
  2017-08-10 18:58:35.158258 | ~~~
  2017-08-10 18:58:35.158281 | Traceback (most recent call last):
  2017-08-10 18:58:35.158323 |   File 
"tempest/api/compute/admin/test_servers.py", line 188, in 
test_rebuild_server_in_error_state
  2017-08-10 18:58:35.158346 | raise_on_error=False)
  2017-08-10 18:58:35.158381 |   File "tempest/common/waiters.py", line 96, 
in wait_for_server_status
  2017-08-10 18:58:35.158407 | raise lib_exc.TimeoutException(message)
  2017-08-10 18:58:35.158436 | tempest.lib.exceptions.TimeoutException: 
Request timed out
  2017-08-10 18:58:35.158525 | Details: 
(ServersAdminTestJSON:test_rebuild_server_in_error_state) Server 
e57c5e75-9a8b-436d-aa53-a545e32c308a failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: REBUILD. Current 
task state: rebuild_spawning.

  Looks like this mostly shows up in cells v1 jobs, which wouldn't be
  surprising if we missed some state change due to the instance sync to
  the top level cell, but it's also happening sometimes in non-cells
  jobs. Could be a duplicate bug where we miss or don't get a network
  change / vif plug notification from neutron so we just wait forever.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1709985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710003] [NEW] Pseudo-translation generation fails

2017-08-10 Thread Nick Timkovich
Public bug reported:

Pseudo-translations fail to generate; it looks like the logic expects the
"pofile" name to be the babel module, not the string that holds the
filename. Not sure if this is old logic, but I was following along with
https://docs.openstack.org/horizon/latest/contributor/topics/translation.html
#pseudo-translation-tool

$ tox -e manage -- update_catalog -l fr --pseudo
...
  File 
"/Users/npt/Code/Ar/horizon/openstack_dashboard/management/commands/update_catalog.py",
 line 105, in handle
pot_cat = pofile.read_po(f, ignore_obsolete=True)
AttributeError: 'str' object has no attribute 'read_po'
ERROR: InvocationError: '/Users/npt/Code/Ar/horizon/.tox/manage/bin/python 
/Users/npt/Code/Ar/horizon/manage.py update_catalog -l fr --pseudo'

Change submitted to https://review.openstack.org/492670
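
The traceback is a classic name-shadowing failure; a minimal sketch of the
pattern (simplified, not the actual update_catalog.py code):

    from babel.messages import pofile  # babel's PO catalog reader

    def broken(pofile):
        # The parameter shadows the imported module, so "pofile" is now
        # the path string, and str has no read_po attribute.
        with open(pofile) as f:
            return pofile.read_po(f, ignore_obsolete=True)  # AttributeError

    def fixed(path):
        # Renaming the variable keeps the module visible.
        with open(path) as f:
            return pofile.read_po(f, ignore_obsolete=True)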

** Affects: horizon
 Importance: Undecided
 Assignee: Nick Timkovich (nicktimko)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1710003

Title:
  Pseudo-translation generation fails

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Pseudo-translations fail to generate; it looks like the logic expects
  the "pofile" name to be the babel module, not the string that holds the
  filename. Not sure if this is old logic, but I was following along
  with
  https://docs.openstack.org/horizon/latest/contributor/topics/translation.html
  #pseudo-translation-tool

  $ tox -e manage -- update_catalog -l fr --pseudo
  ...
File 
"/Users/npt/Code/Ar/horizon/openstack_dashboard/management/commands/update_catalog.py",
 line 105, in handle
  pot_cat = pofile.read_po(f, ignore_obsolete=True)
  AttributeError: 'str' object has no attribute 'read_po'
  ERROR: InvocationError: '/Users/npt/Code/Ar/horizon/.tox/manage/bin/python 
/Users/npt/Code/Ar/horizon/manage.py update_catalog -l fr --pseudo'

  Change submitted to https://review.openstack.org/492670

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1710003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700852] Re: Slow listing projects for user with many role assignments

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/487143
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=63124f703a81074793360c1b91711b6ee5a76196
Submitter: Jenkins
Branch:master

commit 63124f703a81074793360c1b91711b6ee5a76196
Author: Lance Bragstad 
Date:   Tue Jul 25 17:03:55 2017 +

Cache list projects and domains for user

Listing projects and domains for a user based on their role
assignments was noted as being really slow, especially when users
have a lot of assignments. This commit implements caching to mitigate
the issue while we continue to investigate ways to speed up the
assignment API.

Change-Id: I72e398c65f01aa4f9a37f817d184a13ed01089ce
Closes-Bug: 1700852


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1700852

Title:
  Slow listing projects for user with many role assignments

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  Won't Fix
Status in OpenStack Identity (keystone) ocata series:
  Confirmed

Bug description:
  With a large number of role assignments (e.g 500) it becomes very slow
  to list the projects a user has access to (via /users//projects).
  I'm seeing times of around 4 seconds versus 0.1 for a user with a
  couple of assignments.

  Instrumenting list_projects_for_user
  
(https://github.com/openstack/keystone/blob/stable/newton/keystone/assignment/core.py#L268),
  where each number is the time elapsed since the start of
  list_projects_for_user, I get:

    list_projects_for_user 3.998 role_assignments
    list_projects_for_user 3.999 project_ids
    list_projects_for_user 4.105 list_projects

  The time is spent on the call made to list_role_assignments on line
  269 which in turns calls  _list_effective_role_assignments at
  
https://github.com/openstack/keystone/blob/stable/newton/keystone/assignment/core.py#L986.

  Listing role assignments for a user directly (with GET
  /role_assignments?user.id=) is very fast, which is further
  indication that something in the effective role processing is the
  cause. I haven't yet timed the internal of
  _list_effective_role_assignments.

  Running keystone mitaka (though I believe this would apply to master)
  with users in LDAP but roles and projects managed by keystone.
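
  A hedged sketch of the checkpoint-style instrumentation described above
  (the helper is invented; it just prints seconds elapsed since the start
  of the call, matching the output format shown):

      import time

      def checkpoints(prefix):
          start = time.time()
          def mark(label):
              print('%s %.3f %s' % (prefix, time.time() - start, label))
          return mark

      # inside the instrumented method:
      #   mark = checkpoints('list_projects_for_user')
      #   assignments = self.list_role_assignments(user_id=user_id,
      #                                            effective=True)
      #   mark('role_assignments')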

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1700852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709774] Re: Multiple router_centralized_snat interfaces created during Heat deployment

2017-08-10 Thread Kevin Benton
From my initial findings this does look like some kind of server-side
race condition in Neutron and isn't an issue with Heat.

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
 Assignee: (unassigned) => Swaminathan Vasudevan (swaminathan-vasudevan)

** Changed in: neutron
   Importance: Undecided => High

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709774

Title:
  Multiple router_centralized_snat interfaces created during Heat
  deployment

Status in OpenStack Heat:
  Invalid
Status in neutron:
  Triaged

Bug description:
  While attempting to deploy the attached hot template I ran into a few
  issues:

  1. Multiple router_centralized_snat interfaces are being created.
  2. One router_centralized_snat interface is created, but it's Down.

  When multiple interfaces are created the stack can't be deleted. I
  need to manually delete the additional ports that have been created
  before the stack can be deleted.

  I'm using Newton with OVS+DVR.

  
  I should state up front that the `depends_on` entries in the template are 
more of a last-ditch effort than anything else, and are likely incorrect. 
However, the problem still exists without them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1709774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708005] [NEW] 6 out 10 keystone.tests.unit.test_cert_setup.* unit test cases failed in stable/newton branch

2017-08-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The failures were caused by the format string passed to the openssl
command. Here is the diff that fixes the issue.

$ git diff keystone/common/openssl.py
diff --git a/keystone/common/openssl.py b/keystone/common/openssl.py
index c581e8d..4ea2410 100644
--- a/keystone/common/openssl.py
+++ b/keystone/common/openssl.py
@@ -217,7 +217,7 @@ class BaseCertificateConfigure(object):
 self.exec_command(['openssl', 'ca', '-batch',
'-out', '%(signing_cert)s',
'-config', '%(ssl_config)s',
-   '-days', '%(valid_days)dd',
+   '-days', '%(valid_days)d',
'-cert', '%(ca_cert)s',
'-keyfile', '%(ca_private_key)s',
'-infiles', '%(request_file)s'])
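
The extra 'd' is the whole bug: '%(valid_days)dd' renders the number
followed by a literal 'd', which openssl's -days option rejects. A quick
interpreter check (not keystone code):

    >>> '%(valid_days)dd' % {'valid_days': 3650}
    '3650d'
    >>> '%(valid_days)d' % {'valid_days': 3650}
    '3650'
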
$ uname -a
Linux os-cs-g3w-31.dft.twosigma.com 4.9.0-2-amd64 #1 SMP Debian 4.9.18-1 
(2017-03-30) x86_64 GNU/Linux
$ git branch
  master
* stable/newton
$ git log  | head -4
commit 05a129e54573b6cbda1ec095f4526f2b9ba90a90
Author: Boris Bobrov 
Date:   Tue Apr 25 14:20:36 2017 +

{0}
keystone.tests.unit.test_cert_setup.CertSetupTestCase.test_create_pki_certs_twice_without_rebuild
[0.670882s] ... FAILED

Captured pythonlogging:
~~~
Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
NeedRegenerationException
no value, waiting for create lock
value creation lock  acquired
Calling creation function
Released creation lock
The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
make_dirs 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None group=None
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None(None) group=None(None)
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/openssl.conf'
 mode=0640 user=None(None) group=None(None)
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/index.txt'
 mode=0640 user=None(None) group=None(None)
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/serial'
 mode=0640 user=None(None) group=None(None)
make_dirs 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0750 user=None group=None
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0750 user=None(None) group=None(None)
Running command - openssl genrsa -out 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem
 2048
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem'
 mode=0640 user=None(None) group=None(None)
make_dirs 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None group=None
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None(None) group=None(None)
Running command - openssl req -new -x509 -extensions v3_ca -key 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem
 -out 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/ca.pem 
-days 3650 -config 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/openssl.conf
 -subj /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/ca.pem'
 mode=0644 user=None(None) group=None(None)
make_dirs 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/private'
 mode=0750 user=None group=None
set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/private'
 mode=0750 user=None(None) group=None(None)
Running command - openssl genrsa -out 

[Yahoo-eng-team] [Bug 1708005] Re: 6 out 10 keystone.tests.unit.test_cert_setup.* unit test cases failed in stable/newton branch

2017-08-10 Thread Morgan Fainberg
** Project changed: keystoneauth => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1708005

Title:
  6 out 10 keystone.tests.unit.test_cert_setup.* unit test cases failed
  in stable/newton branch

Status in OpenStack Identity (keystone):
  New

Bug description:
  The failures were caused by the format string passed to the openssl
  command. Here is the diff that fixes the issue.

  $ git diff keystone/common/openssl.py
  diff --git a/keystone/common/openssl.py b/keystone/common/openssl.py
  index c581e8d..4ea2410 100644
  --- a/keystone/common/openssl.py
  +++ b/keystone/common/openssl.py
  @@ -217,7 +217,7 @@ class BaseCertificateConfigure(object):
   self.exec_command(['openssl', 'ca', '-batch',
  '-out', '%(signing_cert)s',
  '-config', '%(ssl_config)s',
  -   '-days', '%(valid_days)dd',
  +   '-days', '%(valid_days)d',
  '-cert', '%(ca_cert)s',
  '-keyfile', '%(ca_private_key)s',
  '-infiles', '%(request_file)s'])
  $ uname -a
  Linux os-cs-g3w-31.dft.twosigma.com 4.9.0-2-amd64 #1 SMP Debian 4.9.18-1 
(2017-03-30) x86_64 GNU/Linux
  $ git branch
master
  * stable/newton
  $ git log  | head -4
  commit 05a129e54573b6cbda1ec095f4526f2b9ba90a90
  Author: Boris Bobrov 
  Date:   Tue Apr 25 14:20:36 2017 +

  {0}
  
keystone.tests.unit.test_cert_setup.CertSetupTestCase.test_create_pki_certs_twice_without_rebuild
  [0.670882s] ... FAILED

  Captured pythonlogging:
  ~~~
  Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
  Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
  Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
  NeedRegenerationException
  no value, waiting for create lock
  value creation lock  acquired
  Calling creation function
  Released creation lock
  The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
  The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
  The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
  The admin_token_auth middleware presents a security risk and should be 
removed from the [pipeline:api_v3], [pipeline:admin_api], and 
[pipeline:public_api] sections of your paste ini file.
  make_dirs 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None group=None
  set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None(None) group=None(None)
  set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/openssl.conf'
 mode=0640 user=None(None) group=None(None)
  set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/index.txt'
 mode=0640 user=None(None) group=None(None)
  set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/serial'
 mode=0640 user=None(None) group=None(None)
  make_dirs 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0750 user=None group=None
  set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0750 user=None(None) group=None(None)
  Running command - openssl genrsa -out 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem
 2048
  set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem'
 mode=0640 user=None(None) group=None(None)
  make_dirs 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None group=None
  set_permissions: 
path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' 
mode=0755 user=None(None) group=None(None)
  Running command - openssl req -new -x509 -extensions v3_ca -key 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem
 -out 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/ca.pem 
-days 3650 -config 
/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/openssl.conf
 -subj /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com
  

[Yahoo-eng-team] [Bug 1709872] [NEW] rhel fails to install packages

2017-08-10 Thread James Lawrence
Public bug reported:

cloudinit/distros/rhel.py is missing an import.

Adding import os to the file fixes the issue.

cloud-init --version
cloud-init 0.7.9
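
A minimal illustration of the failure mode (the helper is hypothetical,
not the cloud-init source): without "import os" at module top, the first
call into any os.* helper raises NameError.

    def config_exists(path):
        return os.path.exists(path)  # NameError: name 'os' is not defined

    # fix: add at the top of cloudinit/distros/rhel.py
    # import os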

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "output from: cloud-init --debug --force single --name 
package-update-upgrade-install"
   
https://bugs.launchpad.net/bugs/1709872/+attachment/4929990/+files/stacktrace.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1709872

Title:
  rhel fails to install packages

Status in cloud-init:
  New

Bug description:
  cloudinit/distros/rhel.py is missing an import.

  Adding import os to the file fixes the issue.

  cloud-init --version
  cloud-init 0.7.9

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1709872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709319] Re: LibvirtConfigGuestDeviceAddressPCI missing format_dom method

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/491822
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=376a902cabd548f1bcc7f7f7f8f98bd2dfa87c89
Submitter: Jenkins
Branch:master

commit 376a902cabd548f1bcc7f7f7f8f98bd2dfa87c89
Author: Vladyslav Drok 
Date:   Tue Aug 8 17:18:37 2017 +0300

Add format_dom for PCI device addresses

In case of having a PCI device, its address can not be output
properly in the instance XML because of the missing format_dom
method. This change adds this method.

Closes-Bug: #1709319
Change-Id: I1a8023ee6e8c85eed1c7c55a21f996371a0dd80a
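
For context, format_dom on these config classes returns the lxml element
that gets serialized into the guest XML. A hedged sketch of the shape such
a method takes (class simplified and renamed, not the merged patch):

    from lxml import etree

    class DeviceAddressPCI(object):
        def __init__(self, domain, bus, slot, function):
            self.domain, self.bus = domain, bus
            self.slot, self.function = slot, function

        def format_dom(self):
            # Produces e.g. <address type="pci" domain="0x0000" .../>
            return etree.Element(
                'address', type='pci',
                domain='0x%04x' % self.domain, bus='0x%02x' % self.bus,
                slot='0x%02x' % self.slot, function='0x%x' % self.function)

    print(etree.tostring(DeviceAddressPCI(0, 0, 5, 0).format_dom()))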


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709319

Title:
  LibvirtConfigGuestDeviceAddressPCI missing format_dom method

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In my case, we had a chain of patches from
  https://review.openstack.org/#/q/topic:bug/1686116 backported to ocata
  downstream. Then, when detaching a ceph volume from a node, the
  following happens:

  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] Traceback 
(most recent call last):
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4835, in 
_driver_detach_volume
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] 
encryption=encryption)
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1393, in 
detach_volume
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] live=live)
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 413, in 
detach_device_with_retry
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] 
_try_detach_device(conf, persistent, live)
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 407, in 
_try_detach_device
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] raise 
exception.DeviceNotFound(device=device)
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in _exit_
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] 
self.force_reraise()
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] 
six.reraise(self.type_, self.value, self.tb)
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 392, in 
_try_detach_device
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] 
self.detach_device(conf, persistent=persistent, live=live)
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 449, in 
detach_device
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] 
self._domain.detachDeviceFlags(device_xml, flags=flags)
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
  nova/nova-compute.log.1:2017-07-31 00:21:24.261 341396 ERROR 
nova.compute.manager [instance: 43304a1b-bfcf-4e78-a9a0-eec1c6eff604] result 

[Yahoo-eng-team] [Bug 1709803] Re: use of undefined variable hm_status

2017-08-10 Thread sumitjami
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709803

Title:
  use of undefined variable hm_status

Status in octavia:
  New

Bug description:
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 93, in resource
      result = method(request=request, **args)
    File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in wrapped
      setattr(e, '_RETRY_EXCEEDED', True)
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
      return f(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
      ectxt.value = e.inner_exc
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
      return f(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in wrapped
      traceback.format_exc())
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 119, in wrapped
      return f(*dup_args, **dup_kwargs)
    File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 251, in _handle_action
      ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
    File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py", line 1223, in statuses
      pool_status["healthmonitor"] = hm_status

  UnboundLocalError: local variable 'hm_status' referenced before assignment
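
  The failure mode is the standard one for a variable bound only inside a
  branch; a hedged illustration (not the neutron-lbaas code):

      def statuses(pool):
          if pool.get('healthmonitor'):
              hm_status = {'id': pool['healthmonitor']}
          # If the pool has no health monitor, hm_status was never bound:
          return {'healthmonitor': hm_status}  # UnboundLocalError

      def statuses_fixed(pool):
          hm_status = None  # initialize before the branch
          if pool.get('healthmonitor'):
              hm_status = {'id': pool['healthmonitor']}
          return {'healthmonitor': hm_status}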

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1709803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674676] Re: The URL listed against the details of identity resources returns 404 Not Found error

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/491934
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=6c8ea57210e91bac53268da2475e509071b575cf
Submitter: Jenkins
Branch:master

commit 6c8ea57210e91bac53268da2475e509071b575cf
Author: Gage Hugo 
Date:   Tue Aug 8 16:25:38 2017 -0500

Add description for relationship links in api-ref

This adds a section within the index file that describes what a
relationship link is and what it is used for in terms of each
operation within keystone. There will be a relationships section
in both v3 and v3-ext.

This should help clarify any confusion that may arise when a user is
viewing the api-ref about what the relationship links are.

Change-Id: I9c6b7959ed6ea682c565c515af0cf509b6a64e5d
Closes-Bug: #1674676


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1674676

Title:
  The URL listed against the details of identity resources returns 404
  Not Found error

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Many of the details referencing the relationship between the operation
  and the resource return 404 Not Found when you try to follow the
  link.

  For example, the details of the endpoint given against the following
  URL:

  https://developer.openstack.org/api-
  ref/identity/v3/index.html?expanded=create-endpoint-detail#create-
  endpoint

  Points to:

  https://docs.openstack.org/api/openstack-identity/3/rel/endpoints

  To describe the relationship but results in a 404 Not Found error.

  This issue is consistent across many of the relationship links.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1674676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709328] Re: the default of the scheduler.enabled_filters config is inconsistent

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/491854
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2fe96819c24eff5a9493a6559f3e8d5b4624a8c9
Submitter: Jenkins
Branch:master

commit 2fe96819c24eff5a9493a6559f3e8d5b4624a8c9
Author: Chris Friesen 
Date:   Tue Aug 8 10:31:54 2017 -0600

Remove ram/disk sched filters from default list

Since we now use placement to verify basic CPU/RAM/disk resources,
we should default to disabling the equivalent scheduler filters.

Oddly enough, CoreFilter was already disabled so now also disable
RamFilter and DiskFilter.

Closes-Bug: #1709328
Change-Id: Ibe1cee1cb2642f61a8d6bf9c3f6bbee4f2c2f414


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709328

Title:
  the default of the scheduler.enabled_filters config is inconsistent

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The default value of enabled_filters in the nova conf should either
  contain CoreFilter, RamFilter, and DiskFilter or none of them. Today it
  contains only RamFilter and DiskFilter. [1]

  [1]
  
https://github.com/openstack/nova/blob/1e5c7b52a403e708dba5a069dd86b628a4cb952c/nova/conf/scheduler.py#L247-L258

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1709328/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675147] Re: Compute flavor management not granular enough by policy and code

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/449288
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a8fd8731d2e5562c5631d6847d4d781ed0a2e772
Submitter: Jenkins
Branch:master

commit a8fd8731d2e5562c5631d6847d4d781ed0a2e772
Author: Rick Bartra 
Date:   Tue Jul 18 17:38:52 2017 -0400

Add policy granularity to the Flavors API

The same policy rule (os_compute_api:os-flavor-manage) is being used
for the create and delete actions of the flavors REST API. It is thus
impossible to provide different RBAC for the create and delete actions
based on roles. To address this, changes are made to have separate
policy rules for each action.

Most other places in nova (and OpenStack in general) have separate
policy rules for each action. This affords the ultimate flexibility
to deployers, who can obviously use the same rule if that is what they
want.

To address backwards compatibility, the new rules added to the
flavor_manage.py policy file, default to the existing rule
(os_compute_api:os-flavor-manage). That way across upgrades this
should ensure if an existing admin has customised the rule, it keeps
working, but folks that know about the new setting can override the
default rule. In addition, a verify_deprecated_policy method is added
to see if the old policy action is being configured instead of the
new actions.

Closes-Bug: #1675147

Co-Authored-By: Felipe Monteiro 
Change-Id: Ic67b52ebac3a47e9fb7e3c0d6c3ce8a6bc539e11


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1675147

Title:
  Compute flavor management not granular enough by policy and code

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We need the Nova policy and code to support more granularity (i.e.
  Create/Delete) for Flavor management. The current policy check only
  checks os_compute_api:os-flavor-manage, and per-action rules are missing
  from the nova policy-in-code. Each API should have its own policy action that it
  checks.

  The new policy checks should be added here:
  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/flavor_manage.py

  Additional policy actions should be added here:
  https://github.com/openstack/nova/blob/master/nova/policies/flavor_manage.py
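
  A hedged sketch of the granular-rule pattern the fix introduces: one
  rule per action, each defaulting to the legacy blanket rule so existing
  customized deployments keep working (defaults follow the commit message;
  registration details simplified):

      from oslo_policy import policy

      BASE = 'os_compute_api:os-flavor-manage'

      flavor_manage_policies = [
          policy.RuleDefault(name=BASE + ':create', check_str='rule:' + BASE),
          policy.RuleDefault(name=BASE + ':delete', check_str='rule:' + BASE),
      ]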

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1675147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710047] [NEW] we only return the first instance info when create server in batches

2017-08-10 Thread huangtianhua
Public bug reported:

nova boot --image 1bc10d6d-1660-47c9-b73b-10e788b5b19a --flavor 1 --nic
net-name=private --min 2

RESP BODY: {"server": {"security_groups": [{"name": "default"}], "OS-
DCF:diskConfig": "MANUAL", "id": "4ead8f97-548d-420e-94a1-5d59ea87fbca",
"links": [{"href": "http://10.3.150.21:8774/v2.1/servers/4ead8f97-548d-
420e-94a1-5d59ea87fbca", "rel": "self"}, {"href":
"http://10.3.150.21:8774/servers/4ead8f97-548d-420e-94a1-5d59ea87fbca;,
"rel": "bookmark"}], "adminPass": "eNCj4mjqARux"}}

Why do we only return the first instance's info when creating servers in
batches? I think we have to return the info for all instances to the user,
at least the instance ids, so the user can get the details by the ids.

** Affects: nova
 Importance: Undecided
 Assignee: huangtianhua (huangtianhua)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => huangtianhua (huangtianhua)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710047

Title:
  we only return the first instance info when create server in batches

Status in OpenStack Compute (nova):
  New

Bug description:
  nova boot --image 1bc10d6d-1660-47c9-b73b-10e788b5b19a --flavor 1
  --nic net-name=private --min 2

  RESP BODY: {"server": {"security_groups": [{"name": "default"}], "OS-
  DCF:diskConfig": "MANUAL", "id": "4ead8f97-548d-420e-
  94a1-5d59ea87fbca", "links": [{"href":
  "http://10.3.150.21:8774/v2.1/servers/4ead8f97-548d-420e-
  94a1-5d59ea87fbca", "rel": "self"}, {"href":
  "http://10.3.150.21:8774/servers/4ead8f97-548d-420e-
  94a1-5d59ea87fbca", "rel": "bookmark"}], "adminPass": "eNCj4mjqARux"}}

  Why do we only return the first instance's info when creating servers in
  batches? I think we have to return the info for all instances to the
  user, at least the instance ids, so the user can get the details by the
  ids.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1710047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710047] Re: we only return the first instance info when create server in batches

2017-08-10 Thread huangtianhua
Maybe I know the reason: in the above case we can set
"return_reservation_id" (true) to ask nova to return the reservation_id,
then use nova list with the reservation_id to get the info for all
instances created in batches.
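
A hedged sketch of the request body being described (the os-multiple-create
behavior; the server name is invented, other values echo the report):

    body = {
        "server": {
            "name": "batch",
            "imageRef": "1bc10d6d-1660-47c9-b73b-10e788b5b19a",
            "flavorRef": "1",
            "min_count": 2,
            "return_reservation_id": "True",
        }
    }
    # The response then carries {"reservation_id": "r-..."} and the
    # instances can be listed with GET /servers/detail?reservation_id=r-...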

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710047

Title:
  we only return the first instance info when create server in batches

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  nova boot --image 1bc10d6d-1660-47c9-b73b-10e788b5b19a --flavor 1
  --nic net-name=private --min 2

  RESP BODY: {"server": {"security_groups": [{"name": "default"}], "OS-
  DCF:diskConfig": "MANUAL", "id": "4ead8f97-548d-420e-
  94a1-5d59ea87fbca", "links": [{"href":
  "http://10.3.150.21:8774/v2.1/servers/4ead8f97-548d-420e-
  94a1-5d59ea87fbca", "rel": "self"}, {"href":
  "http://10.3.150.21:8774/servers/4ead8f97-548d-420e-
  94a1-5d59ea87fbca", "rel": "bookmark"}], "adminPass": "eNCj4mjqARux"}}

  Why do we only return the first instance's info when creating servers in
  batches? I think we have to return the info for all instances to the
  user, at least the instance ids, so the user can get the details by the
  ids.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1710047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284107] Re: Support neutron lbaas entities quota

2017-08-10 Thread Akihiro Motoki
The neutron-lbaas feature has been split out to neutron-lbaas-dashboard.
The quota feature might become pluggable in the future as part of the quota 
refactoring work, but I am not sure.
Anyway, this is no longer part of horizon, so let's mark it as Won't Fix.

** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1284107

Title:
  Support neutron lbaas entities quota

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  This bug was reported in order to monitor the neutron BP about lbaas
  entities quota support:

  https://blueprints.launchpad.net/openstack/?searchtext=neutron-quota-
  extension

  Dashboard horizon should support the new neutron LBaaS quota configuration.
  The horizon change addressing the issue is: 
https://review.openstack.org/#/c/59195/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1284107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615076] [NEW] Keystone server does not define "enabled" attribute for Region but mentions in v3 regions.py

2017-08-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The bug was discovered while writing the region functional tests [1].
The create() and update() calls [2] in regions.py mention the "enabled"
attribute, but the specs [3] don't mention it and the code [4] doesn't
support it. We don't check for "enabled" in the region schema either
[5].

So, it's being stored as an extra attribute and it even works if one
passes {'enabled': 'WHATEVER'}

[1] https://review.openstack.org/#/c/339158/
[2] 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/v3/regions.py
[3] 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#regions-v3-regions
[4] 
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/sql.py#L33-L49
[5] 
https://github.com/openstack/keystone/blob/master/keystone/catalog/schema.py#L17-L43
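
A hedged illustration of why {'enabled': 'WHATEVER'} "works": attributes
not named by the backend fall through into the row's extra blob instead of
being rejected (a simplified model of the sql backend referenced in [4]):

    KNOWN = {'id', 'description', 'parent_region_id'}

    def to_row(payload):
        row = {k: v for k, v in payload.items() if k in KNOWN}
        row['extra'] = {k: v for k, v in payload.items() if k not in KNOWN}
        return row

    print(to_row({'id': 'r1', 'enabled': 'WHATEVER'}))
    # {'id': 'r1', 'extra': {'enabled': 'WHATEVER'}}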

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
Keystone server does not define "enabled" attribute for Region but mentions in 
v3 regions.py
https://bugs.launchpad.net/bugs/1615076
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Identity (keystone).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1702790] Re: DVR Router update task fails when agent restarts

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/481321
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7da955bf46e3e62f775afaad936006497e4320e3
Submitter: Jenkins
Branch:master

commit 7da955bf46e3e62f775afaad936006497e4320e3
Author: Swaminathan Vasudevan 
Date:   Thu Jul 6 16:41:17 2017 -0700

DVR: Fix router_update failure when agent restarts

Router update task fails when agent restarts with DVR routers
as it was failing adding an IP rule to the namespace.

The IP rule matching code was not finding a match for
a rule for an interface since we are not specifying an
IP address, but the resulting rule does have the "any" IP
address in its output, for example, 0.0.0.0/0.  Change
to always supply the IP address.

Change-Id: Ic2e80ebb59ac9e0e0063e5f6e69f3d66abe775a1
Closes-Bug: #1702790
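
A hedged illustration of the matching problem the commit describes: a rule
requested without an address compares unequal to what the kernel reports
back, so the agent thinks the rule is missing (the dictionaries below are
invented, not the agent's data structures):

    requested = {'from': None, 'iif': 'qr-abc', 'table': 16}
    reported = {'from': '0.0.0.0/0', 'iif': 'qr-abc', 'table': 16}
    assert requested != reported  # agent re-adds the rule and fails

    # Always supplying the "any" address canonicalizes the comparison:
    requested['from'] = requested['from'] or '0.0.0.0/0'
    assert requested == reported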


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1702790

Title:
  DVR Router update task fails when agent restarts

Status in neutron:
  Fix Released

Bug description:
  When there is a DVR router with gateway enabled, and the agent
  restarts, the router_update fails and you can see an Error log in
  the l3_agent.log.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1702790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705072] Re: clearing default project_id from users using wrong driver implementation

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/491916
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=d0ad287df397513dd7cb8dd4da0cae383c6b49b0
Submitter: Jenkins
Branch:master

commit d0ad287df397513dd7cb8dd4da0cae383c6b49b0
Author: Lance Bragstad 
Date:   Tue Aug 8 20:31:26 2017 +

Unset project ids for all identity backends

Previously, the default behavior for the callback that unset
default project ids was to only call the method for the default
domain's identity driver. This meant that when a project was deleted,
only the default identity backend would have references to that
project removed. This means it would be possible for other identity
backends to still have references to a project that doesn't exist
because the callback wasn't invoked for that specific backend.

This commit ensures each backend clears project id from a user's
default_project_id attribute when a project is deleted.

Change-Id: Ibb5396f20101a3956fa91d6ff68155d4c00ab0f9
Closes-Bug: 1705072
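
A hedged sketch of the before/after behavior the commit describes (the
driver registry and method names are invented for illustration):

    def unset_default_project_ids(project_id, identity_drivers):
        # before: only the default domain's driver was asked to clean up
        #   identity_drivers['default'].unset_default_project_id(project_id)
        # after: every configured identity backend is cleaned up
        for driver in identity_drivers.values():
            driver.unset_default_project_id(project_id)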


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1705072

Title:
  clearing default project_id from users using wrong driver
  implementation

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  
https://github.com/openstack/keystone/commit/51d5597df729158d15b71e2ba80ab103df5d55f8
  #diff-271e091a68fb7b6526431423e4efe6e5 attempts to clear the default
  project_id for users if/when the project to which that ID belongs is
  deleted. However it only calls the identity driver for a single
  backend (the default driver from /etc/keystone/keystone.conf) instead
  of doing this for all backends like it should. In a multiple-backend
  environment, this will mean that only users in the backend using the
  default driver configuration will have their default project_id field
  cleaned up. Any users in a different backend that were using that
  project_id as their default would not have that appropriately cleaned
  up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1705072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707071] Re: Compute nodes will fight over allocations during migration

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/488510
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5390210a4fa46e2af6b6aec9b41c03147b52760c
Submitter: Jenkins
Branch:master

commit 5390210a4fa46e2af6b6aec9b41c03147b52760c
Author: Jay Pipes 
Date:   Wed Aug 2 17:48:38 2017 -0400

Remove provider allocs in confirm/revert resize

Now that the scheduler creates a doubled-up allocation for the duration
of a move operation (with part of the allocation referring to the
source and part referring to the destination host), we need to remove
the source provider when confirming the resize and remove the
destination provider from the allocation when reverting a resize. This
patch adds this logic in the RT's drop_move_claim() method.

Change-Id: I6f8afe6680f83125da9381c812016b3623503825
Co-Authored-By: Dan Smith 
Fixes-bug: #1707071


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707071

Title:
  Compute nodes will fight over allocations during migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  As far back as Ocata, compute nodes that manage allocations will end
  up overwriting allocations from other compute nodes when doing a
  migration. This stems from the fact that the Resource Tracker was
  designed to manage a per-compute-node set of accounting, but placement
  is per-instance accounting. When we try to create/update/delete
  allocations for instances on compute nodes from the existing resource
  tracker code paths, we end up deleting allocations that apply to other
  compute nodes in the process.

  For example, when an instance A is running against compute1, there is
  an allocation for its resources against that node. When migrating that
  instance to compute2, the target compute (or scheduler) may create
  allocations for instance A against compute2, which overwrite those for
  compute1. Then, compute1's periodic healing task runs, and deletes the
  allocation for instance A against compute2, replacing it with one for
  compute1. When migration completes, compute2 heals again and
  overwrites the allocation with one for the new home of the instance.
  Then, compute1 may delete the allocation it thinks it owns, followed
  finally by another heal on compute2. While this is going on, the
  scheduler (via placement) does not have a consistent view of resources
  to make proper decisions.

  In order to fix this, we need a combination of changes:

  1. There should be allocations against both compute nodes for an instance 
during a migration
  2. Compute nodes should respect the double claim, and not delete allocations 
for instances it used to own, if the allocation has no resources for its 
resource provider
  3. Compute nodes should not delete allocations for instances unless they own 
the instance _and_ the instance is in DELETED/SHELVED_OFFLOADED state
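
  For item 1, a "doubled" allocation looks roughly like this in
  placement's terms: one consumer (the instance) with resources recorded
  against both providers for the duration of the move (a hedged sketch;
  the UUID placeholders are invented):

      allocations = {
          'allocations': [
              {'resource_provider': {'uuid': SRC_RP_UUID},
               'resources': {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 20}},
              {'resource_provider': {'uuid': DST_RP_UUID},
               'resources': {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 20}},
          ],
      }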

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707071/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615084] [NEW] Keystone server should define "type" attribute as a MIME Media Type but accepts everything

2017-08-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The bug was discovered while writing the policies functional tests [1].
Keystone server should define "type" attribute as a MIME Media Type [2]
but accepts anything; for example, a UUID is accepted in [1] while
creating and updating a policy.

[1] 
https://review.openstack.org/#/c/337836/2/keystoneclient/tests/functional/v3/test_policies.py
[2] 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/v3/policies.py

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
Keystone server should define "type" attribute as a MIME Media Type but accepts 
everything
https://bugs.launchpad.net/bugs/1615084
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Identity (keystone).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615076] Re: Keystone server does not define "enabled" attribute for Region but mentions in v3 regions.py

2017-08-10 Thread Morgan Fainberg
** Project changed: python-keystoneclient => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1615076

Title:
  Keystone server does not define "enabled" attribute for Region but
  mentions in v3 regions.py

Status in OpenStack Identity (keystone):
  New

Bug description:
  The bug was discovered while writing the region functional tests [1].
  The create() and update() calls [2] in regions.py mention the
  "enabled" attribute, but the specs [3] don't mention it and the code
  [4] doesn't support it. We don't check for "enabled" in the region
  schema either [5].

  So, it's being stored as an extra attribute and it even works if one
  passes {'enabled': 'WHATEVER'}

  [1] https://review.openstack.org/#/c/339158/
  [2] 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/v3/regions.py
  [3] 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#regions-v3-regions
  [4] 
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/sql.py#L33-L49
  [5] 
https://github.com/openstack/keystone/blob/master/keystone/catalog/schema.py#L17-L43

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1615076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615084] Re: Keystone server should define "type" attribute as a MIME Media Type but accepts everything

2017-08-10 Thread Morgan Fainberg
Keystoneclient has nothing to say about what the server accepts. If
anything this is a keystone issue.

** Project changed: python-keystoneclient => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1615084

Title:
  Keystone server should define "type" attribute as a MIME Media Type
  but accepts everything

Status in OpenStack Identity (keystone):
  New

Bug description:
  The bug was discovered while writing the policies functional tests
  [1]. Keystone server should define "type" attribute as a MIME Media
  Type [2] but accepts anything; for example, a UUID is accepted in [1]
  while creating and updating a policy.

  [1] 
https://review.openstack.org/#/c/337836/2/keystoneclient/tests/functional/v3/test_policies.py
  [2] 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/v3/policies.py
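
  A hedged sketch of what a stricter validation entry could look like,
  since keystone validates request bodies with JSON-schema-style
  definitions (the pattern and property names below are illustrative, not
  the shipped schema):

      _policy_properties = {
          'blob': {'type': 'string'},
          'type': {
              'type': 'string',
              # accept only type/subtype MIME tokens, not any string
              'pattern': '^[-\\w.+]+/[-\\w.+]+$',
              'maxLength': 255,
          },
      }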

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1615084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708167] Re: placement services logs 405 response as untrapped error

2017-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/490021
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=7a895c67b1c2be1a4be17203c06f023aaf371546
Submitter: Jenkins
Branch:master

commit 7a895c67b1c2be1a4be17203c06f023aaf371546
Author: Chris Dent 
Date:   Wed Aug 2 14:19:34 2017 +0100

[placement] Avoid error log on 405 response

Treat HTTPMethodNotAllowed as a WSGI application rather than exception
so that it is not treated as an uncaught exception and logged as an ERROR
in the PlacementHandler before being caught in the FaultWrapper middleware.
Bad method detection is happening outside the context of WebOb wsgify
handling (which automatically catches Webob exceptions and transforms
them into appropriate responses) so we need to do the transformation
ourselves.

This will help to avoid spurious noise in the logs. See the bug report
for more detail.

Change-Id: I6de7c2bffb08f370fcfbd86070c94263ee202f93
Closes-Bug: #1708167


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708167

Title:
  placement services logs 405 response as untrapped error

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the placement service gets a bad method for an existing URL, it
  raises an HTTPMethodNotAllowed exception. It does this outside of the
  WebOb wsgify context, meaning the exception is caught by the
  FaultWrapper middleware, perceived to be an uncaught exception, and
  logged as such, muddying the log files with something that is normal.

  We don't see these log messages in CI because we don't accidentally
  cause 405s. Where we intentionally cause them (in gabbi tests), the
  log message is recorded in the subunit data but not in the test output
  because the tests pass (passing tests do not display those
  messages).[1]

  The fix is to treat the HTTPMethodNotAllowed as a WSGI app instead of
  an exception and call it. When doing that, things work as desired. Fix
  forthcoming.

  
  [1] I discovered this because the subunit files on 
https://review.openstack.org/362766 were exceeding the 50M limit: in that 
change the API sample tests were passing but hitting all kinds of errors with 
the placement fixture (I've since fixed the patch), generating vast amounts of 
log messages on successful tests. Digging in there also revealed the error 
message that this bug wants to deal with.
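
  A stripped-down illustration of the idea, assuming webob; this is a
  stand-in for the placement dispatch code, not the actual nova patch:

      import webob.exc

      def placement_app(environ, start_response):
          if environ['REQUEST_METHOD'] != 'GET':
              # Outside webob's wsgify handling nothing converts a
              # *raised* HTTPMethodNotAllowed into a response, so it
              # bubbles up as an uncaught ERROR. webob exceptions are
              # also WSGI applications, so *calling* one instead yields
              # a clean 405 response.
              error = webob.exc.HTTPMethodNotAllowed()
              return error(environ, start_response)
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return [b'ok']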

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709869] [NEW] test_convert_default_subnetpool_to_non_default fails: Subnet pool could not be found

2017-08-10 Thread Itzik Brown
Public bug reported:

Running test_convert_default_subnetpool_to_non_default fails with the
following error:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/test.py", line 122, in wrapper
return func(*func_args, **func_kwargs)
  File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/api/test_subnetpools.py",
 line 392, in test_convert_default_subnetpool_to_non_default
show_body = self.client.show_subnetpool(subnetpool_id)
  File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/services/network/json/network_client.py",
 line 136, in _show
resp, body = self.get(uri)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 285, in get
return self.request('GET', url, extra_headers, headers)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 659, in request
self._error_checker(resp, resp_body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 765, in _error_checker
raise exceptions.NotFound(resp_body, resp=resp)
tempest.lib.exceptions.NotFound: Object not found
Details: {u'message': u'Subnet pool 428fbe89-0282-4587-a59b-be3957d5c701 could 
not be found.', u'type': u'SubnetPoolNotFound', u'detail': u''}
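
For context, the failing test boils down to roughly this sequence
(paraphrased; the names and the prefix are illustrative, not the exact
test code):

    pool = client.create_subnetpool(
        name='default-pool', prefixes=['192.0.2.0/24'],
        is_default=True)['subnetpool']
    client.update_subnetpool(pool['id'], is_default=False)
    # The read below is what raises SubnetPoolNotFound in the traceback,
    # i.e. the pool is gone (or invisible) right after the conversion.
    client.show_subnetpool(pool['id'])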

Version
===
Pike
python-neutron-tests-11.0.0-0.20170804190459.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Assignee: Itzik Brown (itzikb1)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709869

Title:
  test_convert_default_subnetpool_to_non_default fails: Subnet pool
  could not be found

Status in neutron:
  In Progress

Bug description:
  Running test_convert_default_subnetpool_to_non_default fails with the
  following error:

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/tempest/test.py", line 122, in 
wrapper
  return func(*func_args, **func_kwargs)
File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/api/test_subnetpools.py",
 line 392, in test_convert_default_subnetpool_to_non_default
  show_body = self.client.show_subnetpool(subnetpool_id)
File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/services/network/json/network_client.py",
 line 136, in _show
  resp, body = self.get(uri)
File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 285, in get
  return self.request('GET', url, extra_headers, headers)
File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 659, in request
  self._error_checker(resp, resp_body)
File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 765, in _error_checker
  raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {u'message': u'Subnet pool 428fbe89-0282-4587-a59b-be3957d5c701 
could not be found.', u'type': u'SubnetPoolNotFound', u'detail': u''}

  Version
  ===
  Pike
  python-neutron-tests-11.0.0-0.20170804190459.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710063] [NEW] Unable to find Simplified Chinese in the language list

2017-08-10 Thread Yuko Katabami
Public bug reported:

The Horizon UI deployed from the latest code in the master branch does
not seem to include Simplified Chinese in the language list on the User
Settings page.

Please see the attached screenshot.
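
For reference, the list on that page is driven by Django's LANGUAGES
setting, which deployers can override in local_settings.py. A sketch of
an explicit override (illustrative values, not Horizon's shipped list;
note that newer Django releases use the 'zh-hans' code where older ones
used 'zh-cn', which is one plausible way the entry could drop out):

    # local_settings.py
    LANGUAGES = (
        ('en', 'English'),
        ('ja', 'Japanese'),
        ('zh-hans', 'Simplified Chinese'),
    )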

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "langlist.png"
   
https://bugs.launchpad.net/bugs/1710063/+attachment/4930474/+files/langlist.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1710063

Title:
  Unable to find Simplified Chinese in the language list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Horizon UI deployed from the latest code in the master branch does
  not seem to include Simplified Chinese in the language list on the
  User Settings page.

  Please see the attached screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1710063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710064] [NEW] agent logs filled with log noise after sinkhole patch

2017-08-10 Thread Kevin Benton
Public bug reported:

Aug 11 03:23:30.735163 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
Aug 11 03:23:30.735305 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
Aug 11 03:23:30.735418 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
Aug 11 03:23:30.735525 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
207, in dispatch
Aug 11 03:23:30.735632 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server if not 
(self._is_namespace(target, namespace) and
Aug 11 03:23:30.735739 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
170, in _is_namespace
Aug 11 03:23:30.735853 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server return 
namespace in target.accepted_namespaces
Aug 11 03:23:30.735960 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server 
AttributeError: 'function' object has no attribute


http://logs.openstack.org/55/491555/5/check/gate-tempest-dsvm-py35-ubuntu-xenial/72551bb/logs/screen-q-agt.txt.gz?level=TRACE#_Aug_11_03_18_16_310607
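
A rough sketch of the failure mode, assuming oslo.messaging's dispatch
semantics: the dispatcher reads an optional 'target' attribute from each
RPC endpoint and expects an oslo_messaging.Target there. The endpoint
classes below are hypothetical, not the actual sinkhole code:

    import oslo_messaging

    class GoodEndpoint(object):
        # A real Target scopes namespaces/versions; its
        # accepted_namespaces property is what dispatch() consults.
        target = oslo_messaging.Target(namespace=None, version='1.0')

    class BadEndpoint(object):
        def target(self):  # a *method* named 'target'
            pass

    # getattr(endpoint, 'target', None) on BadEndpoint returns a bound
    # method, so 'namespace in target.accepted_namespaces' raises the
    # AttributeError quoted above.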

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Description changed:

  Aug 11 03:23:30.735163 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  Aug 11 03:23:30.735305 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
  Aug 11 03:23:30.735418 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  Aug 11 03:23:30.735525 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
207, in dispatch
  Aug 11 03:23:30.735632 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server if not 
(self._is_namespace(target, namespace) and
  Aug 11 03:23:30.735739 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
170, in _is_namespace
  Aug 11 03:23:30.735853 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server return 
namespace in target.accepted_namespaces
  Aug 11 03:23:30.735960 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server 
AttributeError: 'function' object has no attribute
+ 
+ 
+ 
http://logs.openstack.org/55/491555/5/check/gate-tempest-dsvm-py35-ubuntu-xenial/72551bb/logs/screen-q-agt.txt.gz?level=TRACE#_Aug_11_03_18_16_310607

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
Milestone: None => pike-rc1

** Changed in: neutron
Milestone: pike-rc1 => pike-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1710064

Title:
  agent logs filled with log noise after sinkhole patch

Status in neutron:
  In Progress

Bug description:
  Aug 11 03:23:30.735163 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  Aug 11 03:23:30.735305 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
  Aug 11 03:23:30.735418 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  Aug 11 03:23:30.735525 ubuntu-xenial-rax-ord-10386499 
neutron-openvswitch-agent[19463]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.5/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
207, in dispatch
  Aug 11 03:23:30.735632 ubuntu-xenial-rax-ord-10386499