[Yahoo-eng-team] [Bug 1775496] [NEW] agentschedulers: concurrent port delete on unscheduling may cause unscheduling to fail

2018-06-06 Thread Kailun Qin
Public bug reported:

When a network is removed from a dhcp agent while the agent concurrently
releases its port, there is a chance that removing the network from the
agent will fail because the target port is not found.
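
As a hedged illustration of where this race bites (not the actual fix; names
are simplified), the unscheduling path could tolerate a port that a concurrent
request has already deleted:

    from neutron_lib import exceptions as n_exc

    def release_dhcp_port(plugin, context, port_id):
        """Delete the agent's DHCP port, tolerating a concurrent delete."""
        try:
            plugin.delete_port(context, port_id)
        except n_exc.PortNotFound:
            # Another request (e.g. a user's port-delete) won the race; the
            # port is already gone, which is the desired end state.
            pass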

The issue can be reproduced on the latest devstack.
Steps to reproduce:
1. Run neutron port-list and identify the port to delete.
2. Remove one network from a dhcp agent: neutron dhcp-agent-network-remove --dhcp_agent xxx --network xxx
   and, at the same time, delete the associated port: neutron port-delete xxx

Failed CLI:
vagrant@control:~/devstack$ neutron dhcp-agent-network-remove --dhcp_agent 
73721261-41c6-4f82-b0f4-ef9a750c7f70 --network net
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
Port 6089a77e-1975-40a5-9d4d-819e0d9e8fd5 could not be found.
Neutron server returns request_ids: ['req-dfecf6a3-8d61-435b-a6a2-919ac6ca972f']
Failed Log:
DEBUG oslo.privsep.daemon [-] privsep: Exception during request[140677005388208]: Network interface tap83924265-3e not found in namespace qdhcp-d57b2982-69e8-4e62-8dd0-6241f204e132. {{(pid=10686) loop /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:449}}
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 445, in loop
    reply = self._process_cmd(*msg)
  File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 428, in _process_cmd
    ret = func(*f_args, **f_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 209, in _wrap
    return func(*args, **kwargs)
  File "/opt/stack/neutron/neutron/privileged/agent/linux/ip_lib.py", line 272, in get_link_attributes
    link = _run_iproute_link("get", device, namespace)[0]
  File "/opt/stack/neutron/neutron/privileged/agent/linux/ip_lib.py", line 130, in _run_iproute_link
    idx = _get_link_id(device, namespace)
  File "/opt/stack/neutron/neutron/privileged/agent/linux/ip_lib.py", line 124, in _get_link_id
    raise NetworkInterfaceNotFound(device=device, namespace=namespace)
NetworkInterfaceNotFound: Network interface tap83924265-3e not found in namespace qdhcp-d57b2982-69e8-4e62-8dd0-6241f204e132.

** Affects: neutron
 Importance: Undecided
 Assignee: Kailun Qin (kailun.qin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kailun Qin (kailun.qin)

** Summary changed:

- agentschedulers: concurrent port delete on unscheduling may cause port not 
found
+ agentschedulers: concurrent port delete on unscheduling may cause 
unscheduling to fail

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775496

Title:
  agentschedulers: concurrent port delete on unscheduling may cause
  unscheduling to fail

Status in neutron:
  New

Bug description:
  When a network is removed from a dhcp agent while the agent concurrently
  releases its port, there is a chance that removing the network from the
  agent will fail because the target port is not found.

  The issue can be reproduced on the latest devstack.
  Steps to reproduce:
  1. Run neutron port-list and identify the port to delete.
  2. Remove one network from a dhcp agent: neutron dhcp-agent-network-remove --dhcp_agent xxx --network xxx
     and, at the same time, delete the associated port: neutron port-delete xxx

  Failed CLI:
  vagrant@control:~/devstack$ neutron dhcp-agent-network-remove --dhcp_agent 
73721261-41c6-4f82-b0f4-ef9a750c7f70 --network net
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Port 6089a77e-1975-40a5-9d4d-819e0d9e8fd5 could not be found.
  Neutron server returns request_ids: 
['req-dfecf6a3-8d61-435b-a6a2-919ac6ca972f']
  Failed Log:
  DEBUG oslo.privsep.daemon [-] privsep: Exception during request[140677005388208]: Network interface tap83924265-3e not found in namespace qdhcp-d57b2982-69e8-4e62-8dd0-6241f204e132. {{(pid=10686) loop /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:449}}
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 445, in loop
      reply = self._process_cmd(*msg)
    File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 428, in _process_cmd
      ret = func(*f_args, **f_kwargs)
    File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 209, in _wrap
      return func(*args, **kwargs)
    File "/opt/stack/neutron/neutron/privileged/agent/linux/ip_lib.py", line 272, in get_link_attributes
      link = _run_iproute_link("get", device, namespace)[0]
    File "/opt/stack/neutron/neutron/privileged/agent/linux/ip_lib.py", line 130, in _run_iproute_link
      idx = _get_link_id(device, namespace)
    File "/opt/stack/neutron/neutron/privileged/agent/linux/ip_lib.py", line 124, in _get_link_id
      raise NetworkInterfaceNotFound(device=device, namespace=namespace)
  NetworkInterfaceNotFound: Network interface tap83924265-3e not found in namespace qdhcp-d57b2982-69e8-4e62-8dd0-6241f204e132.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775496/+subscriptions

[Yahoo-eng-team] [Bug 1769006] Re: reconfigure functional-py35 tests

2018-06-06 Thread Brian Rosmaita
Update to governance repo merged as commit
http://git.openstack.org/cgit/openstack/governance/commit/?id=fdd2be2c733d8409a75ff6d8f3cd343e8ab5f12f

** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1769006

Title:
  reconfigure functional-py35 tests

Status in Glance:
  Fix Released

Bug description:
  Glance never completed the Pike py35 community goal [0].  There are
  two tests currently being skipped when run under py35:

  glance.tests.functional.test_reload.TestReload.test_reload
  glance.tests.functional.test_ssl.TestSSL.test_ssl_ok

  The last patch to touch these tests was
  https://review.openstack.org/#/c/456788/ (Clean up py35 env in
  tox.ini).  The commit message has this comment:

  This patch enables the py35 job in tox.ini to run using ostestr. It
  also fixes a bytes encoding issue in the 'test_wsgi' functional test
  to make progress towards the community goal of enabling python3.5. Two
  other functional tests remain disabled and will need to be addressed
  in a later patch in order to fully complete the community goal -
  'test_ssl' and 'test_reload'. These tests fail due to SSL handshake
  not working in python3.5 when using self-signed certificate and
  authority.

  OpenStack is entering the final stages of the python 3 transition [1],
  so we need to get these fixed and running.

  
  [0] https://etherpad.openstack.org/p/glance-pike-python35-goal
  [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129866.html
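
  As background on the failure mode quoted above, a minimal hedged sketch of
  the kind of client-side SSLContext setup that usually unblocks
  self-signed-certificate tests on Python 3.5+ (assumptions: the test owns
  the CA file, and hostname checking is deliberately relaxed for the local
  listener):

    import ssl

    def make_test_client_context(ca_file):
        """Build a client SSLContext that trusts the test's self-signed CA."""
        ctx = ssl.create_default_context(cafile=ca_file)
        # Self-signed test certs often fail hostname checks, so local
        # functional tests typically disable them.
        ctx.check_hostname = False
        return ctx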

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1769006/+subscriptions



[Yahoo-eng-team] [Bug 1758033] Re: Floating IP QoS scenario tests failing

2018-06-06 Thread LIU Yulong
Fix Released in neutron-12.0.1:
https://github.com/openstack/neutron/commits/12.0.1

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1758033

Title:
  Floating IP QoS scenario tests failing

Status in neutron:
  Fix Released

Bug description:
  In the stable/queens branch it looks like the fip_qos L3 extension driver is
for some reason not loaded, so QoS for floating IPs doesn't work in scenario
tests.
  That makes the scenario tests fail on the stable/queens branch, as for
example here:
http://logs.openstack.org/59/554859/1/check/neutron-tempest-plugin-scenario-linuxbridge/b02500e/logs/testr_results.html.gz
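
  For reference, the driver in question is normally enabled through the L3
  agent's extension list, roughly as below (a hedged sketch of l3_agent.ini;
  the bug is that on stable/queens the extension apparently does not end up
  loaded in these jobs even though it should be):

    [agent]
    extensions = fip_qos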

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1758033/+subscriptions



[Yahoo-eng-team] [Bug 1775418] Re: Swap volume of multiattached volume will corrupt data

2018-06-06 Thread Matt Riedemann
As mentioned in the mailing list, I think this is also something to be
controlled in Cinder during retype or volume live migration since that
would be a fast fail for this scenario:

http://lists.openstack.org/pipermail/openstack-dev/2018-June/131234.html

Otherwise cinder calls swap volume in nova, which will fail back to
cinder, and then cinder has to rollback; it's just easier to fail fast
in the cinder API.
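
A hedged sketch (illustrative names, not the actual Cinder patch) of what
such a fast fail at the Cinder API layer could look like:

    from cinder import exception

    def reject_multiattach_migration(volume):
        """Fail fast before a retype/migration that swap would corrupt."""
        if volume.multiattach and len(volume.volume_attachment) > 1:
            raise exception.InvalidVolume(
                reason="Retype or migration of a multiattach volume with "
                       "multiple active attachments is not supported.")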

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => High

** Tags added: libvirt multiattach volumes

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Triaged

** Changed in: nova/queens
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775418

Title:
  Swap volume of multiattached volume will corrupt data

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  Triaged

Bug description:
  We currently permit the following:

  Create multiattach volumes a and b
  Create servers 1 and 2
  Attach volume a to servers 1 and 2
  swap_volume(server 1, volume a, volume b)

  In fact, we have a tempest test which tests exactly this sequence:
  
api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach

  The problem is that writes from server 2 during the copy operation on
  server 1 will continue to hit the underlying storage, but as server 1
  doesn't know about them they won't be reflected on the copy on volume
  b. This will lead to an inconsistent copy, and therefore data
  corruption on volume b.

  Also, this whole flow makes no sense for a multiattached volume
  because even if we managed a consistent copy all we've achieved is
  forking our data between the 2 volumes. The purpose of this call is to
  allow the operator to move volumes. We need a fundamentally different
  approach for multiattached volumes.

  In the short term we should at least prevent data corruption by
  preventing swap volume of a multiattached volume. This would also
  cause the above tempest test to fail, but as I don't believe it's
  possible to implement the test safely this would be correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1775418/+subscriptions



[Yahoo-eng-team] [Bug 1775295] Re: Queen keystone installation instructions outdated, keystone-managed credential_setup invalid choice

2018-06-06 Thread Gage Hugo
Seems like Ubuntu 16.04 ships with Mitaka, and I think "credential_setup"
was added in Newton, which would explain why the command is missing.

The docs seem to be correct, though, in terms of setting up the Queens
repo.
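
For anyone hitting this, a hedged sketch of getting the Queens packages on
Ubuntu 16.04 via the Ubuntu Cloud Archive (package set as in the install
guide; your deployment may differ):

    # Enable the Queens Ubuntu Cloud Archive, then install keystone from it.
    sudo add-apt-repository cloud-archive:queens
    sudo apt update
    sudo apt install keystone apache2 libapache2-mod-wsgi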

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1775295

Title:
  Queen keystone installation instructions outdated, keystone-managed
  credential_setup invalid choice

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  - [x] This doc is inaccurate in this way:
  This command
  "apt install keystone apache2 libapache2-mod-wsgi"
  installs keystone with keystone-manage version 9.3.0, which doesn't support
the subsequent command:
  "keystone-manage credential_setup --keystone-user keystone --keystone-group
keystone"

  I'm new to OpenStack, so I'm not sure whether this issue is unique to my
environment.
  If this is an actual issue, how do I get around it?

  P.s: I'm installing keystone on a clean installation of Ubuntu 16.04

  Thanks so much in advance!
  ---
  Release: 13.0.1.dev9 on 2018-05-08 06:44
  SHA: 4ca0172fcdb1ce28a1f00d5a0e1bb3d646141803
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/queens/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1775295/+subscriptions



[Yahoo-eng-team] [Bug 1775418] [NEW] Swap volume of multiattached volume will corrupt data

2018-06-06 Thread Matthew Booth
Public bug reported:

We currently permit the following:

Create multiattach volumes a and b
Create servers 1 and 2
Attach volume a to servers 1 and 2
swap_volume(server 1, volume a, volume b)

In fact, we have a tempest test which tests exactly this sequence:
api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach

The problem is that writes from server 2 during the copy operation on
server 1 will continue to hit the underlying storage, but as server 1
doesn't know about them they won't be reflected on the copy on volume b.
This will lead to an inconsistent copy, and therefore data corruption on
volume b.

Also, this whole flow makes no sense for a multiattached volume because
even if we managed a consistent copy all we've achieved is forking our
data between the 2 volumes. The purpose of this call is to allow the
operator to move volumes. We need a fundamentally different approach for
multiattached volumes.

In the short term we should at least prevent data corruption by
preventing swap volume of a multiattached volume. This would also cause
the above tempest test to fail, but as I don't believe it's possible to
implement the test safely this would be correct.
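
A hedged sketch (names are illustrative, not the merged change) of the
short-term guard proposed above, rejecting swap_volume when the volume has
multiple active attachments:

    from webob import exc

    def check_swap_volume_allowed(old_volume):
        """Reject swap_volume for a multiattach volume with >1 attachment."""
        attachments = old_volume.get('attachments', [])
        if old_volume.get('multiattach') and len(attachments) > 1:
            raise exc.HTTPBadRequest(
                explanation="Swapping a multiattach volume with multiple "
                            "active attachments is not supported.")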

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775418

Title:
  Swap volume of multiattached volume will corrupt data

Status in OpenStack Compute (nova):
  New

Bug description:
  We currently permit the following:

  Create multiattach volumes a and b
  Create servers 1 and 2
  Attach volume a to servers 1 and 2
  swap_volume(server 1, volume a, volume b)

  In fact, we have a tempest test which tests exactly this sequence:
  
api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach

  The problem is that writes from server 2 during the copy operation on
  server 1 will continue to hit the underlying storage, but as server 1
  doesn't know about them they won't be reflected on the copy on volume
  b. This will lead to an inconsistent copy, and therefore data
  corruption on volume b.

  Also, this whole flow makes no sense for a multiattached volume
  because even if we managed a consistent copy all we've achieved is
  forking our data between the 2 volumes. The purpose of this call is to
  allow the operator to move volumes. We need a fundamentally different
  approach for multiattached volumes.

  In the short term we should at least prevent data corruption by
  preventing swap volume of a multiattached volume. This would also
  cause the above tempest test to fail, but as I don't believe it's
  possible to implement the test safely this would be correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1775418/+subscriptions



[Yahoo-eng-team] [Bug 1775415] [NEW] pagination for list operations behaves inconsistently

2018-06-06 Thread Sascha Giebner
Public bug reported:

Pagination in the Neutron API behaves inconsistently with the documented
behavior: the next/previous links are not what they should be.

Observed behavior:
A GET call to "floatingips?limit=5&page_reverse=False" returns a first page
of results that includes both a "next" and a "previous" link (it should only
contain a "next" link).

A GET call to "networks?limit=5&page_reverse=False" returns a first result
page that only includes a "previous" link (it should only contain a "next"
link).

It seems that either the links are generated in a faulty manner or the
wrong page is returned (it should be the 1st page).
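
For contrast, a hedged sketch (endpoint and token handling are placeholders)
of how a client is expected to walk these paginated results, relying on a
correct "next" link:

    import requests

    def list_all_networks(base_url, token):
        """Follow rel="next" links until the last page."""
        url = base_url + "/v2.0/networks?limit=5"
        while url:
            body = requests.get(url, headers={"X-Auth-Token": token}).json()
            for net in body.get("networks", []):
                yield net
            # A correctly built first page carries only a "next" link; this
            # bug report observes that link set being wrong.
            links = body.get("networks_links", [])
            url = next((l["href"] for l in links if l["rel"] == "next"), None)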

We are using version 2.0

Thx and best regards,
Sascha

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775415

Title:
  pagination for list operations behaves inconsistently

Status in neutron:
  New

Bug description:
  Pagination in the Neutron API behaves inconsistently with the documented
behavior: the next/previous links are not what they should be.

  Observed behavior:
  A GET call to "floatingips?limit=5&page_reverse=False" returns a first page
of results that includes both a "next" and a "previous" link (it should only
contain a "next" link).

  A GET call to "networks?limit=5&page_reverse=False" returns a first result
  page that only includes a "previous" link (it should only contain a
  "next" link).

  It seems that either the links are generated in a faulty manner or the
  wrong page is returned (it should be the 1st page).

  We are using version 2.0

  Thx and best regards,
  Sascha

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775415/+subscriptions



[Yahoo-eng-team] [Bug 1771707] Re: allocation candidates with nested providers have inappropriate candidates when traits specified

2018-06-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/567150
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0a6b5676886d2fb07bdf0ae88b22f3cd3f1e2b4d
Submitter: Zuul
Branch: master

commit 0a6b5676886d2fb07bdf0ae88b22f3cd3f1e2b4d
Author: Tetsuro Nakamura 
Date:   Thu May 10 06:13:41 2018 +0900

Add traits check in nested provider candidates

This patch adds a trait check in the path used for getting allocation
candidates with nested providers, creating a trait-check helper
function, _check_traits_for_alloc_request().

Change-Id: I728825a03db9f6419c8f4b3fa23aef63ec8baa5e
Blueprint: nested-resource-providers-allocation-candidates
Closes-Bug: #1771707


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771707

Title:
  allocation candidates with nested providers have inappropriate
  candidates when traits specified

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  * We are setting up two compute nodes with NUMA node & PF nested providers,
    and only one PF on cn1 has the HW_NIC_OFFLOAD_GENEVE trait:

    compute node (cn1) [CPU:16, MEMORY_MB:32768]
    ├── cn1_numa0
    │   └── cn1_numa0_pf0 [SRIOV_NET_VF:8]
    └── cn1_numa1
        └── cn1_numa1_pf1 [SRIOV_NET_VF:8] (trait=HW_NIC_OFFLOAD_GENEVE)

    compute node (cn2) [CPU:16, MEMORY_MB:32768]
    ├── cn2_numa0
    │   └── cn2_numa0_pf0 [SRIOV_NET_VF:8]
    └── cn2_numa1
        └── cn2_numa1_pf1 [SRIOV_NET_VF:8]

  * Next request with
  - resources={CPU: 2, MEMORY_MB: 256, SRIOV_NET_VF: 1}
  - required_traits=[HW_NIC_OFFLOAD_GENEVE]

  * The expected result is to get an allocation request with only
"cn1_numa1_pf1":
  [('cn1', fields.ResourceClass.VCPU, 2),
   ('cn1', fields.ResourceClass.MEMORY_MB, 256),
   ('cn1_numa1_pf1', fields.ResourceClass.SRIOV_NET_VF, 1)]

  * But we actually also get an allocation request with "cn1_numa0_pf0" from
the same tree, even though that provider lacks the required trait:
  [('cn1', fields.ResourceClass.VCPU, 2),
   ('cn1', fields.ResourceClass.MEMORY_MB, 256),
   ('cn1_numa1_pf1', fields.ResourceClass.SRIOV_NET_VF, 1)],
  [('cn1', fields.ResourceClass.VCPU, 2),
   ('cn1', fields.ResourceClass.MEMORY_MB, 256),
   ('cn1_numa0_pf0', fields.ResourceClass.SRIOV_NET_VF, 1)]
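
  A hedged sketch of the check the fix introduces (the real helper is
  _check_traits_for_alloc_request(); names below are simplified):

    def satisfies_required_traits(candidate_rps, required_traits):
        """True only if the providers in one candidate allocation request
        collectively offer every required trait."""
        provided = set()
        for rp in candidate_rps:
            provided |= set(rp.get('traits', []))
        return set(required_traits) <= provided

    # The cn1_numa0_pf0 candidate above is filtered out because no provider
    # in that request carries HW_NIC_OFFLOAD_GENEVE.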

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771707/+subscriptions



[Yahoo-eng-team] [Bug 1733364] Re: dhcpagentscheduler extension not documented in api-ref

2018-06-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/565173
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=4a24ba71db7cba6f60f08ed66e59a25b9ba480a0
Submitter: Zuul
Branch: master

commit 4a24ba71db7cba6f60f08ed66e59a25b9ba480a0
Author: Michal Kelner Mishali 
Date:   Mon Apr 30 12:16:16 2018 +0300

Documenting DHCP agent scheduler

Closes-Bug: #1733364

Change-Id: Ibf854d1f8d6cb80d1d831c070e97274f902d142c
Signed-off-by: Michal Kelner Mishali 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733364

Title:
  dhcpagentscheduler extension not documented in api-ref

Status in neutron:
  Fix Released

Bug description:
  The dhcpagentscheduler extension is not documented in our api-ref:
  - The extension needs to be documented on the agents resource as well as the 
attr(s) it adds.
  - The extension needs to be documented on the networks resource as well as 
the attr(s) it adds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733364/+subscriptions



[Yahoo-eng-team] [Bug 1775382] [NEW] neutron-openvswitch-agent cannot start on Windows

2018-06-06 Thread Claudiu Belu
Public bug reported:

Currently, the neutron-openvswitch-agent cannot start on Windows [1] due
to various Linux-centric modules being imported on Windows.

This issue only affects master.


[1] http://paste.openstack.org/show/722788/
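
A hedged sketch of the usual guard for this class of failure (illustrative of
the pattern, not the actual patch): import Linux-only modules conditionally so
that module import itself cannot fail on Windows.

    import os

    if os.name == 'posix':
        from neutron.agent.linux import ip_lib  # Linux-only helpers
    else:
        ip_lib = None  # a Windows-specific implementation is used instead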

** Affects: neutron
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775382

Title:
  neutron-openvswitch-agent cannot start on Windows

Status in neutron:
  In Progress

Bug description:
  Currently, the neutron-openvswitch-agent cannot start on Windows [1]
  due to various Linux-centric modules being imported on Windows.

  This issue only affects master.

  
  [1] http://paste.openstack.org/show/722788/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775382/+subscriptions



[Yahoo-eng-team] [Bug 1775308] Re: Listing placement usages (total or per resource provider) in a new process can result in a 500

2018-06-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/572652
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=724d440122232a5bfdfec51eb0d37ca4f1d748d8
Submitter: Zuul
Branch: master

commit 724d440122232a5bfdfec51eb0d37ca4f1d748d8
Author: Chris Dent 
Date:   Tue Jun 5 18:07:05 2018 -0700

Ensure resource class cache when listing usages

In rare circumstances it is possible to list usages in a new placement
process that has not yet instantiated the _RC_CACHE but for which
there are inventories and allocations in the database (added by
other processes running against the same db). Before this change
that would cause a 500 error (AttributeError) when the Usage objects
in the UsageList were instantiated.

The fix is to add _ensure_rc_cache to the two list methods. The
addition is done there rather than in _from_db_object as the
latter would cause a lot of redundant checks.

While we could probably devise a test for this, it's perhaps good
enough to evaluate the change by inspection. If not, suggestions
welcome.

Change-Id: I00f7dee26f031366dbc0d3d6a03abe89afeb85fd
Closes-Bug: #1775308


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775308

Title:
  Listing placement usages (total or per resource provider) in a new
  process can result in a 500

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When requesting /usages or /resource_providers/{uuid}/usages it is
  possible to cause a 500 error if placement is running in a multi-
  process scenario and the usages query is the first request a process
  has received. This is because the methods which provide UsageLists do
  not _ensure_rc_cache, resulting in:

    File "/usr/lib/python3.6/site-packages/nova/api/openstack/placement/objects/resource_provider.py", line 2374, in _from_db_object
      rc_str = _RC_CACHE.string_from_id(source['resource_class_id'])
    AttributeError: 'NoneType' object has no attribute 'string_from_id'

  We presumably don't see this in our usual testing because any process
  has already had other requests happen, setting the cache.

  For now, the fix is to add the _ensure_rc_cache call in the right
  places, but long term if/when we switch to the os-resource-class model
  we can do the caching or syncing a bit differently (see
  https://review.openstack.org/#/c/553857/ for an example).
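
  A hedged sketch of the lazy-initialization pattern the fix applies (names
  simplified; the real cache is the module-level _RC_CACHE in nova's
  placement resource_provider code):

    _RC_CACHE = None

    def _ensure_rc_cache(load_cache):
        """Idempotently populate the cache; cheap on every later call."""
        global _RC_CACHE
        if _RC_CACHE is None:
            _RC_CACHE = load_cache()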

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1775308/+subscriptions



[Yahoo-eng-team] [Bug 1775371] [NEW] cloud-init (18.2) fails on decoding proc1 env

2018-06-06 Thread Kurt Garloff
Public bug reported:

cloud-init-18.2 on an openSUSE-15 (python 3.6) kiwi image fails with
this:

failed run of stage init-local

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 655, in status_wrapper
    ret = functor(name, args)
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 222, in main_init
    network=not args.local)]
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 156, in attempt_cmdline_url
    cmdline = util.get_cmdline()
  File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 1351, in get_cmdline
    if is_container():
  File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 2075, in is_container
    pid1env = get_proc_env(1)
  File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 2109, in get_proc_env
    contents = load_file(fn)
  File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 1338, in load_file
    return decode_binary(contents)
  File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 150, in decode_binary
    return blob.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3432: invalid start byte


This is from reading /proc/1/environ

hexdump around the relevant offset (3432 = 0x0d68) shows me:
0d50  32 61 32 32 37 37 30 34  66 00 42 4f 4f 54 41 42  |2a227704f.BOOTAB|
0d60  4c 45 5f 46 4c 41 47 3d  80 00 69 6e 69 74 3d 2f  |LE_FLAG=..init=/|

So there we go: BOOTABLE_FLAG=\x80 does not decode to utf-8 ...
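
A hedged sketch of a tolerant read of /proc/1/environ (illustrative; the
eventual cloud-init fix may differ): decode defensively instead of crashing
on the stray \x80 byte.

    def read_proc_environ(path='/proc/1/environ'):
        """Parse pid 1's environment without dying on non-UTF-8 bytes."""
        with open(path, 'rb') as f:
            raw = f.read()
        # BOOTABLE_FLAG=\x80 would raise UnicodeDecodeError under a strict
        # utf-8 decode; errors='replace' keeps the parse alive.
        return dict(e.decode('utf-8', errors='replace').split('=', 1)
                    for e in raw.split(b'\x00') if b'=' in e)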

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1775371

Title:
  cloud-init (18.2) fails on decoding proc1 env

Status in cloud-init:
  New

Bug description:
  cloud-init-18.2 on an openSUSE-15 (python 3.6) kiwi image fails with
  this:

  failed run of stage init-local
  
  Traceback (most recent call last):
    File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 655, in status_wrapper
      ret = functor(name, args)
    File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 222, in main_init
      network=not args.local)]
    File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 156, in attempt_cmdline_url
      cmdline = util.get_cmdline()
    File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 1351, in get_cmdline
      if is_container():
    File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 2075, in is_container
      pid1env = get_proc_env(1)
    File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 2109, in get_proc_env
      contents = load_file(fn)
    File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 1338, in load_file
      return decode_binary(contents)
    File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 150, in decode_binary
      return blob.decode(encoding)
  UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3432: invalid start byte
  

  This is from reading /proc/1/environ

  hexdump around the relevant offset (3432 = 0x0d68) shows me:
  0d50  32 61 32 32 37 37 30 34  66 00 42 4f 4f 54 41 42  |2a227704f.BOOTAB|
  0d60  4c 45 5f 46 4c 41 47 3d  80 00 69 6e 69 74 3d 2f  |LE_FLAG=..init=/|

  So there we go: BOOTABLE_FLAG=\x80 does not decode to utf-8 ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1775371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to: yahoo-eng-team@lists.launchpad.net
Unsubscribe: https://launchpad.net/~yahoo-eng-team
More help: https://help.launchpad.net/ListHelp