[Yahoo-eng-team] [Bug 1776587] [NEW] Configure neutron services on compute and controller node: keystone listens on port 5000 but document instructs to configure the service at 35357

2018-06-12 Thread johnpham
Public bug reported:

Hi everyone,

I was following the document to install and configure the neutron
services on a controller and a compute node. The document specifies that
the services should authenticate at http://controller:35357

"https://docs.openstack.org/neutron/queens/install/compute-install-ubuntu.html"

The services weren't working, and "openstack network agent list" returned
an empty table. However, I realised the keystone service only listens on
port 5000 (/v3).

I also checked whether any process is listening on port 35357 on the
controller, but got nothing: "lsof -i :35357" returned empty.

This might be an issue with the document, however I am not 100% positive,
so it would be great if someone could check and confirm this.
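
For reference, a minimal sketch of the change I would expect in the
compute node's neutron.conf [keystone_authtoken] section (option names
from the Queens install guide; exact values depend on the deployment):

[keystone_authtoken]
# The guide currently points the services at port 35357:
#   auth_url = http://controller:35357
# Since keystone only listens on 5000 here, this appears to be needed:
auth_url = http://controller:5000
auth_uri = http://controller:5000
memcached_servers = controller:11211
auth_type = password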

setup: 2 nodes running Ubuntu 16.04
---
Release: 12.0.3.dev25 on 2018-06-09 01:18
SHA: 9eef1db160521076d8243f1980e681f0f04ecbc6
Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/compute-install-ubuntu.rst
URL: 
https://docs.openstack.org/neutron/queens/install/compute-install-ubuntu.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776587

Title:
  Configure neutron services on compute and controller node: keystone
  listens on port 5000 but document instructs to configure the service
  at 35357

Status in neutron:
  New

Bug description:
  Hi everyone,

  I was following the document to install and configure the neutron
  services on a controller and a compute node. The document specifies
  that the services should authenticate at http://controller:35357

  "https://docs.openstack.org/neutron/queens/install/compute-install-
  ubuntu.html"

  The services weren't working, and "openstack network agent list"
  returned an empty table. However, I realised the keystone service only
  listens on port 5000 (/v3).

  I also checked whether any process is listening on port 35357 on the
  controller, but got nothing: "lsof -i :35357" returned empty.

  This might be an issue with the document, however I am not 100%
  positive, so it would be great if someone could check and confirm
  this.

  setup: 2 nodes running Ubuntu 16.04
  ---
  Release: 12.0.3.dev25 on 2018-06-09 01:18
  SHA: 9eef1db160521076d8243f1980e681f0f04ecbc6
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/compute-install-ubuntu.rst
  URL: 
https://docs.openstack.org/neutron/queens/install/compute-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1776587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621667] Re: cc_locale fails on CentOS 7

2018-06-12 Thread Launchpad Bug Tracker
[Expired for cloud-init because there has been no activity for 60 days.]

** Changed in: cloud-init
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621667

Title:
  cc_locale fails on CentOS 7

Status in cloud-init:
  Expired

Bug description:
  Sep 08 17:14:56 testing03.novalocal cloud-init[1465]: [CLOUDINIT] handlers.py[DEBUG]: finish: modules-config/config-locale: FAIL: running config-locale with frequency once-per-instance
  Sep 08 17:14:56 testing03.novalocal cloud-init[1465]: [CLOUDINIT] util.py[WARNING]: Running module locale () failed
  Sep 08 17:14:56 testing03.novalocal cloud-init[1465]: 2016-09-08 17:14:56,954 - util.py[WARNING]: Running module locale () failed
  Sep 08 17:14:56 testing03.novalocal cloud-init[1465]: [CLOUDINIT] util.py[DEBUG]: Running module locale () failed
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/stages.py", line 785, in _run_modules
      freq=freq)
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/cloud.py", line 70, in run
      return self._runners.run(name, functor, args, freq, clear_on_fail)
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/helpers.py", line 199, in run
      results = functor(*args)
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/config/cc_locale.py", line 37, in handle
      cloud.distro.apply_locale(locale, locale_cfgfile)
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/distros/rhel.py", line 133, in apply_locale
      rhel_util.update_sysconfig_file(out_fn, locale_cfg)
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/distros/rhel_util.py", line 37, in update_sysconfig_file
      (exists, contents) = read_sysconfig_file(fn)
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/distros/rhel_util.py", line 64, in read_sysconfig_file
      return (exists, SysConf(contents))
    File "/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/distros/parsers/sys_conf.py", line 61, in __init__
      write_empty_values=True)
    File "/usr/lib/python2.7/site-packages/configobj.py", line 1242, in __init__
      self._load(infile, configspec)
    File "/usr/lib/python2.7/site-packages/configobj.py", line 1302, in _load
      infile = self._handle_bom(infile)
    File "/usr/lib/python2.7/site-packages/configobj.py", line 1457, in _handle_bom
      if not line.startswith(BOM):
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
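
  The last frames point at the root cause: configobj reads the sysconfig
  file with the default ascii codec and trips over a UTF-16 BOM (0xff).
  A minimal sketch reproducing the same failure outside cloud-init:

  # UTF-16-LE BOM followed by "LANG"; decoding as ascii fails exactly
  # like the traceback above.
  data = b'\xff\xfeL\x00A\x00N\x00G\x00'
  try:
      data.decode('ascii')
  except UnicodeDecodeError as exc:
      # 'ascii' codec can't decode byte 0xff in position 0: ...
      print(exc)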

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1754123] Re: [RFE] Support filter with floating IP address substring

2018-06-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1754123

Title:
  [RFE] Support filter with floating IP address substring

Status in neutron:
  Expired

Bug description:
  This report proposes to introduce a new filter for filtering the
  floatingips list result by a substring of the IP address. For example:

GET /v2.0/floatingips?floating_ip_address_substr=172.24.4.

  This allows users/admins to efficiently retrieve the list of floating
  IP addresses within a network, which is a common usage pattern in real
  production scenarios.

  A use case: a cloud admin finds suspicious traffic from a known
  floating-IP CIDR and wants to locate the targets (i.e. the VMs).
  Retrieving a filtered list of floating IP addresses would be the first
  step for them.
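
  For illustration, the proposed filter could be exercised like this
  once implemented (the parameter does not exist yet; the endpoint and
  token below are placeholders):

  import requests

  NEUTRON_URL = 'http://controller:9696'
  TOKEN = '<auth-token>'

  # Proposed: substring match on floating_ip_address
  resp = requests.get(
      NEUTRON_URL + '/v2.0/floatingips',
      params={'floating_ip_address_substr': '172.24.4.'},
      headers={'X-Auth-Token': TOKEN},
  )
  for fip in resp.json().get('floatingips', []):
      print(fip['floating_ip_address'], fip.get('port_id'))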

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1754123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776566] [NEW] DVR: FloatingIP create throws an error if the L3 agent is not running on the given host

2018-06-12 Thread Swaminathan Vasudevan
Public bug reported:

FloatingIP create throws an error if the L3 agent is not running on the
given host for DVR routers.
This can be reproduced by:
1. Configure the global router settings to be 'Legacy' CVR routers.
2. Then configure a DVR router by manually setting '--distributed=True'
from the CLI.
3. Create a network.
4. Create a subnet.
5. Attach the subnet to the DVR router.
6. Configure the gateway for the router.
7. Then create a VM on the created subnet.
8. Now create a FloatingIP and associate it with the VM port.
9. You will see an 'Internal Server Error' while creating the FloatingIP.

~/devstack$ neutron floatingip-associate 1cafc567-c6fc-4424-9c44-ab7d90bc6ce0 5c95fa16-a8cc-4d93-8f31-988f692e01ae
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Request Failed: internal server error while processing your request.


The reason is that, before creating the 'FloatingIP Agent Gateway Port',
the code looks up the agent by type and host, and raises an exception
because the L3 agent is not running on the compute host.

This is basically a test error, but we should still handle the error
condition and not throw an Internal Server Error.
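
A sketch of the kind of handling I have in mind (illustrative names, not
the actual neutron code): catch the agent lookup failure and degrade
gracefully instead of letting it escape as an Internal Server Error:

from neutron_lib.exceptions import agent as agent_exc
from oslo_log import log

LOG = log.getLogger(__name__)

def create_fip_agent_gw_port_if_not_exists(plugin, context, network_id, host):
    try:
        l3_agent = plugin._get_agent_by_type_and_host(
            context, 'L3 agent', host)
    except agent_exc.AgentNotFoundByTypeHost:
        # No L3 agent on this host: warn and skip instead of raising
        # a 500 to the API caller.
        LOG.warning('No L3 agent running on host %s; skipping FIP '
                    'agent gateway port creation.', host)
        return None
    # ... proceed to create the agent gateway port using l3_agent ...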

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-dvr-backlog

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Importance: Critical => High

** Changed in: neutron
   Importance: High => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776566

Title:
  DVR: FloatingIP create throws an error if the L3 agent is not running
  on the given host

Status in neutron:
  Confirmed

Bug description:
  FloatingIP create throws an error if the L3 agent is not running on
  the given host for DVR routers.
  This can be reproduced by:
  1. Configure the global router settings to be 'Legacy' CVR routers.
  2. Then configure a DVR router by manually setting '--distributed=True'
  from the CLI.
  3. Create a network.
  4. Create a subnet.
  5. Attach the subnet to the DVR router.
  6. Configure the gateway for the router.
  7. Then create a VM on the created subnet.
  8. Now create a FloatingIP and associate it with the VM port.
  9. You will see an 'Internal Server Error' while creating the FloatingIP.

  ~/devstack$ neutron floatingip-associate 1cafc567-c6fc-4424-9c44-ab7d90bc6ce0 5c95fa16-a8cc-4d93-8f31-988f692e01ae
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Request Failed: internal server error while processing your request.


  The reason is that, before creating the 'FloatingIP Agent Gateway
  Port', the code looks up the agent by type and host, and raises an
  exception because the L3 agent is not running on the compute host.

  This is basically a test error, but we should still handle the error
  condition and not throw an Internal Server Error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1776566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776469] Re: neutron-netns-cleanup explodes when trying to delete an OVS internal port

2018-06-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/574712
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7458575cfbc00a9bedf4d514a95e9b891639d5e8
Submitter: Zuul
Branch: master

commit 7458575cfbc00a9bedf4d514a95e9b891639d5e8
Author: Miguel Angel Ajo 
Date:   Tue Jun 12 14:35:39 2018 +0200

Convert missing exception on device.link.delete()

Once we started using oslo.privsep, the call to device.link.delete()
should raise RuntimeError when the device can't be handled by ip link,
for example when it's an OVS internal device.

Closes-Bug: #1776469

Change-Id: Ibf4b0bbb54aef38fc569036880668c745cb5c096


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776469

Title:
  neutron-netns-cleanup explodes when trying to delete an OVS internal
  port

Status in neutron:
  Fix Released

Bug description:
  
  Apparently, the exception is not bubbling up out of privsep, and the cleanup 
exits instead of retrying with ovsdb del port:

  
https://github.com/openstack/neutron/blob/100491cec72ecf694cc8cbd6cd17b66a191a5bd7/neutron/cmd/netns_cleanup.py#L124

  
  def unplug_device(conf, device):
  orig_log_fail_as_error = device.get_log_fail_as_error()
  device.set_log_fail_as_error(False)
  try:
  device.link.delete()
  except RuntimeError:
  device.set_log_fail_as_error(orig_log_fail_as_error)
  # Maybe the device is OVS port, so try to delete
  ovs = ovs_lib.BaseOVS()
  bridge_name = ovs.get_bridge_for_iface(device.name)
  if bridge_name:
  bridge = ovs_lib.OVSBridge(bridge_name)
  bridge.delete_port(device.name)
  else:
  LOG.debug('Unable to find bridge for device: %s', device.name)
  finally:
  device.set_log_fail_as_error(orig_log_fail_as_error)
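
  The fix referenced above converts the failure coming out of privsep
  back into the RuntimeError this except clause expects. A sketch of the
  idea (_delete_interface_privileged is a hypothetical stand-in for the
  privileged helper):

  from pyroute2 import NetlinkError

  def delete_interface(ifname, namespace):
      try:
          _delete_interface_privileged(ifname, namespace)
      except NetlinkError as e:
          # Re-raise as RuntimeError so unplug_device() falls back to
          # the OVS port-deletion path.
          raise RuntimeError(
              'Failed deleting device %s: %s' % (ifname, e))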


  neutron-netns-cleanup --config-file /usr/share/neutron/neutron-
  dist.conf --config-dir /usr/share/neutron/l3_agent --config-file
  /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini
  --config-dir /etc/neutron/conf.d/common --config-dir
  /etc/neutron/conf.d/neutron-l3-agent --agent-type l3 -d --force

  
  2018-06-12 11:39:26.868 254573 INFO neutron.common.config [-] Logging enabled!
  2018-06-12 11:39:26.868 254573 INFO neutron.common.config [-] /usr/bin/neutron-netns-cleanup version 13.0.0.0b2.dev174
  2018-06-12 11:39:26.868 254573 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-netns-cleanup --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --agent-type l3 -d --force setup_logging /usr/lib/python2.7/site-packages/neutron/common/config.py:104
  2018-06-12 11:39:26.869 254573 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/l3_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-l3-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpNU7Loh/privsep.sock']
  2018-06-12 11:39:27.455 254573 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
  2018-06-12 11:39:27.456 254573 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpNU7Loh/privsep.sock __init__ /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:331
  2018-06-12 11:39:27.386 254707 INFO oslo.privsep.daemon [-] privsep daemon starting
  2018-06-12 11:39:27.390 254707 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
  2018-06-12 11:39:27.395 254707 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
  2018-06-12 11:39:27.395 254707 INFO oslo.privsep.daemon [-] privsep daemon running as pid 254707
  2018-06-12 11:39:27.458 254707 DEBUG oslo.privsep.daemon [-] privsep: request[140529299646096]: (1,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
  2018-06-12 11:39:27.458 254707 DEBUG oslo.privsep.daemon [-] privsep: reply[140529299646096]: (2,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
  2018-06-12 11:39:27.458 254707 DEBUG oslo.privsep.daemon [-] privsep: request[140529299646096]: (3, 'neutron.privileged.agent.linux.ip_lib.list_netns', (), {}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
  2018-06-12 11:39:27.501 254707 DEBUG oslo.privsep.daemon [-] privsep: reply[140529299646096]: (4, ['qdhcp-64aa11b0-d9ff-47c3-9a44-2906bc22d724',

[Yahoo-eng-team] [Bug 1776468] Re: neutron-netns-cleanup does not configure privsep correctly

2018-06-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/574703
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5106dfe5217b5274305ab565e23dbd1548c1f756
Submitter: Zuul
Branch: master

commit 5106dfe5217b5274305ab565e23dbd1548c1f756
Author: Miguel Angel Ajo 
Date:   Tue Jun 12 14:02:58 2018 +0200

Configure privsep helper in neutron-netns-cleanup

This closes a bug that makes netns-cleanup crash when
trying to invoke privsep helper, because the rootwrap
config isn't correctly passed down to the privsep helper
library.

Closes-Bug: #1776468

Change-Id: I8258a44a9e2542ec222ebac72c4b889858ab2fc2
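
A sketch of the gist (the exact call site in neutron may differ): point
oslo.privsep at the configured root helper before any privileged call,
so privsep-helper is not spawned via bare sudo:

import shlex

from oslo_privsep import priv_context

def setup_privsep(root_helper):
    # e.g. root_helper = 'sudo neutron-rootwrap /etc/neutron/rootwrap.conf'
    # Without this, the helper launch dies with
    # "sudo: no tty present and no askpass program specified".
    priv_context.init(root_helper=shlex.split(root_helper))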


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776468

Title:
  neutron-netns-cleanup does not configure privsep correctly

Status in neutron:
  Fix Released

Bug description:
  It crashes when trying to invoke privsep:

  
  2018-06-12 10:37:05.932 1038529 INFO neutron.common.config [-] Logging enabled!
  2018-06-12 10:37:05.932 1038529 INFO neutron.common.config [-] /usr/bin/neutron-netns-cleanup version 13.0.0.0b2.dev174
  2018-06-12 10:37:05.932 1038529 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-netns-cleanup --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --agent-type l3 -d --force setup_logging /usr/lib/python2.7/site-packages/neutron/common/config.py:104
  2018-06-12 10:37:05.933 1038529 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/l3_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-l3-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpwc58JK/privsep.sock']
  2018-06-12 10:37:05.954 1038529 WARNING oslo.privsep.daemon [-] privsep log:
  2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log: We trust you have received the usual lecture from the local System
  2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log: Administrator. It usually boils down to these three things:
  2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log:
  2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log: #1) Respect the privacy of others.
  2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log: #2) Think before you type.
  2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log: #3) With great power comes great responsibility.
  2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log:
  2018-06-12 10:37:05.956 1038529 WARNING oslo.privsep.daemon [-] privsep log: sudo: no tty present and no askpass program specified
  2018-06-12 10:37:05.955 1038529 CRITICAL oslo.privsep.daemon [-] privsep helper command exited non-zero (1)
  2018-06-12 10:37:05.961 1038529 CRITICAL neutron [-] Unhandled error: FailedToDropPrivileges: privsep helper command exited non-zero (1)
  2018-06-12 10:37:05.961 1038529 ERROR neutron Traceback (most recent call last):
  2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/bin/neutron-netns-cleanup", line 10, in <module>
  2018-06-12 10:37:05.961 1038529 ERROR neutron     sys.exit(main())
  2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/cmd/netns_cleanup.py", line 289, in main
  2018-06-12 10:37:05.961 1038529 ERROR neutron     cleanup_network_namespaces(conf)
  2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/cmd/netns_cleanup.py", line 259, in cleanup_network_namespaces
  2018-06-12 10:37:05.961 1038529 ERROR neutron     ip_lib.list_network_namespaces()
  2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 1100, in list_network_namespaces
  2018-06-12 10:37:05.961 1038529 ERROR neutron     return privileged.list_netns(**kwargs)
  2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 206, in _wrap
  2018-06-12 10:37:05.961 1038529 ERROR neutron     self.start()
  2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 217, in start
  2018-06-12 10:37:05.961 1038529 ERROR neutron     channel = daemon.RootwrapClientChannel(context=self)
  2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line

[Yahoo-eng-team] [Bug 1776541] [NEW] Enhance PCI-DSS compliance documentation

2018-06-12 Thread John Dennis
Public bug reported:

Keystone provides some documentation on PCI-DSS compliance but it's less
than ideal if you're trying to answer the following questions:

* What are the PCI-DSS requirements?
* How does Keystone satisfy the requirements?
* What release did Keystone add support for a given requirement?
* How do you configure Keystone to meet the requirement?

You'll discover the information is (mostly) there but it's scattered
across several documents, release notes, etc. It would be good to have
one document that pulls all the information listed above into one
location to serve as a focal point for those needing to understand PCI-
DSS compliance.

I have written such a document. Rather than duplicating the information
in the other documents, it references the information via links where
possible.

This bug is mostly to have something to reference for the Gerrit review
for when the doc is submitted.

** Affects: keystone
 Importance: Undecided
 Assignee: John Dennis (jdennis-a)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => John Dennis (jdennis-a)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1776541

Title:
  Enhance PCI-DSS compliance documentation

Status in OpenStack Identity (keystone):
  New

Bug description:
  Keystone provides some documentation on PCI-DSS compliance but it's
  less than ideal if you're trying to answer the following questions:

  * What are the PCI-DSS requirements?
  * How does Keystone satisfy the requirements?
  * What release did Keystone add support for a given requirement?
  * How do you configure Keystone to meet the requirement?

  You'll discover the information is (mostly) there but it's scattered
  across several documents, release notes, etc. It would be good to have
  one document that pulls all the information listed above into one
  location to serve as a focal point for those needing to understand
  PCI-DSS compliance.

  I have written such a document. Rather than duplicating the
  information in the other documents, it references the information via
  links where possible.

  This bug is mostly to have something to reference for the Gerrit
  review for when the doc is submitted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1776541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771728] Re: libvirt: Shared Resource Provider (RP) DISK_GB is NOT taken into account if it's configured with Compute Node RPs

2018-06-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/560459
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=7e48b227d0622b268f74d1a559aca4cd3784f7cf
Submitter: Zuul
Branch: master

commit 7e48b227d0622b268f74d1a559aca4cd3784f7cf
Author: Eric Fried 
Date:   Wed Apr 11 09:47:49 2018 -0500

libvirt: Don't report DISK_GB if sharing

For libvirt, if the operator wishes to use shared storage, they must
manually configure the sharing resource provider in placement and
associate it via aggregate with the compute node.  However, the libvirt
driver was still reporting the (same) DISK_GB inventory in the compute
node provider.

With this patch, we check the provider tree to see if a sharing provider
of DISK_GB is present.  If so, we don't report that inventory - because
it's already accounted for by the sharing provider.

Co-Authored-By: Bhagyashri Shewale 
Closes-Bug: #1771728
Change-Id: Iea283322124cb35fc0bc6d25f35548621e8c8c2f
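
A sketch of the check the commit describes (illustrative, not the exact
nova code): if any other provider in the tree shares DISK_GB via the
MISC_SHARES_VIA_AGGREGATE trait, skip reporting local disk inventory:

def should_report_local_disk(provider_tree, compute_rp_uuid):
    for rp_uuid in provider_tree.get_provider_uuids():
        if rp_uuid == compute_rp_uuid:
            continue
        data = provider_tree.data(rp_uuid)
        if ('MISC_SHARES_VIA_AGGREGATE' in data.traits
                and 'DISK_GB' in data.inventory):
            # A sharing provider already accounts for DISK_GB, so the
            # compute node RP must not report the same inventory again.
            return False
    return True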


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771728

Title:
  libvirt: Shared Resource Provider (RP) DISK_GB is NOT taken into
  account if it's configured with Compute Node RPs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If a user configures a shared resource provider for DISK_GB resources
  and links it with the compute node resource provider using aggregates,
  the DISK_GB resources are still allocated from the compute node
  resource provider instead of the shared resource provider.

  Environment Details:

  commit id: 0051e622e90ef4ea2678f3a1135043da460cad19

  1. Compute Host A
  2. NFS server on Host B
  3. The instance path (where disk files are stored) on each compute node
  is mounted from the NFS server storage.

  Steps to reproduce:

  1. Create shared resource provider

  $ curl -g -i -X POST http://<placement-endpoint>/placement/resource_providers -H "Content-Type: application/json" -H "X-Auth-Token: <token>" -H "OpenStack-API-Version: placement latest" -d '{"name": "shared-disk", "uuid": "<shared-rp-uuid>"}'

  2. Create inventory DISK_GB against the shared resource provider
  created in step 1.

  $ curl -g -i -X POST http://<placement-endpoint>/placement/resource_providers/<shared-rp-uuid>/inventories -H "Content-Type: application/json" -H "X-Auth-Token: <token>" -d '{"resource_class": "DISK_GB", "total": 78, "reserved": 0, "min_unit": 1, "max_unit": 78, "step_size": 1, "allocation_ratio": 1.0}'

  3. Create an aggregate

  $ nova aggregate-create shared_resource_aggregate

  4. Link both the compute node RP and the shared resource provider to
  the aggregate created in step 3

  $ curl -g -i -X PUT http://<placement-endpoint>/placement/resource_providers/<rp-uuid>/aggregates -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: placement latest" -H "x-auth-token: <token>" -d '{"aggregates": [ "<aggregate-uuid>" ], "resource_provider_generation": 1}'

  5. Add MISC_SHARES_VIA_AGGREGATE trait to shared resource provider

  $ curl -g -i -X PUT http://<placement-endpoint>/placement/resource_providers/<shared-rp-uuid>/traits -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: placement latest" -H "x-auth-token: <token>" -d '{"traits": ["MISC_SHARES_VIA_AGGREGATE"], "resource_provider_generation": 1}'

  6. Boot the instance:

  Flavor Details:

  $ nova boot --flavor 1 --image  

  7. Check usages of compute node resource provider:

  $ curl -g -i -X GET http://<placement-endpoint>/placement/resource_providers/<compute-node-rp-uuid>/usages -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: placement latest" -H "x-auth-token: 27903d0f-cc28-45ae-ae2e-3105c9e640b9"
  HTTP/1.1 200 OK
  Date: Wed, 28 Mar 2018 06:22:59 GMT
  Server: Apache/2.4.18 (Ubuntu)
  Content-Length: 90
  Content-Type: application/json
  Cache-Control: no-cache
  Last-Modified: Wed, 28 Mar 2018 06:22:59 GMT
  openstack-api-version: placement 1.21
  vary: openstack-api-version
  x-openstack-request-id: req-b3fe929f-187f-47d6-92a6-03c605a39848
  Connection: close

  {"resource_provider_generation": 5, "usages": {"VCPU": 1, "MEMORY_MB":
  512, "DISK_GB": 1}}

  8. Check usages of shared resource provider:

  $ curl -g -i -X GET http://<placement-endpoint>/placement/resource_providers/<shared-rp-uuid>/usages -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: placement latest" -H "x-auth-token: 27903d0f-cc28-45ae-ae2e-3105c9e640b9"
  HTTP/1.1 200 OK
  Date: Wed, 28 Mar 2018 06:23:05 GMT
  Server: Apache/2.4.18 (Ubuntu)
  Content-Length: 61
  Content-Type: application/json
  Cache-Control: no-cache
  Last-Modified: Wed, 28 Mar 2018 06:23:05 GMT
  openstack-api-version: placement 1.21
  vary: openstack-api-version
  x-openstack-request-id: req-8093854e-c5ab-429a-8dbc-474ef06ed243
  Connection: close

  {"resource_provider_generation": 3, "usages": {"DISK_GB": 0}}

  Observation:
  By comparing usages details 

[Yahoo-eng-team] [Bug 1776532] [NEW] LDAP backend should support python-ldap trace logging

2018-06-12 Thread John Dennis
Public bug reported:

The python-ldap library has a diagnostic and debugging feature called
trace logging. The information in the trace log is crucial when trying
to diagnose LDAP problems, especially connection problems. This is
because what is visible at the Keystone backend is obscured by 2 other
abstraction layers, the OpenStack ldappool library and the
ReconnectLDAPObject implementation in python-ldap. When connection
problems occur you need to be able to see what happened at the lowest
level in order to understand what the upper abstraction layers are
doing. Trace logging is also useful for other LDAP information besides
connection issues.

python-ldap controls trace logging with these two parameters:

trace_level: An integer controlling the verbosity of the trace information
trace_file: A Python file object used when writing trace info.

Unfortunately as of today there is no way to turn on trace logging other
than editing the source code to change the parameters passed into
various python-ldap methods. As of python-ldap 3.1.0 you can set the
environment variables PYTHON_LDAP_TRACE_LEVEL and PYTHON_LDAP_TRACE_FILE
(a pathname) to set these values without a code change. This version of
python-ldap is very new (May 2018), however setting environment
variables to turn on trace logging is not easy because of the way
Keystone is deployed as an operating system service. It would be
preferable to add two new configuration options to the LDAP section to
control the trace_level and trace_file and have the ldap backend set
these values when creating python-ldap objects. It would be good to set
the trace_file to the same logging file object the rest of the backend
uses so the information is contained in one place and interleaved.

Also note there is already an LDAP debug level in the config,
'debug_level', which turns on debugging in the openldap C library via
the OPT_DEBUG_LEVEL ldap option. python-ldap calls this library to
perform many of its operations and as such it is one level below
python-ldap. This debug feature is independent of the trace facility in
python-ldap. We need both facilities.
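
For illustration, the two parameters as python-ldap exposes them today
(the URI and credentials below are placeholders):

import sys

import ldap

# Sending the trace to the service's own log stream keeps the trace
# interleaved with the rest of the backend's logging, as argued above.
conn = ldap.initialize(
    'ldap://ldap.example.com',
    trace_level=2,          # 0 = off; higher values are more verbose
    trace_file=sys.stderr,  # any Python file object
)
conn.simple_bind_s('cn=admin,dc=example,dc=com', 'secret')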

** Affects: keystone
 Importance: Undecided
 Assignee: John Dennis (jdennis-a)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => John Dennis (jdennis-a)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1776532

Title:
  LDAP backend should support python-ldap trace logging

Status in OpenStack Identity (keystone):
  New

Bug description:
  The python-ldap library has a diagnostic and debugging feature called
  trace logging. The information in the trace log is crucial when trying
  to diagnose LDAP problems, especially connection problems. This is
  because what is visible at the Keystone backend is obscured by 2 other
  abstraction layers, the OpenStack ldappool library and the
  ReconnectLDAPObject implementation in python-ldap. When connection
  problems occur you need to be able to see what happened at the lowest
  level in order to understand what the upper abstraction layers are
  doing. Trace logging is also useful for other LDAP information besides
  connection issues.

  python-ldap controls trace logging with these two parameters:

  trace_level: An integer controlling the verbosity of the trace information
  trace_file: A Python file object used when writing trace info.

  Unfortunately as of today there is no way to turn on trace logging
  other than editing the source code to change the parameters passed
  into various python-ldap methods. As of python-ldap 3.1.0 you can set
  the environment variables PYTHON_LDAP_TRACE_LEVEL and
  PYTHON_LDAP_TRACE_FILE (a pathname) to set these values without a code
  change. This version of python-ldap is very new (May 2018), however
  setting environment variables to turn on trace logging is not easy
  because of the way Keystone is deployed as an operating system
  service. It would be preferable to add two new configuration options
  to the LDAP section to control the trace_level and trace_file and have
  the ldap backend set these values when creating python-ldap objects.
  It would be good to set the trace_file to the same logging file object
  the rest of the backend uses so the information is contained in one
  place and interleaved.

  Also note there is already an LDAP debug level in the config,
  'debug_level', which turns on debugging in the openldap C library via
  the OPT_DEBUG_LEVEL ldap option. python-ldap calls this library to
  perform many of its operations and as such it is one level below
  python-ldap. This debug feature is independent of the trace facility
  in python-ldap. We need both facilities.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1776532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1774666] Re: Bond interfaces stuck at 1500 MTU on Bionic

2018-06-12 Thread Chad Smith
This is a cloud-init issue only. Once cloud-init is SRU'd, netplan will
properly set the MTU.

** Changed in: netplan.io (Ubuntu Artful)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1774666

Title:
  Bond interfaces stuck at 1500 MTU on Bionic

Status in cloud-init:
  Fix Committed
Status in MAAS:
  Invalid
Status in cloud-init package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  New
Status in netplan.io source package in Xenial:
  Invalid
Status in cloud-init source package in Artful:
  New
Status in netplan.io source package in Artful:
  Invalid
Status in cloud-init source package in Bionic:
  New
Status in netplan.io source package in Bionic:
  Invalid
Status in cloud-init source package in Cosmic:
  Confirmed
Status in netplan.io source package in Cosmic:
  Confirmed

Bug description:
  When deploying a machine through MAAS with bonded network interfaces,
  the bond does not have a 9000 byte MTU applied despite the attached
  VLANs having had a 9000 MTU explicitly set. The MTU size is set on the
  bond members, but not on the bond itself in Netplan. Consequently,
  when the bond is brought up, the interface MTU is decreased from 9000
  to 1500. Manually changing the interface MTU after boot is successful.

  This is not observed when deploying Xenial on the same machine. The
  bond comes up at the expected 9000 byte MTU.
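
  For reference, a sketch of what the rendered netplan would need to
  look like (interface names are made up): the mtu key has to appear on
  the bond itself, not only on its members:

  network:
    version: 2
    ethernets:
      eno1:
        mtu: 9000
      eno2:
        mtu: 9000
    bonds:
      bond0:
        interfaces: [eno1, eno2]
        mtu: 9000   # currently missing, so the bond falls back to 1500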

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1774666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774666] Re: Bond interfaces stuck at 1500 MTU on Bionic

2018-06-12 Thread Chad Smith
This is a cloud-init issue only. Once cloud-init is SRU'd, netplan will
properly set the MTU.

** Changed in: netplan.io (Ubuntu Xenial)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1774666

Title:
  Bond interfaces stuck at 1500 MTU on Bionic

Status in cloud-init:
  Fix Committed
Status in MAAS:
  Invalid
Status in cloud-init package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  New
Status in netplan.io source package in Xenial:
  Invalid
Status in cloud-init source package in Artful:
  New
Status in netplan.io source package in Artful:
  Invalid
Status in cloud-init source package in Bionic:
  New
Status in netplan.io source package in Bionic:
  Invalid
Status in cloud-init source package in Cosmic:
  Confirmed
Status in netplan.io source package in Cosmic:
  Confirmed

Bug description:
  When deploying a machine through MAAS with bonded network interfaces,
  the bond does not have a 9000 byte MTU applied despite the attached
  VLANs having had a 9000 MTU explicitly set. The MTU size is set on the
  bond members, but not on the bond itself in Netplan. Consequently,
  when the bond is brought up, the interface MTU is decreased from 9000
  to 1500. Manually changing the interface MTU after boot is successful.

  This is not observed when deploying Xenial on the same machine. The
  bond comes up at the expected 9000 byte MTU.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1774666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774666] Re: Bond interfaces stuck at 1500 MTU on Bionic

2018-06-12 Thread Chad Smith
This is a cloud-init issue only. Once cloud-init is SRU'd, netplan will
properly set the MTU.

** Changed in: netplan.io (Ubuntu Bionic)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1774666

Title:
  Bond interfaces stuck at 1500 MTU on Bionic

Status in cloud-init:
  Fix Committed
Status in MAAS:
  Invalid
Status in cloud-init package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  New
Status in netplan.io source package in Xenial:
  Invalid
Status in cloud-init source package in Artful:
  New
Status in netplan.io source package in Artful:
  Invalid
Status in cloud-init source package in Bionic:
  New
Status in netplan.io source package in Bionic:
  Invalid
Status in cloud-init source package in Cosmic:
  Confirmed
Status in netplan.io source package in Cosmic:
  Confirmed

Bug description:
  When deploying a machine through MAAS with bonded network interfaces,
  the bond does not have a 9000 byte MTU applied despite the attached
  VLANs having had a 9000 MTU explicitly set. The MTU size is set on the
  bond members, but not on the bond itself in Netplan. Consequently,
  when the bond is brought up, the interface MTU is decreased from 9000
  to 1500. Manually changing the interface MTU after boot is successful.

  This is not observed when deploying Xenial on the same machine. The
  bond comes up at the expected 9000 byte MTU.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1774666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774205] Re: AggregateMultiTenancyIsolation uses wrong tenant_id during cold migrate

2018-06-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/571245
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8c216608194c89d281e8d2b66abd1e50e2405b01
Submitter: Zuul
Branch: master

commit 8c216608194c89d281e8d2b66abd1e50e2405b01
Author: Matt Riedemann 
Date:   Wed May 30 12:07:53 2018 -0400

Use instance project/user when creating RequestSpec during resize reschedule

When rescheduling from a failed cold migrate / resize, the compute
service does not pass the request spec back to conductor so we
create one based on the in-scope variables.

This introduces a problem for some scheduler filters like the
AggregateMultiTenancyIsolation filter since it will create the
RequestSpec using the project and user information from the current
context, which for a cold migrate is the admin and might not be
the owner of the instance (which could be in some other project).
So the AggregateMultiTenancyIsolation filter might reject the
request or select a host that fits an aggregate for the admin but
not the end user.

This fixes the problem by using the instance project/user information
when constructing the RequestSpec which will take priority over
the context in RequestSpec.from_components().

Long-term we need the compute service to pass the request spec back
to the conductor during a reschedule, but we do this first since we
can backport it.

Change-Id: Iaaf7f68d6874fd5d6e737e7d2bc589ea4a048fee
Closes-Bug: #1774205


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1774205

Title:
  AggregateMultiTenancyIsolation uses wrong tenant_id during cold
  migrate

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  The details are in this mailing list thread:

  http://lists.openstack.org/pipermail/openstack-
  operators/2018-May/015347.html

  But essentially the case is:

  * There are 3 compute hosts.
  * compute1 and compute2 are in a host aggregate and a given tenant is 
restricted to that aggregate
  * The user creates a server on compute1
  * The admin attempts to cold migrate the server which fails in the 
AggregateMultiTenancyIsolation filter because it says the tenant_id in the 
request is not part of the matching host aggregate.

  The reason is because the cold migrate task in the conductor replaces
  the original request spec, which had the instance project_id in it,
  and uses the current context, which is the admin (which could be in a
  different project):

  
https://github.com/openstack/nova/blob/stable/ocata/nova/conductor/tasks/migrate.py#L50
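
  A simplified sketch of the fix's idea: pass the instance's own
  project/user into RequestSpec.from_components(), which takes priority
  over the (admin) request context (variables such as image_meta and
  filter_properties come from the surrounding reschedule scope):

  request_spec = objects.RequestSpec.from_components(
      context, instance.uuid, image_meta, instance.flavor,
      instance.numa_topology, instance.pci_requests,
      filter_properties, None, instance.availability_zone,
      project_id=instance.project_id, user_id=instance.user_id)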

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1774205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774666] Re: Bond interfaces stuck at 1500 MTU on Bionic

2018-06-12 Thread Andreas Hasenack
Somehow the netplan.io and cloud-init tasks are linked in terms of those
nominations. If I approve the cloud-init ones, netplan's also get
approved.

** Also affects: cloud-init (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: netplan.io (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: netplan.io (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: netplan.io (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Cosmic)
   Importance: Undecided
   Status: Confirmed

** Also affects: netplan.io (Ubuntu Cosmic)
   Importance: Undecided
   Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1774666

Title:
  Bond interfaces stuck at 1500 MTU on Bionic

Status in cloud-init:
  Fix Committed
Status in MAAS:
  Invalid
Status in cloud-init package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  New
Status in netplan.io source package in Xenial:
  New
Status in cloud-init source package in Artful:
  New
Status in netplan.io source package in Artful:
  New
Status in cloud-init source package in Bionic:
  New
Status in netplan.io source package in Bionic:
  New
Status in cloud-init source package in Cosmic:
  Confirmed
Status in netplan.io source package in Cosmic:
  Confirmed

Bug description:
  When deploying a machine through MAAS with bonded network interfaces,
  the bond does not have a 9000 byte MTU applied despite the attached
  VLANs having had a 9000 MTU explicitly set. The MTU size is set on the
  bond members, but not on the bond itself in Netplan. Consequently,
  when the bond is brought up, the interface MTU is decreased from 9000
  to 1500. Manually changing the interface MTU after boot is successful.

  This is not observed when deploying Xenial on the same machine. The
  bond comes up at the expected 9000 byte MTU.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1774666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760006] Re: Creating image with a file makes the disk format display incorrectly

2018-06-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/557879
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b0fccd33554d2b158b4cacb97a5502cc75fdc69d
Submitter: Zuul
Branch: master

commit b0fccd33554d2b158b4cacb97a5502cc75fdc69d
Author: wangliangyu 
Date:   Fri Mar 30 11:42:01 2018 +0800

The disk format is selected automatically when using a file to create an image

The disk format field has its onchange listener in the horizon.forms.js file,
but the listener fires only when the change is made by mouse or keyboard.
When the angularJS controller just changes the value, the listener is
not triggered, and it can't be triggered manually from within the controller.
This commit sets the display manually to resolve the issue.

Change-Id: I8c228bac9392003055a808eeb56b733ac4c9b07a
Closes-Bug: #1760006


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1760006

Title:
  Creating image with a file makes the disk format display incorrectly

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Django page: selecting an image file when creating an image prevents
  the correct disk format from being selected automatically. You must
  first select another disk format and then select the correct one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1760006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775030] Re: Security Key Pair creation is not allowed with "underscore" character from the Launch Instance menu on Horizon.

2018-06-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/572141
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=edb1aabc87f948dd4c9541dc1af7cc9d96a44694
Submitter: Zuul
Branch: master

commit edb1aabc87f948dd4c9541dc1af7cc9d96a44694
Author: Dave Hill 
Date:   Mon Jun 4 11:31:00 2018 -0400

Allow keypairs to contain an underscore

When manually creating keypairs, underscores are allowed, but
when creating keypairs while instantiating an instance, they are
not allowed. This patch solves this.

Change-Id: I0ad19bd1239b7c9ac1d84e123e478cf40508
closes-bug: #1775030


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1775030

Title:
  Security Key Pair creation is not allowed with "underscore" character
  from the Launch Instance menu on Horizon.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Using Horizon to instantiate VM:
  Instances -> Launch Instance -> Key Pair -> Create Key Pair 
  Trying to create a Key Pair name with an underscore character gives a
  "name contains bad characters" error - see attached screenshot.
  If you create the Security Key Pair prior to VM instantiation using
  Access & Security -> Create Key Pair, the "underscore" character is
  accepted. The "underscore" character shall also be accepted in the
  Security Key Pair name in the Instances -> Launch Instance ->
  Key Pair -> Create Key Pair menu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1775030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776506] [NEW] Keystone JSON HOME on / fails

2018-06-12 Thread Morgan Fainberg
Public bug reported:

With the move to the compat dispatching for Flask, Keystone's JSON HOME
on GET / is now failing. This results in a 500 error and an exception
that looks like:

2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi [req-591e2ecd-8088-4d2e-a5ae-c23a1624187d - - - - -] Extra data: line 1 column 5 - line 5 column 22 (char 4 - 52): ValueError: Extra data: line 1 column 5 - line 5 column 22 (char 4 - 52)
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi Traceback (most recent call last):
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 211, in __call__
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     result = method(req, **params)
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/version/controllers.py", line 167, in get_versions
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     v3_json_home = request_v3_json_home('/v3')
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/version/controllers.py", line 46, in request_v3_json_home
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     v3_json_home = jsonutils.loads(v3_json_home_str)
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/oslo_serialization/jsonutils.py", line 264, in loads
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     return _default_decoder.decode(s)
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib64/python2.7/json/decoder.py", line 369, in decode
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     raise ValueError(errmsg("Extra data", s, end, len(s)))
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi ValueError: Extra data: line 1 column 5 - line 5 column 22 (char 4 - 52)
2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi
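
The ValueError itself is stock json behaviour whenever trailing content
follows the first document, which suggests the compat dispatcher hands
back extra bytes around the JSON-Home body. A minimal sketch:

import json

# One valid document followed by extra data, as in the traceback:
payload = '{"resources": {}}\n{"unexpected": "second document"}'
try:
    json.loads(payload)
except ValueError as exc:
    print(exc)  # Extra data: line 2 column 1 ...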

** Affects: keystone
 Importance: High
 Assignee: Morgan Fainberg (mdrnstm)
 Status: In Progress

** Changed in: keystone
   Importance: Undecided => High

** Changed in: keystone
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

** Changed in: keystone
Milestone: None => rocky-3

** Changed in: keystone
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1776506

Title:
  Keystone JSON HOME on / fails

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  With the move to the compat dispatching for Flask, Keystone's JSON
  HOME on GET / is now failing. This results in a 500 error and an
  exception that looks like:

  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi [req-591e2ecd-8088-4d2e-a5ae-c23a1624187d - - - - -] Extra data: line 1 column 5 - line 5 column 22 (char 4 - 52): ValueError: Extra data: line 1 column 5 - line 5 column 22 (char 4 - 52)
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi Traceback (most recent call last):
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 211, in __call__
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     result = method(req, **params)
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/version/controllers.py", line 167, in get_versions
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     v3_json_home = request_v3_json_home('/v3')
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/version/controllers.py", line 46, in request_v3_json_home
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     v3_json_home = jsonutils.loads(v3_json_home_str)
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/oslo_serialization/jsonutils.py", line 264, in loads
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     return _default_decoder.decode(s)
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi   File "/usr/lib64/python2.7/json/decoder.py", line 369, in decode
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi     raise ValueError(errmsg("Extra data", s, end, len(s)))
  2018-06-11 20:16:29.824 216 ERROR keystone.common.wsgi ValueError: Extra data: line 1 column 5 - line 5 column 22 (char 4 - 52)

[Yahoo-eng-team] [Bug 1776504] [NEW] flaskification

2018-06-12 Thread Morgan Fainberg
Public bug reported:

Moving keystone to Flask away from its home-grown WSGI framework is a
long-term plan. The major reason for this is to ensure we have an easy
way for folks to start contributing.

This will include a number of improvements including:

* moving to flask-restful for API definitions

* all routable paths will be owned by the base prefix (e.g.
keystone.api.user will own everything under /user/)

* Paste Deploy removed
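
As a minimal illustration of the flask-restful shape this is heading
towards (names are illustrative, not keystone's actual classes):

from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class UserResource(Resource):
    # e.g. keystone.api.user would own everything under /users
    def get(self, user_id):
        return {'user': {'id': user_id}}

api.add_resource(UserResource, '/v3/users/<string:user_id>')

if __name__ == '__main__':
    app.run()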

** Affects: keystone
 Importance: Medium
 Assignee: Morgan Fainberg (mdrnstm)
 Status: In Progress

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

** Changed in: keystone
Milestone: None => ongoing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1776504

Title:
  flaskification

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Moving keystone to Flask away from its home-grown WSGI framework is a
  long-term plan. The major reason for this is to ensure we have an easy
  way for folks to start contributing.

  This will include a number of improvements including:

  * moving to flask-restful for API definitions

  * all routable paths will be owned by the base prefix (e.g.
  keystone.api.user will own everything under /user/)

  * Paste Deploy removed

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1776504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776500] [NEW] neutron-lib is missing ipencap:4 protocol support

2018-06-12 Thread David Hill
Public bug reported:

neutron-lib is missing ipencap:4 protocol support
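
For context, 'ipencap' is the /etc/protocols name for IP-in-IP
encapsulation, IANA protocol number 4. A minimal sketch of the kind of
name-to-number lookup such protocol support implies; the mapping and helper
below are illustrative, not neutron-lib's actual code:

# Illustrative only: a protocol-name to IANA-number map of the sort
# security group rules are validated against.
IP_PROTOCOL_MAP = {
    'icmp': 1,
    'ipencap': 4,  # IP encapsulated in IP
    'tcp': 6,
    'udp': 17,
}


def resolve_protocol(value):
    # Accept either a known protocol name or a raw protocol number string.
    if value.isdigit():
        return int(value)
    return IP_PROTOCOL_MAP[value]  # KeyError for unsupported names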

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776500

Title:
  neutron-lib is missing ipencap:4 protocol support

Status in neutron:
  Invalid

Bug description:
  neutron-lib is missing ipencap:4 protocol support

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1776500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776161] Re: my own test bug

2018-06-12 Thread Kristi Nikolla
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1776161

Title:
  my own test bug

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  a bug for test by myself

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1776161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776481] [NEW] WS-based serial port proxy prevents access to console log file

2018-06-12 Thread Georg Hoesch
Public bug reported:

This bug occurs with interactive WS-based serial ports in nova. The serial
console on the Websocket works fine, but configuring these consoles
prevents access to the console logfile via 'openstack console log show'.
This bug was discovered in pike with KVM-based virtualization but seems to 
be in current master as well.

Access to the console logfile is very important because my websocket
client is usually not permanently connected.

Detailed analysis:
The console logfile is still generated by KVM/libvirt. The only difference
is that the path of the logfile node changed in the XML information for
the instance. The relevant function get_console_output() in 
nova/virt/libvirt/driver.py fails to find the logfile (it just looks for
@type='file', it should also look for @type='tcp').

I'll try to provide a fix for this myself; it shouldn't be complicated.

Any comments?
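
To make the @type point concrete, here is a rough sketch of the lookup
change being described. The XML shape follows libvirt's domain format, but
the element handling below is an illustrative assumption, not nova's actual
get_console_output() code:

# Hypothetical sketch: locate the console log source in the domain XML,
# accepting tcp-backed consoles as well as file-backed ones.
import xml.etree.ElementTree as ET


def find_console_log_path(domain_xml):
    tree = ET.fromstring(domain_xml)
    # The old lookup only matched ./devices/console[@type='file'];
    # with WS-based serial ports the console is @type='tcp'.
    for ctype in ('file', 'tcp'):
        for console in tree.findall("./devices/console[@type='%s']" % ctype):
            source = console.find('source')
            if source is not None and source.get('path'):
                return source.get('path')
    return None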

** Affects: nova
 Importance: Undecided
 Assignee: Georg Hoesch (hoesch)
 Status: New

** Description changed:

- This bug occurs with interactive WS-based serial ports in nova. The serial console
- on the Websocket works fine, but configuring these consoles prevents access to the
- console logfile via 'openstack console log show'. This bug was discovered in pike
- with KVM-based virtualization but seems to be in current master as well.
+ This bug occurs with interactive WS-based serial ports in nova. The serial
+ console on the Websocket works fine, but configuring these consoles
+ prevents access to the console logfile via 'openstack console log show'.
+ This bug was discovered in pike with KVM-based virtualization but seems to
+ be in current master as well.

- Access to the console logfile is very important because my websocket client is usually
- not permanently connected.
+ Access to the console logfile is very important because my websocket
+ client is usually not permanently connected.

  Detailed analysis:
- The console logfile is still generated by KVM/libvirt. The only difference is that
- the path of the logfile node changed in the XML information for the instance.
- The relevant function get_console_output() in nova/virt/libvirt/driver.py fails
- to find the logfile (it just looks for @type='file', it should also look for
- @type='tcp').
+ The console logfile is still generated by KVM/libvirt. The only difference
+ is that the path of the logfile node changed in the XML information for
+ the instance. The relevant function get_console_output() in
+ nova/virt/libvirt/driver.py fails to find the logfile (it just looks for
+ @type='file', it should also look for @type='tcp').

  I'll try to provide a fix for this myself; it shouldn't be complicated.

  Any comments?

** Changed in: nova
 Assignee: (unassigned) => Georg Hoesch (hoesch)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776481

Title:
  WS-based serial port proxy prevents access to console log file

Status in OpenStack Compute (nova):
  New

Bug description:
  This bug occurs with interactive WS-based serial ports in nova. The serial
  console on the Websocket works fine, but configuring these consoles
  prevents access to the console logfile via 'openstack console log show'.
  This bug was discovered in pike with KVM-based virtualization but seems to 
  be in current master as well.

  Access to the console logfile is very important because my websocket
  client is usually not permanently connected.

  Detailed analysis:
  The console logfile is still generated by KVM/libvirt. The only difference
  is that the path of the logfile node changed in the XML information for
  the instance. The relevant function get_console_output() in 
  nova/virt/libvirt/driver.py fails to find the logfile (it just looks for
  @type='file', it should also look for @type='tcp').

  I'll try to provide a fix for this myself; it shouldn't be complicated.

  Any comments?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776468] [NEW] neutron-netns-cleanup does not configure privsep correctly

2018-06-12 Thread Miguel Angel Ajo
Public bug reported:

It crashes when trying to invoke privsep:


2018-06-12 10:37:05.932 1038529 INFO neutron.common.config [-] Logging enabled!
2018-06-12 10:37:05.932 1038529 INFO neutron.common.config [-] /usr/bin/neutron-netns-cleanup version 13.0.0.0b2.dev174
2018-06-12 10:37:05.932 1038529 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-netns-cleanup --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --agent-type l3 -d --force setup_logging /usr/lib/python2.7/site-packages/neutron/common/config.py:104
2018-06-12 10:37:05.933 1038529 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/l3_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-l3-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpwc58JK/privsep.sock']
2018-06-12 10:37:05.954 1038529 WARNING oslo.privsep.daemon [-] privsep log:
2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log: We trust you have received the usual lecture from the local System
2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log: Administrator. It usually boils down to these three things:
2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log:
2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log:  #1) Respect the privacy of others.
2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log:  #2) Think before you type.
2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log:  #3) With great power comes great responsibility.
2018-06-12 10:37:05.955 1038529 WARNING oslo.privsep.daemon [-] privsep log:
2018-06-12 10:37:05.956 1038529 WARNING oslo.privsep.daemon [-] privsep log: sudo: no tty present and no askpass program specified
2018-06-12 10:37:05.955 1038529 CRITICAL oslo.privsep.daemon [-] privsep helper command exited non-zero (1)
2018-06-12 10:37:05.961 1038529 CRITICAL neutron [-] Unhandled error: FailedToDropPrivileges: privsep helper command exited non-zero (1)
2018-06-12 10:37:05.961 1038529 ERROR neutron Traceback (most recent call last):
2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/bin/neutron-netns-cleanup", line 10, in <module>
2018-06-12 10:37:05.961 1038529 ERROR neutron     sys.exit(main())
2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/cmd/netns_cleanup.py", line 289, in main
2018-06-12 10:37:05.961 1038529 ERROR neutron     cleanup_network_namespaces(conf)
2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/cmd/netns_cleanup.py", line 259, in cleanup_network_namespaces
2018-06-12 10:37:05.961 1038529 ERROR neutron     ip_lib.list_network_namespaces()
2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 1100, in list_network_namespaces
2018-06-12 10:37:05.961 1038529 ERROR neutron     return privileged.list_netns(**kwargs)
2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 206, in _wrap
2018-06-12 10:37:05.961 1038529 ERROR neutron     self.start()
2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 217, in start
2018-06-12 10:37:05.961 1038529 ERROR neutron     channel = daemon.RootwrapClientChannel(context=self)
2018-06-12 10:37:05.961 1038529 ERROR neutron   File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 327, in __init__
2018-06-12 10:37:05.961 1038529 ERROR neutron     raise FailedToDropPrivileges(msg)
2018-06-12 10:37:05.961 1038529 ERROR neutron FailedToDropPrivileges: privsep helper command exited non-zero (1)
2018-06-12 10:37:05.961 1038529 ERROR neutron
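
The "sudo: no tty present" line shows the privsep helper being launched
through bare sudo rather than through rootwrap (compare the 'Running
privsep helper' lines here and in bug 1776469 below). A minimal sketch of
the kind of initialization other neutron entry points perform before
touching privileged code; the root_helper string is an example and the
exact wiring in the eventual fix may differ:

# Sketch, not the actual fix: point oslo.privsep at the configured root
# helper so the daemon is spawned via rootwrap instead of bare sudo.
import shlex

from oslo_privsep import priv_context


def setup_privsep(root_helper='sudo neutron-rootwrap /etc/neutron/rootwrap.conf'):
    priv_context.init(root_helper=shlex.split(root_helper))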

** Affects: neutron
 Importance: Medium
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: Confirmed

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776468

Title:
  neutron-netns-cleanup does not configure privsep correctly

Status in neutron:
  Confirmed

Bug description:
  It crashes when trying to invoke privsep:

  
  2018-06-12 10:37:05.932 1038529 INFO neutron.common.config [-] Logging enabled!
  2018-06-12 10:37:05.932 1038529 INFO 

[Yahoo-eng-team] [Bug 1776469] [NEW] neutron-netns-cleanup explodes when trying to delete an OVS internal port

2018-06-12 Thread Miguel Angel Ajo
Public bug reported:


Apparently, the exception is not bubbling up out of privsep, and the cleanup 
exits instead of retrying with ovsdb del port:

https://github.com/openstack/neutron/blob/100491cec72ecf694cc8cbd6cd17b66a191a5bd7/neutron/cmd/netns_cleanup.py#L124


def unplug_device(conf, device):
    orig_log_fail_as_error = device.get_log_fail_as_error()
    device.set_log_fail_as_error(False)
    try:
        device.link.delete()
    except RuntimeError:
        device.set_log_fail_as_error(orig_log_fail_as_error)
        # Maybe the device is OVS port, so try to delete
        ovs = ovs_lib.BaseOVS()
        bridge_name = ovs.get_bridge_for_iface(device.name)
        if bridge_name:
            bridge = ovs_lib.OVSBridge(bridge_name)
            bridge.delete_port(device.name)
        else:
            LOG.debug('Unable to find bridge for device: %s', device.name)
    finally:
        device.set_log_fail_as_error(orig_log_fail_as_error)
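
If device.link.delete() now raises something other than RuntimeError once
it goes through privsep, the except clause above never fires and the OVS
fallback is skipped. A rough sketch of the shape of a fix, with the caveat
that the concrete exception class raised through privsep still needs to be
confirmed:

# Sketch only (the fallback callable is a hypothetical stand-in for the
# OVS port deletion above): broaden the handler so a privsep-raised
# failure still reaches the fallback. The real fix should name the
# actual exception type instead of Exception.
def unplug_device_fallback(device, delete_as_ovs_port):
    try:
        device.link.delete()
    except Exception:  # was: except RuntimeError
        delete_as_ovs_port(device)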


neutron-netns-cleanup --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --agent-type l3 -d --force


2018-06-12 11:39:26.868 254573 INFO neutron.common.config [-] Logging enabled!
2018-06-12 11:39:26.868 254573 INFO neutron.common.config [-] /usr/bin/neutron-netns-cleanup version 13.0.0.0b2.dev174
2018-06-12 11:39:26.868 254573 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-netns-cleanup --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --agent-type l3 -d --force setup_logging /usr/lib/python2.7/site-packages/neutron/common/config.py:104
2018-06-12 11:39:26.869 254573 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/l3_agent.ini', '--config-dir', '/etc/neutron/conf.d/neutron-l3-agent', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpNU7Loh/privsep.sock']
2018-06-12 11:39:27.455 254573 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
2018-06-12 11:39:27.456 254573 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpNU7Loh/privsep.sock __init__ /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:331
2018-06-12 11:39:27.386 254707 INFO oslo.privsep.daemon [-] privsep daemon starting
2018-06-12 11:39:27.390 254707 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2018-06-12 11:39:27.395 254707 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2018-06-12 11:39:27.395 254707 INFO oslo.privsep.daemon [-] privsep daemon running as pid 254707
2018-06-12 11:39:27.458 254707 DEBUG oslo.privsep.daemon [-] privsep: request[140529299646096]: (1,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2018-06-12 11:39:27.458 254707 DEBUG oslo.privsep.daemon [-] privsep: reply[140529299646096]: (2,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2018-06-12 11:39:27.458 254707 DEBUG oslo.privsep.daemon [-] privsep: request[140529299646096]: (3, 'neutron.privileged.agent.linux.ip_lib.list_netns', (), {}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2018-06-12 11:39:27.501 254707 DEBUG oslo.privsep.daemon [-] privsep: reply[140529299646096]: (4, ['qdhcp-64aa11b0-d9ff-47c3-9a44-2906bc22d724', 'qrouter-c24debdc-7bcd-40d7-90b9-32e0ec9bb11a', 'qdhcp-4b523888-7121-4133-b0c1-ff6a81a40dcd']) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2018-06-12 11:39:30.179 254573 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/dhcp/qrouter-c24debdc-7bcd-40d7-90b9-32e0ec9bb11a/pid get_value_from_file /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:254
2018-06-12 11:39:30.179 254707 DEBUG oslo.privsep.daemon [-] privsep: request[140529299646096]: (3, 'neutron.privileged.agent.linux.ip_lib.list_netns', (), {}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2018-06-12 11:39:30.179 254707 DEBUG oslo.privsep.daemon [-] privsep: reply[140529299646096]: (4, ['qdhcp-64aa11b0-d9ff-47c3-9a44-2906bc22d724', 'qrouter-c24debdc-7bcd-40d7-90b9-32e0ec9bb11a', 'qdhcp-4b523888-7121-4133-b0c1-ff6a81a40dcd']) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2018-06-12 11:39:30.180 254573 DEBUG neutron.agent.linux.utils [-] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 

[Yahoo-eng-team] [Bug 1776459] [NEW] TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack fails

2018-06-12 Thread Slawek Kaplonski
Public bug reported:

I have seen that the
TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack test
sometimes fails because there really is packet loss after the agents are
restarted. An example of such a failure:
http://logs.openstack.org/70/574370/1/check/neutron-fullstack/804a4fa/logs/dsvm-fullstack-logs/TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost.txt.gz#_2018-06-11_21_26_57_858

What I saw in the logs is that after the L3 agent restart there are some
warnings:
http://logs.openstack.org/70/574370/1/check/neutron-fullstack/804a4fa/logs/dsvm-fullstack-logs/TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost/neutron-l3-agent--2018-06-11--21-26-43-905027.txt.gz#_2018-06-11_21_27_04_621

Such warnings are not observed when the test passes.

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: fullstack l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776459

Title:
  TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack
  fails

Status in neutron:
  Confirmed

Bug description:
  I have seen that the
  TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack test
  sometimes fails because there really is packet loss after the agents are
  restarted. An example of such a failure:
  http://logs.openstack.org/70/574370/1/check/neutron-fullstack/804a4fa/logs/dsvm-fullstack-logs/TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost.txt.gz#_2018-06-11_21_26_57_858

  What I saw in the logs is that after the L3 agent restart there are some
  warnings:
  http://logs.openstack.org/70/574370/1/check/neutron-fullstack/804a4fa/logs/dsvm-fullstack-logs/TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost/neutron-l3-agent--2018-06-11--21-26-43-905027.txt.gz#_2018-06-11_21_27_04_621

  Such warnings are not observed when the test passes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1776459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1773449] Re: VM rbd backed block devices inconsistent after unexpected host outage

2018-06-12 Thread James Page
** Changed in: charm-ceph-mon
   Status: Fix Committed => Fix Released

** Changed in: charm-ceph-mon
Milestone: 18.08 => 18.05

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773449

Title:
  VM rbd backed block devices inconsistent after unexpected host outage

Status in OpenStack ceph-mon charm:
  Fix Released
Status in charms.ceph:
  Fix Released
Status in Ubuntu Cloud Archive:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in ceph package in Ubuntu:
  Invalid
Status in nova package in Ubuntu:
  Invalid
Status in qemu package in Ubuntu:
  Invalid

Bug description:
  Reboot a host that contains VMs with volumes, and all of the VMs fail to
  boot. This happens with Queens on Bionic and Xenial.

  [0.00] Initializing cgroup subsys cpuset

  [0.00] Initializing cgroup subsys cpu

  [0.00] Initializing cgroup subsys cpuacct

  [0.00] Linux version 4.4.0-124-generic
  (buildd@lcy01-amd64-028) (gcc version 5.4.0 20160609 (Ubuntu
  5.4.0-6ubuntu1~16.04.9) ) #148-Ubuntu SMP Wed May 2 13:00:18 UTC 2018
  (Ubuntu 4.4.0-124.148-generic 4.4.117)

  [0.00] Command line:
  BOOT_IMAGE=/boot/vmlinuz-4.4.0-124-generic
  root=UUID=bca2de6e-f774-4203-ae05-e8deeb05f64a ro console=tty1
  console=ttyS0

  [0.00] KERNEL supported cpus:

  [0.00]   Intel GenuineIntel

  [0.00]   AMD AuthenticAMD

  [0.00]   Centaur CentaurHauls

  [0.00] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256

  [0.00] x86/fpu: Supporting XSAVE feature 0x01: 'x87 floating
  point registers'

  [0.00] x86/fpu: Supporting XSAVE feature 0x02: 'SSE registers'

  [0.00] x86/fpu: Supporting XSAVE feature 0x04: 'AVX registers'

  [0.00] x86/fpu: Enabled xstate features 0x7, context size is
  832 bytes, using 'standard' format.

  [0.00] x86/fpu: Using 'eager' FPU context switches.

  [0.00] e820: BIOS-provided physical RAM map:

  [0.00] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable

  [0.00] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved

  [0.00] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved

  [0.00] BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable

  [0.00] BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved

  [0.00] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved

  [0.00] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved

  [0.00] NX (Execute Disable) protection: active

  [0.00] SMBIOS 2.8 present.

  [0.00] Hypervisor detected: KVM

  [0.00] e820: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000

  [0.00] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC
  UC- WT

  [0.00] found SMP MP-table at [mem 0x000f6a20-0x000f6a2f]
  mapped at [880f6a20]

  [0.00] Scanning 1 areas for low memory corruption

  [0.00] Using GB pages for direct mapping

  [0.00] RAMDISK: [mem 0x361f4000-0x370f1fff]

  [0.00] ACPI: Early table checksum verification disabled

  [0.00] ACPI: RSDP 0x000F6780 14 (v00 BOCHS )

  [0.00] ACPI: RSDT 0x7FFE1649 2C (v01 BOCHS
  BXPCRSDT 0001 BXPC 0001)

  [0.00] ACPI: FACP 0x7FFE14CD 74 (v01 BOCHS
  BXPCFACP 0001 BXPC 0001)

  [0.00] ACPI: DSDT 0x7FFE0040 00148D (v01 BOCHS
  BXPCDSDT 0001 BXPC 0001)

  [0.00] ACPI: FACS 0x7FFE 40

  [0.00] ACPI: APIC 0x7FFE15C1 88 (v01 BOCHS
  BXPCAPIC 0001 BXPC 0001)

  [0.00] No NUMA configuration found

  [0.00] Faking a node at [mem 0x0000000000000000-0x000000007ffdbfff]

  [0.00] NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdbfff]

  [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00

  [0.00] kvm-clock: cpu 0, msr 0:7ffcf001, primary cpu clock

  [0.00] kvm-clock: using sched offset of 17590935813 cycles

  [0.00] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns

  [0.00] Zone ranges:

  [0.00]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]

  [0.00]   DMA32    [mem 0x0000000001000000-0x000000007ffdbfff]

  [0.00]   Normal   empty

  [0.00]   Device   empty

  [0.00] Movable zone start for each node

  [0.00] Early memory node ranges

  [0.00]   node   0: [mem 0x0000000000001000-0x000000000009efff]

  [0.00]   node   0: [mem 0x0000000000100000-0x000000007ffdbfff]

  [0.00] Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]

  [0.00] ACPI: PM-Timer IO Port: 0x608

  [0.00] ACPI: LAPIC_NMI 

[Yahoo-eng-team] [Bug 1776421] [NEW] glance take 4-5 minute to retrieve the image list.

2018-06-12 Thread vismys
Public bug reported:

Hi,

I have installed OpenStack Queens following the install guide. The issue
is that retrieving information with any command other than the
Keystone-related ones takes a very long time.

[root@cassini ~]# openstack image list --timing
+------------------------------------------+------------+
| URL                                      | Seconds    |
+------------------------------------------+------------+
| GET http://cassini:5000/v3               |    0.00623 |
| POST http://cassini:5000/v3/auth/tokens  |   0.673905 |
| POST http://cassini:5000/v3/auth/tokens  |   0.689073 |
| GET http://cassini:9292/v2/images        | 121.203616 |
| Total                                    | 122.572824 |
+------------------------------------------+------------+
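
Since only the glance call is slow while the token requests return in well
under a second, one way to narrow this down is to time the images API
directly, bypassing the openstack CLI. A minimal sketch (token handling
simplified; the endpoint is the one from the table above):

# Sketch: time GET /v2/images on its own to separate API-side latency
# from client-side overhead. A token can be obtained with, e.g.,
# 'openstack token issue -f value -c id'.
import time

import requests

token = 'REPLACE_WITH_A_VALID_TOKEN'
start = time.time()
resp = requests.get('http://cassini:9292/v2/images',
                    headers={'X-Auth-Token': token})
print(resp.status_code, '%.2f seconds' % (time.time() - start))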

[root@cassini ~]# openstack endpoint list --timing
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------+
| ID                               | Region    | Service Name | Service Type   | Enabled | Interface | URL                                    |
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------+
| 0a68f10a47c740f6ac63d0730968e199 | RegionOne | nova         | compute        | True    | internal  | http://cassini:8774/v2.1               |
| 0df785fd02894a8fa68d0cb4f472af69 | RegionOne | glance       | image          | True    | admin     | http://cassini:9292                    |
| 168b74e1c71845aab650acb00384d69d | RegionOne | glance       | image          | True    | internal  | http://cassini:9292                    |
| 181e55cc3ab04cf690b98b6f5ed59e66 | RegionOne | glance       | image          | True    | public    | http://cassini:9292                    |
| 26d975fa0ad54d359c582be758910847 | RegionOne | heat-cfn     | cloudformation | True    | admin     | http://cassini:8000/v1                 |
| 30e63f821e874aecb9c0588d340f51bf | RegionOne | heat-cfn     | cloudformation | True    | public    | http://cassini:8000/v1                 |
| 34a4d640300041a3a4f68d0b28593acc | RegionOne | nova         | compute        | True    | public    | http://cassini:8774/v2.1               |
| 3c3a4898f9ee4fc28053f1d3edaee8f9 | RegionOne | nova         | compute        | True    | admin     | http://cassini:8774/v2.1               |
| 3cfa4c10277b45dc95ee7eb165599476 | RegionOne | neutron      | network        | True    | admin     | http://cassini:9696                    |
| 4e62cbe09a724965b932a0c3affc33ef | RegionOne | placement    | placement      | True    | internal  | http://cassini:8778                    |
| 56d8797ae22641a9aa78bc1dda10824d | RegionOne | cinderv2     | volumev2       | True    | admin     | http://cassini:8776/v2/%(project_id)s  |
| 64f17761796b4b3fb0d30ebdb18258ef | RegionOne | heat-cfn     | cloudformation | True    | internal  | http://cassini:8000/v1                 |
| 666162128bfb4823a35d88b725c6faa1 | RegionOne | neutron      | network        | True    | public    | http://cassini:9696                    |
| 6c7b48180c3e45d6bc81b471776c0a3d | RegionOne | keystone     | identity       | True    | public    | http://cassini:5000/v3/                |
| 80aefbe90d8641b3a65ed22f4ccab31c | RegionOne | cinderv2     | volumev2       | True    | public    | http://cassini:8776/v2/%(project_id)s  |
| 80e85d71d0994b03bcfb51e8a2bc5ac2 | RegionOne | keystone     | identity       | True    | admin     | http://cassini:5000/v3/                |
| 8e5485e4a24a409cad8ddfe41b96d8b9 | RegionOne | heat         | orchestration  | True    | admin     | http://cassini:8004/v1/%(tenant_id)s   |
| 8edc3373404d4052aea011c0c7d0ad72 | RegionOne | heat         | orchestration  | True    | public    | http://cassini:8004/v1/%(tenant_id)s   |
| 9770f45d999647e980db8a23fb7c9ff2 | RegionOne | cinderv3     | volumev3       | True    | public    | http://cassini:8776/v3/%(project_id)s  |
| a817f46d9597423c9a104888ebc11a15 | RegionOne | cinderv3     | volumev3       | True    | admin     | http://cassini:8776/v3/%(project_id)s  |
| ad9d48ed8e6e41ba9b719d52fa866947 | RegionOne | placement    | placement      | True    | public    | http://cassini:8778                    |
| b0253ada1de9405a898dff7fd99e7718 | RegionOne | cinderv3     | volumev3       | True    | internal  | http://cassini:8776/v3/%(project_id)s  |
| c8827e432e28496ba9e2ea55c773df12 | RegionOne | neutron      | network        | True    | internal  | http://cassini:9696                    |
| c9bff15f69cd4233ba6a64f560b0b6c4 | RegionOne | heat         | orchestration  | True    | internal  | http://cassini:8004/v1/%(tenant_id)s   |
| dd43581411ab4207880ba002036884bc | RegionOne | placement    | placement      | True    | admin     | http://cassini:8778                    |
| e47d93aee60f4f17a27b12113292c850 | RegionOne | keystone     | identity       | True    | internal  | http://cassini:5000/v3/                |
|