[Yahoo-eng-team] [Bug 1658164] Re: community images breaks Images v1 API

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/423499
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=18acc704a1573346f751a2b8260720fd09e664cd
Submitter: Jenkins
Branch: master

commit 18acc704a1573346f751a2b8260720fd09e664cd
Author: Ian Cordasco 
Date:   Fri Jan 20 19:21:52 2017 +

Fix regression introduced by Community Images

Updating an image in v1 skipped the work to ensure that the image
dictionary would be Image v2 compliant. It was hidden inside an else
clause and was only run when there was no image id provided. This meant
that only sometimes would is_public be appropriately converted to
visibility.

A functional test has been added to prevent regression and the code has been
mildly altered to fix the issue.

Closes-bug: #1658164
Change-Id: I996fbed2e31df8559c025cca31e5e12c4fb76548


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1658164

Title:
  community images breaks Images v1 API

Status in Glance:
  Fix Released

Bug description:
  The community images code has broken some v1 functionality.

  CI was reverted today by Change-Id:
  I03636966c6912af8fdc9fcfe49da3788e7316150

  OSC reported failing functional tests.  Test in question does an
  image-update in v1 to make an image public; test fails when the
  is_public value for the updated image is still false.

  An exception isn't raised in the test, so it's not a permissions
  problem.  Looks like the call is accepted but the visibility value
  isn't changed properly.
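
  For reference, the conversion the fix restores is conceptually equivalent
  to the sketch below (illustrative only; the helper name and the exact
  mapping are assumptions, not Glance's actual code):

    # Hypothetical helper: make a v1-style image dict v2-compliant by
    # converting the boolean 'is_public' into the v2 'visibility' field.
    def ensure_v2_compliant(image):
        if 'is_public' in image:
            is_public = image.pop('is_public')
            # 'shared' vs 'private' for the non-public case depends on the
            # community-images visibility model; treat it as a placeholder.
            image['visibility'] = 'public' if is_public else 'shared'
        return image

  The regression meant this conversion ran in only one branch of the v1
  update path, so a PUT that set is_public=True could leave visibility
  unchanged.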

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1658164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640395] Re: Missing 'ports' attribute when GET firewall-groups

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/423047
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=0fe3b406927545859aa33d4ccdcc8871b81e44eb
Submitter: Jenkins
Branch: master

commit 0fe3b406927545859aa33d4ccdcc8871b81e44eb
Author: Yushiro FURUKAWA 
Date:   Fri Jan 20 12:04:01 2017 +0900

Fix 'ports' attribute for firewall_group

This commit fixes the 'ports' attribute for firewall_group and enables
'ports' to be returned in POST/GET responses.

Change-Id: I2052d977dd9b426e2384d49ee14259aabe6f5a49
Closes-Bug: #1640395


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640395

Title:
  Missing 'ports' attribute when GET firewall-groups

Status in neutron:
  Fix Released

Bug description:
  In the current fwaas-v2, the "ports" attribute is missing from the
  responses to the following requests:

* GET v2.0/fwaas/firewall_groups/
* GET v2.0/fwaas/firewall_groups/{firewall_group_id}

  It seems the plugin layer does not have a 'get_firewall_groups' method.
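
  A minimal sketch of the kind of plugin-layer method the fix adds (the
  class name and DB helper below are assumptions, not the actual
  neutron-fwaas code):

    # Hypothetical plugin method: fetch firewall groups from the DB layer
    # and attach the associated port IDs so GET responses include 'ports'
    # just like the POST response does.
    def get_firewall_groups(self, context, filters=None, fields=None):
        fwgs = super(FirewallPluginV2, self).get_firewall_groups(
            context, filters=filters, fields=fields)
        for fwg in fwgs:
            fwg['ports'] = self._get_ports_in_firewall_group(context,
                                                             fwg['id'])
        return fwgs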

  [How to reproduce]
  $ source devstack/openrc admin admin
  $ export TOKEN=`openstack token issue | grep ' id ' | get_field 2`
  $ curl -X POST -d '{"firewall_group":{"name":"fwg"}}' -H "x-auth-token:$TOKEN" 192.168.122.181:9696/v2.0/fwaas/firewall_groups
  {
"firewall_group": {
  "status": "INACTIVE",
  "description": "",
  "ingress_firewall_policy_id": null,
  "id": "04b9e7a5-abb1-410f-87b2-0b5ad559d02d",
  "name": "fwg",
  "admin_state_up": true,
  "tenant_id": "1c6afc3649a845029606ff83aeb81209",
  "ports": [],
  "project_id": "1c6afc3649a845029606ff83aeb81209",
  "public": false,
  "egress_firewall_policy_id": null
}
  }

  $ curl -s -X GET -H "x-auth-token:$TOKEN" 192.168.122.181:9696/v2.0/fwaas/firewall_groups/04b9e7a5-abb1-410f-87b2-0b5ad559d02d | jq "."
  {
"firewall_group": {
  "status": "INACTIVE",
  "public": false,
  "egress_firewall_policy_id": null,
  "name": "fwg1",
  "admin_state_up": true,
  "tenant_id": "1c6afc3649a845029606ff83aeb81209",
  "project_id": "1c6afc3649a845029606ff83aeb81209",
  "id": "04b9e7a5-abb1-410f-87b2-0b5ad559d02d",
  "ingress_firewall_policy_id": null,
  "description": ""
}
  }

  $ curl -s -X PUT -d '{"firewall_group":{"name":"change"}}' -H "x-auth-token:$TOKEN" 192.168.122.181:9696/v2.0/fwaas/firewall_groups/04b9e7a5-abb1-410f-87b2-0b5ad559d02d | jq "."
  {
"firewall_group": {
  "status": "INACTIVE",
  "description": "",
  "ingress_firewall_policy_id": null,
  "id": "04b9e7a5-abb1-410f-87b2-0b5ad559d02d",
  "name": "change",
  "admin_state_up": true,
  "tenant_id": "1c6afc3649a845029606ff83aeb81209",
  "ports": [],
  "project_id": "1c6afc3649a845029606ff83aeb81209",
  "public": false,
  "egress_firewall_policy_id": null
}
  }

  $ curl -s -X GET -H "x-auth-token:$TOKEN" 192.168.122.181:9696/v2.0/fwaas/firewall_groups | jq "."
  {
"firewall_groups": [
  {
"status": "INACTIVE",
"public": false,
"egress_firewall_policy_id": null,
"name": "change",
"admin_state_up": true,
"tenant_id": "1c6afc3649a845029606ff83aeb81209",
"project_id": "1c6afc3649a845029606ff83aeb81209",
"id": "04b9e7a5-abb1-410f-87b2-0b5ad559d02d",
"ingress_firewall_policy_id": null,
"description": ""
  }
]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640395/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585510] Re: [RFE] openvswitch-agent support rootwrap daemon when hypervisor is XenServer

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/390931
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8047da17db2d4d50f797d99880f3b76d0eba2084
Submitter: Jenkins
Branch: master

commit 8047da17db2d4d50f797d99880f3b76d0eba2084
Author: Jianghua Wang 
Date:   Thu Oct 27 00:43:11 2016 +0800

XenAPI: Support daemon mode for rootwrap

For Neutron's compute agent on a XenServer compute node, the commands
actually need to run in Dom0. Currently XenServer only supports rootwrap
for that purpose by invoking a script which calls XenAPI to execute
commands in dom0. This incurs significant performance overhead because
the script and the configuration file have to be parsed every time a
command is run.

This change adds support for daemon mode, in which each agent service
calls XenAPI directly to execute commands in dom0 and keeps a single
XenAPI session.

DocImpact: The following configuration needs to be updated.

file: /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
root_helper_daemon = xenapi_root_helper
[xenapi]
connection_url = http://169.254.0.1
connection_username = root
connection_password = xenroot

Closes-Bug: #1585510
Change-Id: I684034359fe0571bc92dbcf342a9821553b1da35


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585510

Title:
  [RFE] openvswitch-agent support rootwrap daemon when hypervisor is
  XenServer

Status in neutron:
  Fix Released
Status in oslo.rootwrap:
  New

Bug description:
  As titled, when XenServer is the hypervisor we want to implement rootwrap
  daemon mode in the neutron-openvswitch-agent that runs on the compute
  node.

  neutron-openvswitch-agent running on the compute node (DomU) cannot
  support rootwrap daemon mode today. XenServer separates Dom0 (the
  privileged domain) from DomU (the user domain), and the br-int bridge
  used by neutron-openvswitch-agent resides in Dom0, so all the
  ovs-vsctl/ovs-ofctl/iptables/ipset commands the agent issues need to be
  executed in Dom0, not DomU. This is different from other hypervisors.

  The current implementation is
  https://github.com/openstack/neutron/blob/master/bin/neutron-rootwrap-xen-dom0,
  but it cannot support the rootwrap daemon.

  We noticed that rootwrap produces significant performance overhead, and
  we want to implement rootwrap daemon mode for XenServer to improve
  performance.

  Also, we discovered that calls to netwrap (and the creation of lots of
  sessions) generate a huge volume of logs in dom0. Logrotate can handle
  those logs, but the very frequent rotations make diagnosing issues
  difficult.

  It also seems that the excessive logging makes the host **very** slow at
  downloading an image from glance due to contention on the disk (looking
  at iostat, %iowait is over 60% the majority of the time, sometimes up to
  90%).

  So the current approach is not stable and robust enough for a production
  OpenStack environment.

  Proposal: subclass and override some classes/functions from oslo.rootwrap
  to achieve the goal. I have already done a PoC that works well.
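
  A minimal sketch of the daemon-mode idea, keeping one XenAPI session and
  running dom0 commands through it (XenAPI.Session and login_with_password
  are the standard XenAPI bindings; the plugin name and argument format
  below are assumptions, not the actual netwrap interface):

    import XenAPI

    # Log in once and reuse the session for every command, instead of
    # spawning the rootwrap script (and a new session) per command.
    session = XenAPI.Session("http://169.254.0.1")
    session.login_with_password("root", "xenroot")
    host_ref = session.xenapi.host.get_all()[0]

    # Hypothetical dom0 plugin call; argument names are illustrative.
    result = session.xenapi.host.call_plugin(
        host_ref, "netwrap", "run_command", {"cmd": "ovs-vsctl list-br"})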

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625516] Re: OVS FW driver ignores all non tcp udp icmp protocol rules

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/402174
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d5c07fe512502342cfde7c49e6ed75686608cc65
Submitter: Jenkins
Branch: master

commit d5c07fe512502342cfde7c49e6ed75686608cc65
Author: Jakub Libosvar 
Date:   Thu Nov 24 12:32:55 2016 -0500

ovsfw: Support protocol numbers instead of just tcp and udp

The Neutron API also accepts protocol numbers as protocols for security
groups. This patch adds support for that in the OVS firewall driver; the
iptables driver already supports it.

A fullstack test covering an SCTP connection was added; it requires the
ip_conntrack_proto_sctp kernel module in order to make conntrack work
with SCTP.

Change-Id: I6c5665a994c4a50ddbb95cd1360be0de0a6c7e40
Closes-bug: 1625516


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625516

Title:
  OVS FW driver ignores all non tcp udp icmp protocol rules

Status in neutron:
  Fix Released

Bug description:
  Tested the OVS FW driver on OVS 2.5.

  Could not run SCTP traffic between VMs in the same tenant and network
  after allowing IP protocol 132 (SCTP) ingress and egress traffic in
  the security group.

  With the iptables driver this worked well.

  Tested on RHEL 7.3, OSP10 (Newton).

  2016-09-20 11:20:38.121 17370 DEBUG
  neutron.agent.linux.openvswitch_firewall.firewall [req-1e1ee4b4-0722
  -42fb-b9a6-5499eeac7028 - - - - -] RULGEN: Rules generated for flow
  {u'ethertype': u'IPv4', u'direction': u'ingress', u'source_ip_prefix':
  u'0.0.0.0/0', u'protocol': u'132'} are [{'dl_type': 2048, 'reg_port':
  7, 'actions': 'strip_vlan,output:7', 'priority': 70, 'table': 82,
  'dl_dst': u'fa:16:3e:5b:c9:06'}] add_flows_from_rules
  /usr/lib/python2.7/site-
  packages/neutron/agent/linux/openvswitch_firewall/firewall.py:667
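
  A sketch of the behaviour the fix introduces, mapping a rule's protocol
  (name or raw number) to the nw_proto flow match (illustrative only, not
  the actual OVS firewall driver code):

    # Accept either a protocol name or an IP protocol number and return
    # the value to put into the flow's nw_proto match.
    NAME_TO_NUMBER = {'icmp': 1, 'tcp': 6, 'udp': 17, 'sctp': 132}

    def protocol_to_nw_proto(protocol):
        if protocol is None:
            return None
        if str(protocol).isdigit():
            return int(protocol)
        return NAME_TO_NUMBER[protocol]

    # With the fix, a rule with protocol '132' contributes nw_proto=132 to
    # the generated flow instead of being silently ignored, so conntrack
    # can track the SCTP session (given the ip_conntrack_proto_sctp
    # module).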

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658261] [NEW] XenAPI: Support daemon mode for rootwrap

2017-01-20 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/390931
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 8047da17db2d4d50f797d99880f3b76d0eba2084
Author: Jianghua Wang 
Date:   Thu Oct 27 00:43:11 2016 +0800

XenAPI: Support daemon mode for rootwrap

For Neutron's compute agent on a XenServer compute node, the commands
actually need to run in Dom0. Currently XenServer only supports rootwrap
for that purpose by invoking a script which calls XenAPI to execute
commands in dom0. This incurs significant performance overhead because
the script and the configuration file have to be parsed every time a
command is run.

This change adds support for daemon mode, in which each agent service
calls XenAPI directly to execute commands in dom0 and keeps a single
XenAPI session.

DocImpact: The following configuration needs to be updated.

file: /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
root_helper_daemon = xenapi_root_helper
[xenapi]
connection_url = http://169.254.0.1
connection_username = root
connection_password = xenroot

Closes-Bug: #1585510
Change-Id: I684034359fe0571bc92dbcf342a9821553b1da35

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658261

Title:
  XenAPI: Support daemon mode for rootwrap

Status in neutron:
  New

Bug description:
  https://review.openstack.org/390931
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 8047da17db2d4d50f797d99880f3b76d0eba2084
  Author: Jianghua Wang 
  Date:   Thu Oct 27 00:43:11 2016 +0800

  XenAPI: Support daemon mode for rootwrap
  
  For Neutron's compute agent on a XenServer compute node, the commands
  actually need to run in Dom0. Currently XenServer only supports rootwrap
  for that purpose by invoking a script which calls XenAPI to execute
  commands in dom0. This incurs significant performance overhead because
  the script and the configuration file have to be parsed every time a
  command is run.

  This change adds support for daemon mode, in which each agent service
  calls XenAPI directly to execute commands in dom0 and keeps a single
  XenAPI session.

  DocImpact: The following configuration needs to be updated.
  
  file: /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [agent]
  root_helper_daemon = xenapi_root_helper
  [xenapi]
  connection_url = http://169.254.0.1
  connection_username = root
  connection_password = xenroot
  
  Closes-Bug: #1585510
  Change-Id: I684034359fe0571bc92dbcf342a9821553b1da35

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641645] Re: PCI: A user whose password has expired must ask an admin to reset their password.

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/404022
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=3ae73b67522bf388a0fdcecceb662831d853a313
Submitter: Jenkins
Branch: master

commit 3ae73b67522bf388a0fdcecceb662831d853a313
Author: Gage Hugo 
Date:   Mon Nov 28 23:01:51 2016 -0600

Allow user to change own expired password

Currently, if a user's password expires, they must contact an
administrator in order to have their password reset for them.

This change allows a user to perform the change_password call
without a token, which will allow a user with an expired password
to change it if they are using PCI-DSS related features. This
removes the issue of needing an administrator to reset any
user's password that has expired.

Also updated the api-ref with the related changes.

Change-Id: I4d3421c56642cfdbb25cb33b3cbac4c64dd1
Closes-Bug: #1641645


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641645

Title:
  PCI: A user whose password has expired must ask an admin to reset
  their password.

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  As noted in the bug title, this is a cumbersome process; a user should
  be able to reset their own password if it has expired.

  (And potentially if locked out -- that's up for debate. Discussed at the
  11/22/16 meeting: a user locked out from too many attempts should still
  have to ask an admin.)
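
  For reference, the self-service call in question is the standard Identity
  v3 change-password API; with this fix it can be issued without a token
  (the endpoint host, user id and passwords below are placeholders):

    $ curl -X POST http://controller:5000/v3/users/<user_id>/password \
        -H "Content-Type: application/json" \
        -d '{"user": {"original_password": "expired-pass", "password": "new-pass"}}'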

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656017] Re: nova-manage cell_v2 map_cell0 always returns a non-0 exit code

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/420132
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=aa7b6ebbb254f00fcb548832941ca9dbd3996d9f
Submitter: Jenkins
Branch: master

commit aa7b6ebbb254f00fcb548832941ca9dbd3996d9f
Author: Dan Peschman 
Date:   Fri Jan 13 11:51:51 2017 -0700

nova-manage cell_v2 map_cell0 exit 0

This command used to always return 1 because it was returning a data
structure used by another CLI function.  Now it exits 0 if the cell0
mapping was created successfully or was already there.

Closes-Bug: #1656017
Change-Id: Ie66de8425bb8f65dc9eab9d0da809e94f6d72b1b


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656017

Title:
  nova-manage cell_v2 map_cell0 always returns a non-0 exit code

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  See the discussion in this review:

  https://review.openstack.org/#/c/409890/1/nova/cmd/manage.py@1289

  The map_cell0 CLI is really treated like a function and it's used by
  the simple_cell_setup command. If map_cell0 is used as a standalone
  command it always returns a non-0 exit code because it's returning a
  CellMapping object (or failing with a duplicate entry error if the
  cell0 mapping already exists).

  We should split the main part of the map_cell0 function out into a
  private method and then treat map_cell0 as a normal CLI with integer
  exit codes (0 on success, >0 on failure) and print out whatever
  information is needed when mapping cell0, like the uuid for example.
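
  A minimal sketch of the refactor described above (method and helper names
  are assumptions, not the merged nova code):

    # Keep the data-returning logic in a private helper for callers such
    # as simple_cell_setup, and make the public CLI action print what the
    # operator needs and return a proper exit code.
    def _map_cell0(self, database_connection=None):
        return self._create_cell_mapping_for_cell0(database_connection)

    def map_cell0(self, database_connection=None):
        mapping = self._map_cell0(database_connection=database_connection)
        print('Cell0 mapping uuid: %s' % mapping.uuid)
        return 0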

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657476] Re: Metadata agent fails to serve requests in python 3

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/421976
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7953e9886d96557b80a9afa2543d0db5008dc35f
Submitter: Jenkins
Branch:master

commit 7953e9886d96557b80a9afa2543d0db5008dc35f
Author: Oleg Bondarev 
Date:   Wed Jan 18 18:37:52 2017 +0400

Fix empty string check for python 3

It's '' in py2 and b'' in py3.
See bug for traceback.

Closes-Bug: #1657476
Change-Id: Ic2c32669bf238b702e13e81e15dd079d538a6abc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657476

Title:
  Metadata agent fails to serve requests in python 3

Status in neutron:
  Fix Released

Bug description:
  from http://logs.openstack.org/09/421209/7/experimental/gate-tempest-
  dsvm-nova-py35-ubuntu-xenial/2dda79b/logs/screen-q-meta.txt.gz:

  Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/eventlet/greenpool.py", line 
82, in _spawn_n_impl
  func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/eventlet/wsgi.py", line 719, 
in process_request
  proto.__init__(sock, address, self)
File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 409, in 
__init__
  server)
File "/usr/lib/python3.5/socketserver.py", line 681, in __init__
  self.handle()
File "/usr/lib/python3.5/http/server.py", line 422, in handle
  self.handle_one_request()
File "/usr/local/lib/python3.5/dist-packages/eventlet/wsgi.py", line 379, 
in handle_one_request
  self.environ = self.get_environ()
File "/usr/local/lib/python3.5/dist-packages/eventlet/wsgi.py", line 593, 
in get_environ
  env['REMOTE_ADDR'] = self.client_address[0]
  IndexError: index out of range
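
  The fix pattern is small; a sketch of a bytes/str-agnostic check (names
  are illustrative, not the actual neutron code):

    # Sockets hand back str ('') on Python 2 and bytes (b'') on Python 3,
    # so compare against both empty values -- or simply test truthiness --
    # instead of hard-coding ''.
    def is_empty(chunk):
        return chunk in (b'', '')  # equivalently: not chunk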

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658224] [NEW] Volumes Should Have Own Panel

2017-01-20 Thread Byron McCollum
Public bug reported:

Volumes haven't been part of Compute (Nova) for quite some time. Volumes
deserve their own panel, with top level links for Volumes, Snapshots,
and Backups.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1658224

Title:
  Volumes Should Have Own Panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Volumes haven't been part of Compute (Nova) for quite some time.
  Volumes deserve their own panel, with top level links for Volumes,
  Snapshots, and Backups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1658224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656691] Re: There is no way to delete a cell mapping except via DB directly

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/420451
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3e04a32df4d06f79a11423c9c0a7780d065187b1
Submitter: Jenkins
Branch: master

commit 3e04a32df4d06f79a11423c9c0a7780d065187b1
Author: Matt Riedemann 
Date:   Sun Jan 15 16:29:33 2017 -0500

Add nova-manage cell_v2 delete_cell command

This provides a way to delete empty cell mappings which
otherwise would have to be done directly in the database.

Change-Id: I541b072638b5d50985145391e76f610417fdcaa6
Closes-Bug: #1656691


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656691

Title:
  There is no way to delete a cell mapping except via DB directly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We currently provide a few ways to create cell mappings for cells v2
  in nova-manage but we don't have a way to delete a cell mapping in
  case there are some created erroneously.

  We should provide a nova-manage cell_v2 delete_cell command which
  allows deleting an empty cell mapping, it would take a cell uuid
  argument. If the cell is not found by uuid we should return an error.
  If the cell has mapped hosts or instances then it's not empty and we
  should return an error.

  Later iterations of the command might allow a --force option to delete
  the host/instance mappings so that those could be moved to another
  cell, but that would require some more thought.
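
  Expected usage would be along these lines (the option name is an
  assumption, subject to the final implementation):

    $ nova-manage cell_v2 delete_cell --cell_uuid <cell_uuid>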

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658200] [NEW] Running tests via nosestests fails due to insufficient test isolation

2017-01-20 Thread Lars Kellogg-Stedman
Public bug reported:

I don't know if we care about this or not, since it involves a test
harness other than tox.

Attempting to run the unit tests using nosetests (nosetests
tests/unittests) will fail because the _set_mock_metadata method appears
to only run once...so tests that expect non-default metadata (such as
test_instance_level_keys_replace_project_level_keys) will fail if any
prior test calls _set_mock_metadata().

This can be avoided by calling httpretty.reset() before calling
httpretty.register_uri(...).
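
A short sketch of the suggested workaround (the function name and URL are
illustrative; httpretty.reset() and register_uri() are the library's real
calls):

  import json
  import httpretty

  def set_mock_metadata(metadata, url='http://169.254.169.254/'):
      # Clear registrations left behind by earlier tests before adding new
      # ones, so each test sees only its own metadata.
      httpretty.reset()
      httpretty.register_uri(httpretty.GET, url,
                             body=json.dumps(metadata),
                             content_type='application/json')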

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1658200

Title:
  Running tests via nosestests fails due to insufficient test isolation

Status in cloud-init:
  New

Bug description:
  I don't know if we care about this or not, since it involves a test
  harness other than tox.

  Attempting to run the unit tests using nosetests (nosetests
  tests/unittests) will fail because the _set_mock_metadata method
  appears to only run once...so tests that expect non-default metadata
  (such as test_instance_level_keys_replace_project_level_keys) will
  fail if any prior test calls _set_mock_metadata().

  This can be avoided by calling httpretty.reset() before calling
  httpretty.register_uri(...).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1658200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658174] [NEW] cloud-init fails to disable ecdsa-sha2-nistp521 keys

2017-01-20 Thread Lars Kellogg-Stedman
Public bug reported:

cloud-init adds ssh_authorized_keys to the default user fedora and to
root but for root it disables the keys with a prefix command that echoes
the helpful message:

'Please login as the user "fedora" rather than the user "root".'

However, if the key is of type ecdsa-sha2-nistp521, it is not parsed
correctly, and the prefix command is not prepended.

This means that ECDSA keys can be used to login to root.
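
For context, the disabled root entry normally looks like the line below
(key material is truncated to a placeholder and the exact option list may
vary by cloud-init version); the bug is that ecdsa-sha2-nistp521 keys are
written without this prefix:

  no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"fedora\" rather than the user \"root\".';echo;sleep 10" ecdsa-sha2-nistp521 AAAA... user@host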

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1658174

Title:
  cloud-init fails to disable ecdsa-sha2-nistp521 keys

Status in cloud-init:
  New

Bug description:
  cloud-init adds ssh_authorized_keys to the default user fedora and to
  root but for root it disables the keys with a prefix command that
  echoes the helpful message:

  'Please login as the user "fedora" rather than the user "root".'

  However, if the key is of type ecdsa-sha2-nistp521, it is not parsed
  correctly, and the prefix command is not prepended.

  This means that ECDSA keys can be used to login to root.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1658174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658164] [NEW] community images breaks Images v1 API

2017-01-20 Thread Brian Rosmaita
Public bug reported:

The community images code has broken some v1 functionality.

CI was reverted today by Change-Id:
I03636966c6912af8fdc9fcfe49da3788e7316150

OSC reported failing functional tests.  Test in question does an image-
update in v1 to make an image public; test fails when the is_public
value for the updated image is still false.

An exception isn't raised in the test, so it's not a permissions
problem.  Looks like the call is accepted but the visibility value isn't
changed properly.

** Affects: glance
 Importance: Critical
 Assignee: Ian Cordasco (icordasc)
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1658164

Title:
  community images breaks Images v1 API

Status in Glance:
  Triaged

Bug description:
  The community images code has broken some v1 functionality.

  CI was reverted today by Change-Id:
  I03636966c6912af8fdc9fcfe49da3788e7316150

  OSC reported failing functional tests.  Test in question does an
  image-update in v1 to make an image public; test fails when the
  is_public value for the updated image is still false.

  An exception isn't raised in the test, so it's not a permissions
  problem.  Looks like the call is accepted but the visibility value
  isn't changed properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1658164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657930] Re: neutron-db-manage fails with ImportError

2017-01-20 Thread James Anziano
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657930

Title:
  neutron-db-manage fails with ImportError

Status in neutron:
  Invalid

Bug description:
  Hi,

  I am upgrading our OpenStack cluster from Liberty to Newton, but I am
  getting the following error when I run neutron-db-manage to sync the
  neutron database. The upgrade of nova was successful, but the neutron
  processes fail to start (I am working on the controller node for now). I
  am guessing this is because I first need to sync the database, which is
  where I get the following ImportError.

  I am following the official Newton installation guide (to complete the
  upgrade) and everything was fine up to the point of calling
  neutron-db-manage.

  Can you please let me know your suggestion for fixing this issue?
   

  root@controller:/var/log# /bin/sh -c "neutron-db-manage --config-file 
/etc/neutron/neutron.conf \  --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  Traceback (most recent call last):
File "/usr/local/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
686, in main
  return_val |= bool(CONF.command.func(config, CONF.command.name))
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
205, in do_upgrade
  run_sanity_checks(config, revision)
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
670, in run_sanity_checks
  script_dir.run_env()
File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 
407, in run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 
93, in load_python_file
  module = load_module_py(module_id, path)
File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 
79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 17, in 
  from neutron_lib.db import model_base
  ImportError: cannot import name model_base
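
  That ImportError usually means the installed neutron-lib predates the
  model_base module that the Newton neutron code expects. Two quick
  generic checks (these commands are suggestions, not from the original
  report):

    $ pip show neutron-lib
    $ python -c "from neutron_lib.db import model_base"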

  
  I can not even start neutron-server and the log file is empty. 

  
  ahmad@controller:/var/log$ sudo service neutron-server start
  ahmad@controller:/var/log$ 
  ahmad@controller:/var/log$ 
  ahmad@controller:/var/log$ service neutron-server status
  ● neutron-server.service - OpenStack Neutron Server
 Loaded: loaded (/lib/systemd/system/neutron-server.service; enabled; 
vendor preset: enabled)
 Active: inactive (dead) (Result: exit-code) since Thu 2017-01-19 19:48:12 
EST; 6s ago
Process: 13772 ExecStart=/etc/init.d/neutron-server systemd-start 
(code=exited, status=1/FAILURE)
Process: 13769 ExecStartPre=/bin/chown neutron:adm /var/log/neutron 
(code=exited, status=0/SUCCESS)
Process: 13766 ExecStartPre=/bin/chown neutron:neutron /var/lock/neutron 
/var/lib/neutron (code=exited, stat
Process: 13763 ExecStartPre=/bin/mkdir -p /var/lock/neutron 
/var/log/neutron /var/lib/neutron (code=exited, 
   Main PID: 13772 (code=exited, status=1/FAILURE)

  Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Unit entered 
failed state.
  Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Failed with 
result 'exit-code'.
  Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Service 
hold-off time over, scheduling restart.
  Jan 19 19:48:12 controller systemd[1]: Stopped OpenStack Neutron Server.
  Jan 19 19:48:12 controller systemd[1]: neutron-server.service: Start request 
repeated too quickly.
  Jan 19 19:48:12 controller systemd[1]: Failed to start OpenStack Neutron 
Server.
  ahmad@controller:/var/log$ 
  ahmad@controller:/var/log$ cat /var/log/neutron/neutron-server.log
  ahmad@controller:/var/log$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658148] Re: Horizon/Nova interactions broken

2017-01-20 Thread Rob Cresswell
** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Importance: Undecided => Critical

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
Milestone: None => ocata-rc1

** Description changed:

  This has occurred within the past few days. There are probably two
- distinct errors here, one related to listening extensions and the other
+ distinct errors here, one related to listing extensions and the other
  related to using absolute limits.
  
  Downgraded to python-novaclient 6.0.0 and everything works as expected.
  
  Horizon logs: http://paste.openstack.org/show/595824/
  
  Nova API logs: http://paste.openstack.org/show/595825/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1658148

Title:
  Horizon/Nova interactions broken

Status in OpenStack Dashboard (Horizon):
  New
Status in python-novaclient:
  New

Bug description:
  This has occurred within the past few days. There are probably two
  distinct errors here, one related to listing extensions and the other
  related to using absolute limits.

  Downgraded to python-novaclient 6.0.0 and everything works as
  expected.

  Horizon logs: http://paste.openstack.org/show/595824/

  Nova API logs: http://paste.openstack.org/show/595825/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1658148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658024] Re: Incorrect tag in other-config for openvswitch agent after upgrade to mitaka

2017-01-20 Thread Armando Migliaccio
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658024

Title:
  Incorrect tag in other-config for openvswitch agent after upgrade to
  mitaka

Status in neutron:
  Invalid

Bug description:
  We've performed the upgrade juno->kilo->liberty->mitaka (one by one)
  without rebooting compute hosts.

  After the mitaka upgrade we found that some tenant networks are not
  functional. Deeper debugging shows that in openvswitch the tag value in
  the 'other-config' field of the OVS port description does not match the
  actual tag on the port (the 'tag' field).

  This causes openvswitch-agent to set the wrong segmentation_id for
  unrelated host-local tags.

  Visible symptom: after restarting neutron-openvswitch-agent,
  connectivity on a given port appears for some time, then disappears.
  Tcpdump on the physical interface shows that traffic comes to the host
  with the proper segmentation_id, but the instance's replies are sent
  back with a wrong segmentation_id that belongs to some random network of
  a different tenant.

  There are two ways to fix this:
  1. reboot the host
  2. write the port's tag value into the other-config tag and restart
  neutron-openvswitch-agent (see the example command after the port
  listing below).

  Example of the incorrectly filled port (ovs-vsctl port list):

  _uuid   : a5bfb91f-78de-4916-b16a-6ea737cf3b6d
  bond_active_slave   : []
  bond_downdelay  : 0
  bond_fake_iface : false
  bond_mode   : []
  bond_updelay: 0
  external_ids: {}
  fake_bridge : false
  interfaces  : [7fb9c7a6-963c-4814-b9a4-a23d1a918843]
  lacp: []
  mac : []
  name: "tap20802dee-34"
  other_config: {net_uuid="9a1923c8-a07d-487e-a96e-310103acd911", 
network_type=vlan, physical_network=local, segmentation_id="3035", tag="201"}
  qos : []
  statistics  : {}
  status  : {}
  tag : 302
  trunks  : []
  vlan_mode   : []
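
  For fix option 2 above, the stale value can be corrected per port with a
  standard ovs-vsctl call, using the values from the listing above, before
  restarting the agent (treat this as illustrative, not an official repair
  procedure):

    $ ovs-vsctl set Port tap20802dee-34 other_config:tag=302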

  
  This problem has repeated in a few OpenStack installations, so it is not
  a random fluke.

  This script [1] fixes bad tags, but I believe this is a rather serious
  issue with openvswitch-agent persistency.

  
  [1] https://gist.github.com/amarao/fba1e766cfa217b0342d0fe066aeedd7

  
  Affected version: mitaka, but I believe it relates to the previous
  versions too, which were: juno, upgraded to kilo, upgraded to liberty,
  upgraded to mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658137] [NEW] volume backups fail if no container name provided

2017-01-20 Thread Eric Peterson
Public bug reported:

If no container name is provided, Horizon ends up passing "" to the
cinder client API call.  This is an invalid container name - None is the
correct value to pass for this to work as desired.
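
A minimal sketch of the fix on the Horizon side (helper names are
assumptions, not the actual dashboard code):

  # Pass None instead of "" when the user leaves the container field
  # blank; cinder rejects "" as a container name.
  def volume_backup_create(request, volume_id, container_name, name,
                           description):
      container = container_name or None
      return cinderclient(request).backups.create(
          volume_id, container=container, name=name,
          description=description)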

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: horizon
 Importance: Undecided
 Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1658137

Title:
  volume backups fail if no container name provided

Status in Cinder:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If no container name is provided, Horizon ends up passing "" to the
  cinder client API call.  This is an invalid container name - None is
  the correct value to pass for this to work as desired.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1658137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658116] [NEW] Wrong migration step run when file names are the same

2017-01-20 Thread Ron De Rose
Public bug reported:

We've seen a couple instances now where the wrong migration step is run
in tests, when the migration file names in each repo are the same. For
example, in the following patch, expand was called, yet the contract
file was the one actually run:

Traceback (most recent call last):
  File "keystone/tests/unit/test_sql_upgrade.py", line 1964, in 
test_migration_013_add_domain_id_to_user
self.expand(13)
  File "keystone/tests/unit/test_sql_upgrade.py", line 228, in expand
self.repos[EXPAND_REPO].upgrade(*args, **kwargs)
  File "keystone/common/sql/upgrades.py", line 63, in upgrade
self.schema_.runchange(ver, change, changeset.step)
  File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/schema.py",
 line 93, in runchange
change.run(self.engine, step)
  File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/script/py.py",
 line 148, in run
script_func(engine)
  File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/keystone/common/sql/contract_repo/versions/013_add_domain_id_to_user.py",
 line 43, in upgrade
migrate.UniqueConstraint(user.c.id, user.c.domain_id,
  File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/_collections.py",
 line 212, in __getattr__
raise AttributeError(key)
AttributeError: domain_id

http://logs.openstack.org/74/409874/29/check/gate-keystone-python27-db-
ubuntu-xenial/d2a60fd/testr_results.html.gz

Likewise, morgan was seeing a similar issue here, where the expand migration
file wasn't being run, so the test failed its check that the table exists:
https://review.openstack.org/#/c/422817/3/keystone/tests/unit/test_sql_upgrade.py

http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
/%23openstack-keystone.2017-01-20.log.html

However, both patches would run successfully locally.

As a workaround, making the repo file names unique fixes the problem,
suggesting that perhaps this is related to the files being cached.
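
A minimal illustration (not keystone code) of the suspected caching effect:
sqlalchemy-migrate loads each version script with imp.load_source() under a
module id derived from the file name, so identically named files in the
expand and contract repos end up sharing one entry in sys.modules (the
paths below are placeholders):

  import imp

  m1 = imp.load_source('013_add_domain_id_to_user',
                       'expand_repo/versions/013_add_domain_id_to_user.py')
  m2 = imp.load_source('013_add_domain_id_to_user',
                       'contract_repo/versions/013_add_domain_id_to_user.py')
  # Both loads register under the same key in sys.modules; anything that
  # later looks the module up by name gets whichever file was loaded last.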

** Affects: keystone
 Importance: Undecided
 Assignee: Ron De Rose (ronald-de-rose)
 Status: New

** Description changed:

- We seen a couple instances now where the wrong migration step is run in
- tests when the migration file names in each repo are the same. For
+ We've seen a couple instances now where the wrong migration step is run
+ in tests when the migration file names in each repo are the same. For
  example, in the following patch, expand was called, yet the contract
  file was the one actually run:
  
  Traceback (most recent call last):
-   File "keystone/tests/unit/test_sql_upgrade.py", line 1964, in 
test_migration_013_add_domain_id_to_user
- self.expand(13)
-   File "keystone/tests/unit/test_sql_upgrade.py", line 228, in expand
- self.repos[EXPAND_REPO].upgrade(*args, **kwargs)
-   File "keystone/common/sql/upgrades.py", line 63, in upgrade
- self.schema_.runchange(ver, change, changeset.step)
-   File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/schema.py",
 line 93, in runchange
- change.run(self.engine, step)
-   File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/script/py.py",
 line 148, in run
- script_func(engine)
-   File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/keystone/common/sql/contract_repo/versions/013_add_domain_id_to_user.py",
 line 43, in upgrade
- migrate.UniqueConstraint(user.c.id, user.c.domain_id,
-   File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/_collections.py",
 line 212, in __getattr__
- raise AttributeError(key)
+   File "keystone/tests/unit/test_sql_upgrade.py", line 1964, in 
test_migration_013_add_domain_id_to_user
+ self.expand(13)
+   File "keystone/tests/unit/test_sql_upgrade.py", line 228, in expand
+ self.repos[EXPAND_REPO].upgrade(*args, **kwargs)
+   File "keystone/common/sql/upgrades.py", line 63, in upgrade
+ self.schema_.runchange(ver, change, changeset.step)
+   File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/schema.py",
 line 93, in runchange
+ change.run(self.engine, step)
+   File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/script/py.py",
 line 148, in run
+ script_func(engine)
+   File 
"/home/jenkins/workspace/gate-keystone-python27-db-ubuntu-xenial/keystone/common/sql/contract_repo/versions/013_add_domain_id_to_user.py",
 line 43, in upgrade
+ migrate.UniqueConstraint(user.c.id, user.c.domain_id,
+   File 
"/home/je

[Yahoo-eng-team] [Bug 1658111] [NEW] Parameters from environment ignored when uploading a HEAT template

2017-01-20 Thread Radomir Dopieralski
Public bug reported:

When creating a new stack, we can specify a heat template file and an
environment file. However, the parameter values specified in the
environment file are completely ignored in the second step of the
wizard, where we still need to fill all the parameters manually. Even
though the environment file is getting passed to heat and could be used
to get those values, the fields in the second step of the wizard are all
required, so it's impossible to skip them.
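
For reference, the environment file being ignored just lists values under a
top-level "parameters" key (the values here are placeholders), so the
wizard could pre-fill or relax those fields:

  parameters:
    image: cirros-0.3.5-x86_64
    flavor: m1.small
    key_name: mykey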

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1658111

Title:
  Parameters from environment ignored when uploading a HEAT template

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a new stack, we can specify a heat template file and an
  environment file. However, the parameter values specified in the
  environment file are completely ignored in the second step of the
  wizard, where we still need to fill all the parameters manually. Even
  though the environment file is getting passed to heat and could be
  used to get those values, the fields in the second step of the wizard
  are all required, so it's impossible to skip them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1658111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657865] Re: It is possible to create cross domain implied roles

2017-01-20 Thread Rodrigo Duarte
Although we can do something like [1], the effective role assignments
will be empty because of [2]. Looks like this is not a bug after all :)

[1] http://paste.openstack.org/show/595788/
[2] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/core.py#L675-L691

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1657865

Title:
  It is possible to create cross domain implied roles

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Since we can't assign a project a role from a different domain, it is
  expected that implied roles spanning different domains cannot be created
  either. For example:

  * user1
  * project1 - domainA
  * role1 - domainA
  * role2 - domainB
  * create an assignment: user1/project1/role1

  If we create a rule where role1 implies role2, we would bypass the
  domain restriction.
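
  For reference, the inference rule in the example would be created with
  the role-inference API (the role ids are placeholders):

    PUT /v3/roles/{role1_id}/implies/{role2_id}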

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1657865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658090] [NEW] Nova API reports ProgrammingError in mysql layer

2017-01-20 Thread Silvan Kaiser
Public bug reported:

Nova API sometimes reports a ProgrammingError in Tempest's
TestMinimumBasicScenario in our CI:


2017-01-20 08:56:41.246 10377 DEBUG oslo_db.sqlalchemy.engines 
[req-7b026f0c-4071-4425-85be-e98d8ddc2d33 
tempest-TestMinimumBasicScenario-878230226 
tempest-TestMinimumBasicScenario-878230226] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _check_effective_sql_mode 
/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions 
[req-7b026f0c-4071-4425-85be-e98d8ddc2d33 
tempest-TestMinimumBasicScenario-878230226 
tempest-TestMinimumBasicScenario-878230226] Unexpected exception in API method
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 196, in index
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions servers = 
self._get_servers(req, is_detail=False)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 347, in 
_get_servers
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions 
sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 2407, in get_all
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions 
sort_dirs=sort_dirs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 2484, in _get_instances_by_filters
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions 
expected_attrs=fields, sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions result = 
fn(cls, context, *args, **kwargs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 1220, in get_by_filters
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions 
use_slave=use_slave, sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 226, in wrapper
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 1204, in _get_by_filters_impl
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions 
sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/api.py", line 763, in instance_get_all_by_filters_sort
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions 
sort_dirs=sort_dirs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 170, in wrapper
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 271, in wrapped
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions return 
f(context, *args, **kwargs)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 2250, in 
instance_get_all_by_filters_sort
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions return 
_instances_fill_metadata(context, query_prefix.all(), manual_joins)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2613, in 
all
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions return 
list(self)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2761, in 
__iter__
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions return 
self._execute_and_instances(context)
2017-01-20 08:56:41.267 10377 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py"

[Yahoo-eng-team] [Bug 1658078] [NEW] AttributeError: 'NoneType' object has no attribute 'support_requests'

2017-01-20 Thread Moshe Levi
Public bug reported:

When the compute node uses the ironic driver and the scheduler is configured
with the PCI passthrough filter, the VM goes to an error state and we can
see the following error in the scheduler:

2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server 
[req-d627c45c-a5cf-47bc-a8d1-fe4669516380 admin admin] Exception during message 
handling
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
218, in inner
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/scheduler/manager.py", line 84, in select_destinations
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server dests = 
self.driver.select_destinations(ctxt, spec_obj)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 51, in 
select_destinations
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server 
selected_hosts = self._schedule(context, spec_obj)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 103, in _schedule
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server spec_obj, 
index=num)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/scheduler/host_manager.py", line 572, in 
get_filtered_hosts
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server hosts, 
spec_obj, index)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/filters.py", line 89, in get_filtered_objects
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server list_objs = 
list(objs)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/filters.py", line 44, in filter_all
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server if 
self._filter_one(obj, spec_obj):
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/scheduler/filters/__init__.py", line 26, in _filter_one
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server return 
self.host_passes(obj, filter_properties)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/scheduler/filters/pci_passthrough_filter.py", line 48, in 
host_passes
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server if not 
host_state.pci_stats.support_requests(pci_requests.requests):
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server AttributeError: 
'NoneType' object has no attribute 'support_requests'
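
For illustration only (this is not the merged fix, and the helper name below
is invented), the crash shown above can be avoided by treating a host that
reports no PCI stats at all, as the ironic node does here, as unable to
satisfy a PCI request:

    # Hypothetical sketch, not nova's actual code: guard against
    # host_state.pci_stats being None before dereferencing it.
    def pci_host_passes(pci_stats, pci_requests):
        if not pci_requests:
            # Nothing was requested, so any host passes.
            return True
        if pci_stats is None:
            # The host reported no PCI information at all; it cannot
            # satisfy the request, so fail it instead of crashing.
            return False
        return pci_stats.support_requests(pci_requests)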

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658078

Title:
  AttributeError: 'NoneType' object has no attribute 'support_requests'

Status in OpenStack Compute (nova):
  New

Bug description:
  When the compute driver is ironic and the scheduler is configured with
  the PCI passthrough filter, the VM goes into an error state and we can see
  the following error in the scheduler:

  2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server 
[req-d627c45c-a5cf-47bc-a8d1-fe4669516380 admin admin] Exception during message 
handling
  2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/r

[Yahoo-eng-team] [Bug 1657923] Re: Add a ReST client for placement API

2017-01-20 Thread John Davidge
This will require an update to the Routed networks section[1] in the
networking guide.

[1] http://docs.openstack.org/newton/networking-guide/config-routed-
networks.html

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => Confirmed

** Changed in: openstack-manuals
   Importance: Undecided => Medium

** Tags added: networking-guide

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657923

Title:
  Add a ReST client for placement API

Status in neutron:
  New
Status in openstack-manuals:
  Confirmed

Bug description:
  https://review.openstack.org/414726
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ebe62dcd33b54165a0e06340295c4d832f89d3dc
  Author: Miguel Lavalle 
  Date:   Sat Dec 24 18:38:57 2016 -0600

  Add a ReST client for placement API
  
  This patchset adds a ReST client for the placement API. This
  client is used to update the IPv4 inventories associated with
  routed networks segments. This information is used by the
  Nova scheduler to decide the placement of instances in hosts,
  based on the availability of IPv4 addresses in routed
  networks segments
  
  DocImpact: Adds [placement] section to neutron.conf with two
 options: region_name and endpoint_type
  
  Change-Id: I2aa614d4e6229161047b08c8bdcbca0e2e5d1f0b
  Partially-Implements: blueprint routed-networks
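
For the networking guide update, the new section would look something like
the following in neutron.conf (the option names come from the DocImpact note
above; the values are only examples):

    [placement]
    region_name = RegionOne
    endpoint_type = public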

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658074] [NEW] openvswitch-agent spawning infinite number of ovsdb-client processes

2017-01-20 Thread Dr. Jens Rosenboom
Public bug reported:

After installing neutron on Ubuntu Xenial from the Newton UCA
(2:9.0.0-0ubuntu1.16.10.2~cloud0), I noticed these processes:

neutron  11222  2.9  0.8 262628 108712 ?   Ss   10:59   1:36 
/usr/bin/python /usr/bin/neutron-openvswitch-agent 
--config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/plugins/ml2/openvswitch_agent.ini 
--log-file=/var/log/neutron/neutron-openvswitch-agent.lo
root 11686  0.0  0.0  54112  3256 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
root 11688  0.0  0.3  83336 45444 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 11828  0.0  0.0  20056  2976 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 13426  0.0  0.0  54112  3204 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
root 13430  0.0  0.3  83336 45480 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 13490  0.0  0.0  20056  3052 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 14775  0.0  0.0  54112  3256 ?S11:01   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
root 14779  0.0  0.3  83336 45272 ?S11:01   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 14821  0.0  0.0  20056  2944 ?S11:01   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json

with another set being spawned every 30 seconds. In /var/log/neutron
/neutron-openvswitch-agent.log I see these errors:

2017-01-20 11:00:39.804 11222 ERROR neutron.agent.linux.async_process [-] Error 
received from [ovsdb-client monitor Interface name,ofport,external_ids 
--format=json]: sudo: unable to resolve host jr-ansi02
2017-01-20 11:00:39.805 11222 ERROR neutron.agent.linux.async_process [-] 
Process [ovsdb-client monitor Interface name,ofport,external_ids --format=json] 
dies due to the error: sudo: unable to resolve host jr-ansi02

Now of course one can claim that properly setting up sudo (or rather
/etc/hosts) will solve this issue, but the ovs-agent process should still
properly clean up its children and not assume that they are dead as soon
as there is any output on stderr (see the _read_stderr() function in
neutron/agent/linux/async_process.py).
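
A minimal sketch of that idea (purely illustrative, not neutron's actual
implementation): log whatever the child prints on stderr, and only treat it
as dead once poll() confirms it has exited:

    import logging

    LOG = logging.getLogger(__name__)

    def handle_stderr_line(process, line):
        """Return True while the child (a Popen-like object) is still alive."""
        # "sudo: unable to resolve host" is noisy, but not necessarily fatal.
        LOG.warning("ovsdb-client stderr: %s", line.rstrip())
        if process.poll() is None:
            return True
        LOG.error("ovsdb-client exited with status %s", process.returncode)
        return False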

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658074

Title:
  openvswitch-agent spawning infinite number of ovsdb-client processes

Status in neutron:
  New

Bug description:
  After installing neutron on Ubuntu Xenial from the Newton UCA
  (2:9.0.0-0ubuntu1.16.10.2~cloud0), I noticed these processes:

  neutron  11222  2.9  0.8 262628 108712 ?   Ss   10:59   1:36 
/usr/bin/python /usr/bin/neutron-openvswitch-agent 
--config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/plugins/ml2/openvswitch_agent.ini 
--log-file=/var/log/neutron/neutron-openvswitch-agent.lo
  root 11686  0.0  0.0  54112  3256 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
  root 11688  0.0  0.3  83336 45444 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 11828  0.0  0.0  20056  2976 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 13426  0.0  0.0  54112  3204 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
  root 13430  0.0  0.3  83336 45480 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 13490  0.0  0.0  20056  3052 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 14775  0.0  0.0  54112  3256 ?S11:01   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf

[Yahoo-eng-team] [Bug 1606136] Re: Fail to unrescue server with quobyte volume attached

2017-01-20 Thread Silvan Kaiser
*** This bug is a duplicate of bug 1335889 ***
https://bugs.launchpad.net/bugs/1335889

** This bug has been marked a duplicate of bug 1335889
   Race condition in quickly attaching / deleting volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606136

Title:
  Fail to unrescue server with quobyte volume attached

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This error occurs randomly with the Quobyte CI on arbitrary changes.
  Nova tries to detach a volume from a VM that is in vm_state error.

  CI run examples can be found at [1][2]

  Example output:
  ==
  Failed 1 tests - output below:
  ==

  
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume[id-f56e465b-fe10-48bf-b75d-646cda3a8bc9,negative,volume]
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "tempest/api/compute/servers/test_server_rescue_negative.py", line 
80, in _unrescue
  server_id, 'ACTIVE')
File "tempest/common/waiters.py", line 77, in wait_for_server_status
  server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 
35a2d53a-d80d-46b2-87a6-a7a82f9d5244 failed to build and is in ERROR status
  Details: {u'code': 500, u'message': u"Failed to open file 
'/mnt/quobyte-volume/abfa1002557ab2b21ec218a86487dd92/volume-351db2c5-9724-410f-b1d8-8680065c0788':
 No such file or directory", u'created': u'2016-07-23T13:03:20Z'}
  

  Captured traceback-2:
  ~
  Traceback (most recent call last):
File "tempest/api/compute/base.py", line 346, in delete_volume
  cls._delete_volume(cls.volumes_extensions_client, volume_id)
File "tempest/api/compute/base.py", line 277, in _delete_volume
  volumes_client.delete_volume(volume_id)
File "tempest/lib/services/compute/volumes_client.py", line 63, in 
delete_volume
  resp, body = self.delete("os-volumes/%s" % volume_id)
File "tempest/lib/common/rest_client.py", line 301, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/services/compute/base_compute_client.py", line 48, in 
request
  method, url, extra_headers, headers, body, chunked)
File "tempest/lib/common/rest_client.py", line 664, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 828, in _error_checker
  message=message)
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  
  

  Captured traceback-1:
  ~
  Traceback (most recent call last):
File "tempest/api/compute/servers/test_server_rescue_negative.py", line 
73, in _detach
  self.servers_client.detach_volume(server_id, volume_id)
File "tempest/lib/services/compute/servers_client.py", line 342, in 
detach_volume
  (server_id, volume_id))
File "tempest/lib/common/rest_client.py", line 301, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/services/compute/base_compute_client.py", line 48, in 
request
  method, url, extra_headers, headers, body, chunked)
File "tempest/lib/common/rest_client.py", line 664, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 777, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already 
exists
  Details: {u'code': 409, u'message': u"Cannot 'detach_volume' instance 
35a2d53a-d80d-46b2-87a6-a7a82f9d5244 while it is in vm_state error"}

  
  [1] http://78.46.57.153:8081/refs-changes-38-346438-3/
  [2] http://78.46.57.153:8081/refs-changes-58-346358-1/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1606136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508791] Re: Sporadic Libvirt volume timing issue

2017-01-20 Thread Silvan Kaiser
This bug report is seriously outdated and the logs are no longer available.
I'll create a new bug entry should this issue arise again.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508791

Title:
  Sporadic Libvirt volume timing issue

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  
  Sporadically, a Cinder volume is unmounted prior to being required by a
  spawning VM in tempest tests.

  Example log data set at http://176.9.127.22:8081/refs-
  changes-66-235766-3/

  For details search http://176.9.127.22:8081/refs-
  changes-66-235766-3/logs/screen-n-cpu.log.txt for volume name 'volume-
  632e0d7d-d07b-449f-87f4'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1508791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658060] [NEW] FirewallNotFound exceptions when deleting the firewall in FWaaS-DVR

2017-01-20 Thread Yaohua Yan
Public bug reported:

We have four nodes, and we deploy both the FWaaS and DVR services. When
deleting the firewall, we always get three FirewallNotFound exceptions.
At present, we believe that, in a DVR environment, every node runs an
L3-agent service, so a single plugin corresponds to multiple agents.
Each agent calls back the plugin's firewall_deleted()
(neutron_fwaas/services/firewall/fwaas_plugin.py) to delete the record
in the DB, but only the first agent succeeds.
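
One way to make that callback tolerant of several agents reporting the same
deletion would be to treat FirewallNotFound as "already done". The sketch
below is only an illustration (the helper name is invented; _get_firewall
and the exception class are the ones visible in the trace further down):

    def firewall_deleted_idempotent(plugin, context, firewall_id, not_found_exc):
        """Return True when the firewall record is (already) gone."""
        try:
            plugin._get_firewall(context, firewall_id)
        except not_found_exc:
            # Another L3 agent already confirmed this deletion; no-op.
            return True
        # The record still exists: fall through to the plugin's existing
        # deletion logic here.
        return False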

How to reproduce:
- first create a firewall applied to a DVR router
- then delete it

$ neutron router-show test-fwaas
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| distributed   | True |
| external_gateway_info |  |
| ha| False|
| id| cfa3e65e-d101-4cc7-80e5-39daf72c6572 |
| name  | test-fwaas   |
| routes|  |
| status| ACTIVE   |
| tenant_id | fc170b1b8a9a467b9e1a63d85ced5a86 |
+---+--+
$ neutron firewall-create --name fw --router test-fwaas policy
Created a new firewall:
++--+
| Field  | Value|
++--+
| admin_state_up | True |
| description|  |
| firewall_policy_id | 1eb3fff7-240f-4f9d-adf6-766e2cad7f59 |
| id | afd38a9e-cf0a-4667-94e0-853a888fd981 |
| name   | fw   |
| router_ids | cfa3e65e-d101-4cc7-80e5-39daf72c6572 |
| status | CREATED  |
| tenant_id  | fc170b1b8a9a467b9e1a63d85ced5a86 |
++--+
$ neutron firewall-show fw
++--+
| Field  | Value|
++--+
| admin_state_up | True |
| description|  |
| firewall_policy_id | 1eb3fff7-240f-4f9d-adf6-766e2cad7f59 |
| id | afd38a9e-cf0a-4667-94e0-853a888fd981 |
| name   | fw   |
| router_ids | cfa3e65e-d101-4cc7-80e5-39daf72c6572 |
| status | ACTIVE   |
| tenant_id  | fc170b1b8a9a467b9e1a63d85ced5a86 |
++--+
$ neutron firewall-delete fw

$ less neutron-service_error.log
2017-01-20 17:19:44.526 19338 ERROR oslo_messaging.rpc.dispatcher 
[req-fa4e7f8c-20e7-4e67-8cd5-fef4c0738498 ] Exception during message handling: 
Firewall 22c13294-b204-42bc-b592-beefe2b9f3c9 could not be found.
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/openstack/.venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 142, in _dispatch_and_reply
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/openstack/.venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 186, in _dispatch
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/openstack/.venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 130, in _do_dispatch
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/openstack/neutron-fwaas/neutron_fwaas/services/firewall/fwaas_plugin.py", 
line 67, in firewall_deleted
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher fw_db = 
self.plugin._get_firewall(context, firewall_id)
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/openstack/neutron-fwaas/neutron_fwaas/db/firewall/firewall_db.py", line 
101, in _get_firewall
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher raise 
fw_ext.FirewallNotFound(firewall_id=id)
2017-01-20 17:19:44.526 19338 TRACE oslo_messaging.rpc.dispatcher 
FirewallNotFound: Firewall 22

[Yahoo-eng-team] [Bug 1658048] [NEW] The generated apache configuration does not spawn enough processes and this can lead to a stalling server (potential DoS)

2017-01-20 Thread Yves-Gwenael Bourhis
Public bug reported:

When creating the apache configuration with:

 python manage.py make_web_conf --apache

The apache configuration file does not specify the number of apache processes.
By default apache will spawn only one. Not only is this a performance issue,
but it can lead to a Denial of Service if the apache process is too slow to
respond or stalls.
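
For example, a daemon-mode mod_wsgi configuration would normally pin the
worker count explicitly; something along these lines (the numbers and the
process group name are only illustrative):

    WSGIDaemonProcess horizon processes=3 threads=10
    WSGIProcessGroup horizon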

** Affects: horizon
 Importance: Undecided
 Assignee: Yves-Gwenael Bourhis (yves-gwenael-bourhis)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1658048

Title:
  The generated apache configuration does not spawn enough processes and
  this can lead to a stalling server (potential DoS)

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When creating the apache configuration with:

   python manage.py make_web_conf --apache

  The apache configuration file does not specify the number of apache
  processes. By default apache will spawn only one. Not only is this a
  performance issue, but it can lead to a Denial of Service if the apache
  process is too slow to respond or stalls.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1658048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656854] Re: Incorrect metadata in ConfigDrive when using baremetal ports under neutron

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/422068
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b34809dfae862e37b1362b73fd258af8d12adae5
Submitter: Jenkins
Branch:master

commit b34809dfae862e37b1362b73fd258af8d12adae5
Author: Sam Betts 
Date:   Tue Jan 17 12:45:21 2017 +

Ensure we mark baremetal links as phy links

In the Ironic multi-tenant case, the neutron ports will remain unbound
until later in the deploy process. Nova generates the network_data.json
file with all the links marked as unbound, which we need to correct as
these links will be bound after the config drive is generated and
written to the node. This patch updates the Ironic virt driver to
correct the network metadata.

Change-Id: I1881f4a9bca6a6d6a3b4e0e89a82b0765ae09eee
Closes-Bug: #1656854
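
As an illustration of what the correction amounts to (inferred from the
commit title, not quoted from the patch): the "unbound" link in the
network_data.json reproduced in the bug description below would instead be
emitted with a physical link type, e.g.:

    "links": [{"ethernet_mac_address": "18:66:da:5f:07:f4", "mtu": 1500,
               "type": "phy", "id": "tap7d178b79-86", "vif_id":
               "7d178b79-86a9-4e56-824e-fe503e422960"}]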


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656854

Title:
  Incorrect metadata in ConfigDrive when using baremetal ports under
  neutron

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  If a baremetal instance is booted with a neutron network and config drive
  enabled, it receives incorrect network data in network_data.json, which
  causes a traceback in cloud-init: ValueError: Unknown network_data link
  type: unbound

  All software is at Newton:  ironic (1:6.2.1-0ubuntu1), nova
  (2:14.0.1-0ubuntu1), neutron (2:9.0.0-0ubuntu1).

  network_data.json content:

  {"services": [{"type": "dns", "address": "8.8.8.8"}], "networks":
  [{"network_id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", "type":
  "ipv4", "netmask": "255.255.255.224", "link": "tap7d178b79-86",
  "routes": [{"netmask": "0.0.0.0", "network": "0.0.0.0", "gateway":
  "204.74.228.65"}], "ip_address": "204.74.228.75", "id": "network0"}],
  "links": [{"ethernet_mac_address": "18:66:da:5f:07:f4", "mtu": 1500,
  "type": "unbound", "id": "tap7d178b79-86", "vif_id":
  "7d178b79-86a9-4e56-824e-fe503e422960"}]}

  neutron port description:
  openstack  port show 7d178b79-86a9-4e56-824e-fe503e422960  -f json
  {
"status": "DOWN", 
"binding_profile": "local_link_information='[{u'switch_info': u'c426s1', 
u'port_id': u'1/1/21', u'switch_id': u'60:9c:9f:49:a8:b4'}]'", 
"project_id": "7d450ecf00d64399aeb93bc122cb6dae", 
"binding_vnic_type": "baremetal", 
"binding_vif_details": "", 
"name": "", 
"admin_state_up": "UP", 
"network_id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", 
"created_at": "2017-01-16T14:32:27Z", 
"updated_at": "2017-01-16T14:36:22Z", 
"id": "7d178b79-86a9-4e56-824e-fe503e422960", 
"device_owner": "baremetal:none", 
"binding_host_id": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc", 
"revision_number": 7, 
"mac_address": "18:66:da:5f:07:f4", 
"binding_vif_type": "other", 
"device_id": "9762e013-ffb9-4512-a56d-2a11694a1de8", 
"fixed_ips": "ip_address='204.74.228.75', 
subnet_id='f41ae071-d0d8-4192-96c3-1fd73886275b'", 
"extra_dhcp_opts": "", 
"description": ""
  }

  ironic is configured for multitenancy (to use neutron):
  default_network_interface=neutron.
  neutron is configured for ML2, and ML2 is configured for
  networking_generic_switch. This works fine and toggles the port on the
  real switch into the VLAN (access) and out.

  Network is configured to work with vlans.

  Network description:
  openstack network show client-22-vlan  -f json
  {
"status": "ACTIVE", 
"router:external": "Internal", 
"availability_zone_hints": "", 
"availability_zones": "nova", 
"description": "", 
"provider:physical_network": "client", 
"admin_state_up": "UP", 
"updated_at": "2017-01-16T13:01:47Z", 
"created_at": "2017-01-16T12:59:10Z", 
"tags": [], 
"ipv6_address_scope": null, 
"provider:segmentation_id": 22, 
"mtu": 1500, 
"provider:network_type": "vlan", 
"revision_number": 5, 
"ipv4_address_scope": null, 
"subnets": "f41ae071-d0d8-4192-96c3-1fd73886275b", 
"shared": false, 
"project_id": "7d450ecf00d64399aeb93bc122cb6dae", 
"id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", 
"name": "client-22-vlan"
  }

  subnet description:
  openstack  subnet show f41ae071-d0d8-4192-96c3-1fd73886275b  -f json
  {
"service_types": [], 
"description": "", 
"enable_dhcp": false, 
"network_id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", 
"created_at": "2017-01-16T13:01:12Z", 
"dns_nameservers": "8.8.8.8", 
"updated_at": "2017-01-16T13:01:47Z", 
"ipv6_ra_mode": null, 
"allocation_pools": "204.74.228.66-204.74.228.94", 
"gateway_ip": "204.74.228.65", 
"revision_number": 3, 
"ipv6_address_mode": null, 
"ip_version": 4, 
"host_routes": "", 
"cidr": "204.74.228.64/27", 

[Yahoo-eng-team] [Bug 1658024] [NEW] Incorrect tag in other-config for openvswitch agent after upgrade to mitaka

2017-01-20 Thread George Shuklin
Public bug reported:

We've performed an upgrade juno->kilo->liberty->mitaka (one by one) without
rebooting compute hosts.

After the mitaka upgrade we found that some tenant networks are not
functional. Deeper debugging shows that in openvswitch the tag value in the
'other-config' field of the OVS port description does not match the actual
tag on the port (the tag field).

This causes openvswitch-agent to set the wrong segmentation_id for
unrelated host-local tags.

Visible symptom: after restarting neutron-openvswitch-agent, connectivity
with the given port appears for some time, then disappears. Tcpdump on the
physical interface shows that traffic arrives at the host with the proper
segmentation_id, but the instance's replies are sent back with a wrong
segmentation_id, which belongs to some random network of a different
tenant.

There are two ways to fix this:
1. reboot the host
2. write the port's tag value into the other_config tag field and restart
neutron-openvswitch-agent (see the example below).
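
For the port shown in the example below, option 2 amounts to something like
(illustrative only; the gist referenced later in this report automates it
for all ports):

$ ovs-vsctl set Port tap20802dee-34 other_config:tag=302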

Example of the incorrectly filled port (ovs-vsctl port list):

_uuid   : a5bfb91f-78de-4916-b16a-6ea737cf3b6d
bond_active_slave   : []
bond_downdelay  : 0
bond_fake_iface : false
bond_mode   : []
bond_updelay: 0
external_ids: {}
fake_bridge : false
interfaces  : [7fb9c7a6-963c-4814-b9a4-a23d1a918843]
lacp: []
mac : []
name: "tap20802dee-34"
other_config: {net_uuid="9a1923c8-a07d-487e-a96e-310103acd911", 
network_type=vlan, physical_network=local, segmentation_id="3035", tag="201"}
qos : []
statistics  : {}
status  : {}
tag : 302
trunks  : []
vlan_mode   : []


This problem has repeated in a few installations of OpenStack, therefore it
is not a random fluke.

This script [1] fixes bad tags, but I believe this is a rather serious
issue with openvswitch-agent persistence.


[1] https://gist.github.com/amarao/fba1e766cfa217b0342d0fe066aeedd7


Affected version: mitaka, but I believe it is related to the previous
versions as well: juno, upgraded to kilo, upgraded to liberty, upgraded to
mitaka.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658024

Title:
  Incorrect tag in other-config for openvswitch agent after upgrade to
  mitaka

Status in neutron:
  New

Bug description:
  We've performed an upgrade juno->kilo->liberty->mitaka (one by one)
  without rebooting compute hosts.

  After the mitaka upgrade we found that some tenant networks are not
  functional. Deeper debugging shows that in openvswitch the tag value in
  the 'other-config' field of the OVS port description does not match the
  actual tag on the port (the tag field).

  This causes openvswitch-agent to set the wrong segmentation_id for
  unrelated host-local tags.

  Visible symptom: after restarting neutron-openvswitch-agent,
  connectivity with the given port appears for some time, then disappears.
  Tcpdump on the physical interface shows that traffic arrives at the host
  with the proper segmentation_id, but the instance's replies are sent back
  with a wrong segmentation_id, which belongs to some random network of a
  different tenant.

  There are two ways to fix this:
  1. reboot the host
  2. write the port's tag value into the other_config tag field and restart
  neutron-openvswitch-agent.

  Example of the incorrectly filled port (ovs-vsctl port list):

  _uuid   : a5bfb91f-78de-4916-b16a-6ea737cf3b6d
  bond_active_slave   : []
  bond_downdelay  : 0
  bond_fake_iface : false
  bond_mode   : []
  bond_updelay: 0
  external_ids: {}
  fake_bridge : false
  interfaces  : [7fb9c7a6-963c-4814-b9a4-a23d1a918843]
  lacp: []
  mac : []
  name: "tap20802dee-34"
  other_config: {net_uuid="9a1923c8-a07d-487e-a96e-310103acd911", 
network_type=vlan, physical_network=local, segmentation_id="3035", tag="201"}
  qos : []
  statistics  : {}
  status  : {}
  tag : 302
  trunks  : []
  vlan_mode   : []

  
  This problem has repeated in a few installations of OpenStack, therefore
  it is not a random fluke.

  This script [1] fixes bad tags, but I believe this is a rather serious
  issue with openvswitch-agent persistence.

  [1] https://gist.github.com/amarao/fba1e766cfa217b0342d0fe066aeedd7

  Affected version: mitaka, but I believe it is related to the previous
  versions as well: juno, upgraded to kilo, upgraded to liberty, upgraded
  to mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658019] [NEW] ovs_lib.OVSBridge.delete_flows does not delete flows when called with no args

2017-01-20 Thread Thomas Morin
Public bug reported:

ovs_lib.OVSBridge.delete_flows [1] does not delete flows when called with no
args, because in that case ovs-ofctl is called as "ovs-ofctl del-flows
<bridge> -" [2] and nothing is provided on stdin, which is not
interpreted by ovs-ofctl as "delete all flows" [3].

The issue really is in OVSBridge.do_action_flows [4] and would impact
mod_flows as well.

This bug is currently silent because there does not seem to be any code
calling delete_flows() without arguments on an OVSBridge instance;
existing code uses bridges inheriting from OpenFlowSwitchMixin which
shadow the problematic implementation in ovs_lib.OVSBridge.


[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L310
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L302

[3] http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt

   [--bundle] del-flows switch
   [--bundle] [--strict] del-flows switch [flow]
   [--bundle] [--strict] del-flows switch - < file
  Deletes entries from switch's flow table.  With  only  a  switch
  argument,  deletes  all  flows.  Otherwise, deletes flow entries
  that match the specified flows.  With  --strict,  wildcards  are
  not treated as active for matching purposes.

[4]
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L296
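
A standalone sketch of the missing guard (illustrative only, not the
eventual neutron fix): only pass "-" and feed stdin when there actually are
flow specifications, otherwise let ovs-ofctl delete everything:

    import subprocess

    def del_flows(bridge, flow_specs=None):
        """Delete matching flows, or all flows when no specs are given."""
        cmd = ['ovs-ofctl', 'del-flows', bridge]
        if not flow_specs:
            # Plain "del-flows <bridge>" removes every flow on the bridge.
            return subprocess.check_call(cmd)
        # Feed the flow specifications on stdin, one per line.
        proc = subprocess.Popen(cmd + ['-'], stdin=subprocess.PIPE,
                                universal_newlines=True)
        proc.communicate('\n'.join(flow_specs))
        return proc.returncode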

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658019

Title:
  ovs_lib.OVSBridge.delete_flows does not delete flows when called with
  no args

Status in neutron:
  New

Bug description:
  ovs_lib.OVSBridge.delete_flows [1] does not delete flows when called with
  no args, because in that case ovs-ofctl is called as "ovs-ofctl del-flows
  <bridge> -" [2] and nothing is provided on stdin, which is not
  interpreted by ovs-ofctl as "delete all flows" [3].

  The issue really is in OVSBridge.do_action_flows [4] and would impact
  mod_flows as well.

  This bug is currently silent because there does not seem to be any
  code calling delete_flows() without arguments on an OVSBridge instance;
  existing code uses bridges inheriting from OpenFlowSwitchMixin which
  shadow the problematic implementation in ovs_lib.OVSBridge.

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L310
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L302

  [3] http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt

 [--bundle] del-flows switch
 [--bundle] [--strict] del-flows switch [flow]
 [--bundle] [--strict] del-flows switch - < file
Deletes entries from switch's flow table.  With  only  a  switch
argument,  deletes  all  flows.  Otherwise, deletes flow entries
that match the specified flows.  With  --strict,  wildcards  are
not treated as active for matching purposes.

  [4]
  
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L296

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654287] Re: functional test netns_cleanup failing in gate

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/421325
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3f9f740d81b51be5e563069c720366fa90ade9ee
Submitter: Jenkins
Branch:master

commit 3f9f740d81b51be5e563069c720366fa90ade9ee
Author: Daniel Alvarez 
Date:   Thu Jan 12 01:06:01 2017 +

Fix netns_cleanup interrupted on rwd I/O

Functional tests for netns_cleanup have been failing a few times
in the gate lately. After thorough tests we've seen that the issue was
related to using rootwrap-daemon inside a wait_until_true loop. When
timeout fired while utils.execute() was reading from rootwrap-daemon,
it got interrupted and the output of the last command was not read.
Therefore, next calls to utils.execute() would read the output of
their previous command rather than their own, leading to unexpected
results.

This fix will poll existing processes in the namespace without making
use of the wait_until_true loop. Instead, it will check elapsed time
and raise the exception if timeout is exceeded.

Also, i'm removing debug traces introduced in
327f7fc4d54bbaaed3778b5eb3c51a037a9a178f which helped finding the root
cause of this bug.

Change-Id: Ie233261e4be36eecaf6ec6d0532f0f5e2e996cd2
Closes-Bug: #1654287
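
The polling pattern described in the commit message looks roughly like this
(a simplified sketch, not the merged code; list_processes stands for
whatever returns the processes still running in the namespace):

    import time

    def wait_until_no_processes(list_processes, timeout=20, interval=1):
        start = time.time()
        while list_processes():
            if time.time() - start > timeout:
                raise RuntimeError('processes still running after %ss'
                                   % timeout)
            time.sleep(interval)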


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654287

Title:
  functional test netns_cleanup failing in gate

Status in neutron:
  Fix Released
Status in oslo.rootwrap:
  New

Bug description:
  
  The functional test for netns_cleanup has failed in the gate today [0].

  Apparently, when trying to get the list of devices
  (ip_lib.get_devices(): 'find /sys/class/net -maxdepth 1 -type l -printf
  %f') through rootwrap_daemon, it's getting the output of the previous
  command instead ('netstat -nlp'). This causes the netns_cleanup module
  to try to unplug random devices which correspond to the actual output of
  the 'netstat' command.

  This bug doesn't look related to the test itself but to rootwrap_daemon.
  Maybe it is due to the long output of the netstat command?

  
  Relevant part of the log

  2017-01-05 12:17:04.609 27615 DEBUG neutron.agent.linux.utils 
[req-68eceb29-052a-4c8c-8152-38bbe636cba5 - - - - -] Running command (rootwrap 
daemon): ['ip', 'netns', 'exec', 
'qrouter-cf2030c6-c924-45bb-b13b-6774d275b394', 'netstat', '-nlp'] 
execute_rootwrap_daemon neutron/agent/linux/utils.py:108
  2017-01-05 12:17:04.613 27615 DEBUG neutron.agent.linux.utils 
[req-68eceb29-052a-4c8c-8152-38bbe636cba5 - - - - -] Exit code: 0 execute 
neutron/agent/linux/utils.py:149
  2017-01-05 12:17:04.614 27615 DEBUG neutron.agent.linux.utils 
[req-68eceb29-052a-4c8c-8152-38bbe636cba5 - - - - -] Running command (rootwrap 
daemon): ['ip', 'netns', 'exec', 
'qrouter-cf2030c6-c924-45bb-b13b-6774d275b394', 'find', '/sys/class/net', 
'-maxdepth', '1', '-type', 'l', '-printf', '%f '] execute_rootwrap_daemon 
neutron/agent/linux/utils.py:108
  2017-01-05 12:17:04.645 27615 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 14 __log_wakeup 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/ovs/poller.py:202
  2017-01-05 12:17:04.686 27615 DEBUG neutron.agent.linux.utils 
[req-68eceb29-052a-4c8c-8152-38bbe636cba5 - - - - -] Exit code: 0 execute 
neutron/agent/linux/utils.py:149
  2017-01-05 12:17:04.688 27615 DEBUG neutron.agent.linux.utils 
[req-68eceb29-052a-4c8c-8152-38bbe636cba5 - - - - -] Running command (rootwrap 
daemon): ['ip', 'netns', 'exec', 
'qrouter-cf2030c6-c924-45bb-b13b-6774d275b394', 'ip', 'link', 'delete', 
'Active'] execute_rootwrap_daemon neutron/agent/linux/utils.py:108
  2017-01-05 12:17:04.746 27615 DEBUG neutron.agent.linux.utils 
[req-68eceb29-052a-4c8c-8152-38bbe636cba5 - - - - -] Exit code: 0 execute 
neutron/agent/linux/utils.py:149
  2017-01-05 12:17:04.747 27615 DEBUG neutron.agent.linux.utils 
[req-68eceb29-052a-4c8c-8152-38bbe636cba5 - - - - -] Running command (rootwrap 
daemon): ['ip', 'netns', 'exec', 
'qrouter-cf2030c6-c924-45bb-b13b-6774d275b394', 'ip', 'link', 'delete', 
'Internet'] execute_rootwrap_daemon neutron/agent/linux/utils.py:108
  2017-01-05 12:17:04.758 27615 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 14 __log_wakeup 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/ovs/poller.py:202
  2017-01-05 12:17:04.815 27615 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 14 __log_wakeup 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/ovs/poller.py:202
  2017-01-05 12:17:04.822 27615 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 7 __log_wakeup 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/ovs/poller.py:202
  2017-01-05 12:17:04.822 27615 DEBUG neutron.agent.ovsd

[Yahoo-eng-team] [Bug 1653899] Re: The 500 returns for invalid regex in the pattern match query parameters

2017-01-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/420494
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=841916daf78e740172da8f8b671069e81ae9a735
Submitter: Jenkins
Branch:master

commit 841916daf78e740172da8f8b671069e81ae9a735
Author: Kevin_Zheng 
Date:   Mon Jan 16 12:02:30 2017 +0800

Strict pattern match query parameters

We have a lot of filters are pattern match.

There is list for exact match

https://github.com/openstack/nova/blob/df2fd4a252cecc1e1ef471c071e57526ddf65499/nova/db/sqlalchemy/api.py#L2221

Out of that list will be pattern match

https://github.com/openstack/nova/blob/df2fd4a252cecc1e1ef471c071e57526ddf65499/nova/db/sqlalchemy/api.py#L2231

An HTTP 500 is raised if an invalid regex is provided for
those filters; restrict their format to be a regex
in the JSON schema to avoid this.

partial implement of bp add-whitelist-for-server-list-filter-sort-parameters
Closes-Bug: #1653899

Change-Id: I4cf38407c20284dee7127edfe312da81caac9272
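
The merged change enforces this through the request's JSON schema; a rough
equivalent in plain Python (illustrative only, and only an approximation
since MySQL's regex dialect is not identical to Python's) would be:

    import re

    def validate_regex_filter(name, value):
        """Reject an invalid regular expression before it reaches the DB."""
        try:
            re.compile(value)
        except re.error:
            raise ValueError("filter %r is not a valid regular expression: %r"
                             % (name, value))
        return value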


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653899

Title:
  The 500 returns for invalid regex in the pattern match query
  parameters

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We have a lot of filters that are pattern matched.

  There is a list for exact match:
  https://github.com/openstack/nova/blob/df2fd4a252cecc1e1ef471c071e57526ddf65499/nova/db/sqlalchemy/api.py#L2221

  Filters outside that list are pattern matched:
  https://github.com/openstack/nova/blob/df2fd4a252cecc1e1ef471c071e57526ddf65499/nova/db/sqlalchemy/api.py#L2231

  When I input an invalid regex pattern, I get a 500 returned.

  For example:

  curl -g -i -X GET http://hp-pc:8774/v2.1/servers?node=[[[ -H
  "OpenStack-API-Version: compute 2.39" -H "User-Agent: python-
  novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-
  Version: 2.39" -H "X-Auth-Token:
  
gABYbKbto9BeEG31MtaiSCtIc43YKQCclVRJBklKMTv010fyB9jUgjmgvFSLCj8TyYfwJyIKiMduDesKNweCqnjcfLNlkMOsiNsHb4AyYrk0OlvZMwJ5I2rS1x_3kjyP2zbEUEJEKU1WrIY7QvjRBXQ7-r8AoI2QBCqolZjhtm2ckfoULTA"

  There is traceback in the log:

  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters context)
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 166, in 
execute
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 322, in _query
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters conn.query(q)
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 835, in 
query
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1019, in 
_read_query_result
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters result.read()
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1302, in 
read
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters first_packet 
= self.connection._read_packet()
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 981, in 
_read_packet
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters 
packet.check_error()
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 393, in 
check_error
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
  2017-01-04 15:43:49.113 TRACE oslo_db.sqlalchemy.exc_filters raise 
er