[Yahoo-eng-team] [Bug 1773945] Re: nova client servers.list crashes with bad marker

2018-06-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/572539
Committed: 
https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=abaa86fd86d4beca0082ff1768d5306e5e86302e
Submitter: Zuul
Branch: master

commit abaa86fd86d4beca0082ff1768d5306e5e86302e
Author: Matt Riedemann 
Date:   Tue Jun 5 19:57:49 2018 +

Revert "Fix listing of instances above API max_limit"

This reverts commit eff607ccef91d09052d58f6798f68d67404f51ce.

There was no apparent need for the change being reverted since
the user can list all servers by specifying --limit=-1 when running
the nova list command.

The change introduced a problem whereby the first pass to
list instances from the server would get up to
[api]/max_limit (default 1000) results and then call again
with a marker. If the last instance in the list (the marker)
is corrupted in the instance_mappings table in the API DB
by not having an associated cell mapping, listing instances
will always fail with a MarkerNotFound error even though
the CLI user is not passing a marker nor specifying
--limit=-1. The corrupted instance mapping record resulting
in the MarkerNotFound error is something else that should
be fixed on the server side (and likely result in a 500) but
the change in behavior of the CLI makes it always fail
if you hit this even if you're not passing a marker.

Change-Id: Ibb43f500a74733b85bd3242592d36985bfb45856
Closes-Bug: #1773945


** Changed in: python-novaclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773945

Title:
  nova client servers.list crashes with bad marker

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  Fix Released

Bug description:
  We have a Python script that called servers.list() on an instance of
  novaclient.v2.client.Client. Sometimes that raises a "BadRequest
  marker not found" exception:

  Our call:

    client = nova_client.Client("2", session=some_session)
    client.servers.list()

  Observed Stacktrace:

    File "/usr/lib/python2.7/site-packages//.py", line 630, in :
  all_servers = self.nova.servers.list()
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 854, 
in list
  "servers")
    File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 257, in 
_list
  resp, body = self.api.client.get(url)
    File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 304, 
in get
  return self.request(url, 'GET', **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 83, in 
request
  raise exceptions.from_response(resp, body, url, method)
  BadRequest: marker [6a91d602-ab6e-42e0-929e-5ec33df2ddef] not found (HTTP 
400) (Request-ID: req-78827725-801d-4514-8cc8-e4b94f15c191)

  Discussion:

  We have a lot of stacks and we sometimes create multiple stacks at the
  same time. We've noticed that the stacks with the mentioned UUIDs were
  created just before these errors occur. It seems that when a newly-created
  stack appears at a certain location in the server list, its UUID is used
  as a marker, but the code that validates the marker does not recognize
  such stacks.
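
  For illustration, here is a rough sketch of the client-side pagination that
  the reverted change performed (this is not the actual novaclient code;
  'some_session' is the keystoneauth session from the call above):

    from novaclient import client as nova_client

    nova = nova_client.Client("2", session=some_session)

    servers = []
    marker = None
    while True:
        # Each call returns at most [api]/max_limit (default 1000) servers.
        page = nova.servers.list(limit=1000, marker=marker)
        if not page:
            break
        servers.extend(page)
        # The last item of the page becomes the marker for the next request.
        # If its instance_mappings row has no cell mapping, the next call
        # fails with "BadRequest: marker [...] not found".
        marker = page[-1].id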

  Relevant versions:

  - python-novaclient (9.1.0)
  - nova (16.0.0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1773945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775310] [NEW] Unused namespace is appeared.

2018-06-05 Thread Yasuhiro Kimura
Public bug reported:

An unused namespace appears when a VM is migrated or resized under the
following conditions:

1) The L3 agent is enabled on the compute node.
2) The migrating (resizing) VM is connected to a non-DVR router.
3) The migrating (resizing) VM uses a floating IP.
4) Another VM connected to the same non-DVR router remains on the source
compute node.

The L3 agent keeps logging errors because it cannot remove the unused
namespace.

Although the non-DVR router is hosted on the network node, neutron-server
notifies the L3 agent on the compute node. The logic that determines which
L3 agents to notify is broken.

** Affects: neutron
 Importance: Undecided
 Assignee: Yasuhiro Kimura (yasukimura)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Yasuhiro Kimura (yasukimura)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775310

Title:
  Unused namespace is appeared.

Status in neutron:
  New

Bug description:
  An unused namespace appears when a VM is migrated or resized under the
  following conditions:

  1) The L3 agent is enabled on the compute node.
  2) The migrating (resizing) VM is connected to a non-DVR router.
  3) The migrating (resizing) VM uses a floating IP.
  4) Another VM connected to the same non-DVR router remains on the source
  compute node.

  The L3 agent keeps logging errors because it cannot remove the unused
  namespace.

  Although the non-DVR router is hosted on the network node, neutron-server
  notifies the L3 agent on the compute node. The logic that determines which
  L3 agents to notify is broken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775310/+subscriptions



[Yahoo-eng-team] [Bug 1775308] [NEW] Listing placement usages (total or per resource provider) in a new process can result in a 500

2018-06-05 Thread Chris Dent
Public bug reported:

When requesting /usages or /resource_providers/{uuid}/usages it is
possible to cause a 500 error if placement is running in a multi-process
scenario and the usages query is the first request a process has
received. This is because the methods which provide UsageLists do not
call _ensure_rc_cache, resulting in:

  File "/usr/lib/python3.6/site-packages/nova/api/openstack/placement/objects/resource_provider.py", line 2374, in _from_db_object
    rc_str = _RC_CACHE.string_from_id(source['resource_class_id'])
  AttributeError: 'NoneType' object has no attribute 'string_from_id'

We presumably don't see this in our usual testing because any process
has already had other requests happen, setting the cache.

For now, the fix is to add the _ensure_rc_cache call in the right
places, but long term if/when we switch to the os-resource-class model
we can do the caching or syncing a bit differently (see
https://review.openstack.org/#/c/553857/ for an example).
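
For illustration, a minimal self-contained sketch of the failure mode and the
proposed guard; the names mirror the traceback above, but this is not the
actual placement code:

    _RC_CACHE = None  # set lazily, once per worker process


    class ResourceClassCache(object):
        """Toy stand-in for placement's resource class id<->string cache."""
        def __init__(self, mapping):
            self._id_to_str = mapping

        def string_from_id(self, rc_id):
            return self._id_to_str[rc_id]


    def _ensure_rc_cache():
        """Initialize the per-process cache exactly once; no-op afterwards."""
        global _RC_CACHE
        if _RC_CACHE is None:
            # Placement reads this from the DB; hard-coded here for brevity.
            _RC_CACHE = ResourceClassCache({0: 'VCPU', 1: 'MEMORY_MB'})


    def usages_from_db_rows(rows):
        _ensure_rc_cache()  # the fix: without it a fresh worker sees None
        return {_RC_CACHE.string_from_id(r['resource_class_id']): r['used']
                for r in rows}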

** Affects: nova
 Importance: Medium
 Assignee: Chris Dent (cdent)
 Status: Triaged


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775308

Title:
  Listing placement usages (total or per resource provider) in a new
  process can result in a 500

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  When requesting /usages or /resource_providers/{uuid}/usages it is
  possible to cause a 500 error if placement is running in a multi-
  process scenario and the usages query is the first request a process
  has received. This is because the methods which provide UsageLists do
  not call _ensure_rc_cache, resulting in:

    File "/usr/lib/python3.6/site-packages/nova/api/openstack/placement/objects/resource_provider.py", line 2374, in _from_db_object
      rc_str = _RC_CACHE.string_from_id(source['resource_class_id'])
    AttributeError: 'NoneType' object has no attribute 'string_from_id'

  We presumably don't see this in our usual testing because any process
  has already had other requests happen, setting the cache.

  For now, the fix is to add the _ensure_rc_cache call in the right
  places, but long term if/when we switch to the os-resource-class model
  we can do the caching or syncing a bit differently (see
  https://review.openstack.org/#/c/553857/ for an example).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1775308/+subscriptions



[Yahoo-eng-team] [Bug 1771885] Re: bionic: static maas missing search domain in systemd-resolve configuration

2018-06-05 Thread David Britton
Given the workaround available for maas & cloud-init this is working as
expected.  Thanks for the debugging everyone.

** Changed in: cloud-init
   Status: New => Won't Fix

** Changed in: maas
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1771885

Title:
  bionic: static maas missing search domain in systemd-resolve
  configuration

Status in cloud-init:
  Won't Fix
Status in juju:
  Fix Committed
Status in juju 2.3 series:
  Fix Released
Status in MAAS:
  Invalid

Bug description:
  juju: 2.4-beta2  
  MAAS: 2.3.0

  Testing deployment of LXD containers on bionic (specifically for an
  openstack deployment) lead to this problem:

  https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1765405

  Summary:

  previously, the DNS config in the LXD containers were the same as the
  host machines

  now, the DNS config is in systemd, the DNS server is set correctly,
  but the search domain is missing, so hostnames won't resolve.

  Working resolv.conf on xenial lxd container:

  nameserver 10.245.168.6
  search maas

  Non-working "systemd-resolve --status":

  ...
  Link 21 (eth0)
Current Scopes: DNS
 LLMNR setting: yes
  MulticastDNS setting: no
DNSSEC setting: no
  DNSSEC supported: no
   DNS Servers: 10.245.168.6

  Working (now able to resolve hostnames after modifying netplan and
  adding search domain):

  Link 21 (eth0)
Current Scopes: DNS
 LLMNR setting: yes
  MulticastDNS setting: no
DNSSEC setting: no
  DNSSEC supported: no
   DNS Servers: 10.245.168.6
DNS Domain: maas

  ubuntu@juju-6406ff-2-lxd-2:/etc$ host node-name
  node-name.maas has address 10.245.168.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1771885/+subscriptions



[Yahoo-eng-team] [Bug 1774056] Re: Nodes fail to deploy at ''cloudinit'' running modules for final'

2018-06-05 Thread David Britton
** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1774056

Title:
  Nodes fail to deploy at ''cloudinit'' running modules for final'

Status in cloud-init:
  Invalid

Bug description:
  machine-status:
current: provisioning error
message: 'Failed deployment: ''cloudinit'' running modules for final'

  No other errors seem detectable in the logs for that node. We require
  some expert triage for this one. I have attached all of the logs for
  one of the failing runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1774056/+subscriptions



[Yahoo-eng-team] [Bug 1773818] Re: Test failures in docker container

2018-06-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/570270
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8930d33c71e394c1cfaa1b9ad5493c79d394cb40
Submitter: Zuul
Branch: master

commit 8930d33c71e394c1cfaa1b9ad5493c79d394cb40
Author: Slawek Kaplonski 
Date:   Wed May 23 15:13:42 2018 -0700

Fix UT BridgeLibTest when IPv6 is disabled

A mock of ipv6_utils.is_enabled_and_bind_by_default() was missing in
the BridgeLibTest unit tests, which caused some tests from this module
to fail when they were run on a host with IPv6 disabled.
Now it is mocked, the tests run properly, and they are
testing what they should test.

Closes-Bug: #1773818
Change-Id: I9144450ce85e020c0e33c5214a2178acbbbf5f54
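
For illustration, a minimal sketch of the kind of mock the commit describes;
the module path and the test base class here are assumptions, not the exact
neutron test code:

    import unittest
    from unittest import mock  # the third-party 'mock' package on Python 2


    class BridgeLibTest(unittest.TestCase):
        def setUp(self):
            super(BridgeLibTest, self).setUp()
            # Without this mock the tests depend on the IPv6 state of the
            # host (or container) running them.
            patcher = mock.patch(
                'neutron.common.ipv6_utils.is_enabled_and_bind_by_default',
                return_value=True)
            patcher.start()
            self.addCleanup(patcher.stop)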


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1773818

Title:
  Test failures in docker container

Status in neutron:
  Fix Released

Bug description:
  Sometimes there are not enough resources to create a pool of virtual
  machines, so people use containers in this case.

  There are two tests that fail in a docker container:
  http://paste.openstack.org/show/722171/

  The main reason is that a docker container is not a virtual machine, so
  it doesn't provide all of the architecture in the same way that a
  virtual machine might. The underlying infrastructure of docker
  containers is configured on the host system, not within the container
  itself.

  These tests use brctl and sysctl. You can use the --privileged flag
  ('docker run' command) for brctl. What about sysctl? You can use
  sysctl commands with the docker client, but you cannot use them inside
  a container.

  It would be very cool if the tests were fixed so that they work in
  docker containers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1773818/+subscriptions



[Yahoo-eng-team] [Bug 1775295] [NEW] Queen keystone installation instructions outdated, keystone-managed credential_setup invalid choice

2018-06-05 Thread johnpham
Public bug reported:

- [x] This doc is inaccurate in this way:
This command
"apt install keystone  apache2 libapache2-mod-wsgi"
installs keystone with keystone-manage version 9.3.0, which doesn't support
the subsequent command:
"keystone-manage credential_setup --keystone-user keystone --keystone-group 
keystone"

I'm new to OpenStack, so I'm not sure whether this issue is unique to my environment.
If this is an actual issue, how do I get around it?

P.s: I'm installing keystone on a clean installation of Ubuntu 16.04

Thanks so much in advance!
---
Release: 13.0.1.dev9 on 2018-05-08 06:44
SHA: 4ca0172fcdb1ce28a1f00d5a0e1bb3d646141803
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
URL: 
https://docs.openstack.org/keystone/queens/install/keystone-install-ubuntu.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: doc

** Summary changed:

- keystone installation instructions needed to be updated  
+ Queen keystone installation instructions outdated, keystone-managed credential_setup invalid choice

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1775295

Title:
  Queen keystone installation instructions outdated, keystone-managed
  credential_setup invalid choice

Status in OpenStack Identity (keystone):
  New

Bug description:
  - [x] This doc is inaccurate in this way:
  This command
  "apt install keystone  apache2 libapache2-mod-wsgi"
  installs keystone with keystone-manage version 9.3.0, which doesn't support
  the subsequent command:
  "keystone-manage credential_setup --keystone-user keystone --keystone-group 
keystone"

  I'm new to OpenStack, so I'm not sure whether this issue is unique to my
  environment.
  If this is an actual issue, how do I get around it?

  P.s: I'm installing keystone on a clean installation of Ubuntu 16.04

  Thanks so much in advance!
  ---
  Release: 13.0.1.dev9 on 2018-05-08 06:44
  SHA: 4ca0172fcdb1ce28a1f00d5a0e1bb3d646141803
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/queens/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1775295/+subscriptions



[Yahoo-eng-team] [Bug 1773945] Re: nova client servers.list crashes with bad marker

2018-06-05 Thread Matt Riedemann
Revert https://review.openstack.org/#/c/572539/

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Tags added: api

** Changed in: python-novaclient
   Status: New => In Progress

** Changed in: python-novaclient
   Importance: Undecided => Medium

** Changed in: python-novaclient
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773945

Title:
  nova client servers.list crashes with bad marker

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  In Progress

Bug description:
  We have a Python script that called servers.list() on an instance of
  novaclient.v2.client.Client. Sometimes that raises a "BadRequest
  marker not found" exception:

  Our call:

    client = nova_client.Client("2", session=some_session)
    client.servers.list()

  Observed Stacktrace:

    File "/usr/lib/python2.7/site-packages//.py", line 630, in :
  all_servers = self.nova.servers.list()
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 854, 
in list
  "servers")
    File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 257, in 
_list
  resp, body = self.api.client.get(url)
    File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 304, 
in get
  return self.request(url, 'GET', **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 83, in 
request
  raise exceptions.from_response(resp, body, url, method)
  BadRequest: marker [6a91d602-ab6e-42e0-929e-5ec33df2ddef] not found (HTTP 
400) (Request-ID: req-78827725-801d-4514-8cc8-e4b94f15c191)

  Discussion:

  We have a lot of stacks and we sometimes create multiple stacks at the
  same time. We've noticed that the stacks with the mentioned UUIDs were
  created just before these errors occur. It seems that when a newly-created
  stack appears at a certain location in the server list, its UUID is used
  as a marker, but the code that validates the marker does not recognize
  such stacks.

  Relevant versions:

  - python-novaclient (9.1.0)
  - nova (16.0.0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1773945/+subscriptions



[Yahoo-eng-team] [Bug 1773286] Re: In some specific case with dvr mode I found the l2pop flows is incomplete.

2018-06-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/571920
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d0fa2c9ac50a783c410fab2df938974c4e829ffa
Submitter: Zuul
Branch: master

commit d0fa2c9ac50a783c410fab2df938974c4e829ffa
Author: Yang JianFeng 
Date:   Sat Jun 2 05:10:59 2018 +

Don't skip DVR port while neutron-openvswitch-agent is restared.

neutron-openvswitch-agent will refresh flows when it is restarted.
But the port's binding status is not changed, so update_port_postcommit
is skipped in the function '_update_individual_port_db_status' in
'neutron/plugins/ml2/plugin.py'. Since l2pop doesn't handle DVR ports,
the fdb entries for the DVR port will not be added.

So we can't skip DVR ports in notify_l2pop_port_wiring when the agent
is restarted.

Closes-Bug: #1773286
Change-Id: I54e3db4822830a0c83daf7b5150575f8d6e2497b
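
For illustration, a hedged paraphrase of the behaviour change described above
(not the actual ml2/l2pop code; the helper name and arguments are assumptions):

    DVR_INTERFACE = 'network:router_interface_distributed'


    def should_notify_l2pop(port, binding_status_changed, agent_restarted):
        """Decide whether to (re)wire l2pop fdb entries for a port."""
        if port['device_owner'] == DVR_INTERFACE:
            # DVR ports never flip binding status on an agent restart, so
            # rely on the restart flag instead of skipping them outright.
            return agent_restarted
        return binding_status_changed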


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1773286

Title:
  In some specific case with dvr mode I found the l2pop flows is
  incomplete.

Status in neutron:
  Fix Released

Bug description:
  As shown below, the network whose internal VLAN is 4 only has a 'qr-'
  port on compute2.

  [root@compute2 ~]# ovs-vsctl show|grep "tag: 4" -C 2
  type: internal
  Port "qr-1862e19a-91"
  tag: 4
  Interface "qr-1862e19a-91"
  type: internal
  [root@compute2 ~]# 

  
  I checked the network's flow tables on br-tun; they are correct.

  [root@compute2 ~]# ovs-ofctl dump-flows br-tun|grep "dl_vlan=4"
   cookie=0x56ca5c04010b5cea, duration=65.109s, table=1, n_packets=0, 
n_bytes=0, idle_age=69, priority=3,arp,dl_vlan=4,arp_tpa=10.10.122.1 
actions=drop
   cookie=0x56ca5c04010b5cea, duration=65.107s, table=1, n_packets=0, 
n_bytes=0, idle_age=69, priority=2,dl_vlan=4,dl_dst=fa:16:3e:67:d7:df 
actions=drop
   cookie=0x56ca5c04010b5cea, duration=65.106s, table=1, n_packets=0, 
n_bytes=0, idle_age=69, priority=1,dl_vlan=4,dl_src=fa:16:3e:67:d7:df 
actions=mod_dl_src:fa:16:3f:aa:34:f9,resubmit(,2)
   cookie=0x56ca5c04010b5cea, duration=65.544s, table=20, n_packets=0, 
n_bytes=0, idle_age=68, priority=2,dl_vlan=4,dl_dst=fa:16:3e:10:26:dc 
actions=strip_vlan,load:0x56->NXM_NX_TUN_ID[],output:2
   cookie=0x56ca5c04010b5cea, duration=64.158s, table=20, n_packets=0, 
n_bytes=0, idle_age=67, priority=2,dl_vlan=4,dl_dst=fa:16:3e:f5:91:5e 
actions=strip_vlan,load:0x56->NXM_NX_TUN_ID[],output:3
   cookie=0x56ca5c04010b5cea, duration=65.546s, table=21, n_packets=0, 
n_bytes=0, idle_age=68, priority=1,arp,dl_vlan=4,arp_tpa=10.10.122.8 
actions=load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e1026dc->NXM_NX_ARP_SHA[],load:0xa0a7a08->NXM_OF_ARP_SPA[],move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:fa:16:3e:10:26:dc,IN_PORT
   cookie=0x56ca5c04010b5cea, duration=64.161s, table=21, n_packets=0, 
n_bytes=0, idle_age=67, priority=1,arp,dl_vlan=4,arp_tpa=10.10.122.2 
actions=load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163ef5915e->NXM_NX_ARP_SHA[],load:0xa0a7a02->NXM_OF_ARP_SPA[],move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:fa:16:3e:f5:91:5e,IN_PORT
   cookie=0x56ca5c04010b5cea, duration=64.164s, table=22, n_packets=0, 
n_bytes=0, idle_age=69, priority=1,dl_vlan=4 
actions=strip_vlan,load:0x56->NXM_NX_TUN_ID[],output:2,output:3
  [root@compute2 ~]# 

  
  But after I restart neutron-openvswitch-agent, I find that these flows in
  tables 20, 21 and 22 are lost.

  systemctl restart neutron-openvswitch-agent

  [root@compute2 ~]# ovs-ofctl dump-flows br-tun|grep "dl_vlan=4"
   cookie=0x6c26ffbe1a6134eb, duration=11.442s, table=1, n_packets=0, 
n_bytes=0, idle_age=13, priority=3,arp,dl_vlan=4,arp_tpa=10.10.122.1 
actions=drop
   cookie=0x6c26ffbe1a6134eb, duration=11.441s, table=1, n_packets=0, 
n_bytes=0, idle_age=13, priority=2,dl_vlan=4,dl_dst=fa:16:3e:67:d7:df 
actions=drop
   cookie=0x6c26ffbe1a6134eb, duration=11.440s, table=1, n_packets=0, 
n_bytes=0, idle_age=13, priority=1,dl_vlan=4,dl_src=fa:16:3e:67:d7:df 
actions=mod_dl_src:fa:16:3f:aa:34:f9,resubmit(,2)
  [root@compute2 ~]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1773286/+subscriptions



[Yahoo-eng-team] [Bug 1775250] [NEW] Implement DVR-aware announcement of fixed IP's in neutron-dynamic-routing

2018-06-05 Thread Ryan Tidwell
Public bug reported:

The current implementation of neutron-dynamic-routing is compatible with
DVR, but is not optimized for DVR. It currently announces next-hops for
all tenant subnets through the central router on the network node.
Announcing next-hops via the FIP gateway on the compute node was never
implemented due to the pre-requisite of having DVR fast-exit in place
for packets to be routed through the FIP namespace properly. With DVR
fast-exit now in place, it's time to consider adding DVR-aware /32
and/or /128 announcements for fixed IP's using the FIP gateway as the
next-hop.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775250

Title:
  Implement DVR-aware announcement of fixed IP's in neutron-dynamic-
  routing

Status in neutron:
  New

Bug description:
  The current implementation of neutron-dynamic-routing is compatible
  with DVR, but is not optimized for DVR. It currently announces next-
  hops for all tenant subnets through the central router on the network
  node. Announcing next-hops via the FIP gateway on the compute node was
  never implemented due to the pre-requisite of having DVR fast-exit in
  place for packets to be routed through the FIP namespace properly.
  With DVR fast-exit now in place, it's time to consider adding DVR-
  aware /32 and/or /128 announcements for fixed IP's using the FIP
  gateway as the next-hop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775250/+subscriptions



[Yahoo-eng-team] [Bug 1775075] Re: EndpointNotFound raised by Pike n-cpu when running alongside Queens n-api

2018-06-05 Thread Matt Riedemann
** Changed in: nova
   Status: New => Invalid

** Changed in: nova/queens
   Status: Fix Committed => Fix Released

** Changed in: nova/queens
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775075

Title:
  EndpointNotFound raised by Pike n-cpu when running alongside Queens
  n-api

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) queens series:
  Fix Released

Bug description:
  Description
  ===
  During a P to Q upgrade n-cpu processes still running P will be unable to 
find the volumev2 endpoint when running alongside Q n-api processes due to the 
following change: 

  Update cinder in RequestContext service catalog
  https://review.openstack.org/#/c/510947/

  This results in failures anytime the P n-cpu process attempts to
  interact with the volume service, for example during LM from the node:

  2018-06-02 00:19:17.683 1 WARNING nova.virt.libvirt.driver 
[req-3712be3d-b883-4fe1-bab0-83ee44bd5bb5 e16a043a84b14e2b8afbdd1b8677259f 
cb92ed750eac463faf8935cb137f1e60 - default default] [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] Error monitoring migration: internalURL 
endpoint for volumev2 service named cinderv2 not found: EndpointNotFound: 
internalURL endpoint for volumev2 service named cinderv2 not found
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] Traceback (most recent call last):
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6817, in 
_live_migration
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] finish_event, disk_paths)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6728, in 
_live_migration_monitor
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] migrate_data)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] function_name, call_dict, binary)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] self.force_reraise()
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] six.reraise(self.type_, self.value, 
self.tb)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] return f(self, context, *args, **kw)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 218, in 
decorated_function
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] kwargs['instance'], e, sys.exc_info())
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] self.force_reraise()
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] six.reraise(self.type_, self.value, 
self.tb)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 206, in 
decorated_function
  2018-06-02 00:19:17.683 1 ERROR 

[Yahoo-eng-team] [Bug 1775220] [NEW] Unit test neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase. test_get_objects_queries_constant fails often

2018-06-05 Thread Slawek Kaplonski
Public bug reported:

For some time now we have quite often had failures of the unit test
neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase.test_get_objects_queries_constant

It happens also for periodic jobs. Examples of failures from last week:

http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py27-with-oslo-master/031dc64/testr_results.html.gz

http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py35-with-neutron-lib-master/4f4b599/testr_results.html.gz

http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py35-with-oslo-master/348faa8/testr_results.html.gz

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775220

Title:
  Unit test
  neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase.
  test_get_objects_queries_constant fails often

Status in neutron:
  Confirmed

Bug description:
  For some time now we have quite often had failures of the unit test
  neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase.test_get_objects_queries_constant

  It happens also for periodic jobs. Examples of failures from last
  week:

  http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py27-with-oslo-master/031dc64/testr_results.html.gz

  http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py35-with-neutron-lib-master/4f4b599/testr_results.html.gz

  http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py35-with-oslo-master/348faa8/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775220/+subscriptions



[Yahoo-eng-team] [Bug 1775207] [NEW] Fetching all mappings may become too slow

2018-06-05 Thread Pavlo Shchelokovskyy
Public bug reported:

While fixing bug 1582585 the change
I2c266e91f2f05be760f8a3eea3738868243cc9c6 started fetching all mappings
from SQL and performing in-memory joins.

However, with many user and group mappings such fetching may become
too slow - for instance, with ~125k users the CLI command "openstack domain
list" takes up to 7 seconds and logging in to Horizon takes up to 20 seconds.

Additionally, filtering the corresponding query in the
get_domain_mapping_list method by entity_type speeds things up somewhat.
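
For illustration, a self-contained toy of the suggested narrowing (not
keystone's actual backend code; the model and method shown here are
assumptions modelled on the id_mapping table):

    from sqlalchemy import Column, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class IDMapping(Base):
        __tablename__ = 'id_mapping'
        public_id = Column(String(64), primary_key=True)
        domain_id = Column(String(64))
        local_id = Column(String(64))
        entity_type = Column(String(10))


    def get_domain_mapping_list(session, domain_id, entity_type=None):
        query = session.query(IDMapping).filter_by(domain_id=domain_id)
        if entity_type is not None:
            # Fetch only rows of one entity type (e.g. users) instead of
            # every mapping row for the domain.
            query = query.filter_by(entity_type=entity_type)
        return query.all()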

** Affects: keystone
 Importance: Undecided
 Assignee: Pavlo Shchelokovskyy (pshchelo)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1775207

Title:
  Fetching all mappings may become too slow

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  While fixing bug 1582585 the change
  I2c266e91f2f05be760f8a3eea3738868243cc9c6 started fetching all
  mappings from SQL and performing in-memory joins.

  However, with many user and group mappings such fetching may become
  too slow - for instance, with ~125k users the CLI command "openstack
  domain list" takes up to 7 seconds and logging in to Horizon takes up
  to 20 seconds.

  Additionally, filtering the corresponding query in the
  get_domain_mapping_list method by entity_type speeds things up
  somewhat.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1775207/+subscriptions



[Yahoo-eng-team] [Bug 1387053] Re: Stopping neutron server with rpc workers raises exeption

2018-06-05 Thread Jakub Libosvar
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1387053

Title:
  Stopping neutron server with rpc workers raises exeption

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  2014-10-28 15:53:38.090 62623 INFO neutron.openstack.common.service [-] 
Caught SIGTERM, stopping children
  2014-10-28 15:53:38.091 62623 INFO neutron.openstack.common.service [-] 
Waiting on 1 children to exit
  2014-10-28 15:53:38.114 62635 INFO neutron.openstack.common.service [-] Child 
caught SIGTERM, exiting
  2014-10-28 15:53:38.116 62635 TRACE neutron.service Traceback (most recent 
call last):
  2014-10-28 15:53:38.116 62635 TRACE neutron.service   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 159, in serve_rpc
  2014-10-28 15:53:38.116 62635 TRACE neutron.service 
launcher.launch_service(rpc, workers=cfg.CONF.rpc_workers)
  2014-10-28 15:53:38.116 62635 TRACE neutron.service   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
341, in launch_service
  2014-10-28 15:53:38.116 62635 TRACE neutron.service 
self._start_child(wrap)
  2014-10-28 15:53:38.116 62635 TRACE neutron.service   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
322, in _start_child
  2014-10-28 15:53:38.116 62635 TRACE neutron.service status, signo = 
self._child_wait_for_exit_or_signal(launcher)
  2014-10-28 15:53:38.116 62635 TRACE neutron.service   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
280, in _child_wait_for_exit_or_signal
  2014-10-28 15:53:38.116 62635 TRACE neutron.service launcher.stop()
  2014-10-28 15:53:38.116 62635 TRACE neutron.service   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
128, in stop
  2014-10-28 15:53:38.116 62635 TRACE neutron.service self.services.stop()
  2014-10-28 15:53:38.116 62635 TRACE neutron.service   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
470, in stop
  2014-10-28 15:53:38.116 62635 TRACE neutron.service service.stop()
  2014-10-28 15:53:38.116 62635 TRACE neutron.service   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 132, in stop
  2014-10-28 15:53:38.116 62635 TRACE neutron.service server.kill()
  2014-10-28 15:53:38.116 62635 TRACE neutron.service AttributeError: 
'MessageHandlingServer' object has no attribute 'kill'
  2014-10-28 15:53:38.116 62635 TRACE neutron.service 
  2014-10-28 15:53:38.471 62635 TRACE neutron Traceback (most recent call last):
  2014-10-28 15:53:38.471 62635 TRACE neutron   File "/usr/bin/neutron-server", 
line 10, in 
  2014-10-28 15:53:38.471 62635 TRACE neutron sys.exit(main())
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/server/__init__.py", line 51, in main
  2014-10-28 15:53:38.471 62635 TRACE neutron neutron_rpc = 
service.serve_rpc()
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 163, in serve_rpc
  2014-10-28 15:53:38.471 62635 TRACE neutron 
LOG.exception(_('Unrecoverable error: please check log '
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/excutils.py", line 
82, in __exit__
  2014-10-28 15:53:38.471 62635 TRACE neutron six.reraise(self.type_, 
self.value, self.tb)
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 159, in serve_rpc
  2014-10-28 15:53:38.471 62635 TRACE neutron launcher.launch_service(rpc, 
workers=cfg.CONF.rpc_workers)
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
341, in launch_service
  2014-10-28 15:53:38.471 62635 TRACE neutron self._start_child(wrap)
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
322, in _start_child
  2014-10-28 15:53:38.471 62635 TRACE neutron status, signo = 
self._child_wait_for_exit_or_signal(launcher)
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
280, in _child_wait_for_exit_or_signal
  2014-10-28 15:53:38.471 62635 TRACE neutron launcher.stop()
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
128, in stop
  2014-10-28 15:53:38.471 62635 TRACE neutron self.services.stop()
  2014-10-28 15:53:38.471 62635 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/service.py", line 
470, in stop
  2014-10-28 

[Yahoo-eng-team] [Bug 1611237] Re: Restart neutron-openvswitch-agent get ERROR "Switch connection timeout"

2018-06-05 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611237

Title:
  Restart neutron-openvswitch-agent get ERROR "Switch connection
  timeout"

Status in devstack:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  Environment: devstack  master, ubuntu 14.04

  After ./stack.sh finished, kill the neutron-openvswitch-agent process
  and then start it by /usr/bin/python /usr/local/bin/neutron-
  openvswitch-agent --config-file /etc/neutron/neutron.conf --config-
  file /etc/neutron/plugins/ml2/ml2_conf.ini

  The log shows :
  2016-08-08 11:02:06.346 ERROR ryu.lib.hub [-] hub: uncaught exception: 
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in 
_launch
  return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", 
line 97, in __call__
  self.ofp_ssl_listen_port)
File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", 
line 120, in server_loop
  datapath_connection_factory)
File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 117, in 
__init__
  self.server = eventlet.listen(listen_info)
File "/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py", line 
43, in listen
  sock.bind(addr)
File "/usr/lib/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
  error: [Errno 98] Address already in use

  and
  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[-] Switch connection timeout

  In Kilo I could start the ovs-agent this way correctly; I do not know
  whether this is the right way to start the ovs-agent on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1611237/+subscriptions



[Yahoo-eng-team] [Bug 1445199] Re: Nova user should not have admin role

2018-06-05 Thread Dr. Jens Harbott
Devstack is meant to provide a deployment suitable for development, not
a hardened setup that could be used in production. While it could adopt
this if Nova supported it, I'll mark the bug as invalid for devstack.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445199

Title:
  Nova user should not have admin role

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Security Advisory:
  Invalid

Bug description:
  
  Most of the service users are granted the 'service' role on the 'service' 
project, except the 'nova' user which is given 'admin'. The 'nova' user should 
also be given only the 'service' role on the 'service' project.

  This is for security hardening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1445199/+subscriptions



[Yahoo-eng-team] [Bug 1599936] Re: l2gw provider config prevents *aas provider config from being loaded

2018-06-05 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599936

Title:
  l2gw provider config prevents *aas provider config from being loaded

Status in devstack:
  Invalid
Status in networking-l2gw:
  Invalid
Status in neutron:
  In Progress

Bug description:
  The networking-l2gw devstack plugin stores its service_providers config in
  /etc/l2gw_plugin.ini and adds a --config-file option for it.
  As a result, neutron-server is invoked like the following.

  /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/midonet/midonet.ini --config-file
  /etc/neutron/l2gw_plugin.ini

  This breaks the *aas service providers because NeutronModule.service_providers
  finds the l2gw providers in cfg.CONF.service_providers.service_provider and
  thus doesn't look at the *aas service_providers config, which is in
  /etc/neutron/neutron_*aas.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1599936/+subscriptions



[Yahoo-eng-team] [Bug 1660650] Re: Can't log in into Horizon when deploying Manila+Sahara

2018-06-05 Thread Tom Barron
Marking invalid in manila as well; re-open if there is still an issue.

** Changed in: manila
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1660650

Title:
  Can't log in into Horizon when deploying Manila+Sahara

Status in devstack:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Manila:
  Invalid
Status in Sahara:
  Invalid

Bug description:
  When installing the latest devstack trunk along with the Manila +
  Sahara/Sahara Dashboard plugins, I can't log in using Horizon, even
  though the log says I logged in successfully.

  Horizon gets stuck on the login screen, asking again for the username
  and password.

  local.conf extract:

  
  # Enable Manila
  enable_plugin manila https://github.com/openstack/manila

  ~ #Enable heat plugin
  ~_enable_plugin heat https://git.openstack.org/openstack/heat

  # Enable Swift
  enable_service s-proxy s-object s-container s-account
  SWIFT_REPLICAS=1
  SWIFT_HASH=$ADMIN_PASSWORD

  # Enable Sahara
  enable_plugin sahara git://git.openstack.org/openstack/sahara

  # Enable sahara-dashboard
  enable_plugin sahara-dashboard git://git.openstack.org/openstack/sahara-dashboard

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1660650/+subscriptions



[Yahoo-eng-team] [Bug 1775183] [NEW] Fullstack test neutron.tests.fullstack.test_l3_agent.TestHAL3Agent. test_ha_router_restart_agents_no_packet_lost fails often

2018-06-05 Thread Slawek Kaplonski
Public bug reported:

Example of failure:
http://logs.openstack.org/95/572295/1/check/neutron-fullstack/14122fa/logs/testr_results.html.gz

Happened about 50 times in last week:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Could%20not%20ping%20the%20other%20VM%2C%20L2%20agent%20restart%20leads%20to%20network%20disruption%5C%22%20AND%20build_name%3A%5C%22neutron-fullstack%5C%22

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: fullstack gate-failure l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775183

Title:
  Fullstack test neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.
  test_ha_router_restart_agents_no_packet_lost fails often

Status in neutron:
  Confirmed

Bug description:
  Example of failure:
  http://logs.openstack.org/95/572295/1/check/neutron-fullstack/14122fa/logs/testr_results.html.gz

  Happened about 50 times in last week:
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Could%20not%20ping%20the%20other%20VM%2C%20L2%20agent%20restart%20leads%20to%20network%20disruption%5C%22%20AND%20build_name%3A%5C%22neutron-fullstack%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775183/+subscriptions



[Yahoo-eng-team] [Bug 1660650] Re: Can't log in into Horizon when deploying Manila+Sahara

2018-06-05 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1660650

Title:
  Can't log in into Horizon when deploying Manila+Sahara

Status in devstack:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Manila:
  New
Status in Sahara:
  Invalid

Bug description:
  When installing the latest devstack trunk along with the Manila +
  Sahara/Sahara Dashboard plugins, I can't log in using Horizon, even
  though the log says I logged in successfully.

  Horizon gets stuck on the login screen, asking again for the username
  and password.

  local.conf extract:

  
  # Enable Manila
  enable_plugin manila https://github.com/openstack/manila

  ~ #Enable heat plugin
  ~_enable_plugin heat https://git.openstack.org/openstack/heat

  # Enable Swift
  enable_service s-proxy s-object s-container s-account
  SWIFT_REPLICAS=1
  SWIFT_HASH=$ADMIN_PASSWORD

  # Enable Sahara
  enable_plugin sahara git://git.openstack.org/openstack/sahara

  # Enable sahara-dashboard
  enable_plugin sahara-dashboard git://git.openstack.org/openstack/sahara-dashboard

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1660650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700496] Re: Notifications are emitted per-cell instead of globally

2018-06-05 Thread Dr. Jens Harbott
[Closing for devstack because there has been no activity for 60 days.]

** Changed in: devstack
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1700496

Title:
  Notifications are emitted per-cell instead of globally

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  With https://review.openstack.org/#/c/436094/ we began using different
  transport URLs for Nova internal services.
  That said, since notifications are emitted on a different topic but use the
  same transport URL as the component, our consumers miss notifications
  because they only subscribe to the original MQ.

  While Nova can use multiple MQs, we should still offer the possibility
  to have global notifications for Nova so a consumer wouldn't have to
  modify their config every time a new cell is added.

  That can be an oslo.messaging config option [1], but we certainly need
  to be careful in Devstack, or dependent jobs (like for Vitrage)
  could fail.

  [1] https://docs.openstack.org/developer/oslo.messaging/opts.html#oslo_messaging_notifications.transport_url
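
  For illustration, a minimal sketch of what consuming that option looks like
  from Python (these are public oslo.messaging calls; whether Nova/devstack
  wires things exactly this way is an assumption):

    from oslo_config import cfg
    import oslo_messaging

    conf = cfg.CONF
    # RPC keeps using the (per-cell) [DEFAULT]/transport_url ...
    rpc_transport = oslo_messaging.get_rpc_transport(conf)
    # ... while notifications honour [oslo_messaging_notifications]/transport_url,
    # which can point every cell at a single, global notification bus.
    notification_transport = oslo_messaging.get_notification_transport(conf)
    notifier = oslo_messaging.Notifier(notification_transport,
                                       driver='messagingv2',
                                       topics=['notifications'])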

  devstack version: pike

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1700496/+subscriptions



[Yahoo-eng-team] [Bug 1775146] [NEW] I found some flow tables of br-int will be missing After I restarted neutron-openvswitch-agent.

2018-06-05 Thread yangjianfeng
Public bug reported:

My environment's essential information:
  queens branch
  DVR model
  drop_flows_on_start is configured true

As shown below, when the config option 'drop_flows_on_start' is false, the
flows on br-int are complete.
[root@compute2 ~]# ovs-ofctl dump-flows br-int
 cookie=0xe66a6d121ba9839f, duration=9.793s, table=0, n_packets=0, n_bytes=0, 
priority=4,in_port="int-br-ex",dl_src=fa:16:3f:2e:17:8e actions=resubmit(,2)
 cookie=0xe66a6d121ba9839f, duration=9.791s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="patch-tun",dl_src=fa:16:3f:2e:17:8e actions=resubmit(,1)
 cookie=0xe66a6d121ba9839f, duration=9.789s, table=0, n_packets=0, n_bytes=0, 
priority=4,in_port="int-br-ex",dl_src=fa:16:3f:de:6c:16 actions=resubmit(,2)
 cookie=0xe66a6d121ba9839f, duration=9.786s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="patch-tun",dl_src=fa:16:3f:de:6c:16 actions=resubmit(,1)
 cookie=0xe66a6d121ba9839f, duration=9.817s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="int-br-ex" actions=drop
 cookie=0xe66a6d121ba9839f, duration=10.712s, table=0, n_packets=0, n_bytes=0, 
priority=0 actions=resubmit(,60)
 cookie=0xe66a6d121ba9839f, duration=9.820s, table=1, n_packets=0, n_bytes=0, 
priority=1 actions=drop
 cookie=0xe66a6d121ba9839f, duration=9.819s, table=2, n_packets=0, n_bytes=0, 
priority=1 actions=drop
 cookie=0xe66a6d121ba9839f, duration=9.821s, table=23, n_packets=0, n_bytes=0, 
priority=0 actions=drop
 cookie=0xe66a6d121ba9839f, duration=10.706s, table=24, n_packets=0, n_bytes=0, 
priority=0 actions=drop
 cookie=0xe66a6d121ba9839f, duration=10.709s, table=60, n_packets=0, n_bytes=0, 
priority=3 actions=NORMAL
[root@compute2 ~]# 

But, when 'drop_flows_on_start' is configured to true, some requisite flows
will be missing.
[root@compute2 ~]# ovs-ofctl dump-flows br-int
 cookie=0x9fbc60c2eb19902d, duration=7.069s, table=0, n_packets=0, n_bytes=0, 
priority=4,in_port="int-br-ex",dl_src=fa:16:3f:2e:17:8e actions=resubmit(,2)
 cookie=0x9fbc60c2eb19902d, duration=7.066s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="patch-tun",dl_src=fa:16:3f:2e:17:8e actions=resubmit(,1)
 cookie=0x9fbc60c2eb19902d, duration=7.064s, table=0, n_packets=0, n_bytes=0, 
priority=4,in_port="int-br-ex",dl_src=fa:16:3f:de:6c:16 actions=resubmit(,2)
 cookie=0x9fbc60c2eb19902d, duration=7.061s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="patch-tun",dl_src=fa:16:3f:de:6c:16 actions=resubmit(,1)
 cookie=0x9fbc60c2eb19902d, duration=7.102s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="int-br-ex" actions=drop
 cookie=0x9fbc60c2eb19902d, duration=7.106s, table=1, n_packets=0, n_bytes=0, 
priority=1 actions=drop
 cookie=0x9fbc60c2eb19902d, duration=7.104s, table=2, n_packets=0, n_bytes=0, 
priority=1 actions=drop
 cookie=0x9fbc60c2eb19902d, duration=7.107s, table=23, n_packets=0, n_bytes=0, 
priority=0 actions=drop
[root@compute2 ~]#

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775146

Title:
  I found some flow tables of br-int will be missing After I restarted
  neutron-openvswitch-agent.

Status in neutron:
  New

Bug description:
  My environment's essential information:
queens branch
DVR model
    drop_flows_on_start is configured true

  As shown below, when the config option 'drop_flows_on_start' is false, the
  flows on br-int are complete.
  [root@compute2 ~]# ovs-ofctl dump-flows br-int
   cookie=0xe66a6d121ba9839f, duration=9.793s, table=0, n_packets=0, n_bytes=0, 
priority=4,in_port="int-br-ex",dl_src=fa:16:3f:2e:17:8e actions=resubmit(,2)
   cookie=0xe66a6d121ba9839f, duration=9.791s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="patch-tun",dl_src=fa:16:3f:2e:17:8e actions=resubmit(,1)
   cookie=0xe66a6d121ba9839f, duration=9.789s, table=0, n_packets=0, n_bytes=0, 
priority=4,in_port="int-br-ex",dl_src=fa:16:3f:de:6c:16 actions=resubmit(,2)
   cookie=0xe66a6d121ba9839f, duration=9.786s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="patch-tun",dl_src=fa:16:3f:de:6c:16 actions=resubmit(,1)
   cookie=0xe66a6d121ba9839f, duration=9.817s, table=0, n_packets=0, n_bytes=0, 
priority=2,in_port="int-br-ex" actions=drop
   cookie=0xe66a6d121ba9839f, duration=10.712s, table=0, n_packets=0, 
n_bytes=0, priority=0 actions=resubmit(,60)
   cookie=0xe66a6d121ba9839f, duration=9.820s, table=1, n_packets=0, n_bytes=0, 
priority=1 actions=drop
   cookie=0xe66a6d121ba9839f, duration=9.819s, table=2, n_packets=0, n_bytes=0, 
priority=1 actions=drop
   cookie=0xe66a6d121ba9839f, duration=9.821s, table=23, n_packets=0, 
n_bytes=0, priority=0 actions=drop
   cookie=0xe66a6d121ba9839f, duration=10.706s, table=24, n_packets=0, 
n_bytes=0, priority=0 actions=drop
   cookie=0xe66a6d121ba9839f, duration=10.709s, table=60, n_packets=0, 
n_bytes=0, priority=3 actions=NORMAL
  [root@compute2 ~]# 

  But, when 

[Yahoo-eng-team] [Bug 1775140] [NEW] Keystoneauth does not consistently add the collect-timing parameter

2018-06-05 Thread Andras Kovi
Public bug reported:

Mistral devstack tests started to fail due to the following error [1]

2018-06-04 19:23:37.223185 Server-side error: "no such option collect_timing in 
group [keystone_authtoken]". Detail: 
2018-06-04 19:23:37.223221 Traceback (most recent call last):
2018-06-04 19:23:37.223226 
2018-06-04 19:23:37.223231   File 
"/usr/local/lib/python2.7/dist-packages/wsmeext/pecan.py", line 85, in 
callfunction
2018-06-04 19:23:37.223235 result = f(self, *args, **kwargs)
2018-06-04 19:23:37.223239 
2018-06-04 19:23:37.223243   File 
"/opt/stack/mistral/mistral/api/controllers/v2/event_trigger.py", line 81, in 
post
2018-06-04 19:23:37.223247 workflow_params=values.get('workflow_params'),
2018-06-04 19:23:37.223251 
2018-06-04 19:23:37.223255   File "/opt/stack/mistral/mistral/db/utils.py", 
line 88, in decorate
2018-06-04 19:23:37.223259 return retry.call(_with_auth_context, auth_ctx, 
func, *args, **kw)
2018-06-04 19:23:37.223263 
2018-06-04 19:23:37.223267   File 
"/opt/stack/mistral/mistral/utils/rest_utils.py", line 247, in call
2018-06-04 19:23:37.223271 return super(MistralRetrying, self).call(fn, 
*args, **kwargs)
2018-06-04 19:23:37.223275 
2018-06-04 19:23:37.223279   File 
"/usr/local/lib/python2.7/dist-packages/tenacity/__init__.py", line 330, in call
2018-06-04 19:23:37.223283 start_time=start_time)
2018-06-04 19:23:37.223287 
2018-06-04 19:23:37.223291   File 
"/usr/local/lib/python2.7/dist-packages/tenacity/__init__.py", line 279, in iter
2018-06-04 19:23:37.223295 return fut.result()
2018-06-04 19:23:37.223299 
2018-06-04 19:23:37.223303   File 
"/usr/local/lib/python2.7/dist-packages/concurrent/futures/_base.py", line 455, 
in result
2018-06-04 19:23:37.223307 return self.__get_result()
2018-06-04 19:23:37.223311 
2018-06-04 19:23:37.223315   File 
"/usr/local/lib/python2.7/dist-packages/tenacity/__init__.py", line 333, in call
2018-06-04 19:23:37.223319 result = fn(*args, **kwargs)
2018-06-04 19:23:37.223322 
2018-06-04 19:23:37.223326   File "/opt/stack/mistral/mistral/db/utils.py", 
line 45, in _with_auth_context
2018-06-04 19:23:37.223330 return func(*args, **kw)
2018-06-04 19:23:37.223334 
2018-06-04 19:23:37.223338   File 
"/opt/stack/mistral/mistral/services/triggers.py", line 186, in 
create_event_trigger
2018-06-04 19:23:37.223342 security.add_trust_id(values)
2018-06-04 19:23:37.223355 
2018-06-04 19:23:37.223360   File 
"/opt/stack/mistral/mistral/services/security.py", line 110, in add_trust_id
2018-06-04 19:23:37.223364 trust = create_trust()
2018-06-04 19:23:37.223368 
2018-06-04 19:23:37.223372   File 
"/opt/stack/mistral/mistral/services/security.py", line 45, in create_trust
2018-06-04 19:23:37.223376 trustee_id = 
keystone.client_for_admin().session.get_user_id()
2018-06-04 19:23:37.223380 
2018-06-04 19:23:37.223384   File 
"/opt/stack/mistral/mistral/utils/openstack/keystone.py", line 143, in 
client_for_admin
2018-06-04 19:23:37.223388 return _admin_client()
2018-06-04 19:23:37.223392 
2018-06-04 19:23:37.223396   File 
"/opt/stack/mistral/mistral/utils/openstack/keystone.py", line 136, in 
_admin_client
2018-06-04 19:23:37.223400 auth=auth
2018-06-04 19:23:37.223403 
2018-06-04 19:23:37.223408   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/loading/session.py", line 
270, in load_from_conf_options
2018-06-04 19:23:37.223412 return Session().load_from_conf_options(*args, 
**kwargs)
2018-06-04 19:23:37.223416 
2018-06-04 19:23:37.223420   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/loading/session.py", line 
251, in load_from_conf_options
2018-06-04 19:23:37.223424 kwargs.setdefault('collect_timing', 
c.collect_timing)
2018-06-04 19:23:37.223428 
2018-06-04 19:23:37.223432   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3409, in 
__getattr__
2018-06-04 19:23:37.223436 return self._conf._get(name, self._group)
2018-06-04 19:23:37.223440 
2018-06-04 19:23:37.223444   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2950, in _get
2018-06-04 19:23:37.223448 value, loc = self._do_get(name, group, namespace)
2018-06-04 19:23:37.223452 
2018-06-04 19:23:37.223456   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2968, in 
_do_get
2018-06-04 19:23:37.223460 info = self._get_opt_info(name, group)
2018-06-04 19:23:37.223464 
2018-06-04 19:23:37.223468   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3130, in 
_get_opt_info
2018-06-04 19:23:37.223474 raise NoSuchOptError(opt_name, group)
2018-06-04 19:23:37.223481 
2018-06-04 19:23:37.223487 NoSuchOptError: no such option collect_timing in 
group [keystone_authtoken]

The issue seems to be related to the release of the patch "Collect
timing information for API calls" [2]

[1] http://logs.openstack.org/85/527085/29/check/mistral-devstack/56e57ce/controller/logs/apache/mistral_api_log.txt.gz#_2018-06-04_19_23_37_223487
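
For illustration, a minimal sketch of the kind of option registration that
avoids the NoSuchOptError above (these are keystoneauth1's public loading
helpers; whether this is how Mistral should fix it is an assumption):

    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.CONF
    GROUP = 'keystone_authtoken'

    # Register the session options (including the new collect_timing option)
    # for this group before keystoneauth reads them back.
    loading.register_session_conf_options(CONF, GROUP)
    loading.register_auth_conf_options(CONF, GROUP)

    session = loading.load_session_from_conf_options(CONF, GROUP)
    auth = loading.load_auth_from_conf_options(CONF, GROUP)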