[Yahoo-eng-team] [Bug 1684371] [NEW] port mapping for floating ip

2017-04-19 Thread Yan Songming
Public bug reported:

Currently we only support floating IPs for DNAT and SNAT, but we don't support
port mapping.
I think we can add an extension parameter "port" to the floating IP association,
allowing the same floating IP with different ports to be associated with
different local IPs.

For example:
 We have two VMs, 192.168.1.10 and 192.168.1.11, one running a web server and
the other an FTP server. We need to use the same external IP address a.b.c.d to
reach both servers on different ports.
 Then we could use:
iptables -t nat -A PREROUTING -d a.b.c.d -p tcp --dport 80 -j DNAT --to 192.168.1.10
iptables -t nat -A POSTROUTING -d 192.168.1.10 -p tcp --dport 80 -j SNAT --to 192.168.1.1

iptables -t nat -A PREROUTING -d a.b.c.d -p tcp --dport 21 -j DNAT --to 192.168.1.11
iptables -t nat -A POSTROUTING -d 192.168.1.11 -p tcp --dport 21 -i eth0 -j SNAT --to 192.168.1.1
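
Generating such DNAT/SNAT rule pairs from a mapping table is mechanical; here is
a minimal sketch (the mapping format and function name are made up for
illustration, and the commands are only built as strings, never executed):

```python
def build_port_forward_rules(ext_ip, router_ip, mappings):
    # mappings: list of (external_port, internal_ip) pairs; one DNAT and one
    # SNAT rule is emitted per pair, mirroring the manual rules above
    rules = []
    for port, internal_ip in mappings:
        rules.append(
            'iptables -t nat -A PREROUTING -d %s -p tcp --dport %d '
            '-j DNAT --to %s' % (ext_ip, port, internal_ip))
        rules.append(
            'iptables -t nat -A POSTROUTING -d %s -p tcp --dport %d '
            '-j SNAT --to %s' % (internal_ip, port, router_ip))
    return rules

for rule in build_port_forward_rules(
        'a.b.c.d', '192.168.1.1',
        [(80, '192.168.1.10'), (21, '192.168.1.11')]):
    print(rule)
```

Each (external port, internal IP) pair yields one PREROUTING and one POSTROUTING
rule, which is essentially what the proposed "port" parameter would have to
manage per association.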

** Affects: neutron
 Importance: Undecided
 Assignee: Yan Songming (songmingyan)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Yan Songming (songmingyan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684371

Title:
  port mapping for floating ip

Status in neutron:
  New

Bug description:
  Currently we only support floating IPs for DNAT and SNAT, but we don't
support port mapping.
  I think we can add an extension parameter "port" to the floating IP
association, allowing the same floating IP with different ports to be
associated with different local IPs.

  For example:
   We have two VMs, 192.168.1.10 and 192.168.1.11, one running a web server and
the other an FTP server. We need to use the same external IP address a.b.c.d to
reach both servers on different ports.
   Then we could use:
  iptables -t nat -A PREROUTING -d a.b.c.d -p tcp --dport 80 -j DNAT --to 192.168.1.10
  iptables -t nat -A POSTROUTING -d 192.168.1.10 -p tcp --dport 80 -j SNAT --to 192.168.1.1

  iptables -t nat -A PREROUTING -d a.b.c.d -p tcp --dport 21 -j DNAT --to 192.168.1.11
  iptables -t nat -A POSTROUTING -d 192.168.1.11 -p tcp --dport 21 -i eth0 -j SNAT --to 192.168.1.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684353] [NEW] Limit group name to 64 characters and role name to 255 characters

2017-04-19 Thread wei.ying
Public bug reported:

Env: devstack master branch

Steps to reproduce:
1. Go to identity/Groups/ panel
2. Click 'Create Group'
3. Enter a name longer than 64 characters
4. Submit form

Error info:
Recoverable error: Group name should not be greater than 64 characters. (HTTP 
400) (Request-ID: req-979dfe60-c461-4fdc-ab6c-61c62ed24ff3)

Steps to reproduce:
1. Go to identity/Roles/ panel
2. Click 'Create Role'
3. Enter a role name longer than 255 characters
4. Submit form

Error info:
Recoverable error: Invalid input for field 'name':‘’ is too long (HTTP 400) 
(Request-ID: req-ffaaf93d-b066-4c51-9308-b584891e075c)
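
The limits reported by the API can be mirrored client-side so the form fails
fast; a minimal sketch (the constants match the errors above, but the function
itself is illustrative, not Horizon's actual validator):

```python
GROUP_NAME_MAX = 64   # keystone rejects longer group names with HTTP 400
ROLE_NAME_MAX = 255   # keystone rejects longer role names with HTTP 400

def validate_name(name, max_len):
    # mimic the API-side check so the form can reject input before submission
    if len(name) > max_len:
        raise ValueError(
            'name should not be greater than %d characters' % max_len)
    return name

print(validate_name('admins', GROUP_NAME_MAX))
```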

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => wei.ying (wei.yy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1684353

Title:
  Limit group name to 64 characters and role name to 255 characters

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Env: devstack master branch

  Steps to reproduce:
  1. Go to identity/Groups/ panel
  2. Click 'Create Group'
  3. Enter a name longer than 64 characters
  4. Submit form

  Error info:
  Recoverable error: Group name should not be greater than 64 characters. (HTTP 
400) (Request-ID: req-979dfe60-c461-4fdc-ab6c-61c62ed24ff3)

  Steps to reproduce:
  1. Go to identity/Roles/ panel
  2. Click 'Create Role'
  3. Enter a role name longer than 255 characters
  4. Submit form

  Error info:
  Recoverable error: Invalid input for field 'name':‘’ is too long (HTTP 400) 
(Request-ID: req-ffaaf93d-b066-4c51-9308-b584891e075c)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1684353/+subscriptions



[Yahoo-eng-team] [Bug 1684349] [NEW] mask2cidr error with integer value - argument of type 'int' is not iterable

2017-04-19 Thread Andreas Karis
Public bug reported:

 mask2cidr error with integer value - argument of type 'int' is not
iterable

~~~
def mask2cidr(mask):
    if ':' in str(mask):
        return ipv6mask2cidr(mask)
    elif '.' in mask:
        return ipv4mask2cidr(mask)
    else:
        return mask
~~~

is not type safe. It tries to take into account that the value can be a bare
prefix (i.e. it contains neither ':' nor '.') and then returns mask unchanged.
The problem is that if mask is an integer, this raises a TypeError:

~~~
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 513, in 
status_wrapper
ret = functor(name, args)
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 269, in 
main_init
init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  File "/usr/lib/python2.7/site-packages/cloudinit/stages.py", line 641, in 
apply_network_config
return self.distro.apply_network_config(netcfg, bring_up=bring_up)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
150, in apply_network_config
dev_names = self._write_network_config(netconfig)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/rhel.py", line 59, 
in _write_network_config
ns = parse_net_config_data(netconfig)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
32, in parse_net_config_data
nsi.parse_config(skip_broken=skip_broken)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
205, in parse_config
handler(self, command)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
78, in decorator
return func(self, command, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
239, in handle_physical
subnet['netmask'] = mask2cidr(subnet['netmask'])
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
441, in mask2cidr
elif '.' in mask:
~~~

Made a modification to the code to troubleshoot this:
~~~
# convert subnet ipv6 netmask to cidr as needed
subnets = command.get('subnets')
print subnets
if subnets:
    for subnet in subnets:
        if subnet['type'] == 'static':
            if 'netmask' in subnet and ':' in subnet['address']:
                subnet['netmask'] = mask2cidr(subnet['netmask'])
            for route in subnet.get('routes', []):
                if 'netmask' in route:
                    route['netmask'] = mask2cidr(route['netmask'])
~~~

This error can be hit on RHEL when running the following twice (unclear why it
takes two runs):

 rm -Rf /var/lib/cloud/data/*  ; cloud-init --force init

On the second run, this will be returned:
~~~
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 513, in 
status_wrapper
ret = functor(name, args)
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 269, in 
main_init
init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  File "/usr/lib/python2.7/site-packages/cloudinit/stages.py", line 641, in 
apply_network_config
return self.distro.apply_network_config(netcfg, bring_up=bring_up)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
150, in apply_network_config
dev_names = self._write_network_config(netconfig)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/rhel.py", line 59, 
in _write_network_config
ns = parse_net_config_data(netconfig)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
32, in parse_net_config_data
nsi.parse_config(skip_broken=skip_broken)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
205, in parse_config
handler(self, command)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
78, in decorator
return func(self, command, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
239, in handle_physical
subnet['netmask'] = mask2cidr(subnet['netmask'])
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 
441, in mask2cidr
elif '.' in mask:
TypeError: argument of type 'int' is not iterable

[{u'routes': [{u'netmask': u'0.0.0.0', u'network': u'0.0.0.0', u'gateway': 
u'192.168.0.1'}], u'netmask': u'255.255.255.0', u'type': 'static', 'ipv4': 
True, 'address': u'192.168.0.11'}, {u'routes': [{u'netmask': 0, u'network': 
u'::', u'gateway': u'2000:192:168::1'}], u'netmask': 64, 'ipv6': True, u'type': 
'static', 'address': u'2000:192:168::4'}]
~~~

Note the `u'netmask': 64` integer.

This can be fixed by changing the code to:
~~~
def mask2cidr(mask):
    if ':' in str(mask):
        return ipv6mask2cidr(mask)
    elif '.' in str(mask):
        return ipv4mask2cidr(mask)
    else:
        return mask
~~~
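
The str() coercion can be checked in isolation; a self-contained sketch, where
ipv4mask2cidr/ipv6mask2cidr are simplified stand-ins for cloud-init's real
helpers:

```python
def ipv4mask2cidr(mask):
    # simplified stand-in: count set bits in a dotted-quad netmask
    return sum(bin(int(octet)).count('1') for octet in mask.split('.'))

def ipv6mask2cidr(mask):
    # simplified stand-in: count set bits in a hex IPv6 netmask like ffff:ffff::
    return sum(bin(int(group, 16)).count('1')
               for group in mask.split(':') if group)

def mask2cidr(mask):
    # str() makes the membership tests safe for integer prefixes like 64
    if ':' in str(mask):
        return ipv6mask2cidr(mask)
    elif '.' in str(mask):
        return ipv4mask2cidr(mask)
    else:
        return mask

print(mask2cidr(64))                # integer prefix passes through unchanged
print(mask2cidr('255.255.255.0'))   # dotted quad becomes 24
```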

[Yahoo-eng-team] [Bug 1684016] Re: delete port will throw exceptions in ovs agent

2017-04-19 Thread QunyingRan
I'm sorry, this issue was produced by my mistake. Thank you.

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
 Assignee: QunyingRan (ran-qunying) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684016

Title:
  delete port will throw exceptions in ovs agent

Status in neutron:
  Invalid

Bug description:
  In master, when deleting a VM there is an exception:

  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2211, in rpc_loop
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
ovs_restarted)
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1820, in process_network_ports
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
port_info['removed'])
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent TypeError: 
unsupported operand type(s) for |=: 'set' and 'NoneType'
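
  The failing operation is a set union with None; a small sketch of the
defensive pattern that avoids it (the port_info shape here is an assumption,
not the agent's exact structure):

```python
def union_removed(devices, port_info):
    # guard: port_info.get('removed') may be None rather than an empty set,
    # and `set |= None` raises TypeError
    devices |= port_info.get('removed') or set()
    return devices

print(union_removed({'tap1'}, {'removed': None}))      # no TypeError
print(union_removed({'tap1'}, {'removed': {'tap2'}}))
```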

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684016/+subscriptions



[Yahoo-eng-team] [Bug 1684338] [NEW] tempest jobs failing with midonet-cluster complaining about keystone

2017-04-19 Thread YAMAMOTO Takashi
Public bug reported:

eg. http://logs.openstack.org/11/458011/1/check/gate-tempest-dsvm-
networking-midonet-ml2-ubuntu-xenial/86d989d/logs/midonet-cluster.txt.gz

2017.04.19 10:50:50.132 ERROR [rest-api-55] auth Login authorization error 
occurred for user null
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) 
~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 ~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
~[na:1.8.0_121]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 
~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:538) ~[na:1.8.0_121]
at sun.net.NetworkClient.doConnect(NetworkClient.java:180) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.(HttpClient.java:211) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.New(HttpClient.java:308) ~[na:1.8.0_121]
at sun.net.www.http.HttpClient.New(HttpClient.java:326) ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966) 
~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
 ~[na:1.8.0_121]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler$1$1.getOutputStream(URLConnectionClientHandler.java:238)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.CommittingOutputStream.commitStream(CommittingOutputStream.java:117)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.CommittingOutputStream.write(CommittingOutputStream.java:89)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.filter.LoggingFilter$LoggingOutputStream.write(LoggingFilter.java:110)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:1848)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1041)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:854) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:650) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.RequestWriter.writeRequestEntity(RequestWriter.java:300)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:217)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
... 39 common frames omitted
Wrapped by: com.sun.jersey.api.client.ClientHandlerException: 
java.net.ConnectException: Connection refused (Connection refused)
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.filter.LoggingFilter.handle(LoggingFilter.java:217) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at com.sun.jersey.api.client.Client.handle(Client.java:652) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:570) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
org.midonet.cluster.auth.keystone.KeystoneClient$$anonfun$org$midonet$cluster$auth$keystone$KeystoneClient$$post$1.apply(KeystoneClient.scala:412)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
org.midonet.cluster.auth.keystone.KeystoneClient.tryRequest(KeystoneClient.scala:422)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
... 32 common frames omitted

[Yahoo-eng-team] [Bug 1549516] Re: Too many reconnections to the SQLalchemy engine

2017-04-19 Thread D G Lee
** Changed in: oslo.db
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1549516

Title:
  Too many reconnections to the SQLalchemy engine

Status in OpenStack Identity (keystone):
  Invalid
Status in oslo.db:
  Invalid

Bug description:
  === Issue Description ===

  It looks like for every DB request oslo.db is reconnecting to the
  SQLalchemy engine, that leads to  "SELECT 1" request to the database
  per every meaningful request.

  === Prelude() ===

  I was testing osprofiler library (OpenStack profiler) changes, that
  are currently on review for Nova, Neutron and Keystone + OSprofiler
  integration, trying to perform nova-boot requests. After generating
  the trace for this request, I got the following html report:
  https://dinabelova.github.io/nova-boot-keystone-cache-turn-on.html .
  Total number of DB operations done for this request is 417, that seems
  too much for the instance creation. Half of these requests is "SELECT
  1" requests, that are used by oslo.db per engine connection via
  _connect_ping_listener function -
  
https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L53

  I ensured that all of these requests are coming from this method via
  adding _connect_ping_listener tracing https://dinabelova.github.io
  /nova-boot-oslodb-ping-listener-profiled.html - so we can see that all
  "SELECT 1" requests are placed under db_ping_listener section in the
  trace.

  These "SELECT 1"s are in fact spending 1/3 of all time SQLalchemy
  engine in oslo.db is spending on all requests. This seems to be a bug.
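
  The overhead pattern is easy to see with a toy stand-in pool (a sketch, not
oslo.db's actual implementation): if every checkout issues a liveness ping,
N meaningful queries cost roughly 2N statements:

```python
class PingingPool:
    """Toy connection pool that pings ('SELECT 1') on every checkout."""

    def __init__(self):
        self.statements = []

    def _ping(self):
        # stand-in for oslo.db's _connect_ping_listener liveness check
        self.statements.append('SELECT 1')

    def execute(self, sql):
        self._ping()  # one extra round-trip per checkout
        self.statements.append(sql)

pool = PingingPool()
for _ in range(3):
    pool.execute('SELECT * FROM instances')

print(pool.statements.count('SELECT 1'), len(pool.statements))  # 3 pings, 6 total
```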

  === Env description & steps to reproduce ===

  I have devstack environment with latest 1.1.0 osprofiler installed. To
  install profiler on the devstack env I used the following additions to
  the local.conf:

  enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer master
  enable_plugin osprofiler https://git.openstack.org/openstack/osprofiler master

  Additionally I've used the following changes:
  - Nova: https://review.openstack.org/254703
  - Nova client: https://review.openstack.org/#/c/254699/
  - Neutron: https://review.openstack.org/273951
  - Neutron client: https://review.openstack.org/281508
  - Keystone: https://review.openstack.org/103368

  Also I've modified standard keystone.conf to turn memcache caching:

  [cache]

  memcache_servers = 127.0.0.1:11211
  backend = oslo_cache.memcache_pool
  enabled = True

  Then you can simply run nova --profile SECRET_KEY boot --image
   --flavor 42 vm1 to generate all notifications and then
  osprofiler trace show --html  --out nova-boot.html using the
  trace id printed in the bottom of nova boot output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1549516/+subscriptions



[Yahoo-eng-team] [Bug 1684326] [NEW] MTU not set on nova instance's vif_type=bridge tap interface

2017-04-19 Thread iain MacDonnell
Public bug reported:

Using linuxbridge with VLAN networks with an MTU other than 1500, the nova
instance's VIF's tap interface's MTU needs to be set to that of the network
it's being plugged into; otherwise the first instance on a compute node gets
a tap interface (and bridge) with MTU 1500, while the VM tries to use MTU
9000, and frames get dropped.

Sequence on first instance launch goes like:

 * os_vif creates bridge (with initial MTU 1500)
 * libvirt creates the domain, which creates the tap interface and adds it to 
the bridge. The tap interface inherits the bridge's MTU of 1500
 * The L2 agent notices that a new tap interface showed up, and ensures that 
the VLAN interface gets added to the bridge - the VLAN interface has MTU 9000 
(inherited from the physical interface), but the bridge MTU remains at 1500 - 
the lowest amongst its member ports (i.e. the tap interface)

If that instance is then destroyed, the tap interface goes away, and the
bridge updates its MTU to the lowest amongst its members, which is now
the VLAN interface - i.e. 9000. A second instance launch then picks up
the bridge's MTU of 9000 and works fine.

This was previously solved in the l2 agent under
https://bugs.launchpad.net/networking-cisco/+bug/1443607, but the
solution was reverted in
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d352661c56d5f03713e615b7e0c2c9c8688e0132

Re-implementation should probably get the MTU from the neutron network,
rather than the VLAN interface.
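
A re-implementation along those lines would look up the neutron network's MTU
and pin the tap device to it; a rough sketch (the helper name is hypothetical,
and the `ip link` command is only constructed, not run):

```python
def build_set_mtu_cmd(tap_name, network_mtu):
    # hypothetical helper: build the `ip link` invocation that would pin the
    # tap device's MTU to the neutron network's MTU at plug time
    return ['ip', 'link', 'set', 'dev', tap_name, 'mtu', str(network_mtu)]

print(build_set_mtu_cmd('tap6a904c1b-03', 9000))
```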

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684326

Title:
  MTU not set on nova instance's vif_type=bridge tap interface

Status in neutron:
  New

Bug description:
  Using linuxbridge with VLAN networks with an MTU other than 1500, the nova
  instance's VIF's tap interface's MTU needs to be set to that of the
  network it's being plugged into; otherwise the first instance on a
  compute node gets a tap interface (and bridge) with MTU 1500, while the
  VM tries to use MTU 9000, and frames get dropped.

  Sequence on first instance launch goes like:

   * os_vif creates bridge (with initial MTU 1500)
   * libvirt creates the domain, which creates the tap interface and adds it to 
the bridge. The tap interface inherits the bridge's MTU of 1500
   * The L2 agent notices that a new tap interface showed up, and ensures that 
the VLAN interface gets added to the bridge - the VLAN interface has MTU 9000 
(inherited from the physical interface), but the bridge MTU remains at 1500 - 
the lowest amongst its member ports (i.e. the tap interface)

  If that instance is then destroyed, the tap interface goes away, and
  the bridge updates its MTU to the lowest amongst its members, which is
  now the VLAN interface - i.e. 9000. A second instance launch then
  picks up the bridge's MTU of 9000 and works fine.

  This was previously solved in the l2 agent under
  https://bugs.launchpad.net/networking-cisco/+bug/1443607, but the
  solution was reverted in
  
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d352661c56d5f03713e615b7e0c2c9c8688e0132

  Re-implementation should probably get the MTU from the neutron
  network, rather than the VLAN interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684326/+subscriptions



[Yahoo-eng-team] [Bug 1684321] [NEW] tox -e npm fails to start Chrome

2017-04-19 Thread Jack Choy
Public bug reported:

When running 'tox -e npm' on Ubuntu, Chrome fails to start with the
following error:

19 04 2017 16:34:15.249:INFO [karma]: Karma v1.1.2 server started at 
http://localhost:9876/
19 04 2017 16:34:15.251:INFO [launcher]: Launching browser Chrome with 
unlimited concurrency
19 04 2017 16:34:15.261:INFO [launcher]: Starting browser Chrome
19 04 2017 16:34:15.461:ERROR [launcher]: Cannot start Chrome

19 04 2017 16:34:15.468:INFO [launcher]: Trying to start Chrome again (1/2).
19 04 2017 16:34:15.809:ERROR [launcher]: Cannot start Chrome

19 04 2017 16:34:15.810:INFO [launcher]: Trying to start Chrome again (2/2).
19 04 2017 16:34:16.415:ERROR [launcher]: Cannot start Chrome

19 04 2017 16:34:16.416:ERROR [launcher]: Chrome failed 2 times (cannot start). 
Giving up.

If you revise the [testenv:npm] rule to start Chrome first, you'll see
why it failed:

grep: write error
mkdir: cannot create directory ‘/.local’: Permission denied
touch: cannot touch ‘/.local/share/applications/mimeapps.list’: No such file or 
directory
[7633:7633:0419/163528:ERROR:browser_main_loop.cc(267)] Gtk: cannot open 
display: 

Obviously, the permission denied error is valid in that you shouldn't be
able to create a directory off of root.  What's missing is the $HOME
preceding the directory name.

The second problem is due to an unset DISPLAY variable needed when
running this in *nix environments.

This tells me Chrome needs at least $HOME and $DISPLAY, but it is not
set because tox only passes the PATH variable in *nix environments as
mentioned in http://tox.readthedocs.io/en/latest/example/basic.html

To fix this, we can add the following lines to the [testenv:npm] section:
passenv =
  HOME
  DISPLAY

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1684321

Title:
  tox -e npm fails to start Chrome

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When running 'tox -e npm' on Ubuntu, Chrome fails to start with the
  following error:

  19 04 2017 16:34:15.249:INFO [karma]: Karma v1.1.2 server started at 
http://localhost:9876/
  19 04 2017 16:34:15.251:INFO [launcher]: Launching browser Chrome with 
unlimited concurrency
  19 04 2017 16:34:15.261:INFO [launcher]: Starting browser Chrome
  19 04 2017 16:34:15.461:ERROR [launcher]: Cannot start Chrome

  19 04 2017 16:34:15.468:INFO [launcher]: Trying to start Chrome again (1/2).
  19 04 2017 16:34:15.809:ERROR [launcher]: Cannot start Chrome

  19 04 2017 16:34:15.810:INFO [launcher]: Trying to start Chrome again (2/2).
  19 04 2017 16:34:16.415:ERROR [launcher]: Cannot start Chrome

  19 04 2017 16:34:16.416:ERROR [launcher]: Chrome failed 2 times (cannot 
start). Giving up.

  If you revise the [testenv:npm] rule to start Chrome first, you'll see
  why it failed:

  grep: write error
  mkdir: cannot create directory ‘/.local’: Permission denied
  touch: cannot touch ‘/.local/share/applications/mimeapps.list’: No such file 
or directory
  [7633:7633:0419/163528:ERROR:browser_main_loop.cc(267)] Gtk: cannot open 
display: 

  Obviously, the permission denied error is valid in that you shouldn't
  be able to create a directory off of root.  What's missing is the
  $HOME preceding the directory name.

  The second problem is due to an unset DISPLAY variable needed when
  running this in *nix environments.

  This tells me Chrome needs at least $HOME and $DISPLAY, but it is not
  set because tox only passes the PATH variable in *nix environments as
  mentioned in http://tox.readthedocs.io/en/latest/example/basic.html

  To fix this, we can add the following lines to the [testenv:npm] section:
  passenv =
HOME
DISPLAY

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1684321/+subscriptions



[Yahoo-eng-team] [Bug 1684292] [NEW] Get specific hypervisor based on id

2017-04-19 Thread Vivek Agrawal
Public bug reported:

Add an interface for getting a specific Hypervisor object based on the ID of
the hypervisor. This will be a useful API for custom extensions of
Horizon. This API will map to the hypervisor get() of the nova client.
This is a small enhancement; IMO Horizon pages/extensions which only need
to access a single Hypervisor object can use this API.

** Affects: horizon
 Importance: Undecided
 Assignee: Vivek Agrawal (vivek-agrawal)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Vivek Agrawal (vivek-agrawal)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1684292

Title:
  Get specific hypervisor based on id

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Add an interface for getting a specific Hypervisor object based on the ID
  of the hypervisor. This will be a useful API for custom extensions of
  Horizon. This API will map to the hypervisor get() of the nova
  client. This is a small enhancement; IMO Horizon pages/extensions which
  only need to access a single Hypervisor object can use this API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1684292/+subscriptions



[Yahoo-eng-team] [Bug 1617001] Re: Don't reinitialize themable selects once they've been initialized

2017-04-19 Thread Gloria Gu
The change Diana put there doesn't seem to have any effect, so marking
this invalid.

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Gloria Gu (gloria-gu) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1617001

Title:
  Don't reinitialize themable selects once they've been initialized

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  init_themable_select should only work on things that have not already
  been initialized.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1617001/+subscriptions



[Yahoo-eng-team] [Bug 1684277] [NEW] Use ovsdbapp for Neutron OVSDB API

2017-04-19 Thread Terry Wilson
Public bug reported:

The OVSDB API has been split out into its own project: ovsdbapp. Neutron
should use ovsdbapp, but still provide a deprecated ability to import
the OVSDB API for projects that have not yet been switched over to using
ovsdbapp.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684277

Title:
  Use ovsdbapp for Neutron OVSDB API

Status in neutron:
  New

Bug description:
  The OVSDB API has been split out into its own project: ovsdbapp.
  Neutron should use ovsdbapp, but still provide a deprecated ability to
  import the OVSDB API for projects that have not yet been switched over
  to using ovsdbapp.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684277/+subscriptions



[Yahoo-eng-team] [Bug 1684038] Re: ironic CI regression: dnsmasq doesn't respond to dhcp request

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/457904
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=37097a991b837b5508b4eb85649854a6c07cc280
Submitter: Jenkins
Branch:master

commit 37097a991b837b5508b4eb85649854a6c07cc280
Author: Vasyl Saienko 
Date:   Wed Apr 19 07:47:09 2017 +

Revert "Update auto-addresses on MAC change"

Original patch caused ironic CI regression.

This reverts commit 27746d1d16ceec59ca6576d565e5e4157427fa96.

Closes-Bug: #1684038

Change-Id: I29afcfac626f4947ad4db288185208c2c5c2b7a1


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684038

Title:
  ironic CI regression: dnsmasq doesn't respond to dhcp request

Status in Ironic:
  New
Status in neutron:
  Fix Released

Bug description:
  All jobs that use the flat network_interface fail because the bootstrap
  can't get an IP address from the DHCP server.

  An example of failed job is:
  
http://logs.openstack.org/38/447538/6/check/gate-tempest-dsvm-ironic-ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-nv/f57afee/logs/

  In the syslog we can see that DHCP doesn't respond to requests:

  http://logs.openstack.org/38/447538/6/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
  nv/f57afee/logs/syslog.txt.gz#_Apr_18_12_30_00

  
  Apr 18 12:30:00 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available
  Apr 18 12:30:00 ubuntu-xenial-internap-mtl01-8463102 ntpd[1715]: Listen normally on 15 vnet0 [fe80::fc54:ff:fef3:12ee%19]:123
  Apr 18 12:30:00 ubuntu-xenial-internap-mtl01-8463102 ntpd[1715]: new interface(s) found: waking up resolver
  Apr 18 12:30:01 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available
  Apr 18 12:30:03 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available
  Apr 18 12:30:07 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1684038/+subscriptions



[Yahoo-eng-team] [Bug 1671548] Re: Updating mac_address of port doesn't update its autoconfigured IPv6 address

2017-04-19 Thread Ihar Hrachyshka
We are reverting the patch.

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671548

Title:
  Updating mac_address of port doesn't update its autoconfigured IPv6
  address

Status in neutron:
  Confirmed

Bug description:
  PUT /v2.0/ports/d38564ff-8a98-4a21-a162-9b2841c78ebc.json HTTP/1.1
  ...
  {"port": {"mac_address": "fa:16:3e:d2:03:61"}}

  
  This updates the port's MAC address but doesn't update the IP address.
  When using SLAAC or stateless address mode it should, as the IPv6 address
  is derived from the MAC address.
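
  For context, a SLAAC/stateless IPv6 address embeds the MAC address in its
  interface identifier (modified EUI-64, RFC 4291), which is why a MAC change
  should trigger an address update. A minimal sketch of the derivation:

```python
def eui64_interface_id(mac):
    # RFC 4291 modified EUI-64: flip the universal/local bit of the
    # first octet and insert ff:fe between the two halves of the MAC.
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02
    eui = b[:3] + [0xFF, 0xFE] + b[3:]
    return ":".join("%02x%02x" % (eui[i], eui[i + 1])
                    for i in range(0, 8, 2))

# The SLAAC address is the subnet prefix followed by this identifier.
print(eui64_interface_id("fa:16:3e:d2:03:61"))  # → f816:3eff:fed2:0361
```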

  Version - Master from 20170127

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671548/+subscriptions



[Yahoo-eng-team] [Bug 1684241] [NEW] Bug in url parser

2017-04-19 Thread Anton Studenov
Public bug reported:

There is code that does not support the new auth URL format:

https://github.com/openstack/keystoneauth/blob/3364703d3b0e529f7c1b7d1d8ea81726c4f5f121/keystoneauth1/identity/generic/base.py#L143-L152

e.g. auth_url = http://example.com/foo/v2.0

And such format is now used in devstack master:

https://github.com/openstack-
dev/devstack/blob/6ed53156b6198e69d59d1cf3a3497e96f5b7a870/lib/keystone#L116

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1684241

Title:
  Bug in url parser

Status in OpenStack Identity (keystone):
  New

Bug description:
  There is code that does not support the new auth URL format:

  
https://github.com/openstack/keystoneauth/blob/3364703d3b0e529f7c1b7d1d8ea81726c4f5f121/keystoneauth1/identity/generic/base.py#L143-L152

  e.g. auth_url = http://example.com/foo/v2.0

  And such format is now used in devstack master:

  https://github.com/openstack-
  dev/devstack/blob/6ed53156b6198e69d59d1cf3a3497e96f5b7a870/lib/keystone#L116
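
  keystoneauth's actual parser is linked above; as an illustration, version
  discovery that inspects only the last path segment handles prefixed URLs
  like http://example.com/foo/v2.0. The function name and exact matching
  rules here are assumptions, not keystoneauth's real code:

```python
import re
from urllib.parse import urlparse

def url_version(auth_url):
    # Look at the LAST path segment only, so a deployment-specific
    # prefix (http://example.com/foo/v2.0) does not confuse the parser.
    last = urlparse(auth_url).path.rstrip("/").rsplit("/", 1)[-1]
    m = re.match(r"^v(\d+)(?:\.(\d+))?$", last)
    return (int(m.group(1)), int(m.group(2) or 0)) if m else None

print(url_version("http://example.com/foo/v2.0"))  # → (2, 0)
print(url_version("http://example.com/identity"))  # → None
```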

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1684241/+subscriptions



[Yahoo-eng-team] [Bug 1684028] Re: wrong status codes for v3-ext oauth1 create request token and create access token

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/457896
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=502f783bd3c095ae8506e35be8c88fcc9c7c8ccd
Submitter: Jenkins
Branch: master

commit 502f783bd3c095ae8506e35be8c88fcc9c7c8ccd
Author: Hemanth Nakkina 
Date:   Wed Apr 19 12:49:41 2017 +0530

Minor corrections in OS-OAUTH1 api documentation

Change response codes to 201 for the following API

https://developer.openstack.org/api-ref/identity/v3-ext/#create-request-token
https://developer.openstack.org/api-ref/identity/v3-ext/#create-access-token

Change request type to PUT for the following API

https://developer.openstack.org/api-ref/identity/v3-ext/#authorize-request-token

Change-Id: If9a4d66e6acb9379cef4335167f63631c034831a
Closes-Bug: #1684028


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1684028

Title:
  wrong status codes for v3-ext oauth1 create request token and create
  access token

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The following updates are required in the API documentation.

  The normal response codes for the following APIs should be 201 instead
  of 200:

  https://developer.openstack.org/api-ref/identity/v3-ext/#create-request-token
  https://developer.openstack.org/api-ref/identity/v3-ext/#create-access-token

  The request type should be PUT instead of POST for the following API
  
https://developer.openstack.org/api-ref/identity/v3-ext/#authorize-request-token

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1684028/+subscriptions



[Yahoo-eng-team] [Bug 1683832] Re: [neutron lbaasv2] session_perssion is updated successfully even if key do not exists

2017-04-19 Thread Akihiro Motoki
The Octavia project maintains the neutron-lbaas repo now.
Targeting the bug to Octavia.

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683832

Title:
  [neutron lbaasv2] session_perssion is updated successfully even if key
  do not exists

Status in octavia:
  New

Bug description:
  I can update the session_persistence of a pool like this:
  neutron lbaas-pool-update ecfe0df2-a64d-4c3c-b5a0-d7255d410473 
--session_persistence type=dict type=HTTP_COOKIE,kidding_me=yes

  which should raise an exception like this:
  Invalid input for session_persistence . Reason: Invalid data format for 
session persistence:  'type=HTTP_COOKIE,kidding_me=yes'.
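
  The missing check amounts to rejecting unknown keys before applying the
  update. The key names, allowed types, and error text below are
  illustrative only; the real neutron-lbaas validator differs:

```python
ALLOWED_KEYS = {"type", "cookie_name"}
ALLOWED_TYPES = {"SOURCE_IP", "HTTP_COOKIE", "APP_COOKIE"}

def validate_session_persistence(sp):
    # Reject unknown keys such as 'kidding_me' instead of silently
    # accepting them, and require a known persistence type.
    unknown = set(sp) - ALLOWED_KEYS
    if unknown or sp.get("type") not in ALLOWED_TYPES:
        raise ValueError(
            "Invalid data format for session persistence: %r" % (sp,))
    return sp

validate_session_persistence({"type": "HTTP_COOKIE"})  # accepted
# validate_session_persistence({"type": "HTTP_COOKIE", "kidding_me": "yes"})
# would raise ValueError
```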

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1683832/+subscriptions



[Yahoo-eng-team] [Bug 1683824] Re: [neutron lbaasv2] ip and subnet is dismatched when creating pool-member

2017-04-19 Thread Akihiro Motoki
The Octavia project maintains the neutron-lbaas repo now.
Targeting the bug to Octavia.

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683824

Title:
  [neutron lbaasv2] ip and subnet is dismatched when creating pool-
  member

Status in octavia:
  In Progress

Bug description:
  When creating a pool member, neutron lbaasv2 does not verify whether the
  IP address is valid on the subnet.
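
  The missing verification is essentially a membership test of the member
  address against the subnet's CIDR, e.g.:

```python
import ipaddress

def member_ip_in_subnet(ip, cidr):
    # A pool member's address should fall inside its subnet's CIDR;
    # member creation should be rejected otherwise.
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

print(member_ip_in_subnet("192.168.1.10", "192.168.1.0/24"))  # → True
print(member_ip_in_subnet("10.0.0.5", "192.168.1.0/24"))      # → False
```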

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1683824/+subscriptions



[Yahoo-eng-team] [Bug 1682222] Re: Instance deployment failure due to neutron syntax error

2017-04-19 Thread Vladyslav Drok
Same comment: it was just run on a patch with a merge conflict; there is no
actual issue with ironic or nova.

** Changed in: ironic
   Status: New => Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/168

Title:
  Instance deployment failure due to neutron syntax error

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  See error below in n-cpu.log
  Detailed logs available at: 
https://stash.opencrowbar.org/logs/27/456127/2/check/dell-hw-tempest-dsvm-ironic-pxe_ipmitool/d29e3b6/:
  or 
  
http://logs.openstack.org/27/456127/2/check/gate-ironic-docs-ubuntu-xenial/7838443/console.html

  2017-04-12 19:21:46.750 16295 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received reply msg_id: cbd36fde5e9444f28f72acd31189cf31 __call__ 
/usr/local/lib/python2.7/d
  ist-packages/oslo_messaging/_drivers/amqpdriver.py:346
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall [-] Fixed 
interval looping call 'nova.virt.ironic.driver.IronicDriver._wait_for_active' 
failed
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137, 
in _run_loop
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall   File 
"/opt/stack/new/nova/nova/virt/ironic/driver.py", line 431, in _wait_for_active
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall raise 
exception.InstanceDeployFailure(msg)
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall 
InstanceDeployFailure: Failed to provision instance 
651a266c-ea66-472b-bc61-dafe4870fdd6: Failed to prepa
  re to deploy. Error: Failed to load DHCP provider neutron, reason: invalid 
syntax (neutron.py, line 153)
  2017-04-12 19:21:46.792 16295 ERROR oslo.service.loopingcall
  2017-04-12 19:21:46.801 16295 ERROR nova.virt.ironic.driver 
[req-9b30e546-51d9-4e4f-b4bd-cc5d75118ea3 tempest-BaremetalBasicOps-247864778 
tempest-BaremetalBasicOps-24
  7864778] Error deploying instance 651a266c-ea66-472b-bc61-dafe4870fdd6 on 
baremetal node 9ab67aec-921e-464d-8f2f-f9da65649a5e.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/168/+subscriptions



[Yahoo-eng-team] [Bug 1426241] Re: pci plugin needs to be re-enabled for V2 microversions

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/457854
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=75a7e6fc7d02608bf128ad72b2b8945515b12c21
Submitter: Jenkins
Branch: master

commit 75a7e6fc7d02608bf128ad72b2b8945515b12c21
Author: Matt Riedemann 
Date:   Tue Apr 18 21:14:43 2017 -0400

Remove unused os-pci API

The os-pci API was never part of the v2.0 API and was added
to the v3 API, but when the v3 API turned into the v2.1 API
which is backward compatible with the v2.0 API, the os-pci
API was removed from v2.1. The original intent was to enable
it in a microversion but that never happened.

We should just delete this API since it has a number of issues
anyway:

1. It's not documented (which makes sense since it's not enabled).
2. The PciHypervisorController just takes the compute_nodes.pci_stats
   dict and dumps it to json out of the REST API with no control over
   the keys in the response. That means if we ever change the fields
   in the PciDevicePool object, we implicitly introduce a backward
   incompatible change in the REST API.
3. We don't want to be reporting host stats out of the API [1].
4. To make the os-hypervisors extension work in a multi-cell environment
   we'd have to add uuids to the PciDevices model and change the API to
   return and take in uuids to identify the devices for GET requests.
5. And last but not least, no one has asked for this in over two years.

As a result of removing this API we can also remove the join on the
pci_devices table when showing details about an instance or listing
instances, which were added years ago because of the PciServerController:

Id3c8a0b187e399ce2acecd4aaa37ac95e731d46c

Id3e60c3c56c2eb4209e8aca8a2c26881ca86b435

[1] 
https://docs.openstack.org/developer/nova/policies.html?#metrics-gathering

Closes-Bug: #1426241
Closes-Bug: #1673869

Change-Id: I9099744264eeec175672d10d04da69648dec1a9d


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426241

Title:
  pci plugin needs to be re-enabled for V2 microversions

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The PCI API support was enabled for v3 but never for V2. However V2.1
  is built on v3 and it includes everything in v3. So we are disabling
  pci support in v3. and then will renable in the v2 microversions as
  one of the early microversion changes.

  This bug is to keep track of this work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426241/+subscriptions



[Yahoo-eng-team] [Bug 1673869] Re: api-ref: os-pci API is not documented at all

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/457854
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=75a7e6fc7d02608bf128ad72b2b8945515b12c21
Submitter: Jenkins
Branch: master

commit 75a7e6fc7d02608bf128ad72b2b8945515b12c21
Author: Matt Riedemann 
Date:   Tue Apr 18 21:14:43 2017 -0400

Remove unused os-pci API

The os-pci API was never part of the v2.0 API and was added
to the v3 API, but when the v3 API turned into the v2.1 API
which is backward compatible with the v2.0 API, the os-pci
API was removed from v2.1. The original intent was to enable
it in a microversion but that never happened.

We should just delete this API since it has a number of issues
anyway:

1. It's not documented (which makes sense since it's not enabled).
2. The PciHypervisorController just takes the compute_nodes.pci_stats
   dict and dumps it to json out of the REST API with no control over
   the keys in the response. That means if we ever change the fields
   in the PciDevicePool object, we implicitly introduce a backward
   incompatible change in the REST API.
3. We don't want to be reporting host stats out of the API [1].
4. To make the os-hypervisors extension work in a multi-cell environment
   we'd have to add uuids to the PciDevices model and change the API to
   return and take in uuids to identify the devices for GET requests.
5. And last but not least, no one has asked for this in over two years.

As a result of removing this API we can also remove the join on the
pci_devices table when showing details about an instance or listing
instances, which were added years ago because of the PciServerController:

Id3c8a0b187e399ce2acecd4aaa37ac95e731d46c

Id3e60c3c56c2eb4209e8aca8a2c26881ca86b435

[1] 
https://docs.openstack.org/developer/nova/policies.html?#metrics-gathering

Closes-Bug: #1426241
Closes-Bug: #1673869

Change-Id: I9099744264eeec175672d10d04da69648dec1a9d


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1673869

Title:
  api-ref: os-pci API is not documented at all

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This API is not in the compute API reference at all:

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/pci.py

  https://developer.openstack.org/api-ref/compute/

  There are really three parts there:

  1. PciServerController shows PCI information for a given server, so
  that's an extension of the /servers API. That puts the os-
  pci:pci_devices key in the server response body.

  2. PciHypervisorController shows PCI devices on a given compute node,
  so that's an extension of the /os-hypervisors API. That puts the os-
  pci:pci_stats key in the os-hypervisors response body.

  3. PciController is for listing all PCI devices and showing details
  about a specific PCI device. When listing PCI devices in this API, we
  query all compute nodes, and then for each compute node we get the PCI
  devices and dump those into a list of dicts where the keys are
  whitelisted and based on whether or not we're listing PCI devices with
  details or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1673869/+subscriptions



[Yahoo-eng-team] [Bug 1649616] Re: Keystone Token Flush job does not complete in HA deployed environment

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/454351
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=dc7f81083180eeb5233f7007e3d2514cc0d7c6d3
Submitter: Jenkins
Branch: master

commit dc7f81083180eeb5233f7007e3d2514cc0d7c6d3
Author: Peter Sabaini 
Date:   Thu Apr 6 23:06:29 2017 +0200

Make flushing tokens more robust

Commit token flushes between batches in order to lower resource
consumption and make flushing more robust for replication

Change-Id: I9be37e420353a336a8acd820eadd47d4bcf7324f
Closes-Bug: #1649616


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649616

Title:
  Keystone Token Flush job does not complete in HA deployed environment

Status in OpenStack Identity (keystone):
  Fix Released
Status in puppet-keystone:
  Triaged
Status in tripleo:
  Triaged

Bug description:
  The Keystone token flush job can get into a state where it will never
  complete because the transaction size exceeds the MySQL galera
  transaction size - wsrep_max_ws_size (1073741824).

  
  Steps to Reproduce:
  1. Authenticate many times
  2. Observe that keystone token flush job runs (should be a very long time 
depending on disk) >20 hours in my environment
  3. Observe errors in mysql.log indicating a transaction that is too large

  
  Actual results:
  Expired tokens are not actually flushed from the database without any errors 
in keystone.log.  Only errors appear in mysql.log.

  
  Expected results:
  Expired tokens to be removed from the database

  
  Additional info:
  It is likely that you can demonstrate this with less than 1 million tokens as 
the >1 million token table is larger than 13GiB and the max transaction size is 
1GiB, my token bench-marking Browbeat job creates more than needed.  

  Once the token flush job can not complete the token table will never
  decrease in size and eventually the cloud will run out of disk space.

  Furthermore the flush job will consume disk utilization resources.
  This was demonstrated on slow disks (Single 7.2K SATA disk).  On
  faster disks you will have more capacity to generate tokens, you can
  then generate the number of tokens to exceed the transaction size even
  faster.

  Log evidence:
  [root@overcloud-controller-0 log]# grep " Total expired" 
/var/log/keystone/keystone.log
  2016-12-08 01:33:40.530 21614 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1082434
  2016-12-09 09:31:25.301 14120 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1084241
  2016-12-11 01:35:39.082 4223 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1086504
  2016-12-12 01:08:16.170 32575 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1087823
  2016-12-13 01:22:18.121 28669 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1089202
  [root@overcloud-controller-0 log]# tail mysqld.log 
  161208  1:33:41 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161208  1:33:41 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161209  9:31:26 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161209  9:31:26 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161211  1:35:39 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161211  1:35:40 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161212  1:08:16 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161212  1:08:17 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161213  1:22:18 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161213  1:22:19 [ERROR] WSREP: rbr write fail, data_len: 0, 2

  
  Disk utilization issue graph is attached.  The entire job in that graph takes 
from the first spike is disk util(~5:18UTC) and culminates in about ~90 minutes 
of pegging the disk (between 1:09utc to 2:43utc).
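
  The fix commits between deletion batches, so no single transaction can
  grow past the replication write-set limit. A sketch of that pattern
  (the schema, batch size, and SQLite-compatible SQL are illustrative, not
  keystone's actual code):

```python
import sqlite3

def flush_expired_tokens(conn, now, batch_size=1000):
    # Delete expired tokens in bounded batches, committing after each
    # batch so one huge transaction never exceeds a limit such as
    # galera's wsrep_max_ws_size.
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM token WHERE rowid IN "
            "(SELECT rowid FROM token WHERE expires < ? LIMIT ?)",
            (now, batch_size))
        conn.commit()
        total += cur.rowcount
        if cur.rowcount < batch_size:
            return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (expires INTEGER)")
conn.executemany("INSERT INTO token VALUES (?)",
                 [(t,) for t in range(2500)])  # 2500 expired tokens
print(flush_expired_tokens(conn, now=5000, batch_size=1000))  # → 2500
```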

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649616/+subscriptions



[Yahoo-eng-team] [Bug 1684158] [NEW] Add tenant_id attribute for ha network

2017-04-19 Thread Dongcan Ye
Public bug reported:

Currently, creating an HA router will create an HA network, but the network
lacks a tenant_id attribute.
It is worth knowing the HA network's owner for administrative purposes.

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684158

Title:
  Add tenant_id attribute for ha network

Status in neutron:
  New

Bug description:
  Currently, creating an HA router will create an HA network, but the network
lacks a tenant_id attribute.
  It is worth knowing the HA network's owner for administrative purposes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684158/+subscriptions



[Yahoo-eng-team] [Bug 1675363] Re: Bump default quotas for ports, subnets, and networks

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/457684
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=cebe1c1527372fc34277ef409694fba93d36646e
Submitter: Jenkins
Branch: master

commit cebe1c1527372fc34277ef409694fba93d36646e
Author: Ihar Hrachyshka 
Date:   Tue Apr 18 08:04:16 2017 -0700

Updated network/port/subnet quotas in docs

Since Ocata, Neutron bumped the default quotas for the resources x10
times. This patch updates the documentation to reflect that.

Change-Id: Ie1dd0e33ae9bf96a8e6bbeb03811e09a1ba3b75c
Closes-Bug: #1675363


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1675363

Title:
  Bump default quotas for ports, subnets, and networks

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/444030
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 95f621f717b2e9fe0c89f7188f6d1668200475c8
  Author: Ihar Hrachyshka 
  Date:   Mon Mar 6 17:03:33 2017 +

  Bump default quotas for ports, subnets, and networks
  
  It's probably not very realistic to expect power users to be happy with
  the default quotas (10 networks, 50 ports, 10 subnets). I believe that
  larger defaults would be more realistic. This patch bumps existing
  quotas for the aforementioned neutron resources x10 times.
  
  DocImpact change default quotas in documentation if used in examples
anywhere.
  UpgradeImpact operators may need to revisit quotas they use.
  Closes-Bug: #1674787
  Change-Id: I04993934627d2d663a1bfccd7467ac4fbfbf1434

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1675363/+subscriptions



[Yahoo-eng-team] [Bug 1602057] Re: [SRU] (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

2017-04-19 Thread James Page
Removing sponsors as update is already in the unapproved queue for
xenial

** Changed in: nova (Ubuntu)
   Status: Confirmed => Fix Released

** Changed in: nova (Ubuntu Xenial)
   Status: Incomplete => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1602057

Title:
  [SRU] (libvirt) KeyError updating resources for some node, guest.uuid
  is not in BDM list

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Won't Fix
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Triaged

Bug description:
  [Impact]

  There currently exists a race condition whereby the compute
  resource_tracker periodic task polls extant instances and checks their
  BDMs which can occur prior to any mappings having yet been created
  e.g. root disk mapping for new instances. This patch ensures that
  instances without any BDMs are skipped.

  [Test Case]
    * deploy Openstack Mitaka with debug logging enabled (not essential but 
helps)

    * create an instance

    * delete its BDMs - pastebin.ubuntu.com/24287419/

    * watch /var/log/nova/nova-compute.log on hypervisor hosting
  instance and wait for next resource_tracker tick

    * ensure that exception mentioned in LP does not occur (happens
  after "Auditing locally available compute resources for node")

  [Regression Potential]

  The resource tracker information is used by the scheduler when
  deciding which compute hosts are able to have an instances scheduled
  to them. In this case the resource tracker would be skipping instances
  that would contribute to disk overcommit ratios. As such it is
  possible that that scheduler will have momentarily skewed information
  about resource consumption on that compute host until the next
  resource_tracker tick. Since the likelihood of this race condition
  occurring is hopefully slim and provided that users have a reasonable
  frequency for the resource_tracker, the likelihood of this becoming a
  long term problem is low since the issue will always be corrected by a
  subsequent tick (although if the compute host in question were
  saturated that would not be fixed until an instances was deleted or
  migrated).

  [Other]
  Note that this patch did not make it into upstream stable/mitaka branch due 
to the stable cutoff so the proposal is to carry in the archive (indefinitely).

  

  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
[req-d5d5d486-b488-4429-bbb5-24c9f19ff2c0 - - - - -] Error updating resources 
for node controller.
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6726, in 
update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 500, 
in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5728, in 
get_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
disk_over_committed = self._get_disk_over_committed_size_total()
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7397, in 
_get_disk_over_committed_size_total
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager KeyError: 
'0a5c5743-9555-4dfd-b26e-198449ebeee5'
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager
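
  The fix skips guests whose block device mappings are missing instead of
  indexing bdms[guest.uuid] directly. A rough sketch of the pattern (the
  function and helper names are illustrative, not nova's actual code):

```python
def total_disk_over_committed(local_instances, bdms, per_instance_size):
    # Skip guests whose block device mappings are not present yet --
    # the periodic task can race with instance creation/deletion, which
    # is what produced the KeyError in the traceback above.
    total = 0
    for uuid, instance in local_instances.items():
        bdm = bdms.get(uuid)
        if bdm is None:
            continue  # no BDMs for this guest on this tick; skip it
        total += per_instance_size(instance, bdm)
    return total

print(total_disk_over_committed(
    {"inst-a": "a", "inst-b": "b"},   # two running guests
    {"inst-a": ["root-disk"]},        # only one has BDMs recorded
    lambda inst, bdm: 10))            # → 10, inst-b safely skipped
```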

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1602057/+subscriptions



[Yahoo-eng-team] [Bug 1684065] Re: No tests available for l3-ha extension under neutron tempest tests

2017-04-19 Thread Anshul
** Project changed: tempest => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684065

Title:
  No tests available for l3-ha extension under neutron tempest tests

Status in neutron:
  New

Bug description:
  After doing a grep on neutron test repo for the extension I am not
  able to find any tests that are related to this extension. I believe
  coverage should be increased in this case.

  I am adding below snippet of the discussion I had with Ihar regarding
  this.

  """
  Indeed it seems there are no tests that explicitly target the
  extension (meaning, they don't utilize the 'ha' attribute added by
  
https://github.com/openstack/neutron/blob/master/neutron/extensions/l3_ext_ha_mode.py#L23)

  That doesn't mean that there are no tests that cover the
  implementation. Instead, existing tests utilizing neutron routers will
  use keepalived implementation if neutron.conf is configured to use HA
  routers for router creation:

  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L62

  I agree that it's not ideal, and we should have some tests that
  actually check that 'ha' attribute works as expected. You may want to
  report a bug for that matter in upstream Launchpad if you feel like.
  """

  Please let me know if more information is needed from my end.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684068] [NEW] No tests available for multi-provider extension under neutron tempest tests

2017-04-19 Thread Anshul
Public bug reported:

This is a new feature, and I think we just miss proper tempest test
coverage for it.

Filing this bug to take care of the same

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: tempest => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684068

Title:
  No tests available for multi-provider extension under neutron tempest
  tests

Status in neutron:
  New

Bug description:
  This is a new feature, and I think we just miss proper tempest test
  coverage for it.

  Filing this bug to take care of the same

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684069] [NEW] No tests available for availability-zone, network-availability-zone, router-availability-zone under neutron tempest tests.

2017-04-19 Thread Anshul
Public bug reported:

 There are AZ tests in
tempest tree, but they cover compute and storage only. AZ tests should also be 
added for Neutron, filing this bug for the same.

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- No tests available for availability-zone, 
network-availability-zone,router-availability-zone
+ No tests available for availability-zone, 
network-availability-zone,router-availability-zone under neutron tempest tests.

** Project changed: tempest => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684069

Title:
  No tests available for availability-zone, network-availability-zone
  ,router-availability-zone under neutron tempest tests.

Status in neutron:
  New

Bug description:
   There are AZ tests in
  tempest tree, but they cover compute and storage only. AZ tests should also 
be added for Neutron, filing this bug for the same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684071] [NEW] No tests available for default-subnetpools in neutron tempest tests

2017-04-19 Thread Anshul
Public bug reported:

There are no tests available under neutron tests for default-
subnetpools, filing this bug to take care of the same.

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: tempest => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684071

Title:
  No tests available for  default-subnetpools in neutron tempest tests

Status in neutron:
  New

Bug description:
  There are no tests available under neutron tests for default-
  subnetpools, filing this bug to take care of the same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684071/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684065] [NEW] No tests available for l3-ha extension under neutron tempest tests

2017-04-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

After doing a grep on neutron test repo for the extension I am not able
to find any tests that are related to this extension. I believe coverage
should be increased in this case.

I am adding below snippet of the discussion I had with Ihar regarding
this.

"""
Indeed it seems there are no tests that explicitly target the
extension (meaning, they don't utilize the 'ha' attribute added by
https://github.com/openstack/neutron/blob/master/neutron/extensions/l3_ext_ha_mode.py#L23)

That doesn't mean that there are no tests that cover the
implementation. Instead, existing tests utilizing neutron routers will
use keepalived implementation if neutron.conf is configured to use HA
routers for router creation:

https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L62

I agree that it's not ideal, and we should have some tests that
actually check that 'ha' attribute works as expected. You may want to
report a bug for that matter in upstream Launchpad if you feel like.
"""

Please let me know if more information is needed from my end.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
No tests available for l3-ha extension under neutron tempest tests
https://bugs.launchpad.net/bugs/1684065
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681440] Re: QoS policy object is not compatible with version 1.2 of the object

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/455338
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0376d2f6adada320554f247e4c67e15243d01d74
Submitter: Jenkins
Branch:master

commit 0376d2f6adada320554f247e4c67e15243d01d74
Author: Sławek Kapłoński 
Date:   Mon Apr 10 14:09:51 2017 +

Make QoS policy object compatible with versions 1.2 and higher

For version 1.2 or higher of QoS policy object it can contain
QoSMinumumBandwidtLimit rules and appending of such rule type
was missing in make_obj_compatible function.
Now such rules are appended to QoS policy.

Change-Id: I40d699db58c34e83272432376d1d59679a680db2
Closes-Bug: #1681440


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681440

Title:
  QoS policy object is not compatible with version 1.2 of the object

Status in neutron:
  Fix Released

Bug description:
  In
  
https://github.com/openstack/neutron/blob/master/neutron/objects/qos/policy.py#L220
  there is no function to make QoS policy object compatible with version
  1.2 and higher (append QoSMinimumBandwidthLimit rules to policy)
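The missing piece was in the object's version-compatibility path: minimum-bandwidth rules exist only from policy version 1.2 on, so the compat routine has to include them for 1.2+ targets and drop them for older consumers. A generic sketch of the make-compatible pattern, assuming hypothetical field and rule-type names rather than neutron's exact ones:

```python
# Hypothetical sketch of the versioned-object compatibility pattern the
# fix follows; "rules"/"minimum_bandwidth" are illustrative names.
def make_policy_compatible(primitive, target_version):
    """Drop rule types a pre-1.2 consumer does not understand."""
    major, minor = (int(part) for part in target_version.split("."))
    if (major, minor) < (1, 2):
        primitive["rules"] = [
            rule for rule in primitive.get("rules", [])
            if rule["type"] != "minimum_bandwidth"
        ]
    return primitive
```

Downgrading to "1.1" strips the minimum-bandwidth rule; targeting "1.2" leaves the policy untouched.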

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684071] [NEW] No tests available for default-subnetpools in neutron tempest tests

2017-04-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

There are no tests available under neutron tests for default-
subnetpools, filing this bug to take care of the same.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
No tests available for  default-subnetpools in neutron tempest tests
https://bugs.launchpad.net/bugs/1684071
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684068] [NEW] No tests available for multi-provider extension under neutron tempest tests

2017-04-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

This is a new feature, and I think we just miss proper tempest test
coverage for it.

Filing this bug to take care of the same

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
No tests available for multi-provider extension under neutron tempest tests 
https://bugs.launchpad.net/bugs/1684068
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684069] [NEW] No tests available for availability-zone, network-availability-zone, router-availability-zone under neutron tempest tests.

2017-04-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

 There are AZ tests in
tempest tree, but they cover compute and storage only. AZ tests should also be 
added for Neutron, filing this bug for the same.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
No tests available for availability-zone, 
network-availability-zone,router-availability-zone under neutron tempest tests.
https://bugs.launchpad.net/bugs/1684069
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648242] Re: [SRU] Failure to retry update_ha_routers_states

2017-04-19 Thread James Page
** Changed in: neutron (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Fix Released

** Changed in: cloud-archive/mitaka
   Importance: Undecided => Low

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648242

Title:
  [SRU] Failure to retry update_ha_routers_states

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Triaged

Bug description:
  [Impact]

Mitigates the risk of an incorrect ha_state being reported by the
l3-agent for HA routers in the case where the rmq (RabbitMQ) connection
is lost during the update window. The fix is already in Ubuntu for O and
N, but the upstream backport just missed the Mitaka PR, hence this SRU.

  [Test Case]

* deploy Openstack Mitaka (Xenial) with l3-ha enabled and min/max l3
  -agents-per-router set to 3

* configure network, router, boot instance with floating ip and
  start pinging

* check that status is 1 agent showing active and 2 showing standby

* trigger some router failovers while rabbit server stopped e.g.

  - go to l3-agent hosting your router and do:

ip netns exec qrouter-${router} ip link set dev  down

check other units to see if ha iface has been failed over

ip netns exec qrouter-${router} ip link set dev  up
 
* ensure ping still running

* eventually all agents will be xxx/standby

* start rabbit server

* wait for correct ha_state to be set (takes a few seconds)

  [Regression Potential]

   I do not envisage any regression from this patch. One potential
   side effect is mildly increased rmq traffic, but it should be
   negligible.

  
  

  Version: Mitaka

  While performing failover testing of L3 HA routers, we've discovered
  an issue with regards to the failure of an agent to report its state.

  In this scenario, we have a router (7629f5d7-b205-4af5-8e0e-
  a3c4d15e7677) scheduled to (3) L3 agents:

  
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | id                                   | host                                             | admin_state_up | alive | ha_state |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | 4434f999-51d0-4bbb-843c-5430255d5c64 | 726404-infra03-neutron-agents-container-a8bb0b1f | True           | :-)   | active   |
  | 710e7768-df47-4bfe-917f-ca35c138209a | 726402-infra01-neutron-agents-container-fc937477 | True           | :-)   | standby  |
  | 7f0888ba-1e8a-4a36-8394-6448b8c606fb | 726403-infra02-neutron-agents-container-0338af5a | True           | :-)   | standby  |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+

  The infra03 node was shut down completely and abruptly. The router
  transitioned to master on infra02 as indicated in these log messages:

  2016-12-06 16:15:06.457 18450 INFO neutron.agent.linux.interface [-] Device qg-d48918fa-eb already exists
  2016-12-07 15:16:51.145 18450 INFO neutron.agent.l3.ha [-] Router c8b5d5b7-ab57-4f56-9838-0900dc304af6 transitioned to master
  2016-12-07 15:16:51.811 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:16:51] "GET / HTTP/1.1" 200 115 0.666464
  2016-12-07 15:18:29.167 18450 INFO neutron.agent.l3.ha [-] Router c8b5d5b7-ab57-4f56-9838-0900dc304af6 transitioned to backup
  2016-12-07 15:18:29.229 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:18:29] "GET / HTTP/1.1" 200 115 0.062110
  2016-12-07 15:21:48.870 18450 INFO neutron.agent.l3.ha [-] Router 7629f5d7-b205-4af5-8e0e-a3c4d15e7677 transitioned to master
  2016-12-07 15:21:49.537 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:21:49] "GET / HTTP/1.1" 200 115 0.667920
  2016-12-07 15:22:08.796 18450 INFO neutron.agent.l3.ha [-] Router 4676e7a5-279c-4114-8674-209f7fd5ab1a transitioned to master
  2016-12-07 15:22:09.515 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:22:09] "GET / HTTP/1.1" 200 115 0.719848

  Traffic to/from VMs through the new master router functioned as
  expected. However, the ha_state remained 'standby':

  
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | id                                   | host                                             | admin_state_up | alive | ha_state |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  

[Yahoo-eng-team] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-04-19 Thread James Page
** No longer affects: horizon (Ubuntu Trusty)

** Changed in: horizon (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Xenial:
  Triaged
Status in horizon source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  Non-admin users are not allowed to change the name of a network using the 
OpenStack Dashboard GUI

  [Test Case]
  1. Deploy trusty-mitaka or xenial-mitaka OpenStack Cloud
  2. Create demo project
  3. Create demo user
  4. Log into OpenStack Dashboard using demo user
  5. Go to Project -> Network and create a network
  6. Go to Project -> Network and Edit the just created network
  7. Change the name and click Save
  8. Observe that your request is denied with an error message

  [Regression Potential]
  Minimal.

  We are adding a patch already merged into upstream stable/mitaka for
  the horizon call to policy_check before sending request to Neutron
  when updating networks.

  The addition of the rule "update_network:shared" to horizon's copy of
  Neutron's policy.json is our own, since upstream was unwilling to
  backport this required change. This rule is not referenced anywhere
  else in the code base, so it will not affect other policy_check calls.

  Upstream bug: https://bugs.launchpad.net/horizon/+bug/1609467

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1666827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668783] Re: Deleted image should not show as a link on instance details panel

2017-04-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/439859
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a35cf2da28d2732acebc297f44a222f2cd7028da
Submitter: Jenkins
Branch:master

commit a35cf2da28d2732acebc297f44a222f2cd7028da
Author: Ying Zuo 
Date:   Wed Mar 1 13:21:41 2017 -0800

Only show image name as a link when the image exists

Updated the nova api to return None when the image name
is not available, for example, when it's deleted.

Added a check for the image name before showing it as
a link.

Change-Id: I342b23dfd8352182f50c41054d2dbb3eae854839
Closes-bug:1668783


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1668783

Title:
  Deleted image should not show as a link on instance details panel

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce:
  1. Create an instance
  2. Delete the image used for the instance
  3. Go to instance details panel

  Note that the Image Name field is a dash with a link. The user will
  get an error when clicking on the link.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1668783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684038] Re: ironic CI regression: dnsmasq doesn't respond to dhcp request

2017-04-19 Thread Vasyl Saienko
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: ironic
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684038

Title:
  ironic CI regression: dnsmasq doesn't respond to dhcp request

Status in Ironic:
  New
Status in neutron:
  New

Bug description:
All jobs that use the flat network_interface fail because the bootstrap
can't get an IP address from the DHCP server.

  An example of failed job is:
  
http://logs.openstack.org/38/447538/6/check/gate-tempest-dsvm-ironic-ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-nv/f57afee/logs/

  In the syslog we can see that DHCP doesn't respond to requests:

  http://logs.openstack.org/38/447538/6/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
  nv/f57afee/logs/syslog.txt.gz#_Apr_18_12_30_00

  
  Apr 18 12:30:00 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available
  Apr 18 12:30:00 ubuntu-xenial-internap-mtl01-8463102 ntpd[1715]: Listen normally on 15 vnet0 [fe80::fc54:ff:fef3:12ee%19]:123
  Apr 18 12:30:00 ubuntu-xenial-internap-mtl01-8463102 ntpd[1715]: new interface(s) found: waking up resolver
  Apr 18 12:30:01 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available
  Apr 18 12:30:03 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available
  Apr 18 12:30:07 ubuntu-xenial-internap-mtl01-8463102 dnsmasq-dhcp[3453]: DHCPDISCOVER(tap6a904c1b-03) 52:54:00:f3:12:ee no address available

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1684038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684028] [NEW] wrong status codes for v3-ext oauth1 create request token and create access token

2017-04-19 Thread Hemanth Nakkina
Public bug reported:

The following updates are required in the API documentation.

The normal response code for the following APIs should be 201 instead of
200:

https://developer.openstack.org/api-ref/identity/v3-ext/#create-request-token
https://developer.openstack.org/api-ref/identity/v3-ext/#create-access-token

The request type should be PUT instead of POST for the following API:
https://developer.openstack.org/api-ref/identity/v3-ext/#authorize-request-token

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1684028

Title:
  wrong status codes for v3-ext oauth1 create request token and create
  access token

Status in OpenStack Identity (keystone):
  New

Bug description:
  The following updates are required in the API documentation.

  The normal response code for the following APIs should be 201 instead
  of 200:

  https://developer.openstack.org/api-ref/identity/v3-ext/#create-request-token
  https://developer.openstack.org/api-ref/identity/v3-ext/#create-access-token

  The request type should be PUT instead of POST for the following API:
  https://developer.openstack.org/api-ref/identity/v3-ext/#authorize-request-token
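As a quick sanity check on the semantics involved (standard-library constants, not keystone code): 201 Created is the code for responses that mint a new resource, such as a request token or access token, while 200 OK merely reports generic success; PUT fits authorize-request-token because the operation targets an already-created resource.

```python
from http import HTTPStatus

# 201 signals "a new resource was created" (the new token), which is why
# the create-request-token and create-access-token docs should say 201;
# 200 implies success with no resource creation.
assert HTTPStatus.CREATED == 201 and HTTPStatus.CREATED.phrase == "Created"
assert HTTPStatus.OK == 200
```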

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1684028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684016] [NEW] delete port will throw exceptions in ovs agent

2017-04-19 Thread QunyingRan
Public bug reported:

In master, when deleting a VM there is an exception:

2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2211, in rpc_loop
2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info, ovs_restarted)
2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1820, in process_network_ports
2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info['removed'])
2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent TypeError: unsupported operand type(s) for |=: 'set' and 'NoneType'

** Affects: neutron
 Importance: Undecided
 Assignee: QunyingRan (ran-qunying)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => QunyingRan (ran-qunying)

** Description changed:

- when delete VM there is an exception:
- 
+ In master, when delete VM there is an exception:
  
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2211, in rpc_loop
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
ovs_restarted)
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1820, in process_network_ports
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
port_info['removed'])
  2017-04-19 13:25:18.442 14865 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent TypeError: 
unsupported operand type(s) for |=: 'set' and 'NoneType'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684016

Title:
  delete port will throw exceptions in ovs agent

Status in neutron:
  New

Bug description:
  In master, when deleting a VM there is an exception:

  2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
  2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2211, in rpc_loop
  2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info, ovs_restarted)
  2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1820, in process_network_ports
  2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info['removed'])
  2017-04-19 13:25:18.442 14865 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent TypeError: unsupported operand type(s) for |=: 'set' and 'NoneType'
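The TypeError means the agent did `removed_set |= value` where `value` was None rather than a set, i.e. the 'removed' entry in `port_info` was absent or unset. A minimal reproduction and one defensive fix, with the dict shape illustrative rather than the agent's exact structure:

```python
# Sketch of the failure mode and a guard; not the actual neutron code.
def collect_removed(port_info):
    removed = set()
    # Buggy form: removed |= port_info['removed'] raises
    # "unsupported operand type(s) for |=: 'set' and 'NoneType'"
    # when the value is None. Defaulting to an empty set avoids it:
    removed |= port_info.get("removed") or set()
    return removed
```

With this guard, a missing or None 'removed' entry yields an empty set instead of crashing the rpc_loop iteration.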

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp