[Yahoo-eng-team] [Bug 1758486] Re: nova can't attach volume, unauthorized

2018-03-23 Thread Huy Doan
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1758486

Title:
  nova can't attach volume, unauthorized

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  after upgrading to Queens, nova is unable to attach a volume from Cinder.

  
  ```
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi 
[req-6cb77dfe-f718-42d5-a83a-10fa80dea989 fa4ca618dd5247a0841adeac574b54d6 
7265d9424e8e4719aa192b08b6d0227b - default default] Unexpected exception in API 
method: Unauthorized: The request you have made requires authentication. (HTTP 
401)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 788, in 
wrapped
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
f(*args, **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/volumes.py", line 
336, in create
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi 
supports_multiattach=supports_multiattach)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 203, in inner
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
function(self, context, instance, *args, **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 151, in inner
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
f(self, context, instance, *args, **kw)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 3940, in 
attach_volume
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi volume = 
self.volume_api.get(context, volume_id)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 291, in wrapper
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi res = 
method(self, ctx, *args, **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 313, in wrapper
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi res = 
method(self, ctx, volume_id, *args, **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 379, in get
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi context, 
microversion=microversion).volumes.get(volume_id)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/v2/volumes.py", line 308, 
in get
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
self._get("/volumes/%s" % volume_id, "volume")
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/base.py", line 321, in _get
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi resp, body = 
self.api.client.get(url)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 199, in 
get
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
self._cs_request(url, 'GET', **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 190, in 
_cs_request
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
self.request(url, method, **kwargs)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 176, in 
request
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi raise 
exceptions.from_response(resp, body)
  2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi Unauthorized: The 
request you have made requires authentication. (HTTP 401)
  2018-03-24 09:24:12.781 23

[Yahoo-eng-team] [Bug 1155633] Re: Inconsistent formats in eventlet.wsgi records

2018-03-23 Thread Launchpad Bug Tracker
[Expired for oslo.log because there has been no activity for 60 days.]

** Changed in: oslo.log
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1155633

Title:
  Inconsistent formats in eventlet.wsgi records

Status in Glance:
  Invalid
Status in oslo.log:
  Expired

Bug description:
  A DEBUG eventlet.wsgi record normally contains a request-context field
  that looks like this:

  [f9ebed7e-1efd-4ce7-af91-72150d032364 86628093430250 58924572785796]

  and while running some code that parses the data into fields based on
  whitespace I just found a record containing this: [-], which shifts the
  positions of the fields that follow. I'd suggest changing [-] to
  [- - -] for consistency.

  If you consider [] a single field this is not a bug, but if you treat
  the whole record as space-separated values, which I believe most people
  like me do, this makes parsing more complicated and also sets a bad
  precedent.
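
  A minimal illustration (with hypothetical log lines, not real Glance
  output) of how the single-token [-] placeholder shifts field positions for
  a parser that splits the whole record on whitespace:

```python
# Naive whitespace-based parsing of eventlet.wsgi DEBUG records.
# The log lines below are hypothetical examples, not real Glance output.
normal = ("DEBUG eventlet.wsgi "
          "[f9ebed7e-1efd-4ce7-af91-72150d032364 86628093430250 58924572785796] "
          "10.0.0.5 GET /v2/images")
anonymous = "DEBUG eventlet.wsgi [-] 10.0.0.5 GET /v2/images"

for record in (normal, anonymous):
    fields = record.split()
    # With the three-token context the client address is fields[5];
    # with the single-token "[-]" it moves to fields[3].
    print(len(fields), fields[3])
```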

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1155633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641026] Re: Keystone ldap tree_dn does not support Chinese

2018-03-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641026

Title:
  Keystone ldap tree_dn does not support Chinese

Status in OpenStack Identity (keystone):
  Expired
Status in oslo.config:
  Expired

Bug description:
  Keystone ldap tree_dn does not support Chinese
  My keystone.conf:

  url = ldap://10.153.195.125
  user = CN=Administrator,CN=Users,DC=h3c,DC=com
  password = h3C123456
  suffix = DC=h3c,DC=com
  query_scope = sub
  user_tree_dn = OU=华三,DC=h3c,DC=com
  ...
  ...

  My tree_dn config OU is Chinese. When I try to log in to the OpenStack
  dashboard, it throws: UnicodeDecodeError: 'ascii' codec can't decode byte
  0xe5 in position 3: ordinal not in range(128)
  I think tree_dn needs to be handled as unicode.
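
  A minimal, self-contained reproduction of the decode error; the DN bytes
  below are an illustrative UTF-8 encoding of the configured value, not
  taken from the reporter's environment:

```python
# "OU=" is 3 ASCII bytes, so the first byte of the UTF-8 encoded
# Chinese characters (0xe5) sits at position 3, matching the error.
raw_dn = b'OU=\xe5\x8d\x8e\xe4\xb8\x89,DC=h3c,DC=com'

try:
    raw_dn.decode('ascii')
except UnicodeDecodeError as exc:
    print(exc)   # 'ascii' codec can't decode byte 0xe5 in position 3: ...

# Decoding (or handling the option) as UTF-8/unicode works fine.
print(raw_dn.decode('utf-8'))
```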

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641026] Re: Keystone ldap tree_dn does not support Chinese

2018-03-23 Thread Launchpad Bug Tracker
[Expired for oslo.config because there has been no activity for 60
days.]

** Changed in: oslo.config
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641026

Title:
  Keystone ldap tree_dn does not support Chinese

Status in OpenStack Identity (keystone):
  Expired
Status in oslo.config:
  Expired

Bug description:
  Keystone ldap tree_dn does not support Chinese
  My keystone.conf:

  url = ldap://10.153.195.125
  user = CN=Administrator,CN=Users,DC=h3c,DC=com
  password = h3C123456
  suffix = DC=h3c,DC=com
  query_scope = sub
  user_tree_dn = OU=华三,DC=h3c,DC=com
  ...
  ...

  My tree_dn config OU is Chinese. When I try to log in to the OpenStack
  dashboard, it throws: UnicodeDecodeError: 'ascii' codec can't decode byte
  0xe5 in position 3: ordinal not in range(128)
  I think tree_dn needs to be handled as unicode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758486] [NEW] nova can't attach volume, unauthorized

2018-03-23 Thread Huy Doan
Public bug reported:

after upgrading to Queens, nova is unable to attach a volume from Cinder.


```
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi 
[req-6cb77dfe-f718-42d5-a83a-10fa80dea989 fa4ca618dd5247a0841adeac574b54d6 
7265d9424e8e4719aa192b08b6d0227b - default default] Unexpected exception in API 
method: Unauthorized: The request you have made requires authentication. (HTTP 
401)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 788, in 
wrapped
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return f(*args, 
**kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/volumes.py", line 
336, in create
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi 
supports_multiattach=supports_multiattach)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 203, in inner
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
function(self, context, instance, *args, **kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 151, in inner
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return f(self, 
context, instance, *args, **kw)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 3940, in 
attach_volume
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi volume = 
self.volume_api.get(context, volume_id)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 291, in wrapper
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi res = 
method(self, ctx, *args, **kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 313, in wrapper
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi res = 
method(self, ctx, volume_id, *args, **kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 379, in get
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi context, 
microversion=microversion).volumes.get(volume_id)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/v2/volumes.py", line 308, 
in get
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
self._get("/volumes/%s" % volume_id, "volume")
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/base.py", line 321, in _get
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi resp, body = 
self.api.client.get(url)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 199, in 
get
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
self._cs_request(url, 'GET', **kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 190, in 
_cs_request
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi return 
self.request(url, method, **kwargs)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 176, in 
request
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi raise 
exceptions.from_response(resp, body)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi Unauthorized: The 
request you have made requires authentication. (HTTP 401)
2018-03-24 09:24:12.781 23797 ERROR nova.api.openstack.wsgi 
2018-03-24 09:24:12.783 23797 INFO nova.api.openstack.wsgi 
[req-6cb77dfe-f718-42d5-a83a-10fa80dea989 fa4ca618dd5247a0841adeac574b54d6 
7265d9424e8e4719aa192b08b6d0227b - default default] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.


```
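
One quick way to narrow this down is to talk to Cinder directly with a fresh
Keystone token, bypassing nova: if the call below succeeds while nova's call
still returns 401, the problem is likely in how the token or catalog/region
settings are configured on the nova side. The endpoint and credentials are
placeholders, and this is only an illustrative check, not a fix:

```python
# Standalone check against Cinder using keystoneauth1 and python-cinderclient.
# All values below are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from cinderclient import client as cinder_client

auth = v3.Password(
    auth_url='http://controller:5000/v3',
    username='demo',
    password='DEMO_PASSWORD',
    project_name='demo',
    user_domain_name='Default',
    project_domain_name='Default',
)
sess = session.Session(auth=auth)

# A 401 here (rather than only from nova's API) points at Keystone/Cinder
# configuration rather than nova itself.
cinder = cinder_client.Client('3', session=sess)
print(cinder.volumes.list())
```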

** Affects: nova
 Importance: Undecided
 Status: New

[Yahoo-eng-team] [Bug 1758460] [NEW] UUID (or any persistent) token providers unable to validate federation token

2018-03-23 Thread Guang Yee
Public bug reported:

With the UUID token provider and WebSSO enabled, a token obtained via
WebSSO cannot be validated by Keystone. In the Keystone log, you'll see
something similar to this:

46386 (keystone.token.providers.common): 2018-03-23 20:24:09,581 DEBUG common 
_populate_roles User 7e93953eda38423f919d83da2544c683 has no access to project 
8d344d1178964026b20be32438b484be
46386 (keystone.token.provider): 2018-03-23 20:24:09,581 DEBUG provider 
validate_token Unable to validate token: The request you have made requires 
authentication.
46386 (keystone.common.wsgi): 2018-03-23 20:24:09,583 WARNING wsgi __call__ 
Could not find token: {u'tenant': {u'domain': {u'id': 
u'6c30c2dba285403e8aa70de9ecb47d0d', u'name': u'websso-domain1'}, u'id': 
u'8d344d1178964026b20be32438b484be', u'name': u'websso-project1'}, 
u'is_domain': None, 'user_id': u'7e93953eda38423f919d83da2544c683', 'expires': 
datetime.datetime(2018, 3, 24, 0, 24, 8), u'token_data': {u'token': 
{u'is_domain': False, u'service_providers': [{u'sp_url': 
u'https://mytest:5000/Shibboleth.sso/SAML2/ECP', u'auth_url': 
u'https://mytest:5000/v3', u'id': u'ks-sp-server'}], u'methods': [u'token', 
u'saml2'], u'roles': [{u'domain_id': None, u'id': 
u'9fe2ff9ee4384b1894a90878d3e92bab', u'name': u'_member_'}], 
u'is_admin_project': False, u'project': {u'domain': {u'id': 
u'6c30c2dba285403e8aa70de9ecb47d0d', u'name': u'websso-domain1'}, u'id': 
u'8d344d1178964026b20be32438b484be', u'name': u'websso-project1'},
...


Looking at the code, it appears we never rebuild federated token roles for UUID
(persistent) tokens.

https://github.com/openstack/keystone/blob/stable/pike/keystone/token/providers/common.py#L610

We only do that for Fernet (non-persistent) tokens.

https://github.com/openstack/keystone/blob/stable/pike/keystone/token/providers/common.py#L635

Consequently, when we try to glue the token data together, the roles are
rebuilt as if the token were a regular token, which results in a 'role
assignment not found' condition.

https://github.com/openstack/keystone/blob/stable/pike/keystone/token/providers/common.py#L649
https://github.com/openstack/keystone/blob/stable/pike/keystone/token/providers/common.py#L418
https://github.com/openstack/keystone/blob/stable/pike/keystone/token/providers/common.py#L344
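
A rough illustration of the asymmetry described above; every function and
data structure here is hypothetical, not Keystone's actual code. The point
is that a federated user has no direct role assignment rows, so the roles
must be reused from the persisted token data rather than rebuilt from
assignments:

```python
# Hypothetical illustration of the role-rebuild asymmetry described above.
def roles_from_assignments(assignments, user_id, project_id):
    roles = [a['role'] for a in assignments
             if a['user_id'] == user_id and a['project_id'] == project_id]
    if not roles:
        raise Exception('User %s has no access to project %s'
                        % (user_id, project_id))   # -> 401 on validate
    return roles

def roles_for_token(token, assignments):
    if token.get('is_federated'):
        # Federated users have no direct assignment rows; the roles mapped
        # at auth time must be reused from the persisted token data.
        return token['token_data']['roles']
    return roles_from_assignments(assignments, token['user_id'],
                                  token['project_id'])

# The reported bug is equivalent to always calling roles_from_assignments(),
# even for federated tokens, which raises for a WebSSO user.
```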


Steps to reproduce:

1. Follow the Keystone docs to set up WebSSO and use the UUID token provider.
2. Log in from Horizon.
3. After successfully logging in, you'll see all kinds of "Unable to retrieve
..." messages from Horizon. Basically, Horizon is unable to use the federated
token to retrieve the user's resources (i.e. compute, network, etc.).

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1758460

Title:
  UUID (or any persistent) token providers unable to validate federation
  token

Status in OpenStack Identity (keystone):
  New

Bug description:
  With the UUID token provider and WebSSO enabled, a token obtained via
  WebSSO cannot be validated by Keystone. In the Keystone log, you'll see
  something similar to this:

  46386 (keystone.token.providers.common): 2018-03-23 20:24:09,581 DEBUG common 
_populate_roles User 7e93953eda38423f919d83da2544c683 has no access to project 
8d344d1178964026b20be32438b484be
  46386 (keystone.token.provider): 2018-03-23 20:24:09,581 DEBUG provider 
validate_token Unable to validate token: The request you have made requires 
authentication.
  46386 (keystone.common.wsgi): 2018-03-23 20:24:09,583 WARNING wsgi __call__ 
Could not find token: {u'tenant': {u'domain': {u'id': 
u'6c30c2dba285403e8aa70de9ecb47d0d', u'name': u'websso-domain1'}, u'id': 
u'8d344d1178964026b20be32438b484be', u'name': u'websso-project1'}, 
u'is_domain': None, 'user_id': u'7e93953eda38423f919d83da2544c683', 'expires': 
datetime.datetime(2018, 3, 24, 0, 24, 8), u'token_data': {u'token': 
{u'is_domain': False, u'service_providers': [{u'sp_url': 
u'https://mytest:5000/Shibboleth.sso/SAML2/ECP', u'auth_url': 
u'https://mytest:5000/v3', u'id': u'ks-sp-server'}], u'methods': [u'token', 
u'saml2'], u'roles': [{u'domain_id': None, u'id': 
u'9fe2ff9ee4384b1894a90878d3e92bab', u'name': u'_member_'}], 
u'is_admin_project': False, u'project': {u'domain': {u'id': 
u'6c30c2dba285403e8aa70de9ecb47d0d', u'name': u'websso-domain1'}, u'id': 
u'8d344d1178964026b20be32438b484be', u'name': u'websso-project1'},
  ...

  
  Looking at the code, it appears we never rebuild federated token roles
  for UUID (persistent) tokens.

  
https://github.com/openstack/keystone/blob/stable/pike/keystone/token/providers/common.py#L610

  We only do that for Fernet (non-persistent) tokens.

  
https://github.com/openstack/keystone/blob/stable/pike/keystone/token/providers/common.py#L635

  Consequently, when we try to glue the token data together, the roles
  are being rebuilt as if

[Yahoo-eng-team] [Bug 1667863] Re: if a subnet has multiple static routes, the network interfaces file is invalid

2018-03-23 Thread Mike Pontillo
Adding cloud-init. This looks like an issue with how the netplan gets
rendered.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1667863

Title:
  if a subnet has multiple static routes, the network interfaces file is
  invalid

Status in cloud-init:
  New
Status in curtin:
  New
Status in MAAS:
  Incomplete

Bug description:
  I have multiple subnets, each has an additional custom static route.

  those subnets are used by different bridges on the same node.

  example:
  brAdm (on interface enp9s0) - subnet 172.30.72.128/25 - static route 
172.30.72.0/21 gw 172.30.72.129
  brPublic (on interface ens9.2002) - subnet 172.30.80.128/25 - static route 
172.30.80.0/21 gw 172.30.80.129

  the resulting pre-up and post-up lines in /etc/network/interfaces are
  malformed, which then creates the wrong routing table.

  It seems the pre-down of one route and the post-up of the next route
  are not separated by a newline.

  See below:

  post-up route add -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || true
  pre-down route del -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || truepost-up route add -net 172.30.72.0 netmask 255.255.248.0 gw 
172.30.72.129 metric 0 || true
  pre-down route del -net 172.30.72.0 netmask 255.255.248.0 gw 172.30.72.129 
metric 0 || true
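
  A small standalone reproduction of the rendering problem (the renderer
  below is a simplified stand-in, not the actual curtin/cloud-init code): if
  each route stanza does not end with its own newline, joining two stanzas
  produces exactly the '|| truepost-up' concatenation shown above.

```python
# Simplified stand-in for an ENI route renderer; not the real code.
def render_route(net, mask, gw, newline=True):
    up = "post-up route add -net %s netmask %s gw %s metric 0 || true" % (net, mask, gw)
    down = "pre-down route del -net %s netmask %s gw %s metric 0 || true" % (net, mask, gw)
    stanza = up + "\n" + down
    return stanza + ("\n" if newline else "")

routes = [("172.30.80.0", "255.255.248.0", "172.30.80.129"),
          ("172.30.72.0", "255.255.248.0", "172.30.72.129")]

# Buggy: stanzas concatenated without a trailing newline -> "|| truepost-up"
print("".join(render_route(*r, newline=False) for r in routes))
print("---")
# Fixed: each stanza ends with a newline, so the lines stay separate.
print("".join(render_route(*r) for r in routes))
```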

  
  Here's the entire resulting network configuration for reference.
  note that a bunch of other bridge interfaces are created, but not used on 
this machine, so not configured.

  
  cat /etc/network/interfaces
  auto lo
  iface lo inet loopback
  dns-nameservers 172.30.72.130
  dns-search r16maas.os maas

  auto enp9s0
  iface enp9s0 inet manual
  mtu 9000

  auto ens9
  iface ens9 inet manual
  mtu 9000

  auto brAdm
  iface brAdm inet static
  address 172.30.72.132/25
  hwaddress ether 08:9e:01:ab:fc:f6
  bridge_ports enp9s0
  bridge_fd 15
  mtu 9000

  auto brData
  iface brData inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brExt
  iface brExt inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brInt
  iface brInt inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brPublic
  iface brPublic inet static
  address 172.30.80.132/25
  gateway 172.30.80.129
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brStoClu
  iface brStoClu inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brStoData
  iface brStoData inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brAdm.52
  iface brAdm.52 inet manual
  vlan_id 52
  mtu 1500
  vlan-raw-device brAdm

  auto ens9.0
  iface ens9.0 inet manual
  mtu 9000
  vlan-raw-device ens9
  post-up route add -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || true
  pre-down route del -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || truepost-up route add -net 172.30.72.0 netmask 255.255.248.0 gw 
172.30.72.129 metric 0 || true
  pre-down route del -net 172.30.72.0 netmask 255.255.248.0 gw 172.30.72.129 
metric 0 || true
  source /etc/network/interfaces.d/*.cfg

  
  route
  Kernel IP routing table
  Destination Gateway Genmask Flags Metric RefUse Iface
  172.30.72.128   *   255.255.255.128 U 0  00 brAdm
  172.30.80.128   *   255.255.255.128 U 0  00 
brPublic

  
  
  ifconfig
  brAdm Link encap:Ethernet  HWaddr 08:9e:01:ab:fc:f6
inet addr:172.30.72.132  Bcast:172.30.72.255  Mask:255.255.255.128
inet6 addr: fe80::a9e:1ff:feab:fcf6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:15029 errors:0 dropped:0 overruns:0 frame:0
TX packets:1447 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7393978 (7.3 MB)  TX bytes:182411 (182.4 KB)

  brAdm.52  Link encap:Ethernet  HWaddr 08:9e:01:ab:fc:f6
inet6 addr: fe80::a9e:1ff:feab:fcf6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:7885 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:398943 (398.9 KB)  TX bytes:488 (488.0 B)

  brDataLink encap:Ethernet  HWaddr 00:02:c9:ce:7c:16
  

[Yahoo-eng-team] [Bug 1758453] [NEW] Port binding 'migrating_to' attribute needlessly updated in post-live-migration for neutron

2018-03-23 Thread Matt Riedemann
Public bug reported:

During live migration, the setup_networks_on_host neutronv2 API method
checks to see if the provided host is different from the existing
instance.host and if so, it knows the instance is being migrated and
sets the 'migrating_to' attribute to the new host in the ports binding
profile:

https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/network/neutronv2/api.py#L351

https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/network/neutronv2/api.py#L322

And then updates the port.

That happens in pre_live_migration which runs on the dest host:

https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/compute/manager.py#L6007

And it also happens again in post_live_migration_at_destination which
runs on the destination host:

https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/compute/manager.py#L6382

Since the neutronv2 API code doesn't check to see if the 'migrating_to'
attribute is already set:

https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/network/neutronv2/api.py#L322

It does a redundant PUT (port update) for no actual change.
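
A minimal sketch of the kind of guard that would avoid the redundant PUT;
the function and structures below are illustrative, not nova's actual
neutronv2 code. The idea is to only send the port update if the binding
profile would actually change:

```python
# Illustrative only; not nova's actual neutronv2 code.
def update_migrating_to(neutron_client, port, migration_dest_host):
    profile = port.get('binding:profile', {}) or {}
    if profile.get('migrating_to') == migration_dest_host:
        # Already set by pre_live_migration; skip the redundant PUT.
        return
    profile['migrating_to'] = migration_dest_host
    neutron_client.update_port(
        port['id'], {'port': {'binding:profile': profile}})
```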

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: live-migration neutron performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1758453

Title:
  Port binding 'migrating_to' attribute needlessly updated in post-live-
  migration for neutron

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  During live migration, the setup_networks_on_host neutronv2 API method
  checks to see if the provided host is different from the existing
  instance.host and if so, it knows the instance is being migrated and
  sets the 'migrating_to' attribute to the new host in the ports binding
  profile:

  
https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/network/neutronv2/api.py#L351

  
https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/network/neutronv2/api.py#L322

  And then updates the port.

  That happens in pre_live_migration which runs on the dest host:

  
https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/compute/manager.py#L6007

  And it also happens again in post_live_migration_at_destination which
  runs on the destination host:

  
https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/compute/manager.py#L6382

  Since the neutronv2 API code doesn't check to see if the
  'migrating_to' attribute is already set:

  
https://github.com/openstack/nova/blob/55b22a54e65728712670c5dde5a833f5349e5b2f/nova/network/neutronv2/api.py#L322

  It does a redundant PUT (port update) for no actual change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1758453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746509] Re: TypeError: Can't upgrade a READER transaction to a WRITER mid-transaction

2018-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/555093
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b1ed92c7af01a9ac7e122a541ce1bdb9be0524c4
Submitter: Zuul
Branch:master

commit b1ed92c7af01a9ac7e122a541ce1bdb9be0524c4
Author: melanie witt 
Date:   Wed Mar 21 22:57:50 2018 +

Move _make_instance_list call outside of DB transaction context

The _make_instance_list method is used to make an InstanceList object
out of database dict-like instance objects. It's possible while making
the list that the various _from_db_object methods that are called might
do their own database writes.

Currently, we're calling _make_instance_list nested inside of a 'reader'
database transaction context and we hit the error:

  TypeError: Can't upgrade a READER transaction to a WRITER
  mid-transaction

during the _make_instance_list call if anything tries to do a database
write. The scenario encountered was after an upgrade to Pike, older
service records without UUIDs were attempted to be updated with UUIDs
upon access, and that access happened to be during an instance list,
so it failed when trying to write the service UUID while nested inside
the 'reader' database transaction context.

This simply moves the _make_instance_list method call out from the
@db.select_db_reader_mode decorated _get_by_filters_impl method to the
get_by_filters method to remove the nesting.

Closes-Bug: #1746509

Change-Id: Ifadf408802cc15eb9769d2dc1fc920426bb7fc20
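
To illustrate the constraint the commit message describes, here is a toy
model of the reader/writer nesting rule using plain context managers; it is
not oslo.db's enginefacade API, just a sketch of why the write has to happen
after the reader context has exited:

```python
# Toy model of the "can't upgrade READER to WRITER mid-transaction" rule.
import contextlib

_in_reader = False

@contextlib.contextmanager
def reader():
    global _in_reader
    _in_reader = True
    try:
        yield
    finally:
        _in_reader = False

@contextlib.contextmanager
def writer():
    if _in_reader:
        raise TypeError("Can't upgrade a READER transaction to a WRITER "
                        "mid-transaction")
    yield

def load_rows():
    with reader():
        return [{'id': 1, 'uuid': None}]

def backfill_uuid(row):
    with writer():           # fails if called while a reader is still open
        row['uuid'] = 'generated'

# Buggy shape: the backfill is attempted inside the reader context.
# Fixed shape (what the commit does): finish the reader first, then write.
rows = load_rows()
for r in rows:
    backfill_uuid(r)
print(rows)
```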


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746509

Title:
  TypeError: Can't upgrade a READER transaction to a WRITER mid-
  transaction

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  Hi, I was running OPenstack Newton with no nova_cell0 database and 
placement-api setup . After migrate to Openstack Pike and correctly setup the 
nova_cell0 and placement-api everything is working fine except the openstack 
server list on tenant that already exist . 
  For example : 

  1. For a new tenant created after the migration at Pike. 
  nova --os-project-name="New Project" list 
  
+--+-+++-+---+
  | ID   | Name| Status | Task 
State | Power State | Networks  |
  
+--+-+++-+---+
  | c41c7e8d-4bc0-4a0f-a9d3-dc719ae2aff0 | SAMPLE_VM   | ACTIVE | - 
 | Running | SAMPLE-SUBNET=192.168.0.8 |
  | 3d3d3e10-f326-4a92-9253-1511e738d1cc | SECOND_INSTANCE | ACTIVE | - 
 | Running | SAMPLE-SUBNET=192.168.0.5 |
  
+--+-+++-+---+
  2. For a old tenant created before the migration at Newton .

  nova --os-project-name="InterCon" list 
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-0dd4ef4d-54c2-4cfd-b8d9-636c5736ef5f)

  And here the log related to this error .

  2018-01-31 07:45:35.832 2340 DEBUG nova.compute.api 
[req-254ac685-f218-4a23-8fa6-9c6a2d48bb07 45bd2d15e8534c469bf08b7db268e8d4 
f80e111e5030457a872bee3a4c11ca70 - 8433db4810f947168950770f8c93a4f2 
8433db4810f947168950770f8c93a4f2] Searching by: {'deleted': False} get_all 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2311
  2018-01-31 07:45:35.837 2340 DEBUG oslo_concurrency.lockutils 
[req-254ac685-f218-4a23-8fa6-9c6a2d48bb07 45bd2d15e8534c469bf08b7db268e8d4 
f80e111e5030457a872bee3a4c11ca70 - 8433db4810f947168950770f8c93a4f2 
8433db4810f947168950770f8c93a4f2] Lock "----" 
acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 
0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
  2018-01-31 07:45:35.837 2340 DEBUG oslo_concurrency.lockutils 
[req-254ac685-f218-4a23-8fa6-9c6a2d48bb07 45bd2d15e8534c469bf08b7db268e8d4 
f80e111e5030457a872bee3a4c11ca70 - 8433db4810f947168950770f8c93a4f2 
8433db4810f947168950770f8c93a4f2] Lock "----" 
released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 
0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
  2018-01-31 07:45:35.858 2340 DEBUG nova.compute.api 
[req-254ac685-f218-4a23-8fa6-9c6a2d48bb07 45bd2d15e8534c469bf08b7db268e8d4 
f80e111e5030457

[Yahoo-eng-team] [Bug 1758409] [NEW] integration tests: restructure ssh timeout

2018-03-23 Thread Joshua Powers
Public bug reported:

# Summary
Currently, during the integration tests, if SSH to an instance times out it
holds up testing for over an hour while repeatedly attempting to SSH to the
instance; note the timestamp jump in: https://paste.ubuntu.com/p/NBQKwm9wdG/

The _ssh_connect function was originally written for the nocloud_kvm
platform and used as a method for determining if an instance was up and
accessible. As such, the function is doing double duty: it is not focused
solely on SSH'ing to an up-and-running instance, and it has a bug in that
it waits far too long.

# Action plan

1. For the nocloud_kvm platform, when starting and before
_wait_for_system, there should be a check that an instance is accessible
during the is_running check. This could again be done via SSH with a
number of retries, but should be taken care of inside the nocloud_kvm
platform itself and not in the SSH connect function.

2. Update the _ssh_connect to timeout quickly, reduce wait on banner,
and only retry up to 3 times.

# Noted Files
tests/cloud_tests/platforms/platforms.py:_ssh_connect()
tests/cloud_tests/platforms/nocloudkvm/instance.py:start()
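
A sketch of the tighter retry loop proposed in step 2, using paramiko
directly; the host, user, key path, timeout, and retry values are
placeholders, not the values the tests will necessarily adopt:

```python
# Illustrative SSH connect with short timeouts and a bounded retry count.
import time
import paramiko

def ssh_connect(host, username, key_filename, retries=3):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    for attempt in range(1, retries + 1):
        try:
            client.connect(host, username=username, key_filename=key_filename,
                           timeout=10, banner_timeout=30)
            return client
        except (paramiko.SSHException, OSError):
            if attempt == retries:
                raise
            time.sleep(5 * attempt)   # brief backoff, then give up quickly
```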

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1758409

Title:
  integration tests: restructure ssh timeout

Status in cloud-init:
  New

Bug description:
  # Summary
  Currently, during the integration tests, if SSH to an instance times out
  it holds up testing for over an hour while repeatedly attempting to SSH to
  the instance; note the timestamp jump in: https://paste.ubuntu.com/p/NBQKwm9wdG/

  The _ssh_connect function was originally written for the nocloud_kvm
  platform and used as a method for determining if an instance was up
  and accessible. As such, the function is doing double duty: it is not
  focused solely on SSH'ing to an up-and-running instance, and it has a
  bug in that it waits far too long.

  # Action plan

  1. For the nocloud_kvm platform, when starting and before
  _wait_for_system, there should be a check that an instance is accessible
  during the is_running check. This could again be done via SSH with a
  number of retries, but should be taken care of inside the nocloud_kvm
  platform itself and not in the SSH connect function.

  2. Update the _ssh_connect to timeout quickly, reduce wait on banner,
  and only retry up to 3 times.

  # Noted Files
  tests/cloud_tests/platforms/platforms.py:_ssh_connect()
  tests/cloud_tests/platforms/nocloudkvm/instance.py:start()

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1758409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597596] Re: network not always cleaned up when spawning VMs

2018-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/520248
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3a503a8f2b934f19049531c5c92130ca7cdd6a7f
Submitter: Zuul
Branch:master

commit 3a503a8f2b934f19049531c5c92130ca7cdd6a7f
Author: Matt Riedemann 
Date:   Wed Nov 15 19:15:44 2017 -0500

Always deallocate networking before reschedule if using Neutron

When a server build fails on a selected compute host, the compute
service will cast to conductor which calls the scheduler to select
another host to attempt the build if retries are not exhausted.

With commit 08d24b733ee9f4da44bfbb8d6d3914924a41ccdc, if retries
are exhausted or the scheduler raises NoValidHost, conductor will
deallocate networking for the instance. In the case of neutron, this
means unbinding any ports that the user provided with the server
create request and deleting any ports that nova-compute created during
the allocate_for_instance() operation during server build.

When an instance is deleted, it's networking is deallocated in the same
way - unbind pre-existing ports, delete ports that nova created.

The problem is when rescheduling from a failed host, if we successfully
reschedule and build on a secondary host, any ports created from the
original host are not cleaned up until the instance is deleted. For
Ironic or SR-IOV ports, those are always deallocated.

The ComputeDriver.deallocate_networks_on_reschedule() method defaults
to False just so that the Ironic driver could override it, but really
we should always cleanup neutron ports before rescheduling.

Looking over bug report history, there are some mentions of different
networking backends handling reschedules with multiple ports differently,
in that sometimes it works and sometimes it fails. Regardless of the
networking backend, however, we are at worst taking up port quota for
the tenant for ports that will not be bound to whatever host the instance
ends up on.

There could also be legacy reasons for this behavior with nova-network,
so that is side-stepped here by just restricting this check to whether
or not neutron is being used. When we eventually remove nova-network we
can then also remove the deallocate_networks_on_reschedule() method and
SR-IOV check.

Change-Id: Ib2abf73166598ff14fce4e935efe15eeea0d4f7d
Closes-Bug: #1597596


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597596

Title:
  network not always cleaned up when spawning VMs

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  Here are the scenario:
  1). Nova scheduler/conductor selects a nova-compute A to spin a VM
  2). Nova compute A tries to spin the VM, but the process failed, and 
generates a RE-SCHEDULE exception.
  3). in re-schedule exception, only when retry is none, network resource is 
properly cleaned up. when retry is not none, the network is not cleaned up, the 
port information still stays with the VM.
  4). Nova condutor was notified about the failure. It selects nova-compute-B 
to spin VM.
  5). nova compute B spins up VM successfully. However, from the 
instance_info_cache, the network_info showed two ports allocated for VM, one 
from the origin network A that associated with nova-compute A nad one from 
network B that associated with nova compute B.

  To simulate the case, raise a fake exception in
  _do_build_and_run_instance in nova-compute A:

  diff --git a/nova/compute/manager.py b/nova/compute/manager.py
  index ac6d92c..8ce8409 100644
  --- a/nova/compute/manager.py
  +++ b/nova/compute/manager.py
  @@ -1746,6 +1746,7 @@ class ComputeManager(manager.Manager):
   filter_properties)
   LOG.info(_LI('Took %0.2f seconds to build instance.'),
timer.elapsed(), instance=instance)
  +raise exception.RescheduledException( 
instance_uuid=instance.uuid, reason="simulated-fault")
   return build_results.ACTIVE
   except exception.RescheduledException as e:
   retry = filter_properties.get('retry')

  environments: 
  *) nova master branch
  *) ubuntu 12.04
  *) kvm
  *) bridged network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1757259] Re: Netlink error raised when trying to delete not existing IP address from device

2018-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/554697
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bbe1bac3f78eb15190289b6bc8d6b3a9ae77b412
Submitter: Zuul
Branch:master

commit bbe1bac3f78eb15190289b6bc8d6b3a9ae77b412
Author: Sławek Kapłoński 
Date:   Tue Mar 20 21:42:26 2018 +0100

Don't raise error when removing not existing IP address

When privileged delete_ip_address function is called to delete
IP address which is already not configured on device, it should
not fail with any error.

Change-Id: I9247ac899a76e5d9a2962d2cb81279f2d6f16c0b
Closes-Bug: #1757259


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1757259

Title:
  Netlink error raised when trying to delete not existing IP address
  from device

Status in neutron:
  Fix Released

Bug description:
  In DVR multinode scenario tests L3 agent fails many times with errors
  in L3 agent logs: http://logs.openstack.org/76/550676/10/check
  /neutron-tempest-plugin-dvr-multinode-
  scenario/1c3297a/logs/screen-q-l3.txt.gz?

  It looks like there is some problem when deleting an IP address from a
  device.
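
  The fix makes the privileged delete a no-op when the address is already
  gone. Below is a rough standalone sketch of that idempotent behaviour
  using pyroute2; the exact IPRoute.addr() command string and argument names
  are assumptions for illustration and are not copied from neutron's code.

```python
# Sketch of an idempotent IP address delete; pyroute2 call details assumed.
import errno

from pyroute2 import IPRoute
from pyroute2.netlink.exceptions import NetlinkError

def delete_ip_address(device, cidr):
    address, prefixlen = cidr.split('/')
    ip = IPRoute()
    try:
        idx = ip.link_lookup(ifname=device)[0]
        ip.addr('delete', index=idx, address=address,
                prefixlen=int(prefixlen))
    except NetlinkError as e:
        if e.code != errno.EADDRNOTAVAIL:
            raise
        # Address already absent; treat the delete as a no-op.
    finally:
        ip.close()
```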

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1757259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758359] [NEW] nova set-password fails if password already set

2018-03-23 Thread Remy van Elst
Public bug reported:

If the nova password has been set, trying to set it again (with the
purpose of re-setting the password) fails. This affects both the nova
set-password command (I couldn't find the counterpart in the openstack
server help) and posting the password from inside the instance.

The code in question does not allow overwriting; if the password is
already set it returns an error:

if meta_data.password:
raise exc.HTTPConflict()

https://github.com/openstack/nova/blob/master/nova/api/metadata/password.py#L65

I'm running libvirt with KVM/qemu on Ocata. This bug is not related:
https://bugs.launchpad.net/nova/+bug/1757061, that is the effect that
happens after a password set fails.

Could this be changed to allow password changing/resetting if a password
has already been set? For example by accepting an HTTP DELETE request or
allowing an empty password to trigger the reset? ('')

The api does have such an endpoint but it's admin-only by default:
https://developer.openstack.org/api-ref/compute/#clear-admin-password
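
A tiny sketch of the requested behaviour versus the current guard; this is
purely hypothetical handler logic, not nova's metadata API:

```python
# Hypothetical illustration of the current vs. requested behaviour.
class Conflict(Exception):
    pass

class PasswordStore(object):
    def __init__(self):
        self.password = None

    def set_once(self, value):          # current behaviour: write-once
        if self.password:
            raise Conflict('HTTP 409: password already set')
        self.password = value

    def reset_then_set(self, value):    # requested behaviour (sketch)
        self.password = None
        self.set_once(value)
```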

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  If the nova password has been set, trying to set it again (with the
- purpose of re-setting the password) fails.
+ purpose of re-setting the password) fails. Both the nova set-password
+ command (couldn't find the counterpart in the openstack server help) as
+ posting the password from inside the instance.
  
  This code seems to not have a retry, if the password is set it returns
  an error
  
- if meta_data.password:
- raise exc.HTTPConflict()
+ if meta_data.password:
+ raise exc.HTTPConflict()
  
  
https://github.com/openstack/nova/blob/master/nova/api/metadata/password.py#L65
  
  I'm running libvirt with KVM/qemu on Ocata. This bug is not related:
  https://bugs.launchpad.net/nova/+bug/1757061, that is the effect that
  happens after a password set fails.
  
  Could this be changed to allow password changing/resetting if a password
  has already been set? For example by accepting an HTTP DELETE request or
  allowing an empty password to trigger the reset? ('')

** Description changed:

  If the nova password has been set, trying to set it again (with the
  purpose of re-setting the password) fails. Both the nova set-password
  command (couldn't find the counterpart in the openstack server help) as
  posting the password from inside the instance.
  
  This code seems to not have a retry, if the password is set it returns
  an error
  
  if meta_data.password:
  raise exc.HTTPConflict()
  
  
https://github.com/openstack/nova/blob/master/nova/api/metadata/password.py#L65
  
  I'm running libvirt with KVM/qemu on Ocata. This bug is not related:
  https://bugs.launchpad.net/nova/+bug/1757061, that is the effect that
  happens after a password set fails.
  
  Could this be changed to allow password changing/resetting if a password
  has already been set? For example by accepting an HTTP DELETE request or
  allowing an empty password to trigger the reset? ('')
+ 
+ The api does have such an endpoint but it's admin-only by default:
+ https://developer.openstack.org/api-ref/compute/#clear-admin-password

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1758359

Title:
  nova set-password fails if password already set

Status in OpenStack Compute (nova):
  New

Bug description:
  If the nova password has been set, trying to set it again (with the
  purpose of re-setting the password) fails. This affects both the nova
  set-password command (I couldn't find the counterpart in the openstack
  server help) and posting the password from inside the instance.

  The code in question does not allow overwriting; if the password is
  already set it returns an error:

  if meta_data.password:
  raise exc.HTTPConflict()

  
https://github.com/openstack/nova/blob/master/nova/api/metadata/password.py#L65

  I'm running libvirt with KVM/qemu on Ocata. This bug is not related:
  https://bugs.launchpad.net/nova/+bug/1757061, that is the effect that
  happens after a password set fails.

  Could this be changed to allow password changing/resetting if a
  password has already been set? For example by accepting an HTTP DELETE
  request or allowing an empty password to trigger the reset? ('')

  The api does have such an endpoint but it's admin-only by default:
  https://developer.openstack.org/api-ref/compute/#clear-admin-password

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1758359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758353] [NEW] neutron - qr- and qg- interfaces lose their vlan tag

2018-03-23 Thread Ian Kumlien
Public bug reported:

On a running instance, we have had network issues on several occasions.

During the last issue we noticed that the interfaces lost their vlan tags in 
openvswitch:
ovs-vsctl show |grep qg-fb5a3595-48 -A 10
Port "qg-fb5a3595-48"
Interface "qg-fb5a3595-48"
type: internal
---

A complete restart of neutron-l3-agent caused a migration and now it works with 
a different vlan tag:
ovs-vsctl show |grep -A 10 qg-fb5a3595-48
Port "qg-fb5a3595-48"
tag: 52
Interface "qg-fb5a3595-48"
type: internal
---

How is this possible?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1758353

Title:
  neutron - qr- and qg- interfaces lose their vlan tag

Status in neutron:
  New

Bug description:
  On a running instance, we have had network issues on several
  occasions.

  During the last issue we noticed that the interfaces lost their vlan tags in 
openvswitch:
  ovs-vsctl show |grep qg-fb5a3595-48 -A 10
  Port "qg-fb5a3595-48"
  Interface "qg-fb5a3595-48"
  type: internal
  ---

  A complete restart of neutron-l3-agent caused a migration and now it works 
with a different vlan tag:
  ovs-vsctl show |grep -A 10 qg-fb5a3595-48
  Port "qg-fb5a3595-48"
  tag: 52
  Interface "qg-fb5a3595-48"
  type: internal
  ---

  How is this possible?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1758353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758354] [NEW] Excessive warnings about snapshotting a paused instance

2018-03-23 Thread Matt Riedemann
Public bug reported:

We see this warning from the compute manager in CI runs:

https://github.com/openstack/nova/blob/7b96206699ac28f807676bd08c6dee7a89bcb77c/nova/compute/manager.py#L3340

http://logs.openstack.org/20/541420/2/check/tempest-
full/a416830/controller/logs/screen-n-cpu.txt.gz?level=WARNING#_Mar_22_16_45_28_061143

Mar 22 16:45:28.061143 ubuntu-xenial-ovh-bhs1-0003133028 nova-
compute[13877]: WARNING nova.compute.manager [None req-ee41217e-
bf1a-4622-aed5-f872fc772d5f tempest-ImagesTestJSON-641968735 tempest-
ImagesTestJSON-641968735] [instance:
6a3e1cb2-63ff-4514-aa32-c5c4c73f84d8] trying to snapshot a non-running
instance: (state: 3 expected: 1)

state=3 is PAUSED and 1=RUNNING.

And this:

Mar 22 16:45:51.800064 ubuntu-xenial-ovh-bhs1-0003133028 nova-
compute[13877]: WARNING nova.compute.manager [None req-d9c5694c-899c-
4a57-8318-043df137d564 tempest-ImagesTestJSON-641968735 tempest-
ImagesTestJSON-641968735] [instance: 3b7589a5-ce3b-4f82-a43a-
a48497de9382] trying to snapshot a non-running instance: (state: 4
expected: 1)

state=4 is SHUTDOWN.

Maybe this is related to bug 1741667 which for older versions of
libvirt, trying to snapshot a PAUSED instance would hang:

https://review.openstack.org/#/c/532214/

If you look at that patch, it's specifically about doing *live*
snapshots with the libvirt driver on a SHUTDOWN or PAUSED instance. Live
snapshot is controlled in the libvirt driver via a config option:

[workarounds]/disable_libvirt_livesnapshot

That now defaults to False so we always attempt a live snapshot with the
libvirt driver, at least in CI runs.

Given that the guest state of the instance during the snapshot is only a
concern for the underlying virt driver, and only when it's doing a live
snapshot (which should probably be a capability trait on the compute node
via the driver, btw), and given that the API allows users to snapshot
paused and stopped instances:

https://github.com/openstack/nova/blob/7b96206699ac28f807676bd08c6dee7a89bcb77c/nova/compute/api.py#L2717

We should either downgrade the warning to DEBUG level or remove it
completely from the compute manager since it's really virt-driver
specific.

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: compute serviceability

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1758354

Title:
  Excessive warnings about snapshotting a paused instance

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  We see this warning from the compute manager in CI runs:

  
https://github.com/openstack/nova/blob/7b96206699ac28f807676bd08c6dee7a89bcb77c/nova/compute/manager.py#L3340

  http://logs.openstack.org/20/541420/2/check/tempest-
  
full/a416830/controller/logs/screen-n-cpu.txt.gz?level=WARNING#_Mar_22_16_45_28_061143

  Mar 22 16:45:28.061143 ubuntu-xenial-ovh-bhs1-0003133028 nova-
  compute[13877]: WARNING nova.compute.manager [None req-ee41217e-
  bf1a-4622-aed5-f872fc772d5f tempest-ImagesTestJSON-641968735 tempest-
  ImagesTestJSON-641968735] [instance:
  6a3e1cb2-63ff-4514-aa32-c5c4c73f84d8] trying to snapshot a non-running
  instance: (state: 3 expected: 1)

  state=3 is PAUSED and 1=RUNNING.

  And this:

  Mar 22 16:45:51.800064 ubuntu-xenial-ovh-bhs1-0003133028 nova-
  compute[13877]: WARNING nova.compute.manager [None req-d9c5694c-899c-
  4a57-8318-043df137d564 tempest-ImagesTestJSON-641968735 tempest-
  ImagesTestJSON-641968735] [instance: 3b7589a5-ce3b-4f82-a43a-
  a48497de9382] trying to snapshot a non-running instance: (state: 4
  expected: 1)

  state=4 is SHUTDOWN.

  Maybe this is related to bug 1741667 which for older versions of
  libvirt, trying to snapshot a PAUSED instance would hang:

  https://review.openstack.org/#/c/532214/

  If you look at that patch, it's specifically about doing *live*
  snapshots with the libvirt driver on a SHUTDOWN or PAUSED instance.
  Live snapshot is controlled in the libvirt driver via a config option:

  [workarounds]/disable_libvirt_livesnapshot

  That now defaults to False so we always attempt a live snapshot with
  the libvirt driver, at least in CI runs.

  Given that the guest state of the instance during the snapshot is only
  a concern for the underlying virt driver, and only when it's doing a
  live snapshot (which should probably be a capability trait on the
  compute node via the driver, btw), and given that the API allows users
  to snapshot paused and stopped instances:

  
https://github.com/openstack/nova/blob/7b96206699ac28f807676bd08c6dee7a89bcb77c/nova/compute/api.py#L2717

  We should either downgrade the warning to DEBUG level or remove it
  completely from the compute manager since it's really virt-driver
  specific.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1758354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsu

[Yahoo-eng-team] [Bug 1756507] Re: The function _cleanup_running_deleted_instances repeat detach volume

2018-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/554090
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1a241d0fb374c8747e3712313417a4fa284a4e43
Submitter: Zuul
Branch:master

commit 1a241d0fb374c8747e3712313417a4fa284a4e43
Author: zhengyao1 
Date:   Mon Mar 19 10:51:00 2018 +0800

remove _cleanup_running_deleted_instances repeat detach volume

the volumes already detached during the above _shutdown_instance() call.
So detach is not requested from _cleanup_volumes() in this case. So the
call change to self._cleanup_volumes(context, instance, bdms, detach=False).

Change-Id: I833f259972dd2e0e6bb3bbaa0e5a78a93f59b076
Closes-Bug: #1756507


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1756507

Title:
  The function _cleanup_running_deleted_instances repeat detach volume

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  
https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n7421:
  the volumes are already detached during the preceding _shutdown_instance()
  call, so detach is not needed in _cleanup_volumes() in this case. The call
  should probably be changed to self._cleanup_volumes(context, instance,
  bdms, detach=False).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1756507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1757273] Re: nova-compute fails to start even if [placement]/region_name is set

2018-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/554759
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=be9854b0fdff22a72a9699a0900e53e0595bd533
Submitter: Zuul
Branch:master

commit be9854b0fdff22a72a9699a0900e53e0595bd533
Author: Kevin_Zheng 
Date:   Wed Mar 21 09:43:21 2018 +0800

Change compute mgr placement check to region_name

Change https://review.openstack.org/#/c/492247/ in queens deprecated the
[placement]/os_region_name config option and you should be using
'region_name' in that group now, and you'll get a deprecation warning if
using 'os_region_name', but if you do that, nova-compute fails to start.

This patch fix the bug by adding [placement]/region_name to the check.

Change-Id: Iea7d5d0d6907adbcb236dc43b5af7469de2ba78b
Closes-Bug: #1757273


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1757273

Title:
  nova-compute fails to start even if [placement]/region_name is set

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  Change https://review.openstack.org/#/c/492247/ in Queens deprecated the
  [placement]/os_region_name config option in favour of 'region_name' in the
  same group. Using 'os_region_name' now emits a deprecation warning, but
  switching to 'region_name' makes nova-compute fail to start, as seen here:

  http://logs.openstack.org/77/554577/1/check/tempest-full-
  py3/df52956/controller/logs/screen-n-cpu.txt.gz#_Mar_20_15_16_30_122538

  Mar 20 15:16:30.122538 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service [None req-19eb4465-6304-40fe-bb23-4bc7ce96f03a None 
None] Error starting thread.: nova.exception.PlacementNotConfigured: This 
compute is not configured to talk to the placement service. Configure the 
[placement] section of nova.conf and restart the service.
  Mar 20 15:16:30.122790 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service Traceback (most recent call last):
  Mar 20 15:16:30.122925 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service   File 
"/usr/local/lib/python3.5/dist-packages/oslo_service/service.py", line 729, in 
run_service
  Mar 20 15:16:30.123087 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service service.start()
  Mar 20 15:16:30.123219 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service   File "/opt/stack/nova/nova/service.py", line 162, 
in start
  Mar 20 15:16:30.123348 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service self.manager.init_host()
  Mar 20 15:16:30.123484 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service   File "/opt/stack/nova/nova/compute/manager.py", 
line 1135, in init_host
  Mar 20 15:16:30.123628 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service raise exception.PlacementNotConfigured()
  Mar 20 15:16:30.123749 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service nova.exception.PlacementNotConfigured: This compute 
is not configured to talk to the placement service. Configure the [placement] 
section of nova.conf and restart the service.
  Mar 20 15:16:30.123870 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
ERROR oslo_service.service 

  From the config:

  Mar 20 15:16:30.044109 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
DEBUG oslo_service.service [None req-19eb4465-6304-40fe-bb23-4bc7ce96f03a None 
None] placement.os_region_name   = None {{(pid=19979) log_opt_values 
/usr/local/lib/python3.5/dist-packages/oslo_config/cfg.py:2898}}
  Mar 20 15:16:30.044351 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
DEBUG oslo_service.service [None req-19eb4465-6304-40fe-bb23-4bc7ce96f03a None 
None] placement.randomize_allocation_candidates = False {{(pid=19979) 
log_opt_values /usr/local/lib/python3.5/dist-packages/oslo_config/cfg.py:2898}}
  Mar 20 15:16:30.044612 ubuntu-xenial-ovh-gra1-0003080089 nova-compute[19979]: 
DEBUG oslo_service.service [None req-19eb4465-6304-40fe-bb23-4bc7ce96f03a None 
None] placement.region_name  = RegionOne {{(pid=19979) log_opt_values 
/usr/local/lib/python3.5/dist-packages/oslo_config/cfg.py:2898}}

  And this is the code that fails:

  
https://github.com/openstack/nova/blob/3fd863d8bf2fa1fc09acd08d976689462cffd2e3/nova/compute/manager.py#L1134

  That needs to be changed to:

  if CONF.placement.os_region_name is None and CONF.placement.region_name is None:
      ...
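
  For reference, a hedged sketch of how that condition would sit in
  ComputeManager.init_host(), based on the traceback above (surrounding code
  simplified and assumed):

  ```
  if (CONF.placement.os_region_name is None
          and CONF.placement.region_name is None):
      # still not configured to talk to the placement service
      raise exception.PlacementNotConfigured()
  ```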

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1757273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1719460] Re: (perf) Unnecessarily joining instance.services when listing instances regardless of microversion

2018-03-23 Thread Matt Riedemann
** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => (unassigned)

** Changed in: nova
 Assignee: (unassigned) => Andrey Volkov (avolkov)

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1719460

Title:
  (perf) Unnecessarily joining instance.services when listing instances
  regardless of microversion

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  Microversion 2.16 adds the ability to show the host status of an
  instance when listing servers with details or showing a single
  server's details. By default that is only shown for an admin.

  Change https://review.openstack.org/#/c/38/ helped improve the
  performance for this by avoiding lazy-loading the instance.services
  column by doing the join in the DB API when querying the instances
  from the database.

  However, that check is not based on version 2.16, like the 2.26 tags
  check below it.

  This means that we are unnecessarily joining with the services table
  when querying instances with microversions < 2.16, which happens, for
  example, by default in the openstack CLI which uses microversion 2.1.

  We should arguably also make this conditional on policy so that we do not
  join for non-admins by default, but that is less of an issue, since
  non-admins are unlikely to be listing thousands of instances from the
  deployment the way an admin would.
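
  A rough sketch of the kind of microversion check described above (names
  approximate; modeled on the 2.26 tags handling mentioned in the report, not
  a verbatim copy of the nova code):

  ```
  expected_attrs = []
  if api_version_request.is_supported(req, min_version='2.16'):
      # only join instance.services when host_status can actually be shown
      expected_attrs.append('services')
  if api_version_request.is_supported(req, min_version='2.26'):
      expected_attrs.append('tags')
  ```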

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1719460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758339] [NEW] cloud-init init-local tries to get data over network

2018-03-23 Thread Rok Zlender
Public bug reported:

init-local tries to gather data from the metadata service before the network
is ready, which causes a WARNING message to be sent to syslog. I believe
everything ends up being successful and it's just some noise in syslog
during instance launch.

Mar 23 04:18:27 task-302 systemd[1]: Starting Initial cloud-init job 
(pre-networking)...
Mar 23 04:18:33 task-302 cloud-init[399]: Cloud-init v. 17.2 running 
'init-local' at Fri, 23 Mar 2018 04:18:28 +. Up 4.98 seconds.
Mar 23 04:18:33 task-302 cloud-init[399]: 2018-03-23 04:18:33,395 - 
util.py[WARNING]: Failed fetching dynamic/instance-identity from url 
http://169.254.169.254/2009-04-04/dynamic/instance-identity
Mar 23 04:18:33 task-302 systemd[1]: Started Initial cloud-init job 
(pre-networking).
Mar 23 04:18:34 task-302 systemd[1]: Starting Initial cloud-init job (metadata 
service crawler)...
Mar 23 04:18:34 task-302 cloud-init[989]: Cloud-init v. 17.2 running 'init' at 
Fri, 23 Mar 2018 04:18:34 +. Up 11.03 seconds.
Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: 
++Net device 
info++
Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: 
++--+-+---+---+---+
Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: | Device |  Up  |   
Address   |  Mask | Scope | Hw-Address|
Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: 
++--+-+---+---+---+

cloud-init version
cloud-init -v
/usr/bin/cloud-init 17.2
ii  cloud-init   17.2-35-gf576b2a2-0ubuntu1~16.04.2 
 all  Init scripts for cloud instances

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.tar"
   
https://bugs.launchpad.net/bugs/1758339/+attachment/5088215/+files/cloud-init.tar

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1758339

Title:
  cloud-init init-local tries to get data over network

Status in cloud-init:
  New

Bug description:
  init-local tries to gather data from the metadata service before the
  network is ready, which causes a WARNING message to be sent to syslog. I
  believe everything ends up being successful and it's just some noise
  in syslog during instance launch.

  Mar 23 04:18:27 task-302 systemd[1]: Starting Initial cloud-init job 
(pre-networking)...
  Mar 23 04:18:33 task-302 cloud-init[399]: Cloud-init v. 17.2 running 
'init-local' at Fri, 23 Mar 2018 04:18:28 +. Up 4.98 seconds.
  Mar 23 04:18:33 task-302 cloud-init[399]: 2018-03-23 04:18:33,395 - 
util.py[WARNING]: Failed fetching dynamic/instance-identity from url 
http://169.254.169.254/2009-04-04/dynamic/instance-identity
  Mar 23 04:18:33 task-302 systemd[1]: Started Initial cloud-init job 
(pre-networking).
  Mar 23 04:18:34 task-302 systemd[1]: Starting Initial cloud-init job 
(metadata service crawler)...
  Mar 23 04:18:34 task-302 cloud-init[989]: Cloud-init v. 17.2 running 'init' 
at Fri, 23 Mar 2018 04:18:34 +. Up 11.03 seconds.
  Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: 
++Net device 
info++
  Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: 
++--+-+---+---+---+
  Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: | Device |  Up  |  
 Address   |  Mask | Scope | Hw-Address|
  Mar 23 04:18:34 task-302 cloud-init[989]: ci-info: 
++--+-+---+---+---+

  cloud-init version
  cloud-init -v
  /usr/bin/cloud-init 17.2
  ii  cloud-init   17.2-35-gf576b2a2-0ubuntu1~16.04.2   
   all  Init scripts for cloud instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1758339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708731] Re: ovs-fw does not reinstate GRE conntrack entry .

2018-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/540943
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6f7ba76075dd0d645ad6cee6854f87cc41cba1fa
Submitter: Zuul
Branch:master

commit 6f7ba76075dd0d645ad6cee6854f87cc41cba1fa
Author: Jakub Libosvar 
Date:   Mon Feb 5 17:20:09 2018 +

ovs-fw: Fix firewall blink

Previously, when a security group was updated for a given port, the firewall
removed all flows related to the port and added new rules. That
introduced a time window during which there were no rules for the port.

This patch adds a new mechanism using cookies that can be described in
three steps:

1) Create the new OpenFlow rules with a non-default cookie that is considered
the update cookie. All newly generated flows are added with this cookie,
while the existing rules keep the default cookie.
2) Delete all rules for the given port that carry the old default cookie.
This leaves the newly added rules in place.
3) Update the newly added flows from the update cookie back to the default
cookie, so that they are not treated as stale flows and cleaned up on the
next restart of the OVS agent.

Change-Id: I85d9e49c24ee7c91229b43cd329c42149637f254
Closes-bug: #1708731
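
A rough, purely illustrative sketch of the three-step cookie swap described in
the commit message above (hypothetical helper names, not the actual neutron
ovs-fw code):

```
UPDATE_COOKIE = 0x1234   # non-default cookie used while updating a port
DEFAULT_COOKIE = 0x0     # cookie carried by the port's steady-state flows

def update_port_flows(bridge, port, new_rules):
    # 1) add the regenerated flows under the update cookie
    for rule in new_rules:
        bridge.add_flow(cookie=UPDATE_COOKIE, **rule)
    # 2) delete only the old flows, matched by the default cookie,
    #    leaving the newly added rules in place
    bridge.delete_flows(cookie=DEFAULT_COOKIE, port=port.ofport)
    # 3) rewrite the new flows back to the default cookie so they are not
    #    treated as stale and cleaned on the next agent restart
    bridge.set_flow_cookie(old=UPDATE_COOKIE, new=DEFAULT_COOKIE)
```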


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708731

Title:
   ovs-fw does not reinstate GRE conntrack entry .

Status in neutron:
  Fix Released

Bug description:
   *High level description:*

  We have VMs running GRE tunnels between them, with OVSFW and security
  groups in use, and the GRE conntrack helper loaded on the hypervisor.
  GRE works as expected, but the tunnel breaks whenever a neutron OVS agent
  event causes an exception such as the AMQP timeouts or "OVSFW port not
  found" errors below:

  AMQP Timeout :

  2017-04-07 19:07:03.001 5275 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
MessagingTimeout: Timed out waiting for a reply to message ID 
4035644808d24ce9aae65a6ee567021c
  2017-04-07 19:07:03.001 5275 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2017-04-07 19:07:03.003 5275 WARNING oslo.service.loopingcall [-] Function 
'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent._report_state'
 run outlasted interval by 120.01 sec
  2017-04-07 19:07:03.041 5275 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Agent has 
just been revived. Doing a full sync.
  2017-04-07 19:07:06.747 5275 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-521c07b4-f53d-4665-b728-fc5f00191294 - - - - -] rpc_loop doing a full sync.
  2017-04-07 19:07:06.841 5275 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-521c07b4-f53d-4665-b728-fc5f00191294 - - - - -] Agent out of sync with 
plugin!

  OVSFWPortNotFound:

  2017-03-30 18:31:05.048 5160 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.firewall.prepare_port_filter(device)
  2017-03-30 18:31:05.048 5160 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/openstack/venvs/neutron-14.0.5/lib/python2.7/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 272, in prepare_port_filter
  2017-03-30 18:31:05.048 5160 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent of_port = 
self.get_or_create_ofport(port)
  2017-03-30 18:31:05.048 5160 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/openstack/venvs/neutron-14.0.5/lib/python2.7/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 246, in get_or_create_ofport
  2017-03-30 18:31:05.048 5160 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent raise 
OVSFWPortNotFound(port_id=port_id)
  2017-03-30 18:31:05.048 5160 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
OVSFWPortNotFound: Port 01f7c714-1828-4768-9810-a0ec25dd2b92 is not managed by 
this agent.
  2017-03-30 18:31:05.048 5160 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2017-03-30 18:31:05.072 5160 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-db74f32b-5370-4a5f-86bf-935eba1490d0 - - - - -] Agent out of sync with 
plugin!

  
  The agent reports "out of sync" messages and starts to re-initialize the
  neutron ports once again, along with fresh SG rules.

  2017-04-07 19:07:07.110 5275 INFO neutron.agent.securitygroups_rpc 
[req-521c07b4-f53d-4665-b728-fc5f00191294 - - - - -] Preparing filters for 
devices set([u'4b14619f-3b9e-4103-b9d7-9c7e52c797d8'])
  2017-04-07 19:07:07.215 5275 ERROR 
neutron.agent.linux.openvswitch_firewall.firewall 
[req-521c07b4-f53d-4665-b728-fc5f00191294 - - - - -] Initia

[Yahoo-eng-team] [Bug 1758316] [NEW] Floating IP QoS don't work in DVR router

2018-03-23 Thread Slawek Kaplonski
Public bug reported:

It looks like QoS for floating IPs in DVR routers doesn't work.
The scenario test is failing consistently with errors like:
http://logs.openstack.org/63/555263/3/check/neutron-tempest-plugin-dvr-multinode-scenario/e7c012f/logs/testr_results.html.gz


From what I found in the job logs, it looks like QoS is applied in the
"snat-XXX" namespace on subnode-2:
http://logs.openstack.org/63/555263/3/check/neutron-tempest-plugin-dvr-multinode-scenario/e7c012f/logs/subnode-2/screen-q-l3.txt.gz#_Mar_23_09_20_07_420244

but the transfer to the VM is not limited as expected.

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: l3-dvr-backlog qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1758316

Title:
  Floating IP QoS don't work in DVR router

Status in neutron:
  Confirmed

Bug description:
  It looks like QoS for floating IPs in DVR routers doesn't work.
  The scenario test is failing consistently with errors like:
http://logs.openstack.org/63/555263/3/check/neutron-tempest-plugin-dvr-multinode-scenario/e7c012f/logs/testr_results.html.gz


  From what I found in the job logs, it looks like QoS is applied in the
  "snat-XXX" namespace on subnode-2:
  
http://logs.openstack.org/63/555263/3/check/neutron-tempest-plugin-dvr-multinode-scenario/e7c012f/logs/subnode-2/screen-q-l3.txt.gz#_Mar_23_09_20_07_420244

  but the transfer to the VM is not limited as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1758316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746393] Re: 'cpu_thread_policy' impacts on emulator threads

2018-03-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/538700
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=113724859a8f1dc3f004636e8c016d060409a1f6
Submitter: Zuul
Branch:master

commit 113724859a8f1dc3f004636e8c016d060409a1f6
Author: Tetsuro Nakamura 
Date:   Sun Jan 28 16:39:33 2018 +0900

Not use thread alloc policy for emulator thread

When CPUEmulatorThreadsPolicy and CPUThreadAllocationPolicy were both
set to isolate, pcpus for emulator threads were also allocated
according to the thread isolation policy. (i.e. only chosen from a
siblings_set where full threads are available.)

For optimization purposes, this patch allows emulator threads and the
VM's I/O threads to be collocated on the same sibling sets of pCPUs
even when both I/O thread and emulator thread are set to "isolate".

Note that this patch adds a new function of _get_reserved(), where
cpus are reserved for I/O thread and emulator thread, pulling the part
out from _get_pinning().

Change-Id: I23a5142398900873364bb07d8e91595d02a7a13d
Closes-Bug: #1746393


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746393

Title:
  'cpu_thread_policy' impacts on emulator threads

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
 In bug #1744965 (https://bugs.launchpad.net/nova/+bug/1744965), it is
reported that the combination of emulator_threads_policy=isolate and
cpu_threads_policy=prefer (the default) is not optimal.
 For example, when the host is configured as below and a VM requests 2 vCPUs
with emulator_threads_policy=isolate, the ideal result is getting the vCPUs
from (CPU #0, CPU #2) and reserving CPU #1 for the emulator threads. The
actual result, however, is "no valid host", which means there are not enough
resources available.

  * host configuration
  (Note CPU #3 is missing because it is excluded by 'vcpu_pin_set' in nova.conf)
  --
socket 0
  core 0
thread 0  (CPU #0)
thread 1  (CPU #2)
  core 1
thread 0  (CPU #1)
  --

  
 This bug (#1746393) reports that the same applies to the
emulator_threads_policy=isolate and *cpu_threads_policy=isolate* case.
 For example, when the host is configured as above and a VM requests 1 vCPU
with cpu_threads_policy=isolate, the ideal result is getting the vCPU from
CPU #0 (with CPU #2 left idle for isolation purposes) and reserving CPU #1
for the emulator threads. The actual result, however, is "no valid host",
which means there are not enough resources available.

  This should be fixed in a different way from bug #1744965, because the
  code path for getting and reserving the host's pCPUs differs between the
  cpu_threads_policy=isolate and cpu_threads_policy=prefer cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758295] [NEW] nova image-list command display (HTTP 500)

2018-03-23 Thread Brian
Public bug reported:

Hi,

I need your support here. I have installed
"contrail-install-packages_3.2.0.0-19-ubuntu-14-04mitaka_all" on a VirtualBox
Ubuntu 14.04 server edition.
The installation went fine as per the document I followed, and I can get into
the web GUI of OpenStack Horizon and Contrail. I have 12 GB RAM, 4 vCPUs and
100 GB of disk space.
I am able to upload a cirros image and create networking in OpenStack. But
when I try to create an instance in OpenStack Horizon, it throws an error
saying "error unable to create the server".
I then went to the CLI and ran 'nova flavor-list'; I am able to see the
different flavors. But when I try to run 'nova image-list' I get errors:
bmenezes@contrailsys:~$ nova flavor-list
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name       | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny    | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small   | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium  | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large   | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge  | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 6  | web-flavor | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+

I am able to create a new flavor, as seen in ID 6 above.

bmenezes@contrailsys:~$ nova image-list
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-d8494875-7e03-4fda-ac4c-16fa41b01eb0)
bmenezes@contrailsys:~$ 


Below is the --debug image list

bmenezes@contrailsys:~$ openstack --debug image list
START with options: ['--debug', 'image', 'list']
options: Namespace(access_token_endpoint='', auth_type='', 
auth_url='http://127.0.0.1:5000/v2.0', cacert='', client_id='', 
client_secret='***', cloud='', debug=True, default_domain='default', 
deferred_help=False, domain_id='', domain_name='', endpoint='', 
identity_provider='', identity_provider_url='', insecure=None, interface='', 
log_file=None, os_clustering_api_version='1', os_compute_api_version='', 
os_data_processing_api_version='1.1', os_data_processing_url='', 
os_dns_api_version='2', os_identity_api_version='', os_image_api_version='', 
os_key_manager_api_version='1', os_network_api_version='', 
os_object_api_version='', os_orchestration_api_version='1', os_project_id=None, 
os_project_name=None, os_queues_api_version='1.1', os_volume_api_version='', 
os_workflow_api_version='2', password='***', profile=None, 
project_domain_id='', project_domain_name='', project_id='', 
project_name='admin', protocol='', region_name='', scope='', 
service_provider_endpoint='', timing=False, token='***', trust_id='', url='', 
user_domain_id='', user_domain_name='', user_id='', username='admin', 
verbose_level=3, verify=None)
defaults: {u'auth_type': 'password', u'compute_api_version': u'2', 'key': None, 
u'database_api_version': u'1.0', 'api_timeout': None, u'baremetal_api_version': 
u'1', u'image_api_version': u'2', 'cacert': None, u'image_api_use_tasks': 
False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 
u'interface': None, u'network_api_version': u'2', u'image_format': u'qcow2', 
u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'verify': 
True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': 
None, u'secgroup_source': u'neutron', u'container_api_version': u'1', 
u'dns_api_version': u'2', u'object_store_api_version': u'1', 
u'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'password', u'compute_api_version': u'2', 
u'orchestration_api_version': '1', u'database_api_version': u'1.0', 
'data_processing_api_version': '1.1', u'network_api_version': u'2', 
u'image_format': u'qcow2', u'image_api_version': u'2', 
'clustering_api_version': '1', 'verify': True, u'dns_api_version': '2', 
u'object_store_api_version': u'1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': u'1', 'queues_api_version': 
'1.1', 'auth': {'username': 'admin', 'project_name': 'admin', 'password': 
'***', 'auth_url': 'http://127.0.0.1:5000/v2.0'}, 'default_domain': 'default', 
u'container_api_version': u'1', u'image_api_use_tasks': False, 
u'floating_ip_source': u'neutron', 'key': None, 'timing': False, 'cacert': 
None, u'key_manager_api_version': '1', u'metering_api_version': u'2', 
'deferred_help': False, u'identity_api_version': u'2.0', 
'workflow_api_version': '2', u'volume_api_version': u'2', 'cert': None, 
u'secgroup_source': u'neutron', 'debug': Tru

[Yahoo-eng-team] [Bug 1758278] [NEW] disk_available_least become a negative value unexpectedly

2018-03-23 Thread yangjie
Public bug reported:

The value of disk_available_least can become negative unexpectedly, because we
allow booting an image-based VM using a flavor with a 0 GB disk size.
When a user tries to boot an image-based VM using a flavor with a 0 GB disk
size, Nova uses the virtual size from the image properties in place of the '0'
size to create the disk file in the instance folder.
This virtual size taken from the image properties makes the value of
disk_available_least inconsistent with the flavor-based estimate, which
sometimes confuses users.
Maybe we should forbid creating a VM from a flavor with a zero disk size and
raise an HTTP exception in the Nova API, as sketched below.
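
A hedged sketch of the kind of API-side validation suggested above
(hypothetical placement in the server-create path; not existing Nova code):

```
from webob import exc

def _check_flavor_disk(flavor, boot_from_volume):
    # reject image-backed servers whose flavor advertises a 0 GB root disk
    if flavor.root_gb == 0 and not boot_from_volume:
        raise exc.HTTPBadRequest(
            explanation="Flavors with a 0 GB root disk are only allowed "
                        "for volume-backed servers.")
```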

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1758278

Title:
  disk_available_least become a negative value unexpectedly

Status in OpenStack Compute (nova):
  New

Bug description:
  The value of disk_available_least can become negative unexpectedly, because
  we allow booting an image-based VM using a flavor with a 0 GB disk size.
  When a user tries to boot an image-based VM using a flavor with a 0 GB disk
  size, Nova uses the virtual size from the image properties in place of the
  '0' size to create the disk file in the instance folder.
  This virtual size taken from the image properties makes the value of
  disk_available_least inconsistent with the flavor-based estimate, which
  sometimes confuses users.
  Maybe we should forbid creating a VM from a flavor with a zero disk size
  and raise an HTTP exception in the Nova API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1758278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758062] Re: fwaas unit tests are failing

2018-03-23 Thread YAMAMOTO Takashi
** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: fwaas

** Changed in: neutron
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1758062

Title:
  fwaas unit tests are failing

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress

Bug description:
  ft1.16: 
midonet.neutron.tests.unit.test_extension_fwaas.FirewallTestCaseML2.test_delete_error_in_midonet_does_not_delete_firewall_StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron_fwaas/tests/unit/services/firewall/test_fwaas_plugin.py",
 line 317, in setUp
  super(TestFirewallPluginBase, self).setUp(fw_plugin=FW_PLUGIN_KLASS)
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron_fwaas/tests/unit/services/firewall/test_fwaas_plugin.py",
 line 93, in setUp
  plugin=plugin, service_plugins=service_plugins, ext_mgr=ext_mgr)
File "midonet/neutron/tests/unit/test_midonet_plugin_ml2.py", line 112, in 
setUp
  self.setup_parent(service_plugins=service_plugins, ext_mgr=ext_mgr)
File "midonet/neutron/tests/unit/test_midonet_plugin_ml2.py", line 108, in 
setup_parent
  MidonetPluginConf.setUp(self, parent_setup)
File "midonet/neutron/tests/unit/test_midonet_plugin_ml2.py", line 81, in 
setUp
  parent_setup()
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/unit/db/test_db_base_plugin_v2.py",
 line 112, in setUp
  super(NeutronDbPluginV2TestCase, self).setUp()
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/unit/testlib_api.py",
 line 394, in setUp
  super(WebTestCase, self).setUp()
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/unit/testlib_api.py",
 line 289, in setUp
  super(BaseSqlTestCase, self).setUp()
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py",
 line 329, in setUp
  self.setup_config()
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron_fwaas/tests/base.py",
 line 44, in setup_config
  self.config_parse(args=args)
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py",
 line 305, in config_parse
  config.init(args=args)
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/common/config.py",
 line 78, in init
  **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_config/cfg.py",
 line 2498, in __call__
  else sys.argv[1:])
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_config/cfg.py",
 line 3162, in _parse_cli_opts
  return self._parse_config_files()
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_config/cfg.py",
 line 3198, in _parse_config_files
  self._oparser.parse_args(self._args, namespace)
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_config/cfg.py",
 line 2326, in parse_args
  return super(_CachedArgumentParser, self).parse_args(args, namespace)
File "/usr/lib/python2.7/argparse.py", line 1701, in parse_args
  args, argv = self.parse_known_args(args, namespace)
File "/usr/lib/python2.7/argparse.py", line 1733, in parse_known_args
  namespace, args = self._parse_known_args(args, namespace)
File "/usr/lib/python2.7/argparse.py", line 1939, in _parse_known_args
  start_index = consume_optional(start_index)
File "/usr/lib/python2.7/argparse.py", line 1879, in consume_optional
  take_action(action, args, option_string)
File "/usr/lib/python2.7/argparse.py", line 1807, in take_action
  action(self, namespace, argument_values, option_string)
File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_config/cfg.py",
 line 1741, in __call__
  raise ConfigDirNotFoundError(values)
  oslo_config.cfg.ConfigDirNotFoundError: Failed to read config file directory: 
/home/zuul/src/git.openstack.org

[Yahoo-eng-team] [Bug 1758260] [NEW] Create instance throws TypeError: argument of type 'NoneType' is not iterable in nova-conductor

2018-03-23 Thread Gildas Cherruel
Public bug reported:

Installed a brand new OpenStack Queens on Ubuntu 16.04 following the
OpenStack documentation.

I have 1 controller and 1 compute (kvm) nodes so far, no storage yet.
All are registered properly:

```
$ sudo -u nova nova-manage --use-json cell_v2 list_cells
+-------+--------------------------------------+--------------------------+--------------------------------------------+
|  Name | UUID                                 | Transport URL            | Database Connection                        |
+-------+--------------------------------------+--------------------------+--------------------------------------------+
| cell0 | ----                                 | none:/                   | mysql+pymysql://nova:@localhost/nova_cell0 |
| cell1 | d30a8c57-dad1-406f-9ff1-50d93dc70b2b | rabbit://openstack:@     | mysql+pymysql://nova:@localhost/nova       |
+-------+--------------------------------------+--------------------------+--------------------------------------------+
```
And:
```
sudo -u nova nova-manage cell_v2 list_hosts
+-----------+--------------------------------------+----------+
| Cell Name | Cell UUID                            | Hostname |
+-----------+--------------------------------------+----------+
| cell1     | d30a8c57-dad1-406f-9ff1-50d93dc70b2b | grunt01  |
+-----------+--------------------------------------+----------+
```

When I create an instance:

```
$ openstack server create --flavor m1.nano --image cirros \
  --nic net-id=$(openstack network show demo -c id -f value) \
  --security-group default \
  --key-name  \
  first-test
```

The instance gets created but stays stuck in the BUILD state.

The logs on the compute node (/var/log/nova-compute.log) never get any
entries written.

The version for nova is:

```
$ dpkg -l | grep nova
ii  nova-api2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute - API frontend
ii  nova-common 2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute - common files
ii  nova-conductor  2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute - conductor service
ii  nova-consoleauth2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy 2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute - NoVNC proxy
ii  nova-placement-api  2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute - placement API frontend
ii  nova-scheduler  2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute - virtual machine scheduler
ii  python-nova 2:17.0.0-0ubuntu1~cloud0
   all  OpenStack Compute Python libraries
ii  python-novaclient   2:9.1.1-0ubuntu1~cloud0 
   all  client library for OpenStack Compute API - Python 2.7
```

logs and config attached (sosreport)

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: queens

** Attachment added: "logs & config (sosreport)"
   
https://bugs.launchpad.net/bugs/1758260/+attachment/5087784/+files/sosreport-warchief-20180323162005.tar.xz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1758260

Title:
  Create instance throws TypeError: argument of type 'NoneType' is not
  iterable in nova-conductor

Status in OpenStack Compute (nova):
  New

Bug description:
  Installed a brand new OpenStack Queens on Ubuntu 16.04 following the
  OpenStack documentation.

  I have 1 controller and 1 compute (kvm) nodes so far, no storage yet.
  All are registered properly:

  ```
  $ sudo -u nova nova-manage --use-json cell_v2 list_cells
  
  +-------+--------------------------------------+--------------------------+--------------------------------------------+
  |  Name | UUID                                 | Transport URL            | Database Connection                        |
  +-------+--------------------------------------+--------------------------+--------------------------------------------+
  | cell0 | ----                                 | none:/                   | mysql+pymysql://nova:@localhost/nova_cell0 |
  | cell1 | d30a8c57-dad1-406f-9ff1-50d93dc70b2b | rabbit://openstack:@     | mysql+pymysql://nova:@localhost/nova       |
  +-------+--------------------------------------+--------------------------+--------------------------------------------+
  ```
  And:
  ```
  sudo -u nova nova-manage cell_v2 list_hosts
  +---+--+--+
  | Cell Name |  Cell UUID   | Hostname |
  +---+--+--+
  |