[Yahoo-eng-team] [Bug 1641837] Re: neutron-openvswitch-agent failed to add default table

2016-11-14 Thread Andreas Scheuring
From yesterday's Neutron meeting [1]:

21:05:22  when reviewing stable patches or triaging bug reports
21:05:34  please take into account of the fact that mitaka is security 
fixes only

-> This is not a security issue IMO, so setting it to "won't fix".


[1] 
http://eavesdrop.openstack.org/meetings/networking/2016/networking.2016-11-14-21.00.log.html

** Tags added: ovs

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641837

Title:
  neutron-openvswitch-agent failed to add default table

Status in neutron:
  Won't Fix

Bug description:
  Problem


  After the host is powered off and back on, the tenant network is not
  available.

  The cause is that the default flow tables of br-int are not set up
  successfully when neutron-openvswitch-agent starts:

  1) The neutron-openvswitch-agent fails to add flow table 0 but adds
  flow table 23 successfully in setup_default_table(). The flows look
  as follows:

  cookie=0x8f4c30f934586d9c, duration=617166.781s, table=0, n_packets=31822416, n_bytes=2976996304, idle_age=0, hard_age=65534, priority=2,in_port=1 actions=drop
  cookie=0x8f4c30f934586d9c, duration=617167.023s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
  cookie=0x8f4c30f934586d9c, duration=617167.007s, table=24, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

  2) In rpc_loop(), the neutron-openvswitch-agent checks the OVS status
  by looking for flow table 23, and flow table 23 exists. The agent
  therefore considers OVS healthy, but flow table 0 does not exist and
  the network connection is not available.

  Affected Neutron versions:
  kilo, mitaka

  Possible solutions:
  Check default table 0, or check all the default flow tables, in
  check_ovs_status(); or add default flow table 23 first and then add
  default table 0 in setup_default_table().

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641842] [NEW] Glance installation does not appear to detect admin role

2016-11-14 Thread Suman Saurabh
Public bug reported:

Issue seen on OpenStack Newton on Ubuntu 16.04
openstack --debug image create "cirros"   --file cirros-0.3.4-x86_64-disk.img   
--disk-format qcow2 --container-format bare   --public
START with options: [u'--debug', u'image', u'create', u'cirros', u'--file', 
u'cirros-0.3.4-x86_64-disk.img', u'--disk-format', u'qcow2', 
u'--container-format', u'bare', u'--public']
options: Namespace(access_key='', access_secret='***', access_token='***', 
access_token_endpoint='', access_token_type='', auth_type='', 
auth_url='http://controller:35357/v3', authorization_code='', cacert=None, 
cert='', client_id='', client_secret='***', cloud='', consumer_key='', 
consumer_secret='***', debug=True, default_domain='default', 
default_domain_id='', default_domain_name='', deferred_help=False, 
discovery_endpoint='', domain_id='', domain_name='', endpoint='', 
identity_provider='', identity_provider_url='', insecure=None, interface='', 
key='', log_file=None, old_profile=None, openid_scope='', 
os_beta_command=False, os_compute_api_version='', os_identity_api_version='3', 
os_image_api_version='2', os_network_api_version='', os_object_api_version='', 
os_project_id=None, os_project_name=None, os_volume_api_version='', 
passcode='', password='***', profile=None, project_domain_id='', 
project_domain_name='default', project_id='', project_name='admin', 
protocol='', redirect_uri='
 ', region_name='', timing=False, token='***', trust_id='', url='', 
user_domain_id='', user_domain_name='default', user_id='', username='admin', 
verbose_level=3, verify=None)
Auth plugin password selected
auth_config_hook(): {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
u'interface': None, 'auth_url': 'http://controller:35357/v3', 
u'network_api_version': u'2', u'image_format': u'qcow2', 'networks': [], 
u'image_api_version': '2', 'verify': True, u'dns_api_version': u'2', 
u'object_store_api_version': u'1', 'username': 'admin', 'verbose_level': 3, 
'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': 
{'user_domain_name': 'default', 'project_name': 'admin', 'project_domain_name': 
'default'}, 'default_domain': 'default', 'debug': True, u'image_api_use_tasks': 
False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 
'timing': False, 'password': 'smn@1234', 'cacert': None, 
u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 
'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': 
u'2', 'cert': None, u'secgroup_source': u'neutron', u'c
 ontainer_api_version': u'1', u'disable_vendor_agent': {}}
defaults: {u'auth_type': 'password', u'compute_api_version': u'2', 'key': None, 
u'database_api_version': u'1.0', 'api_timeout': None, u'baremetal_api_version': 
u'1', u'image_api_version': u'2', 'cacert': None, u'image_api_use_tasks': 
False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 
u'interface': None, u'network_api_version': u'2', u'image_format': u'qcow2', 
u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'verify': 
True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': 
None, u'secgroup_source': u'neutron', u'container_api_version': u'1', 
u'dns_api_version': u'2', u'object_store_api_version': u'1', 
u'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', u'orchestration_api_version': u'1', 
u'database_api_version': u'1.0', 'timing': False, 'auth_url': 
'http://controller:35357/v3', u'network_api_version': u'2', u'image_format': 
u'qcow2', 'networks': [], u'image_api_version': '2', 'verify': True, 
u'dns_api_version': u'2', u'object_store_api_version': u'1', 'username': 
'admin', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, 
u'baremetal_api_version': u'1', 'auth': {'username': 'admin', 'project_name': 
'admin', 'user_domain_name': 'default', 'auth_url': 
'http://controller:35357/v3', 'password': '***', 'project_domain_name': 
'default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key': None, 
u'interface': None, 'password': '***', 'cacert': None, 
u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 
'deferred_help': False, u'identity_api_version'
 : '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': 
u'neutron', 'debug': True, u'disable_vendor_agent': {}}
compute API version 2, cmd group openstack.compute.v2
network API version 2, cmd group openstack.network.v2
image API version 2, cmd group openstack.image.v2
volume API version 2, cmd group openstack.volume.v2
identity API version 3, cmd group openstack.identity.v3
object_store API version 1, cmd group openstack.object_store.v1
neutronclient API version 2, cmd group openstack.neutronclient.v2
Auth plugin password selected
auth_config_hook(): {'auth_type': 

[Yahoo-eng-team] [Bug 1641842] [NEW] Glance installation does not appear to detect admin role

2016-11-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Issue seen on Openstack Newton on Ubuntu 16.04

[Yahoo-eng-team] [Bug 1641837] [NEW] neutron-openvswitch-agent failed to add default table

2016-11-14 Thread sunzuohua
Public bug reported:

Problem


After the host is powered off and back on, the tenant network is not
available.

The cause is that the default flow tables of br-int are not set up
successfully when neutron-openvswitch-agent starts:

1) The neutron-openvswitch-agent fails to add flow table 0 but adds
flow table 23 successfully in setup_default_table(). The flows look
as follows:

cookie=0x8f4c30f934586d9c, duration=617166.781s, table=0, n_packets=31822416, n_bytes=2976996304, idle_age=0, hard_age=65534, priority=2,in_port=1 actions=drop
cookie=0x8f4c30f934586d9c, duration=617167.023s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x8f4c30f934586d9c, duration=617167.007s, table=24, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

2) In rpc_loop(), the neutron-openvswitch-agent checks the OVS status
by looking for flow table 23, and flow table 23 exists. The agent
therefore considers OVS healthy, but flow table 0 does not exist and
the network connection is not available.

Affected Neutron versions:
kilo, mitaka

Possible solutions:
Check default table 0, or check all the default flow tables, in
check_ovs_status(); or add default flow table 23 first and then add
default table 0 in setup_default_table().
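The first suggested fix might look roughly like this (a minimal sketch, not the agent's actual code; DEFAULT_TABLES and the function names are illustrative, and a real check would inspect live `ovs-ofctl dump-flows br-int` output):

```python
# Sketch: verify that every default table of br-int carries at least one
# flow, instead of only checking the canary table 23.
import re

DEFAULT_TABLES = (0, 23, 24)  # illustrative set of default tables

def tables_with_flows(flow_dump):
    """Return the set of table ids appearing in 'ovs-ofctl dump-flows' output."""
    return {int(m.group(1)) for m in re.finditer(r"table=(\d+)", flow_dump)}

def missing_default_tables(flow_dump, required=DEFAULT_TABLES):
    """Return the default tables that have no flow installed."""
    present = tables_with_flows(flow_dump)
    return sorted(t for t in required if t not in present)

# With flows like the ones in the report, but table 0 absent from the dump:
dump = """
cookie=0x1, table=23, n_packets=0, priority=0 actions=drop
cookie=0x1, table=24, n_packets=0, priority=0 actions=drop
"""
print(missing_default_tables(dump))  # -> [0]
```

A check like this in check_ovs_status() would catch the case above, where only table 0 failed to be installed.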

Thanks

** Affects: neutron
 Importance: Undecided
 Assignee: sunzuohua (zuohuasun)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => sunzuohua (zuohuasun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641837

Title:
  neutron-openvswitch-agent failed to add default table

Status in neutron:
  New

Bug description:
  Problem


  After the host is powered off and back on, the tenant network is not
  available.

  The cause is that the default flow tables of br-int are not set up
  successfully when neutron-openvswitch-agent starts:

  1) The neutron-openvswitch-agent fails to add flow table 0 but adds
  flow table 23 successfully in setup_default_table(). The flows look
  as follows:

  cookie=0x8f4c30f934586d9c, duration=617166.781s, table=0, n_packets=31822416, n_bytes=2976996304, idle_age=0, hard_age=65534, priority=2,in_port=1 actions=drop
  cookie=0x8f4c30f934586d9c, duration=617167.023s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
  cookie=0x8f4c30f934586d9c, duration=617167.007s, table=24, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

  2) In rpc_loop(), the neutron-openvswitch-agent checks the OVS status
  by looking for flow table 23, and flow table 23 exists. The agent
  therefore considers OVS healthy, but flow table 0 does not exist and
  the network connection is not available.

  Affected Neutron versions:
  kilo, mitaka

  Possible solutions:
  Check default table 0, or check all the default flow tables, in
  check_ovs_status(); or add default flow table 23 first and then add
  default table 0 in setup_default_table().

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641837/+subscriptions



[Yahoo-eng-team] [Bug 1641830] [NEW] Missing RAM units in Launch instance Wizard

2016-11-14 Thread Eddie Ramirez
Public bug reported:

Small usability issue: No units of RAM are displayed.

1. Open the Launch Instance Wizard
2. Go to Flavor step
3. Expand the drawer of any Flavor.

Expected result:
Should be able to see a unit of RAM, e.g. MB or GB.
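A possible fix is to append an explicit unit when rendering the flavor's RAM (a sketch only; Horizon's actual change would live in the wizard's template/filters, and format_ram is a made-up name):

```python
# Sketch: flavors report RAM in MB; render it with an explicit unit,
# switching to GB only when the value divides evenly.
def format_ram(ram_mb):
    if ram_mb >= 1024 and ram_mb % 1024 == 0:
        return "%d GB" % (ram_mb // 1024)
    return "%d MB" % ram_mb

print(format_ram(512))   # -> 512 MB
print(format_ram(2048))  # -> 2 GB
```

The flavor drawer would then show "2 GB" instead of a bare "2048".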

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "screenshot"
   
https://bugs.launchpad.net/bugs/1641830/+attachment/4777504/+files/Instances%20%20%20OpenStack%20Dashboard.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1641830

Title:
  Missing RAM units in Launch instance Wizard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Small usability issue: No units of RAM are displayed.

  1. Open the Launch Instance Wizard
  2. Go to Flavor step
  3. Expand the drawer of any Flavor.

  Expected result:
  Should be able to see a unit of RAM, e.g. MB or GB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1641830/+subscriptions



[Yahoo-eng-team] [Bug 1641821] [NEW] admin guide: Cleanup LDAP

2016-11-14 Thread Steve Martinelli
Public bug reported:

There exist three different documents [1] related to LDAP in the admin-
guide [2]. They should be collapsed into one. Further, they recommend
deploying a single LDAP backend, which is not what the keystone team
recommends.

[1] 1) identity-integrate-with-ldap.rst
2) identity-ldap-server.rst
3) identity-secure-ldap-backend.rst 

[2] https://github.com/openstack/openstack-manuals/tree/master/doc
/admin-guide/source

** Affects: keystone
 Importance: Low
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641821

Title:
  admin guide: Cleanup LDAP

Status in OpenStack Identity (keystone):
  New

Bug description:
  There exist three different documents [1] related to LDAP in the
  admin-guide [2]. They should be collapsed into one. Further, they
  recommend deploying a single LDAP backend, which is not what the
  keystone team recommends.

  [1] 1) identity-integrate-with-ldap.rst
  2) identity-ldap-server.rst
  3) identity-secure-ldap-backend.rst   

  [2] https://github.com/openstack/openstack-manuals/tree/master/doc
  /admin-guide/source

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641821/+subscriptions



[Yahoo-eng-team] [Bug 1641822] [NEW] admin guide: create a PCI section

2016-11-14 Thread Steve Martinelli
Public bug reported:

A section dedicated to PCI should be created in the admin guide [1]. The
content can largely come from our developer docs [2], but should be
modified with a deployer in mind.

[1] 
https://github.com/openstack/openstack-manuals/tree/master/doc/admin-guide/source
[2] http://docs.openstack.org/developer/keystone/security_compliance.html
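For context, the developer doc in [2] covers keystone's PCI-DSS security-compliance features; the new admin-guide section would presumably document settings along these lines (a sketch only; option names are from keystone's [security_compliance] group, and exact names/defaults may differ by release):

```ini
[security_compliance]
# Lock out a user after this many failed authentication attempts.
lockout_failure_attempts = 6
# Seconds a locked-out user must wait before trying again.
lockout_duration = 1800
# Force a password change after this many days.
password_expires_days = 90
# Disallow reuse of this many previous passwords.
unique_last_password_count = 5
```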

** Affects: keystone
 Importance: Low
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641822

Title:
  admin guide: create a PCI section

Status in OpenStack Identity (keystone):
  New

Bug description:
  A section dedicated to PCI should be created in the admin guide [1].
  The content can largely come from our developer docs [2], but should
  be modified with a deployer in mind.

  [1] 
https://github.com/openstack/openstack-manuals/tree/master/doc/admin-guide/source
  [2] http://docs.openstack.org/developer/keystone/security_compliance.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641822/+subscriptions



[Yahoo-eng-team] [Bug 1641823] [NEW] Config reference: add PCI options

2016-11-14 Thread Steve Martinelli
Public bug reported:

Add configuration options to the config reference [1].


[1] 
https://github.com/openstack/openstack-manuals/tree/master/doc/config-reference/source/identity

** Affects: keystone
 Importance: Low
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641823

Title:
  Config reference: add PCI options

Status in OpenStack Identity (keystone):
  New

Bug description:
  Add configuration options to the config reference [1].

  
  [1] 
https://github.com/openstack/openstack-manuals/tree/master/doc/config-reference/source/identity

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641823/+subscriptions



[Yahoo-eng-team] [Bug 1641818] [NEW] admin guide: update caching document

2016-11-14 Thread Steve Martinelli
Public bug reported:

The caching document in the admin guide is sorely out of date by at
least 2 releases. Update it to reflect current status.
http://docs.openstack.org/admin-guide/identity-caching-layer.html

** Affects: keystone
 Importance: Low
 Assignee: Eric Brown (ericwb)
 Status: Fix Released


** Tags: documentation

** Changed in: keystone
 Assignee: Rob B (browne) => Eric Brown (ericwb)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641818

Title:
  admin guide: update caching document

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The caching document in the admin guide is sorely out of date by at
  least 2 releases. Update it to reflect current status.
  http://docs.openstack.org/admin-guide/identity-caching-layer.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641818/+subscriptions



[Yahoo-eng-team] [Bug 1641814] [NEW] Don't apply multi-queue to SRIOV ports

2016-11-14 Thread Zhenyu Zheng
Public bug reported:

The multi-queue feature was added in:
https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-net-multiqueue
It is controlled by the image metadata property
hw_vif_multiqueue_enabled=true|false; when it is set to true, the
related XML config is handled in:
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/vif.py#n130

when users launch an instance with an SR-IOV port and several normal ports
with the multi-queue feature, an error can occur due to a wrong driver name
for the SR-IOV
interface:

2016-11-15T06:15:41.621+08:00 localhost nova-compute DEBUG [pid:17224] 
[MainThread] [tid:115210352] [vif.py:745 plug] 
[req-52fba1f0-008e-43dc-bc02-16ea378a41bd] vif_type=hw_veb 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='az1.dc1',cell_name=None,cleaned=False,config_drive='',created_at=2016-11-14T22:15:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='fzx_sriov',display_name='fzx_sriov',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(11),host='68D366CE-0AF4-8E11-8567-00821800',hostname='fzx-sriov',id=137,image_ref='5d4b082a-3b47-4505-93b7-f242ca59e940',info_cache=InstanceInfoCache,instance_type_id=11,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=None,launched_on='68D366CE-0AF4-8E11-8567-00821800',locked=False,locked_by=None,memory_mb=1024,metadata={},migration_context=,ne
 
w_flavor=None,node='68D366CE-0AF4-8E11-8567-00821800',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a00fefb88cd407a9b768678fad26f5c',ramdisk_id='',reservation_id='r-0050r6iu',root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={booted_volume='False',image_base_image_ref='5d4b082a-3b47-4505-93b7-f242ca59e940',image_container_format='bare',image_disk_format='qcow2',image_hw_vif_multiqueue_enabled='true',image_min_disk='1',image_min_ram='0',network_allocated='True'},tags=,task_state='spawning',terminated_at=None,updated_at=2016-11-14T22:15:40Z,user_data=None,user_id='20ad13b54d7a4950a30ccb3697eea438',uuid=d8a1c18a-6e20-41da-9115-c3f3e6c6b836,vcpu_model=VirtCPUModel,vcpus=4,vm_mode=None,vm_state='building')
 vif=VIF({'profile': {u'pci_slot': u':02:10.6', u'physical_network': 
u'sriov_phynet', u'pci_vendor_in
 fo': u'8086:10ed'}, 'ovs_interfaceid': None, 'preserve_on_delete': True, 
'network': Network({'bridge': None, 'subnets': [Subnet({'ips': 
[FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 
'address': u'10.38.0.3'})], 'version': 4, 'meta': {'dhcp_server': 
u'10.38.0.2'}, 'dns': [], 'routes': [], 'cidr': u'10.38.0.0/16', 'gateway': 
IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'10.38.0.1'})})], 
'meta': {'injected': False, 'tenant_id': u'0a00fefb88cd407a9b768678fad26f5c', 
'physical_network': u'sriov_phynet', 'mtu': 1500}, 'id': 
u'28c3c2a5-12b5-45ae-910e-64426f6228a1', 'label': u'sriov-net'}), 'devname': 
u'tap63a1f051-00', 'vnic_type': u'direct', 'qbh_params': None, 'meta': 
{'pci_slotnum': 3}, 'details': {u'port_filter': False, u'vlan': u'63'}, 
'address': u'fa:16:3e:91:f3:4f', 'active': False, 'type': u'hw_veb', 'id': 
u'63a1f051-0025-4ff4-a050-eef37a67f245', 'qbg_params': None}) plug 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py:745
2016-11-15T06:15:41.721+08:00 localhost nova-compute ERROR [pid:17224] 
[MainThread] [tid:115210352] [guest.py:127 create] 
[req-52fba1f0-008e-43dc-bc02-16ea378a41bd] Error defining a domain with XML: 

  [libvirt domain XML elided: the markup was stripped in transit and is
  not recoverable here. Recoverable fields: name instance-0089, uuid
  d8a1c18a-6e20-41da-9115-c3f3e6c6b836, display name fzx_sriov, flavor
  1024 MB RAM / 1 GB disk / 4 vCPUs, product OpenStack Nova
  13.0.1-0.20161112210228.6f14d8b]

2016-11-15T06:15:41.721+08:00 localhost nova-compute ERROR [pid:17224] 
[MainThread] [tid:115210352] [manager.py:542 _build_resources] 
[req-52fba1f0-008e-43dc-bc02-16ea378a41bd] [instance: 
d8a1c18a-6e20-41da-9115-c3f3e6c6b836] 
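A plausible direction for the fix is to apply the multi-queue setting only to virtio-style VIFs and skip direct (SR-IOV) ones (a sketch with illustrative names only; the real change would be in nova's libvirt vif driver):

```python
# Sketch: only virtio interfaces support the vhost multi-queue feature,
# so skip hw_veb / direct (SR-IOV) VIFs even when the image requests it.
MULTIQUEUE_CAPABLE_VIF_TYPES = {"bridge", "ovs", "tap"}  # illustrative set

def vif_queue_count(vif_type, image_requests_multiqueue, vcpus):
    """Return the number of queues to configure, or None to omit the setting."""
    if not image_requests_multiqueue:
        return None
    if vif_type not in MULTIQUEUE_CAPABLE_VIF_TYPES:
        return None  # e.g. 'hw_veb' SR-IOV ports: leave the driver config alone
    return vcpus

print(vif_queue_count("hw_veb", True, 4))  # -> None
print(vif_queue_count("ovs", True, 4))     # -> 4
```

With a guard like this, the SR-IOV port above would no longer get the virtio-only queues attribute in its generated XML.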

[Yahoo-eng-team] [Bug 1641816] [NEW] enable ``cache_on_issue`` by default

2016-11-14 Thread Steve Martinelli
Public bug reported:

keystone provides a configuration option to "pre-cache" a token: it is
cached upon issue. In the Newton release this was disabled by default;
we should enable it in Ocata.
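For reference, the change would amount to flipping this default in keystone.conf (a sketch; assuming the option keeps its Newton name and [token] group):

```ini
[token]
# Pre-cache tokens at issue time so the first validation is a cache hit.
# Disabled by default in Newton; the proposed Ocata default is true.
cache_on_issue = true
```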

** Affects: keystone
 Importance: Medium
 Assignee: Matt Fischer (mfisch)
 Status: In Progress


** Tags: performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641816

Title:
  enable ``cache_on_issue`` by default

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  keystone provides a configuration option to "pre-cache" a token: it is
  cached upon issue. In the Newton release this was disabled by default;
  we should enable it in Ocata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641816/+subscriptions



[Yahoo-eng-team] [Bug 1641813] [NEW] stable/newton branch creation request for networking-odl

2016-11-14 Thread Isaku Yamahata
Public bug reported:

Please create stable/newton branch of networking-odl on
f313b7f5b3ebba5bd6f7e8e855315e1570c71f54

The corresponding patch for openstack/releases can be found at
https://review.openstack.org/#/c/395415/

** Affects: networking-odl
 Importance: High
 Assignee: Isaku Yamahata (yamahata)
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: networking-odl
   Importance: Undecided
   Status: New

** Changed in: networking-odl
 Assignee: (unassigned) => Isaku Yamahata (yamahata)

** Changed in: networking-odl
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641813

Title:
  stable/newton branch creation request for networking-odl

Status in networking-odl:
  New
Status in neutron:
  New

Bug description:
  Please create stable/newton branch of networking-odl on
  f313b7f5b3ebba5bd6f7e8e855315e1570c71f54

  The corresponding patch for openstack/releases can be found at
  https://review.openstack.org/#/c/395415/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1641813/+subscriptions



[Yahoo-eng-team] [Bug 1641811] [NEW] Wrong ha_state, when l3-agent that host the master router is down

2016-11-14 Thread wujun
Public bug reported:

In an L3 HA Setup with multiple network nodes, we can query the agent
hosting the Master HA router via l3-agent-list-hosting-router.

root@node1:~# neutron l3-agent-list-hosting-router demo-router
+--------------------------------------+-------+----------------+-------+----------+
| id                                   | host  | admin_state_up | alive | ha_state |
+--------------------------------------+-------+----------------+-------+----------+
| 58fbfcf3-6403-4388-b713-523595411de6 | node1 | True           | :-)   | active   |
| a74be278-e428-41a4-a375-9888e9b99bcd | node2 | True           | :-)   | standby  |
+--------------------------------------+-------+----------------+-------+----------+

Now, on node1, I stop the neutron-l3-agent and then check the
state.

root@node1:~# neutron l3-agent-list-hosting-router demo-router
+--------------------------------------+-------+----------------+-------+----------+
| id                                   | host  | admin_state_up | alive | ha_state |
+--------------------------------------+-------+----------------+-------+----------+
| 58fbfcf3-6403-4388-b713-523595411de6 | node1 | True           | xxx   | standby  |
| a74be278-e428-41a4-a375-9888e9b99bcd | node2 | True           | :-)   | standby  |
+--------------------------------------+-------+----------------+-------+----------+

You can see that there is no "active" router, but north-south traffic
still flows through node1 and keepalived works normally. I think the
ha_state of node1 should be "active".
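For reference, neutron's keepalived integration writes the VRRP role to a per-router state file on disk. The sketch below (file layout and function names are illustrative, not neutron's exact code) shows why that file, rather than the agent heartbeat, reflects the real role:

```python
import os

def effective_ha_state(state_dir, router_id, agent_alive):
    """Return the VRRP role keepalived actually holds for a router.

    Illustrative sketch: keepalived keeps running (and forwarding
    traffic as master) even when the l3-agent process is down, so the
    state file on disk is authoritative and agent_alive is deliberately
    ignored here.
    """
    state_file = os.path.join(state_dir, router_id, "state")
    try:
        with open(state_file) as f:
            return f.read().strip()  # "master" or "backup"
    except OSError:
        return "unknown"
```

With this view, node1 in the output above would still report the master role even while its agent shows "xxx".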

** Affects: neutron
 Importance: Undecided
 Assignee: wujun (wujun)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => wujun (wujun)

** Description changed:

- In an L3 HA Setup with multiple network nodes, we can query the agent hosting 
the Master HA router via 
- l3-agent-list-hosting-router.
+ In an L3 HA Setup with multiple network nodes, we can query the agent
+ hosting the Master HA router via l3-agent-list-hosting-router.
  
  root@node1:~# neutron l3-agent-list-hosting-router demo-router
  
+--+---++---+--+
  | id   | host  | admin_state_up | alive | 
ha_state |
  
+--+---++---+--+
  | 58fbfcf3-6403-4388-b713-523595411de6 | node1 | True   | :-)   | 
active   |
  | a74be278-e428-41a4-a375-9888e9b99bcd | node2 | True   | :-)   | 
standby  |
  
+--+---++---+--+
  
  Now, on node1, I stop the neutron-l3-agent and then check the state.
  
  root@node1:~# neutron l3-agent-list-hosting-router demo-router
  
+--+---++---+--+
  | id   | host  | admin_state_up | alive | 
ha_state |
  
+--+---++---+--+
  | 58fbfcf3-6403-4388-b713-523595411de6 | node1 | True   | xxx   | 
standby  |
  | a74be278-e428-41a4-a375-9888e9b99bcd | node2 | True   | :-)   | 
standby  |
  
+--+---++---+--+
  
- You can see that there is no "active" router, but north-south traffic is 
still though the node1 and the 
- keepalived work normally. I think the ha_state of node1 shoud be "active".
+ You can see that there is no "active" router, but north-south traffic
+ still flows through node1 and keepalived works normally. I think the
+ ha_state of node1 should be "active".

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641811

Title:
  Wrong ha_state, when l3-agent that host the master router is down

Status in neutron:
  New

Bug description:
  In an L3 HA Setup with multiple network nodes, we can query the agent
  hosting the Master HA router via l3-agent-list-hosting-router.

  root@node1:~# neutron l3-agent-list-hosting-router demo-router
  
  +--------------------------------------+-------+----------------+-------+----------+
  | id                                   | host  | admin_state_up | alive | ha_state |
  +--------------------------------------+-------+----------------+-------+----------+
  | 58fbfcf3-6403-4388-b713-523595411de6 | node1 | True           | :-)   | active   |
  | a74be278-e428-41a4-a375-9888e9b99bcd | node2 | True           | :-)   | standby  |
  +--------------------------------------+-------+----------------+-------+----------+

  Now, on node1, I stop the neutron-l3-agent and then check the state.

  root@node1:~# neutron l3-agent-list-hosting-router demo-router
  
+--+---++---+--+
  | id   | host  | admin_state_up | 

[Yahoo-eng-team] [Bug 1641808] [NEW] neutron net-create VlanTransparencyDriverError "Backend does not support VLAN Transparency"

2016-11-14 Thread Fang Fang
Public bug reported:

PackStack, Newton version.
When I create a VLAN-transparent network, it returns an error.
Creating a non-VLAN-transparent network works fine.

[root@controller ~(keystone_admin)]# neutron net-create --vlan-transparent True 
vlan_transparent_net 
Backend does not support VLAN Transparency.
Neutron server returns request_ids: ['req-bf8c1eb9-82db-46dd-a531-8385e25453d0']
[root@controller ~(keystone_admin)]# vim /var/log/neutron/server.log

2016-11-15 11:31:52.184 12101 INFO neutron.quota 
[req-bf8c1eb9-82db-46dd-a531-8385e25453d0 ea8dfe00dd1749d4882b2d0315962a40 
6cfe231b5a484318b01b025ba966ff89 - - -] Loaded quota_driver: 
.
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
[req-bf8c1eb9-82db-46dd-a531-8385e25453d0 ea8dfe00dd1749d4882b2d0315962a40 
6cfe231b5a484318b01b025ba966ff89 - - -] create failed: No details.
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 79, in 
resource
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 430, in create
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
traceback.format_exc())
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in 

[Yahoo-eng-team] [Bug 1641808] [NEW] neutron net-create VlanTransparencyDriverError "Backend does not support VLAN Transparency"

2016-11-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

PackStack, Newton version.
When I create a VLAN-transparent network, it returns an error.
Creating a non-VLAN-transparent network works fine.

[root@controller ~(keystone_admin)]# neutron net-create --vlan-transparent True 
vlan_transparent_net 
Backend does not support VLAN Transparency.
Neutron server returns request_ids: ['req-bf8c1eb9-82db-46dd-a531-8385e25453d0']
[root@controller ~(keystone_admin)]# vim /var/log/neutron/server.log

2016-11-15 11:31:52.184 12101 INFO neutron.quota 
[req-bf8c1eb9-82db-46dd-a531-8385e25453d0 ea8dfe00dd1749d4882b2d0315962a40 
6cfe231b5a484318b01b025ba966ff89 - - -] Loaded quota_driver: 
.
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
[req-bf8c1eb9-82db-46dd-a531-8385e25453d0 ea8dfe00dd1749d4882b2d0315962a40 
6cfe231b5a484318b01b025ba966ff89 - - -] create failed: No details.
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 79, in 
resource
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 430, in create
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 88, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 124, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
traceback.format_exc())
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-15 11:31:52.324 12101 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1641788] [NEW] AgentStatusCheckWorker doesn't reset or start after stop

2016-11-14 Thread Kevin Benton
Public bug reported:

The AgentStatusCheckWorker we have in tree doesn't correctly recover if
the .stop() or .reset() methods are called on it, due to some bad
conditionals. This doesn't currently impact the in-tree use case, since
we don't stop and restart the status checkers, but it should be fixed so
it can be safely reused elsewhere for periodic workers.
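The fix direction can be sketched with a minimal restartable periodic worker. This is only an illustration of the correct start/stop/reset conditionals, not the actual AgentStatusCheckWorker code:

```python
import threading

class PeriodicWorker:
    """Minimal restartable periodic worker (illustrative sketch)."""

    def __init__(self, func, interval):
        self._func = func
        self._interval = interval
        self._thread = None
        self._stop = threading.Event()

    def start(self):
        # Guard on a live thread, not on "was ever started": this is the
        # conditional that lets start() work again after stop().
        if self._thread is not None and self._thread.is_alive():
            return
        self._stop.clear()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Event.wait() returns True once stop() sets the event, which
        # ends the loop; otherwise it times out every interval and we
        # run the periodic function.
        while not self._stop.wait(self._interval):
            self._func()

    def stop(self):
        self._stop.set()
        if self._thread is not None:
            self._thread.join()
            self._thread = None

    def reset(self):
        self.stop()
        self.start()
```

With these guards, stop() followed by start() (or a reset()) resumes the loop cleanly instead of silently doing nothing.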

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641788

Title:
  AgentStatusCheckWorker doesn't reset or start after stop

Status in neutron:
  New

Bug description:
  The AgentStatusCheckWorker we have in tree doesn't correctly recover
  if the .stop() or .reset() methods are called on it, due to some bad
  conditionals. This doesn't currently impact the in-tree use case,
  since we don't stop and restart the status checkers, but it should be
  fixed so it can be safely reused elsewhere for periodic workers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632884] Re: Missing a slaac ipv6 address mode

2016-11-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/386121
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=f047962e7572f40116afb834b35d976a0fdc434e
Submitter: Jenkins
Branch: master

commit f047962e7572f40116afb834b35d976a0fdc434e
Author: Ying Zuo 
Date:   Wed Oct 12 16:14:59 2016 -0700

Add the slaac ipv6 address mode without ra mode

SLAAC can be used without the ipv6_ra_mode being set, and an external
router will be used for routing.

Change-Id: Ic49ab978cff92d52dddbbe37d4c7ef0f7ca51bd3
Closes-bug: #1632884


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1632884

Title:
  Missing a slaac ipv6 address mode

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Currently, Horizon only provides three ipv6 address modes on create
  network/subnet modal. They are slaac/slaac,
  DHCPv6-stateless/DHCPv6-stateless, DHCPv6-stateful/DHCPv6-stateless.
  What's missing is none/slaac for using an external Router for routing.

  See section Using SLAAC for addressing on
  http://docs.openstack.org/newton/networking-guide/config-ipv6.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1632884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641753] [NEW] Feature support matrix should list support for force-completing a live migration

2016-11-14 Thread Matt Riedemann
Public bug reported:

The feature support matrix has a section for virt drivers that support
live migration:

http://docs.openstack.org/developer/nova/support-matrix.html#operation_live_migrate

But the 2.22 microversion added support for forcefully completing a live
migration:

http://docs.openstack.org/developer/nova/api_microversion_history.html#id20

However, that's currently only supported by the libvirt driver, and even
then only in certain cases, like libvirt>=1.3.3, qemu>=2.5.0, and the
VIR_MIGRATE_POSTCOPY flag must be set.

We should, however, document that this is at least an operation people
can try to perform via the REST API and describe what it does and which
virt drivers support it.

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: doc low-hanging-fruit

** Summary changed:

- Feature support matrix should list support for force-completion a live 
migration
+ Feature support matrix should list support for force-completing a live 
migration

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1641753

Title:
  Feature support matrix should list support for force-completing a live
  migration

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The feature support matrix has a section for virt drivers that support
  live migration:

  http://docs.openstack.org/developer/nova/support-matrix.html#operation_live_migrate

  But the 2.22 microversion added support for forcefully completing a
  live migration:

  http://docs.openstack.org/developer/nova/api_microversion_history.html#id20

  However, that's currently only supported by the libvirt driver, and
  even then only in certain cases, like libvirt>=1.3.3, qemu>=2.5.0, and
  the VIR_MIGRATE_POSTCOPY flag must be set.

  We should, however, document that this is at least an operation people
  can try to perform via the REST API and describe what it does and
  which virt drivers support it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1641753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597686] Re: the return value in func process_request in nova/wsgi.py is not proper

2016-11-14 Thread Sivasathurappan Radhakrishnan
This method is present in the Middleware base class. Currently, nova
doesn't implement any middleware that overrides it. It might be helpful
in the future if middleware specific to nova is developed. This doesn't
seem to be a valid bug to me, so I am marking it invalid.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597686

Title:
  the return value in func process_request in nova/wsgi.py is not proper

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In nova/wsgi.py, there is a function whose return value is hard-coded
  to None, although, per its docstring, it may also be a response.

  def process_request(self, req):
  """Called on each request.
  If this returns None, the next application down the stack will be
  executed. If it returns a response then that response will be returned
  and execution will stop here.
  """
  return None

  From the comments we can see that the return value of this function
  should be None or a response.
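The contract the docstring describes can be illustrated with a minimal sketch. The dict-based requests and responses and the Rejector subclass are assumptions for brevity; this is not nova's actual Middleware class:

```python
class Middleware:
    """Sketch of the process_request contract described above."""

    def __init__(self, application):
        self.application = application

    def process_request(self, req):
        # Return None to pass the request down the stack, or return a
        # response object to stop processing here.
        return None

    def __call__(self, req):
        response = self.process_request(req)
        if response is not None:
            return response  # short-circuit: response returned directly
        return self.application(req)

class Rejector(Middleware):
    """Example override that exercises the non-None return path."""

    def process_request(self, req):
        if req.get("path") == "/forbidden":
            return {"status": 403}
        return None
```

Here a non-None return stops the stack, so the base implementation's bare `return None` is what lets the next application run.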

  
  Thanks,
  Jeffrey Guan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597686/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641750] [NEW] PCI devices are sometime not freed after a migration

2016-11-14 Thread Ludovic Beliveau
Public bug reported:

Description
===

During stress testing of cold migration, it has been observed that
sometimes the PCI devices are not freed by the resource tracker on the
source node.

If the periodic resource audit kicks in on the source node in the middle
of the migration, the instance uuid is moved from tracked_migrations to
tracked_instances, in which case the PCI devices won't be freed, because
the current logic only consults tracked_migrations (see
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L355).

Steps to reproduce
==

1) Boot a guest with a SR-IOV device.
2) Migrate and confirm the migration
3) Repeat 2 over and over

Expected result
===

In this case the PCI devices will only get freed on the next periodic
audit.  For PCI resources such as PCI passthrough, those are limited in
number and should be freed right away.

Actual result
=

The PCI devices are not freed during the confirm_resize stage.

Environment
===

$ git log -1
commit 633c817de5a67e798d8610d0df1135e5a568fd8a
Author: Matt Riedemann 
Date:   Sat Nov 12 11:59:13 2016 -0500

api-ref: fix server_id in metadata docs

The api-ref was saying that the server_id was in the body of the
server metadata requests but it's actually in the path for all
of the requests.

Change-Id: Icdecd980767f89ee5fcc5bdd4802b2c263268a26
Closes-Bug: #1641331

** Affects: nova
 Importance: Undecided
 Assignee: Ludovic Beliveau (ludovic-beliveau)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1641750

Title:
  PCI devices are sometime not freed after a migration

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  During stress testing of cold migration, it has been observed that
  sometimes the PCI devices are not freed by the resource tracker on the
  source node.

  If the periodic resource audit kicks in on the source node in the
  middle of the migration, the instance uuid is moved from
  tracked_migrations to tracked_instances, in which case the PCI
  devices won't be freed, because the current logic only consults
  tracked_migrations (see
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L355).

  Steps to reproduce
  ==

  1) Boot a guest with a SR-IOV device.
  2) Migrate and confirm the migration
  3) Repeat 2 over and over

  Expected result
  ===

  In this case the PCI devices will only get freed on the next periodic
  audit.  For PCI resources such as PCI passthrough, those are limited
  in number and should be freed right away.

  Actual result
  =

  The PCI devices are not freed during the confirm_resize stage.

  Environment
  ===

  $ git log -1
  commit 633c817de5a67e798d8610d0df1135e5a568fd8a
  Author: Matt Riedemann 
  Date:   Sat Nov 12 11:59:13 2016 -0500

  api-ref: fix server_id in metadata docs
  
  The api-ref was saying that the server_id was in the body of the
  server metadata requests but it's actually in the path for all
  of the requests.
  
  Change-Id: Icdecd980767f89ee5fcc5bdd4802b2c263268a26
  Closes-Bug: #1641331

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1641750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1634258] Re: Errors in the upstream OVS Python IDL lib are not logged anywhere

2016-11-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/387672
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=93104d3d703153cb99d67dfe62097f8332d1af97
Submitter: Jenkins
Branch: master

commit 93104d3d703153cb99d67dfe62097f8332d1af97
Author: Terry Wilson 
Date:   Fri Oct 14 17:07:08 2016 -0500

Log OVS IDL library errors to neutron logs

The OVS IDL library, instead of passing exceptions, logs them
via its Vlog wrapper around Python's logging module. Currently,
we aren't getting any of these log messages. This patch replaces
the Vlog class log methods with the equivalent oslo_log methods.

Closes-Bug: #1634258
Change-Id: Id5a55b5fc323641d0dfd6e3e78b2d2422482fbe0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1634258

Title:
  Errors in the upstream OVS Python IDL lib are not logged anywhere

Status in neutron:
  Fix Released

Bug description:
  The OVS IDL library, instead of propagating exceptions, logs them via
  its Vlog wrapper around Python's logging module. Currently, we aren't
  getting any of these log messages. Since the library also doesn't pass
  exceptions to us, instead catching and logging them, this makes
  debugging difficult.
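The fix that landed redirects those messages into neutron's logs. A simplified version of the idea, using stdlib logging in place of oslo_log and assuming the Vlog method names (err/warn/info/dbg), looks like:

```python
import logging

LOG = logging.getLogger("ovs.idl")

def patch_vlog(vlog_cls):
    """Redirect a library's internal Vlog-style logger into our own
    logging so its errors stop disappearing (sketch; the method names
    on vlog_cls are assumed, and oslo_log is swapped for stdlib
    logging)."""
    vlog_cls.err = lambda self, msg, *a: LOG.error(msg, *a)
    vlog_cls.warn = lambda self, msg, *a: LOG.warning(msg, *a)
    vlog_cls.info = lambda self, msg, *a: LOG.info(msg, *a)
    vlog_cls.dbg = lambda self, msg, *a: LOG.debug(msg, *a)
```

After patching, anything the library logs through its Vlog instance flows through the "ovs.idl" logger and our normal handlers.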

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1634258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641713] [NEW] There is no api-ref for 2.24 and aborting a live migration

2016-11-14 Thread Matt Riedemann
Public bug reported:

This change added the 2.24 microversion for aborting a live migration:

https://review.openstack.org/#/c/277971/

However, we don't have any API reference for a DELETE action on a
migration resource:

http://developer.openstack.org/api-ref/compute/?expanded=migrate-server-migrate-action-detail#servers-run-an-administrative-action-servers-action

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: api-ref

** Tags added: api-ref

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1641713

Title:
  There is no api-ref for 2.24 and aborting a live migration

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This change added the 2.24 microversion for aborting a live migration:

  https://review.openstack.org/#/c/277971/

  However, we don't have any API reference for a DELETE action on a
  migration resource:

  http://developer.openstack.org/api-ref/compute/?expanded=migrate-server-migrate-action-detail#servers-run-an-administrative-action-servers-action

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1641713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1637530] Re: Python keystone client `users` method get() is not working

2016-11-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/396260
Committed: https://git.openstack.org/cgit/openstack/python-keystoneclient/commit/?id=0bfd6251b674c998680ee0018fc95f47e1d26fe6
Submitter: Jenkins
Branch: master

commit 0bfd6251b674c998680ee0018fc95f47e1d26fe6
Author: Boris Bobrov 
Date:   Thu Nov 10 17:56:30 2016 +0300

Do not add last_request_id

It is untested and doesn't work for a while. It also causes a failure
when the method is used by other client or by keystoneclient itself.

Change-Id: Icdd53936a107933e275acd43b5ebe94b8d04bc4b
Closes-Bug: 1637530


** Changed in: python-keystoneclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1637530

Title:
  Python keystone client `users` method get() is not working

Status in OpenStack Identity (keystone):
  Invalid
Status in python-keystoneclient:
  Fix Released

Bug description:
  Updating a user via the python keystone client does not work properly.
  ipdb> test_user.manager.client
  
  ipdb> test_user.manager.client.last_request_id
  *** AttributeError: '_KeystoneAdapter' object has no attribute 'last_request_id'

  Steps to reproduce:
   1. authenticate

   2. create test_user:
  name='test_user_005'
  password='test'
  email='t...@test.com'

   3. update test_user
  email='upda...@updated.com'
   4. execute test_user.get()
  Expected result:
  Command is executed without any errors

  Actual result:
  *** AttributeError: '_KeystoneAdapter' object has no attribute 'last_request_id'

  Related bug:
  https://bugs.launchpad.net/keystone/+bug/1637484

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1637530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621200] Re: password created_at does not honor timezones

2016-11-14 Thread Steve Martinelli
** Changed in: keystone/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1621200

Title:
  password created_at does not honor timezones

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  Fix Released

Bug description:
  This was initially discovered when running the unit tests for
  migration 002 in a timezone that is UTC+3.

  Migration 002 sets the password created_at column to a TIMESTAMP type
  with a server_default=sql.func.now(). A couple of problems have been
  uncovered with this change:
  * We cannot guarantee that func.now() will generate a UTC timestamp.
  * For some older versions of MySQL, the TIMESTAMP column will
  automatically be updated when other columns are updated:
  https://dev.mysql.com/doc/refman/5.5/en/timestamp-initialization.html

  Steps to reproduce:
  1. dpkg-reconfigure tzdata and select there Europe/Moscow (UTC+3).
  2. Restart mysql
  3. Configure opportunistic tests with the following command in mysql:
  GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest' @'%' identified by 'openstack_citest' WITH GRANT OPTION;
  4. Run keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password

  Expected result: test pass

  Actual result:
  Traceback (most recent call last):
    File "keystone/tests/unit/identity/backends/test_base.py", line 255, in test_change_password
  self.driver.authenticate(user['id'], new_password)
    File "keystone/identity/backends/sql.py", line 65, in authenticate
  raise AssertionError(_('Invalid user / password'))
  AssertionError: Invalid user / password

  Aside from the test issue, we should be saving all time related data
  in DateTime format instead of TIMESTAMP.
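The direction of that fix can be sketched in plain Python (illustrative, not keystone's actual migration code): generate and compare creation times as explicit UTC datetimes so the database server's timezone setting cannot affect the result.

```python
import datetime

def utc_created_at():
    """Timezone-independent creation timestamp (naive UTC).

    Unlike a server-side func.now() on a TIMESTAMP column, this does
    not depend on the database server's timezone or on MySQL's
    auto-update behaviour for TIMESTAMP columns.
    """
    return datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)

def password_expired(created_at, lifetime_seconds, now=None):
    """Compare in UTC so a UTC+3 server clock cannot flip the result."""
    now = now or utc_created_at()
    return (now - created_at).total_seconds() > lifetime_seconds
```

Stored in a plain DateTime column, such values compare consistently regardless of where the test (or the MySQL server) thinks it is.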

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1621200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641670] [NEW] Functional reload tests are flakey

2016-11-14 Thread Ian Cordasco
Public bug reported:

http://logs.openstack.org/periodic-stable/periodic-glance-python27-db-
newton/95226d2/testr_results.html.gz is an example of a periodic gate
failure in Glance's functional test suite, specifically:
glance.tests.functional.test_reload.TestReload.test_reload

This test fails occasionally trying to assert that new log files are
created. At the moment, it's unclear exactly what the root cause of this
flakey test is. The test seems to work just fine locally, so reproducing
it may be time-consuming. For others, the complete output from the test
failure is:

Traceback (most recent call last):
  File "glance/tests/functional/test_reload.py", line 251, in test_reload
for _ in self.ticker(msg):
  File "glance/tests/functional/test_reload.py", line 72, in ticker
self.fail(message)
  File 
"/home/jenkins/workspace/periodic-glance-python27-db-newton/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: No new log file created

And the api log information has been reproduced here:
http://paste.openstack.org/show/589157/

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: glance/newton
 Importance: Undecided
 Status: New

** Affects: glance/ocata
 Importance: Undecided
 Status: New

** Also affects: glance/ocata
   Importance: Undecided
   Status: New

** Also affects: glance/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1641670

Title:
  Functional reload tests are flakey

Status in Glance:
  New
Status in Glance newton series:
  New
Status in Glance ocata series:
  New

Bug description:
  http://logs.openstack.org/periodic-stable/periodic-glance-python27-db-
  newton/95226d2/testr_results.html.gz is an example of a periodic gate
  failure in Glance's functional test suite, specifically:
  glance.tests.functional.test_reload.TestReload.test_reload

  This test fails occasionally trying to assert that new log files are
  created. At the moment, it's unclear exactly what the root cause of
  this flakey test is. The test seems to work just fine locally, so
  reproducing it may be time-consuming. For others, the complete output
  from the test failure is:

  Traceback (most recent call last):
File "glance/tests/functional/test_reload.py", line 251, in test_reload
  for _ in self.ticker(msg):
File "glance/tests/functional/test_reload.py", line 72, in ticker
  self.fail(message)
File 
"/home/jenkins/workspace/periodic-glance-python27-db-newton/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: No new log file created

  And the api log information has been reproduced here:
  http://paste.openstack.org/show/589157/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1641670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629133] Re: New neutron subnet pool support breaks multinode testing.

2016-11-14 Thread Mathieu Rohon
** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629133

Title:
  New neutron subnet pool support breaks multinode testing.

Status in networking-bgpvpn:
  New
Status in devstack:
  Fix Released
Status in Ironic:
  Fix Committed
Status in ironic-python-agent:
  Confirmed
Status in Magnum:
  In Progress
Status in Manila:
  New
Status in neutron:
  New

Bug description:
  The new subnet pool support in devstack breaks multinode testing
  because it results in the route for 10.0.0.0/8 being set to go via
  br-ex when the host has interfaces that are actually on 10.0.0.0/8
  networks, and that is where we need the routes to go out. Multinode
  testing is affected because it uses these 10-net addresses to run the
  vxlan overlays between hosts.

  For many years devstack-gate has set FIXED_RANGE to ensure that the
  networks devstack uses do not interfere with the underlying test
  host's networking. However this setting was completely ignored when
  setting up the subnet pools.

  I think the correct way to fix this is to use a much smaller subnet
  pool range to avoid conflicting with every possible 10.0.0.0/8 network
  in the wild, possibly by defaulting to the existing FIXED_RANGE
  information. Using the existing information will help ensure that
  anyone with networks in 10.0.0.0/8 will continue to work if they have
  specified a range that doesn't conflict using this variable.
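
  A small stdlib illustration of the overlap problem (both addresses
  are hypothetical; any host range inside 10.0.0.0/8 shows the same
  behavior):

```python
import ipaddress

# Hypothetical test host whose VXLAN overlay interfaces live
# somewhere inside 10.0.0.0/8.
host_net = ipaddress.ip_network("10.4.0.0/16")

# A pool spanning all of 10.0.0.0/8 shadows the host's own networks...
default_pool = ipaddress.ip_network("10.0.0.0/8")
# ...while a small FIXED_RANGE-sized pool leaves them alone.
fixed_range = ipaddress.ip_network("10.1.0.0/20")

print(default_pool.overlaps(host_net))  # True
print(fixed_range.overlaps(host_net))   # False
```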

  In addition to this we need to clearly document what this new stuff in
  devstack does and how people can work around it should they conflict
  with the new defaults we end up choosing.

  I have proposed https://review.openstack.org/379543 which mostly works
  except it fails one tempest test that apparently depends on this new
  subnet pool stuff. Specifically, the V6 range isn't large enough, as I
  understand it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1629133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629133] Re: New neutron subnet pool support breaks multinode testing.

2016-11-14 Thread Monty Taylor
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629133

Title:
  New neutron subnet pool support breaks multinode testing.

Status in networking-bgpvpn:
  New
Status in devstack:
  Fix Released
Status in Ironic:
  Fix Committed
Status in ironic-python-agent:
  Confirmed
Status in Magnum:
  In Progress
Status in Manila:
  New
Status in neutron:
  New

Bug description:
  The new subnet pool support in devstack breaks multinode testing
  because it results in the route for 10.0.0.0/8 being set to go via
  br-ex when the host has interfaces that are actually on 10.0.0.0/8
  networks, and that is where we need the routes to go out. Multinode
  testing is affected because it uses these 10-net addresses to run the
  vxlan overlays between hosts.

  For many years devstack-gate has set FIXED_RANGE to ensure that the
  networks devstack uses do not interfere with the underlying test
  host's networking. However this setting was completely ignored when
  setting up the subnet pools.

  I think the correct way to fix this is to use a much smaller subnet
  pool range to avoid conflicting with every possible 10.0.0.0/8 network
  in the wild, possibly by defaulting to the existing FIXED_RANGE
  information. Using the existing information will help ensure that
  anyone with networks in 10.0.0.0/8 will continue to work if they have
  specified a range that doesn't conflict using this variable.

  In addition to this we need to clearly document what this new stuff in
  devstack does and how people can work around it should they conflict
  with the new defaults we end up choosing.

  I have proposed https://review.openstack.org/379543 which mostly works
  except it fails one tempest test that apparently depends on this new
  subnet pool stuff. Specifically, the V6 range isn't large enough, as I
  understand it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1629133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641660] [NEW] enable CADF notification format by default

2016-11-14 Thread Steve Martinelli
Public bug reported:

The current default notification format is the home-brewed openstack-
styled format, which provides minimal information about the user. For a
few releases now, all new notifications have adhered to the CADF format.
We should switch over to the CADF format, which provides compatibility
with the older format.

Additionally, we probably want to squelch authentication messages, since
there has been feedback that they are too noisy.

Implementation wise, this would mean the following changes to the
default config file:

  [default] ``notification_format=cadf``
  [default] ``notification_opt_out=identity.authenticate.success``
  [default] ``notification_opt_out=identity.authenticate.pending``
  [default] ``notification_opt_out=identity.authenticate.failed``

** Affects: keystone
 Importance: Medium
 Status: New


** Tags: notifications

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641660

Title:
  enable CADF notification format by default

Status in OpenStack Identity (keystone):
  New

Bug description:
  The current default notification format is the home-brewed openstack-
  styled format, which provides minimal information about the user. For
  a few releases now, all new notifications have adhered to the CADF
  format. We should switch over to the CADF format, which provides
  compatibility with the older format.

  Additionally, we probably want to squelch authentication messages,
  since there has been feedback that they are too noisy.

  Implementation wise, this would mean the following changes to the
  default config file:

[default] ``notification_format=cadf``
[default] ``notification_opt_out=identity.authenticate.success``
[default] ``notification_opt_out=identity.authenticate.pending``
[default] ``notification_opt_out=identity.authenticate.failed``

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641654] [NEW] include healthcheck middleware by default

2016-11-14 Thread Steve Martinelli
Public bug reported:

The healthcheck middleware is published by oslo, used in glance and
magnum, and one less thing for deployers to add to keystone. Let's add
it in.

Patch: https://review.openstack.org/#/c/387731/
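
For illustration, a minimal paste-deploy fragment of the kind this
change would enable (the file path is illustrative, not keystone's
actual default config; the entry point is the one oslo.middleware
publishes):

```ini
[filter:healthcheck]
use = egg:oslo.middleware#healthcheck
# Optional backend: return 503 while this file exists, so a node can
# be drained from a load balancer gracefully.
backends = disable_by_file
disable_by_file_path = /etc/keystone/healthcheck_disable
```

With the filter in a pipeline, a GET on /healthcheck answers without
touching the token or identity backends.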

** Affects: keystone
 Importance: Medium
 Assignee: Jesse Keating (jesse-keating)
 Status: Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641654

Title:
  include healthcheck middleware by default

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The healthcheck middleware is published by oslo, used in glance and
  magnum, and one less thing for deployers to add to keystone. Let's add
  it in.

  Patch: https://review.openstack.org/#/c/387731/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641652] [NEW] cache invalidation should be wrapped to local context

2016-11-14 Thread Steve Martinelli
Public bug reported:

When [1] merged, it fixed many caching issues and bugs, but created
another. The region invalidation should be wrapped to the local context.

Patch: https://review.openstack.org/#/c/380376/

** Affects: keystone
 Importance: High
 Assignee: Boris Bobrov (bbobrov)
 Status: In Progress


** Tags: performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641652

Title:
  cache invalidation should be wrapped to local context

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  When [1] merged, it fixed many caching issues and bugs, but created
  another. The region invalidation should be wrapped to the local
  context.

  Patch: https://review.openstack.org/#/c/380376/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641642] [NEW] users that are blacklisted for PCI support should not have failed login attempts counted

2016-11-14 Thread Steve Martinelli
Public bug reported:

The main idea behind the user ID blacklist for PCI was to allow service
accounts to not have to change their password. As noted in [1], a by-
product of any PCI implementation is a vulnerability to a DoS attack (a
malicious user attempting to log in X times and locking out a user).
This case is worsened by the fact that OpenStack uses a few very common
usernames: "nova", "admin", "service", etc.

Since blacklisted users are already exempt from changing their password
every Y days, then they should be equally exempt from the consequences
of too many logins.

[1] http://www.mattfischer.com/blog/?p=769
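
A hedged sketch of the proposed exemption (names and thresholds are
illustrative, not keystone's actual implementation): blacklisted
service users never accumulate failed-auth counts, so they cannot be
locked out by a malicious login loop.

```python
# Hypothetical blacklisted service-account user IDs.
PCI_USER_ID_BLACKLIST = {"nova-id", "glance-id"}

failed_attempts = {}

def record_failed_auth(user_id, lockout_threshold=5):
    """Record one failed login; return True if the account is now locked."""
    if user_id in PCI_USER_ID_BLACKLIST:
        return False  # exempt from counting and therefore from lockout
    failed_attempts[user_id] = failed_attempts.get(user_id, 0) + 1
    return failed_attempts[user_id] >= lockout_threshold

print(record_failed_auth("nova-id"))  # False, no matter how often it is called
```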

** Affects: keystone
 Importance: Medium
 Status: Confirmed


** Tags: pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641642

Title:
  users that are blacklisted for PCI support should not have failed
  login attempts counted

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  The main idea behind the user ID blacklist for PCI was to allow
  service accounts to not have to change their password. As noted in
  [1], a by-product of any PCI implementation is a vulnerability to a
  DoS attack (a malicious user attempting to log in X times and locking
  out a user). This case is worsened by the fact that OpenStack uses a
  few very common usernames: "nova", "admin", "service", etc.

  Since blacklisted users are already exempt from changing their
  password every Y days, then they should be equally exempt from the
  consequences of too many logins.

  [1] http://www.mattfischer.com/blog/?p=769

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641645] [NEW] PCI: a locked out user must ask an admin to unlock their account

2016-11-14 Thread Steve Martinelli
Public bug reported:

As noted in the bug title, this is a cumbersome process; a user should
be able to reset their own password if it has expired (and potentially
if locked out -- that's up for debate).

** Affects: keystone
 Importance: Medium
 Status: New


** Tags: pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641645

Title:
  PCI: a locked out user must ask an admin to unlock their account

Status in OpenStack Identity (keystone):
  New

Bug description:
  As noted in the bug title, this is a cumbersome process; a user
  should be able to reset their own password if it has expired (and
  potentially if locked out -- that's up for debate).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641639] [NEW] use mapping_id for shadow users

2016-11-14 Thread Steve Martinelli
Public bug reported:

Currently, shadow users are created for users that log in through
federation. New "local_user" accounts are created with a new UUID.
Rather than creating a new UUID, we should re-use the mapping_id backend
that was employed with LDAP users.

** Affects: keystone
 Importance: Medium
 Status: Confirmed


** Tags: federation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641639

Title:
  use mapping_id for shadow users

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  Currently, shadow users are created for users that log in through
  federation. New "local_user" accounts are created with a new UUID.
  Rather than creating a new UUID, we should re-use the mapping_id
  backend that was employed with LDAP users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1638681] Re: resource tracker sets wrong max_unit in placement Inventory

2016-11-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/395971
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=00bc0cb53d6113fae9a7714386953d1d75db71c1
Submitter: Jenkins
Branch: master

commit 00bc0cb53d6113fae9a7714386953d1d75db71c1
Author: Prateek Arora 
Date:   Thu Nov 10 04:09:32 2016 -0500

Correct wrong max_unit in placement inventory

When the resource tracker creates Inventory in the placement
API for its VCPU, MEMORY_MB and DISK_GB, max_unit is set to 1.

Until commit I18596a3c0f2b0049aaccd0f3e73aef90b684c4a8 the
min_unit,max_unit and step_size constraints on Inventory were
not being checked when making Allocations. When that
enforcement merges, the resource tracker will no longer be
able to make allocations of anything other than unit 1.

This patch tries to fix the above stated problem by changing
the value of max_unit to reflect the real limits on the machine
when creating the inventory.

Change-Id: I23fa868fec7f71c01e78e1a3bba5b08407c1e3ef
Closes-bug: #1638681
Co-Authored-By: Chris Dent 


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1638681

Title:
  resource tracker sets wrong max_unit in placement Inventory

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the resource tracker creates Inventory in the placement (v1.0)
  API for its VCPU, MEMORY_MB, and DISK_GB, max_unit is being set to 1.

  Until https://review.openstack.org/#/c/392933/ the min_unit, max_unit
  and step_size constraints on Inventory were not being checked when
  making Allocations. When that enforcement merges, the resource tracker
  will no longer be able to make allocations of anything other than unit
  1.

  The immediate fix for this is for the value of max_unit to reflect the
  real limits on the machine (how many cores, how much RAM, how much
  disk) when creating Inventory. In the future fancier things will be
  possible.
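
  In outline, the fix looks like this (the field names follow the
  placement API's inventory schema, but the helper itself is
  hypothetical, not nova's code): max_unit becomes the host's real
  capacity instead of a hard-coded 1.

```python
def build_inventory(vcpus, memory_mb, disk_gb):
    # Report the full host capacity as max_unit so a single allocation
    # can consume more than one unit of each resource class.
    return {
        "VCPU":      {"total": vcpus,     "min_unit": 1, "max_unit": vcpus,     "step_size": 1},
        "MEMORY_MB": {"total": memory_mb, "min_unit": 1, "max_unit": memory_mb, "step_size": 1},
        "DISK_GB":   {"total": disk_gb,   "min_unit": 1, "max_unit": disk_gb,   "step_size": 1},
    }

inv = build_inventory(vcpus=8, memory_mb=16384, disk_gb=100)
print(inv["VCPU"]["max_unit"])  # 8, not 1
```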

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1638681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641621] [NEW] keystone-manage doctor needs tests

2016-11-14 Thread Steve Martinelli
Public bug reported:

there are no tests for any keystone-manage doctor commands. they should
be created here:
https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_cli.py

** Affects: keystone
 Importance: Low
 Status: Triaged


** Tags: test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641621

Title:
  keystone-manage doctor needs tests

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  there are no tests for any keystone-manage doctor commands. they
  should be created here:
  
https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_cli.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641623] [NEW] keystone-manage doctor needs developer docs

2016-11-14 Thread Steve Martinelli
Public bug reported:

There are no developer docs on how to create a new doctor check, or how
the existing ones work. They should be added to a new section in the
"developer docs" here: http://docs.openstack.org/developer/keystone
/#developers-documentation

** Affects: keystone
 Importance: Medium
 Status: Triaged


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641623

Title:
  keystone-manage doctor needs developer docs

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  There are no developer docs on how to create a new doctor check, or
  how the existing ones work. They should be added to a new section in
  the "developer docs" here:
  http://docs.openstack.org/developer/keystone/#developers-documentation

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641625] [NEW] RFE: add more info in the k2k assertion

2016-11-14 Thread Steve Martinelli
Public bug reported:

Currently, the user's name (and domain name), their roles, the project
they authenticated with (and project's domain name) are supplied in the
k2k assertion that keystone generates.

There has been a request that the user's groups also be included in the
assertion.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: federation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641625

Title:
  RFE: add more info in the k2k assertion

Status in OpenStack Identity (keystone):
  New

Bug description:
  Currently, the user's name (and domain name), their roles, the project
  they authenticated with (and project's domain name) are supplied in
  the k2k assertion that keystone generates.

  There has been a request that the user's groups also be included in
  the assertion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351144] Re: neutron, divergence in behavior wrt floatingips and l3 routers

2016-11-14 Thread Alexander Ignatov
** Project changed: neutron => mos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351144

Title:
  neutron, divergence in behavior wrt floatingips and l3 routers

Status in Juniper Openstack:
  Opinion
Status in Mirantis OpenStack:
  Opinion

Bug description:
  
  From: Rahul Sharma 
  Date: Friday, August 1, 2014 at 1:17 AM
  To: Sachin Bansal 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  Following is what I see on standard neutron.

  Error: 404-{u'NeutronError': {u'message': u'External network
  695c1164-73bb-4905-8b93-943ebcfae517 is not reachable from subnet
  f2db88dc-378e-48e2-ac6f-23e9887d02b3. Therefore, cannot associate Port
  23b45869-11db-4b7e-aabe-caa73f2826a8 with a Floating IP.', u'type':
  u'ExternalGatewayForFloatingIPNotFound', u'detail': u''}}

  
  -
  Rahul

  From: Sachin Bansal 
  Date: Friday, August 1, 2014 at 1:05 AM
  To: Rahul Sharma 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  Per Ajay, devstack also behaves similar to ours.

  Sachin

  On Jul 31, 2014, at 12:17 PM, Sachin Bansal 
  wrote:

  I hadn't seen this before and it does seem contradictory to our
  understanding. This will make the entire concept of routers useless.
  We will need to clarify.

  Sachin


  On Jul 31, 2014, at 12:08 PM, Rahul Sharma  wrote:

  As far as I read/understood the spec, it talks of using external
  networks. And that’s why we set “router:external” on FIP networks. And
  for any port to use the FIP, its subnet should have an interface on
  the router.

  
  "Floating Ips can be created on any external network.  In order to associate 
a port with a floating IP, that port must be on a quantum network that has
   an interface on a router that has a gateway to that external network."

  
  Floating IPs

  Instead of having a separate notion of floating-ip pools, we just use this 
same notion of
   an external network.  IPs for use as floating-ips can be allocated from any 
available subnet associated with an external networks (Note: the idea of having 
a separate notion of an external network from a public/shared network is 
because the provider may not
   want to let tenants create VMs directly connected to the external network.)

  Floating Ips can be created on any external network.  In order to associate a 
port with a floating IP, that port must be on a quantum network that has an 
interface
   on a router that has a gateway to that external network.

  
  
https://docs.google.com/document/d/1RqvZ50k60Dd19paKePHLHbk1x1lN2fXSXyWuC9OiJWI/edit?pli=1

  https://blueprints.launchpad.net/neutron/+spec/quantum-l3-fwd-nat

  
  -
  Rahul

  From: Sachin Bansal 
  Date: Friday, August 1, 2014 at 12:25 AM
  To: Rahul Sharma 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  I don't think your understanding is correct. Floating ip and routers
  are two different ways of solving the same problem: Access to external
  networks. We support both.

  Sachin

  On Jul 31, 2014, at 11:53 AM, Rahul Sharma  wrote:

  Access using floating ip’s shouldn’t work, until an interface from
  private subnets is attached to the l3 router.

  In our solution above is not required, neither we need to create l3
  router nor set the public nets as router’s gateway.

  In a nutshell floating ip functionality shouldn’t work without l3
  routers, but in our case l3 routers aren’t a must.

  From: Sachin Bansal 
  Date: Friday, August 1, 2014 at 12:14 AM
  To: Rahul Sharma 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  I am not sure what is the divergence. We need the exact same steps.

  Sachin

  On Jul 31, 2014, at 11:33 AM, Rahul Sharma  wrote:

  Hi,
  As per various articles, following is what needs to be done to get access 
with Floating IP’s and L3 routers. We diverge from following, in a way that we 
don’t have to add an interface from the subnet that we intend to provide access 
to ..to the router.

  Is our divergence correct?

  neutron router-create router1
  neutron net-create private
  neutron subnet-create private 10.0.0.0/24 --name private_subnet
  neutron router-interface-add router1 private_subnet
  neutron net-create public 

[Yahoo-eng-team] [Bug 1641331] Re: server metadata PUT request details are wrong

2016-11-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/396867
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=633c817de5a67e798d8610d0df1135e5a568fd8a
Submitter: Jenkins
Branch: master

commit 633c817de5a67e798d8610d0df1135e5a568fd8a
Author: Matt Riedemann 
Date:   Sat Nov 12 11:59:13 2016 -0500

api-ref: fix server_id in metadata docs

The api-ref was saying that the server_id was in the body of the
server metadata requests but it's actually in the path for all
of the requests.

Change-Id: Icdecd980767f89ee5fcc5bdd4802b2c263268a26
Closes-Bug: #1641331


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1641331

Title:
  server metadata PUT request details are wrong

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The api-ref says that a PUT for server metadata has the server_id in
  the body of the request, but it's actually in the path:

  http://developer.openstack.org/api-ref/compute/?expanded=create-or-
  replace-metadata-items-detail

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1641331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351144] Re: neutron, divergence in behavior wrt floatingips and l3 routers

2016-11-14 Thread Alexander Ignatov
** Project changed: mos => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351144

Title:
  neutron, divergence in behavior wrt floatingips and l3 routers

Status in Juniper Openstack:
  Opinion
Status in Mirantis OpenStack:
  Incomplete

Bug description:
  
  From: Rahul Sharma 
  Date: Friday, August 1, 2014 at 1:17 AM
  To: Sachin Bansal 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  Following is what I see on standard neutron.

  Error: 404-{u'NeutronError': {u'message': u'External network
  695c1164-73bb-4905-8b93-943ebcfae517 is not reachable from subnet
  f2db88dc-378e-48e2-ac6f-23e9887d02b3. Therefore, cannot associate Port
  23b45869-11db-4b7e-aabe-caa73f2826a8 with a Floating IP.', u'type':
  u'ExternalGatewayForFloatingIPNotFound', u'detail': u''}}

  
  -
  Rahul

  From: Sachin Bansal 
  Date: Friday, August 1, 2014 at 1:05 AM
  To: Rahul Sharma 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  Per Ajay, devstack also behaves similar to ours.

  Sachin

  On Jul 31, 2014, at 12:17 PM, Sachin Bansal 
  wrote:

  I hadn't seen this before and it does seem contradictory to our
  understanding. This will make the entire concept of routers useless.
  We will need to clarify.

  Sachin


  On Jul 31, 2014, at 12:08 PM, Rahul Sharma  wrote:

  As far as I read/understood the spec, it talks of using external
  networks. And that’s why we set “router:external” on FIP networks. And
  for any port to use the FIP, its subnet should have an interface on
  the router.

  
  "Floating Ips can be created on any external network.  In order to associate 
a port with a floating IP, that port must be on a quantum network that has
   an interface on a router that has a gateway to that external network."

  
  Floating IPs

  Instead of having a separate notion of floating-ip pools, we just use this
  same notion of an external network.  IPs for use as floating-ips can be
  allocated from any available subnet associated with an external network.
  (Note: the idea of having a separate notion of an external network from a
  public/shared network is because the provider may not want to let tenants
  create VMs directly connected to the external network.)

  Floating Ips can be created on any external network.  In order to associate a 
port with a floating IP, that port must be on a quantum network that has an 
interface
   on a router that has a gateway to that external network.

  
  
https://docs.google.com/document/d/1RqvZ50k60Dd19paKePHLHbk1x1lN2fXSXyWuC9OiJWI/edit?pli=1

  https://blueprints.launchpad.net/neutron/+spec/quantum-l3-fwd-nat

  
  -
  Rahul

  From: Sachin Bansal 
  Date: Friday, August 1, 2014 at 12:25 AM
  To: Rahul Sharma 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  I don't think your understanding is correct. Floating ip and routers
  are two different ways of solving the same problem: Access to external
  networks. We support both.

  Sachin

  On Jul 31, 2014, at 11:53 AM, Rahul Sharma  wrote:

  Access using floating ip’s shouldn’t work until an interface from the
  private subnets is attached to the l3 router.

  In our solution the above is not required; we need neither to create an l3
  router nor to set the public nets as the router’s gateway.

  In a nutshell, floating ip functionality shouldn’t work without l3
  routers, but in our case l3 routers aren’t a must.

  From: Sachin Bansal 
  Date: Friday, August 1, 2014 at 12:14 AM
  To: Rahul Sharma 
  Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
  Subject: Re: Access using Floating IP's and L3 routers

  I am not sure what the divergence is. We need the exact same steps.

  Sachin

  On Jul 31, 2014, at 11:33 AM, Rahul Sharma  wrote:

  Hi,
  As per various articles, the following is what needs to be done to get
  access with Floating IPs and L3 routers. We diverge from this in that we
  don’t have to add an interface from the subnet we intend to provide access
  to ... to the router.

  Is our divergence correct?

  neutron router-create router1
  neutron net-create private
  neutron subnet-create private 10.0.0.0/24 --name private_subnet
  neutron router-interface-add router1 private_subnet
  neutron net-create public --router:external=True
  neutron subnet-create public 192.168.0.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=192.168.0.200,end=192.168.0.250 --gateway=192.168.0.1
  neutron router-gateway-set router1 public

[Yahoo-eng-team] [Bug 1635306] Re: After newton deployment _member_ role is missing in keystone

2016-11-14 Thread Julie Pichon
This works in the last Newton deployment I did, the keystone patch is
sufficient to help with this. Thanks again for the fix!

** Changed in: tripleo
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1635306

Title:
  After newton deployment _member_ role is missing in keystone

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  Fix Committed
Status in tripleo:
  Invalid

Bug description:
  I did a full deployment using RDO Newton, and at the end of the deployment
  I see the _member_ role is missing.

  [stack@topstrio1101 ~]$ openstack role list
  +----------------------------------+-----------------+
  | ID                               | Name            |
  +----------------------------------+-----------------+
  | 023e0f4fc56a47f7bada5fd512bab014 | swiftoperator   |
  | 48e4519e09b4469bbbf5c533830d3ad8 | heat_stack_user |
  | 52be634093e14ea7a1acdf3f5ec12066 | admin           |
  | a1f8e6636dc842d8a896a3e903298997 | ResellerAdmin   |
  +----------------------------------+-----------------+

  In Mitaka the _member_ role was created correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1635306/+subscriptions



[Yahoo-eng-team] [Bug 1351144] [NEW] neutron, divergence in behavior wrt floatingips and l3 routers

2016-11-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:


From: Rahul Sharma 
Date: Friday, August 1, 2014 at 1:17 AM
To: Sachin Bansal 
Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
Subject: Re: Access using Floating IP's and L3 routers

Following is what I see on standard neutron.

Error: 404-{u'NeutronError': {u'message': u'External network
695c1164-73bb-4905-8b93-943ebcfae517 is not reachable from subnet
f2db88dc-378e-48e2-ac6f-23e9887d02b3. Therefore, cannot associate Port
23b45869-11db-4b7e-aabe-caa73f2826a8 with a Floating IP.', u'type':
u'ExternalGatewayForFloatingIPNotFound', u'detail': u''}}


-
Rahul

From: Sachin Bansal 
Date: Friday, August 1, 2014 at 1:05 AM
To: Rahul Sharma 
Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
Subject: Re: Access using Floating IP's and L3 routers

Per Ajay, devstack also behaves similarly to ours.

Sachin

On Jul 31, 2014, at 12:17 PM, Sachin Bansal  wrote:

I hadn't seen this before and it does seem contradictory to our
understanding. This will make the entire concept of routers useless. We
will need to clarify.

Sachin


On Jul 31, 2014, at 12:08 PM, Rahul Sharma  wrote:

As far as I read/understood the spec, it talks of using external
networks. And that’s why we set “router:external” on FIP networks. And
for any port to use the FIP, its subnet should have an interface on the
router.


"Floating Ips can be created on any external network.  In order to associate a 
port with a floating IP, that port must be on a quantum network that has
 an interface on a router that has a gateway to that external network."


Floating IPs

Instead of having a separate notion of floating-ip pools, we just use this same
notion of an external network.  IPs for use as floating-ips can be allocated
from any available subnet associated with an external network. (Note: the idea
of having a separate notion of an external network from a public/shared network
is because the provider may not want to let tenants create VMs directly
connected to the external network.)

Floating Ips can be created on any external network.  In order to associate a 
port with a floating IP, that port must be on a quantum network that has an 
interface
 on a router that has a gateway to that external network.


https://docs.google.com/document/d/1RqvZ50k60Dd19paKePHLHbk1x1lN2fXSXyWuC9OiJWI/edit?pli=1

https://blueprints.launchpad.net/neutron/+spec/quantum-l3-fwd-nat


-
Rahul

From: Sachin Bansal 
Date: Friday, August 1, 2014 at 12:25 AM
To: Rahul Sharma 
Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
Subject: Re: Access using Floating IP's and L3 routers

I don't think your understanding is correct. Floating ip and routers are
two different ways of solving the same problem: Access to external
networks. We support both.

Sachin

On Jul 31, 2014, at 11:53 AM, Rahul Sharma  wrote:

Access using floating ip’s shouldn’t work until an interface from the
private subnets is attached to the l3 router.

In our solution the above is not required; we need neither to create an l3
router nor to set the public nets as the router’s gateway.

In a nutshell, floating ip functionality shouldn’t work without l3
routers, but in our case l3 routers aren’t a must.

From: Sachin Bansal 
Date: Friday, August 1, 2014 at 12:14 AM
To: Rahul Sharma 
Cc: Contrail Systems Configuration Team , 
Vedamurthy Ananth Joshi 
Subject: Re: Access using Floating IP's and L3 routers

I am not sure what the divergence is. We need the exact same steps.

Sachin

On Jul 31, 2014, at 11:33 AM, Rahul Sharma  wrote:

Hi,
As per various articles, the following is what needs to be done to get access
with Floating IPs and L3 routers. We diverge from this in that we don’t have
to add an interface from the subnet we intend to provide access to ... to the
router.

Is our divergence correct?

neutron router-create router1
neutron net-create private
neutron subnet-create private 10.0.0.0/24 --name private_subnet
neutron router-interface-add router1 private_subnet
neutron net-create public --router:external=True
neutron subnet-create public 192.168.0.0/24 --name public_subnet 
--enable_dhcp=False --allocation-pool start=192.168.0.200,end=192.168.0.250 
--gateway=192.168.0.1
neutron router-gateway-set router1 public

** Affects: juniperopenstack
 Importance: High
 Assignee: Sachin Bansal (sbansal)
 Status: Opinion

** Affects: neutron
 Importance: Undecided
 Assignee: ivano (l-ivan)
 Status: Incomplete


** Tags: config neutronapi openstack releasenote

[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/394883
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=796625863d7e3de7882dad5bb7b1c66025c61344
Submitter: Jenkins
Branch: master

commit 796625863d7e3de7882dad5bb7b1c66025c61344
Author: jolie 
Date:   Tue Nov 8 18:10:48 2016 +0800

Replaces uuid.uuid4 with uuidutils.generate_uuid()

Change-Id: I72e502a07d971de7e5c85519c80c4d054863eabe
Closes-Bug: #1082248


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in kuryr:
  In Progress
Status in kuryr-libnetwork:
  In Progress
Status in Magnum:
  New
Status in Mistral:
  Fix Released
Status in Murano:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  We should use only that function when generating UUIDs, for
  consistency.
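  The wrapper in question is oslo_utils.uuidutils.generate_uuid(). A minimal
  standard-library stand-in, sketched here only to show the behavior being
  standardized (the `dashed` parameter mirrors the oslo_utils signature, which
  is an assumption worth checking against the installed release):

```python
import uuid

def generate_uuid(dashed=True):
    """Illustrative stand-in for oslo_utils.uuidutils.generate_uuid().

    Returns a new random UUID as a string, either in the canonical
    dashed form or as a bare 32-character hex string.
    """
    if dashed:
        return str(uuid.uuid4())
    return uuid.uuid4().hex
```

  Callers then write generate_uuid() instead of scattering str(uuid.uuid4())
  variants through the codebase, which keeps UUID formatting (case, dashes)
  consistent across projects.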

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions



[Yahoo-eng-team] [Bug 1361235] Re: visit horizon failure because of import module failure

2016-11-14 Thread Donovan Francesco
** Changed in: openstack-ansible
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361235

Title:
  visit horizon failure because of import module failure

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in openstack-ansible:
  Fix Released
Status in osprofiler:
  In Progress
Status in python-mistralclient:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  1. Use TripleO to deploy both the undercloud and the overcloud, and enable
  horizon when building images.
  2. Visiting the horizon portal always fails, with the errors below in
  horizon_error.log

  [Wed Aug 20 01:45:58.441221 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] mod_wsgi (pid=5035): Exception occurred processing WSGI 
script 
'/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/django.wsgi'.
  [Wed Aug 20 01:45:58.441273 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] Traceback (most recent call last):
  [Wed Aug 20 01:45:58.441294 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/wsgi.py",
 line 187, in __call__
  [Wed Aug 20 01:45:58.449979 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self.load_middleware()
  [Wed Aug 20 01:45:58.45 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 44, in load_middleware
  [Wed Aug 20 01:45:58.450556 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] for middleware_path in settings.MIDDLEWARE_CLASSES:
  [Wed Aug 20 01:45:58.450576 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py",
 line 54, in __getattr__
  [Wed Aug 20 01:45:58.454248 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self._setup(name)
  [Wed Aug 20 01:45:58.454269 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py",
 line 49, in _setup
  [Wed Aug 20 01:45:58.454305 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self._wrapped = Settings(settings_module)
  [Wed Aug 20 01:45:58.454319 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init
  __.py", line 128, in __init__
  [Wed Aug 20 01:45:58.454338 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] mod = importlib.import_module(self.SETTINGS_MODULE)
  [Wed Aug 20 01:45:58.454350 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/utils/importlib.py",
 line 40, in import_module
  [Wed Aug 20 01:45:58.462806 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] __import__(name)
  [Wed Aug 20 01:45:58.462826 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py",
 line 28, in 
  [Wed Aug 20 01:45:58.467136 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from openstack_dashboard import exceptions
  [Wed Aug 20 01:45:58.467156 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/exceptions.py",
 line 22, in 
  [Wed Aug 20 01:45:58.467667 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import exceptions as keystoneclient
  [Wed Aug 20 01:45:58.467685 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/__init__.py",
 line 28, in 
  [Wed Aug 20 01:45:58.472968 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import client
  [Wed Aug 20 01:45:58.472989 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/client.py",
 line 13, in 
  [Wed Aug 20 01:45:58.473833 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import discover
  [Wed Aug 20 01:45:58.473851 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 

[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-11-14 Thread Sharat Sharma
** Changed in: mistral
 Assignee: Sharat Sharma (sharat-sharma) => (unassigned)

** Changed in: mistral
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in Freezer:
  In Progress
Status in gce-api:
  Fix Released
Status in Glance:
  In Progress
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  Invalid
Status in Monasca:
  New
Status in networking-arista:
  Fix Released
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  Fix Committed
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  New
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in Rally:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
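
  For illustration, a self-contained sketch with the standard-library logger
  (oslo.log wraps the same stdlib logger, so the spelling change is the same
  there):

```python
import io
import logging

# Capture log output in memory so the example is self-contained.
LOG = logging.getLogger("warn-demo")
buf = io.StringIO()
LOG.addHandler(logging.StreamHandler(buf))
LOG.setLevel(logging.WARNING)

# LOG.warn(...) still works but is a deprecated alias in Python 3;
# LOG.warning(...) is the supported spelling and logs identically.
LOG.warning("disk is %d%% full", 95)
```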

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions



[Yahoo-eng-team] [Bug 1641535] [NEW] FIP failed to remove in router's standby node

2016-11-14 Thread Dongcan Ye
Public bug reported:

ENV
===
1. Server side:
   enable router_distributed and l3_ha

2. Agent side:
   all L3 agent mode is dvr_snat (include network nodes and compute nodes)


How to reproduce:
=================
associate floatingip  -->  disassociate floatingip  --> reassociate floatingip

We hit trace info in l3 agent:
http://paste.openstack.org/show/589071/


Analysis
========
When processing a floating IP (in the situation where the router's attributes
are ha + dvr), ha_router only removes the floating IP if the HA state is
'master' [1], and dvr_local_router removes its related IP rule.
Then, when we reassociate the floating IP, it hits an RTNETLINK error, because
we have already deleted the related IP rule.


[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L273

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: l3-dvr-backlog l3-ha

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641535

Title:
  FIP failed to remove in router's standby node

Status in neutron:
  New

Bug description:
  ENV
  ===
  1. Server side:
 enable router_distributed and l3_ha

  2. Agent side:
 all L3 agent mode is dvr_snat (include network nodes and compute nodes)

  
  How to reproduce:
  =================
  associate floatingip  -->  disassociate floatingip  --> reassociate floatingip

  We hit trace info in l3 agent:
  http://paste.openstack.org/show/589071/

  
  Analysis
  ========
  When processing a floating IP (in the situation where the router's
  attributes are ha + dvr), ha_router only removes the floating IP if the HA
  state is 'master' [1], and dvr_local_router removes its related IP rule.
  Then, when we reassociate the floating IP, it hits an RTNETLINK error,
  because we have already deleted the related IP rule.

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L273
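
  A toy model of the failure mode (illustrative only; the class and method
  names below are made up and are not the neutron code): a strict delete
  reproduces the RTNETLINK error on the second disassociate, while an
  idempotent delete does not.

```python
class RuleTable:
    """Tracks installed IP rules, mimicking 'ip rule add/del' behavior."""

    def __init__(self):
        self._rules = set()

    def add(self, rule):
        self._rules.add(rule)

    def delete(self, rule):
        # Strict delete: fails when the rule is already gone, like the
        # RTNETLINK error in the reported trace.
        if rule not in self._rules:
            raise RuntimeError("RTNETLINK answers: No such file or directory")
        self._rules.remove(rule)

    def delete_if_exists(self, rule):
        # Idempotent delete: safe to repeat on disassociate/reassociate,
        # including on the standby HA node where the rule was never re-added.
        self._rules.discard(rule)

table = RuleTable()
table.add("fip-rule")
table.delete_if_exists("fip-rule")
table.delete_if_exists("fip-rule")  # repeat is a no-op, not an error
```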

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641535/+subscriptions
