[Yahoo-eng-team] [Bug 1705250] Re: OpenStack Administrator Guides: missing index for murano, cinder & keystone page

2018-08-22 Thread Chuck Short
** Changed in: cinder
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1705250

Title:
  OpenStack Administrator Guides: missing index for murano, cinder &
  keystone page

Status in Cinder:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  These hrefs on https://docs.openstack.org/admin/ are generating
  directory listings instead of proper pages (missing index.html?); a quick
  check is sketched after the list below:

  Block Storage service (cinder)  (/cinder/latest/admin/)
  Identity service (keystone)  (/keystone/latest/admin/)
  Application Catalog service (murano)  (/murano/latest/admin/)
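
  A minimal diagnostic sketch, using only the Python standard library, that
  requests index.html for each of the three paths above to confirm which
  ones are missing it (the URLs are simply the ones listed in this report):

  import urllib.error
  import urllib.request

  BASE = 'https://docs.openstack.org'
  for path in ('/cinder/latest/admin/', '/keystone/latest/admin/',
               '/murano/latest/admin/'):
      url = BASE + path + 'index.html'
      try:
          code = urllib.request.urlopen(url).status
      except urllib.error.HTTPError as err:
          code = err.code
      # A 404 here means readers fall back to the bare directory listing.
      print(url, code)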

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1705250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660317] Re: NotImplementedError for detach_interface in nova-compute during instance deletion

2017-04-10 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660317

Title:
  NotImplementedError for detach_interface in nova-compute during
  instance deletion

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in ironic package in Ubuntu:
  Invalid
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  When a baremetal instance is deleted, there is a harmless but annoying
  traceback in the nova-compute output.

  nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Terminating instance 
[req-5f1eba69-239a-4dd4-8677-f28542b190bc 5a08515f35d749068a6327e387ca04e2 
7d450ecf00d64399aeb93bc122cb6dae - - -]
  nova.compute.resource_tracker[26553]: INFO Auditing locally available compute 
resources for node d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Final resource view: 
name=d02c7361-5e3a-4fdf-89b5-f29b3901f0fc phys_ram=0MB used_ram=8096MB 
phys_disk=0GB used_disk=480GB total_vcpus=0 used_vcpus=0 pci_stats=[] 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Compute_service record updated for 
bare-compute1:d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Neutron deleted interface 
6b563aa7-64d3-4105-9ed5-c764fee7b536; detaching it from the instance and 
deleting it from the info cache [req-fdfeee26-a860-40a5-b2e3-2505973ffa75 
11b95cf353f74788938f580e13b652d8 93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: ERROR Exception during message handling 
[req-fdfeee26-a860-40a5-b2e3-2505973ffa75 11b95cf353f74788938f580e13b652d8 
93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: TRACE Traceback (most recent call last):
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
  oslo_messaging.rpc.server[26553]: TRACE res = 
self.dispatcher.dispatch(message)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, 
in dispatch
  oslo_messaging.rpc.server[26553]: TRACE return 
self._do_dispatch(endpoint, method, ctxt, args)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, 
in _do_dispatch
  oslo_messaging.rpc.server[26553]: TRACE result = func(ctxt, **new_args)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 75, in 
wrapped
  oslo_messaging.rpc.server[26553]: TRACE function_name, call_dict, binary)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  oslo_messaging.rpc.server[26553]: TRACE self.force_reraise()
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  oslo_messaging.rpc.server[26553]: TRACE six.reraise(self.type_, 
self.value, self.tb)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 66, in 
wrapped
  oslo_messaging.rpc.server[26553]: TRACE return f(self, context, *args, 
**kw)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6691, in 
external_instance_event
  oslo_messaging.rpc.server[26553]: TRACE event.tag)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6660, in 
_process_instance_vif_deleted_event
  oslo_messaging.rpc.server[26553]: TRACE 
self.driver.detach_interface(instance, vif)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 524, in 
detach_interface
  oslo_messaging.rpc.server[26553]: TRACE raise NotImplementedError()
  oslo_messaging.rpc.server[26553]: TRACE NotImplementedError
  oslo_messaging.rpc.server[26553]: TRACE

  
  Affected version:
  nova 14.0.3
  neutron 6.0.0
  ironic 6.2.1

  configuration for nova-compute:
  compute_driver = ironic.IronicDriver

  Ironic is configured to use neutron networks with generic switch as the
  mechanism driver for the ML2 plugin.
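
  The trace comes from the base virt driver's detach_interface(), which the
  ironic driver does not implement. A hedged sketch of how a caller can
  tolerate that (illustrative only, not nova's actual fix; the driver,
  instance and vif names mirror the traceback above):

  def process_vif_deleted_event(driver, instance, vif):
      """Detach the interface if the driver supports it; otherwise skip."""
      try:
          driver.detach_interface(instance, vif)
      except NotImplementedError:
          # Bare-metal drivers have nothing to detach here; the neutron
          # port is already gone, so silently move on instead of tracing.
          pass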

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1660317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1674156] Re: neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is not iterable

2017-04-10 Thread Chuck Short
** Package changed: neutron-lbaas (Ubuntu) => neutron

** Also affects: neutron-lbaas (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: neutron-lbaas (Ubuntu)
   Status: New => Triaged

** Changed in: neutron-lbaas (Ubuntu)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674156

Title:
  neutron-lbaasv2-agent: TypeError: argument of type 'LoadBalancer' is
  not iterable

Status in neutron:
  New
Status in neutron-lbaas package in Ubuntu:
  Triaged

Bug description:
  Is somebody actually running neutron LBaaSv2 with haproxy on Ubuntu
  16.04?

  root@controller1:~# dpkg -l neutron-lbaasv2-agent
  ii  neutron-lbaasv2-agent   2:8.3.0-0ubuntu1  
   all  Neutron is a virtual network service for 
Openstack - LBaaSv2 agent
  root@controller1:~# lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu 16.04.2 LTS
  Release:16.04
  Codename:   xenial
  root@controller1:~# 

  
  From /var/log/neutron/neutron-lbaasv2-agent.log:

  2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] Logging enabled!
  2017-03-19 20:39:06.694 4528 INFO neutron.common.config [-] 
/usr/bin/neutron-lbaasv2-agent version 8.3.0
  2017-03-19 20:39:06.702 4528 WARNING oslo_config.cfg 
[req-9a6a669c-5a5a-4b6c-8c2f-2b1edd9462d9 - - - - -] Option 
"default_ipv6_subnet_pool" from group "DEFAULT" is deprecated for removal.  Its 
value may be silently ignored in the future.
  2017-03-19 20:39:07.033 4528 ERROR 
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] 

  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager [-] 
Unable to deploy instance for loadbalancer: c49473a7-b956-4a5d-8215-703335eb3320
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/agent/agent_manager.py", line 
185, in _reload_loadbalancer
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
self.device_drivers[driver_name].deploy_instance(loadbalancer)
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 332, in deploy_instance
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if 
not logical_config or not self._is_active(logical_config):
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 310, in _is_active
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager if 
('vip' not in logical_config or
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 
TypeError: argument of type 'LoadBalancer' is not iterable
  2017-03-19 20:39:07.034 4528 ERROR neutron_lbaas.agent.agent_manager 

  /etc/neutron/neutron_lbaas.conf:
  [service_providers]
  
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
  service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  Looking at the code, I don't see how this can actually work.

  /usr/lib/python2.7/dist-
  
packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py
  +310

  def _is_active(self, logical_config):
      LOG.error(logical_config)
      # haproxy wil be unable to start without any active vip
  ==> if ('vip' not in logical_config or
              (logical_config['vip']['status'] not in
               constants.ACTIVE_PENDING_STATUSES) or
              not logical_config['vip']['admin_state_up']):
          return False

  
  
/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/data_models.py:

  class LoadBalancer(BaseDataModel):

  fields = ['id', 'tenant_id', 'name', 'description', 'vip_subnet_id',
'vip_port_id', 'vip_address', 'provisioning_status',
'operating_status', 'admin_state_up', 'vip_port', 'stats',
'provider', 'listeners', 'pools', 'flavor_id']

  def __init__(self, id=None, tenant_id=None, name=None, description=None,
   vip_subnet_id=None, vip_port_id=None, vip_address=None,
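
  The root cause is that _is_active() is handed a LoadBalancer data-model
  object where it expects the old dict-shaped logical config, and the 'in'
  operator on an object with no __contains__/__iter__/__getitem__ raises
  exactly this TypeError. A minimal self-contained reproduction (the class
  below is a stand-in, not the real data model):

  class LoadBalancer(object):
      fields = ['id', 'vip_address']

  lb = LoadBalancer()
  try:
      'vip' in lb
  except TypeError as exc:
      print(exc)   # argument of type 'LoadBalancer' is not iterable

  # A dict-style membership test only works on dict-like configs:
  print(isinstance(lb, dict) and 'vip' in lb)   # False, no exception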
   

[Yahoo-eng-team] [Bug 1657452] Re: Incompatibility with python-webob 1.7.0

2017-04-10 Thread Chuck Short
** Changed in: glance (Ubuntu)
   Status: Triaged => Fix Committed

** Changed in: glance (Ubuntu)
   Status: Fix Committed => Fix Released

** Changed in: keystone (Ubuntu)
   Status: Triaged => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657452

Title:
  Incompatibility with python-webob 1.7.0

Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.middleware:
  Confirmed
Status in glance package in Ubuntu:
  Fix Released
Status in keystone package in Ubuntu:
  Fix Committed
Status in nova package in Ubuntu:
  Triaged
Status in python-oslo.middleware package in Ubuntu:
  Triaged

Bug description:
  
  
keystone.tests.unit.test_v3_federation.WebSSOTests.test_identity_provider_specific_federated_authentication
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_v3_federation.py", line 4067, in 
test_identity_provider_specific_federated_authentication
  self.PROTOCOL)
File "keystone/federation/controllers.py", line 345, in 
federated_idp_specific_sso_auth
  return self.render_html_response(host, token_id)
File "keystone/federation/controllers.py", line 357, in 
render_html_response
  headerlist=headers)
File "/usr/lib/python2.7/dist-packages/webob/response.py", line 310, in 
__init__
  "You cannot set the body to a text value without a "
  TypeError: You cannot set the body to a text value without a charset
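
  webob 1.7 refuses a text body on a response that has no charset, which the
  keystone/glance code above relied on implicitly when it passed its own
  headerlist. A hedged sketch of the two usual workarounds, assuming
  webob >= 1.7 is installed (not necessarily the patch that was merged):

  import webob

  html = u'<html><body>ok</body></html>'

  # Fails on webob 1.7.x, as in the traceback above:
  #   webob.Response(body=html, headerlist=[('Content-Type', 'text/html')])

  # Workaround 1: carry the charset in the Content-Type header.
  resp = webob.Response(
      body=html,
      headerlist=[('Content-Type', 'text/html; charset=UTF-8')])

  # Workaround 2: hand webob bytes, so no charset is needed to encode the body.
  resp = webob.Response(
      body=html.encode('utf-8'),
      headerlist=[('Content-Type', 'text/html')])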

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1657452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623664] Re: [SRU] Race between L3 agent and neutron-ns-cleanup

2017-03-27 Thread Chuck Short
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623664

Title:
  [SRU] Race between L3 agent and neutron-ns-cleanup

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Invalid
Status in neutron package in Ubuntu:
  New
Status in neutron-lbaas package in Ubuntu:
  New
Status in neutron source package in Xenial:
  New
Status in neutron-lbaas source package in Xenial:
  New
Status in neutron source package in Yakkety:
  New
Status in neutron-lbaas source package in Yakkety:
  New

Bug description:
  I suspect a race between the neutron L3 agent and the neutron-netns-
  cleanup script, which runs as a CRON job in Ubuntu. Here's a stack
  trace in the router delete code path:

  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager [-] Error during 
notification for neutron.agent.metadata.driver.before_router_removed router, 
before_delete
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Traceback (most 
recent call last):
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 141, in 
_notify_loop
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/driver.py", line 176, 
in before_router_removed
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
router.iptables_manager.apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 423, in apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 431, in _apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply_synchronized()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 457, in _apply_synchronized
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager save_output 
= self.execute(args, run_as_root=True)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 159, in 
execute
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager raise 
RuntimeError(m)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager RuntimeError:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Command: 
['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 
'iptables-save']
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Exit code: 1
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdin:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdout:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stderr: Cannot 
open network namespace "qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8": No such 
file or directory
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 344, in 
_safe_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 360, in 
_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent self, router=ri)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/registry.py", line 44, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
_get_callback_manager().notify(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 123, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent raise 
exceptions.CallbackFailure(errors=errors)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent CallbackFailure: 
Callback 
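
  In other words, the cron-driven cleanup can delete the qrouter namespace
  between the agent deciding to remove the router and it running
  iptables-save inside that namespace. A hedged sketch of a defensive check
  (the ip_lib usage is assumed from the neutron tree of that era, and this
  is not the change that was actually proposed):

  from neutron.agent.linux import ip_lib

  def apply_iptables_if_ns_exists(router):
      ns = router.ns_name                      # e.g. 'qrouter-69ef3d5c-...'
      if not ip_lib.IPWrapper().netns.exists(ns):
          # Namespace already removed by neutron-netns-cleanup: nothing to
          # flush, so skip instead of letting iptables-save blow up.
          return
      router.iptables_manager.apply()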

[Yahoo-eng-team] [Bug 1623664] Re: [SRU] Race between L3 agent and neutron-ns-cleanup

2017-03-27 Thread Chuck Short
** Also affects: neutron-lbaas (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: neutron-lbaas (Ubuntu Xenial)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623664

Title:
  [SRU] Race between L3 agent and neutron-ns-cleanup

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Invalid
Status in neutron-lbaas package in Ubuntu:
  New
Status in neutron-lbaas source package in Xenial:
  New
Status in neutron-lbaas source package in Yakkety:
  New

Bug description:
  I suspect a race between the neutron L3 agent and the neutron-netns-
  cleanup script, which runs as a CRON job in Ubuntu. Here's a stack
  trace in the router delete code path:

  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager [-] Error during 
notification for neutron.agent.metadata.driver.before_router_removed router, 
before_delete
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Traceback (most 
recent call last):
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 141, in 
_notify_loop
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/driver.py", line 176, 
in before_router_removed
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
router.iptables_manager.apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 423, in apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 431, in _apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply_synchronized()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 457, in _apply_synchronized
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager save_output 
= self.execute(args, run_as_root=True)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 159, in 
execute
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager raise 
RuntimeError(m)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager RuntimeError:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Command: 
['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 
'iptables-save']
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Exit code: 1
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdin:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdout:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stderr: Cannot 
open network namespace "qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8": No such 
file or directory
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 344, in 
_safe_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 360, in 
_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent self, router=ri)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/registry.py", line 44, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
_get_callback_manager().notify(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 123, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent raise 
exceptions.CallbackFailure(errors=errors)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent CallbackFailure: 
Callback neutron.agent.metadata.driver.before_router_removed failed with 

[Yahoo-eng-team] [Bug 1623664] Re: Race between L3 agent and neutron-ns-cleanup

2017-03-27 Thread Chuck Short
** Also affects: neutron-lbaas (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623664

Title:
  Race between L3 agent and neutron-ns-cleanup

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Invalid
Status in neutron-lbaas package in Ubuntu:
  New

Bug description:
  I suspect a race between the neutron L3 agent and the neutron-netns-
  cleanup script, which runs as a CRON job in Ubuntu. Here's a stack
  trace in the router delete code path:

  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager [-] Error during 
notification for neutron.agent.metadata.driver.before_router_removed router, 
before_delete
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Traceback (most 
recent call last):
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 141, in 
_notify_loop
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/driver.py", line 176, 
in before_router_removed
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
router.iptables_manager.apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 423, in apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 431, in _apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply_synchronized()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 457, in _apply_synchronized
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager save_output 
= self.execute(args, run_as_root=True)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 159, in 
execute
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager raise 
RuntimeError(m)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager RuntimeError:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Command: 
['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 
'iptables-save']
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Exit code: 1
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdin:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdout:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stderr: Cannot 
open network namespace "qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8": No such 
file or directory
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 344, in 
_safe_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 360, in 
_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent self, router=ri)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/registry.py", line 44, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
_get_callback_manager().notify(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 123, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent raise 
exceptions.CallbackFailure(errors=errors)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent CallbackFailure: 
Callback neutron.agent.metadata.driver.before_router_removed failed with "
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Command: ['sudo', 
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 

[Yahoo-eng-team] [Bug 1552971] Re: InstanceList.get_by_security_group_id can run very slow

2017-03-09 Thread Chuck Short
** Also affects: cloud-archive/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1552971

Title:
  [SRU] InstanceList.get_by_security_group_id can run very slow

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive liberty series:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed

Bug description:
  [Impact]

   Backporting to Liberty Ubuntu Cloud Archive from Mitaka. The backport is
   fairly simple and clean, with the exception of two extra unit tests that
   had to be amended in order to work. The Liberty codebase still has the
   ec2 api code that was deprecated in Kilo and subsequently removed in
   Mitaka, and there is a unit test for that api that was failing.

  [Test Case]

   * Deploy Openstack Liberty with this patch

   * Populate some security groups and create/delete some instances, checking
 that the security groups are functioning properly.

   * Run full Tempest test suite (rev 13.0.0) against deployed cloud.

  [Regression Potential]

   This patch has not received any testing with the ec2 api in future
   releases, due to the fact that that api was removed in Mitaka. Tempest
   did not find any errors when testing against Liberty, though, so I am not
   envisaging any regressions.
   
  

  The nova.objects.instance.InstanceList class's
  get_by_security_group_id function calls the db.security_group_get
  function, which uses the _security_group_get_query() function to
  generate a query object. That query, by default, joins with the
  secgroup-rules table, and currently the db.security_group_get function
  offers no option to avoid joining with the rules. As a result:

  If a group-source secgroup-rule exists on a security group with a
  large number of instances and a large number of rules, the db query
  result will be very large and take multiple seconds to complete, tying
  up conductor and making the system unresponsive.

  Since the InstanceList.get_by_security_group_id call only aims to
  build a list of instances, there is no need in this case to join with
  the rules, and so the db.security_group_get call should optionally
  avoid joining with the rules table.
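
  A hedged SQLAlchemy sketch of that idea (the session/model handling is
  illustrative, not nova's actual db API): only pay for the rules join when
  the caller explicitly asks for the rules.

  from sqlalchemy.orm import joinedload

  def security_group_get(session, model, sg_id, columns_to_join=None):
      """model is the SecurityGroup ORM class; join rules only on request."""
      query = session.query(model).filter_by(id=sg_id)
      if columns_to_join and 'rules' in columns_to_join:
          # The expensive join becomes opt-in, so a caller that only needs
          # the instance list can skip it entirely.
          query = query.options(joinedload('rules'))
      return query.first()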

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1552971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449606] Re: Firewall status changed to Error if a rule was inserted or removed

2017-02-21 Thread Chuck Short
** Changed in: neutron-fwaas (Ubuntu)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449606

Title:
  Firewall status changed to Error if a rule was inserted or removed

Status in neutron:
  Incomplete
Status in neutron-fwaas package in Ubuntu:
  Won't Fix

Bug description:
  Inserting/removing a firewall rule is not handled properly.  Inserting or 
removing a rule results in:
  - The firewall being removed from the router: the iptables filter table is 
cleared, old rules are removed and no new rules are created
  - The firewall ending up in ERROR status

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656010] Re: Incorrect notification to nova about ironic baremetall port (for nodes in 'cleaning' state)

2017-02-21 Thread Chuck Short
** Changed in: ironic (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656010

Title:
  Incorrect notification to nova about ironic baremetall port (for nodes
  in 'cleaning' state)

Status in Ironic:
  Fix Released
Status in neutron:
  In Progress
Status in ironic package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  New

Bug description:
  version: newton (2:9.0.0-0ubuntu1~cloud0)

  When neutron tries to bind a port for an Ironic baremetal node, it
  sends a wrong notification to nova about the port being ready. Neutron
  sends it with 'device_id' == ironic-node-id, and nova rejects it as 'not
  found' (there is no nova instance with such an id).

  Log:
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 completed by entity DHCP. 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:147
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153
  neutron.callbacks.manager[22265]: DEBUG Notify callbacks 
[('neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned--9223372036854150578',
 >)] for port, 
provisioning_complete [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] 
_notify_loop /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:142
  neutron.plugins.ml2.plugin[22265]: DEBUG Port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 cannot update to ACTIVE because it is not 
bound. [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _port_provisioned 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py:224
  oslo_messaging._drivers.amqpdriver[22265]: DEBUG sending reply msg_id: 
254703530cd3440584c980d72ed93011 reply queue: 
reply_8b6e70ad5191401a9512147c4e94ca71 time elapsed: 0.0452275519492s 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _send_reply 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:73
  neutron.notifiers.nova[22263]: DEBUG Sending events: [{'name': 
'network-changed', 'server_uuid': u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] 
send_events /usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:257
  novaclient.v2.client[22263]: DEBUG REQ: curl -g -i --insecure -X POST 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}592539c9fcd820d7e369ea58454ee17fe7084d5e" -d '{"events": [{"name": 
"network-changed", "server_uuid": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc"}]}' 
_http_log_request /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:337
  novaclient.v2.client[22263]: DEBUG RESP: [404] Content-Type: 
application/json; charset=UTF-8 Content-Length: 78 X-Compute-Request-Id: 
req-a029af9e-e460-476f-9993-4551f3b210d6 Date: Thu, 12 Jan 2017 15:43:37 GMT 
Connection: keep-alive 
  RESP BODY: {"itemNotFound": {"message": "No instances found for any event", 
"code": 404}}
   _http_log_response 
/usr/lib/python2.7/dist-packages/keystoneauth1/session.py:366
  novaclient.v2.client[22263]: DEBUG POST call to compute for 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 used request id req-a029af9e-e460-476f-9993-4551f3b210d6 _log_request_id 
/usr/lib/python2.7/dist-packages/novaclient/client.py:85
  neutron.notifiers.nova[22263]: DEBUG Nova returned NotFound for event: 
[{'name': 'network-changed', 'server_uuid': 
u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] send_events 
/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:263
  oslo_messaging._drivers.amqpdriver[22265]: DEBUG received message msg_id: 
0bf04ac8fedd4234bd6cd6c04547beca reply to 
reply_8b6e70ad5191401a9512147c4e94ca71 __call__ 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:194
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-47c505d7-4eb5-4c71-9656-9e0927408822 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153

  
  Port info:

  +-----------------+-------+
  | Field           | Value |
  +-----------------+-------+
  | admin_state_up  | True  |

[Yahoo-eng-team] [Bug 850443] Re: Nova API does not listen on IPv6

2017-02-02 Thread Chuck Short
I do believe this is no longer an issue. Please re-open if it is.

** Changed in: python-eventlet (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/850443

Title:
  Nova API does not listen on IPv6

Status in OpenStack Compute (nova):
  Invalid
Status in python-eventlet package in Ubuntu:
  Fix Released

Bug description:
  The Nova API service does not bind to IPv6 interfaces. When specifying a
  v6 address using ec2_list or osapi_listen, it returns "gaierror: [Errno
  -9] Address family for hostname not supported"
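
  For reference, eventlet itself can listen on IPv6 when the address family
  is passed explicitly; the default AF_INET is what produces the gaierror
  above. A minimal sketch (assumes eventlet is installed; the port is
  arbitrary):

  import socket
  import eventlet

  # The default family is AF_INET, which cannot handle a '::' listen address.
  sock = eventlet.listen(('::', 8774), family=socket.AF_INET6)
  print(sock.getsockname())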

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/850443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657452] [NEW] Incompatibility with python-webob 1.7.1

2017-01-18 Thread Chuck Short
Public bug reported:


keystone.tests.unit.test_v3_federation.WebSSOTests.test_identity_provider_specific_federated_authentication
---

Captured traceback:
~~~
Traceback (most recent call last):
  File "keystone/tests/unit/test_v3_federation.py", line 4067, in 
test_identity_provider_specific_federated_authentication
self.PROTOCOL)
  File "keystone/federation/controllers.py", line 345, in 
federated_idp_specific_sso_auth
return self.render_html_response(host, token_id)
  File "keystone/federation/controllers.py", line 357, in 
render_html_response
headerlist=headers)
  File "/usr/lib/python2.7/dist-packages/webob/response.py", line 310, in 
__init__
"You cannot set the body to a text value without a "
TypeError: You cannot set the body to a text value without a charset

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1657452

Title:
  Incompatibility with python-webob 1.7.1

Status in OpenStack Identity (keystone):
  New

Bug description:
  
  
keystone.tests.unit.test_v3_federation.WebSSOTests.test_identity_provider_specific_federated_authentication
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_v3_federation.py", line 4067, in 
test_identity_provider_specific_federated_authentication
  self.PROTOCOL)
File "keystone/federation/controllers.py", line 345, in 
federated_idp_specific_sso_auth
  return self.render_html_response(host, token_id)
File "keystone/federation/controllers.py", line 357, in 
render_html_response
  headerlist=headers)
File "/usr/lib/python2.7/dist-packages/webob/response.py", line 310, in 
__init__
  "You cannot set the body to a text value without a "
  TypeError: You cannot set the body to a text value without a charset

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1657452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656349] [NEW] Incompatibility with webob 1.7.0

2017-01-13 Thread Chuck Short
Public bug reported:


   File "/<>/keystonemiddleware/auth_token/__init__.py", line 320, 
in __call__
 response = self.process_request(req)
   File "/<>/keystonemiddleware/auth_token/__init__.py", line 582, 
in process_request
 content_type='application/json')
   File "/usr/lib/python3/dist-packages/webob/exc.py", line 268, in __init__
 **kw)
   File "/usr/lib/python3/dist-packages/webob/response.py", line 310, in 
__init__
 "You cannot set the body to a text value without a "
 TypeError: You cannot set the body to a text value without a charset

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656349

Title:
  Incompatibility with webob 1.7.0

Status in OpenStack Identity (keystone):
  New

Bug description:
  
 File "/<>/keystonemiddleware/auth_token/__init__.py", line 
320, in __call__
   response = self.process_request(req)
 File "/<>/keystonemiddleware/auth_token/__init__.py", line 
582, in process_request
   content_type='application/json')
 File "/usr/lib/python3/dist-packages/webob/exc.py", line 268, in __init__
   **kw)
 File "/usr/lib/python3/dist-packages/webob/response.py", line 310, in 
__init__
   "You cannot set the body to a text value without a "
   TypeError: You cannot set the body to a text value without a charset
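
  Same root cause as bug 1657452: under webob 1.7 a text body needs a
  charset before it can be encoded. A hedged sketch of the simplest
  workaround for this auth_token call site, handing webob bytes instead of
  text (illustrative, not the keystonemiddleware patch):

  import webob.exc

  body = '{"error": {"code": 401, "title": "Unauthorized"}}'
  resp = webob.exc.HTTPUnauthorized(body=body.encode('utf-8'),
                                    content_type='application/json')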

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653967] Re: nova raises ConfigFileValueError for URLs with dashes

2017-01-05 Thread Chuck Short
** Also affects: nova (Ubuntu Zesty)
   Importance: Undecided
   Status: Confirmed

** Also affects: python-rfc3986 (Ubuntu Zesty)
   Importance: Undecided
   Status: Confirmed

** Also affects: nova (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: python-rfc3986 (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: python-rfc3986 (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: python-rfc3986 (Ubuntu Zesty)
   Status: Confirmed => Fix Released

** Changed in: nova (Ubuntu Zesty)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653967

Title:
  nova raises ConfigFileValueError for URLs with dashes

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  Fix Released
Status in python-rfc3986 package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  New
Status in python-rfc3986 source package in Xenial:
  New
Status in nova source package in Yakkety:
  New
Status in python-rfc3986 source package in Yakkety:
  New
Status in nova source package in Zesty:
  Fix Released
Status in python-rfc3986 source package in Zesty:
  Fix Released

Bug description:
  nova version: newton
  dpkg version: 2:14.0.1-0ubuntu1~cloud0
  distribution: nova @ xenial with ubuntu cloud archive, amd64.

  Nova fails with a ConfigFileValueError exception ("Value for option url
  is not valid: invalid URI") if the url parameter of the [neutron] section
  or the novncproxy_base_url parameter contains dashes in the URL.

  Steps to reproduce:

  Take a working openstack with nova+neutron.

  Put (in [neutron] section) url= http://nodash.example.com:9696  - it
  works

  Put url = http://with-dash.example.com:9696 - it fails with exception:

  
  nova[18937]: TRACE Traceback (most recent call last):
  nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in 
  nova[18937]: TRACE sys.exit(main())
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
  nova[18937]: TRACE service.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", 
line 415, in wait
  nova[18937]: TRACE _launcher.wait()
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  nova[18937]: TRACE self.conf.log_opt_values(LOG, logging.DEBUG)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
  nova[18937]: TRACE _sanitize(opt, getattr(group_attr, opt_name)))
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  nova[18937]: TRACE return self._conf._get(name, self._group)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  nova[18937]: TRACE value = self._do_get(name, group, namespace)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  nova[18937]: TRACE % (opt.name, str(ve)))
  nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: 
invalid URI: 'http://with-dash.example.com:9696'.

  Expected behavior: do not crash.
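
  The rejection comes from the URI validation that oslo.config delegates to
  python-rfc3986: affected rfc3986 releases did not allow hyphens in the
  host rule. A minimal reproduction sketch (assumes python-rfc3986 is
  importable; the call only returns False on the affected releases):

  import rfc3986

  # oslo.config's URI option type performs essentially this check.
  print(rfc3986.is_valid_uri('http://with-dash.example.com:9696',
                             require_scheme=True))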

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633576] Re: neutron-db-manage fails when old migrations are present

2017-01-05 Thread Chuck Short
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1633576

Title:
  neutron-db-manage fails when old migrations are present

Status in neutron:
  New
Status in neutron package in Ubuntu:
  New

Bug description:
  Ubuntu 14.04 using cloud-archive packages upgrading from kilo to liberty
  python-neutron:
Installed: 2:8.2.0-0ubuntu1~cloud0
500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ 
trusty-updates/mitaka/main amd64 Packages

  neutron db migration error in prod kilo->liberty upgrade:

  # neutron-db-manage upgrade head

  /usr/lib/python2.7/dist-packages/alembic/util/messaging.py:69: UserWarning: 
Revision havana referenced from havana -> 1cbdb560f806 (head), empty message is 
not present
  http://pastebin.com/D8Pdsvn9
  KeyError: 'havana'

  packages are additive, so the system still had migrations for 'havana' in
  /usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations,
  but the current neutron-db-manage doesn't know about 'havana', so it
  fails.

  solution: 
  rm -r /usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations
  apt-get install --reinstall python-neutron

  then migrations work as expected.

  I feel like this is a packaging bug and old migrations should be
  removed on upgrade.  I could also see this as an upstream bug where
  code should at least remember historic version names, but choosing to
  report here ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1633576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444532] Re: nova-scheduler doesnt reconnect to databases when started and database is down

2016-11-18 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444532

Title:
  nova-scheduler doesnt reconnect to databases when started and database
  is down

Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Invalid

Bug description:
  In the Juno release (Ubuntu packages), when you start nova-scheduler while
  the database is down, the service never reconnects; the stack trace is as
  follows:

  
  AUDIT nova.service [-] Starting scheduler node (version 2014.2.2)
  ERROR nova.openstack.common.threadgroup [-] (OperationalError) (2003, "Can't 
connect to MySQL server on '10.128.30.11' (111)") None None
  TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
125, in wait
  TRACE nova.openstack.common.threadgroup x.wait()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  TRACE nova.openstack.common.threadgroup return self.thread.wait()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 173, in 
wait
  TRACE nova.openstack.common.threadgroup return self._exit_event.wait()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  TRACE nova.openstack.common.threadgroup return hubs.get_hub().switch()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in 
switch
  TRACE nova.openstack.common.threadgroup return self.greenlet.switch()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in 
main
  TRACE nova.openstack.common.threadgroup result = function(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 490, 
in run_service
  TRACE nova.openstack.common.threadgroup service.start()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 169, in start
  TRACE nova.openstack.common.threadgroup self.host, self.binary)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 161, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup binary=binary, topic=None)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 949, in wrapper
  TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 279, in 
service_get_all_by
  TRACE nova.openstack.common.threadgroup result = 
self.db.service_get_by_args(context, host, binary)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/api.py", line 136, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup return 
IMPL.service_get_by_args(context, host, binary)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 125, in 
wrapper
  TRACE nova.openstack.common.threadgroup return f(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 490, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup result = model_query(context, 
models.Service).\
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 213, in 
model_query
  TRACE nova.openstack.common.threadgroup session = kwargs.get('session') 
or get_session(use_slave=use_slave)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 101, in 
get_session
  TRACE nova.openstack.common.threadgroup facade = _create_facade_lazily()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 91, in 
_create_facade_lazily
  TRACE nova.openstack.common.threadgroup _ENGINE_FACADE = 
db_session.EngineFacade.from_config(CONF)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 
795, in from_config
  TRACE 

[Yahoo-eng-team] [Bug 1485694] Re: Keystone raises an exception when it receives incorrectly encoded parameters

2016-11-02 Thread Chuck Short
** Changed in: keystone (Ubuntu Wily)
   Status: Fix Committed => Fix Released

** Changed in: keystone (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1485694

Title:
  Keystone raises an exception when it receives incorrectly encoded
  parameters

Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released
Status in keystone source package in Wily:
  Fix Released

Bug description:
  The following command will cause an exception:

  $ curl -g -i -X GET
  http://localhost:35357/v3/users?name=nonexit%E8nt -H "User-Agent:
  python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
  ADMIN"

  This command works as expected:

  $ curl -g -i -X GET
  http://localhost:35357/v3/users?name=nonexit%C3%A8nt -H "User-Agent:
  python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
  ADMIN"

  The exception occurs fairly deep in the WebOb library while it is
  trying to parse the parameters out of the URL.
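
  The difference between the two requests is the byte sequence behind the
  escape: %E8 is a lone latin-1 byte that is not valid UTF-8, while %C3%A8
  is the UTF-8 encoding of 'è'. A small standard-library illustration of
  what the query-string decoding runs into:

  from urllib.parse import unquote_to_bytes

  for raw in ('nonexit%E8nt', 'nonexit%C3%A8nt'):
      data = unquote_to_bytes(raw)
      try:
          print(raw, '->', data.decode('utf-8'))
      except UnicodeDecodeError as exc:
          # This is the decode failure WebOb trips over for the first URL.
          print(raw, '->', exc)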

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1485694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217980] Re: OVS agent will leave compute host in an unsafe state when rpc_setup() fails

2016-10-17 Thread Chuck Short
** Changed in: quantum (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1217980

Title:
  OVS agent will leave compute host in an unsafe state when rpc_setup()
  fails

Status in neutron:
  Fix Released
Status in quantum package in Ubuntu:
  Fix Released

Bug description:
  Recently we saw a case where startup of the quantum (not yet neutron
  in our install, although this part of the code hasn't changed) OVS
  agent on compute hosts was failing due to an unresolvable hostname in
  the rabbit_host parameter, exiting the agent during setup_rpc().
  Unfortunately, on startup the agent reinitialized the OVS flows, so
  when it exited before making RPC calls, it left the compute host in a
  state where it wouldn't pass traffic to instances.

  My first inclination is to submit a patch moving RPC initialization
  higher up in __init__, making it fail fast, before it has made any
  changes to the host system.  However, I don't know if this will have
  knock on effects or be unworkable for some reason I can't see.
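
  A hedged sketch of that ordering (method names are illustrative, not the
  agent's real API): do the fallible RPC setup before anything touches the
  bridges, so a bad rabbit_host aborts startup without wiping existing
  flows.

  class OVSAgentSketch(object):
      """Illustrative only: fail fast on RPC before touching any flows."""

      def __init__(self, conf):
          self.setup_rpc(conf)          # may raise if rabbit_host is unresolvable
          self.reset_bridge_flows()     # only reached once RPC is known-good

      def setup_rpc(self, conf):
          pass                          # stand-in for the real RPC setup

      def reset_bridge_flows(self):
          pass                          # stand-in for reprogramming OVS flows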

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1217980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1083155] Re: Unable to set Content-MD5 header when using chunked transfer encoding

2016-10-17 Thread Chuck Short
** Changed in: python-webob (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1083155

Title:
  Unable to set Content-MD5 header when using chunked transfer encoding

Status in Glance:
  Fix Released
Status in Glance grizzly series:
  Fix Released
Status in python-webob package in Ubuntu:
  Fix Released

Bug description:
  I came across this when debugging test failures of Openstack Glance
  with Webob 1.1.1 as found in 12.04+

  Taking the following code from glance:

  def download(self, response, result):
  size = result['meta']['size']
  checksum = result['meta']['checksum']
  response.headers['Content-Length'] = size
  response.headers['Content-Type'] = 'application/octet-stream'
  if checksum:
  response.headers['Content-MD5'] = checksum
  response.app_iter = common.size_checked_iter(
  response, result['meta'], size, result['data'], self.notifier)

  This should create a response with appropriate headers (including a
  MD5 checksum) and then use the iterator to return the content
  (potentially a large image) to the calling client; however when:

   response.app_iter = ...

  occurs the MD5 that was set in the preceding line is set back to
  'None' in the response object; result is a chunked transfer encoded
  response without a checksum (which should be supported).  I traced
  this back to webob/response.py:

  def _app_iter__set(self, value):
  if self._app_iter is not None:
  # Undo the automatically-set content-length
  self.content_length = None
  self.content_md5 = None
  self._app_iter = value

  During construction of the object, neither the app_iter nor the body is
  specified and as a result the body is set to '' (and the app_iter to [
  '' ]).  So even though no data has ever been provided, the md5 sum is
  dropped as soon as the iterator is provided.
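
  One possible workaround, sketched against the glance snippet above (not a
  verified upstream fix): assign app_iter first, so its setter clears
  content_length/content_md5 before the headers are written, not after.

    def download(self, response, result):
        size = result['meta']['size']
        checksum = result['meta']['checksum']
        # Set the iterator first; the WebOb setter resets the
        # Content-Length/Content-MD5 attributes at this point.
        response.app_iter = common.size_checked_iter(
            response, result['meta'], size, result['data'], self.notifier)
        # Now the headers survive.
        response.headers['Content-Length'] = size
        response.headers['Content-Type'] = 'application/octet-stream'
        if checksum:
            response.headers['Content-MD5'] = checksum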

  ProblemType: Bug
  DistroRelease: Ubuntu 12.10
  Package: python-webob 1.1.1-1ubuntu1
  ProcVersionSignature: Ubuntu 3.5.0-18.29-generic 3.5.7
  Uname: Linux 3.5.0-18-generic x86_64
  ApportVersion: 2.6.1-0ubuntu6
  Architecture: amd64
  Date: Mon Nov 26 13:12:45 2012
  MarkForUpload: True
  PackageArchitecture: all
  SourcePackage: python-webob
  UpgradeStatus: Upgraded to quantal on 2012-06-11 (168 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1083155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385295] Re: use_syslog=True does not log to syslog via /dev/log anymore

2016-10-17 Thread Chuck Short
This should be fixed now.

** Changed in: python-oslo.log (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385295

Title:
  use_syslog=True does not log to syslog via /dev/log anymore

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in oslo.log:
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  Fix Released

Bug description:
  python-oslo.log SRU:
  [Impact]

   * Nova services not able to write log to syslog

  [Test Case]

   * 1. Set use_syslog to True in nova.conf/cinder.conf
 2. stop rsyslog service
 3. restart nova/cinder services
 4. restart rsyslog service
 5. Log is not written to syslog after rsyslog is brought up.

  [Regression Potential]

   * none

  
  Reproduced on:
  https://github.com/openstack-dev/devstack 
514c82030cf04da742d16582a23cc64962fdbda1
  /opt/stack/keystone/keystone.egg-info/PKG-INFO:Version: 2015.1.dev95.g20173b1
  /opt/stack/heat/heat.egg-info/PKG-INFO:Version: 2015.1.dev213.g8354c98
  /opt/stack/glance/glance.egg-info/PKG-INFO:Version: 2015.1.dev88.g6bedcea
  /opt/stack/cinder/cinder.egg-info/PKG-INFO:Version: 2015.1.dev110.gc105259

  How to reproduce:
  Set
   use_syslog=True
   syslog_log_facility=LOG_SYSLOG
  for Openstack config files and restart processes inside their screens

  Expected:
  Openstack logs logged to syslog as well

  Actual:
  Nothing goes to syslog
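
  A quick way to check, independently of oslo.log, whether messages written
  to /dev/log reach syslog on an affected host (purely a diagnostic sketch):

    import logging
    import logging.handlers

    logger = logging.getLogger('syslog-check')
    handler = logging.handlers.SysLogHandler(address='/dev/log')
    logger.addHandler(handler)
    logger.warning('test message for bug 1385295')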

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604397] Re: python-swiftclient is missing in requirements.txt (for glare)

2016-10-05 Thread Chuck Short
** Package changed: glance (Ubuntu) => python-glance-store (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1604397

Title:
  python-swiftclient is missing in requirements.txt (for glare)

Status in Glance:
  New
Status in python-glance-store package in Ubuntu:
  Confirmed

Bug description:
  I'm using UCA glance packages (version "13.0.0~b1-0ubuntu1~cloud0").
  And I've got this error:
  <30>Jul 18 16:03:45 node-2 glance-glare[17738]: ERROR: Store swift could not 
be configured correctly. Reason: Missing dependency python_swiftclient.

  Installing "python-swiftclient" fixes the problem.

  In master
  (https://github.com/openstack/glance/blob/master/requirements.txt)
  package "python-swiftclient" is not included in requirements.txt. So
  UCA packages don't have proper dependencies.

  I think requirements.txt should be updated (add python-swiftclient
  there). This change should affect UCA packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1604397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310340] Re: live migration fails when use long hostname of a nova compute target host

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310340

Title:
  live migration fails when use long hostname of a nova compute target
  host

Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Invalid

Bug description:
  Nova doesn't do a live-migration when the long hostname of the target host is used

  nova show ubuntu14.04
  +--------------------------------------+-----------------------+
  | Property                             | Value                 |
  +--------------------------------------+-----------------------+
  | ...                                  | ...                   |
  | OS-EXT-SRV-ATTR:host                 | compute2              |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2.site.local   |
  | ...                                  | ...                   |
  +--------------------------------------+-----------------------+

  nova live-migration ubuntu14.04  compute2.site.local
  ERROR (BadRequest): Compute service of  compute2.site.local is unavailable at 
this time. (HTTP 400) (Request-ID: req-f344c0bf-aaa3-47e6-a24c-8f37e89858e4)

  but 
  nova live-migration ubuntu14.04  compute2
  runs without error 

  
  Also if you try to do the migration through horizon it always fails,
  because horizon uses the long hostname of the target host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310340/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387244] Re: Increasing number of InstancePCIRequests.get_by_instance_uuid RPC calls during compute host auditing

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387244

Title:
  Increasing number of InstancePCIRequests.get_by_instance_uuid RPC
  calls during compute host auditing

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  Environment: Ubuntu 14.04/OpenStack Juno Release

  The periodic auditing on compute node becomes very RPC call intensive
  when a large number of instances are running on a cloud; the
  InstancePCIRequests.get_by_instance_uuid call is made on all instances
  running on the hypervisor - when this is multiplied across a large
  number of hypervisors, this impacts back onto the conductor processes
  as they try to service an increasing amount of RPC calls over time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397893] Re: Undeletable volume-backed instance

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397893

Title:
  Undeletable volume-backed instance

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  nova-compute package version 1:2014.1.3-0ubuntu1.1 on Ubuntu 14.04.

  Trying to delete a volume-backed instance in error state doesn't work.
  Nova-compute logs the following error.

  # nova delete 71a41e09-e8bc-4829-979e-1d175246da00
  # tail /var/log/nova/nova-compute.log
  [...]
  2014-12-01 09:52:25.563 25832 AUDIT nova.compute.manager 
[req-39acd9da-518c-4804-bf30-1a38eace21bf 4474a81aca524682875658eb8064c33d 
7dbed2bcbd7541289c34ae8392acf612] [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] Terminating 
  instance
  2014-12-01 09:52:25.569 25832 ERROR nova.virt.libvirt.driver [-] [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] During wait destroy, instance disappeared.
  2014-12-01 09:52:25.638 25832 ERROR nova.compute.manager 
[req-39acd9da-518c-4804-bf30-1a38eace21bf 4474a81aca524682875658eb8064c33d 
7dbed2bcbd7541289c34ae8392acf612] [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] Setting inst
  ance vm_state to ERROR
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] Traceback (most recent call last):
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2273, in 
do_terminate_instance
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] self._delete_instance(context, 
instance, bdms, quotas)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 103, in inner
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] rv = f(*args, **kwargs)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2242, in 
_delete_instance
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] quotas.rollback()
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] six.reraise(self.type_, self.value, 
self.tb)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2214, in 
_delete_instance
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] self._shutdown_instance(context, 
db_inst, bdms)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2156, in 
_shutdown_instance
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] requested_networks)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] six.reraise(self.type_, self.value, 
self.tb)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2146, in 
_shutdown_instance
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] block_device_info)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 963, in 
destroy
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] destroy_disks)
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1040, in 
cleanup
  2014-12-01 09:52:25.638 25832 TRACE nova.compute.manager [instance: 
71a41e09-e8bc-4829-979e-1d175246da00] 

[Yahoo-eng-team] [Bug 1418187] Re: _get_host_numa_topology assumes numa cell has memory

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1418187

Title:
  _get_host_numa_topology assumes numa cell has memory

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  numa cells are not guaranteed to have memory.
  libvirt capabilities represent that correctly.
  nova's _get_host_numa_topology assumes that it can convert cell's memory to
  kilobytes via: 
 memory=cell.memory / units.Ki.

  but cell.memory ends up being None for some
  LibvirtConfigCapsNUMACell instances.

  stack trace is like this:
  [-] unsupported operand type(s) for /: 'NoneType' and 'int'
  Traceback (most recent call last):
File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
145, in wait
  x.wait()
File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  return self.thread.wait()
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 173, 
in wait
  return self._exit_event.wait()
File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  return hubs.get_hub().switch()
File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in 
switch
  return self.greenlet.switch()
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, 
in main
  result = function(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", 
line 492, in run_service
  service.start()
File "/usr/lib/python2.7/dist-packages/nova/service.py", line 181, in start
  self.manager.pre_start_hook()
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1188, 
in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6047, 
in update_available_resource
  rt.update_available_resource(context)
File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", 
line 313, in update_available_resource
  resources = self.driver.get_available_resource(self.nodename)
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4825, in get_available_resource
  numa_topology = self._get_host_numa_topology()
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4703, in _get_host_numa_topology
  for cell in topology.cells])
  TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
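
  A minimal sketch of the kind of guard that avoids the TypeError (not the
  exact upstream patch): treat a NUMA cell without local memory as zero.

    KiB = 1024

    def cell_memory_in_kib(cell_memory):
        # cell_memory may be None for NUMA cells that have no memory,
        # as described above; treat that as zero rather than crashing.
        return (cell_memory or 0) // KiB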

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1418187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518430] Re: liberty: ~busy loop on epoll_wait being called with zero timeout

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518430

Title:
  liberty: ~busy loop on epoll_wait being called with zero timeout

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  New
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.messaging package in Ubuntu:
  New

Bug description:
  Context: openstack juju/maas deploy using 1510 charms release
  on trusty, with:
    openstack-origin: "cloud:trusty-liberty"
    source: "cloud:trusty-updates/liberty

  * Several openstack nova- and neutron- services, at least:
  nova-compute, neutron-server, nova-conductor,
  neutron-openvswitch-agent,neutron-vpn-agent
  show almost busy looping on epoll_wait() calls, with zero timeout set
  most frequently.
  - nova-compute (chose it b/cos single proc'd) strace and ltrace captures:
    http://paste.ubuntu.com/13371248/ (ltrace, strace)

  As comparison, this is how it looks on a kilo deploy:
  - http://paste.ubuntu.com/13371635/

  * 'top' sample from a nova-cloud-controller unit from
     this completely idle stack:
    http://paste.ubuntu.com/13371809/

  FYI *not* seeing this behavior on keystone, glance, cinder,
  ceilometer-api.

  As this issue is present on several components, it likely comes
  from common libraries (oslo concurrency?), fyi filed the bug to
  nova itself as a starting point for debugging.

  Note: The description in the following bug gives a good overview of
  the issue and points to a possible fix for oslo.messaging:
  https://bugs.launchpad.net/mos/+bug/1380220

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554195] Re: Nova (juno) ignores logging_*_format_string in syslog output

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554195

Title:
  Nova (juno) ignores logging_*_format_string in syslog output

Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Won't Fix

Bug description:
  Nova in juno ignores the following settings in the configuration file
  ([DEFAULT] section):
  logging_context_format_string
  logging_default_format_string
  logging_debug_format_suffix
  logging_exception_prefix

  when sending logs via syslog. Log entries on stderr / in log files are
  fine (use logging_*_format).

  Steps to reproduce:

  1. set up custom logging strings and enable syslog:

  [DEFAULT]
  logging_default_format_string=MYSTYLE-DEFAULT-%(message)s
  logging_context_format_string=MYSTYLE-CONTEXT-%(message)s
  use_syslog=true

  2. restart nova and perform some actions

  3. Check the syslog content

  Expected behaviour: MYSTYLE- prefix in all messages.
  Actual behaviour: no changes in log message styles.

  This bug is specific to Juno version of nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561056] Re: cinder volume driver's detach() causes TypeError exception on v1 cinder client

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561056

Title:
  cinder volume driver's detach() causes TypeError exception on v1
  cinder client

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  Nova version: git master branch's HEAD (as of today)
  Expected behavior: cinderclient v1 detach() called with accepted argument
  Actual behavior: cinderclient v1 detach() called with too many arguments

  Change I3cdc4992 indiscriminately passes both volume_id and
  attachment_id to the Cinder client regardless of its version. Cinder
  client v2 supports passing volume_id and, optionally, attachment_id to
  its volume manager's detach() method, but v1 does not; it only accepts
  volume_id.

  Calling Cinder client v1 detach() with both volume_id and
  attachment_id results in "TypeError: detach() takes exactly 2
  arguments (3 given)"

  Full traceback and proposed bug fix to follow.
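
  A version-aware call would look roughly like this (the version check is
  an illustrative assumption, not the proposed patch):

    def detach_volume(cinderclient, version, volume_id, attachment_id=None):
        if version >= 2 and attachment_id is not None:
            # v2 optionally accepts the attachment id as well.
            cinderclient.volumes.detach(volume_id, attachment_id)
        else:
            # v1 only accepts the volume id.
            cinderclient.volumes.detach(volume_id)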

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570631] Re: With hw:vif_multiqueue_enabled, libvirt driver fails with VM larger than 8 vCPU

2016-10-05 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570631

Title:
  With hw:vif_multiqueue_enabled, libvirt driver fails with VM larger
  than 8 vCPU

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  Nova version: 2:12.0.0-ubuntu2~cloud0
  Release: Liberty
  Compute node kernel: 3.19.0-47-generic
  Hypervisor: Libvirt+KVM
  libvirtd version: 1.2.16
  Neutron network (Linuxbridge Agent)

  
  When attempting to instantiate a VM based on an image with the metadata 
hw:vif_multiqueue_enabled=true, creation will fail if the flavor has >8 cores 
assigned.  If the flavor specifies 8 or fewer vCPUs, creation is successful.  

  From /var/log/libvirt/libvirtd.log:

  2016-04-14 21:19:08.161+: 3651: error : virNetDevTapCreate:290 :
  Unable to create tap device tap11db5bd0-3a: Argument list too long

  This is the error throw when attempting to create the VM.

  I believe the reason is that in kernels prior to 4.0, the number of
  queues on a tap interface was limited to 8.

  Based on http://lxr.free-
  electrons.com/source/drivers/net/tun.c?v=3.19#L129, MAX_TAP_QUEUES
  resolves to 8 prior to kernel 4.0.

  In the libvirt vif driver (nova/virt/libvirt/vif.py), in
  __get_virtio_mq_settings, this limit is not respected when setting
  vhost_queues = flavor.cpus.  So when the domain XML is written for the
  guest, vhost_queues is used in the 'queues' argument in the driver.
  When this value is >8, it fails when attempting to create the tap
  interface.
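
  A sketch of a cap that respects the pre-4.0 kernel limit (constant and
  helper names are illustrative, not nova's code):

    MAX_TAP_QUEUES = 8  # tap queue limit on kernels before 4.0

    def get_vhost_queues(flavor_vcpus):
        return min(flavor_vcpus, MAX_TAP_QUEUES)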

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1570631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1626402] Re: ERROR (ClientException): Unexpected API Error

2016-09-30 Thread Chuck Short
** Package changed: nova (Ubuntu) => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1626402

Title:
  ERROR (ClientException): Unexpected API Error

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  I was going through the the openstack link and doing hands on practice
  as well.

  link: http://docs.openstack.org/admin-guide/compute-networking-
  nova.html

  Heading: Using multinic

  Below error I got after running the command:

  
  stack@mirantis-HP-Z400-Workstation:/opt/devstack/devstack$ nova 
network-create first-net --fixed-range-v4 20.20.0.0/24 --project-id nova
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-8ee17d86-7d8b-438a-a80e-26389fbf565a)
  stack@mirantis-HP-Z400-Workstation:/opt/devstack/devstack$ 
  stack@mirantis-HP-Z400-Workstation:/opt/devstack/devstack$
   

  I am using mitaka version in devstack.

  Thanks and Regards,
  Suraj

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1626402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516652] Re: 'lxd' is an unsupported hypervisor type

2016-03-11 Thread Chuck Short
** Changed in: nova (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516652

Title:
  'lxd' is an unsupported hypervisor type

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  I'm trying to boot glance images which have the hypervisor_type
  property set on them to 'lxd' to target specific hypervisors within a
  mixed hypervisor cloud; instance boot fails with:

  {"message": "Hypervisor virtualization type 'lxd' is not recognised",
  "code": 400, "created": "2015-11-16T14:54:26Z"}

  nova does some basic filtering of hypervisor types - lxd is not in the
  recognized list.

  ProblemType: Bug
  DistroRelease: Ubuntu 15.10
  Package: nova-compute-lxd 0.18-0ubuntu3
  ProcVersionSignature: User Name 4.2.0-17.21-generic 4.2.3
  Uname: Linux 4.2.0-17-generic x86_64
  ApportVersion: 2.19.1-0ubuntu4
  Architecture: amd64
  Date: Mon Nov 16 14:56:52 2015
  Ec2AMI: ami-06b8
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.medium
  Ec2Kernel: None
  Ec2Ramdisk: None
  PackageArchitecture: all
  SourcePackage: nova-compute-lxd
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.nova.nova.compute.conf: [deleted]
  modified.conffile..etc.nova.rootwrap.d.lxd.filters: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1516652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353939] Re: Rescue fails with 'Failed to terminate process: Device or resource busy' in the n-cpu log

2015-11-18 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353939

Title:
  Rescue fails with 'Failed to terminate process: Device or resource
  busy' in the n-cpu log

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  New

Bug description:
  [Impact]

   * Users may sometimes fail to shutdown an instance if the associated qemu
 process is on uninterruptable sleep (typically IO).

  [Test Case]

   * 1. create some IO load in a VM
 2. look at the associated qemu, make sure it has STAT D in ps output
 3. shutdown the instance
 4. with the patch in place, nova will retry calling libvirt to shutdown
the instance 3 times to wait for the signal to be delivered to the 
qemu process.
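
  The retry in step 4 above, as a minimal sketch (attempt count, delay and
  error matching are assumptions, not the shipped patch; assumes the libvirt
  python bindings):

    import time

    import libvirt

    def destroy_with_retry(domain, attempts=3, delay=1):
        for attempt in range(attempts):
            try:
                domain.destroy()
                return
            except libvirt.libvirtError as exc:
                # Retry only the "Failed to terminate process" case, and
                # give up after the last attempt.
                if ('Failed to terminate process' not in str(exc)
                        or attempt == attempts - 1):
                    raise
                time.sleep(delay)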

  [Regression Potential]

   * None


  message: "Failed to terminate process" AND
  message:'InstanceNotRescuable' AND message: 'Exception during message
  handling' AND tags:"screen-n-cpu.txt"

  The above log stash-query reports back only the failed jobs, the 'Failed to 
terminate process' close other failed rescue tests,
  but tempest does not always reports them as an error at the end.

  message: "Failed to terminate process" AND tags:"screen-n-cpu.txt"

  Usual console log:
  Details: (ServerRescueTestJSON:test_rescue_unrescue_instance) Server 
0573094d-53da-40a5-948a-747d181462f5 failed to reach RESCUE status and task 
state "None" within the required time (196 s). Current status: SHUTOFF. Current 
task state: None.

  http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-
  full/90726cb/console.html#_2014-08-07_03_50_26_520

  Usual n-cpu exception:
  
http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-full/90726cb/logs/screen-n-cpu.txt.gz#_2014-08-07_03_32_02_855

  2014-08-07 03:32:02.855 ERROR oslo.messaging.rpc.dispatcher 
[req-39ce7a3d-5ceb-41f5-8f9f-face7e608bd1 ServerRescueTestJSON-2035684545 
ServerRescueTestJSON-1017508309] Exception during message handling: Instance 
0573094d-53da-40a5-948a-747d181462f5 cannot be rescued: Driver Error: Failed to 
terminate process 26425 with SIGKILL: Device or resource busy
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 408, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 292, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher pass
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE 

[Yahoo-eng-team] [Bug 1493809] Re: loadbalancer V2 ports are not serviced by DVR

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493809

Title:
  loadbalancer V2 ports are not serviced by DVR

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  
  ## common/constants.py
  DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
  DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2"

  
  ## common/utils.py
  def is_dvr_serviced(device_owner):
  """Check if the port need to be serviced by DVR

  Helper function to check the device owners of the
  ports in the compute and service node to make sure
  if they are required for DVR or any service directly or
  indirectly associated with DVR.
  """
  dvr_serviced_device_owners = (q_const.DEVICE_OWNER_LOADBALANCER,
q_const.DEVICE_OWNER_DHCP)
  return (device_owner.startswith('compute:') or
  device_owner in dvr_serviced_device_owners)
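
  The fix implied by the snippets above, sketched as a self-contained
  function (loadbalancer constants copied from the report; "network:dhcp"
  for the DHCP owner is an assumption):

    DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
    DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2"
    DEVICE_OWNER_DHCP = "network:dhcp"

    def is_dvr_serviced(device_owner):
        # V2 loadbalancer ports must also be treated as DVR-serviced.
        dvr_serviced_device_owners = (DEVICE_OWNER_LOADBALANCER,
                                      DEVICE_OWNER_LOADBALANCERV2,
                                      DEVICE_OWNER_DHCP)
        return (device_owner.startswith('compute:') or
                device_owner in dvr_serviced_device_owners)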

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494336] Re: Neutron traceback when an external network without IPv6 subnet is attached to an HA Router

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494336

Title:
  Neutron traceback when an external network without IPv6 subnet is
  attached to an HA Router

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  For an HA Router which does not have any subnets in the external network, 
Neutron 
  sets the IPv6 proc entry[1] on the gateway interface to receive Router Advts 
from 
  external IPv6 router and configure a default route pointing to the LLA of the 
external IPv6 Router.

  Normally for an HA Router in the backup state, Neutron removes Link Local 
Address (LLA)
  from the gateway interface. 

  In Kernel version 3.10 when the last IPv6 address is removed from the 
interface, 
  IPv6 is shutdown on the iface and the proc entries corresponding to the iface 
are deleted (i.e., /proc/sys/net/ipv6/conf/)
  This issue is resolved in the later kernels [2], but the issue exists on 
platforms with Kernel version 3.10
  When IPv6 proc entries are missing and Neutron tries to configure the proc 
entry we see the following traceback [3] in Neutron. 

  [1] /proc/sys/net/ipv6/conf/qg-1fc4061d-3c/accept_ra
  [2] 
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=876fd05ddbae03166e7037fca957b55bb3be6594
  [3] Trace:
  Command: ['ip', 'netns', 'exec', 
'qrouter-e66b99aa-e840-4a13-9311-6242710a5452', 'sysctl', '-w', 
'net.ipv6.conf.qg-1fc4061d-3c.accept_ra=2']
  Exit code: 255
  Stdin:
  Stdout:
  Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/qg-1fc4061d-3c/accept_ra: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496974] Re: Improve performance of _get_dvr_sync_data

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496974

Title:
  Improve performance of _get_dvr_sync_data

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Today, when scheduling a router to a host, _get_dvr_sync_data makes a
  call to get all ports on that host.   This causes the time to schedule
  a new router to increase as the number of routers on the host
  increases.

  What can we do to improve performance by limiting the number of ports
  that we need to return to the agent?

  Marked high and kilo-backport-potential because the source problem is
  in an existing operator cloud running stable/kilo

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499054] Re: devstack VMs are not booting

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499054

Title:
  devstack VMs are not booting

Status in Ironic:
  Invalid
Status in Ironic Inspector:
  Invalid
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  In devstack, VMs are failing to boot the deploy ramdisk consistently.
  It appears ipxe is failing to configure the NIC, which is usually
  caused by a DHCP timeout, but can also be caused by a bug in the PXE
  ROM that chainloads to ipxe. See also http://ipxe.org/err/040ee1

  Console output:

   SeaBIOS (version 1.7.4-20140219_122710-roseapple)
   Machine UUID 37679b90-9a59-4a85-8665-df8267e09a3b

  iPXE (http://ipxe.org) 00:04.0 CA00 PCI2.10 PnP PMM+3FFC2360+3FF22360 CA00

 

  
  Booting from ROM...
  iPXE (PCI 00:04.0) starting execution...ok
  iPXE initialising devices...ok


  iPXE 1.0.0+git-2013.c3d1e78-2ubuntu1.1 -- Open Source Network Boot 
Firmware 
  -- http://ipxe.org
  Features: HTTP HTTPS iSCSI DNS TFTP AoE bzImage ELF MBOOT PXE PXEXT Menu

  net0: 52:54:00:7c:af:9e using 82540em on PCI00:04.0 (open)
[Link:up, TX:0 TXE:0 RX:0 RXE:0]
  Configuring (net0 52:54:00:7c:af:9e).. Error 0x040ee119 
(http://
  ipxe.org/040ee119)
  No more network devices

  No bootable device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1499054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501090] Re: OVSDB wait_for_change waits for a change that has already happened

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501090

Title:
  OVSDB wait_for_change waits for a change that has already happened

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The idlutils wait_for_change() function calls idl.run(), but doesn't
  check to see if it caused a change before calling poller.block.
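
  A sketch of the corrected loop (assumes the python-ovs bindings; helper
  names and the timeout handling are illustrative, not the merged patch):

    import time

    from ovs import poller

    def wait_for_change(idl, timeout, seqno=None):
        if seqno is None:
            seqno = idl.change_seqno
        deadline = time.time() + timeout
        # If idl.run() already applied a change, skip the block() call so a
        # racing update is not missed.
        while idl.change_seqno == seqno and not idl.run():
            ovs_poller = poller.Poller()
            idl.wait(ovs_poller)
            ovs_poller.timer_wait(int(timeout * 1000))
            ovs_poller.block()
            if time.time() > deadline:
                raise RuntimeError('OVSDB change did not arrive in time')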

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497074] Re: Ignore the ERROR when delete a ipset member or destroy ipset sets

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497074

Title:
  Ignore the ERROR when delete a ipset member or destroy ipset sets

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  When the ovs-agent or lb-agent executes an ipset command, it can crash in
  some cases.  However, actions like deleting an ipset member or destroying
  ipset sets should not crash the L2 agent; we just need to log the error
  if one happens.
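
  A sketch of that tolerant behaviour (the helper signature is illustrative,
  not the agent's actual code):

    import logging

    LOG = logging.getLogger(__name__)

    def destroy_ipset_set(execute, set_name):
        # Log and continue when the set is already gone, instead of
        # letting the exception kill the L2 agent.
        try:
            execute(['ipset', 'destroy', set_name])
        except RuntimeError:
            LOG.warning('Failed to destroy ipset set %s; ignoring', set_name)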

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501779] Re: Failing to delete an linux bridge causes log littering

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501779

Title:
  Failing to delete an linux bridge causes log littering

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  I saw this in some ansible jobs in the gate:

  2015-09-30 22:37:21.805 26634 ERROR
  neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
  [req-23466df3-f59e-4897-9a22-1abb7c99dfd9
  9a365636c1b44c41a9770a26ead28701 cbddab88045d45eeb3d2027a3e265b78 - -
  -] Cannot delete bridge brq33213e3f-2b, does not exist

  http://logs.openstack.org/57/227957/3/gate/gate-openstack-ansible-
  dsvm-commit/de3daa3/logs/aio1-neutron/neutron-linuxbridge-agent.log

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L533

  That should not be an ERROR message, it could be INFO at best.  If
  you're racing with RPC and a thing is already gone, which you were
  going to delete anyway, it's not an error.
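
  In other words, something along these lines (the bridge helper methods
  are illustrative, not the agent's exact API):

    import logging

    LOG = logging.getLogger(__name__)

    def delete_bridge(bridge_device, bridge_name):
        if not bridge_device.exists():
            # Racing with RPC: the bridge is already gone, which is fine.
            LOG.info("Bridge %s does not exist; nothing to delete",
                     bridge_name)
            return
        bridge_device.delbr()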

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498665] Re: no dnsmasq name resolution for IPv6 addresses

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498665

Title:
  no dnsmasq name resolution for IPv6 addresses

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The logic to prevent IPv6 entries from being entered as hosts into the
  lease DB[1] is preventing the hosts from getting name resolution from
  dnsmasq.

  1.
  
https://github.com/openstack/neutron/blob/7707cfd86f47dfc66411e274f343fe8484f9e250/neutron/agent/linux/dhcp.py#L534-L535

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482657] Re: Attribute error on virtual_size

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482657

Title:
  Attribute error on virtual_size

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Version: stable/kilo
  Run with ./run_test.py --runserver

  Running an old havana glance backend will result in an AttributeError
  since the attribute was only introduced with the icehouse release. See error
  log at bottom of message. A simple check for the attribute will solve
  this issue and restore compatibility.
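
  A guard along these lines (illustrative only, not necessarily the attached
  patch) is enough to restore compatibility:

    def image_byte_size(image):
        # Havana-era glance backends do not expose virtual_size, so fall
        # back to size when the attribute is missing.
        return getattr(image, 'virtual_size', None) or image.size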

  Attached is a patch as proposal.

  Regards
  Christoph

  
  Error log:

  Internal Server Error: /project/instances/launch
  Traceback (most recent call last):
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 137, in get_response
  response = response.render()
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/response.py",
 line 103, in render
  self.content = self.rendered_content
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/response.py",
 line 80, in rendered_content
  content = template.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 148, in render
  return self._render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 142, in _render
  return self.nodelist.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 844, in render
  bit = self.render_node(node, context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/debug.py",
 line 80, in render_node
  return node.render(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/defaulttags.py",
 line 525, in render
  six.iteritems(self.extra_context))
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/defaulttags.py",
 line 524, in 
  values = dict((key, val.resolve(context)) for key, val in
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 596, in resolve
  obj = self.var.resolve(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 734, in resolve
  value = self._resolve_lookup(context)
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/django/template/base.py",
 line 788, in _resolve_lookup
  current = current()
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 717, in 
get_entry_point
  step._verify_contributions(self.context)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 392, in 
_verify_contributions
  field = self.action.fields.get(key, None)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 368, in action
  context)
File 
"/home/coby/ao/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 147, in __init__
  request, context, *args, **kwargs)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 138, in 
__init__
  self._populate_choices(request, context)
File "/home/coby/ao/horizon/horizon/workflows/base.py", line 151, in 
_populate_choices
  bound_field.choices = meth(request, context)
File 
"/home/coby/ao/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 428, in populate_image_id_choices
  if image.virtual_size:
File 
"/home/coby/ao/horizon/.venv/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: virtual_size

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482354] Re: Setting "enable_quotas"=False disables Neutron in GUI

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482354

Title:
  Setting "enable_quotas"=False disables Neutron in  GUI

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Excluding OPENSTACK_NEUTRON_NETWORK["enable_quotas"] or setting to False
  will result in Create Network, Create Subnet, Create Router buttons
  not showing up when logged in as the demo account. KeyError Exceptions are 
  thrown.

  These three side effects happen because the code in the views uses the
  following construct:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/networks/tables.py#L94

  usages = quotas.tenant_quota_usages(request)
  if usages['networks']['available'] <= 0:

  if enable_quotas is false, then quotas.tenant_quota_usages does not
  add the 'available' node to the usages dict and therefore a KeyError
  'available' is thrown. This ends up aborting the whole is_allowed
  method in horizon.BaseTable and therefore hiding the button.

  quotas.tenant_quota_usages will not add the available key for usage
  items which are disabled.
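
  A tolerant check, sketched as a helper (names are illustrative, not the
  horizon code):

    def quota_exceeded(usages, resource):
        # When quotas are disabled the 'available' key is absent; treat
        # that as "not exhausted" instead of raising KeyError and hiding
        # the button.
        available = usages.get(resource, {}).get('available')
        return available is not None and available <= 0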

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483382] Re: Able to request a V2 token for user and project in a non-default domain

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1483382

Title:
  Able to request a V2 token for user and project in a non-default
  domain

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Using the latest devstack, I am able to request a V2 token for a user
  and project in a non-default domain. This is problematic, as non-default
  domains are not supposed to be visible to V2 APIs.

  Steps to reproduce:

  1) install devstack

  2) run these commands

  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 domain list
  
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  | ID                               | Name    | Enabled | Description                                                          |
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  | 769ad7730e0c4498b628aa8dc00e831f | foo     | True    |                                                                      |
  | default                          | Default | True    | Owns users and tenants (i.e. projects) available on Identity API v2. |
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 user list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +--+--+
  | ID   | Name |
  +--+--+
  | cf0aa0b2d5db4d67a94d1df234c338e5 | bar  |
  +--+--+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 project list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +--+-+
  | ID   | Name|
  +--+-+
  | 413abdbfef5544e2a5f3e8ac6124dd29 | foo-project |
  +--+-+
  gyee@dev:~$ curl -k -H 'Content-Type: application/json' -d '{"auth": 
{"passwordCredentials": {"userId": "cf0aa0b2d5db4d67a94d1df234c338e5", 
"password": "secrete"}, "tenantId": "413abdbfef5544e2a5f3e8ac6124dd29"}}' 
-XPOST http://localhost:35357/v2.0/tokens | python -mjson.tool
    % Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100  3006  100  2854  100   152  22164   1180 --:--:-- --:--:-- --:--:-- 22472
  {
  "access": {
  "metadata": {
  "is_admin": 0,
  "roles": [
  "2b7f29ebd1c8453fb91e9cd7c2e1319b",
  "9fe2ff9ee4384b1894a90878d3e92bab"
  ]
  },
  "serviceCatalog": [
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "id": "3a92a79a21fb41379fa3e135be65eeff",
  "internalURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "publicURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "region": "RegionOne"
  }
  ],
  "endpoints_links": [],
  "name": "nova",
  "type": "compute"
  },
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "id": "64338d9eb3054598bcee30443c678e2a",
  "internalURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "publicURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "region": "RegionOne"
  }
  ],
  "endpoints_links": [],
  "name": "cinderv2",
  "type": "volumev2"
  },
  {
  "endpoints": [
  {
     

[Yahoo-eng-team] [Bug 1475762] Re: v3 tokens with references outside the default domain can be validated on v2

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1475762

Title:
  v3 tokens with references outside the default domain can be validated
  on v2

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  v2 has no knowledge of multiple domains, so all ID references it sees
  must exist inside the default domain.

  So, a v3 token being validated on v2 must have a project-scope in the
  default domain, a user identity in the default domain, and obviously
  must not be a domain-scoped token.

  The current implementation of Fernet blindly returns tokens to the v2
  API with (at least) project references that exist outside the default
  domain (I have not tested user references). The consequence is that v2
  clients may end up with naming collisions (due to lack of domain
  namespacing).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1475762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477600] Re: Token Validation API returns 401 not 404 on invalid fernet token

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1477600

Title:
  Token Validation API returns 401 not 404 on invalid fernet token

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  Validate token API specifies 404 response for invalid Subject tokens:
   * 
http://developer.openstack.org/api-ref-identity-admin-v2.html#admin-validateToken
   * http://developer.openstack.org/api-ref-identity-v3.html#validateTokens 
(not clear, but KSC auth middleware has the same logic as v2.0)

  For Fernet tokens, this API returns 401 for invalid token:

  curl -H 'X-Auth-Token: valid' -H 'X-Subject-Token: invalid' 
localhost:5000/v3/auth/tokens
  {"error": {"message": "The request you have made requires authentication. 
(Disable debug mode to suppress these details.)", "code": 401, "title": 
"Unauthorized"}}

  I've checked the tests and found an incorrect one. The API spec requires
  404, but the test checks for 401
  
https://github.com/openstack/keystone/blob/master/keystone/tests/unit/token/test_fernet_provider.py#L51

  Looks like it's broken in one of these places:
   * Controller doesn't check the return 
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L448
   * Fernet token's core doesn't check the return here 
https://github.com/openstack/keystone/blob/master/keystone/token/providers/fernet/core.py#L152
   * The Fernet token provider raises 401 here 
https://github.com/openstack/keystone/blob/master/keystone/token/providers/fernet/token_formatters.py#L201

  Note that UUID token raises 404 here as expected
  
https://github.com/openstack/keystone/blob/master/keystone/token/providers/common.py#L679

  Also, note that KSC auth middleware https://github.com/openstack
  /python-
  keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L1147
  expects 404 for an invalid USER token, and 401 for an invalid ADMIN token.
  So a 401 for an invalid user token makes the middleware go fetch a new
  admin token.
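
  For illustration, the behaviour the spec asks for can be sketched with
  stand-in exception names (not the real Keystone classes):

    class Unauthorized(Exception):       # the 401-style error
        pass

    class TokenNotFound(Exception):      # the 404-style error
        pass

    def unpack_fernet_token(token):
        # Stand-in for the formatter: decrypting a bogus token fails.
        raise Unauthorized()

    def validate_subject_token(token):
        # Translate "cannot decrypt / malformed" into "token not found" so
        # an invalid X-Subject-Token yields 404 instead of 401.
        try:
            return unpack_fernet_token(token)
        except Unauthorized:
            raise TokenNotFound()

    try:
        validate_subject_token('invalid')
    except TokenNotFound:
        print('would return 404 Not Found')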

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1477600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468000] Re: Group lookup by name in LDAP via v3 fails

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1468000

Title:
  Group lookup by name in LDAP via v3 fails

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  This bug is similar to
  https://bugs.launchpad.net/keystone/+bug/1454309 but relates to
  groups. When issuing an "openstack group show" command against
  a domain associated with LDAP, an invalid LDAP query is composed and
  Keystone returns a 500 Internal Server Error:

  $ openstack --os-token ADMIN --os-url http://localhost:35357/v3 
--os-identity-api-version 3 group show --domain ad 'Domain Admins'
  ERROR: openstack An unexpected error prevented the server from fulfilling 
your request: {'desc': 'Bad search filter'} (Disable debug mode to suppress 
these details.) (HTTP 500) (Request-ID: 
req-06fd5907-6ade-4872-95ab-e66f0809986a)

  Here's the log:

  2015-06-23 15:59:41.627 8571 DEBUG keystone.common.ldap.core [-] LDAP search: 
base=CN=Users,DC=dept,DC=example,DC=org scope=2 
filterstr=(&((sAMAccountName=Domain Admins))(objectClass=group)) 
attrs=['cn', 'sAMAccountName', 'description'] attrsonly=0 search_s 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py:933
  2015-06-23 15:59:41.628 8571 DEBUG keystone.common.ldap.core [-] LDAP unbind 
unbind_s 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py:906
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi [-] {'desc': 'Bad 
search filter'}
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 240, in __call__
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi result = 
method(context, **params)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/controller.py",
 line 202, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
context, filters, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/controllers.py",
 line 310, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi hints=hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/manager.py",
 line 54, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py",
 line 342, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py",
 line 353, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py",
 line 1003, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi ref_list = 
driver.list_groups(hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/backends/ldap.py",
 line 164, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return 
self.group.get_all_filtered(hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/backends/ldap.py",
 line 402, in get_all_filtered
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi for group in 
self.get_all(query)]
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py",
 line 1507, in get_all
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi for x in 
self._ldap_get_all(ldap_filter)]
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py",
 line 1469, in _ldap_get_all
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi attrs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
"/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py",
 line 946, in search_s
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi attrlist_utf8, 
attrsonly)
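
  For comparison, a well-formed version of that filter would drop the extra
  parentheses and escape the value; a rough sketch only, not the actual
  Keystone fix:

    def escape_filter_value(value):
        # Minimal RFC 4515 escaping of characters that are special in an
        # LDAP filter value.
        for ch, repl in (('\\', r'\5c'), ('*', r'\2a'),
                         ('(', r'\28'), (')', r'\29'), ('\0', r'\00')):
            value = value.replace(ch, repl)
        return value

    def and_filter(**attrs):
        terms = ''.join('(%s=%s)' % (k, escape_filter_value(v))
                        for k, v in sorted(attrs.items()))
        return '(&%s)' % terms

    print(and_filter(sAMAccountName='Domain Admins', objectClass='group'))
    # (&(objectClass=group)(sAMAccountName=Domain Admins))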
  

[Yahoo-eng-team] [Bug 1459382] Re: Fernet tokens can fail with LDAP identity backends

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459382

Title:
  Fernet tokens can fail with LDAP identity backends

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  It is possible for Keystone to fail to issue tokens when using an
  external identity backend, like LDAP, if the user IDs are of a different
  format than UUID. This is because the Fernet token formatter attempts
  to convert the UUID to bytes before packing the payload. This is done
  to save space and results in a shorter token.

  When using an LDAP backend that doesn't use UUID format for the user
  IDs, we get a ValueError because the ID can't be converted to UUID.bytes
  whenever it isn't a UUID [0]. We already do something similar for the
  default domain in the case that it's not a uuid, and for federated user
  IDs [1], which we should probably do in this case too.

  Related stacktrace [2].

  
  [0] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L415
  [1] 
https://github.com/openstack/keystone/blob/e5f2d88e471ac3595c4ea0e28f27493687a87588/keystone/token/providers/fernet/token_formatters.py#L509
  [2] http://lists.openstack.org/pipermail/openstack/2015-May/012885.html
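
  The suggested fallback can be sketched in a few lines (illustrative only;
  the real formatter has more cases to cover):

    import uuid

    def pack_id(value):
        # Shrink a UUID-like id to 16 raw bytes; fall back to the original
        # string when the backend (e.g. LDAP) uses non-UUID ids.
        try:
            return uuid.UUID(value).bytes
        except ValueError:
            return value

    print(len(pack_id('cf0aa0b2d5db4d67a94d1df234c338e5')))    # 16, packed
    print(pack_id('cn=someuser,ou=People,dc=example,dc=org'))  # left as-is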

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454968] Re: hard to understand the uri printed in the log

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454968

Title:
  hard to understand the uri printed in the log

Status in Keystone:
  Fix Released
Status in Keystone juno series:
  In Progress
Status in Keystone kilo series:
  Fix Released

Bug description:
  In keystone's log file, we can easily find some uri printed like this:
  
http://127.0.0.1:35357/v3/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens

  seems there is something wrong when we are trying to log the uri in the log 
file.
  LOG.info('%(req_method)s %(uri)s', {
  'req_method': req.environ['REQUEST_METHOD'].upper(),
  'uri': wsgiref.util.request_uri(req.environ),
  })

  code is here:
  
https://github.com/openstack/keystone/blob/0debc2fbf448b44574da6f3fef7d457037c59072/keystone/common/wsgi.py#L232
  but it seems the value is already wrong by the time the req is passed in.
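
  As a pointer for debugging: wsgiref's request_uri() simply concatenates
  SCRIPT_NAME and PATH_INFO, so a repeated path in the logged uri suggests
  those environ keys are being mangled before the logging call rather than
  in the logging code itself. A small demonstration:

    import wsgiref.util

    environ = {
        'wsgi.url_scheme': 'http',
        'HTTP_HOST': '127.0.0.1:35357',
        'SCRIPT_NAME': '/v3/auth/tokens',
        'PATH_INFO': '/auth/tokens',     # already duplicated once here ...
        'QUERY_STRING': '',
    }
    print(wsgiref.util.request_uri(environ))
    # http://127.0.0.1:35357/v3/auth/tokens/auth/tokens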

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1454968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459790] Re: With fernet tokens, validate token loses the ms on 'expires' value

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459790

Title:
  With fernet tokens, validate token loses the ms on 'expires' value

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  With fernet tokens, the expires ms value is 0 when the token is
  validated.  So the 'expires' on the post token and the get token are
  different; this is not the case with uuid tokens.

  $ curl -s \
   -H "Content-Type: application/json" \
   -d '{ "auth":{ "tenantName":"testTenantName", "passwordCredentials":{ 
"username":"testUserName", "password":"password" }}}' \
  -X POST $KEYSTONE_ENDPOINT:5000/v2.0/tokens | python -mjson.tool

  post token portion of the response contains 'expires' with a ms value
  :

  "token": {
  "audit_ids": [
  "eZtfF60tR7y5oAuL4LSr4w"
  ],
  "expires": "2015-05-28T20:50:56.015102Z",
  "id": 
"gABVZ2OQu3OunvR6FKklDdNWj95Aq-ju_sIhB9o0KRin2SpLRUa0C3H_XiV_RWN409Ma-Q7lIkA_S6mY3bnxgboJZ_qxUiTdzUscG5y_fSCUW5sQqmB2AI1rlmMetvTl6AnnRKzVHVlJlDKQNHuk0MzHM3IVr4-ysJ2AHBtmDfkdpRZCrFo%3D",
  "issued_at": "2015-05-28T18:50:56.015211Z",
  "tenant": {
  "description": "Test tenant ...",
  "enabled": true,
  "id": "1c6e0d2ac4bf4cd5bc7666d86b28aee0",
  "name": "testTenantName",
  "parent_id": null
  }
  },

  If this token is validated, the expires ms now shows as 00Z

  $ curl -s \
   -H "Content-Type: application/json" \
   -H "X-Auth-Token: $ADMIN_TOKEN" \
  -X GET   $KEYSTONE_ENDPOINT:35357/v2.0/tokens/$USER_TOKEN | python -mjson.tool

  get token portion of the response contains 'expires' with ms = 00Z

  ],
  "token": {
  "audit_ids": [
  "lZwaM7oaShCZGQt0A9FaKA"
  ],
  "expires": "2015-05-28T20:27:24.00Z",
  "id": 
"gABVZ14MKoaOBq4WBHaF1fqEKrN_nTrYYhwi8xrAisWmyJ52DJOrVlyxAoUuL_tfrGhslYVffRTosF5FqQVYlNq6hqU-qGzhueC4xVJZL8oitv0PfOdGfLgAWM1pciuiIdDLnWb-6oNrgZ9l1lHqn1kyuO0JVmS_YJfYI4YOt0o7ZfJhzFQ=",
  "issued_at": "2015-05-28T18:27:24.00Z",
  "tenant": {
  "description": "Test tenant ...",
  "enabled": true,
  "id": "1c6e0d2ac4bf4cd5bc7666d86b28aee0",
  "name": "testTenantName",
  "parent_id": null
  }
  },
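
  The truncation is consistent with the expiry being packed as a
  whole-second timestamp somewhere in the round trip; a small illustration
  of the effect (not the actual Keystone code):

    import calendar
    import datetime

    expires = datetime.datetime(2015, 5, 28, 20, 50, 56, 15102)

    # Packing the expiry as an integer Unix timestamp drops the microseconds:
    as_int = calendar.timegm(expires.utctimetuple())
    print(datetime.datetime.utcfromtimestamp(as_int))
    # 2015-05-28 20:50:56

    # Carrying the sub-second part separately round-trips the full value:
    restored = (datetime.datetime.utcfromtimestamp(as_int)
                .replace(microsecond=expires.microsecond))
    print(restored)
    # 2015-05-28 20:50:56.015102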

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454309] Re: Keystone v3 user/tenant lookup by name via OpenStack CLI client fails

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454309

Title:
  Keystone v3 user/tenant lookup by name via OpenStack CLI client fails

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  When using the openstack CLI client to look up users/tenants by name
  (e.g., openstack user show admin or openstack project show
  AdminTenant), it fails with a 500 and the following traceback:

  2015-05-12 09:27:22.483530 2015-05-12 09:27:22.483 31012 DEBUG 
keystone.common.ldap.core [-] LDAP search: base=ou=People,dc=local,dc=lan 
scope=2 filterstr=(&((sn=admin))(objectClass=inetOrgPerson)) attrs=['cn', 
'userPassword', 'enabled', 'sn', 'mail'] attrsonly=0 search_s 
/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py:931
  2015-05-12 09:27:22.483677 2015-05-12 09:27:22.483 31012 DEBUG 
keystone.common.ldap.core [-] LDAP unbind unbind_s 
/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py:904
  2015-05-12 09:27:22.485831 2015-05-12 09:27:22.483 31012 ERROR 
keystone.common.wsgi [-] {'desc': 'Bad search filter'}
  2015-05-12 09:27:22.485874 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi Traceback (most recent call last):
  2015-05-12 09:27:22.485881 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 239, in 
__call__
  2015-05-12 09:27:22.485885 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi result = method(context, **params)
  2015-05-12 09:27:22.485897 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 202, in 
wrapper
  2015-05-12 09:27:22.485901 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, context, filters, **kwargs)
  2015-05-12 09:27:22.485904 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/controllers.py", line 223, 
in list_users
  2015-05-12 09:27:22.485908 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi hints=hints)
  2015-05-12 09:27:22.485911 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 52, in 
wrapper
  2015-05-12 09:27:22.485915 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, *args, **kwargs)
  2015-05-12 09:27:22.485919 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 342, in 
wrapper
  2015-05-12 09:27:22.485922 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, *args, **kwargs)
  2015-05-12 09:27:22.485926 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 353, in 
wrapper
  2015-05-12 09:27:22.485930 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return f(self, *args, **kwargs)
  2015-05-12 09:27:22.485933 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 791, in 
list_users
  2015-05-12 09:27:22.485937 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi ref_list = driver.list_users(hints)
  2015-05-12 09:27:22.485941 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 82, 
in list_users
  2015-05-12 09:27:22.485944 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return self.user.get_all_filtered(hints)
  2015-05-12 09:27:22.485948 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 
269, in get_all_filtered
  2015-05-12 09:27:22.485951 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi return [self.filter_attributes(user) for user in 
self.get_all(query)]
  2015-05-12 09:27:22.485964 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 1863, in 
get_all
  2015-05-12 09:27:22.485968 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi for x in self._ldap_get_all(ldap_filter)
  2015-05-12 09:27:22.485972 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 1467, in 
_ldap_get_all
  2015-05-12 09:27:22.485975 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi attrs)
  2015-05-12 09:27:22.485979 2015-05-12 09:27:22.483 31012 TRACE 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 944, in 
search_s
  

[Yahoo-eng-team] [Bug 1465444] Re: Fernet key rotation removing keys early

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1465444

Title:
  Fernet key rotation removing keys early

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  When setting up Fernet key rotation with a maximum number of active
  keys set to 25, it turned out that 'keystone-manage fernet_rotate'
  started deleting two keys once it reached 13 existing keys. It
  would waver between 12 and 13 keys every time it was rotated. It looks
  like this might be related to the range of keys to remove being
  negative :

  excess_keys = ( keys[:len(key_files) - CONF.fernet_tokens.max_active_keys + 
1])
  .. ends up being excess_keys = ( keys[:-11] )
  .. which seems to be dipping back into the range of keys that should still be 
good and removing those.

  Adding something like: "if len(key_files) -
  CONF.fernet_tokens.max_active_keys + 1 >= 0" for the purge excess keys
  section seemed to allow us to generate all 25 keys, then rotate as
  normal. Once we hit the full 25 keys, this additional line was no
  longer needed.

  Attaching some log information showing the available keys going from
  12, 13, 12, 13.
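
  The arithmetic is easy to reproduce in isolation (illustrative numbers):

    keys = list(range(13))          # 13 key files currently on disk
    max_active_keys = 25

    cut = len(keys) - max_active_keys + 1     # 13 - 25 + 1 = -11
    print(keys[:cut])                         # [0, 1] -- two good keys removed

    # Guarding against the negative index keeps every key until the cap:
    excess = keys[:cut] if cut > 0 else []
    print(excess)                             # []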

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1465444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287757] Re: Optimization: Don't prune events on every get

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1287757

Title:
  Optimization:  Don't prune events on every get

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in Keystone liberty series:
  Fix Released

Bug description:
  _prune_expired_events_and_get always locks the backend. Store the time
  of the oldest event so that the prune process can be skipped if none
  of the events have timed out.

  (decided at keystone midcycle - 2015/07/17) -- MorganFainberg
  The easiest solution is to do the prune on issuance of a new revocation event 
instead of on the get.
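
  A rough sketch of that idea (names and in-memory storage are illustrative
  only):

    import time

    class RevocationStore(object):
        def __init__(self):
            self._events = []      # (expires_at, event) pairs
            self._oldest = None    # expiry time of the oldest stored event

        def add(self, event, expires_at):
            self._events.append((expires_at, event))
            if self._oldest is None or expires_at < self._oldest:
                self._oldest = expires_at
            self._prune()          # prune on issuance, not on every get

        def get_all(self):
            # Read path: no locking or pruning needed here any more.
            return [event for _, event in self._events]

        def _prune(self):
            now = time.time()
            if self._oldest is not None and self._oldest > now:
                return             # nothing can have expired yet; skip the scan
            remaining = [(exp, ev) for exp, ev in self._events if exp > now]
            self._events = remaining
            self._oldest = min(exp for exp, _ in remaining) if remaining else None

    store = RevocationStore()
    store.add('revocation-event-x', time.time() + 3600)
    print(store.get_all())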

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1287757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448286] Re: unicode query string raises UnicodeEncodeError

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1448286

Title:
  unicode query string raises UnicodeEncodeError

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  The logging in keystone.common.wsgi is unable to handle unicode query
  strings. The simplest example would be:

$ curl http://localhost:35357/?Ϡ

  This will fail with a backtrace similar to:

2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi   File 
".../keystone/keystone/common/wsgi.py", line 234, in __call__
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi 'params': 
urllib.urlencode(req.params)})
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/urllib.py", line 1311, in urlencode
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi k = 
quote_plus(str(k))
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi 
UnicodeEncodeError: 'ascii' codec can't encode character u'\u03e0' in position 
0: ordinal not in range(128)
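
  A minimal reproduction and workaround sketch (the explicit encoding step
  is the point; this is not the exact patch that landed):

    try:                                     # Python 2, where the bug occurs
        from urllib import urlencode
    except ImportError:                      # Python 3 handles unicode natively
        from urllib.parse import urlencode

    params = {u'\u03e0': u''}

    def encode_params(params):
        # urlencode() calls str() on each key/value, which blows up on
        # non-ASCII unicode under Python 2; encoding to UTF-8 first avoids it.
        encoded = {}
        for key, value in params.items():
            if isinstance(key, type(u'')):
                key = key.encode('utf-8')
            if isinstance(value, type(u'')):
                value = value.encode('utf-8')
            encoded[key] = value
        return urlencode(encoded)

    print(encode_params(params))             # %CF%A0=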

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1448286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491131] Re: Ipset race condition

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491131

Title:
  Ipset race condition

Status in neutron:
  In Progress
Status in neutron juno series:
  Confirmed
Status in neutron kilo series:
  Fix Released

Bug description:
  Hello,

  We have been using ipsets in neutron since juno.  We have upgraded our
install to kilo a month or so ago and we have experienced 3 issues with
  ipsets.

  The issues are as follows:
  1.) Iptables attempts to apply rules for an ipset that was not added
  2.) iptables attempts to apply rules for an ipset that was removed, but still 
referenced in the iptables config
  3.) ipset churns trying to remove an ipset that has already been removed.

  For issue one and two I am unable to get the logs for these issues
  because neutron was dumping the full iptables-restore entries to log
  once every second for a few hours and eventually filled up the disk
  and we removed the file to get things working again.

  For issue 3.) I have the start of the logs here:
  2015-08-31 12:17:00.100 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
29355e52-bae1-44b2-ace6-5bc7ce497d32 not present in bridge br-int
  2015-08-31 12:17:00.101 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.101 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'2aa0f79d-4983-4c7a-b489-e0612c482e36']
  2015-08-31 12:17:00.861 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:00.862 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.862 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:01.499 4581 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.500 6840 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.608 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:01.609 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:01.609 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:02.358 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:02.359 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:02.359 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.108 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:03.109 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.109 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.855 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
fddff586-9903-47ad-92e1-b334e02e9d1c not present in bridge br-int
  2015-08-31 12:17:03.855 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.856 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'3f706749-f8bb-41ab-aa4c-a0925dc67bd4']
  2015-08-31 12:17:03.919 4581 INFO neutron.agent.securitygroups_rpc 
[req-1872b212-b537-41cc-96af-0c6ad380824c ] Security group member 

[Yahoo-eng-team] [Bug 1430394] Re: neutron port-delete operation throws HTTP 500, if port is lb-vip

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430394

Title:
  neutron port-delete operation throws HTTP 500, if  port is lb-vip

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  1. create a VIP for existed load-balancer

  # neutron lb-vip-create --name vip --protocol-port 80 --protocol HTTP
  --subnet-id  LB

  2. obtain the id of this new VIP by  neutron port-list

  # neutron port-list
  
+--+--+---+-+
  | id   | name 
| mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 6bbfbc5b-93d2-4791-bb8a-ef292f04aed1 | 
vip-0093a88f-3c4c-4e84-a9d4-14e9264faa5a | fa:16:3e:7d:b9:b0 | {"subnet_id": 
"b22172b7-05ee-42b8-b3b9-48a312fdfc97", "ip_address": "192.168.10.5"} |

  3.  # neutron port-delete 6bbfbc5b-93d2-4791-bb8a-ef292f04aed1
  Request Failed: internal server error while processing your request.

  # neutron --verbose port-delete 6bbfbc5b-93d2-4791-bb8a-ef292f04aed1
  DEBUG: neutronclient.neutron.v2_0.port.DeletePort 
run(Namespace(id=u'6bbfbc5b-93d2-4791-bb8a-ef292f04aed1', 
request_format='json'))
  DEBUG: neutronclient.client
  ...
  DEBUG: neutronclient.client
  REQ: curl -i 
http://10.162.80.155:9696/v2.0/ports/6bbfbc5b-93d2-4791-bb8a-ef292f04aed1.json 
-X DELETE -H "X-Auth-Token: MIINoQYJKoZIhvcNAQcCoII..
  .Yr80gJf7djQE1JI+PA-Q==" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "User-Agent: python-neutronclient"

  DEBUG: neutronclient.client RESP:{'date': 'Tue, 10 Mar 2015 15:09:30
  GMT', 'status': '500', 'content-length': '88', 'content-type':
  'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-
  75f8c9ca-e273-4e3f-bc4d-9db7d7828794'} {"NeutronError": "Request
  Failed: internal server error while processing your request."}

  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": "Request 
Failed: internal server error while processing your request."}
  ERROR: neutronclient.shell Request Failed: internal server error while 
processing your request.
  Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 526, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
    File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 79, in 
run_command
  return cmd.run(known_args)
    File 
"/usr/lib/python2.6/site-packages/neutronclient/neutron/v2_0/__init__.py", line 
509, in run
  obj_deleter(_id)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
111, in with_params
  ret = self.function(instance, *args, **kwargs)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
326, in delete_port
  return self.delete(self.port_path % (port))
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1232, in delete
  headers=headers, params=params)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1221, in retry_request
  headers=headers, params=params)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1164, in do_request
  self._handle_fault_response(status_code, replybody)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
1134, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
    File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 
84, in exception_handler_v20
  message=error_dict)
  NeutronClientException: Request Failed: internal server error while 
processing your request.
  DEBUG: neutronclient.shell clean_up DeletePort
  DEBUG: neutronclient.shell Got an error: Request Failed: internal server 
error while processing your request.
  [root@kvalenti-controller ~(keystone_admin)]# neutron port-delete 
6bbfbc5b-93d2-4791-bb8a-ef292f04aed1
  Request Failed: internal server error while processing your request.
  #

  It would be better to return "Unable to delete" or some other specific
  error code; returning a bare "500" to clients like this is not helpful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1442787] Re: Mapping openstack_user attribute in k2k assertions with different domains

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1442787

Title:
  Mapping openstack_user attribute in k2k assertions with different
  domains

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  We can have two users with the same username in different domains. So
  if we have a "User A" in "Domain X" and a "User A" in "Domain Y",
  there is no way to tell which "User A" is being used in a SAML
  assertion generated by this IdP (we have only the openstack_user
  attribute in the SAML assertion).
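
  For illustration, with only the username the two identities collide;
  carrying the domain as well (the extra attribute name below is made up
  for the example) disambiguates them:

    user_in_x = {'openstack_user': 'User A',
                 'openstack_user_domain': 'Domain X'}   # hypothetical attribute
    user_in_y = {'openstack_user': 'User A',
                 'openstack_user_domain': 'Domain Y'}

    def same_identity(a, b):
        return (a['openstack_user'] == b['openstack_user'] and
                a.get('openstack_user_domain') == b.get('openstack_user_domain'))

    print(same_identity(user_in_x, user_in_y))   # False -- no longer ambiguous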

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1442787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388698] Re: dhcp_agents_per_network does not work appropriately.

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388698

Title:
  dhcp_agents_per_network does not work appropriately.

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  neutron.conf
  -
  # Number of DHCP agents scheduled to host a network. This enables redundant
  # DHCP agents for configured networks.
  # dhcp_agents_per_network = 1
  dhcp_agents_per_network = 1
  -

  Conditions:
    A) multiple network nodes.
    B) dhcp-agents are alive on each network node.
    C) one network is hosted by one dhcp-agent.

     ex:
   network node1:  dhcp-agent1 hosts network1.
   network node2:  dhcp-agent2 hosts nothing.

  procedures:

  1)  stop dhcp-agent1.
  2)  port create using network1
  3)  start dhcp-agent1.

  result:

   network node1:  dhcp-agent1 hosts network1.
   network node2:  dhcp-agent2 hosts network1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1388698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443607] Re: Linux Bridge: can't change the VM's bridge and tap interface MTU at Compute node.

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443607

Title:
  Linux Bridge: can't change the VM's bridge and tap interface MTU at
  Compute node.

Status in networking-cisco:
  New
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  I use DevStack to deploy OpenStack with Linux Bridge instead of OVS in
  a multi-node set up.

  I'm testing jumbo frames and want to set MTU to 9000.

  At the Network node, the bridges and tap interfaces are created with
  MTU = 9000:

  localadmin@qa4:~/devstack$ brctl show
  bridge name   bridge id STP enabled   interfaces
  brq09047ecb-1c   8000.7c69f62c4f2f   no  eth1

tapedbcd5b1-a6
  brq319688ab-93   8000.3234d6ee3a18  no   bond0.300

tap4e230a86-cb

tapfddaf12e-85
  virbr08000.  yes

  localadmin@qa4:~/devstack$ ifconfig brq09047ecb-1c
  brq09047ecb-1c Link encap:Ethernet  HWaddr 7c:69:f6:2c:4f:2f
inet6 addr: fe80::3c79:c8ff:fe23:2fe7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:696 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:30617 (30.6 KB)  TX bytes:648 (648.0 B)

  localadmin@qa4:~/devstack$ ifconfig brq319688ab-93
  brq319688ab-93 Link encap:Ethernet  HWaddr 32:34:d6:ee:3a:18
inet6 addr: fe80::e0ec:3bff:fe09:4318/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:30 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4236 (4.2 KB)  TX bytes:648 (648.0 B)

  localadmin@qa4:~/devstack$ ifconfig tapedbcd5b1-a6
  tapedbcd5b1-a6 Link encap:Ethernet  HWaddr ae:fb:64:53:f7:2d
inet6 addr: fe80::acfb:64ff:fe53:f72d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:65 errors:0 dropped:0 overruns:0 frame:0
TX packets:947 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10223 (10.2 KB)  TX bytes:80510 (80.5 KB)

  localadmin@qa4:~/devstack$ ifconfig tap4e230a86-cb
  tap4e230a86-cb Link encap:Ethernet  HWaddr 32:34:d6:ee:3a:18
inet6 addr: fe80::3034:d6ff:feee:3a18/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:2073 errors:0 dropped:0 overruns:0 frame:0
TX packets:2229 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8878139 (8.8 MB)  TX bytes:8914532 (8.9 MB)

  localadmin@qa4:~/devstack$ ifconfig tapfddaf12e-85
  tapfddaf12e-85 Link encap:Ethernet  HWaddr d2:33:29:9b:2c:e8
inet6 addr: fe80::d033:29ff:fe9b:2ce8/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:152 errors:0 dropped:0 overruns:0 frame:0
TX packets:295 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15237 (15.2 KB)  TX bytes:51849 (51.8 KB)


  The instance launched at the Compute node has interface eth0 MTU =
  9000:

  ubuntu@qa5-vm2:~$ ifconfig
  eth0  Link encap:Ethernet  HWaddr fa:16:3e:05:36:58
inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe05:3658/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:1169 errors:0 dropped:0 overruns:0 frame:0
TX packets:384 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1408206 (1.4 MB)  TX bytes:1336535 (1.3 MB)

  
  However, the associated bridge and tap interface MTU is set to default 1500:

  localadmin@qa5:~/devstack$ brctl show
  bridge name   bridge id STP enabled   
interfaces
  brq319688ab-93   8000.6805ca302558   no   bond0.300

tapa7acee8a-54
  virbr08000.  yes

  localadmin@qa5:~/devstack$ ifconfig brq319688ab-93
  brq319688ab-93 Link encap:Ethernet  HWaddr 68:05:ca:30:25:58
inet6 addr: fe80::a8ef:dfff:feb1:8eec/64 Scope:Link
UP 

[Yahoo-eng-team] [Bug 1365476] Re: HA routers interact badly with l2pop

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365476

Title:
  HA routers interact badly with l2pop

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Since internal HA router interfaces are created on more than a single
  agent, this interacts badly with l2pop that assumes that a Neutron
  port is located in a certain place in the network. We'll need to
  report to l2pop when a HA router transitions to an active state, so
  the port location is changed.

  Patch is here:
  https://review.openstack.org/#/c/141114/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274034] Re: Neutron firewall anti-spoofing does not prevent ARP poisoning

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274034

Title:
  Neutron firewall anti-spoofing does not prevent ARP poisoning

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Invalid
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  The neutron firewall driver 'iptables_firewall' does not prevent ARP cache 
poisoning.
  When anti-spoofing rules are handled by Nova, a list of rules was added 
through the libvirt network filter feature:
  - no-mac-spoofing
  - no-ip-spoofing
  - no-arp-spoofing
  - nova-no-nd-reflection
  - allow-dhcp-server

  Actually, the neutron firewall driver 'iptables_firewall' handles only
  MAC and IP anti-spoofing rules.

  This is a security vulnerability, especially on shared networks.

  Reproduce an ARP cache poisoning and man in the middle:
  - Create a private network/subnet 10.0.0.0/24
  - Start 2 VM attached to that private network (VM1: IP 10.0.0.3, VM2: 
10.0.0.4)
  - Log on VM1 and install ettercap [1]
  - Launch command: 'ettercap -T -w dump -M ARP /10.0.0.4/ // output:'
  - Log on too on VM2 (with VNC/spice console) and ping google.fr => ping is ok
  - Go back on VM1, and see VM2's ping to google.fr going through VM1 
instead of being sent directly to the network gateway; VM1 forwards it to 
the gw. The ICMP capture looks something like this [2]
  - Go back to VM2 and check the ARP table => the MAC address associated to the 
GW is the MAC address of VM1

  [1] http://ettercap.github.io/ettercap/
  [2] http://paste.openstack.org/show/62112/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357068] Re: Arping doesn't work with IPv6

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357068

Title:
  Arping doesn't work with IPv6

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Neutron tries to query whether a host exists by arping, but in IPv6 there is 
no ARP, only NDP.
  Some other tool should be used, for example:  ndisc6 
(http://www.remlab.net/ndisc6/), ndp 
(http://www.freebsd.org/cgi/man.cgi?query=ndp=8). 
  RFC Neighbor Discovery for IP version 6: http://tools.ietf.org/html/rfc4861

  Neutron log:
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-dfe30f07-f4cd-47db-ac31-347b87435c83', 'arping', '-A', '-I', 
'qr-bdaba7ef-6c', '-c', '3', 'fd02::1']
  Exit code: 2
  Stdout: ''
  Stderr: 'arping: unknown host fd02::1
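
  A sketch of the kind of selection the report is asking for, echoing its
  own tool suggestions (command lines are illustrative, not the exact fix):

    import netaddr

    def neighbour_advert_cmd(iface, address, count=3):
        if netaddr.IPAddress(address).version == 4:
            return ['arping', '-A', '-I', iface, '-c', str(count), address]
        # IPv6 has no ARP; an NDP tool such as ndisc6 (suggested above)
        # would have to be used instead.
        return ['ndisc6', address, iface]

    print(neighbour_advert_cmd('qr-bdaba7ef-6c', 'fd02::1'))
    # ['ndisc6', 'fd02::1', 'qr-bdaba7ef-6c']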

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411163] Re: No fdb entries added when failover dhcp and l3 agent together

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411163

Title:
  No fdb entries added when failover dhcp and l3 agent together

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  [Env]

  OpenStack: icehouse
  OS: ubuntu
  enable l2 population
  enable gre tunnel

  [Description]
  If the dhcp and l3 agents are on the same host, then after this host goes down 
there is a chance that both get rescheduled to the same new host, and sometimes 
the ovs tunnel can't be created on the newly scheduled host.

  [Root cause]
  After debugging, we found below log:
  2015-01-14 13:44:18.284 9815 INFO neutron.plugins.ml2.drivers.l2pop.db 
[req-e36fe1fe-a08c-43c9-9d9c-75fe714d6f91 None] query:[, ]

  The above shows there is a chance that two ACTIVE ports show up in the db 
together, but from l2 pop mech_driver:
  "
  if agent_active_ports == 1 or (
  self.get_agent_uptime(agent) < 
cfg.CONF.l2pop.agent_boot_time):
  "
  only under the above condition will the fdb entry be added and notified to 
the agent, so failures show up.
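
  Plugging the reported situation into that condition shows why nothing gets
  sent (toy numbers, same shape as the quoted check):

    import time

    agent_started_at = time.time() - 600    # agent has been up for 10 minutes
    agent_boot_time = 180                    # cfg.CONF.l2pop.agent_boot_time
    agent_active_ports = 2                   # stale ACTIVE row + the new port

    def get_agent_uptime():
        return time.time() - agent_started_at

    if agent_active_ports == 1 or get_agent_uptime() < agent_boot_time:
        print('fdb entries added and notified to the agents')
    else:
        print('nothing sent -- the tunnel to the new host never gets created')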

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474618] Re: N1KV network and port creates failing from dashboard

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474618

Title:
  N1KV network and port creates failing from dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Due to the change in name of the "profile" attribute in Neutron
  attribute extensions for networks and ports, network and port
  creations fail from the dashboard since dashboard is still using
  "n1kv:profile_id" rather than "n1kv:profile".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474241] Re: Need a way to disable simple tenant usage

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474241

Title:
  Need a way to disable simple tenant usage

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Frequent calls to Nova's API when displaying the simple tenant usage
  can lead to efficiency problems and even crash on the Nova side,
  especially when there are a lot of deleted nodes in the database. We
  are working on resolving that, but in the mean time, it would be nice
  to have a way of disabling the simple tenant usage stats on the
  Horizon side as a workaround.

  Horizon enables that option depending on whether it's supported on the
  Nova side. In version 2.0 of the API we can simply disable the support
  for it on the Nova side, but that won't be possible in version 2.1
  anymore, so we need a configuration option on the Horizon side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490403] Re: Gate failing on test_routerrule_detail

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490403

Title:
  Gate failing on test_routerrule_detail

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The gate/jenkins checks are currently bombing out on this error:

  ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
  --
  Traceback (most recent call last):
File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, 
in instance_stub_out
  return fn(self, *args, **kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 711, in test_routerrule_detail
  res = self._get_detail(router)
File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, 
in instance_stub_out
  return fn(self, *args, **kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 49, in _get_detail
  args=[router.id]))
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 470, in get
  **extra)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 286, in get
  return self.generic('GET', path, secure=secure, **r)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 358, in generic
  return self.request(**r)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 440, in request
  six.reraise(*exc_info)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 146, in get
  context = self.get_context_data(**kwargs)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/views.py", 
line 140, in get_context_data
  context = super(DetailView, self).get_context_data(**kwargs)
File "/home/ubuntu/horizon/horizon/tables/views.py", line 107, in 
get_context_data
  context = super(MultiTableMixin, self).get_context_data(**kwargs)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 56, in 
get_context_data
  exceptions.handle(self.request)
File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/home/ubuntu/horizon/horizon/tabs/views.py", line 54, in 
get_context_data
  context["tab_group"].load_tab_data()
File "/home/ubuntu/horizon/horizon/tabs/base.py", line 128, in load_tab_data
  exceptions.handle(self.request)
File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/home/ubuntu/horizon/horizon/tabs/base.py", line 125, in load_tab_data
  tab._data = tab.get_context_data(self.request)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 82, in get_context_data
  data["rulesmatrix"] = self.get_routerrulesgrid_data(rules)
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 127, in get_routerrulesgrid_data
  source, target, rules))
File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 159, in _get_subnet_connectivity
  if (int(dst.network) >= int(rd.broadcast) or
  TypeError: int() argument must be a string or a number, not 'NoneType'
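
  The failing comparison points at .broadcast coming back as None (as newer
  netaddr releases return for /31 and /32 networks); a guard along these
  lines avoids the TypeError (sketch, not the merged fix):

    from netaddr import IPNetwork

    def last_address(cidr):
        net = IPNetwork(cidr)
        # Fall back to the highest address when .broadcast is None.
        return int(net.broadcast) if net.broadcast is not None else int(net[-1])

    print(last_address('10.0.0.0/24'))   # 167772415
    print(last_address('10.0.0.5/32'))   # 167772165 instead of a TypeError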

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490403/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1474512] Re: STATIC_URL statically defined for stack graphics

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474512

Title:
  STATIC_URL statically defined for stack graphics

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The svg and gif images are still using '/static/' as the base url.
  Since WEBROOT is configurable and STATIC_URL is as well, this needs
  to be fixed or the images won't be found when either has been set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469563] Re: Fernet tokens do not maintain expires time across rescope (V2 tokens)

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469563

Title:
  Fernet tokens do not maintain expires time across rescope (V2 tokens)

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in Keystone liberty series:
  Fix Released

Bug description:
  Fernet tokens do not maintain the expiration time when rescoping
  tokens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1469563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492065] Re: Create instance testcase -- "test_launch_form_keystone_exception" broken

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492065

Title:
  Create instance testcase -- "test_launch_form_keystone_exception"
  broken

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The test_launch_form_keystone_exception test method calls the handle
  method of the LaunchInstance class. Changes made to the handle method
  in [1] introduced a new neutron api call that was not being mocked
  out, causing an unexpected exception in the
  _cleanup_ports_on_failed_vm_launch function of the create_instance
  module, while running the test_launch_form_keystone_exception unit
  test

  [1] https://review.openstack.org/#/c/202347/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500385] Re: Change region selector requires 2 clicks to open

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500385

Title:
  Change region selector requires 2 clicks to open

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  This behavior is true only for stable/kilo branch and is not seen in
  liberty release due to a different codebase.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479943] Re: XmlBodyMiddleware stubs break existing configs

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1479943

Title:
  XmlBodyMiddleware stubs break existing configs

Status in Keystone:
  Invalid
Status in Keystone kilo series:
  Fix Released

Bug description:
  The Kilo Keystone release dropped support for requests with XML
  bodies, but included shims to (presumably) prevent existing configs
  from breaking. This works as desired for XmlBodyMiddleware, but not
  XmlBodyMiddlewareV2 and XmlBodyMiddlewareV3. As a result, all client
  requests to a pipeline with either of those filters will receive a 500
  response and the server's logs look like

  2015-07-30 19:06:57.029 22048 DEBUG keystone.middleware.core [-] RBAC: 
auth_context: {} process_request 
/vagrant/swift3/.tox/keystone/local/lib/python2.7/site-packages/keystone/middleware/core.py:239
  2015-07-30 19:06:57.029 22048 ERROR keystone.common.wsgi [-] 
'XmlBodyMiddlewareV2' object has no attribute 'application'
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi   File 
"/vagrant/swift3/.tox/keystone/local/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 452, in __call__
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi response = 
request.get_response(self.application)
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi AttributeError: 
'XmlBodyMiddlewareV2' object has no attribute 'application'
  2015-07-30 19:06:57.029 22048 TRACE keystone.common.wsgi
  2015-07-30 19:06:57.055 22048 INFO eventlet.wsgi.server [-] 127.0.0.1 - - 
[30/Jul/2015 19:06:57] "GET /v2.0/tenants HTTP/1.1" 500 423 0.027812

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1479943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453264] Re: iptables_manager can run very slowly when a large number of security group rules are present

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453264

Title:
  iptables_manager can run very slowly when a large number of security
  group rules are present

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  We have customers that typically add a few hundred security group
  rules or more.  We also typically run 30+ VMs per compute node.  When
  about 10+ VMs with a large SG set all get scheduled to the same node,
  the L2 agent (OVS) can spend many minutes in the
  iptables_manager.apply() code, so much so that by the time all the
  rules are updated, the VM has already tried DHCP and failed, leaving
  it in an unusable state.

  While there have been some patches that tried to address this in Juno
  and Kilo, they've either not helped as much as necessary, or broken
  SGs completely due to re-ordering the of the iptables rules.

  I've been able to show some pretty bad scaling with just a handful of
  VMs running in devstack based on today's code (May 8th, 2015) from
  upstream Openstack.

  Here's what I tested:

  1. I created a security group with 1000 TCP port rules (you could
  alternately have a smaller number of rules and more VMs, but it's
  quicker this way)

  2. I booted VMs, specifying both the default and "large" SGs, and
  timed from the second it took Neutron to "learn" about the port until
  it completed its work

  3. I got a :( pretty quickly

  And here's some data:

  1-3 VM - didn't time, less than 20 seconds
  4th VM - 0:36
  5th VM - 0:53
  6th VM - 1:11
  7th VM - 1:25
  8th VM - 1:48
  9th VM - 2:14

  While it's busy adding the rules, the OVS agent is consuming pretty
  close to 100% of a CPU for most of this time (from top):

      PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    25767 stack  20   0  157936  76572   4416 R  89.2  0.5  50:14.28 python

  And this is with only ~10K rules at this point!  When we start
  crossing the 20K point VM boot failures start to happen.

  I'm filing this bug since we need to take a closer look at this in
  Liberty and fix it, it's been this way since Havana and needs some
  TLC.

  I've attached a simple script I've used to recreate this, and will
  start taking a look at options here.
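
  The attached script is not reproduced in this archive; a rough stand-in
  using the kilo-era python-neutronclient might look like this (group name
  and port numbers chosen arbitrarily):

    import os
    from neutronclient.v2_0 import client

    neutron = client.Client(username=os.environ['OS_USERNAME'],
                            password=os.environ['OS_PASSWORD'],
                            tenant_name=os.environ['OS_TENANT_NAME'],
                            auth_url=os.environ['OS_AUTH_URL'])

    # One security group with 1000 single-port TCP ingress rules.
    sg = neutron.create_security_group(
        {'security_group': {'name': 'large-sg'}})['security_group']
    for port in range(1, 1001):
        neutron.create_security_group_rule({'security_group_rule': {
            'security_group_id': sg['id'],
            'direction': 'ingress',
            'protocol': 'tcp',
            'port_range_min': port,
            'port_range_max': port,
        }})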

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452418] Re: Fernet tokens read from disk on every request

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452418

Title:
  Fernet tokens read from disk on every request

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  The fernet keys are stored (by default) in /etc/keystone/fernet-keys/
  in individual key files. All keys are read from disk on every request,
  so you end up with log spam like:

keystone.token.providers.fernet.utils [-] Loaded 2 encryption keys
  from: /etc/keystone/fernet-keys/

  Keystone really only needs to hit the disk periodically to check for a
  different set of keys, not on every request.
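
  One way to avoid the per-request reads, sketched purely for illustration
  (this is not the patch that eventually landed): cache the loaded keys and
  re-read the repository only when its mtime changes.

    import os

    _key_cache = {'mtime': None, 'keys': []}

    def load_keys(key_repository='/etc/keystone/fernet-keys/'):
        mtime = os.stat(key_repository).st_mtime
        if _key_cache['mtime'] != mtime:
            keys = []
            for name in sorted(os.listdir(key_repository)):
                with open(os.path.join(key_repository, name)) as f:
                    keys.append(f.read().strip())
            _key_cache.update(mtime=mtime, keys=keys)
        return _key_cache['keys']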

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1452418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478656] Re: Non-numeric filenames in key_repository will make Keystone explode

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478656

Title:
  Non-numeric filenames in key_repository will make Keystone explode

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  If one creates any files in that directory, such as an editor backup,
  Keystone will explode on startup or at the next key rotation because
  it assumes all filenames will pass int(filename).
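
  A minimal defensive sketch (not the actual Keystone change): skip
  repository entries whose names are not purely numeric instead of letting
  int() blow up.

    import os

    def key_files(key_repository='/etc/keystone/fernet-keys/'):
        found = []
        for name in os.listdir(key_repository):
            if not name.isdigit():
                # e.g. editor backups such as '0~' or '.0.swp'
                continue
            found.append((int(name), os.path.join(key_repository, name)))
        return sorted(found)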

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471967] Re: Fernet unit tests do not test persistence logic

2015-10-13 Thread Chuck Short
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1471967

Title:
  Fernet unit tests do not test persistence logic

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  There are some unit tests for the Fernet token provider that live
  outside of the functional-like tests (test_v3_auth.py, for example)
  [0]. These tests should include a test to assert that the Fernet token
  provider returns False when asked if its tokens need persistence [1].


  [0] 
https://github.com/openstack/keystone/blob/992d9ecbf4f563c42848147d4d66f8ec8efd4df0/keystone/tests/unit/token/test_fernet_provider.py
  [1] 
https://github.com/openstack/keystone/blob/992d9ecbf4f563c42848147d4d66f8ec8efd4df0/keystone/token/providers/fernet/core.py#L36-L38

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1471967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466642] Re: Intermittent failure in AgentManagementTestJSON.test_list_agent

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466642

Title:
  Intermittent failure in AgentManagementTestJSON.test_list_agent

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  This failure is fairly rare (6 occurrences in 48 hours):
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy5hcGkuYWRtaW4udGVzdF9hZ2VudF9tYW5hZ2VtZW50LkFnZW50TWFuYWdlbWVudFRlc3RKU09OLnRlc3RfbGlzdF9hZ2VudFwiIEFORCBtZXNzYWdlOlwiRkFJTEVEXCIgbWVzc2FnZTpcIm5ldXRyb24udGVzdHMuYXBpLmFkbWluLnRlc3RfYWdlbnRfbWFuYWdlbWVudC5BZ2VudE1hbmFnZW1lbnRUZXN0SlNPTi50ZXN0X2xpc3RfYWdlbnRcIiBBTkQgbWVzc2FnZTpcIkZBSUxFRFwiIEFORCB0YWdzOmNvbnNvbGUuaHRtbCIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNDY1ODM3MzIxMn0=

  Query:
  
message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
  AND message:"FAILED"
  
message:"neutron.tests.api.admin.test_agent_management.AgentManagementTestJSON.test_list_agent"
  AND message:"FAILED" AND tags:console.html

  the failure itself is rather silly. The test expects description to be
  None, whereas it is an empty string ->
  http://logs.openstack.org/08/188608/6/check/check-neutron-dsvm-
  api/fea6d1d/console.html#_2015-06-18_14_32_40_302

  Note: it looks similar to 1442494 but the failure mode is quite
  different.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477253] Re: ovs arp_responder unsuccessfully inserts IPv6 address into arp table

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477253

Title:
  ovs arp_responder unsuccessfully inserts IPv6 address into arp table

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The ml2 openvswitch arp_responder agent attempts to install IPv6
  addresses into the OVS arp response tables. The action obviously
  fails, reporting:

  ovs-ofctl: -:4: 2001:db8::x:x:x:x invalid IP address

  The end result is that the OVS br-tun arp tables are incomplete.

  The submitted patch verifies that the address is IPv4 before
  attempting to add the address to the table.
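
  The check described above amounts to something like the following sketch
  (illustrative only, not the exact patch):

    import netaddr

    def is_arp_responder_candidate(ip_address):
        # Only IPv4 addresses belong in the OVS ARP responder table; IPv6
        # uses neighbour discovery, not ARP.
        return netaddr.IPAddress(ip_address).version == 4

    # is_arp_responder_candidate('192.0.2.10')   -> True
    # is_arp_responder_candidate('2001:db8::10') -> False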

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463665] Re: Missing requirement for PLUMgrid Neutron Plugin

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463665

Title:
  Missing requirement for PLUMgrid Neutron Plugin

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  networking-plumgrid is missing from the requirements for the PLUMgrid Neutron Plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455042] Re: Stale metadata processes are not cleaned up on l3 agent sync

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455042

Title:
  Stale metadata processes are not cleaned up on l3 agent sync

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The L3 agent cleans up stale namespaces of deleted routers during sync,
  but the corresponding metadata processes keep running (forever :0), which
  wastes resources.

  Can be easily reproduced by deleting a router while the l3 agent is
  stopped. After the agent is started it will delete the namespace but not
  the metadata process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466750] Re: router-interface-add with no address causes internal error

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466750

Title:
  router-interface-add with no address causes internal error

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  for example:
  neutron net-create hoge
  neutron port-create --name hoge-port hoge
  neutron router-create hoge-router
  neutron router-interface-add hoge-router port=hoge-port

  this is a regression in commit
  I7d4e8194815e626f1cfa267f77a3f2475fdfa3d1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460562] Re: ipset can't be destroyed when last sg rule is deleted

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460562

Title:
  ipset can't be destroyed when last sg rule is deleted

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  reproduce steps:
  1. a VM A is in the default security group
  2. the default security group has two rules: 1. allow all traffic out;
2. allow itself as remote_group in
  3. first delete rule 1, then delete rule 2

  I found that the iptables rules on the compute node where VM A resides
  were not reloaded, and the relevant ipset was not destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483266] Re: q-svc fails to start in kilo due to "ImportError: No module named neutron_vpnaas.services.vpn.service_drivers.ipsec"

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483266

Title:
  q-svc fails to start in kilo due to "ImportError: No module named
  neutron_vpnaas.services.vpn.service_drivers.ipsec"

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  http://logs.openstack.org/70/210870/1/check/gate-grenade-dsvm-
  neutron/20f794e/logs/new/screen-q-svc.txt.gz?level=TRACE

  Looks like this is blocking kilo jobs that use neutron:

  2015-08-10 00:37:30.529 8402 ERROR neutron.common.config [-] Unable to load 
neutron from configuration file /etc/neutron/api-paste.ini.
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config Traceback (most 
recent call last):
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/opt/stack/new/neutron/neutron/common/config.py", line 227, in load_paste_app
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config app = 
deploy.loadapp("config:%s" % config_path, name=app_name)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config return 
loadobj(APP, uri, name=name, **kw)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config return 
context.create()
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config return 
self.object_type.invoke(self)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config 
**context.local_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config val = 
callable(*args, **kw)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 28, in urlmap_factory
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config app = 
loader.get_app(app_name, global_conf=global_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config name=name, 
global_conf=global_conf).create()
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config return 
self.object_type.invoke(self)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config 
**context.local_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config val = 
callable(*args, **kw)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/opt/stack/new/neutron/neutron/auth.py", line 71, in pipeline_factory
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config app = 
loader.get_app(pipeline[-1])
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config name=name, 
global_conf=global_conf).create()
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config return 
self.object_type.invoke(self)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 146, in invoke
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config return 
fix_call(context.object, context.global_conf, **context.local_conf)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config val = 
callable(*args, **kw)
  2015-08-10 00:37:30.529 8402 TRACE neutron.common.config   File 

[Yahoo-eng-team] [Bug 1489671] Re: Neutron L3 sync_routers logic processes all router ports from the database even when syncing a specific router

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489671

Title:
  Neutron L3 sync_routers logic processes all router ports from the
  database even when syncing a specific router

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Recreate Steps:
  1) Create multiple routers and add a router interface to each router using
neutron ports from different networks.
  For example, below there are 4 routers with 4, 2, 1 and 2 ports
respectively (so 9 router ports in the database in total).
  [root@controller ~]# neutron router-list
  
+--+---+---+-+---+
  | id   | name  | 
external_gateway_info | distributed | ha|
  
+--+---+---+-+---+
  | b2b466d2-1b1a-488d-af92-9d83d1c0f2c0 | routername1   | null 
 | False   | False |
  | 919f4312-41d1-47a8-b2b5-dc7f14d3f331 | routername2   | null 
 | False   | False |
  | 2854df21-7fe8-4968-a372-3c4a5c3d4ecf | routername3   | null 
 | False   | False |
  | daf51173-0084-4881-9ba3-0a9ac80d7d7b | routername4   | null 
 | False   | False |
  
+--+---+---+-+---+

  [root@controller ~]# neutron router-port-list routername1
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 6194f014-e7c1-4d0b-835f-3cbf94839b9b |  | fa:16:3e:a9:43:7a | 
{"subnet_id": "84b1e75e-9ce3-4a85-a9c6-32133fca081d", "ip_address": "77.0.0.1"} 
|
  | bcac4f23-b74d-4cb3-8bbe-f1d59dff724f |  | fa:16:3e:72:59:a1 | 
{"subnet_id": "80dc7dfe-d353-4c51-8882-934da8bbbe8b", "ip_address": "77.1.0.1"} 
|
  | 39bb4b6c-e439-43a3-85f2-cade8bce8d3c |  | fa:16:3e:9a:65:e6 | 
{"subnet_id": "b54cb217-98b8-41e1-8b6f-fb69d84fcb56", "ip_address": "80.0.0.1"} 
|
  | 3349d441-4679-4176-9f6f-497d39b37c74 |  | fa:16:3e:eb:43:b5 | 
{"subnet_id": "8fad7ca7-ae0d-4764-92d9-a5e23e806eba", "ip_address": "81.0.0.1"} 
|
  
+--+--+---+-+
  [root@controller ~]# neutron router-port-list routername2
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 77ac0964-57bf-4ed2-8822-332779e427f2 |  | fa:16:3e:ea:83:f8 | 
{"subnet_id": "2f07dbf4-9c5c-477c-b992-1d3dd284b987", "ip_address": "95.0.0.1"} 
|
  | aeeb920e-5c73-45ba-8fe9-f6dafabdab68 |  | fa:16:3e:ee:43:a8 | 
{"subnet_id": "15c55c9f-2051-4b4d-9628-552b86543e4e", "ip_address": "97.0.0.1"} 
|
  
+--+--+---+-+
  [root@controller ~]# neutron router-port-list routername3
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | f792ac7d-0bdd-4dbe-bafb-7822ce388c71 |  | fa:16:3e:fe:b7:f7 | 
{"subnet_id": "b62990de-0468-4efd-adaf-d421351c6a8b", "ip_address": "66.0.0.1"} 
|
  
+--+--+---+-+
  [root@controller ~]# neutron router-port-list routername4
  
+--+--+---+---+
  | id   | name | mac_address   | fixed_ips 
 

[Yahoo-eng-team] [Bug 1481613] Re: [DVR] DVR router does not update a service port's arp entry after the port is created.

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481613

Title:
  [DVR] DVR router does not update a service port's arp entry after the
  port is created.

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  When a VM is created, the DVR router broadcasts the VM's ARP details to
  all the l3 agents hosting it, which lets DVR forward traffic at the link
  layer. But when a port is attached to a service such as LBaaS, its ARP
  entry is not broadcast, so DVR does not learn its MAC address; as a
  result, VMs in other subnets cannot reach the service port through the
  DVR router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381413] Re: Switch Region dropdown doesn't work

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381413

Title:
  Switch Region dropdown doesn't work

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  In case Horizon was set up to work with multiple regions (by editing
  AVAILABLE_REGIONS in settings.py), region selector drop-down appears
  in top right corner. But it doesn't work now.

  Suppose I login into the Region1, then if I try to switch to Region2,
  it redirects me to the login view of django-openstack-auth
  
https://github.com/openstack/horizon/blob/2014.2.rc1/horizon/templates/horizon/common/_region_selector.html#L11

  There I am being immediately redirected to the
  settings.LOGIN_REDIRECT_URL because I am already authenticated at
  Region1, so I cannot view Region2 resources if I switch to it via top
  right dropdown. Selecting region at login page works though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1381413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474228] Re: inline edit failed in user table because description doesn't exist

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474228

Title:
  inline edit failed in user table because description doesn't exist

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  inline edit failed in user table because description doesn't exist

  Environment:
  ubuntu devstack stable/kilo

  horizon commit id: c2b543bb8f3adb465bb7e8b3774b3dd1d5d999f6
  keystone commit id: 8125a8913d233f3da0eaacd09aa8e0b794ea98cb

  $keystone --version
  
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/keystoneclient/shell.py:64:
 DeprecationWarning: The keystone CLI is deprecated in favor of 
python-openstackclient. For a Python library, continue using 
python-keystoneclient.
    'python-keystoneclient.', DeprecationWarning)
  1.6.0

  
  How to reproduce the bug:

  
  1. create a new user. (important)
  2. Try to edit user using inline edit.

  
  Note: 

  If the user has ever been edited via the update user form, the exception
  will not be raised when using inline edit, because the update form sets
  description to an empty string.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/forms.py#L195

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/users/forms.py#L228

  
  Traceback:
  Internal Server Error: /identity/users/
  Traceback (most recent call last):
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/tables/views.py", line 224, in post
  return self.get(request, *args, **kwargs)
    File "/home/user/github/horizon/horizon/tables/views.py", line 160, in get
  handled = self.construct_tables()
    File "/home/user/github/horizon/horizon/tables/views.py", line 145, in 
construct_tables
  preempted = table.maybe_preempt()
    File "/home/user/github/horizon/horizon/tables/base.py", line 1533, in 
maybe_preempt
  new_row)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1585, in 
inline_edit_handle
  error = exceptions.handle(request, ignore=True)
    File "/home/user/github/horizon/horizon/exceptions.py", line 361, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1580, in 
inline_edit_handle
  cell_name)
    File "/home/user/github/horizon/horizon/tables/base.py", line 1606, in 
inline_update_action
  self.request, datum, obj_id, cell_name, new_cell_value)
    File "/home/user/github/horizon/horizon/tables/actions.py", line 952, in 
action
  self.update_cell(request, datum, obj_id, cell_name, new_cell_value)
    File 
"/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py",
 line 210, in update_cell
  horizon_exceptions.handle(request, ignore=True)
    File "/home/user/github/horizon/horizon/exceptions.py", line 361, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File 
"/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py",
 line 200, in update_cell
  description=user_obj.description,
    File 
"/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/keystoneclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: description
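
  The failing line reads the attribute unconditionally; a tolerant sketch
  (illustrative only, not necessarily the merged fix) would fall back to an
  empty string for users that have never had a description set:

    # instead of description=user_obj.description, which raises
    # AttributeError for users created without a description:
    description = getattr(user_obj, 'description', '')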

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423453] Re: Delete ports when Launching VM fails when plugin is N1K

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1423453

Title:
  Delete ports when Launching VM fails when plugin is N1K

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  When the plugin is Cisco N1KV, ports get created before launching the VM
  instance. But when the launch fails, the ports are not cleaned up in the
  except block.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1423453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467935] Re: widget attributes changed

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467935

Title:
  widget attributes changed

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
   In Django 1.8, widget attribute data-date-picker=True will be
  rendered as 'data-date-picker'. To preserve current behavior, use the
  string 'True' instead of the boolean value.
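
  A minimal sketch of the workaround (the form and field names are made up):

    from django import forms

    class ExampleForm(forms.Form):
        start_date = forms.DateField(widget=forms.DateInput(attrs={
            # Use the string 'True': with a boolean, Django 1.8 renders the
            # bare attribute data-date-picker instead of
            # data-date-picker="True".
            'data-date-picker': 'True',
        }))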

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465185] Re: No reverse match exception while trying to edit the QoS spec

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1465185

Title:
  No reverse match exception while trying to edit the QoS spec

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  While trying to edit the QoS spec, I am getting a NoReverseMatch
  exception since the URL is wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1465185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1203413] Re: VM launch fails with Neutron in "admin" tenant if "admin" and "demo" tenants have secgroups with a same name "web"

2015-10-13 Thread Chuck Short
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1203413

Title:
  VM launch fails with Neutron in "admin" tenant if "admin" and "demo"
  tenants have secgroups with a same name "web"

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Using Grizzly with Neutron: If there are multiple security groups with
  the same name (in other tenants for example), it is not possible to
  boot an instance with this security group as Horizon will only use the
  name of the security group.

  Example from logs:
  2013-07-21 03:39:12.432 ERROR nova.network.security_group.quantum_driver 
[req-aaca5681-72b8-41dc-a89c-9a5c95c7eff4 33fe423e114c4586a573514b3e98341e 
e91fe07ea4834f8487c5cec7deaa2eac] Quantum Error: Multiple security_group 
matches found for name 'web', use an ID to be more specific.
  2013-07-21 03:39:12.439 ERROR nova.api.openstack 
[req-aaca5681-72b8-41dc-a89c-9a5c95c7eff4 33fe423e114c4586a573514b3e98341e 
e91fe07ea4834f8487c5cec7deaa2eac] Caught error: Multiple security_group matches 
found for name 'web', use an ID to be more specific.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1203413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481443] Re: Add configurability for HA networks in L3 HA

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481443

Title:
  Add configurability for HA networks in L3 HA

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The L3 HA mechanism creates a project network for HA (VRRP) traffic
  among routers. The HA project network uses the first (default)
  network type in 'tenant_network_types' and next available segmentation
  ID. Depending on the environment, this combination may not provide a
  desirable path for HA traffic. For example, some operators may prefer
  to use a specific network for HA traffic, such that the HA networks
  will use tunneling while tenant networks use VLANs or vice versa.
  Alternatively, the physical_network tag of the HA networks may need to
  be selected so that HA networks will use a separate or different NIC.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466663] Re: radvd exits -1 intermittently in test_ha_router_process_ipv6_subnets_to_existing_port functional test

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466663

Title:
  radvd exits -1 intermittently in
  test_ha_router_process_ipv6_subnets_to_existing_port functional test

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  An example of the failure: http://logs.openstack.org/91/189391/6/check
  /check-neutron-dsvm-functional/0ba6e51/console.html

  A logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJDb21tYW5kIEFORCByYWR2ZC5jb25mIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM0NjYzNTQ3ODU5fQ==

   ERROR neutron.agent.l3.router_info Command: ['ip', 'netns', 'exec',
  'qrouter-c37cf4a8-bf31-42a1-abb8-579c583e7ea9', 'radvd', '-C',
  '/tmp/tmpidCgIT/tmplIquzu/ra/c37cf4a8-bf31-42a1-abb8-579c583e7ea9.radvd.conf',
  '-p',
  
'/tmp/tmpidCgIT/tmplIquzu/external/pids/c37cf4a8-bf31-42a1-abb8-579c583e7ea9.pid.radvd',
  '-m', 'syslog']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455675] Re: IptablesManager._find_last_entry taking up majority of time to plug ports

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455675

Title:
  IptablesManager._find_last_entry taking up majority of time to plug
  ports

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  During profiling of the OVS agent, I found that
  IptablesManager._find_last_entry is taking up roughly 40% of the time
  when plugging a large number of ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479558] Re: _ensure_default_security_group calls create_security_group within a db transaction

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479558

Title:
  _ensure_default_security_group calls create_security_group within a
  db transaction

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  _ensure_default_security_group calls create_security_group within a db
transaction [1]. A Neutron plugin may choose to override
create_security_group so it can invoke backend operations, and handling that
under an open transaction might lead to a db lock timeout.

  [1]:
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/securitygroups_db.py#n666

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477860] Re: TestAsyncProcess.test_async_process_respawns fails with TimeoutException

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477860

Title:
  TestAsyncProcess.test_async_process_respawns fails with
  TimeoutException

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hc3luY19wcm9jZXNzX3Jlc3Bhd25zXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mzc3MjMxNTU2ODB9

  fails for both feature/qos and master:

  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.369 | Captured traceback:
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.370 | ~~~
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.371 | Traceback (most 
recent call last):
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.372 |   File 
"neutron/tests/functional/agent/linux/test_async_process.py", line 70, in 
test_async_process_respawns
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.373 | 
proc._kill_process(proc.pid)
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.375 |   File 
"neutron/agent/linux/async_process.py", line 177, in _kill_process
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.376 | 
self._process.wait()
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.377 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/green/subprocess.py",
 line 75, in wait
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.378 | 
eventlet.sleep(check_interval)
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.379 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 34, in sleep
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.380 | hub.switch()
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.381 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.382 | return 
self.greenlet.switch()
  2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.383 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.384 | 
self.wait(sleep_time)
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.385 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.387 | presult = 
self.do_poll(seconds)
  2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.388 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
 line 62, in do_poll
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.389 | return 
self.poll.poll(seconds)
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.390 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.391 | raise 
TimeoutException()
  2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.392 | 
fixtures._fixtures.timeout.TimeoutException

  Example: http://logs.openstack.org/64/199164/2/check/gate-neutron-
  dsvm-functional/9b43ead/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475938] Re: create_security_group code may get into endless loop

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475938

Title:
  create_security_group code may get into endless loop

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  That damn piece of code again.

  In some cases, when a network is created for a tenant and the default
security group is created in the process, there may be concurrent network or
sg creation happening.
  That leads to a condition where the code fetches the default sg and it's
not there, tries to add it and finds it's already there, then tries to fetch
it again; but due to the REPEATABLE READ isolation level, the query returns an
empty result, just as in the first attempt.
  As a result, such logic will hang in the loop forever.

  Reproducible with rally create_and_delete_ports test.
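
  A toy, self-contained illustration of the pattern (not Neutron code):
  under REPEATABLE READ the retry keeps re-reading the snapshot taken at the
  start of the transaction, so the row the concurrent request inserted never
  becomes visible.

    class RepeatableReadTxn(object):
        """Answers reads from the snapshot taken when the txn started."""
        def __init__(self, table):
            self._snapshot = dict(table)

        def get_default_sg(self, tenant_id):
            return self._snapshot.get(tenant_id)

    table = {}                          # shared table, initially empty
    txn = RepeatableReadTxn(table)      # our transaction begins

    table['tenant-a'] = 'default-sg'    # a concurrent request creates the row

    attempts = 0
    while txn.get_default_sg('tenant-a') is None and attempts < 5:
        # the INSERT would fail with a duplicate-key error, and this re-read
        # still sees the old snapshot; without the bound we would loop forever
        attempts += 1
    print('row still invisible after %d attempts' % attempts)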

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460741] Re: security groups iptables can block legitimate traffic as INVALID

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460741

Title:
  security groups iptables can block legitimate traffic as INVALID

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The iptables implementation of security groups includes a default rule
  to drop any INVALID packets (according to the Linux connection state
  tracking system.)  It looks like this:

  -A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

  This is placed near the top of the rule stack, before any security
  group rules added by the user.  See:

  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

  However, there are some cases where you would not want traffic marked
  as INVALID to be dropped here.  Specifically, our use case:

  We have a load balancing scheme where requests from the LB are
  tunneled as IP-in-IP encapsulation between the LB and the VM.
  Response traffic is configured for DSR, so the responses go directly
  out the default gateway of the VM.

  The results of this are iptables on the hypervisor does not see the
  initial SYN from the LB to VM (because it is encapsulated in IP-in-
  IP), and thus it does not make it into the connection table.  The
  response that comes out of the VM (not encapsulated) hits iptables on
  the hypervisor and is dropped as invalid.

  I'd like to see a Neutron option to enable/disable the population of
  this INVALID state rule, so that operators (such as us) can disable it
  if desired.  Obviously it's better in general to keep it in there to
  drop invalid packets, but there are cases where you would like to not
  do this.
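
  The ask boils down to a toggle of roughly this shape (the option name and
  group are hypothetical, not an existing Neutron option):

    from oslo_config import cfg

    cfg.CONF.register_opts([
        cfg.BoolOpt('drop_invalid_state_packets', default=True,
                    help='If False, do not install the iptables rule that '
                         'drops packets in conntrack state INVALID.'),
    ], group='SECURITYGROUP')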

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454408] Re: ObjectDeletedError while deleting network

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454408

Title:
  ObjectDeletedError while deleting network

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The following trace could be observed running rally tests on multi-
  server environment:

  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 476, in delete
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 671, in 
delete_network
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
self._delete_ports(context, ports)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 587, in 
_delete_ports
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource port.id)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 239, in 
__get__
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource return 
self.impl.get(instance_state(instance), dict_)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 589, in 
get
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource value = 
callable_(state, passive)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 424, in 
__call__
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
self.manager.deferred_scalar_loader(self, toload)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 614, in 
load_scalar_attributes
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource raise 
orm_exc.ObjectDeletedError(state)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
ObjectDeletedError: Instance '' has been deleted, or 
its row is otherwise not present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461519] Re: Enabling ml2 port security extension driver causes net-list to fail on existing deployment

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461519

Title:
  Enabling ml2 port security extension driver causes net-list to fail on
  existing deployment

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  I had a kilo setup where there were a few existing networks.  Then I
  enabled the port security extension driver in ml2_conf.ini.

  After this, net-list fails because the extension driver tries to access
  the fields (port-security related) which were never set for the old
  networks.

  This also happens when port-security is enabled and when creating an
  HA router.

  ocloud@ubuntu:~/devstack$ neutron net-list
  Request Failed: internal server error while processing your request.

  2015-06-03 17:14:44.059 ERROR neutron.api.v2.resource [req-d831393d-e02a-4405-8f3a-dd13291f86b1 admin admin] index failed
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource result = method(request=request, **args)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 319, in index
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource return self._items(request, True, parent_id)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 249, in _items
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource obj_list = obj_getter(request.context, **kwargs)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 669, in get_networks
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource limit, marker, page_reverse)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1020, in get_networks
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource page_reverse=page_reverse)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/common_db_mixin.py", line 184, in _get_collection
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource items = [dict_func(c, fields) for c in query]
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 858, in _make_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource attributes.NETWORKS, res, network)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/common_db_mixin.py", line 162, in _apply_dict_extend_functions
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource func(*args)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 477, in _ml2_md_extend_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource self.extension_manager.extend_network_dict(session, netdb, result)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 782, in extend_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource driver.obj.extend_network_dict(session, base_model, result)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/extensions/port_security.py", line 60, in extend_network_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource self._extend_port_security_dict(result, db_data)
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/extensions/port_security.py", line 68, in _extend_port_security_dict
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource db_data['port_security'][psec.PORTSECURITY])
  2015-06-03 17:14:44.059 11154 TRACE neutron.api.v2.resource TypeError: 'NoneType' object has no attribute '__getitem__'
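
  As a hedged illustration only (not necessarily the fix that was merged), the
  failing dict-extension hook could guard against networks that predate the
  extension, where the related port-security row is still None. The constant
  value and the fallback default below are assumptions, not code copied from
  the neutron tree:

    # Sketch under assumptions: PORTSECURITY and DEFAULT_PORT_SECURITY are
    # illustrative stand-ins for the values used by the real extension driver.
    PORTSECURITY = 'port_security_enabled'
    DEFAULT_PORT_SECURITY = True

    def _extend_port_security_dict(result, db_data):
        psec_row = db_data['port_security']
        if psec_row is None:
            # Networks created before the extension was enabled have no
            # port-security row; fall back to a default instead of letting
            # None.__getitem__ raise the TypeError shown above.
            result[PORTSECURITY] = DEFAULT_PORT_SECURITY
        else:
            result[PORTSECURITY] = psec_row[PORTSECURITY]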

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490380] Re: netaddr 0.7.16 causes gate havoc

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490380

Title:
  netaddr 0.7.16 causes gate havoc

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Netaddr 0.7.16 was just released, and it causes mayhem in the gate.

  https://pypi.python.org/pypi/netaddr

  An example:

  http://logs.openstack.org/03/216603/4/check/gate-neutron-python27/21af647/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488320] Re: neutron-vpnaas uses bad file permissions on PSK file

2015-10-13 Thread Chuck Short
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488320

Title:
  neutron-vpnaas uses bad file permissions on PSK file

Status in neutron:
  In Progress
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Summary:

  OpenStack VPNaaS uses IPSec pre-shared keys (PSK) to secure VPN
  tunnels.  Those keys are specified by the user via the API when
  creating the VPN connection, and they are stored in the neutron
  database, then copied to the filesystem on the network node.  The PSK
  file created by the VPNaaS OpenSwan driver has perms of 644, and the
  directories in its path allow access by anyone.

  This means that if an intruder were to compromise the network node the
  pre-shared VPN keys for all tenants would be vulnerable to
  unauthorized disclosure.

  VPNaaS uses the neutron utility function replace_file() to create the
  PSK file, and replace_file sets the mode of all files it creates to
  0o644.

  This vulnerability exists in the OpenSwan ipsec driver, I have not yet
  investigated whether it exists in any of the other implementation
  drivers.

  I have developed patches to neutron and neutron_vpnaas to add an
  optional file_perm argument (with default 0o644) to replace_file(),
  and to specify mode 0o400 when neutron-vpnaas creates the PSK file.
  This allows all other existing calls to replace_file() to maintain
  their existing behavior.
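
  A minimal sketch of the described change, assuming replace_file() writes
  through a temporary file; the exact signature and call site in the merged
  patch may differ:

    import os
    import tempfile

    def replace_file(file_name, data, file_perm=0o644):
        """Atomically replace file_name with data, applying file_perm."""
        base_dir = os.path.dirname(os.path.abspath(file_name))
        with tempfile.NamedTemporaryFile('w', dir=base_dir, delete=False) as tmp_file:
            tmp_file.write(data)
        os.chmod(tmp_file.name, file_perm)
        os.rename(tmp_file.name, file_name)

    # The OpenSwan driver would then pass the restrictive mode explicitly,
    # e.g. replace_file(psk_file_path, psk_contents, 0o400), leaving every
    # other caller on the 0o644 default.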

  The Gory Details:

  Here is the "ps -ef" output for the ipsec pluto process for the VPN
  endpoint on the network node:

  root 19701 1  0 01:15 ? 00:00:00 /usr/lib/ipsec/pluto --ctlbase /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/var/run/pluto --ipsecdir /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc --use-netkey --uniqueids --nat_traversal --secretsfile /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets --virtual_private %v4:10.1.0.0/24,%v4:10.2.0.0/24

  The PSK is stored in /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets:

  /home/stack# less /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets
  # Configuration for myvpnrA
  172.16.0.2 172.16.0.3 : PSK "secret"

  Here we see the file perms:

  /home/stack# ls -l /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets
  -rw-r--r-- 1 neutron neutron 65 Aug 16 01:15 /var/run/neutron/ipsec/ad83280f-6993-478b-976e-608550093ed8/etc/ipsec.secrets

  OpenSwan delivers a default secrets file
  /var/lib/openswan/ipsec.secrets.inc, and we see it has a mode that we
  would expect:

  /home/stack# ls -l /var/lib/openswan/ipsec.secrets.inc
  -rw------- 1 root root 0 Aug 15 23:51 /var/lib/openswan/ipsec.secrets.inc

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

