[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/260498
Committed: 
https://git.openstack.org/cgit/openstack/python-muranoclient/commit/?id=cdbf173c2c599e964f71a89a60215849704a9076
Submitter: Jenkins
Branch: master

commit cdbf173c2c599e964f71a89a60215849704a9076
Author: Janonymous 
Date:   Mon Dec 21 18:22:27 2015 +0530

Put py34 first in the envlist order of tox

To solve the "db type could not be determined" problem on py34, the
py34 env has to run before py27. This patch puts py34 first in the
tox.ini envlist to keep the problem from happening.

Change-Id: I58246fbcdbfd4390f07563157889f4e217e6c086
Closes-bug: #1489059


** Changed in: python-muranoclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in cloudkitty:
  Fix Committed
Status in Glance:
  Fix Committed
Status in glance_store:
  Fix Committed
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Committed
Status in Manila:
  Fix Released
Status in Murano:
  Fix Committed
Status in networking-midonet:
  In Progress
Status in networking-ofagent:
  New
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Committed
Status in python-muranoclient:
  Fix Released
Status in python-solumclient:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tap-as-a-service:
  New
Status in tempest:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when the py27 run precedes py34, and it
  can be solved by erasing the .testrepository directory and running
  "tox -e py34" first.
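
A sketch of both the fix and the workaround (the [tox] section name is standard; the exact env list of any given project may differ):

```ini
# tox.ini fragment (sketch): list py34 before py27 so the first tox run
# creates the .testrepository database under py34.
[tox]
envlist = py34,py27,pep8

# One-off recovery when the repository was already created under py27:
#   rm -rf .testrepository && tox -e py34 && tox -e py27
```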

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528805] Re: create_ipsec_site_connection call failed because of code inconsistency

2015-12-23 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1528435 ***
https://bugs.launchpad.net/bugs/1528435

** This bug has been marked a duplicate of bug 1528435
   The gate test of VPNaaS is failing with AttributeError

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528805

Title:
  create_ipsec_site_connection call failed because of code inconsistency

Status in neutron:
  Fix Committed

Bug description:
  EC2-API gating checks VPN functionality, and it has been broken for
  two days due to vpn-aas. The gating enables the vpnaas plugin and
  neutron.

  Gating fails on the call "neutron.create_ipsec_site_connection":

  REQ: curl -g -i -X POST http://127.0.0.1:9696/v2.0/vpn/ipsec-site-
  connections.json -H "User-Agent: python-neutronclient" -H "Content-
  Type: application/json" -H "Accept: application/json" -H "X-Auth-
  Token: {SHA1}88227c79fef8266afa89bd559e6f92bb99dd17df" -d
  '{"ipsec_site_connection": {"ikepolicy_id": "9f00451d-
  aaa8-4465-97f1-f45b6890771e", "peer_cidrs": ["172.16.25.0/24"], "mtu":
  1427, "ipsecpolicy_id": "31b17d19-a898-472b-8944-fa34321b67e5",
  "vpnservice_id": "7586b8d4-1c1e-49dd-91cc-e1055138ba6d", "psk":
  "p.Hu0SlBXoSd8sF1WV.QDSy32sVCjKON", "peer_address": "198.51.100.77",
  "peer_id": "198.51.100.77", "initiator": "response-only", "name":
  "vpn-360307d8/subnet-031705ee"}}' _http_log_request
  /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:198

  
  It gets an exception from neutron; the neutron logs are:

  2015-12-22 21:28:38.620 DEBUG neutron.api.v2.base 
[req-86bbd8b1-af64-4d82-86d4-15daa40a2a8f user-29edecd6 project-3a1e0ba2] 
Request body: {u'ipsec_site_connection': {u'psk': 
u'p.Hu0SlBXoSd8sF1WV.QDSy32sVCjKON', u'peer_cidrs': [u'172.16.25.0/24'], 
u'vpnservice_id': u'7586b8d4-1c1e-49dd-91cc-e1055138ba6d', u'initiator': 
u'response-only', u'mtu': 1427, u'ikepolicy_id': 
u'9f00451d-aaa8-4465-97f1-f45b6890771e', u'ipsecpolicy_id': 
u'31b17d19-a898-472b-8944-fa34321b67e5', u'peer_address': u'198.51.100.77', 
u'peer_id': u'198.51.100.77', u'name': u'vpn-360307d8/subnet-031705ee'}} 
prepare_request_body /opt/stack/new/neutron/neutron/api/v2/base.py:645
  2015-12-22 21:28:38.620 ERROR neutron.api.v2.resource 
[req-86bbd8b1-af64-4d82-86d4-15daa40a2a8f user-29edecd6 project-3a1e0ba2] 
create failed
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 147, in wrapper
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in 
__exit__
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 414, in create
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 680, in 
prepare_request_body
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource 
attributes.convert_value(attr_info, res_dict, webob.exc.HTTPBadRequest)
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/attributes.py", line 919, in 
convert_value
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource res = 
validators[rule](res_dict[attr], attr_vals['validate'][rule])
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/extensions/vpnaas.py", line 167, 
in _validate_subnet_list_or_none
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource 
attr._validate_subnet_list(data, key_specs)
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource AttributeError: 
'module' object has no attribute '_validate_subnet_list'
  2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource 
  2015-12-22 21:28:38.625 INFO neutron.wsgi 
[req-86bbd8b1-af64-4d82-86d4-15daa40a2a8f user-29edecd6 project-3a1e0ba2] 
127.0.0.1 - - [22/Dec/2015 21:28:38] "POST 
/v2.0/vpn/ipsec-site-connections.json HTTP/1.1" 500 383 0.092068

[Yahoo-eng-team] [Bug 1528834] [NEW] HostManager host_state_map should have synchronized access

2015-12-23 Thread Chris Dent
Public bug reported:

Reporting this as a bug to get some discussion rolling on how best to
fix it (or whether to fix it). I kind of assume this is a well known
problem, but since the code isn't covered with NOTEs about it, I thought
perhaps best to throw it up here.

In nova's master branch as of Dec 2015 the HostState objects and
host_state_map are not robust in the face of concurrent access. That
concurrent access doesn't happen frequently under test situations is
primarily the fault of the test situations and luck, not the correctness
of the code. Unless I'm completely misunderstanding the way in which
eventlet is being used, each of the following methods has the potential
to yield on I/O (rpc calls, log messages) or sleeps (not used, but handy
in debugging situations). This is not an exhaustive list, just what came
up in my digging in nova/scheduler/host_manager.py (digging described in
more detail below):

* HostManager.get_all_host_states
* HostState.consume_from_request
* HostState.update_from_compute_node

If these yield at exactly the wrong time, an ongoing set of
select_destinations calls will have some which have incorrect host
information. As we know, this is often true anyway, but the suggested
fix (below) is pretty lightweight, so perhaps a small improvement is
worth it?

How I figured this out:

I was reading the code and realized that access to the HostState is not
synchronized so tried to see if I could come up with a way to get
incorrect information to show up at the scheduler when running just one
scheduler, just one compute node and doing multiple "concurrent" server
creates (calling three creates over the API in a backgrounding loop). It
was pretty easy to report some incorrect RAM usage. To make things fail
more reliably I introduced small sleeps into consume_from_request to
force it to yield. I managed to get very incorrect usage information
(negative RAM).

Adding a utils.synchronized decorator to consume_from_request improved
the situation somewhat: At the end of the scheduling run the RAM usage
was off by one deduction. Adding a synchronized to get_all_host_states
cleared this up.

I'm not clear on the implications of these changes: they make sense to
me (we don't want concurrent access to a shared data structure) but I
wonder if perhaps there are other things to consider that I'm not aware
of such as "well, actually, this stuff was supposed to be written so it
never yielded, that they can or may is the bug" (in which case the
methods need some phat warnings on them for future maintainers).

I'm also not clear on the semaphores that ought to be used (if
synchronization is the way to go). In my POC solution
get_all_host_states had its own semaphore while the other two methods
shared one.

Beyond that, though, it might make sense for the host_state_map and the
entire HostState object to be "thread" safe, to get all places where
this could be a problem rather than piecemeal dealing with methods as
curious people notice them.
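
A toy sketch of the suggested fix, with threading standing in for eventlet and consume_from_request reduced to a RAM deduction; the names echo the report, not nova's actual implementation:

```python
import functools
import threading
import time

_sem = threading.Lock()

def synchronized(lock):
    """Minimal stand-in for nova's utils.synchronized decorator."""
    def wrap(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            with lock:
                return f(*args, **kwargs)
        return inner
    return wrap

class HostState:
    def __init__(self, free_ram_mb):
        self.free_ram_mb = free_ram_mb

    @synchronized(_sem)
    def consume_from_request(self, ram_mb):
        free = self.free_ram_mb
        time.sleep(0)  # yield point, as an RPC call or log write might be
        # Without the decorator, two greenthreads can both read the same
        # `free` value here and one deduction is lost.
        self.free_ram_mb = free - ram_mb

host = HostState(free_ram_mb=2048)
threads = [threading.Thread(target=host.consume_from_request, args=(512,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(host.free_ram_mb)  # 0: all four 512 MB deductions applied
```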

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528834

Title:
  HostManager host_state_map should have synchronized access

Status in OpenStack Compute (nova):
  New

Bug description:
  Reporting this as a bug to get some discussion rolling on how best to
  fix it (or whether to fix it). I kind of assume this is a well known
  problem, but since the code isn't covered with NOTEs about it, I
  thought perhaps best to throw it up here.

  In nova's master branch as of Dec 2015 the HostState objects and
  host_state_map are not robust in the face of concurrent access. That
  concurrent access doesn't happen frequently under test situations is
  primarily the fault of the test situations and luck, not the
  correctness of the code. Unless I'm completely misunderstanding the
  way in which eventlet is being used, each of the following methods has
  the potential to yield on I/O (rpc calls, log messages) or sleeps (not
  used, but handy in debugging situations). This is not an exhaustive
  list, just what came up in my digging in
  nova/scheduler/host_manager.py (digging described in more detail
  below):

  * HostManager.get_all_host_states
  * HostState.consume_from_request
  * HostState.update_from_compute_node

  If these yield at exactly the wrong time, an ongoing set of
  select_destinations calls will have some which have incorrect host
  information. As we know, this is often true anyway, but the suggested
  fix (below) is pretty lightweight, so perhaps a small improvement is
  worth it?

  How I figured this out:

  I was reading the code and realized that access to the HostState is
  not synchronized so tried to see if I could come up with a way to get
  incorrect information to show up at the scheduler when running just
  one scheduler, 

[Yahoo-eng-team] [Bug 1528877] [NEW] Libvirt can't honour user-supplied dev names

2015-12-23 Thread sean redmond
Public bug reported:

OpenStack liberty (Ubuntu 14.04)

When creating a new instance via the dashboard and selecting 'boot from
image (creates new volume)' the below is logged into nova-compute.log on
the host the instance is deployed to:

"Ignoring supplied device name: /dev/vda. Libvirt can't honour user-
supplied dev names"

In the virsh XML for the instance I can see the target dev name is sda;
this seems to have changed from how kilo behaved.

I also noticed that if I then try to add an extra volume to this
instance, the below error is placed in the nova-compute.log and the
volume is not attached.

caf420922780362 - - -] Exception during message handling: internal error: 
unable to execute QEMU command 'device_add': Duplicate ID 'scsi0-0-0-0' for 
device
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, 
in _do_dispatch
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 89, in wrapped
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher payload)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 72, in wrapped
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 378, in 
decorated_function
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 366, in 
decorated_function
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4609, in 
attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
do_attach_volume(context, instance, driver_bdm)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 254, in 
inner
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4607, in 
do_attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
bdm.destroy()
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4604, in 
do_attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
self._attach_volume(context, instance, driver_bdm)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4627, in 
_attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, bdm.volume_id)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1528868] [NEW] Neutron-metering-agent failed to get traffic counters when xtables is locked

2015-12-23 Thread Sergey Belous
Public bug reported:

In some cases, if we manually show iptables in a router namespace where
some rules created by the meter-agent exist, we can see the following
traces in the meter-agent logs:

2015-12-17 08:49:14.709 ERROR neutron.agent.linux.utils [-] Exit code:
4; Stdin: ; Stdout: ; Stderr: Another app is currently holding the
xtables lock. Perhaps you want to use the -w option?

2015-12-17 08:49:14.710 ERROR 
neutron.services.metering.drivers.iptables.iptables_driver [-] Failed to get 
traffic counters, router: {u'status': u'ACTIVE', u'name': u'router1', 
u'gw_port_id': u'00e8ce89-0ebe-4e8a-b1ca-c2993210a8db', u'admin_state_up': 
True, u'tenant_id': u'f8267bb3db654ca2a26a07d9757ec280', u'_metering_labels': 
[{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'ef4f1dab-cafe-4058-aa6a-85a79a10c67e', u'id': 
u'986d6cfe-719b-43a7-96d1-79e97eb93567', u'excluded': False}], u'id': 
u'ef4f1dab-cafe-4058-aa6a-85a79a10c67e'}], u'id': 
u'8aef2cba-45b1-42f3-b7a4-8ea992d6cded'}
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver Traceback (most 
recent call last):
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py",
 line 354, in get_traffic_counters
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver chain, 
wrap=False, zero=True)
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 661, in 
get_traffic_counters
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver current_table = 
self.execute(args, run_as_root=True)
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 140, in execute
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver raise 
RuntimeError(msg)
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver RuntimeError: Exit 
code: 4; Stdin: ; Stdout: ; Stderr: Another app is currently holding the 
xtables lock. Perhaps you want to use the -w option?
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver
2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver


Steps to reproduce:
1. Create neutron-meter-label
2. Find the router's namespace where rules were added by the meter-agent
3. run:
sudo watch -n 1 ip net e %namespace-name% iptables -L -v -x -n
4. See logs of meter-agent
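
The Stderr already hints at the direct fix (invoke iptables with -w so it waits for the xtables lock). An alternative, sketched here with a stand-in callable rather than neutron's real execute(), is to retry while the lock is held:

```python
import time

XTABLES_LOCK_MSG = "Another app is currently holding the xtables lock."

def run_with_retry(cmd, attempts=3, delay=0.0):
    """Retry cmd while it fails with the xtables-lock error.

    cmd is a callable standing in for neutron's
    execute(args, run_as_root=True); delay would be non-zero in practice.
    """
    for i in range(attempts):
        try:
            return cmd()
        except RuntimeError as exc:
            if XTABLES_LOCK_MSG not in str(exc) or i == attempts - 1:
                raise
            time.sleep(delay)

# Simulated iptables call that finds the lock held on the first two tries.
calls = {"n": 0}
def flaky_iptables():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("Exit code: 4; Stderr: " + XTABLES_LOCK_MSG)
    return "Chain neutron-meter-l-ef4f1dab-caf (1 references)"

print(run_with_retry(flaky_iptables))
```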

** Affects: neutron
 Importance: Undecided
 Assignee: Sergey Belous (sbelous)
 Status: New


** Tags: metering

** Changed in: neutron
 Assignee: (unassigned) => Sergey Belous (sbelous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528868

Title:
  Neutron-metering-agent failed to get traffic counters when xtables is
  locked

Status in neutron:
  New

Bug description:
  In some cases, if we manually show iptables in a router namespace
  where some rules created by the meter-agent exist, we can see the
  following traces in the meter-agent logs:

  2015-12-17 08:49:14.709 ERROR neutron.agent.linux.utils [-] Exit code:
  4; Stdin: ; Stdout: ; Stderr: Another app is currently holding the
  xtables lock. Perhaps you want to use the -w option?

  2015-12-17 08:49:14.710 ERROR 
neutron.services.metering.drivers.iptables.iptables_driver [-] Failed to get 
traffic counters, router: {u'status': u'ACTIVE', u'name': u'router1', 
u'gw_port_id': u'00e8ce89-0ebe-4e8a-b1ca-c2993210a8db', u'admin_state_up': 
True, u'tenant_id': u'f8267bb3db654ca2a26a07d9757ec280', u'_metering_labels': 
[{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'ef4f1dab-cafe-4058-aa6a-85a79a10c67e', u'id': 
u'986d6cfe-719b-43a7-96d1-79e97eb93567', u'excluded': False}], u'id': 
u'ef4f1dab-cafe-4058-aa6a-85a79a10c67e'}], u'id': 
u'8aef2cba-45b1-42f3-b7a4-8ea992d6cded'}
  2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver Traceback (most 
recent call last):
  2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py",
 line 354, in get_traffic_counters
  2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver chain, 
wrap=False, zero=True)
  2015-12-17 08:49:14.710 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 661, in 
get_traffic_counters
  2015-12-17 08:49:14.710 TRACE 

[Yahoo-eng-team] [Bug 1512416] Re: Glance doesn't catches exception NotFound from glance_store

2015-12-23 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Also affects: glance/liberty
   Importance: Undecided
   Status: New

** Changed in: glance/kilo
   Importance: Undecided => Medium

** Changed in: glance
   Importance: Undecided => Medium

** Changed in: glance/liberty
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1512416

Title:
  Glance doesn't catches exception  NotFound  from glance_store

Status in Glance:
  In Progress
Status in Glance kilo series:
  New
Status in Glance liberty series:
  New

Bug description:
  Glance doesn't catch the NotFound exception from glance_store when
  uploading images
  (https://github.com/openstack/glance/blob/master/glance/api/v2/image_data.py#L83);
  this results in errors: http://paste.openstack.org/show/477804/
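
A hypothetical sketch of the shape of the fix; NotFound and store_add are stubs mirroring glance_store's API, not the real glance code. The idea is to catch the backend's NotFound during upload and turn it into a client-facing error instead of an unhandled 500:

```python
class NotFound(Exception):
    """Stand-in for glance_store.exceptions.NotFound."""

def store_add(image_id, data):
    # Simulates the image disappearing from the store mid-upload.
    raise NotFound("image %s went away during upload" % image_id)

def upload(image_id, data):
    try:
        return ("204 No Content", store_add(image_id, data))
    except NotFound as exc:
        # The image was deleted while its data was being uploaded.
        return ("410 Gone", str(exc))

status, detail = upload("abc123", b"...")
print(status)  # 410 Gone
```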

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1512416/+subscriptions



[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/258988
Committed: 
https://git.openstack.org/cgit/openstack/python-ironicclient/commit/?id=a90b16ca84edc19978117b2446f4b578bef4136f
Submitter: Jenkins
Branch: master

commit a90b16ca84edc19978117b2446f4b578bef4136f
Author: Shuquan Huang 
Date:   Thu Dec 17 20:54:09 2015 +0800

Replace assertEqual(None, *) with assertIsNone in tests

Replace assertEqual(None, *) with assertIsNone in tests to have
clearer messages in case of failure.

Change-Id: I109bef1afbbdd3a2cf25e9fb7133509516910891
Closes-bug: #1280522


** Changed in: python-ironicclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in Heat Translator:
  In Progress
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-cueclient:
  In Progress
Status in python-designateclient:
  In Progress
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  In Progress
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  In Progress
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  In Progress
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in Solum:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  clearer messages in case of failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions



[Yahoo-eng-team] [Bug 1528850] [NEW] log level of ProcessMonitor should not be ERROR

2015-12-23 Thread Zou Keke
Public bug reported:

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L230

I suppose the log level should be info or warning.

** Affects: neutron
 Importance: Undecided
 Assignee: Zou Keke (zoukeke)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Zou Keke (zoukeke)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528850

Title:
  log level of ProcessMonitor should not be ERROR

Status in neutron:
  New

Bug description:
  
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L230

  I suppose the log level should be info or warning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528850/+subscriptions



[Yahoo-eng-team] [Bug 1512416] Re: Glance doesn't catches exception NotFound from glance_store

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/241207
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=20d0c547cc15a5dccb8d0313144d6df8b668f01f
Submitter: Jenkins
Branch: master

commit 20d0c547cc15a5dccb8d0313144d6df8b668f01f
Author: Darja Shakhray 
Date:   Tue Nov 3 16:30:58 2015 +0300

Fix glance not catching the NotFound exception from glance_store

Catch the NotFound exception from glance_store when saving an
uploaded image.

Change-Id: Ib352af844610a8d5794372e9a0016d36fb30213e
Closes-bug: #1512416


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1512416

Title:
  Glance doesn't catches exception  NotFound  from glance_store

Status in Glance:
  Fix Released
Status in Glance kilo series:
  New
Status in Glance liberty series:
  New

Bug description:
  Glance doesn't catch the NotFound exception from glance_store when
  uploading images
  (https://github.com/openstack/glance/blob/master/glance/api/v2/image_data.py#L83);
  this results in errors: http://paste.openstack.org/show/477804/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1512416/+subscriptions



[Yahoo-eng-team] [Bug 1528866] [NEW] gate-horizon-npm-run-test fails with Error: read ECONNRESET

2015-12-23 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/80/233880/3/gate/gate-horizon-npm-run-test/79077e7/console.html.gz

2015-12-17 09:41:13.102 | npm http GET https://registry.npmjs.org/event-emitter
2015-12-17 09:41:13.108 | 
2015-12-17 09:41:13.109 | Error making request.
2015-12-17 09:41:13.109 | Error: read ECONNRESET
2015-12-17 09:41:13.109 | at errnoException (net.js:901:11)
2015-12-17 09:41:13.109 | at TCP.onread (net.js:556:19)
2015-12-17 09:41:13.109 | 
2015-12-17 09:41:13.109 | Please report this full log at 
https://github.com/Medium/phantomjs
2015-12-17 09:41:13.115 | npm WARN This failure might be due to the use of 
legacy binary "node"
2015-12-17 09:41:13.115 | npm WARN For further explanations, please read
2015-12-17 09:41:13.115 | /usr/share/doc/nodejs/README.Debian

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22Error:%20read%20ECONNRESET%5C%22%20AND%20tags:%5C%22console%5C%22%20AND%20build_name:%5C%22gate-horizon-npm-run-test%5C%22

5 hits in 7 days, check and gate, all failures. Looks like it only hit
on hpcloud nodes, not sure if that would be related.

** Affects: horizon
 Importance: Undecided
 Status: Confirmed

** Changed in: horizon
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528866

Title:
  gate-horizon-npm-run-test fails with Error: read ECONNRESET

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  http://logs.openstack.org/80/233880/3/gate/gate-horizon-npm-run-test/79077e7/console.html.gz

  2015-12-17 09:41:13.102 | npm http GET 
https://registry.npmjs.org/event-emitter
  2015-12-17 09:41:13.108 | 
  2015-12-17 09:41:13.109 | Error making request.
  2015-12-17 09:41:13.109 | Error: read ECONNRESET
  2015-12-17 09:41:13.109 | at errnoException (net.js:901:11)
  2015-12-17 09:41:13.109 | at TCP.onread (net.js:556:19)
  2015-12-17 09:41:13.109 | 
  2015-12-17 09:41:13.109 | Please report this full log at https://github.com/Medium/phantomjs
  2015-12-17 09:41:13.115 | npm WARN This failure might be due to the use of legacy binary "node"
  2015-12-17 09:41:13.115 | npm WARN For further explanations, please read
  2015-12-17 09:41:13.115 | /usr/share/doc/nodejs/README.Debian

  
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22Error:%20read%20ECONNRESET%5C%22%20AND%20tags:%5C%22console%5C%22%20AND%20build_name:%5C%22gate-horizon-npm-run-test%5C%22

  5 hits in 7 days, in both check and gate, all failures. It looks like
  it only hit on hpcloud nodes; not sure whether that is related.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1528866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514977] Re: Check if system supports TCP_KEEPIDLE

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/226773
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=29cb5490e3fb8b5c0a600adec768404b594a8bf8
Submitter: Jenkins
Branch:master

commit 29cb5490e3fb8b5c0a600adec768404b594a8bf8
Author: Julien Danjou 
Date:   Wed Sep 23 15:05:28 2015 +0200

eventlet: handle system that misses TCP_KEEPIDLE

Some systems (e.g. Darwin) do not have this option, so let's check that
it's available before using it.

Co-Authored-By: Pranesh Pandurangan 
Closes-Bug: #1514977
Change-Id: Ibaf1c07605944ce690e73013f56d3b95654cfff9


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1514977

Title:
  Check if system supports TCP_KEEPIDLE

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Some systems (e.g. Darwin) do not support TCP_KEEPIDLE, so keystone
  fails to start on them with an error (when running under eventlet,
  which is deprecated anyway).
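The guard described in this commit amounts to testing for the socket option before using it. A minimal sketch of the pattern, assuming a plain TCP socket (the function name is illustrative, not keystone's actual code):

```python
import socket

def enable_keepalive(sock, idle=600):
    """Turn on TCP keepalive, skipping options missing on this platform."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # TCP_KEEPIDLE is Linux-specific; Darwin and some BSDs lack it,
    # so only set it when the constant exists.
    if hasattr(socket, 'TCP_KEEPIDLE'):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
```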

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1514977/+subscriptions



[Yahoo-eng-team] [Bug 1528894] [NEW] Native ovsdb implementation not working

2015-12-23 Thread Mohammed Naser
Public bug reported:

When trying to use the new native OVSDB provider, connectivity never
comes up: the db_set operation appears to fail to change the patch
ports from "nonexistent-peer" to the correct peer, so the bridges are
never linked together.

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1119

The system must be running the latest Liberty release with the
python-openvswitch package installed, and the following command
executed:

# ovs-vsctl set-manager ptcp:6640:127.0.0.1

Once that's all done, the openvswitch agent configuration should be
changed to the following:

[OVS]
ovsdb_interface = native

Restarting the OVS agent will set up everything but leave your network
in a failed state because the patch ports' peers aren't updated:

# ovs-vsctl show
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "em1"
Interface "em1"
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=nonexistent-peer}
Bridge br-int
fail_mode: secure
Port "qvo25d28228-9c"
tag: 1
Interface "qvo25d28228-9c"
...
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=nonexistent-peer}

Reverting to the old forked (ovs-vsctl based) implementation works
with no problems.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528894

Title:
  Native ovsdb implementation not working

Status in neutron:
  New

Bug description:
  When trying to use the new native OVSDB provider, connectivity never
  comes up: the db_set operation appears to fail to change the patch
  ports from "nonexistent-peer" to the correct peer, so the bridges
  are never linked together.

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1119

  The system must be running the latest Liberty release with the
  python-openvswitch package installed, and the following command
  executed:

  # ovs-vsctl set-manager ptcp:6640:127.0.0.1

  Once that's all done, the openvswitch agent configuration should be
  changed to the following:

  [OVS]
  ovsdb_interface = native

  Restarting the OVS agent will set up everything but leave your
  network in a failed state because the patch ports' peers aren't
  updated:

  # ovs-vsctl show
  Bridge br-ex
  Port br-ex
  Interface br-ex
  type: internal
  Port "em1"
  Interface "em1"
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=nonexistent-peer}
  Bridge br-int
  fail_mode: secure
  Port "qvo25d28228-9c"
  tag: 1
  Interface "qvo25d28228-9c"
  ...
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=nonexistent-peer}

  Reverting to the old forked (ovs-vsctl based) implementation works
  with no problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528894/+subscriptions



[Yahoo-eng-team] [Bug 1527719] Re: Adding a VNIC type for physical functions

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/260574
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=83284103d5ec9f37723d62910e2e9814c9db9810
Submitter: Jenkins
Branch:master

commit 83284103d5ec9f37723d62910e2e9814c9db9810
Author: Atsushi SAKAI 
Date:   Tue Dec 22 13:50:02 2015 +

Add VNIC types

Add SR-IOV vnic_type (direct-physical)

Change-Id: I80b68b8bcb44c05c1bca0c55b9f6f22202e6bf83
Closes-Bug: #1527719


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527719

Title:
  Adding a VNIC type for physical functions

Status in neutron:
  Fix Released
Status in openstack-api-site:
  Fix Released
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/246923
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2c60278992d5a21724105ed0ca6e1d2f3e5c
  Author: Brent Eagles 
  Date:   Mon Nov 9 09:26:53 2015 -0330

  Adding a VNIC type for physical functions
  
  This change adds a new VNIC type to distinguish between virtual and
  physical functions in SR-IOV.
  
  The new VNIC type 'direct-physical' deviates from the behavior of
  'direct' VNICs for virtual functions. While neutron tracks the resource
  as a port, it does not currently perform any management functions.
  Future changes may extend the segment mapping functionality that is
  currently based on agent configuration to include direct types.
  However, the direct-physical VNICs will not have functional parity with
  the other SR-IOV VNIC types in that quality of service and port security
  functionality is not available.
  
  APIImpact
  DocImpact: Add description for new 'direct-physical' VNIC type.
  
  Closes-Bug: #1500993
  
  Change-Id: If1ab969c2002c649a3d51635ca2765c262e2d37f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527719/+subscriptions



[Yahoo-eng-team] [Bug 1526976] Re: Any operation without token fails with internal server error for fernet token

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/259563
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=171f0e2193f336c02646e4366764d53336b10c8b
Submitter: Jenkins
Branch:master

commit 171f0e2193f336c02646e4366764d53336b10c8b
Author: Haneef Ali 
Date:   Fri Dec 18 09:34:18 2015 -0800

Fix 500 error when no fernet token is passed

Keystone returns an internal server error if the
user doesn't send any token. This happens only for
fernet tokens. This review returns 401 if the token
is not passed. The logic is moved from the provider
to the controller layer.

Since the logic has moved to the controller, code in
the token providers that directly checks for a
missing token, along with its corresponding tests,
has been removed as redundant.

Closes-Bug: 1526976

Change-Id: I0b6b0c48d6c841f996d1b8711d6c343ddfd5d945
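The fix this commit describes boils down to rejecting a missing token at the controller layer, before any provider code runs. A hypothetical sketch (the exception and function names are illustrative, not keystone's actual API):

```python
class Unauthorized(Exception):
    """Maps to an HTTP 401 response."""
    http_status = 401

def validate_token(subject_token_id):
    # Guard at the controller layer: a missing token is a client
    # error (401), not something the fernet provider should crash
    # on with a 500.
    if not subject_token_id:
        raise Unauthorized("auth token is missing")
    return {"token_id": subject_token_id}
```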


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1526976

Title:
  Any operation without token fails with internal server error for
  fernet token

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This bug only affects fernet tokens. Configure keystone to use
  fernet tokens, then call any operation without passing an
  X-Auth-Token. It reports a 500 error; it should return 401.

  e.g. curl -X DELETE $OS_AUTH_URL/v3/projects/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1526976/+subscriptions



[Yahoo-eng-team] [Bug 1507752] Re: Rally change breaks Keystone rally job

2015-12-23 Thread Andrey Kurilin
** Changed in: rally
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1507752

Title:
  Rally change breaks Keystone rally job

Status in OpenStack Identity (keystone):
  Fix Released
Status in Rally:
  Fix Released

Bug description:
  This change:
  
http://git.openstack.org/cgit/openstack/rally/diff/rally/plugins/openstack/scenarios/keystone/basic.py?id=f871de842214f103b4841160e90c73cd98c4f5ad

  Breaks this job:
  
http://logs.openstack.org/74/231574/6/check/gate-rally-dsvm-keystone/57d4dfc/rally-plot/results.html.gz#/KeystoneBasic.create_user/failures

  Traceback:
  Traceback (most recent call last):
File "/opt/stack/new/rally/rally/task/runner.py", line 64, in 
_run_scenario_once
  method_name)(**kwargs) or scenario_output
File 
"/opt/stack/new/rally/rally/plugins/openstack/scenarios/keystone/basic.py", 
line 33, in create_user
  self._user_create(**kwargs)
File "/opt/stack/new/rally/rally/task/atomic.py", line 83, in 
func_atomic_actions
  f = func(self, *args, **kwargs)
File 
"/opt/stack/new/rally/rally/plugins/openstack/scenarios/keystone/utils.py", 
line 45, in _user_create
  name, password=password, email=email, **kwargs)
  TypeError: create() got an unexpected keyword argument 'name_length'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1507752/+subscriptions



[Yahoo-eng-team] [Bug 1528895] [NEW] Timeouts in update_device_list (too slow with large # of VIFs)

2015-12-23 Thread Mohammed Naser
Public bug reported:

In our environment, we have some large compute nodes with a large number
of VIFs.  When the update_device_list call happens on the agent start
up:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L842

This call takes a very long time, as the server side appears to loop
over each port, contacting Nova along the way. The default RPC timeout
of 60 seconds is not enough, and the call fails on a server with
around 120 VIFs. Raising the timeout to 120 seconds makes it work with
no problems.

2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1e6cc46d-eb52-4d99-bd77-bf2e8424a1ea - - - - -] Error while processing VIF 
ports
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1752, in rpc_loop
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
ovs_restarted)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1507, in process_network_ports
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self._bind_devices(need_binding_devices)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 847, in _bind_devices
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.conf.host)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/rpc.py", line 179, in 
update_device_list
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
agent_id=agent_id, host=host)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in 
call
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
retry=self.retry)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in 
_send
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
timeout=timeout, retry=retry)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
431, in send
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent retry=retry)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
420, in _send
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = 
self._waiter.wait(msg_id, timeout)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
318, in wait
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent message = 
self.waiters.get(msg_id, timeout=timeout)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
223, in get
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 'to message 
ID %s' % msg_id)
2015-12-23 15:27:27.373 38588 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
MessagingTimeout: Timed out waiting for a reply to message ID 
c42c1ffc801b41ca89aa4472696bbf1a

I don't think an RPC call should ever take that long. The
neutron-server is not loaded, and adding more servers doesn't resolve
it, because a single RPC responder answers this call.
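The workaround described above corresponds to the oslo.messaging rpc_response_timeout option in neutron.conf; whether 120 seconds is enough presumably depends on the number of VIFs:

```ini
[DEFAULT]
# Raise the RPC reply timeout (default: 60) so update_device_list can
# complete on hosts with ~120+ VIFs. This is a workaround only; the
# per-port server-side loop is the underlying problem.
rpc_response_timeout = 120
```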

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1528877] Re: Libvirt can't honour user-supplied dev names

2015-12-23 Thread sean redmond
** Also affects: ubuntu
   Importance: Undecided
   Status: New

** No longer affects: ubuntu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528877

Title:
  Libvirt can't honour user-supplied dev names

Status in OpenStack Compute (nova):
  New

Bug description:
  OpenStack liberty (Ubuntu 14.04)

  When creating a new instance via the dashboard and selecting 'boot
  from image (creates new volume)' the below is logged into nova-
  compute.log on the host the instance is deployed to:

  "Ignoring supplied device name: /dev/vda. Libvirt can't honour user-
  supplied dev names"

  In the virsh XML for the instance I can see the target dev name is
  sda; this behavior seems to have changed since Kilo.

  I also noticed that if I then try to add an extra volume to this
  instance, the error below is logged in nova-compute.log and the
  volume is not attached.

  caf420922780362 - - -] Exception during message handling: internal error: 
unable to execute QEMU command 'device_add': Duplicate ID 'scsi0-0-0-0' for 
device
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, 
in _do_dispatch
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 89, in wrapped
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher payload)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 72, in wrapped
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 378, in 
decorated_function
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 366, in 
decorated_function
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4609, in 
attach_volume
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
do_attach_volume(context, instance, driver_bdm)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 254, in 
inner
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4607, in 
do_attach_volume
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
bdm.destroy()
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4604, in 
do_attach_volume
  2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 

[Yahoo-eng-team] [Bug 1527145] Re: when port updated on one compute node, ipset in other compute nodes did not be refreshed

2015-12-23 Thread Hong Hui Xiao
*** This bug is a duplicate of bug 1448022 ***
https://bugs.launchpad.net/bugs/1448022

It is the same bug; marking them as duplicates is more appropriate.

** This bug has been marked a duplicate of bug 1448022
   update port IP, ipset member can't be updated in another host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527145

Title:
  when port updated on one compute node, ipset in other compute nodes
  did not be refreshed

Status in neutron:
  Fix Released

Bug description:
  I found this problem in the Kilo release, but I'm not sure whether
  it still exists on the master branch.

  =
  Reproduce steps:
  =
  (Three compute nodes,  ovs agent,   security group with ipset enabled)
  1. Launch VM1(1.1.1.1) on Compute Node1 with default security group
  2. Launch VM2(1.1.1.2) on Compute Node2 with default security group
  3. Launch VM3(1.1.1.3) on Compute Node3 with default security group
  4. Change VM1's IP address to 1.1.1.10 and use port-update to add
  the allowed address pair 1.1.1.10

  After these operations, the ipset on Compute Node1 added member
  1.1.1.10, but the ipsets on Compute Node2 and Compute Node3 were not
  refreshed, so pings from VM1 to VM2 and VM3 failed.
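What the agents on the other nodes fail to do is essentially a set-difference refresh of their ipset members. A hypothetical sketch of that computation (not neutron's actual code):

```python
def ipset_diff(current, desired):
    """Return (to_add, to_remove) so an agent's ipset matches the
    server's current view of the security group members."""
    current, desired = set(current), set(desired)
    return desired - current, current - desired

# After step 4, every node (not just Compute Node1) should apply this
# diff to its ipset for the default security group.
to_add, to_remove = ipset_diff({"1.1.1.1", "1.1.1.2", "1.1.1.3"},
                               {"1.1.1.10", "1.1.1.2", "1.1.1.3"})
```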

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527145/+subscriptions



[Yahoo-eng-team] [Bug 1502773] Re: "Delete Instance" looks better over "Terminate Instance" for consistency

2015-12-23 Thread Akihiro Motoki
I will check if any documentation change is required related to this
horizon change.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1502773

Title:
  "Delete Instance" looks better over "Terminate Instance" for
  consistency

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in openstack-manuals:
  New

Bug description:
  "Delete Instance" reads better than "Terminate Instance" for consistency.
  We use "Terminate Instance" as the action label for deleting a server.
  I think "Delete Instance" is more consistent, both across the OpenStack
  Dashboard and with the nova/openstack CLI.

  In addition, "Delete" makes it easier for users to understand that this
  operation removes the instance data completely. "Terminate" is a strong
  word: a native English speaker can infer that the operation destroys the
  instance for good, but it is not easy for non-native speakers to pick up
  that nuance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1502773/+subscriptions



[Yahoo-eng-team] [Bug 1522188] Re: Replace "Terminate Instance" with "Delete Instance"

2015-12-23 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1502773 ***
https://bugs.launchpad.net/bugs/1502773

It will be covered by bug 1502773.
The DocImpact flag now files a bug against Horizon itself.

I will check if documentation update is required or not as part of bug
1502773.

** This bug has been marked a duplicate of bug 1502773
   "Delete Instance" looks better over "Terminate Instance" for consistency

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1522188

Title:
  Replace "Terminate Instance" with "Delete Instance"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/231428
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/horizon" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5fc26b0a117439bc6f07ab527ca20716511947ca
  Author: Akihiro Motoki 
  Date:   Tue Oct 6 20:24:22 2015 +0900

  Replace "Terminate Instance" with "Delete Instance"
  
  "Delete" is being used almost everywhere in OpenStack Dashboard
  except the instance panel. Using "Delete" looks more consistent.
  In addition, "Delete" tells non-native English speakers that
  deleted instances will be no longer usable again compared to
  "Terminate".
  
  DocImpact
  Closes-Bug: #1502773
  Change-Id: Idccaf3c45566f20f11d02ada64c1d3934a6f3002

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1522188/+subscriptions



[Yahoo-eng-team] [Bug 1527145] Re: when port updated on one compute node, ipset in other compute nodes did not be refreshed

2015-12-23 Thread Zou Keke
This bug has been fixed on Master.
https://review.openstack.org/#/c/177159/

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527145

Title:
  when port updated on one compute node, ipset in other compute nodes
  did not be refreshed

Status in neutron:
  Fix Released

Bug description:
  I found this problem in the Kilo release, but I'm not sure whether
  it still exists on the master branch.

  =
  Reproduce steps:
  =
  (Three compute nodes,  ovs agent,   security group with ipset enabled)
  1. Launch VM1(1.1.1.1) on Compute Node1 with default security group
  2. Launch VM2(1.1.1.2) on Compute Node2 with default security group
  3. Launch VM3(1.1.1.3) on Compute Node3 with default security group
  4. Change VM1's IP address to 1.1.1.10 and use port-update to add
  the allowed address pair 1.1.1.10

  After these operations, the ipset on Compute Node1 added member
  1.1.1.10, but the ipsets on Compute Node2 and Compute Node3 were not
  refreshed, so pings from VM1 to VM2 and VM3 failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527145/+subscriptions



[Yahoo-eng-team] [Bug 1528977] [NEW] Neutron router does not work with latest iproute package included in CentOS-7.2-1511

2015-12-23 Thread Andrew Poltavchenko
Public bug reported:

It seems something changed in the new iproute version: attempts to add
more than one interface to a router now cause the errors posted at the
bottom. This affects neutron-l3-agent on CentOS-7.2-1511 and possibly
on Red Hat (I cannot check this).

A quick workaround is to simply downgrade the package:
# wget http://mirror.centos.org/centos/7.1.1503/os/x86_64/Packages/iproute-3.10.0-21.el7.x86_64.rpm
# yum -y downgrade ./iproute-3.10.0-21.el7.x86_64.rpm
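The traceback below shows ensure_namespace failing because `ip netns add` now reports "File exists" for a leftover namespace file. One generic fix is an idempotent create that treats EEXIST as success; here is a sketch of that pattern demonstrated on a plain directory (the real code would wrap `ip netns add` and needs root):

```python
import errno
import os

def ensure_created(path, create=os.mkdir):
    """Idempotent create: succeed silently if the object already
    exists, the way the l3 agent could treat 'File exists' from
    `ip netns add` for an already-present namespace file."""
    try:
        create(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
```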

Part of the /var/log/neutron/l3-agent.log:

2015-12-24 01:35:05.794 6343 ERROR neutron.agent.linux.utils [-]
Command: ['ip', 'netns', 'add', u'qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a']
Exit code: 1
Stdin:
Stdout:
Stderr: Cannot create namespace file 
"/var/run/netns/qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a": File exists

2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info [-]
Command: ['ip', 'netns', 'add', u'qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a']
Exit code: 1
Stdin:
Stdout:
Stderr: Cannot create namespace file 
"/var/run/netns/qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a": File exists
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info Traceback (most 
recent call last):
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 356, in call
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 692, 
in process
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self._process_internal_ports(agent.pd)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 396, 
in _process_internal_ports
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self.internal_network_added(p)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 328, 
in internal_network_added
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
INTERNAL_DEV_PREFIX)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 303, 
in _internal_network_added
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
prefix=prefix)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 252, 
in plug
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info bridge, 
namespace, prefix)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 483, 
in plug_new
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
namespace2=namespace)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 144, in 
add_veth
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self.ensure_namespace(namespace2)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 163, in 
ensure_namespace
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info ip = 
self.netns.add(name)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 793, in 
add
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self._as_root([], ('add', name), use_root_namespace=True)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 280, in 
_as_root
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
use_root_namespace=use_root_namespace)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 80, in 
_as_root
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 89, in 
_execute
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
log_fail_as_error=log_fail_as_error)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 159, in 
execute
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info raise 
RuntimeError(m)
2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info RuntimeError:

[Yahoo-eng-team] [Bug 1528977] Re: Neutron router not working with latest iproute2 package included in CentOS-7.2-1511

2015-12-23 Thread Andrew Poltavchenko
** Also affects: centos
   Importance: Undecided
   Status: New

** No longer affects: centos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528977

Title:
  Neutron router not working with latest iproute2 package included in
  CentOS-7.2-1511

Status in neutron:
  New

Bug description:
  Seems that something has changed in the new iproute version, and now
  attempts to add more than one interface to a router cause the errors
  posted at the bottom. This affects neutron-l3-agent on CentOS-7.2-1511
  and possibly on RedHat (cannot check this).

  A quick solution is to simply downgrade the package:
  # wget 
http://mirror.centos.org/centos/7.1.1503/os/x86_64/Packages/iproute-3.10.0-21.el7.x86_64.rpm
  # yum -y downgrade ./iproute-3.10.0-21.el7.x86_64.rpm
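Besides downgrading iproute, the "File exists" failure suggests an agent-side workaround: only create a namespace when it is not already listed. A minimal sketch, assuming plain `ip netns list` output; the helper names are illustrative, not neutron's actual API:

```python
def parse_netns_list(output):
    """Extract namespace names from `ip netns list` output.

    Newer iproute may append ' (id: N)' after the name, so keep
    only the first token on each non-empty line.
    """
    return [line.split()[0] for line in output.splitlines() if line.strip()]


def ensure_namespace(name, existing, add_cmd):
    """Create the namespace only when it is absent, sidestepping the
    "Cannot create namespace file ...: File exists" failure."""
    if name in existing:
        return False
    add_cmd(["ip", "netns", "add", name])
    return True
```

In a real agent, `existing` would come from running `ip netns list` and `add_cmd` would be the rootwrap-based command executor.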

  Part of the /var/log/neutron/l3-agent.log:

  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.linux.utils [-]
  Command: ['ip', 'netns', 'add', 
u'qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: Cannot create namespace file 
"/var/run/netns/qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a": File exists

  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info [-]
  Command: ['ip', 'netns', 'add', 
u'qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: Cannot create namespace file 
"/var/run/netns/qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a": File exists
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 356, in call
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 692, 
in process
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self._process_internal_ports(agent.pd)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 396, 
in _process_internal_ports
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self.internal_network_added(p)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 328, 
in internal_network_added
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
INTERNAL_DEV_PREFIX)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 303, 
in _internal_network_added
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
prefix=prefix)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 252, 
in plug
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info bridge, 
namespace, prefix)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 483, 
in plug_new
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
namespace2=namespace)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 144, in 
add_veth
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self.ensure_namespace(namespace2)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 163, in 
ensure_namespace
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info ip = 
self.netns.add(name)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 793, in 
add
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self._as_root([], ('add', name), use_root_namespace=True)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 280, in 
_as_root
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
use_root_namespace=use_root_namespace)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 80, in 
_as_root
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 

[Yahoo-eng-team] [Bug 1528981] [NEW] keystone fernet cannot work with mod wsgi anymore

2015-12-23 Thread Dave Chen
Public bug reported:

With the latest code, fernet cannot work anymore due to this change
(Change-Id: I0723cd50bbb464c38c9efcf1888e39d14950997b).

The stack trace looks like this:

2015-12-23 10:47:53.526487 9923 DEBUG passlib.registry 
[req-e4501bef-5f1e-4bd3-8e1b-7320093b767b - - - - -] registered 'sha512_crypt' 
handler:  
register_crypt_handler 
/usr/local/lib/python2.7/dist-packages/passlib/registry.py:284
2015-12-23 10:47:53.625320 9923 INFO keystone.token.providers.fernet.utils 
[req-e4501bef-5f1e-4bd3-8e1b-7320093b767b - - - - -] Loaded 2 encryption keys 
(max_active_keys=3) from: /etc/keystone/fernet-keys/
2015-12-23 10:47:53.735808 mod_wsgi (pid=9923): Exception occurred processing 
WSGI script '/usr/local/bin/keystone-wsgi-public'.
2015-12-23 10:47:53.735856 TypeError: expected byte string object for header 
value, value of type unicode found


Need to identify which change from this commit
(https://review.openstack.org/#/c/259563/) caused the regression
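The TypeError comes from mod_wsgi's strict check that, under Python 2, response header values are native byte strings (PEP 3333 requires native strings, encodable as Latin-1). A hedged sketch of the usual remedy, encoding header pairs before they reach the server; this is illustrative, not the actual keystone patch:

```python
def encode_headers(headers):
    """Return header (name, value) pairs as byte strings.

    mod_wsgi rejects unicode header values with "expected byte string
    object for header value", so encode anything textual as Latin-1,
    the encoding the WSGI spec assumes for header data.
    """
    encoded = []
    for name, value in headers:
        if not isinstance(name, bytes):
            name = name.encode('latin-1')
        if not isinstance(value, bytes):
            value = value.encode('latin-1')
        encoded.append((name, value))
    return encoded
```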

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Dave Chen (wei-d-chen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1528981

Title:
  keystone fernet cannot work with mod wsgi anymore

Status in OpenStack Identity (keystone):
  New

Bug description:
  With the latest code, fernet cannot work anymore due to this change id
  (Change-Id: I0723cd50bbb464c38c9efcf1888e39d14950997b).

  The stacktrace like this,

  2015-12-23 10:47:53.526487 9923 DEBUG passlib.registry 
[req-e4501bef-5f1e-4bd3-8e1b-7320093b767b - - - - -] registered 'sha512_crypt' 
handler:  
register_crypt_handler 
/usr/local/lib/python2.7/dist-packages/passlib/registry.py:284
  2015-12-23 10:47:53.625320 9923 INFO keystone.token.providers.fernet.utils 
[req-e4501bef-5f1e-4bd3-8e1b-7320093b767b - - - - -] Loaded 2 encryption keys 
(max_active_keys=3) from: /etc/keystone/fernet-keys/
  2015-12-23 10:47:53.735808 mod_wsgi (pid=9923): Exception occurred processing 
WSGI script '/usr/local/bin/keystone-wsgi-public'.
  2015-12-23 10:47:53.735856 TypeError: expected byte string object for header 
value, value of type unicode found

  
  Need identity which change from this commit 
(https://review.openstack.org/#/c/259563/) cause the regression

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1528981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528771] Re: Some translations aren't correct for zh_CN django.po

2015-12-23 Thread Akihiro Motoki
Translation bugs should be fixed in Zanata
(https://translate.openstack.org/) and are tracked on the openstack-i18n
Launchpad.

** Also affects: openstack-i18n
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: In Progress => Invalid

** No longer affects: horizon

** Tags removed: i18n
** Tags added: horizon

** Tags added: simplified-chinese

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528771

Title:
  Some translations aren't correct for zh_CN django.po

Status in openstack i18n:
  New

Bug description:
  1) In line
  
https://github.com/openstack/horizon/blob/master/horizon/locale/zh_CN/LC_MESSAGES/django.po#L354,
  we don't leave a blank space between Chinese words. Taking "您无权访问%s" as
  an example, we use "您无权访问该页面" instead of "您无权访问 该页面".

  2) In line
  
https://github.com/openstack/horizon/blob/master/horizon/locale/zh_CN/LC_MESSAGES/django.po#L350,
  punctuation symbol ":" in Simplified Chinese should be ":"

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-i18n/+bug/1528771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524602] Re: Return availability_zone_hints as string when net-create

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/256261
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=999ee86deaab2d72069f9d16c3a7d893c1426fc4
Submitter: Jenkins
Branch:master

commit 999ee86deaab2d72069f9d16c3a7d893c1426fc4
Author: Hirofumi Ichihara 
Date:   Fri Dec 11 16:19:27 2015 +0900

Return availability_zone_hints as list when net-create

In neutron with availability zone extensions, we receive
the return value with availability_zone_hints as string
although we expect list.

Change-Id: Ifb1d741324725f3f2692962a02bf3d870611fafb
Closes-bug: #1524602


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524602

Title:
  Return availability_zone_hints as string when net-create

Status in neutron:
  Fix Released

Bug description:
  In neutron with availability zone extensions, we receive the return
  value with availability_zone_hints as a string although we expect a list.

  
 $ neutron net-create --availability-zone-hint zone-1 
--availability-zone-hint zone-2 net1
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   | ["zone-1", "zone-2"] |
  | id| 0ef0597c-4aab-4235-8513-bf5d8304fe64 |
  | mtu   | 0|
  | name  | net1 |
  | port_security_enabled | True |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 1054 |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tenant_id | 32f5512c7b3f47fb8924588ff9ad603b |
  +---+--+
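Until a fixed server is deployed, a client could normalize the field itself. A small defensive sketch, assuming the buggy server returns the hints as a JSON-encoded string (the helper name is made up for illustration):

```python
import json


def normalize_az_hints(value):
    """Coerce availability_zone_hints to a list.

    A buggy server returns a JSON-encoded string such as
    '["zone-1", "zone-2"]'; a fixed one returns a real list.
    """
    if isinstance(value, str):
        return json.loads(value) if value else []
    return list(value)
```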

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528137] Re: creating meter label rule doesn't work properly

2015-12-23 Thread Akihiro Motoki
The meaning of remote_ip_prefix for a metering label rule is unclear and
should be updated in the documentation.

According to the discussion in the review in neutron,
for the egress direction, remote_ip_prefix is a destination IP address or range,
and for the ingress direction, it means a source IP address or range.
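That mapping can be sketched as a tiny helper; the function is illustrative, not the metering driver's actual code:

```python
def remote_ip_match_args(direction, remote_ip_prefix):
    """Return the iptables address-match flag for a metering rule.

    Egress: remote_ip_prefix is the destination (-d).
    Ingress: remote_ip_prefix is the source (-s).
    """
    if direction == 'egress':
        return ['-d', remote_ip_prefix]
    if direction == 'ingress':
        return ['-s', remote_ip_prefix]
    raise ValueError('direction must be "egress" or "ingress"')
```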

** Also affects: openstack-api-site
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528137

Title:
  creating meter label rule doesn't work properly

Status in neutron:
  In Progress
Status in openstack-api-site:
  New

Bug description:
  A rule created by the following API counts packets between a router
  which connects to the external network and the connection destination
  device.

API: POST /v2.0/metering/metering-label-rules

  For outbound traffic from the external router, the destination should be
  the remote_ip, and for inbound traffic, the sender should be the remote_ip.
  But it is actually reversed.

  This is because the option used to create the iptables rule is reversed.

code:
  
https://github.com/openstack/neutron/blob/master/neutron/services/metering/drivers/iptables/iptables_driver.py#L176

  I'll show an example where a meter label rule was created with
  remote_ip set to 192.168.0.0/16.

  
  [Actual results]

  $ neutron meter-label-create test-label --tenant-id 
2a023bd32f014e44b60b591cbd151514
  Created a new metering_label:
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | d35d0464-f872-43c7-8dd8-850657da59ef |
  | name| test-label   |
  | shared  | False|
  | tenant_id   | 2a023bd32f014e44b60b591cbd151514 |
  +-+--+
  $ neutron meter-label-create test-label2 --tenant-id 
2a023bd32f014e44b60b591cbd151514
  Created a new metering_label:
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | 61c344ce-0438-4cd3-bbd8-a4d5e0dbce6f |
  | name| test-label2  |
  | shared  | False|
  | tenant_id   | 2a023bd32f014e44b60b591cbd151514 |
  +-+--+
  $ neutron meter-label-rule-create --tenant-id 
2a023bd32f014e44b60b591cbd151514 --direction egress 
d35d0464-f872-43c7-8dd8-850657da59ef 192.168.0.0/16

  $ neutron meter-label-rule-create --tenant-id
  2a023bd32f014e44b60b591cbd151514 --direction ingress
  61c344ce-0438-4cd3-bbd8-a4d5e0dbce6f 192.168.0.0/16

  $ neutron meter-label-rule-list
  
+--+--+---+--+
  | id   | excluded | direction | 
remote_ip_prefix |
  
+--+--+---+--+
  | 3e426537-61f4-44ac-a67a-e66ce26dc11b | False| egress| 
192.168.0.0/16   |
  | 4d669406-173c-4eea-af21-00430719cbfa | False| ingress   | 
192.168.0.0/16   |
  
+--+--+---+--+

  $ sudo ip netns exec qrouter-b72b789e-8ca9-465e-a2d1-98f725a7042f 
iptables-save
  ...
  -A neutron-meter-r-61c344ce-043 -d 192.168.0.0/16 -i qg-708e8abf-bc -j 
neutron-meter-l-61c344ce-043
  -A neutron-meter-r-d35d0464-f87 -s 192.168.0.0/16 -o qg-708e8abf-bc -j 
neutron-meter-l-d35d0464-f87
  ...

  
   [The expected iptables rules]

  -A neutron-meter-r-61c344ce-043 -s 192.168.0.0/16 -i qg-708e8abf-bc -j 
neutron-meter-l-61c344ce-043
  -A neutron-meter-r-d35d0464-f87 -d 192.168.0.0/16 -o qg-708e8abf-bc -j 
neutron-meter-l-d35d0464-f87

  
  [Example where a required packet is not counted]

  ubuntu@test-vm(10.0.0.3):~$ ping 192.168.0.3 -c 3
  PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
  64 bytes from 192.168.0.3: icmp_seq=1 ttl=62 time=1.13 ms
  64 bytes from 192.168.0.3: icmp_seq=2 ttl=62 time=0.618 ms
  64 bytes from 192.168.0.3: icmp_seq=3 ttl=62 time=0.652 ms

  --- 192.168.0.3 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2000ms
  rtt min/avg/max/mdev = 0.618/0.801/1.133/0.235 ms

  $ sudo ip netns exec qrouter-b72b789e-8ca9-465e-a2d1-98f725a7042f iptables -t 
filter -L neutron-meter-l-d35d0464-f87 -n -v -x
  Chain neutron-meter-l-d35d0464-f87 (2 references)
  pkts  bytes target prot opt in out source   
destination
 00all  --  *  *   0.0.0.0/0
0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528137/+subscriptions

-- 

[Yahoo-eng-team] [Bug 1528967] [NEW] Horizon doesn't create new scoped token when user role is removed

2015-12-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When a user logs into Horizon, an unscoped token is created, from which a
scoped token is obtained. I am logged into Horizon and remove myself
from a project which is not the current active project. This results in
all my scoped tokens getting invalidated. I have some API calls in the
middleware that require authorization, which fail because the token is
invalid. Horizon will throw an Unauthorized error (see attachment), and
the only way to recover from this is to clear cookies, log out and log
back in again.

Horizon should immediately obtain a new scoped token if the previous token
is invalidated. Alternatively, keystone should not invalidate all tokens
(for all projects) when a user is removed from one project.
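The first suggestion amounts to a retry-with-rescope wrapper around authorized calls. A minimal sketch with hypothetical callables standing in for the client's request and token-rescoping functions:

```python
class Unauthorized(Exception):
    """Stand-in for the client library's 401 exception."""


def call_with_reauth(request, token, rescope):
    """Run request(token); on a 401, rescope once and retry.

    `rescope` exchanges the still-valid unscoped token for a fresh
    scoped token, so one invalidated token does not force a re-login.
    """
    try:
        return request(token)
    except Unauthorized:
        return request(rescope())
```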

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Horizon doesn't create new scoped token when user role is removed
https://bugs.launchpad.net/bugs/1528967
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524220] Re: can update the gateway of subnet with needless "0" in the ip address via cli

2015-12-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/255217
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d515a8db880388092b61f4649f8dd0b29a2c5de3
Submitter: Jenkins
Branch:master

commit d515a8db880388092b61f4649f8dd0b29a2c5de3
Author: Bo Chi 
Date:   Thu Dec 10 08:42:26 2015 -0500

reject leading '0's in IPv4 addr to avoid ambiguity

If a IPv4 address has a leading '0', it will be interpreted as an
octal number, e.g. 10.0.0.011 will be interpreted as 10.0.0.9.
Users who are not familiar with or not expecting octal interpretation
will not get the address they intended. Since there is no standard
around this, we reject leading '0's to avoid ambiguity.

APIImpact
Change-Id: I3163ba13468c47d385585221d2167fbe31d24010
Closes-Bug: #1524220
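The check described in the commit message can be sketched as a standalone predicate (a hypothetical helper, not the validator the patch actually touches):

```python
def has_leading_zero_octet(address):
    """True if any dotted-quad octet carries a leading '0'.

    '10.0.0.011' could be read as octal (10.0.0.9) by some tools, so
    such addresses are rejected instead of silently reinterpreted.
    """
    return any(len(octet) > 1 and octet.startswith('0')
               for octet in address.split('.'))
```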


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524220

Title:
  can update the gateway of subnet with needless "0" in the ip address
  via cli

Status in neutron:
  Fix Released

Bug description:
  [Summary]
  can update the gateway of subnet with needless "0" in the ip address via cli

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  if the gateway of a subnet is updated with a needless "0" in the IP address,
  the needless "0" should be cut off

  [Reproduceable or not]
  reproducible

  [Recreate Steps]
  1) create 1 subnet:
  root@45-59:/opt/stack/devstack# neutron subnet-show sub-test
  +---++
  | Field | Value  |
  +---++
  | allocation_pools  | {"start": "100.0.0.100", "end": "100.0.0.200"} |
  | cidr  | 100.0.0.0/24   |
  | dns_nameservers   ||
  | enable_dhcp   | True   |
  | gateway_ip| 100.0.0.1  |
  | host_routes   ||
  | id| 00dfe80b-911f-4cf1-8874-77639e6082c5   |
  | ip_version| 4  |
  | ipv6_address_mode ||
  | ipv6_ra_mode  ||
  | name  | sub-test   |
  | network_id| 79292c3a-1c85-4014-b0d7-0f078f1a4ee8   |
  | subnetpool_id ||
  | tenant_id | 71209fa21a7343e3b778ec5f4ff45252   |
  +---++

  2)update the gateway of subnet with needless "0" in the ip address:
  root@45-59:/opt/stack/devstack# neutron subnet-update  --gateway 100.0.0.001 
sub-test
  Updated subnet: sub-test
  root@45-59:/opt/stack/devstack# neutron subnet-show sub-test
  +---++
  | Field | Value  |
  +---++
  | allocation_pools  | {"start": "100.0.0.100", "end": "100.0.0.200"} |
  | cidr  | 100.0.0.0/24   |
  | dns_nameservers   ||
  | enable_dhcp   | True   |
  | gateway_ip| 100.0.0.001>>>ISSUE, should be 100.0.0.1
  |
  | host_routes   ||
  | id| 00dfe80b-911f-4cf1-8874-77639e6082c5   |
  | ip_version| 4  |
  | ipv6_address_mode ||
  | ipv6_ra_mode  ||
  | name  | sub-test   |
  | network_id| 79292c3a-1c85-4014-b0d7-0f078f1a4ee8   |
  | subnetpool_id ||
  | tenant_id | 71209fa21a7343e3b778ec5f4ff45252   |
  +---++
  root@45-59:/opt/stack/devstack# 

  3) if the gateway of the subnet is updated with a needless "0" in the IP
  address via the dashboard, this issue does not occur

  
  [Configuration]
  reproducible bug, no need

  [logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1528967] [NEW] Horizon doesn't create new scoped token when user role is removed

2015-12-23 Thread Mitali Parthasarathy
Public bug reported:

When a user logs into Horizon, an unscoped token is created, from which a
scoped token is obtained. I am logged into Horizon and remove myself
from a project which is not the current active project. This results in
all my scoped tokens getting invalidated. I have some API calls in the
middleware that require authorization, which fail because the token is
invalid. Horizon will throw an Unauthorized error (see attachment), and
the only way to recover from this is to clear cookies, log out and log
back in again.

Horizon should immediately obtain a new scoped token if the previous token
is invalidated. Alternatively, keystone should not invalidate all tokens
(for all projects) when a user is removed from one project.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2015-12-16 at 6.41.53 PM.png"
   
https://bugs.launchpad.net/bugs/1528967/+attachment/4539573/+files/Screen%20Shot%202015-12-16%20at%206.41.53%20PM.png

** Project changed: django-openstack-auth => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528967

Title:
  Horizon doesn't create new scoped token when user role is removed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a user logs into Horizon, an unscoped token is created, from which
  a scoped token is obtained. I am logged into Horizon and remove myself
  from a project which is not the current active project. This results
  in all my scoped tokens getting invalidated. I have some API calls in
  the middleware that require authorization, which fail because the token
  is invalid. Horizon will throw an Unauthorized error (see attachment),
  and the only way to recover from this is to clear cookies, log out and
  log back in again.

  Horizon should immediately obtain a new scoped token if the previous
  token is invalidated. Alternatively, keystone should not invalidate all
  tokens (for all projects) when a user is removed from one project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1528967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529012] [NEW] Miss policy checks in image panels

2015-12-23 Thread Wang Bo
Public bug reported:

There is no policy checking code in image panels: project/images,
project/ngimages,  admin/images.

As in the ngusers code:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/static/dashboard/identity/users/table/table.controller.js#L54.

We should add policy checks for "get_images" in the image panels
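For illustration, a toy model of what such a check evaluates; Horizon's real policy engine reads oslo.policy rule files, so every name here is a simplification:

```python
def check(rules, credentials, policy):
    """Toy policy check: each (service, rule) pair must map to a
    role that the credentials carry."""
    for service, rule in rules:
        required = policy.get((service, rule))
        if required is None or required not in credentials.get("roles", []):
            return False
    return True
```

A panel would then hide or disable its table actions whenever the check for (("image", "get_images"),) evaluates false for the current request.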

** Affects: horizon
 Importance: Undecided
 Assignee: Wang Bo (chestack)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1529012

Title:
  Miss policy checks in image panels

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is no policy checking code in image panels: project/images,
  project/ngimages,  admin/images.

  As in the ngusers code:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/identity/static/dashboard/identity/users/table/table.controller.js#L54.

  We should add policy checks for "get_images" in the image panels

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1529012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528899] Re: heat event list in horizon issue

2015-12-23 Thread Sergey Kraynev
Hi jear,

If you say that the CLI works fine, then it means that it needs to be
fixed in the Horizon project.

If you can reproduce it with only the Heat CLI or API, i.e. without Horizon,
please create another bug.
I will assign this one to the Horizon team.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528899

Title:
  heat event list in horizon issue

Status in heat:
  Invalid
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hello,

  In devstack stable/liberty, in Horizon, when I try to display the heat event
  list, I get a never-ending spinning wheel.
  The CLI is working well.

  When I display the Event tab in a new browser tab: I get:
  http://pastebin.com/Bf3ViPv0

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1528899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511167] Re: Cinder client incorrectly list the volume size units in GB, the API is actually in GiB

2015-12-23 Thread NidhiMittalHada
** Changed in: python-manilaclient
   Status: In Progress => Fix Committed

** Changed in: python-manilaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511167

Title:
  Cinder client incorrectly list the volume size units in GB, the API is
  actually in GiB

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-manilaclient:
  Fix Released

Bug description:
  Both Horizon and the cinder client document the size parameter in
  gigabytes (GB), but the API docs (both v1 and v2) list the size units as
  gibibytes (GiB). The correct unit is gibibytes (GiB).
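The distinction is not cosmetic: the two units differ by roughly 7%. A quick illustration:

```python
def gib_to_bytes(gib):
    # Gibibyte: binary unit, 2**30 bytes.
    return gib * 2 ** 30


def gb_to_bytes(gb):
    # Gigabyte: decimal unit, 10**9 bytes.
    return gb * 10 ** 9
```

So a volume labelled "10 GB" but sized by the API in GiB actually holds about 10.74 decimal gigabytes.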

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529007] [NEW] policy topic document has been out of date

2015-12-23 Thread Wang Bo
Public bug reported:

When I tried to fix policy-related bugs (#1411239), I found that
horizon/doc/source/topics/policy.rst is out of date.

We no longer use "oslo-incubator" in horizon.
There is a file "openstack_dashboard/openstack/common/policy.py" in horizon

Refer to:
https://github.com/openstack/horizon/blob/master/doc/source/topics/policy.rst

** Affects: horizon
 Importance: Undecided
 Assignee: Wang Bo (chestack)
 Status: New

** Description changed:

+ When I tried fix policy related bugs(#1411239) found that
+ horizon/doc/source/topics/policy.rst has been out of date.
+ 
+ We do not use "oslo-incubator" in horizon now. 
+ There is file "openstack_dashboard/openstack/common/policy.py" in hoirzon
+ 
  Refer to:
  https://github.com/openstack/horizon/blob/master/doc/source/topics/policy.rst
- 
- We do not use "oslo-incubator" in horizon now. There is file
- "openstack_dashboard/openstack/common/policy.py" in hoirzon

** Changed in: horizon
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1529007

Title:
  policy topic document has been out of date

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I tried to fix policy-related bugs (#1411239), I found that
  horizon/doc/source/topics/policy.rst is out of date.

  We no longer use "oslo-incubator" in horizon.
  There is a file "openstack_dashboard/openstack/common/policy.py" in horizon

  Refer to:
  https://github.com/openstack/horizon/blob/master/doc/source/topics/policy.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1529007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529014] [NEW] Something wrong in logout dashboard when instance is resizing

2015-12-23 Thread User
Public bug reported:

Steps to reproduce:


1. Resize an instance.
2. Log out and log in again after a few minutes.
3. Go to the instance panel and notice that the instance is in error status,
with an "authentication required" prompt.


I think the problem is that when we log out of the dashboard, it performs the
delete-token call (delete_token(endpoint=endpoint, token_id=token.id));
however, the instance resize hasn't finished yet.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1529014

Title:
  Something wrong in logout dashboard when instance is resizing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  
  1. Resize an instance.
  2. Log out and log in again after a few minutes.
  3. Go to the instance panel and notice that the instance is in error status,
  with an "authentication required" prompt.


  I think the problem is that when we log out of the dashboard, it performs
  the delete-token call (delete_token(endpoint=endpoint, token_id=token.id));
  however, the instance resize hasn't finished yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1529014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528805] [NEW] create_ipsec_site_connection call failed because of code inconsistency

2015-12-23 Thread Andrey Pavlov
Public bug reported:

EC2-API gating checks VPN functionality, and it has been broken for two days
due to vpnaas. The gate enables the vpnaas plugin and neutron.

The gate fails on the call "neutron.create_ipsec_site_connection":

REQ: curl -g -i -X POST http://127.0.0.1:9696/v2.0/vpn/ipsec-site-connections.json -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}88227c79fef8266afa89bd559e6f92bb99dd17df" -d '{"ipsec_site_connection": {"ikepolicy_id": "9f00451d-aaa8-4465-97f1-f45b6890771e", "peer_cidrs": ["172.16.25.0/24"], "mtu": 1427, "ipsecpolicy_id": "31b17d19-a898-472b-8944-fa34321b67e5", "vpnservice_id": "7586b8d4-1c1e-49dd-91cc-e1055138ba6d", "psk": "p.Hu0SlBXoSd8sF1WV.QDSy32sVCjKON", "peer_address": "198.51.100.77", "peer_id": "198.51.100.77", "initiator": "response-only", "name": "vpn-360307d8/subnet-031705ee"}}' _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:198


It gets an exception back from neutron, and the neutron logs show:

2015-12-22 21:28:38.620 DEBUG neutron.api.v2.base [req-86bbd8b1-af64-4d82-86d4-15daa40a2a8f user-29edecd6 project-3a1e0ba2] Request body: {u'ipsec_site_connection': {u'psk': u'p.Hu0SlBXoSd8sF1WV.QDSy32sVCjKON', u'peer_cidrs': [u'172.16.25.0/24'], u'vpnservice_id': u'7586b8d4-1c1e-49dd-91cc-e1055138ba6d', u'initiator': u'response-only', u'mtu': 1427, u'ikepolicy_id': u'9f00451d-aaa8-4465-97f1-f45b6890771e', u'ipsecpolicy_id': u'31b17d19-a898-472b-8944-fa34321b67e5', u'peer_address': u'198.51.100.77', u'peer_id': u'198.51.100.77', u'name': u'vpn-360307d8/subnet-031705ee'}} prepare_request_body /opt/stack/new/neutron/neutron/api/v2/base.py:645
2015-12-22 21:28:38.620 ERROR neutron.api.v2.resource [req-86bbd8b1-af64-4d82-86d4-15daa40a2a8f user-29edecd6 project-3a1e0ba2] create failed
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource Traceback (most recent call last):
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 147, in wrapper
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 414, in create
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     allow_bulk=self._allow_bulk)
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 680, in prepare_request_body
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     attributes.convert_value(attr_info, res_dict, webob.exc.HTTPBadRequest)
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/attributes.py", line 919, in convert_value
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     res = validators[rule](res_dict[attr], attr_vals['validate'][rule])
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron-vpnaas/neutron_vpnaas/extensions/vpnaas.py", line 167, in _validate_subnet_list_or_none
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource     attr._validate_subnet_list(data, key_specs)
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource AttributeError: 'module' object has no attribute '_validate_subnet_list'
2015-12-22 21:28:38.620 16707 ERROR neutron.api.v2.resource 
2015-12-22 21:28:38.625 INFO neutron.wsgi [req-86bbd8b1-af64-4d82-86d4-15daa40a2a8f user-29edecd6 project-3a1e0ba2] 127.0.0.1 - - [22/Dec/2015 21:28:38] "POST /v2.0/vpn/ipsec-site-connections.json HTTP/1.1" 500 383 0.092068

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528805

Title:
  create_ipsec_site_connection call failed because of code inconsistency

Status in neutron:
  New

Bug description:
  EC2-API gating checks VPN functionality, and it has been broken for two days
  due to vpnaas. The gate enables the vpnaas plugin and neutron.

  The gate fails on the call "neutron.create_ipsec_site_connection":

  REQ: 

[Yahoo-eng-team] [Bug 1528777] [NEW] Remove not required packages in requirements.txt

2015-12-23 Thread Wang Bo
Public bug reported:

We need to remove the unused packages from requirements.txt; otherwise
requirements.txt will be updated unnecessarily whenever the global
requirements are updated.
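
A quick way to spot such unused entries is to compare the names listed in requirements.txt against what the codebase actually imports. A hedged helper for the first half of that, parsing out the bare package names; the parsing is deliberately simplified (real requirement lines can carry environment markers, extras, and pip options that need fuller handling):

```python
import re

def requirement_names(text):
    """Return lowercased package names from requirements.txt content."""
    names = []
    for line in text.splitlines():
        # Drop trailing comments and surrounding whitespace.
        line = line.split('#', 1)[0].strip()
        if not line or line.startswith('-'):
            continue  # skip blanks and pip options such as -e / -r
        m = re.match(r'[A-Za-z0-9][A-Za-z0-9._-]*', line)
        if m:
            names.append(m.group(0).lower())
    return names
```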

** Affects: horizon
 Importance: Undecided
 Assignee: Wang Bo (chestack)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Wang Bo (chestack)

** Description changed:

- Remove the not using packages in requirements.txt, Otherwise
- the requirements.txt will be updated when global requirements updated.
+ We need to remove the unused packages from requirements.txt; otherwise
+ requirements.txt will be updated unnecessarily whenever the global
+ requirements are updated.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528777

Title:
  Remove not required packages in requirements.txt

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We need to remove the unused packages from requirements.txt; otherwise
  requirements.txt will be updated unnecessarily whenever the global
  requirements are updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1528777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp