[Yahoo-eng-team] [Bug 1628135] [NEW] Integrate Identity back end with LDAP in Administrator Guide

2016-09-27 Thread Dave Walker
Public bug reported:

The guide states that both "keystone.identity.backends.ldap.Identity"
and "user_attribute_ignore" can be used, but as you can see below these
are deprecated (and in fact not working in current Newton).


==> keystone-apache-admin-error.log <==
2016-09-27 10:35:13.891436 2016-09-27 10:35:13.890 24 WARNING stevedore.named 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Could not load 
keystone.identity.backends.ldap.Identity

==> keystone.log <==
2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.

==> keystone-apache-admin-error.log <==
2016-09-27 10:35:13.914683 2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.
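
A quick way to see why the first warning appears (a minimal sketch, assuming
keystone's usual "keystone.identity" entry-point namespace; illustrative only,
not part of the original report): stevedore now resolves the identity driver by
its entry-point name, so keystone.conf should use the short name rather than
the full class path that the guide shows.

    # Load the identity driver the way keystone does in Newton. The short
    # entry-point name works; the full dotted path from the guide is what
    # triggers "Could not load keystone.identity.backends.ldap.Identity".
    from stevedore import driver

    # Equivalent of "[identity] driver = ldap" in keystone.conf.
    mgr = driver.DriverManager(namespace='keystone.identity',
                               name='ldap',
                               invoke_on_load=False)
    print(mgr.driver)   # <class 'keystone.identity.backends.ldap.Identity'>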


---
Release: 0.9 on 2016-09-27 12:00
SHA: 974a8b3e88ffdda8b621a6befc124d4f9ca9bdc7
Source: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/admin-guide/source/keystone-integrate-identity-backend-ldap.rst
URL: 
http://docs.openstack.org/admin-guide/keystone-integrate-identity-backend-ldap.html

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: openstack-manuals
 Importance: Undecided
 Status: New


** Tags: admin-guide

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628135

Title:
  Integrate Identity back end with LDAP in Administrator Guide

Status in OpenStack Identity (keystone):
  New
Status in openstack-manuals:
  New

Bug description:
  The guide states that both "keystone.identity.backends.ldap.Identity"
  and "user_attribute_ignore" can be used, but as you can see below these
  are deprecated (and in fact not working in current Newton).

  
  ==> keystone-apache-admin-error.log <==
  2016-09-27 10:35:13.891436 2016-09-27 10:35:13.890 24 WARNING stevedore.named 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Could not load 
keystone.identity.backends.ldap.Identity

  ==> keystone.log <==
  2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.

  ==> keystone-apache-admin-error.log <==
  2016-09-27 10:35:13.914683 2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.

  
  ---
  Release: 0.9 on 2016-09-27 12:00
  SHA: 974a8b3e88ffdda8b621a6befc124d4f9ca9bdc7
  Source: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/admin-guide/source/keystone-integrate-identity-backend-ldap.rst
  URL: 
http://docs.openstack.org/admin-guide/keystone-integrate-identity-backend-ldap.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598860] [NEW] live migration failing

2016-07-04 Thread Dave Walker
Public bug reported:

Deploying OpenStack current master via kolla source, with kvm and
ceph/rbd enabled, and attempting to perform a live-migration, the
instance becomes stuck in the migrating state with the following
appearing in the nova-compute log.

Looking at the nova git log, I'm seeing a bunch of live-migration
changes in the last few days.  I suspect there might have been a
regression.

2016-07-04 10:00:15.643 1 INFO nova.compute.resource_tracker 
[req-1f6df1d8-a88a-4f27-8ce2-609ea25da4e2 - - - - -] Compute_service record 
updated for compute01:compute01
2016-07-04 10:00:56.731 1 ERROR root [req-a2b97b2c-f479-4b7c-b3d9-142c1ab1b25f 
b3bedc85b7674ce2b7e0589a7427dc81 22ab55c3ffe343639d872b1e4db6abb4 - - -] 
Original exception being dropped: ['Traceback (most recent call last):\n', '  
File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", 
line 66, in wrapped\nreturn f(self, context, *args, **kw)\n', '  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/utils.py", line 
608, in decorated_function\n*args, **kwargs)\n', '  File 
"/usr/lib64/python2.7/inspect.py", line 980, in getcallargs\n\'arguments\' 
if num_required > 1 else \'argument\', num_total))\n', 'TypeError: 
live_migration() takes exactly 7 arguments (6 given)\n']
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server 
[req-a2b97b2c-f479-4b7c-b3d9-142c1ab1b25f b3bedc85b7674ce2b7e0589a7427dc81 
22ab55c3ffe343639d872b1e4db6abb4 - - -] Exception during message handling
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", 
line 133, in _process_incoming
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 150, in dispatch
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 121, in _do_dispatch
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", 
line 71, in wrapped
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server f, self, context, 
*args, **kw)
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", 
line 85, in _get_call_dict
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server context, *args, 
**kw)
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python2.7/inspect.py", line 980, in getcallargs
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server 'arguments' if 
num_required > 1 else 'argument', num_total))
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server TypeError: 
live_migration() takes exactly 7 arguments (6 given)
2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server
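
The TypeError at the bottom comes from inspect.getcallargs() inside
nova.exception_wrapper, which rebuilds the call arguments while handling the
original exception.  A minimal sketch of the mismatch (assuming, for
illustration only, that the extra required parameter is a new "migration"
argument added to the compute manager; the names are not taken from the code):

    import inspect

    # New-style manager signature with one more required argument than the
    # payload an older RPC caller sends.
    def live_migration(self, context, dest, instance, block_migration,
                       migration, migrate_data):
        pass

    # Old-style payload: only five keyword arguments plus "self".
    old_kwargs = {'context': None, 'dest': 'compute02', 'instance': object(),
                  'block_migration': False, 'migrate_data': {}}

    try:
        inspect.getcallargs(live_migration, object(), **old_kwargs)
    except TypeError as exc:
        # On Python 2.7 this prints:
        #   live_migration() takes exactly 7 arguments (6 given)
        print(exc)

In other words the compute manager and the RPC caller disagree on the
signature, which is consistent with the suspected regression rather than an
environment problem.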

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598860

Title:
  live migration failing

Status in OpenStack Compute (nova):
  New

Bug description:
  Deploying OpenStack current master via kolla source, with kvm and
  ceph/rbd enabled, and attempting to perform a live-migration, the
  instance becomes stuck in the migrating state with the following
  appearing in the nova-compute log.

  Looking at the nova git log, I'm seeing a bunch of live-migration
  changes in the last few days.  I suspect there might have been a
  regression.

  2016-07-04 10:00:15.643 1 INFO nova.compute.resource_tracker 
[req-1f6df1d8-a88a-4f27-8ce2-609ea25da4e2 - - - - -] Compute_service record 
updated for compute01:compute01
  2016-07-04 10:00:56.731 1 ERROR root 
[req-a2b97b2c-f479-4b7c-b3d9-142c1ab1b25f b3bedc85b7674ce2b7e0589a7427dc81 
22ab55c3ffe343639d872b1e4db6abb4 - - -] Original exception being dropped: 
['Traceback (most recent call last):\n', '  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", 
line 66, in wrapped\nreturn f(self, context, *args, **kw)\n', '  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/utils.py", line 
608, in decorated_function\n*args, **kwargs)\n', '  File 
"/usr/lib64/python2.7/inspect.py", line 980, in getcallargs\n\'arguments\' 
if num_required > 1 else \'argument\', num_total))\n', 'TypeError: 
live_migration() takes exactly 7 arguments (6 given)\n']
  2016-07-

[Yahoo-eng-team] [Bug 1352256] Re: Uploading a new object fails with Ceph as object storage backend using RadosGW

2016-05-10 Thread Dave Walker
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352256

Title:
  Uploading a new object fails with Ceph as object storage backend using
  RadosGW

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  While uploading a new object using Horizon, with Ceph as the object
  storage backend, it fails with the error message "Error: Unable to
  upload object".

  Ceph Release : Firefly

  Error in horizon_error.log:

  
  [Wed Jul 23 09:04:46.840751 2014] [:error] [pid 30045:tid 140685813683968] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
firefly-master.ashish.com
  [Wed Jul 23 09:04:46.842984 2014] [:error] [pid 30045:tid 140685813683968] 
WARNING:urllib3.connectionpool:HttpConnectionPool is full, discarding 
connection: firefly-master.ashish.com
  [Wed Jul 23 09:04:46.843118 2014] [:error] [pid 30045:tid 140685813683968] 
REQ: curl -i http://firefly-master.ashish.com/swift/v1/new-cont-dash/test -X 
PUT -H "X-Auth-Token: 91fc8466ce17e0d22af86de9b3343b2d"
  [Wed Jul 23 09:04:46.843227 2014] [:error] [pid 30045:tid 140685813683968] 
RESP STATUS: 411 Length Required
  [Wed Jul 23 09:04:46.843584 2014] [:error] [pid 30045:tid 140685813683968] 
RESP HEADERS: [('date', 'Wed, 23 Jul 2014 09:04:46 GMT'), ('content-length', 
'238'), ('content-type', 'text/html; charset=iso-8859-1'), ('connection', 
'close'), ('server', 'Apache/2.4.7 (Ubuntu)')]
  [Wed Jul 23 09:04:46.843783 2014] [:error] [pid 30045:tid 140685813683968] 
RESP BODY: 
  [Wed Jul 23 09:04:46.843907 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843930 2014] [:error] [pid 30045:tid 140685813683968] 
411 Length Required
  [Wed Jul 23 09:04:46.843937 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843944 2014] [:error] [pid 30045:tid 140685813683968] 
Length Required
  [Wed Jul 23 09:04:46.843951 2014] [:error] [pid 30045:tid 140685813683968] 
A request of the requested method PUT requires a valid Content-length.
  [Wed Jul 23 09:04:46.843957 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843963 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843969 2014] [:error] [pid 30045:tid 140685813683968]
  [Wed Jul 23 09:04:46.844530 2014] [:error] [pid 30045:tid 140685813683968] 
Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 
411 Length Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844555 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844607 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844900 2014] [:error] [pid 30045:tid 140685813683968] 
https://bugs.launchpad.net/cloud-archive/+bug/1352256/+subscriptions
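
  The 411 is the front end rejecting a chunked PUT with no Content-Length.
  A minimal sketch of the difference (illustrative only, reusing the URL and
  token from the log above; a radosgw behind Apache typically rejects chunked
  uploads like this):

      import requests

      url = "http://firefly-master.ashish.com/swift/v1/new-cont-dash/test"
      headers = {"X-Auth-Token": "91fc8466ce17e0d22af86de9b3343b2d"}

      def body_chunks():
          # A generator body makes requests fall back to chunked transfer
          # encoding, so no Content-Length header is sent.
          yield b"some "
          yield b"object data"

      r = requests.put(url, data=body_chunks(), headers=headers)
      print(r.status_code)   # 411 Length Required from the gateway

      # Sending the body as bytes lets requests set Content-Length, which is
      # what the dashboard needs to ensure when proxying uploads.
      r = requests.put(url, data=b"some object data", headers=headers)
      print(r.status_code)   # 201 Created once the object is stored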

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297325] Re: swap and ephemeral devices defined in the flavor are not created as a block device mapping

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297325

Title:
  swap and ephemeral devices defined in the flavor are not created as a
  block device mapping

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When booting an instance specifying the swap and/or ephemeral devices,
  those will be created as a block device mapping in the database
  together with the image and volumes if present.

  If, instead, we rely on libvirt to define the swap and ephemeral
  devices later from the specified instance type, those devices won't be
  added to the block device mapping list.

  To be consistent and to prevent any errors when trying to guess the
  device name from the existing block device mappings, we should create
  mappings for those devices if they are present in the instance type. We
  should create them at the API layer, before validating the block device
  mappings, and only if no swap or ephemeral device is defined by the
  user.
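
  A minimal sketch of that idea (field names follow the block device mapping
  v2 format for illustration; this is not nova's actual helper): derive the
  missing mappings from the flavor before validation, only when the user
  supplied none.

      def flavor_default_mappings(flavor, user_mappings):
          # Return extra BDM dicts for swap/ephemeral defined only by the flavor.
          has_swap = any(m.get('guest_format') == 'swap' for m in user_mappings)
          has_ephemeral = any(m.get('source_type') == 'blank' and
                              m.get('guest_format') != 'swap'
                              for m in user_mappings)
          extra = []
          if flavor.get('swap') and not has_swap:
              extra.append({'source_type': 'blank', 'destination_type': 'local',
                            'guest_format': 'swap', 'boot_index': -1,
                            # flavor swap is in MB, BDM sizes are in GB
                            'volume_size': flavor['swap'] // 1024})
          if flavor.get('ephemeral_gb') and not has_ephemeral:
              extra.append({'source_type': 'blank', 'destination_type': 'local',
                            'boot_index': -1,
                            'volume_size': flavor['ephemeral_gb']})
          return extra

      # Flavor with 1024MB swap and 20GB ephemeral, no user-supplied mappings.
      print(flavor_default_mappings({'swap': 1024, 'ephemeral_gb': 20}, []))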

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240373] Re: VMware: Sparse glance vmdk's size property is mistaken for capacity

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240373

Title:
  VMware: Sparse glance vmdk's size property is mistaken for capacity

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in VMwareAPI-Team:
  Confirmed

Bug description:
  Scenario:

  a sparse vmdk whose file size is 800MB and whose capacity is 4GB is uploaded 
to glance without specifying the size property.
  (glance uses the file's size for the size property in this case)

  nova boot said image with flavor tiny (root disk size of 1GB).

  Result:
  The vmwareapi driver fails to spawn the VM because the ESX server throws a
  fault when asked to 'grow' the disk from 4GB to 1GB (the driver thinks it is
  an attempt to grow from 800MB to 1GB)

  Relevant hostd.log on ESX host:
  2013-10-15T17:02:24.509Z [35BDDB90 verbose 'Default'
  opID=HB-host-22@3170-d82e35d0-80] ha-license-manager:Validate -> Valid
  evaluation detected for "VMware ESX Server 5.0" (lastError=2,
  desc.IsValid:Yes)
  2013-10-15T17:02:25.129Z [FFBE3D20 info 'Vimsvc.TaskManager'
  opID=a3057d82-8e] Task Created :
  haTask--vim.VirtualDiskManager.extendVirtualDisk-526626761


  2013-10-15T17:02:25.158Z [35740B90 warning 'Vdisksvc' opID=a3057d82-8e]
  New capacity (2097152) is not greater than original capacity (8388608).

  I am still not exactly sure if this is considered user error on glance
  import, a glance shortcoming of not introspecting the vmdk, or a bug
  in the compute driver. Regardless, this bug is to track any potential
  defensive code we can add to the driver to better handle this
  scenario.
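
  A sketch of the kind of defensive check being discussed (names are
  illustrative, not the vmwareapi driver's real API): compare the flavor's
  root disk with the vmdk's actual capacity rather than with glance's "size",
  which for a sparse vmdk is only the file size on disk.

      GiB = 1024 ** 3

      def check_root_disk_fits(flavor_root_gb, vmdk_capacity_bytes):
          # Refuse to spawn instead of asking ESX to "extend" a disk to a
          # smaller capacity, which is what produces the fault above.
          if vmdk_capacity_bytes > flavor_root_gb * GiB:
              raise ValueError("image capacity %d bytes does not fit in a "
                               "%d GB root disk"
                               % (vmdk_capacity_bytes, flavor_root_gb))

      # An 800MB file size would pass a naive size check, but the real 4GB
      # capacity must not be shrunk onto a 1GB root disk.
      check_root_disk_fits(1, 4 * GiB)   # raises ValueError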

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252341] Re: Horizon crashes when removing logged user from project

2016-05-10 Thread Dave Walker
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1252341

Title:
  Horizon crashes when removing logged user from project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  Horizon crashes when removing the logged-in user from any project.

  1 - Log in to Horizon with a user that has the admin role.
  2 - In the projects panel, modify the project members of any project and add
  the user that you are logged in to Horizon with. Save the modification.
  3 - Without logging out, in the projects panel, edit the project that you
  have just modified and remove that same user from the project.
  4 - When the modification is saved, Horizon shows Unauthorized errors when 
trying to retrieve the user/project/image/... list.
  5 - If you log out and log in again with the same user everything works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1252341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558658] Re: Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP requests

2016-05-10 Thread Dave Walker
** Changed in: neutron/kilo
Milestone: None => 2015.1.4

** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558658

Title:
  Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP
  requests

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Triaged

Bug description:
  The IptablesFirewallDriver does not prevent spoofing other instances'
  or a router's MAC and/or IP addresses. The rule to permit DHCP
  discovery and request messages:

      ipv4_rules += [comment_rule('-p udp -m udp --sport 68 --dport 67 '
                                  '-j RETURN', comment=ic.DHCP_CLIENT)]

  is too permissive: it does not enforce the source MAC or IP address.
  This is the IPv4 case of public bug
  https://bugs.launchpad.net/neutron/+bug/1502933, and a solution was
  previously mentioned in June 2013 in
  https://bugs.launchpad.net/neutron/+bug/1427054.

  If L2population is not used, an instance can spoof the Neutron
  router's MAC address and cause the switches to learn a MAC move,
  allowing the instance to intercept other instances' traffic, potentially
  belonging to other tenants if this is a shared network.

  The solution is to permit this DHCP traffic only from the instance's
  IP addresses and the unspecified IPv4 address 0.0.0.0/32 rather than
  from any IPv4 source; additionally, the source MAC address should be
  restricted to the MAC addresses assigned to the instance's Neutron
  port.
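
  A sketch of what such a rule set could look like (illustrative only, not
  the exact upstream patch):

      def dhcp_client_rules(port_mac, fixed_ips):
          # Allow DHCP discovery/request only from the port's own MAC and
          # from its fixed IPs or the unspecified address, instead of from
          # any source as the current rule does.
          rules = []
          for source_ip in list(fixed_ips) + ['0.0.0.0/32']:
              rules.append('-s %s -m mac --mac-source %s '
                           '-p udp -m udp --sport 68 --dport 67 -j RETURN'
                           % (source_ip, port_mac))
          return rules

      for rule in dhcp_client_rules('fa:16:3e:32:51:c3', ['10.0.0.5/32']):
          print(rule)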

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290975] Re: cells AttributeError with compute api methods using new object access style

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290975

Title:
  cells AttributeError with compute api methods using new object access
  style

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The nova-cells service looks up instances locally before passing them
  to the local compute api, and only converts them to objects if the
  compute api method is explicitly listed in run_compute_api_method.
  There is in fact a FIXME around this process, but it appears not to
  have been addressed yet :)

  2014-03-10 17:27:59.881 30193 ERROR nova.cells.messaging 
[req-3e27c8c0-6b3c-482d-bb9b-d638933ec949 10226892 5915610] Error processing 
message locally: 'dict' object has no attribute 'metadata'
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging Traceback (most 
recent call last):
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging   File 
"/opt/rackstack/615.0/nova/lib/python2.6/site-packages/nova/cells/messaging.py",
 line 211, in _process_locally
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging   File 
"/opt/rackstack/615.0/nova/lib/python2.6/site-packages/nova/cells/messaging.py",
 line 1290, in _process_message_locally
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging return 
fn(message, **message.method_kwargs)
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging   File 
"/opt/rackstack/615.0/nova/lib/python2.6/site-packages/nova/cells/messaging.py",
 line 706, in run_compute_api_method
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging return 
fn(message.ctxt, *args, **method_info['method_kwargs'])
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging   File 
"/opt/rackstack/615.0/nova/lib/python2.6/site-packages/nova/compute/api.py", 
line 199, in wrapped
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging return 
func(self, context, target, *args, **kwargs)
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging   File 
"/opt/rackstack/615.0/nova/lib/python2.6/site-packages/nova/compute/api.py", 
line 189, in inner
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging return 
function(self, context, instance, *args, **kwargs)
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging   File 
"/opt/rackstack/615.0/nova/lib/python2.6/site-packages/nova/compute/api.py", 
line 170, in inner
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging return f(self, 
context, instance, *args, **kw)
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging   File 
"/opt/rackstack/615.0/nova/lib/python2.6/site-packages/nova/compute/api.py", 
line 2988, in update_instance_metadata
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging orig = 
dict(instance.metadata)
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging AttributeError: 
'dict' object has no attribute 'metadata'
  2014-03-10 17:27:59.881 30193 TRACE nova.cells.messaging
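
  A minimal illustration of the conversion the FIXME is about (not nova's
  actual object-loading code): the method is handed a plain dict, so
  attribute access fails; converting it to an object first is what the
  explicit list in run_compute_api_method achieves for the listed methods.

      instance = {'uuid': 'abc', 'metadata': {'foo': 'bar'}}

      try:
          instance.metadata          # what update_instance_metadata() does
      except AttributeError as exc:
          print(exc)                 # 'dict' object has no attribute 'metadata'

      # Hypothetical stand-in for converting the DB dict into an Instance
      # object before calling the compute api.
      class Instance(object):
          def __init__(self, **fields):
              self.__dict__.update(fields)

      print(Instance(**instance).metadata)   # {'foo': 'bar'}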

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1183436] Re: list_virtual_interfaces fails with Quantum

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1183436

Title:
  list_virtual_interfaces fails with Quantum

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  To reproduce: nosetests
  tempest.api.compute.servers.test_virtual_interfaces.py:VirtualInterfacesTestXML.test_list_virtual_interfaces
  in a devstack+quantum environment

  The test triggers a 500 error code
  "ComputeFault: Got compute fault
  Details: The server has either erred or is incapable of performing the 
requested operation."
  where it expects a 200.

  The test makes the following request : GET 
http://XX/v2/$Tenant/servers/$server_id/os-virtual-interfaces
  This HTTP request calls the Quantum API (nova/nova/network/quantumv2/api.py),
  specifically the get_vifs_by_* methods, which are not implemented (they raise
  NotImplementedError())

  I suggest skipping this test if Quantum is enabled in the Tempest
  configuration.
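
  A sketch of such a skip, in tempest's usual style (the base class and the
  exact config flag are assumptions for illustration):

      import testtools

      from tempest import config
      from tempest import test

      CONF = config.CONF

      class VirtualInterfacesTest(test.BaseTestCase):

          @testtools.skipIf(CONF.service_available.neutron,
                            "Quantum/Neutron does not implement "
                            "get_vifs_by_instance, so os-virtual-interfaces "
                            "returns a 500.")
          def test_list_virtual_interfaces(self):
              pass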

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1183436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447342] Re: libvirtError: XML error: Missing CPU model name lead to compute service fail to start

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447342

Title:
  libvirtError: XML error: Missing CPU model name lead to compute
  service fail to start

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed

Bug description:
  Got the following error, and the compute service failed to start.
  I am not sure whether we should disallow the compute service from starting
  on 'libvirtError: XML error: Missing CPU model name' or not.

  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 497, in run_service
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
service.start()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/service.py", line 164, in start
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/manager.py", line 1258, in init_host
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
self.driver.init_host(host=self.host)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 529, in init_host
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
self._do_quality_warnings()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 507, in _do_quality_warnings
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup caps = 
self._host.get_capabilities()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/virt/libvirt/host.py", line 753, in get_capabilities
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup result = 
proxy_call(self._autowrap, f, *args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup rv = 
execute(f, *args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
six.reraise(c, e, tb)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup rv = 
meth(*args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 3153, in baselineCPU
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup if ret is 
None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup libvirtError: 
XML error: Missing CPU model name
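
  If the choice is to let the service come up anyway, a sketch of the
  defensive option could look like the following (illustrative only, not a
  merged fix). The traceback shows the capabilities are read from
  _do_quality_warnings(), which only needs them for a log message, so the
  libvirt error could be downgraded rather than aborting init_host:

      import logging

      LOG = logging.getLogger(__name__)

      def do_quality_warnings(host):
          try:
              caps = host.get_capabilities()
          except Exception as exc:   # libvirt.libvirtError in the real driver
              LOG.warning("Unable to read libvirt host capabilities: %s; "
                          "skipping the quality warning check", exc)
              return
          # ... continue with the architecture check using caps ...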

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462484] Re: Port Details VNIC type value is not translatable

2016-05-10 Thread Dave Walker
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462484

Title:
  Port Details VNIC type value is not translatable

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  On port details, the Binding/VNIC type value is not translatable. To
  recreate the problem:
  - create a pseudo translation:

  ./run_tests.sh --makemessages
  ./run_tests.sh --pseudo de
  ./run_tests.sh --compilemessages

  start the dev server, login and change to German/Deutsch (de)

  Navigate to
  Project->Network->Networks->[Detail]->[Port Detail]

  Notice at the bottom of the panel that the VNIC type is not translated.

  The 3 VNIC types should be translated when displayed in Horizon
  
https://github.com/openstack/neutron/blob/master/neutron/extensions/portbindings.py#L73
  but neutron will expect these to be provided in English on API calls.

  Note that the mapping is already correct on Edit Port - the
  translations just need to be applied on the details panel.
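
  A sketch of a display-only mapping along those lines (illustrative, not the
  merged horizon change): keep the English values for neutron API calls and
  translate only when rendering the details panel.

      from django.utils.translation import ugettext_lazy as _

      VNIC_TYPE_DISPLAY = {
          'normal': _('Normal'),
          'direct': _('Direct'),
          'macvtap': _('MacVTap'),
      }

      def vnic_type_label(value):
          # Fall back to the raw value for anything neutron adds later.
          return VNIC_TYPE_DISPLAY.get(value, value)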

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403034] Re: Horizon should accept an IPv6 address as a VIP Address for LB Pool

2016-05-10 Thread Dave Walker
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1403034

Title:
  Horizon should accept an IPv6 address as a VIP Address for LB Pool

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Description of problem:
  ===
  Horizon should accept an IPv6 address as a VIP Address for LB Pool.

  Version-Release number of selected component (if applicable):
  =
  RHEL-OSP6: 2014-12-12.1
  python-django-horizon-2014.2.1-2.el7ost.noarch

  How reproducible:
  =
  Always

  Steps to Reproduce:
  ===
  0. Have an IPv6 subnet.

  1. Browse to: http:///dashboard/project/loadbalancers/

  2. Create a Load Balancing Pool.

  3. Add VIP as follows:
 - Name: test
 - VIP Subnet: Select your IPv6 subnet
 - Specify a free IP address from the selected subnet: IPv6 address such 
as: 2001:65:65:65::a
 - Protocol Port: 80
 - Protocol: HTTP
 - Admin State: UP

  Actual results:
  ===
  Error: Invalid version for IP address

  Expected results:
  =
  Should work OK, this is supported in the python-neutronclient.
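
  A sketch of the field change this implies (illustrative only, not the
  merged horizon patch): the VIP address field should accept both IP
  versions.

      from horizon import forms

      address = forms.IPField(
          label="Specify a free IP address from the selected subnet",
          required=False,
          initial="",
          # Accept both versions instead of IPv4 only.
          version=forms.IPv4 | forms.IPv6,
          mask=False)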

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1403034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407105] Re: Password Change Doesn't Affirmatively Invalidate Sessions

2016-05-10 Thread Dave Walker
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1407105

Title:
  Password Change Doesn't Affirmatively Invalidate Sessions

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) icehouse series:
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to
  the bug as attachments.

  The password change dialog at /horizon/settings/password/ contains the
  following code:

  
  if user_is_editable:
      try:
          api.keystone.user_update_own_password(request,
                                                data['current_password'],
                                                data['new_password'])
          response = http.HttpResponseRedirect(settings.LOGOUT_URL)
          msg = _("Password changed. Please log in again to continue.")
          utils.add_logout_reason(request, response, msg)
          return response
      except Exception:
          exceptions.handle(request,
                            _('Unable to change password.'))
          return False
  else:
      messages.error(request, _('Changing password is not supported.'))
      return False
  

  There are at least two security concerns here:
  1) Logout is done by means of an HTTP redirect.  Let's say Eve, as MitM, gets 
ahold of Alice's token somehow.  Alice is worried this may have happened, so 
she changes her password.  If Eve suspects that the request is a 
password-change request (which is the most Eve can do, because we're running 
over HTTPS, right?  Right!?), then it's a simple matter to block the redirect 
from ever reaching the client, or the redirect request from hitting the server. 
 From Alice's PoV, something weird happened, but her new password works, so 
she's not bothered.  Meanwhile, Alice's old login ticket continues to work.
  2) Part of the purpose of changing a password is generally to block those who 
might already have the password from continuing to use it.  A password change 
should trigger (insofar as is possible) a purging of all active logins/tokens 
for that user.  That does not happen here.

  Frankly, I'm not the least bit sure if I've thought of the worst-case
  scenario(s) for point #1.  It just strikes me as very strange not to
  aggressively/proactively kill the ticket/token(s), instead relying on
  the client to do so.  Feel free to apply minds smarter and more
  devious than my own!
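
  A sketch of the server-side behaviour point 2 asks for (illustrative, not
  horizon's actual fix): drop the server-side session and revoke the user's
  tokens instead of relying on the client following the logout redirect.

      from django.contrib.auth import logout

      def finish_password_change(request, response):
          # Flush the Django session so the old session cookie is useless
          # even if the redirect never reaches the client.
          logout(request)
          # Hypothetical helper: a real fix would also ask keystone to
          # revoke all existing tokens for this user.
          # revoke_all_tokens_for(request.user.id)
          return response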

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1407105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433843] Re: live-migration failed but give wrong refer url

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433843

Title:
  live-migration failed but give wrong refer url

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  taget@liyong:~/devstack$ nova live-migration test1 liyong2
  ERROR (BadRequest): Migration pre-check error: CPU doesn't have compatibility.

  Requested operation is not valid: no CPU model specified

  Refer to http://libvirt.org/html/libvirt-
  libvirt.html#virCPUCompareResult (HTTP 400) (Request-ID: req-a022c4bc-
  06c6-40a6-ba2e-ab6f22f51229)

  The referenced link is not correct.
  It should be
  http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1433843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525915] Re: [OSSA 2016-006] Normal user can change image status if show_multiple_locations has been set to true (CVE-2016-0757)

2016-05-10 Thread Dave Walker
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1525915

Title:
  [OSSA 2016-006] Normal user can change image status if
  show_multiple_locations has been set to true (CVE-2016-0757)

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in Glance liberty series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  A user (non-admin) can set an image back to the queued state by deleting
  location(s) from the image when the "show_multiple_locations" config
  parameter has been set to true.

  This breaks the immutability promise glance makes, in a similar way to
  OSSA 2015-019, as the image gets transitioned from active to queued and
  new image data can be uploaded.

  ubuntu@devstack-02:~/devstack$ glance image-show f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc
  +------------------+--------------------------------------------------------------------------+
  | Property         | Value                                                                    |
  +------------------+--------------------------------------------------------------------------+
  | checksum         | eb9139e4942121f22bbc2afc0400b2a4                                         |
  | container_format | ami                                                                      |
  | created_at       | 2015-12-14T09:58:54Z                                                     |
  | disk_format      | ami                                                                      |
  | id               | f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc                                     |
  | locations        | [{"url": "file:///opt/stack/data/glance/images/f4bb4c9e-71ba-4a8c-b70a-  |
  |                  | 640dbe37b3bc", "metadata": {}}]                                          |
  | min_disk         | 0                                                                        |
  | min_ram          | 0                                                                        |
  | name             | cirros-test                                                              |
  | owner            | ab69274aa31a4fba8bf559af2b0b98bd                                         |
  | protected        | False                                                                    |
  | size             | 25165824                                                                 |
  | status           | active                                                                   |
  | tags             | []                                                                       |
  | updated_at       | 2015-12-14T09:58:54Z                                                     |
  | virtual_size     | None                                                                     |
  | visibility       | private                                                                  |
  +------------------+--------------------------------------------------------------------------+
  ubuntu@devstack-02:~/devstack$ glance location-delete --url 
file:///opt/stack/data/glance/images/f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc 
f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc

  ubuntu@devstack-02:~/devstack$ glance image-show 
f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | eb9139e4942121f22bbc2afc0400b2a4 |
  | container_format | ami  |
  | created_at   | 2015-12-14T09:58:54Z |
  | disk_format  | ami  |
  | id   | f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc |
  | locations| []   |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cirros-test  |
  | owner| ab69274aa31a4fba8bf559af2b0b98bd |
  | protected| False|
  | size | None |
  | status   | queued   |
  | tags | []   |
  | updated_at   | 2015-12-14T13:43:23Z |
  | virtual_size | None |
  | visibility   | private  |
  +--+--+
  ubuntu@devstack-02:~/devstack$ glance image-upload --file 
files/images/c

[Yahoo-eng-team] [Bug 1457527] Re: Image-cache deleting active swap backing images

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457527

Title:
  Image-cache deleting active swap backing images

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Version: 1:2015.1.0-0ubuntu1~cloud0 (Kilo)

  I am having issues with backing images of disk.swap being deleted from
  the image cache.

  Here is part of the nova-compute log; although multiple instances have
  disk.swap with swap_256 as the base image, the image cache repeatedly
  tries to delete it:

  2015-05-14 08:08:15.080 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 08:48:46.209 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 09:29:00.814 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 10:09:14.351 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Removing base or swap 
file: /var/lib/nova/instances/_base/swap_256
  2015-05-14 16:14:21.340 6479 INFO nova.virt.libvirt.imagecache 
[req-1d61428f-0b8b-4bae-9293-42ac99dc3f58 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 16:55:21.195 6479 INFO nova.virt.libvirt.imagecache 
[req-1d61428f-0b8b-4bae-9293-42ac99dc3f58 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 17:36:12.260 6479 INFO nova.virt.libvirt.imagecache 
[req-1d61428f-0b8b-4bae-9293-42ac99dc3f58 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256

  I am NOT using shared storage for the instances directory, it is
  sitting on the local filesystem, and instances on the same host node
  are using the swap base file that the image cache is deleting.

  As far as I can tell, it is attempting to delete it immediately after
  the swap image is created. Swap backing images of the form
  swap_512_512 are not deleted though (as opposed to just swap_512; I
  couldn't figure out what the difference is).

  Reproduce: Create volume with swap disk
  Temporary solution: image_cache_manager_interval = -1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496235] Re: Boot from volume faild with availability_zone option, in case of cinder do not have availability_zone

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496235

Title:
  Boot from volume faild with availability_zone option, in case of
  cinder do not have availability_zone

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  In my environment, Nova has two availability zones and Cinder has no
  availability zone.
  When I run the two commands "cinder create" and "nova boot" separately,
  they finish normally.
  But when run as a single operation, an error occurs as below (it is the
  same on the dashboard).

  
  $ nova boot --flavor m1.small --block-device 
source=image,id=4014a3f7-507b-4692-86c8-8224bbcc7102,dest=volume,size=10,shutdown=delete,bootindex=0
 --nic net-id=4d9e9847-80b5-46ca-8439-344930a59825 --availability_zone az2 test1

  $ nova list
  +--------------------------------------+-------+--------+----------------------+-------------+----------+
  | ID                                   | Name  | Status | Task State           | Power State | Networks |
  +--------------------------------------+-------+--------+----------------------+-------------+----------+
  | a0dc0f03-155c-422b-b0ac-996fc17e0989 | test1 | ERROR  | block_device_mapping | NOSTATE     |          |
  +--------------------------------------+-------+--------+----------------------+-------------+----------+

  $ nova show
  +--------------------------------------+----------------------+
  | Property                             | Value                |
  +--------------------------------------+----------------------+
  | OS-DCF:diskConfig                    | MANUAL               |
  | OS-EXT-AZ:availability_zone          | az1                  |
  | OS-EXT-SRV-ATTR:host                 | compute011-az1       |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | compute011-az1.maas  |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0514        |
  | OS-EXT-STS:power_state               | 0                    |
  | OS-EXT-STS:task_state                | block_device_mapping |
  | OS-EXT-STS:vm_state                  | error                |
  | OS-SRV-USG:launched_at               | -

[Yahoo-eng-team] [Bug 1471916] Re: race with neutron: can't create bridge with the same name

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471916

Title:
  race with neutron: can't create bridge with the same name

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  If a node running nova-compute is using neutron with the linuxbridge
  agent, and the node is also running other neutron agents like the
  l3-agent or dhcp-agent (i.e. the node is acting as both a compute node
  and a network node), then races can happen when both nova-compute and
  the linuxbridge agent try to add the same linux bridge at the same
  time. This can cause instances to fail to start.
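
  A sketch of the kind of race tolerance this implies (illustrative only, not
  the merged change): treat "already exists" from brctl as success, since the
  linuxbridge agent may have won the race.

      import subprocess

      def ensure_bridge(bridge):
          proc = subprocess.Popen(['brctl', 'addbr', bridge],
                                  stderr=subprocess.PIPE)
          _, err = proc.communicate()
          if proc.returncode != 0 and b'already exists' not in err:
              # Losing the race to the linuxbridge agent is fine; the match
              # string follows the brctl message quoted in the bug title.
              # Anything else is a real failure.
              raise RuntimeError(err.decode() or 'brctl addbr failed')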

  http://logs.openstack.org/12/138512/10/experimental/check-tempest-
  dsvm-neutron-linuxbridge/e737f60/

  http://logs.openstack.org/12/138512/10/experimental/check-tempest-
  dsvm-neutron-
  linuxbridge/e737f60/logs/screen-n-cpu.txt.gz#_2015-07-02_03_09_09_131

  The neutron l3-agent added a tap device:

  2015-07-02 03:09:06.614 DEBUG neutron.agent.linux.utils 
[req-42397477-9e58-4c7a-aa0b-8e54269f9d34 None None] Running command (rootwrap 
daemon): ['ip', 'link', 'add', 'tapbb2831fd-86', 'type', 'veth', 'peer', 
'name', 'qr-bb2831fd-86', 'netns', 
'qrouter-048a7c31-08a1-4693-91a8-13e3221c0982'] execute_rootwrap_daemon 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:101
  Command: ['ip', 'link', 'add', u'tapbb2831fd-86', 'type', 'veth', 'peer', 
'name', u'qr-bb2831fd-86', 'netns', 
u'qrouter-048a7c31-08a1-4693-91a8-13e3221c0982']

  
  The linuxbridge agent detects this device and adds the bridge for its
  network:

  2015-07-02 03:09:09.119 DEBUG neutron.agent.linux.utils 
[req-d4f88282-233b-437e-8157-5d134538d206 None None] Running command (rootwrap 
daemon): ['brctl', 'addbr', 'brq94158651-73'] execute_rootwrap_daemon 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:101
  2015-07-02 03:09:09.122 DEBUG neutron.agent.linux.utils 
[req-d4f88282-233b-437e-8157-5d134538d206 None None] 
  Command: ['brctl', 'addbr', u'brq94158651-73']
  Exit code: 0

  
  About the same time, nova-compute also tried to add the same bridge for a VIF 
it was about to create on the same network, but lost the race and so the 
instance went into an error state:

  2015-07-02 03:09:09.130 DEBUG oslo_concurrency.processutils 
[req-8391bcc1-bc92-48c7-a635-7e3b2f1af76b 
tempest-ServerRescueNegativeTestJSON-1230044074 
  tempest-ServerRescueNegativeTestJSON-1737504944] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf brctl addbr brq94158651-73" returned: 1 in 0.094s exe
  cute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:247
  2015-07-02 03:09:09.131 DEBUG oslo_concurrency.processutils 
[req-8391bcc1-bc92-48c7-a635-7e3b2f1af76b 
tempest-ServerRescueNegativeTestJSON-1230044074 
  tempest-ServerRescueNegativeTestJSON-1737504944] u'sudo nova-rootwrap 
/etc/nova/rootwrap.conf brctl addbr brq94158651-73' failed. Not Retrying. execut
  e /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:295
  2015-07-02 03:09:09.131 DEBUG oslo_concurrency.lockutils 
[req-8391bcc1-bc92-48c7-a635-7e3b2f1af76b 
tempest-ServerRescueNegativeTestJSON-1230044074 tem
  pest-ServerRescueNegativeTestJSON-1737504944] Lock "lock_bridge" released by 
"ensure_bridge" :: held 0.096s inner /usr/local/lib/python2.7/dist-packag
  es/oslo_concurrency/lockutils.py:262
  2015-07-02 03:09:09.131 ERROR nova.compute.manager 
[req-8391bcc1-bc92-48c7-a635-7e3b2f1af76b 
tempest-ServerRescueNegativeTestJSON-1230044074 tempest-S
  erverRescueNegativeTestJSON-1737504944] [instance: 
018793d0-c890-43c7-91ad-5694931cd98c] Instance failed to spawn
  2015-07-02 03:09:09.131 5163 ERROR nova.compute.manager [instance: 
018793d0-c890-43c7-91ad-5694931cd98c] Traceback (most recent call last):
  2015-07-02 03:09:09.131 5163 ERROR nova.compute.manager [instance: 
018793d0-c890-43c7-91ad-5694931cd98c]   File 
"/opt/stack/new/nova/nova/compute/mana
  ger.py", line 2113, in _build_resources
  2015-07-02 03:09:09.131 5163 ERROR nova.compute.manager [instance: 
018793d0-c890-43c7-91ad-5694931cd98c] yield resources
  2015-07-02 03:09:09.131 5163 ERROR nova.compute.manager [instance: 
018793d0-c890-43c7-91ad-5694931cd98c]   File 
"/opt/stack/new/nova/nova/compute/mana
  ger.py", line 1985, in _build_and_run_instance
  2015-07-02 03:09:09.131 5163 ERROR nova.compute.manager [instance: 
018793d0-c890-43c7-91ad-5694931cd98c] block_device_info=block_device_info)
  2015-07-02 03:09:09.131 5163 ERROR nova.compute.manager [instance: 
018793d0-c890-43c7-91ad-5694931cd98c]   File 
"/opt/stack/new/nova/nova/virt/libvirt
  /driver.py", line 2435, in spawn
  2015-07-02 03:09:09.131 5163 ERROR nova.compute.manager [instance: 
018793d0-c890-43c7-91ad-5694931cd98c] block_device_info=block_device_info)
  2015

[Yahoo-eng-team] [Bug 1532809] Re: Gate failures when DHCP lease cannot be acquired

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532809

Title:
  Gate failures when DHCP lease cannot be acquired

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  Example from:
  
http://logs.openstack.org/97/265697/1/check/gate-grenade-dsvm/6eeced7/console.html#_2016-01-11_07_42_30_838

  Logstash query:
  message:"No lease, failing" AND voting:1

  dhcp_release for an ip/mac does not seem to reach dnsmasq (or it fails
  to act on it - "unknown lease") as I don't see entries in syslog for
  it.

  Logs from nova network:
  dims@dims-mac:~/junk/6eeced7$ grep dhcp_release old/screen-n-net.txt.gz | 
grep 10.1.0.3 | grep CMD
  2016-01-11 07:25:35.548 DEBUG oslo_concurrency.processutils 
[req-62aaa0b9-093e-4f28-805d-d4bf3008bfe6 tempest-ServersTestJSON-1206086292 
tempest-ServersTestJSON-1551541405] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:32:51:c3" 
returned: 0 in 0.117s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
  2016-01-11 07:25:51.259 DEBUG oslo_concurrency.processutils 
[req-31115ffa-8d43-4621-bb2e-351d6cd4bcef 
tempest-ServerActionsTestJSON-357128318 
tempest-ServerActionsTestJSON-854742699] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:a4:f0:11" 
returned: 0 in 0.108s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
  2016-01-11 07:26:35.357 DEBUG oslo_concurrency.processutils 
[req-c32a216e-d909-41a0-a0bc-d5eb7a21c048 
tempest-TestVolumeBootPattern-46217374 
tempest-TestVolumeBootPattern-1056816637] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:ed:de:f6" 
returned: 0 in 0.110s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297

  Logs from syslog:
  dims@dims-mac:~/junk$ grep 10.1.0.3 syslog.txt.gz
  Jan 11 07:25:35 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:32:51:c3 unknown lease
  Jan 11 07:25:51 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:a4:f0:11 unknown lease
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPOFFER(br100) 10.1.0.3 fa:16:3e:ed:de:f6
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPREQUEST(br100) 10.1.0.3 fa:16:3e:ed:de:f6
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPACK(br100) 10.1.0.3 fa:16:3e:ed:de:f6 tempest
  Jan 11 07:27:34 devstack-trusty-rax-iad-7090830 object-auditor: Object audit 
(ALL). Since Mon Jan 11 07:27:34 2016: Locally: 1 passed, 0 quarantined, 0 
errors files/sec: 2.03 , bytes/sec: 10119063.16, Total time: 0.49, Auditing 
time: 0.00, Rate: 0.00
  Jan 11 07:39:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:40:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:41:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:42:26 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:fe:1f:36 unknown lease

  Net: the test that runs ssh against the VM fails with "No lease,
  failing" in its serial console.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543766] Re: nova.tests.unit.test_wsgi.TestWSGIServerWithSSL fails with testtools.matchers._impl.MismatchError: 'OK\r\n' != 'PONG' and eventlet 0.18.2

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543766

Title:
  nova.tests.unit.test_wsgi.TestWSGIServerWithSSL fails with
  testtools.matchers._impl.MismatchError: 'OK\r\n' != 'PONG' and
  eventlet 0.18.2

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed

Bug description:
  http://logs.openstack.org/04/273104/5/gate/gate-nova-
  python27/0a68bad/console.html#_2016-02-09_18_39_51_792

  2016-02-09 18:39:51.792 | 
nova.tests.unit.test_wsgi.TestWSGIServerWithSSL.test_two_servers
  2016-02-09 18:39:51.792 | 

  2016-02-09 18:39:51.792 | 
  2016-02-09 18:39:51.792 | Captured traceback:
  2016-02-09 18:39:51.792 | ~~~
  2016-02-09 18:39:51.793 | Traceback (most recent call last):
  2016-02-09 18:39:51.793 |   File "nova/tests/unit/test_wsgi.py", line 
282, in test_two_servers
  2016-02-09 18:39:51.793 | self.assertEqual(response[-4:], "PONG")
  2016-02-09 18:39:51.793 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 362, in assertEqual
  2016-02-09 18:39:51.793 | self.assertThat(observed, matcher, message)
  2016-02-09 18:39:51.793 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 447, in assertThat
  2016-02-09 18:39:51.793 | raise mismatch_error
  2016-02-09 18:39:51.793 | testtools.matchers._impl.MismatchError: 
'OK\r\n' != 'PONG'

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22testtools.matchers._impl.MismatchError%3A%20'OK%5C%5C%5C%5Cr%5C%5C%5C%5Cn'%20!%3D%20'PONG'%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536214] Re: PO files broken

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536214

Title:
  PO files broken

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed

Bug description:
  python setup.py compile_catalog fails to compile the existing
  translated files for Spanish and Turkish - both in master and liberty.

  Suggested action:
  1) Fix the strings in translation server so that next translation import gets 
strings that are valid
  2) Add lint check that checks that translations are valid.

  For 2: Add to tox.ini a check like it's done for keystone:
# Check that .po and .pot files are valid.
bash -c "find nova -type f -regex '.*\.pot?' -print0| \
 xargs -0 -n 1 msgfmt --check-format -o /dev/null"

  Change 2) will ensure that the daily translation import cannot re-import
  invalid translations. 2) should only merge once 1) is fixed and imported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1536214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548450] Re: [OSSA 2016-007] Host data leak during resize/migrate for raw-backed instances (CVE-2016-2140)

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1548450

Title:
  [OSSA 2016-007] Host data leak during resize/migrate for raw-backed
  instances (CVE-2016-2140)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  First, a caveat. This report is from code inspection only. I haven't
  attempted to replicate it, and I have no immediate plans to. It's
  possible it doesn't exist due to an interaction which isn't
  immediately obvious.

  When resizing an instance using the libvirt driver, we run
  LibvirtDriver.migrate_disk_and_power_off on the source host. If there
  is no shared storage, data is copied. Specifically, there's a loop in
  that function which loops over disk info:

  for info in disk_info:
  # assume inst_base == dirname(info['path'])
  ...
  copy the disk

  Note that this doesn't copy disk.info, because it's not a disk. I have
  actually confirmed this whilst investigating another bug.

  The problem with this is that disk.info contains file format
  information, which means that when the instance starts up again, the
  format of all its disks are re-inspected. This is the bug. It means
  that a malicious user can write data to an ephemeral or root disk
  which fakes a qcow2 header, and on re-inspection it will be detected
  as qcow2 and data from a user-specified backing file will be served.
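
  A minimal sketch of the kind of mitigation this implies (illustrative only,
  not nova's actual code): honour the format already recorded in disk.info
  instead of re-inspecting the file after a migration. The helper name below
  is an assumption.

  import json
  import os

  def recorded_disk_format(instance_dir, disk_path, default='raw'):
      """Return the format recorded for disk_path in disk.info, if any."""
      info_path = os.path.join(instance_dir, 'disk.info')
      if not os.path.exists(info_path):
          return default
      with open(info_path) as f:
          formats = json.load(f)  # e.g. {"/var/lib/nova/.../disk": "raw"}
      # Never fall back to inspecting a disk the tenant can write to.
      return formats.get(disk_path, default)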

  I am moderately confident that this is a real bug.

  Unlike the previous file format bug I reported, though, this bug would
  be mitigated by the fact that the user would have to access the disk
  via libvirt/qemu. Assuming they haven't disabled SELinux (nobody does
  that, right?) this severely limits the data which can be accessed,
  possibly to the point that it isn't worth exploiting. I also believe
  it would only be exploitable on deployments using raw storage, which I
  believe isn't common.

  Given that I don't think it's all that serious in practise, I'm not
  going to work on this immediately as I don't have the time. If it's
  still around when I'm less busy I'll pick it up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1548450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557938] Re: [doc]support matrix of vmware for chap is wrong

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557938

Title:
  [doc]support matrix of vmware for chap is wrong

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  In the support matrix, it says that the VMware driver supports CHAP
  authentication over iSCSI.
  In fact, the VMware driver doesn't pass the authentication info to the
  vSphere API, so the feature doesn't work.

  
  Code: 
  def _iscsi_add_send_target_host(self, storage_system_mor, hba_device,
  target_portal):
  """Adds the iscsi host to send target host list."""
  client_factory = self._session.vim.client.factory
  send_tgt = client_factory.create('ns0:HostInternetScsiHbaSendTarget')
  (send_tgt.address, send_tgt.port) = target_portal.split(':')
  LOG.debug("Adding iSCSI host %s to send targets", send_tgt.address)
  self._session._call_method(
  self._session.vim, "AddInternetScsiSendTargets",
  storage_system_mor, iScsiHbaDevice=hba_device, targets=[send_tgt])

  Doc:
  
http://docs.openstack.org/developer/nova/support-matrix.html#storage_block_backend_iscsi_auth_chap_vmware
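
  For comparison, a sketch of what passing the CHAP credentials to the call
  above might look like (the authenticationProperties field and its attribute
  names are assumptions based on the vSphere SDK, not code that exists in the
  driver today):

  auth = client_factory.create(
      'ns0:HostInternetScsiHbaAuthenticationProperties')
  auth.chapAuthEnabled = True
  auth.chapName = auth_username      # from the Cinder connection info
  auth.chapSecret = auth_password
  send_tgt.authenticationProperties = auth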

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1557938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558697] Re: [kilo] libvirt block migrations fail due to disk_info being an encoded JSON string

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558697

Title:
  [kilo] libvirt block migrations fail due to disk_info being an encoded
  JSON string

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  The fix for OSSA 2016-007 / CVE-2016-2140 in f302bf04 assumed that
  disk_info is always a plain, decoded list. However prior to Liberty
  when preforming a live block migration the compute manager populates
  disk_info with an encoded JSON string when calling
  self.driver.get_instance_disk_info. In the live migration case without
  block migration disk_info remains a plain decoded list.

  More details with an example trace downstream in the following bug :

  live migration without shared storage fails in pre_live_migration after 
upgrade to 2015.1.2-18.2
  https://bugzilla.redhat.com/show_bug.cgi?id=1318722

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555287] Re: Libvirt driver broken for non-disk-image backends

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1555287

Title:
  Libvirt driver broken for non-disk-image backends

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed

Bug description:
  Recently the ceph job (and any other configuration that doesn't use
  disk image as the backend storage) started failing like this:

  
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _dispatch
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 110, in wrapped
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher payload)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 89, in wrapped
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 359, in decorated_function
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance=instance)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 328, in decorated_function
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 409, in decorated_function
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 316, in decorated_function
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
migration.instance_uuid, exc_info=True)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-09 14:47:29.102 17597 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 293, in decorated_function
  2016-03-09 14:47:29.102 17597 ERROR o

[Yahoo-eng-team] [Bug 1551900] Re: test_virt_drivers.LibvirtConnTestCase fails if /etc/machine-id is empty

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551900

Title:
  test_virt_drivers.LibvirtConnTestCase fails if /etc/machine-id is
  empty

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed

Bug description:
  This is related to bug 1475353 but in this case unit tests fail if the
  file exists but is empty. Either way we shouldn't be trying to read
  from a file on the system in unit tests, so we should set the
  CONF.libvirt.sysinfo_serial config option value to 'none' in the
  tests.
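
  A minimal sketch of that test-side fix (assuming nova's standard
  self.flags() test helper; placement in setUp is illustrative):

  class LibvirtConnTestCase(test.TestCase):
      def setUp(self):
          super(LibvirtConnTestCase, self).setUp()
          # Never read /etc/machine-id in unit tests; 'none' makes the
          # driver skip the host serial lookup entirely.
          self.flags(sysinfo_serial='none', group='libvirt')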

  Example failure (this is from Juno):

  nova.tests.virt.test_virt_drivers.LibvirtConnTestCase.test_block_stats
  --

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/virt/test_virt_drivers.py", line 54, in wrapped_func
  return f(self, *args, **kwargs)
File "nova/tests/virt/test_virt_drivers.py", line 521, in 
test_block_stats
  instance_ref, network_info = self._get_running_instance()
File "nova/tests/virt/test_virt_drivers.py", line 228, in 
_get_running_instance
  [], 'herp', network_info=network_info)
File "nova/virt/libvirt/driver.py", line 2661, in spawn
  write_to_disk=True)
File "nova/virt/libvirt/driver.py", line 4182, in _get_guest_xml
  context)
File "nova/virt/libvirt/driver.py", line 3863, in _get_guest_config
  guest.sysinfo = self._get_guest_config_sysinfo(instance)
File "nova/virt/libvirt/driver.py", line 3582, in 
_get_guest_config_sysinfo
  sysinfo.system_serial = self._sysinfo_serial_func()
File "nova/virt/libvirt/driver.py", line 3571, in 
_get_host_sysinfo_serial_auto
  return self._get_host_sysinfo_serial_os()
File "nova/virt/libvirt/driver.py", line 3567, in 
_get_host_sysinfo_serial_os
  return str(uuid.UUID(f.read().split()[0]))
  IndexError: list index out of range
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1551900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550623] Re: Functional ARPSpoofTestCase's occasionally fail

2016-05-10 Thread Dave Walker
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550623

Title:
  Functional ARPSpoofTestCase's occasionally fail

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  We occasionally get failures in the gate like the one below.
  Unfortunately they are difficult to reproduce locally.

  ft32.3: 
neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_allowed_address_pairs(native)_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  DEBUG [oslo_policy._cache_handler] Reloading cached file 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
 DEBUG [oslo_policy.policy] Reloaded policy file: 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
  }}}

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_ovs_flows.py", line 202, in 
test_arp_spoof_allowed_address_pairs
  net_helpers.assert_ping(self.src_namespace, self.dst_addr, count=2)
File "neutron/tests/common/net_helpers.py", line 102, in assert_ping
  dst_ip])
File "neutron/agent/linux/ip_lib.py", line 885, in execute
  log_fail_as_error=log_fail_as_error, **kwargs)
File "neutron/agent/linux/utils.py", line 140, in execute
  raise RuntimeError(msg)
  RuntimeError: Exit code: 1; Stdin: ; Stdout: PING 192.168.0.2 (192.168.0.2) 
56(84) bytes of data.

  --- 192.168.0.2 ping statistics ---
  2 packets transmitted, 0 received, 100% packet loss, time 1006ms

  ; Stderr:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1550623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570748] Re: Bug: resize instance after edit flavor with horizon

2016-05-10 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after edit flavor with horizon

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed
Status in nova-powervm:
  Fix Committed
Status in tempest:
  Fix Released

Bug description:
  An error occurs when resizing an instance after editing its flavor with
  Horizon (and also after deleting the flavor used by the instance).

  Reproduce step :

  1. create flavor A
  2. boot instance using flavor A
  3. edit the flavor with Horizon (or delete flavor A)
  -> editing and deleting give the same result, because editing a flavor
  means deleting and recreating it
  4. resize or migrate the instance
  5. an error occurs

  Log : 
  nova-compute.log
 File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in 
_object_dispatch
   return getattr(target, method)(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
   result = fn(cls, context, *args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in 
get_by_id
   db_flavor = db.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
   return IMPL.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in 
wrapper
   return f(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in 
flavor_get
   raise exception.FlavorNotFound(flavor_id=id)

   FlavorNotFound: Flavor 7 could not be found.

  
  This error occurs because of the code below:
  /opt/openstack/src/nova/nova/compute/manager.py

  def resize_instance(self, context, instance, image,
  reservations, migration, instance_type,
  clean_shutdown=True):
  
  if (not instance_type or
  not isinstance(instance_type, objects.Flavor)):
  instance_type = objects.Flavor.get_by_id(
  context, migration['new_instance_type_id'])
  

  I think the deleted flavor should still be used when resizing the instance.
  I tested this in stable/kilo, but I think stable/liberty and stable/mitaka
  have the same bug because the relevant code has not changed.
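
  One possible defensive fallback (a sketch only, not necessarily the fix
  that merged) would be to retry the lookup with a context that can see
  deleted flavors:

  from nova import context as nova_context
  from nova import exception
  from nova import objects

  try:
      instance_type = objects.Flavor.get_by_id(
          context, migration['new_instance_type_id'])
  except exception.FlavorNotFound:
      # The flavor was deleted (e.g. "edited" in Horizon); fall back to a
      # read_deleted context so the resize can still complete.
      admin_ctxt = nova_context.get_admin_context(read_deleted='yes')
      instance_type = objects.Flavor.get_by_id(
          admin_ctxt, migration['new_instance_type_id'])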

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1570748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525901] Re: Agents report as started before neutron recognizes as active

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525901

Title:
  Agents report as started before neutron recognizes as active

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  In HA, there is a potential race condition between the openvswitch
  agent and other agents that "own", depend on or manipulate ports. As
  the neutron server resumes on a failover it will not immediately be
  aware of openvswitch agents that have also been activated on failover
  and act as though there are no active openvswitch agents (this is an
  example, it most likely affects other L2 agents). If an agent such as
  the L3 agent starts and begins resync before the neutron server is
  aware of the active openvswitch agent, ports for the routers on that
  agent will be marked as "binding_failed". Currently this is a
  "terminal" state for the port as neutron does not attempt to rebind
  failed bindings on the same host.

  Unfortunately, the neutron agents do not provide even a best-effort
  deterministic indication to the outside service manager (systemd,
  pacemaker, etc...) that it has fully initialized and the neutron
  server should be aware that it is active. Agents should follow the
  same pattern as wsgi based services and notify systemd after it can be
  reasonably assumed that the neutron server should be aware that it is
  alive. That way service startup order logic or constraints can
  properly start an agent that is dependent on other agents *after*
  neutron should be aware that the required agents are active.
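
  A minimal sketch of what that notification could look like in an agent
  (assuming oslo.service's systemd helper and a Type=notify unit file; the
  class and attribute names are illustrative):

  from oslo_service import systemd

  class AgentStateReporter(object):
      """Notify systemd only after the first successful state report."""

      def __init__(self, state_rpc):
          self.state_rpc = state_rpc
          self._notified_ready = False

      def report_state(self, context, agent_state):
          self.state_rpc.report_state(context, agent_state)
          if not self._notified_ready:
              # The server has now seen this agent at least once, so
              # dependent agents (L3, DHCP, ...) can safely be started.
              systemd.notify_once()
              self._notified_ready = True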

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433843] Re: live-migration failed but give wrong refer url

2016-05-09 Thread Dave Walker
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433843

Title:
  live-migration failed but give wrong refer url

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  taget@liyong:~/devstack$ nova live-migration test1 liyong2
  ERROR (BadRequest): Migration pre-check error: CPU doesn't have compatibility.

  Requested operation is not valid: no CPU model specified

  Refer to http://libvirt.org/html/libvirt-
  libvirt.html#virCPUCompareResult (HTTP 400) (Request-ID: req-a022c4bc-
  06c6-40a6-ba2e-ab6f22f51229)

  the reference link is not correct.
  it should be 
http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1433843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444797] Re: ovs_lib: get_port_tag_dict race with port removal

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444797

Title:
  ovs_lib: get_port_tag_dict race with port removal

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlVuYWJsZSB0byBleGVjdXRlIFwiIEFORCBtZXNzYWdlOiBcIi0tY29sdW1ucz1uYW1lLHRhZ1wiIEFORCBtZXNzYWdlOiBcIm92cy12c2N0bDogbm8gcm93XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyOTE1OTgwNzI3OX0=

  
http://logs.openstack.org/45/160245/42/check/check-tempest-dsvm-neutron-dvr/09be830/logs/screen-q-agt.txt.gz
  2015-04-15 10:05:53.275 DEBUG neutron.agent.linux.utils 
[req-3c5b23d8-3c6f-4145-
  bdc2-c21c106b4192 None None] 
  Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'list
  -ports', 'br-int']
  Exit code: 0
  Stdin:
  Stdout: 
patch-tun\nqr-5a91d525-f6\nqr-a84affc0-ff\nqvo34a97bc9-9a\nsg-1e0fedbc-a
  
9\nsg-29d238f1-d0\nsg-591a0885-ba\nsg-d27f8240-60\ntap2566d7d2-51\ntap3704a993-8
  5\ntapd8d2b093-c5

  Stderr:  execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:134
  2015-04-15 10:05:53.275 DEBUG neutron.agent.linux.utils 
[req-3c5b23d8-3c6f-4145-
  bdc2-c21c106b4192 None None] Running command (rootwrap daemon): ['ovs-vsctl', 
'-
  -timeout=10', '--oneline', '--format=json', '--', '--columns=name,tag', 
'list', 
  'Port', 'patch-tun', 'qr-5a91d525-f6', 'qr-a84affc0-ff', 'qvo34a97bc9-9a', 
'sg-1
  e0fedbc-a9', 'sg-29d238f1-d0', 'sg-591a0885-ba', 'sg-d27f8240-60', 
'tap2566d7d2-
  51', 'tap3704a993-85', 'tapd8d2b093-c5'] execute_rootwrap_daemon 
/opt/stack/new/
  neutron/neutron/agent/linux/utils.py:100
  2015-04-15 10:05:53.276 3624 DEBUG neutron.agent.linux.ovsdb_monitor [-] 
Output 
  received from ovsdb monitor: 
{"data":[["b3d74d22-a37d-4cdc-a85c-e14437a46cc8","d
  elete","sg-591a0885-ba",75]],"headings":["row","action","name","ofport"]}
   _read_stdout /opt/stack/new/neutron/neutron/agent/linux/ovsdb_monitor.py:44
  2015-04-15 10:05:53.280 DEBUG neutron.agent.linux.utils 
[req-3c5b23d8-3c6f-4145-
  bdc2-c21c106b4192 None None] 
  Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--co
  lumns=name,tag', 'list', 'Port', u'patch-tun', u'qr-5a91d525-f6', 
u'qr-a84affc0-
  ff', u'qvo34a97bc9-9a', u'sg-1e0fedbc-a9', u'sg-29d238f1-d0', 
u'sg-591a0885-ba',
   u'sg-d27f8240-60', u'tap2566d7d2-51', u'tap3704a993-85', u'tapd8d2b093-c5']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: ovs-vsctl: no row "sg-591a0885-ba" in table Port
   execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:134
  2015-04-15 10:05:53.281 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-3c5b23d8-3c6f-
  4145-bdc2-c21c106b4192 None None] Unable to execute ['ovs-vsctl', 
'--timeout=10'
  , '--oneline', '--format=json', '--', '--columns=name,tag', 'list', 'Port', 
u'pa
  tch-tun', u'qr-5a91d525-f6', u'qr-a84affc0-ff', u'qvo34a97bc9-9a', 
u'sg-1e0fedbc
  -a9', u'sg-29d238f1-d0', u'sg-591a0885-ba', u'sg-d27f8240-60', 
u'tap2566d7d2-51'
  , u'tap3704a993-85', u'tapd8d2b093-c5'].
  2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl Traceback 
(mos
  t recent call last):
  2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl   File 
"/opt/s
  tack/new/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in run_vsctl
  2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl 
log_fail_a
  s_error=False).rstrip()
  2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl   File 
"/opt/s
  tack/new/neutron/neutron/agent/linux/utils.py", line 137, in execute
  2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl raise 
Runt
  imeError(m)
  2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl 
RuntimeError: 
  2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl Command: 
['ovs
  -vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=name,tag
  ', 'list', 'Port', u'patch-tun', u'qr-5a91d525-f6', u'qr-a84affc0-ff', 
u'qvo34a9
  7bc9-9a', u'sg-1e0fedbc-a9', u'sg-29d238f1-d0', u'sg-591a0885-ba', 
u'sg-d27f8240
  -60', u'tap2566d7d2-51', u'tap3704a993-85', u'tapd8d2b093-c5']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490051] Re: test_assert_pings_during_br_int_setup_not_lost fails with oslo_rootwrap.wrapper.NoFilterMatched

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490051

Title:
  test_assert_pings_during_br_int_setup_not_lost fails with
  oslo_rootwrap.wrapper.NoFilterMatched

Status in neutron:
  Incomplete
Status in neutron kilo series:
  New

Bug description:
  Logstash:
  message:"time.sleep(0.25)" AND build_name:"gate-neutron-dsvm-functional"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGltZS5zbGVlcCgwLjI1KVwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1uZXV0cm9uLWRzdm0tZnVuY3Rpb25hbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDQwODAxODc4NDYxfQ==

  Sample console output:
  
http://logs.openstack.org/07/202207/18/gate/gate-neutron-dsvm-functional/ea0382e/console.html

  Sample trace:
  http://paste.openstack.org/show/431453/

  I suspect the oslo rootwrap filter issue to be a red herring.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511311] Re: L3 agent failed to respawn keepalived process

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511311

Title:
  L3 agent failed to respawn keepalived process

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  I enabled the l3 ha in neutron configuration, and I usually see the
  following log in l3_agent.log:

  2015-10-14 22:30:16.397 21460 ERROR neutron.agent.linux.external_process [-] 
default-service for router with uuid 59de181e-8f02-470d-80f6-cb9f0d46f78b not 
found. The process should not have died
  2015-10-14 22:30:16.397 21460 ERROR neutron.agent.linux.external_process [-] 
respawning keepalived for uuid 59de181e-8f02-470d-80f6-cb9f0d46f78b
  2015-10-14 22:30:16.397 21460 DEBUG neutron.agent.linux.utils [-] Unable to 
access /var/lib/neutron/ha_confs/59de181e-8f02-470d-80f6-cb9f0d46f78b.pid 
get_value_from_file 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:222
  2015-10-14 22:30:16.398 21460 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qrouter-59de181e-8f02-470d-80f6-cb9f0d46f78b', 
'keepalived', '-P', '-f', 
'/var/lib/neutron/ha_confs/59de181e-8f02-470d-80f6-cb9f0d46f78b/keepalived.conf',
 '-p', '/var/lib/neutron/ha_confs/59de181e-8f02-470d-80f6-cb9f0d46f78b.pid',  
'-r', 
'/var/lib/neutron/ha_confs/59de181e-8f02-470d-80f6-cb9f0d46f78b.pid-vrrp'] 
create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:84 
 

  And I noticed that the count of vrrp pid files was usually bigger
  than the count of "pid" files:

  root@neutron2:~# ls /var/lib/neutron/ha_confs/ | grep pid | grep -v vrrp | wc 
-l
  664
  root@neutron2:~# ls /var/lib/neutron/ha_confs/ | grep vrrp | wc -l
  677

  And it seems that if the "pid-vrrp" file exists, we can't successfully
  respawn the keepalived process using this kind of command:
  keepalived -P -f 
/var/lib/neutron/ha_confs/cb01b1de-fa6c-461e-ba39-4d506dfdfccb/keepalived.conf 
-p /var/lib/neutron/ha_confs/cb01b1de-fa6c-461e-ba39-4d506dfdfccb.pid -r 
/var/lib/neutron/ha_confs/cb01b1de-fa6c-461e-ba39-4d506dfdfccb.pid-vrrp

  So, in neutron, after we have checked that the pid is not active, could we
  check for the existence of the "pid" file and the "vrrp pid" file and
  remove them before respawning the keepalived process, to make sure the
  process can be started successfully?

  
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L91-L92
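
  A sketch of the stale-pid-file cleanup suggested above (illustrative only;
  the helper does not match neutron's ProcessManager API exactly):

  import os

  def remove_stale_pid_files(pid_file):
      """Remove leftover keepalived pid files before respawning."""
      for path in (pid_file, pid_file + '-vrrp'):
          if os.path.exists(path):
              os.unlink(path)

  # e.g. remove_stale_pid_files(
  #     '/var/lib/neutron/ha_confs/<router-id>.pid')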

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401095] Re: HA router can't be manually scheduled on L3 agent

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401095

Title:
  HA router can't be manually scheduled on L3 agent

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  HA routers get scheduled automatically to L3 agents, you can view the
  router using l3-agent-list-hosting-router

  $ neutron l3-agent-list-hosting-router harouter2
  +--------------------------------------+------+----------------+-------+
  | id                                   | host | admin_state_up | alive |
  +--------------------------------------+------+----------------+-------+
  | 9c34ec17-9045-4744-ae82-1f65f72ce3bd | net1 | True           | :-)   |
  | cf758b1b-423e-44d9-ab0f-cf0d524b3dac | net2 | True           | :-)   |
  | f2aac1e3-7a00-47c3-b6c9-2543d4a2ba9a | net3 | True           | :-)   |
  +--------------------------------------+------+----------------+-------+

  You can remove it from an agent using l3-agent-router-remove, but when using 
l3-agent-router-add you get a 409:
  $ neutron l3-agent-router-add bff55e85-65f6-4299-a3bb-f0e1c1ee2a05 harouter2
  Conflict (HTTP 409) (Request-ID: req-22c1bb67-f0f8-4194-b863-93b8bb561c83)

  The log says:
  2014-12-10 07:47:41.036 INFO neutron.api.v2.resource 
[req-22c1bb67-f0f8-4194-b863-93b8bb561c83 admin 
f1bb80396ef34197b30117dfef45bea8] create failed (client error): The router 
72b9f897-b84d-4270-a645-af38fe3bd838 has been already hosted by the L3 Agent 
9c34ec17-9045-4744-ae82-1f65f72ce3bd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453855] Re: HA routers may fail to send out GARPs when node boots

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453855

Title:
  HA routers may fail to send out GARPs when node boots

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When a node boots, it starts the OVS and L3 agents. As an example, in
  RDO systemd unit files, these services have no dependency. This means
  that the L3 agent can start before the OVS agent. It can start
  configuring routers before the OVS agent finished syncing with the
  server and starts processing ovsdb monitor updates. The result is that
  when the L3 agent finishes configuring an HA router, it starts up
  keepalived, which under certain conditions will transition to master
  and send our gratuitous ARPs before the OVS agent finishes plugging
  its ports. This means that the gratuitous ARP will be lost, but with
  the router acting as master, this can cause black holes.

  Possible solutions:
  * Introduce systemd dependencies, but this has its set of intricacies and 
it's hard to solve the above problem comprehensively just with this approach.
  * Regardless, it's a good idea to use new keepalived flags:
  garp_master_repeat  how often the gratuitous ARP after MASTER state 
transition should be repeated?
  garp_master_refresh  Periodic delay in seconds sending gratuitous 
ARP while in MASTER state

  These will be configurable and have sane defaults.
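
  For reference, a keepalived.conf fragment using those flags might look like
  this (the interface name and values are illustrative, not neutron's
  defaults):

  vrrp_instance VR_1 {
      state BACKUP
      interface ha-xxxxxxxx-xx
      virtual_router_id 1
      priority 50
      garp_master_repeat 5      # repeat gratuitous ARPs after the MASTER transition
      garp_master_refresh 10    # and keep refreshing them every 10s while MASTER
      virtual_ipaddress {
          169.254.0.1/24 dev ha-xxxxxxxx-xx
      }
  }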

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512199] Re: change vm fixed ips will cause unable to communicate to vm in other network

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512199

Title:
  change vm fixed ips will cause unable to communicate to vm in other
  network

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  I use dvr+kilo,  vxlan.  The environment is like:

  vm2-2<- compute1  --vxlan-  comupte2 ->vm2-1
  vm3-1<-

  vm2-1<- net2  -router1- net3 ->vm3-1
  vm2-2<-

  
  vm2-1(192.168.2.3) and vm2-2(192.168.2.4) are in the same net(net2 
192.168.2.0/24) but not assigned to the same compute node. vm3-1 is in 
net3(192.168.3.0/24). net2 and net3 are connected by router1. The three vms are 
in default security-group. Not use firewall.

  1. Use the command below to change the IP of vm2-1.
  neutron port-update portID  --fixed-ip 
subnet_id=subnetID,ip_address=192.168.2.10 --fixed-ip 
subnet_id=subnetID,ip_address=192.168.2.20
  In vm2-1, running "sudo udhcpc" (CirrOS) to renew the IP, the DHCP messages
  are correct but the IP does not change.
  Then reboot vm2-1; the IP of vm2-1 becomes 192.168.2.20.

  2. From vm2-2, pinging 192.168.2.20 succeeds, but from vm3-1 pinging
  192.168.2.20 fails.

  By capturing packets and looking at related information, the reasons may be:
  1. The new IP (192.168.2.20) and MAC of vm2-1 were not written to the ARP
  cache in the namespace of router1 on the compute1 node.
  2. In DVR mode, the ARP request from the gateway port (192.168.2.1) on
  compute1 to vm2-1 was dropped by the flow table on compute2, so the ARP
  request (192.168.2.1->192.168.2.20) could not reach vm2-1.
  3. For vm2-2, the ARP request (192.168.2.4->192.168.2.20) was not dropped,
  so it could reach vm2-1.

  In my opinion, if both new fixed IPs of vm2-1 (192.168.2.10 and
  192.168.2.20) and its MAC were written to the ARP cache in the namespace of
  router1 on the compute1 node, the problem would be resolved. But only one
  IP (192.168.2.10) and the MAC are written.

  BTW, if only one fixed IP is set for vm2-1, it works fine. But if two fixed
  IPs are set for vm2-1, the problem above most probably happens.
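
  (As a quick check of this theory, the ARP cache in the router namespace on
  compute1 can be inspected, and populated by hand as a workaround; the
  router ID, qr- device and MAC below are placeholders:)

  ip netns exec qrouter-<router1-id> ip neigh show dev qr-<port-id>
  ip netns exec qrouter-<router1-id> ip neigh replace 192.168.2.20 \
      lladdr <vm2-1-mac> dev qr-<port-id> nud permanent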

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1512199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460164] Re: restart of openvswitch-switch causes instance network down when l2population enabled

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460164

Title:
  restart of openvswitch-switch causes instance network down when
  l2population enabled

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive icehouse series:
  Fix Released
Status in Ubuntu Cloud Archive juno series:
  New
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  Fix Released
Status in neutron source package in Wily:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  Restarts of openvswitch (typically on upgrade) result in loss of tunnel 
connectivity when the l2population driver is in use.  This results in loss of 
access to all instances on the effected compute hosts

  [Test Case]
  Deploy cloud with ml2/ovs/l2population enabled
  boot instances
  restart ovs; instance connectivity will be lost until the 
neutron-openvswitch-agent is restarted on the compute hosts.

  [Regression Potential]
  Minimal - in multiple stable branches upstream.

  [Original Bug Report]
  On 2015-05-28, our Landscape auto-upgraded packages on two of our
  OpenStack clouds.  On both clouds, but only on some compute nodes, the
  upgrade of openvswitch-switch and corresponding downtime of
  ovs-vswitchd appears to have triggered some sort of race condition
  within neutron-plugin-openvswitch-agent leaving it in a broken state;
  any new instances come up with non-functional network but pre-existing
  instances appear unaffected.  Restarting n-p-ovs-agent on the affected
  compute nodes is sufficient to work around the problem.

  The packages Landscape upgraded (from /var/log/apt/history.log):

  Start-Date: 2015-05-28  14:23:07
  Upgrade: nova-compute-libvirt:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
libsystemd-login0:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
nova-compute-kvm:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
systemd-services:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
isc-dhcp-common:amd64 (4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), nova-common:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), python-nova:amd64 (2014.1.4-0ubuntu2, 
2014.1.4-0ubuntu2.1), libsystemd-daemon0:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), grub-common:amd64 (2.02~beta2-9ubuntu1.1, 
2.02~beta2-9ubuntu1.2), libpam-systemd:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), udev:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
grub2-common:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), 
openvswitch-switch:amd64 (2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2), 
libudev1:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), isc-dhcp-client:amd64 
(4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), python-eventlet:amd64 (0.13.0-1ubuntu2, 
0.13.0-1ubuntu
 2.1), python-novaclient:amd64 (2.17.0-0ubuntu1.1, 2.17.0-0ubuntu1.2), 
grub-pc-bin:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), grub-pc:amd64 
(2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), nova-compute:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), openvswitch-common:amd64 
(2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2)
  End-Date: 2015-05-28  14:24:47

  From /var/log/neutron/openvswitch-agent.log:

  2015-05-28 14:24:18.336 47866 ERROR neutron.agent.linux.ovsdb_monitor
  [-] Error received from ovsdb monitor: ovsdb-client:
  unix:/var/run/openvswitch/db.sock: receive failed (End of file)

  Looking at a stuck instances, all the right tunnels and bridges and
  what not appear to be there:

  root@vector:~# ip l l | grep c-3b
  460002: qbr7ed8b59c-3b:  mtu 1500 qdisc 
noqueue state UP mode DEFAULT group default
  460003: qvo7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master ovs-system state UP mode DEFAULT group default qlen 1000
  460004: qvb7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master qbr7ed8b59c-3b state UP mode DEFAULT group default qlen 
1000
  460005: tap7ed8b59c-3b:  mtu 1500 qdisc 
pfifo_fast master qbr7ed8b59c-3b state UNKNOWN mode DEFAULT group default qlen 
500
  root@vector:~# ovs-vsctl list-ports br-int | grep c-3b
  qvo7ed8b59c-3b
  root@vector:~#

  But I can't ping the unit from within the qrouter-${id} namespace on
  the neutron gateway.  If I tcpdump the {q,t}*c-3b interfaces, I don't
  see any traffic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1460164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472955] Re: Fail to boot with snapshot image in vmware driver

2016-05-09 Thread Dave Walker
*** This bug is a duplicate of bug 1240373 ***
https://bugs.launchpad.net/bugs/1240373

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472955

Title:
  Fail to boot with snapshot image in vmware driver

Status in OpenStack Compute (nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  branch: master/kilo

  When using nova VMwareVCDriver, nova boot failed with instance
  snapshot image.

  This error is found from tempest failure:
  
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_shelve_shelved_server

  Steps to reproduce the failure (command line):
  1. Boot an instance using a vmdk image; for example, the instance is named
  test1
  2. Take a snapshot of the instance:
  nova image-create test1 test1_snapshot
  3. Boot an instance using the snapshot image, with the same disk-size
  flavor as in step 1

  Expected result: Instance should boot up successfully

  Actual result: the instance boots up with ERROR and the following error is
  found in compute.log:
  VMwareDriverException: A specified parameter was not correct.
  capacity

  Full log is available here: http://paste.openstack.org/show/358046/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403034] Re: Horizon should accept an IPv6 address as a VIP Address for LB Pool

2016-05-09 Thread Dave Walker
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1403034

Title:
  Horizon should accept an IPv6 address as a VIP Address for LB Pool

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  Description of problem:
  ===
  Horizon should accept an IPv6 address as a VIP Address for LB Pool.

  Version-Release number of selected component (if applicable):
  =
  RHEL-OSP6: 2014-12-12.1
  python-django-horizon-2014.2.1-2.el7ost.noarch

  How reproducible:
  =
  Always

  Steps to Reproduce:
  ===
  0. Have an IPv6 subnet.

  1. Browse to: http:///dashboard/project/loadbalancers/

  2. Create a Load Balancing Pool.

  3. Add VIP as follows:
 - Name: test
 - VIP Subnet: Select your IPv6 subnet
 - Specify a free IP address from the selected subnet: IPv6 address such 
as: 2001:65:65:65::a
 - Protocol Port: 80
 - Protocol: HTTP
 - Admin State: UP

  Actual results:
  ===
  Error: Invalid version for IP address

  Expected results:
  =
  Should work OK, this is supported in the python-neutronclient.
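
  A sketch of the kind of form-field change this implies (assuming Horizon's
  IPField and its version flags; this is illustrative, not the merged patch):

  from horizon import forms

  class AddVip(forms.SelfHandlingForm):
      address = forms.IPField(
          label="Specify a free IP address from the selected subnet",
          version=forms.IPv4 | forms.IPv6,   # accept either address family
          mask=False,
          required=False)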

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1403034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521524] Re: With DVR enabled instances sometimes fail to get metadata

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521524

Title:
  With DVR enabled instances sometimes fail to get metadata

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Rally scenario which creates VMs with floating IPs at a high rate
  sometimes fails with SSHTimeout when trying to connect to the VM by
  floating IP. At the same time pings to the VM are fine.

  It appeared that VMs may sometimes fail to get the public key from
  metadata. That happens because the metadata proxy process was started
  after the VM booted.

  Further analysis showed that l3 agent on compute node was not notified
  about new VM port at the time this port was created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297325] Re: swap and ephemeral devices defined in the flavor are not created as a block device mapping

2016-05-09 Thread Dave Walker
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297325

Title:
  swap and ephemeral devices defined in the flavor are not created as a
  block device mapping

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  When booting an instance specifying the swap and/or ephemeral devices,
  those will be created as a block device mapping in the database
  together with the image and volumes if present.

  If, instead, we rely on libvirt to define the swap and ephemeral
  devices later from the specified instance type, those devices won't be
  added to the block device mapping list.

  To be consistent and to prevent any errors when trying to guess the
  device name from the existing block device mappings, we should create
  mappings for those devices if present in the instance type. We
  should create them from the API layer, before validating the block
  device mappings and only if no swap or ephemeral device are defined by
  the user.
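
  A sketch of the kind of defaults the API layer could inject when the user
  supplies no swap/ephemeral mappings (field names follow the block device
  mapping v2 format; this is illustrative, not the merged change):

  def default_flavor_bdms(flavor):
      bdms = []
      if flavor.get('ephemeral_gb'):
          bdms.append({
              'source_type': 'blank', 'destination_type': 'local',
              'device_type': 'disk', 'guest_format': None,
              'volume_size': flavor['ephemeral_gb'],
              'delete_on_termination': True, 'boot_index': -1,
          })
      if flavor.get('swap'):
          bdms.append({
              'source_type': 'blank', 'destination_type': 'local',
              'device_type': 'disk', 'guest_format': 'swap',
              # NOTE: flavor swap is in MB; a real implementation must
              # convert it to the unit the BDM field expects.
              'volume_size': flavor['swap'],
              'delete_on_termination': True, 'boot_index': -1,
          })
      return bdms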

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240373] Re: VMware: Sparse glance vmdk's size property is mistaken for capacity

2016-05-09 Thread Dave Walker
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240373

Title:
  VMware: Sparse glance vmdk's size property is mistaken for capacity

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in VMwareAPI-Team:
  Confirmed

Bug description:
  Scenario:

  a sparse vmdk whose file size is 800MB and whose capacity is 4GB is uploaded 
to glance without specifying the size property.
  (glance uses the file's size for the size property in this case)

  nova boot with flavor tiny (root disk size of 1GB) said image.

  Result:
  The vmwareapi driver fails to spawn the VM because the ESX server throws a 
fault when asked to 'grow' the disk from 4GB to 1GB (the driver thinks it is 
an attempt to grow from 800MB to 1GB)

  Relevant hostd.log on ESX host:
  2013-10-15T17:02:24.509Z [35BDDB90 verbose 'Default'
  opID=HB-host-22@3170-d82e35d0-80] ha-license-manager:Validate -> Valid
  evaluation detected for "VMware ESX Server 5.0" (lastError=2,
  desc.IsValid:Yes)
  2013-10-15T17:02:25.129Z [FFBE3D20 info 'Vimsvc.TaskManager'
  opID=a3057d82-8e] Task Created :
  haTask--vim.VirtualDiskManager.extendVirtualDisk-526626761


  2013-10-15T17:02:25.158Z [35740B90 warning 'Vdisksvc' opID=a3057d82-8e]
  New capacity (2097152) is not greater than original capacity (8388608).

  I am still not exactly sure if this is considered user error on glance
  import, a glance shortcoming of not introspecting the vmdk, or a bug
  in the compute driver. Regardless, this bug is to track any potential
  defensive code we can add to the driver to better handle this
  scenario.
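
  A sketch of the kind of defensive check the driver could make before asking
  the host to extend the disk (helper names are illustrative, not the
  driver's actual ones):

  def maybe_extend_disk(current_capacity_bytes, flavor_root_gb, extend_fn):
      requested = flavor_root_gb * 1024 ** 3
      if requested <= current_capacity_bytes:
          # The vmdk is already at least as large as the flavor's root
          # disk; extending would fail with "new capacity is not greater
          # than original capacity", so skip the call.
          return
      extend_fn(requested)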

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463891] Re: Setting admin_state down on port produces no logs

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463891

Title:
  Setting admin_state down on port  produces no logs

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Tried to check how admin_state down affects HA ports.
  Noticed that management traffic between them stopped, causing them to become
  master, although traffic to the connected floating IP kept working.
  Problem is: no log on the OVS agent indicated why it was processing a port
  update or why it was setting the port's VLAN tag to 4095.
  (06:39:44 PM) amuller: there should be an INFO level log saying something
  like: "Setting port admin_state to {True/False}"
  (06:39:56 PM) amuller: with the port ID of course

  Current log:
  2015-06-08 10:25:25.782 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 'ha-8e0f96c5-78' has lost its 
vlan tag '1'!
  2015-06-08 10:25:25.783 1055 INFO neutron.agent.securitygroups_rpc 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Refresh firewall rules
  2015-06-08 10:25:26.784 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 
8e0f96c5-7891-46a4-8420-778454949bd0 updated. Details: {u'profile': {}, 
u'allowed_address_pairs': [], u'admin_state_up': False, u'network_id': 
u'6a5116a2-39f7-45bc-a432-3d624765d602', u'segmentation_id': 10, 
u'device_owner': u'network:router_ha_interface', u'physical_network': None, 
u'mac_address': u'fa:16:3e:02:cb:47', u'device': 
u'8e0f96c5-7891-46a4-8420-778454949bd0', u'port_security_enabled': True, 
u'port_id': u'8e0f96c5-7891-46a4-8420-778454949bd0', u'fixed_ips': 
[{u'subnet_id': u'f81913ba-328f-4374-96f2-1a7fd44d7fb1', u'ip_address': 
u'169.254.192.3'}], u'network_type': u'vxlan'}
  2015-06-08 10:25:26.940 1055 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Configuration for device 
8e0f96c5-7891-46a4-8420-778454949bd0 completed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501686] Re: Incorrect exception handling in DB code involving rollbacked transactions.

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501686

Title:
  Incorrect exception handling in DB code involving rollbacked
  transactions.

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  I found out that some methods like  add_ha_port
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L312-L328
  contain the following logic:

  def create():
      create_something()
      try:
          _do_other_thing()
      except Exception:
          with excutils.save_and_reraise_exception():
              delete_something()

  def _do_other_thing():
      with context.session.begin(subtransactions=True):
          ...

  The problem is that when an exception is raised in _do_other_thing it is
  caught in the except block, but the object cannot be deleted in the
  except section because the internal transaction has been rolled back. We
  have tests on these methods, but they are also not correct (for example
  https://github.com/openstack/neutron/blob/master/neutron/tests/unit/db/test_l3_hamode_db.py#L360-L377)
  as the _do_other_thing() methods are mocked, so the inner transaction is
  never created and aborted.

  The possible solution is to use a nested transaction in such places, like
  this:

  def create():
      with context.session.begin(subtransactions=True):
          create_something()
          try:
              _do_other_thing()
          except Exception:
              with excutils.save_and_reraise_exception():
                  delete_something()

  def _do_other_thing():
      with context.session.begin(nested=True):
          ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501686/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487338] Re: OVS ARP spoofing protection breaks floating IPs without port security extension

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487338

Title:
  OVS ARP spoofing protection breaks floating IPs without port security
  extension

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The OVS ARP spoofing protection depends on the port security extension
  being enabled to disable ARP spoofing protection on router interfaces
  that have floating IP traffic on them. So if the port security
  extension is disabled the router interface will get ARP spoofing
  rules, which don't know about the floating IPs and will drop the ARP
  requests for them.
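
  A minimal sketch of the intended behaviour, assuming a port dict as
  delivered to the agent; the keys and the helper itself are illustrative,
  not the agent's actual code:

      def needs_arp_protection(port):
          # Never ARP-protect network-owned ports (router interfaces and the
          # like): they legitimately answer ARP for floating IPs.
          if port.get('device_owner', '').startswith('network:'):
              return False
          # Only honour port_security_enabled if the extension populated it;
          # default to protecting ordinary VM ports.
          return port.get('port_security_enabled', True)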

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487338/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457527] Re: Image-cache deleting active swap backing images

2016-05-09 Thread Dave Walker
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457527

Title:
  Image-cache deleting active swap backing images

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  Version: 1:2015.1.0-0ubuntu1~cloud0 (Kilo)

  I am having issues with backing images of disk.swap being deleted from
  the image cache.

  Here is part of the nova-compute log; although multiple instances have
  disk.swap with swap_256 as the base image, the image cache repeatedly
  tries to delete it:

  2015-05-14 08:08:15.080 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 08:48:46.209 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 09:29:00.814 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 10:09:14.351 4319 INFO nova.virt.libvirt.imagecache 
[req-967c82af-329e-42ff-aade-f4af2b4ba732 - - - - -] Removing base or swap 
file: /var/lib/nova/instances/_base/swap_256
  2015-05-14 16:14:21.340 6479 INFO nova.virt.libvirt.imagecache 
[req-1d61428f-0b8b-4bae-9293-42ac99dc3f58 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 16:55:21.195 6479 INFO nova.virt.libvirt.imagecache 
[req-1d61428f-0b8b-4bae-9293-42ac99dc3f58 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256
  2015-05-14 17:36:12.260 6479 INFO nova.virt.libvirt.imagecache 
[req-1d61428f-0b8b-4bae-9293-42ac99dc3f58 - - - - -] Base or swap file too 
young to remove: /var/lib/nova/instances/_base/swap_256

  I am NOT using shared storage for the instances directory, it is
  sitting on the local filesystem, and instances on the same host node
  are using the swap base file that the image cache is deleting.

  As far as I can tell, it is attempting to delete it immediately after
  the swap image is created. Swap backing images of the form
  swap_512_512 are not deleted though (as opposed to just swap_512; I
  couldn't figure out what the difference is).

  Reproduce: Create volume with swap disk
  Temporary solution: image_cache_manager_interval = -1
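
  For reference, that workaround corresponds to disabling the periodic image
  cache manager task in nova.conf; a hedged illustration (section placement
  and wording are assumptions, check your release's sample config):

      [DEFAULT]
      # A negative interval disables the periodic image cache manager run,
      # so active swap backing files are no longer candidates for removal.
      image_cache_manager_interval = -1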

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461148] Re: Dead L3 agent should reflect HA router states

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461148

Title:
  Dead L3 agent should reflect HA router states

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The main use case of L3 HA is bouncing back from a machine (That is
  running a L3 agent) dying. In this case, with bp/report-ha-router-
  master merged, any active routers on that node will remain active in
  the Neutron DB (As the dead agent cannot update the server of
  anything). A backup node will pick up the routers previously active on
  the dead node and will update their status, resulting in the Neutron
  DB having the router 'active' on two different nodes. This can mess up
  l2pop as HA router interfaces will now be arbitrarily hosted on any of
  the 'active' hosts.

  The solution would be that when a L3 agent is marked as dead, to go
  ahead and change the HA router states on that agent to from active to
  standby, and also to update the router ports 'host' value to point to
  the new active agent.
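
  A minimal sketch of that behaviour using a stand-in data structure rather
  than Neutron's real models (all names are illustrative):

      from dataclasses import dataclass

      @dataclass
      class HABinding:
          router_id: str
          l3_agent_id: str
          state: str

      def demote_bindings_for_dead_agent(bindings, dead_agent_id):
          # When an agent is declared dead, flip its 'active' bindings to
          # 'standby' so the DB never reports two active hosts per router.
          for binding in bindings:
              if (binding.l3_agent_id == dead_agent_id
                      and binding.state == 'active'):
                  binding.state = 'standby'

      bindings = [HABinding('r1', 'agent-1', 'active'),
                  HABinding('r1', 'agent-2', 'standby')]
      demote_bindings_for_dead_agent(bindings, 'agent-1')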

  Note: This bug is at least partially coupled with
  https://bugs.launchpad.net/neutron/+bug/1365476. Ideally we could
  solve the two bugs in two separate patches with no dependencies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499647] Re: test_ha_router fails intermittently

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499647

Title:
  test_ha_router fails intermittently

Status in neutron:
  In Progress
Status in neutron kilo series:
  New

Bug description:
  I have tested L3 HA on an environment with 3 controllers and 1 compute
  node (Kilo, keepalived v1.2.13). I created 50 nets with 50 subnets and
  50 routers, with an interface set for each subnet (note: I've seen the
  same errors with just one router and net). I got the following errors:

  root@node-6:~# neutron l3-agent-list-hosting-router router-1
  Request Failed: internal server error while processing your request.
   
  In neutron-server error log:  http://paste.openstack.org/show/473760/

  When I fixed _get_agents_dict_for_router to skip None for further
  testing, I was then able to see:

  root@node-6:~# neutron l3-agent-list-hosting-router router-1
  
  +--------------------------------------+-------------------+----------------+-------+----------+
  | id                                   | host              | admin_state_up | alive | ha_state |
  +--------------------------------------+-------------------+----------------+-------+----------+
  | f3baba98-ef5d-41f8-8c74-a91b7016ba62 | node-6.domain.tld | True           | :-)   | active   |
  | c9159f09-34d4-404f-b46c-a8c18df677f3 | node-7.domain.tld | True           | :-)   | standby  |
  | b458ab49-c294-4bdb-91bf-ae375d87ff20 | node-8.domain.tld | True           | :-)   | standby  |
  | f3baba98-ef5d-41f8-8c74-a91b7016ba62 | node-6.domain.tld | True           | :-)   | active   |
  +--------------------------------------+-------------------+----------------+-------+----------+

  root@node-6:~# neutron port-list 
--device_id=fcf150c0-f690-4265-974d-8db370e345c4
  
+--+-+---++
  | id   | name 
   | mac_address   | fixed_ips  
|
  
+--+-+---++
  | 0834f8a2-f109-4060-9312-edebac84aba5 |  
   | fa:16:3e:73:9f:33 | {"subnet_id": 
"0c7a2cfa-1cfd-4ecc-a196-ab9e97139352", "ip_address": "172.18.161.223"}  |
  | 2b5a7a15-98a2-4ff1-9128-67d098fa3439 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:b8:f6:35 | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.149"} |
  | 48c887c1-acc3-4804-a993-b99060fa2c75 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:e7:70:13 | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.151"} |
  | 82ab62d6-7dd1-4294-a0dc-f5ebfbcbb4ca |  
   | fa:16:3e:c6:fc:74 | {"subnet_id": 
"c4cc21c9-3b3a-407c-b4a7-b22f783377e7", "ip_address": "10.0.40.1"}   |
  | bbca8575-51f1-4b42-b074-96e15aeda420 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:84:4c:fc | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.150"} |
  | bee5c6d4-7e0a-4510-bb19-2ef9d60b9faf | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:09:a1:ae | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.193.11"}  |
  | f8945a1d-b359-4c36-a8f8-e78c1ba992f0 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:c4:54:b5 | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.193.12"}  |
  
+--+-+---++
  mysql root@192.168.0.2:neutron> SELECT * FROM ha_router_agent_port_bindings 
WHERE router_id='fcf150c0-f690-4265-974d-8db370e345c4';
  
+--+--+--+-+
  | port_id  | router_id
| l3_agent_id  | state   |
  
|--+--+--+-|
  | 2b5a7a15-98a2-4ff1-9128-67d098fa3439 | fcf150c0-f690-4265-974d-8db370e345c4 
| c9159f09-34d4-404f-b46c-a8c18df677f3 | standby |
  | 48c887c1-acc3-4804-a993-b99060fa2c75 | fcf150c0-f690-4265-974d-8db370e345c4 
| b458ab49-c294-4bdb-91bf-ae375d87ff20 | standby |
  | bbca8575-51f1-4b42-b074-9

[Yahoo-eng-team] [Bug 1515341] Re: DVR associating a new floatingip on existing network fails after restart of l3 agent

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515341

Title:
  DVR associating a new floatingip on existing network fails after
  restart of l3 agent

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  If DVR is enabled and an l3-agent is restarted, adding a new vm to an
  existing network and then associating a floating ip to that vm will
  fail and set all floating ip statuses on that network to 'ERROR'.  In
  tracking down this issue, it appears that the 'floating_ip_added_dist'
  method in agent/l3/dvr_local_router.py attempts to invoke
  'self.rtr_fip_subnet.get_pair()' without first checking that
  self.rtr_fip_subnet is not None.
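
  A minimal sketch of the missing guard; the attribute and method names are
  the ones quoted above, while the body and the allocate() call are
  assumptions about how the bookkeeping could be rebuilt:

      def floating_ip_added_dist(self, fip, fip_cidr):
          if self.rtr_fip_subnet is None:
              # After an agent restart this bookkeeping is not initialised;
              # rebuild it instead of raising AttributeError and pushing
              # every floating IP on the network into ERROR.
              self.rtr_fip_subnet = self.fip_ns.local_subnets.allocate(
                  self.router_id)
          rtr_2_fip, fip_2_rtr = self.rtr_fip_subnet.get_pair()
          # ... rest of the method unchanged ...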

  To reproduce this issue:
  On a running DVR enabled installation, create a network, subnet, external 
network and router.
  Boot a vm on the network and associate a floating ip to the vm.
  Restart the neutron-l3-agent on the compute node hosting the vm.
  Boot a second vm on the existing network hosted on the compute node where the 
neutron-l3-agent has been restarted.
  Associate a new floatingip to the second vm.
  Observe that the floating IP is in the ERROR state and that the vm is not
  pingable via that floating IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407105] Re: Password Change Doesn't Affirmatively Invalidate Sessions

2016-05-09 Thread Dave Walker
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1407105

Title:
  Password Change Doesn't Affirmatively Invalidate Sessions

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) icehouse series:
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added as to
  the bug as attachments.

  The password change dialog at /horizon/settings/password/ contains the
  following code:

  
  if user_is_editable:
      try:
          api.keystone.user_update_own_password(request,
                                                data['current_password'],
                                                data['new_password'])
          response = http.HttpResponseRedirect(settings.LOGOUT_URL)
          msg = _("Password changed. Please log in again to continue.")
          utils.add_logout_reason(request, response, msg)
          return response
      except Exception:
          exceptions.handle(request,
                            _('Unable to change password.'))
          return False
  else:
      messages.error(request, _('Changing password is not supported.'))
      return False
  

  There are at least two security concerns here:
  1) Logout is done by means of an HTTP redirect.  Let's say Eve, as MitM, gets 
ahold of Alice's token somehow.  Alice is worried this may have happened, so 
she changes her password.  If Eve suspects that the request is a 
password-change request (which is the most Eve can do, because we're running 
over HTTPS, right?  Right!?), then it's a simple matter to block the redirect 
from ever reaching the client, or the redirect request from hitting the server. 
 From Alice's PoV, something weird happened, but her new password works, so 
she's not bothered.  Meanwhile, Alice's old login ticket continues to work.
  2) Part of the purpose of changing a password is generally to block those who 
might already have the password from continuing to use it.  A password change 
should trigger (insofar as is possible) a purging of all active logins/tokens 
for that user.  That does not happen here.

  Frankly, I'm not the least bit sure if I've thought of the worst-case
  scenario(s) for point #1.  It just strikes me as very strange not to
  aggressively/proactively kill the ticket/token(s), instead relying on
  the client to do so.  Feel free to apply minds smarter and more
  devious than my own!
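
  One way to make the logout less dependent on the client honouring the
  redirect (the first concern above) would be to drop the server-side Django
  session before redirecting. A hedged sketch, reusing the names from the
  snippet above and meant as a drop-in within the same view method, not as
  Horizon's actual fix:

      api.keystone.user_update_own_password(request,
                                            data['current_password'],
                                            data['new_password'])
      # Invalidate the server-side session immediately, so a blocked
      # redirect no longer leaves the old session usable.
      request.session.flush()
      response = http.HttpResponseRedirect(settings.LOGOUT_URL)
      msg = _("Password changed. Please log in again to continue.")
      utils.add_logout_reason(request, response, msg)
      return response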

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1407105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501515] Re: DBReferenceError is raised when updating dvr port binding

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501515

Title:
  DBReferenceError is raised when updating dvr port binding

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Several tests upstream are failing with a DBReferenceError exception
  raised from calls to
  File "/opt/stack/new/neutron/neutron/plugins/ml2/db.py", line 208, in
  ensure_dvr_port_binding

  This appears to be caused by accessing a database resource without
  locking it (race condition).
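
  A minimal sketch of one defensive option: if the port row is deleted
  concurrently, treat the foreign-key failure as "port gone" rather than
  letting it bubble up as an internal error. The wrapper below is
  illustrative, not the ml2 module's actual code:

      from oslo_db import exception as db_exc

      def ensure_dvr_port_binding_safe(insert_binding, session, port_id):
          try:
              return insert_binding(session, port_id)
          except db_exc.DBReferenceError:
              # The referenced port was deleted under us; report it as
              # missing instead of raising a 500.
              return None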

  Here is an excerpt of the error: 
  2015-09-29 18:39:00.822 7813 ERROR oslo_messaging.rpc.dispatcher 
DBReferenceError: (pymysql.err.IntegrityError) (1452, u'Cannot add or update a 
child row: a foreign key constraint fails (`neutron`.`ml2_dvr_port_bindings`, 
CONSTRAINT `ml2_dvr_port_bindings_ibfk_1` FOREIGN KEY (`port_id`) REFERENCES 
`ports` (`id`) ON DELETE CASCADE)') [SQL: u'INSERT INTO ml2_dvr_port_bindings 
(port_id, host, router_id, vif_type, vif_details, vnic_type, profile, status) 
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: 
(u'851c0627-5133-43e2-b7a3-da9c29afd4ea', 
u'devstack-trusty-hpcloud-b2-5150973', u'973254dc-d1aa-4177-b952-2ac648bad4b5', 
'unbound', '', 'normal', '', 'DOWN')]

  An example of the failure can be found here: 
  
http://logs.openstack.org/17/227517/3/check/gate-tempest-dsvm-neutron-dvr/fc1efa2/logs/screen-q-svc.txt.gz?level=ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469322] Re: Associating a floatingip with a dual stack port requires the fixed-address to be specified

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469322

Title:
  Associating a floatingip with a dual stack port requires the fixed-
  address to be specified

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Associating a floatingip with a dual stack port fails unless the IPv4
  address is specified with the fixed-ip-address parameter. Presently,
  when a user attempts to associate a floatingip with a port which has an
  IPv4 and an IPv6 address, Neutron returns 'Bad floatingip request: Port %s
  has multiple fixed IPs.  Must provide a specific IP when assigning a
  floating IP'.

  While this is a helpful error message, it would be better if Neutron
  could infer that the floating IP must be associated with the fixed-ip
  of the same address family.
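
  A minimal sketch of the inference suggested above, assuming port fixed_ips
  are available as a list of dicts with an 'ip_address' key (the helper is
  illustrative, not Neutron's actual code):

      import ipaddress

      def pick_fixed_ip_for_floating(floating_ip, fixed_ips):
          # Choose the fixed IP whose address family matches the floating
          # IP instead of requiring the user to pass fixed_ip_address.
          fip_version = ipaddress.ip_address(floating_ip).version
          matches = [f for f in fixed_ips
                     if ipaddress.ip_address(f['ip_address']).version
                     == fip_version]
          if len(matches) == 1:
              return matches[0]
          return None   # still ambiguous or no match: ask the user

      # Dual stack port + IPv4 floating IP -> the IPv4 fixed IP is chosen.
      print(pick_fixed_ip_for_floating(
          '203.0.113.10',
          [{'ip_address': '10.0.0.5'}, {'ip_address': '2001:db8::5'}]))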

  See also https://bugs.launchpad.net/neutron/+bug/1323766; restricting
  floating IPs to IPv4 would simplify things, since associating a dual
  stack floating IP with a dual stack port could result in any of the
  following:

  1. 1:1 NAT between the IPv4 address of the floating IP and IPv4 address of 
  the associated port.
  2. 1:1 NAT between the IPv6 address of the floating IP and IPv6 address of 
the associated port.
  3. Both 1 and 2 (Neutron presently only uses the first fixed-ip of the port 
owned by the floating IP)
  4. a NAT-PT between the IPv4 address of the floating IP and IPv6 address of 
  the associated port
  5. a NAT-PT between the IPv6 address of the floating IP and IPv4 address of 
  the associated port

  Since NAT-PT and IPv6 NAT are not widely supported option 1 would be
  preferable and the most intuitive of these results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504527] Re: network_device_mtu not documented in agent config files

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504527

Title:
  network_device_mtu not documented in agent config files

Status in neutron:
  Fix Committed
Status in neutron kilo series:
  New

Bug description:
  There is no mention of network_device_mtu in the agent config files,
  even though it is a supported and useful option.

  -bash-4.2$ grep network_device_mtu -r etc/
  -bash-4.2$
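
  A hedged example of the kind of entry the sample agent config files could
  carry; the file name, section, and value are illustrative, not the
  project's actual sample text:

      # etc/l3_agent.ini
      [DEFAULT]
      # MTU to set on the network devices this agent creates.
      # network_device_mtu = 1450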

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477829] Re: Create port API with invalid value returns 500(Internal Server Error)

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477829

Title:
  Create port API with invalid value returns 500(Internal Server Error)

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  I executed "POST /v2.0/ports" with invalid value like a "null" as the 
parameter "allowed_address_pairs".
  Then Neutron Server returned 500(Internal Server Error).

  I expected Neutron Server just returns 400(Bad Request).

  API Result and Logs are as follows.
  [API Result]
  stack@ubuntu:~/deg$ curl -g -i -X POST -H "Content-Type: application/json" -H 
"X-Auth-Token: ${token}" http://192.168.122.99:9696/v2.0/ports -d 
"{\"port\":{\"network_id\":\"7da5015b-4e6a-4c9f-af47-42467a4a34c5\",\"allowed_address_pairs\":null}}"
 ; echo
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json; charset=UTF-8
  Content-Length: 150
  X-Openstack-Request-Id: req-f44e7756-dd17-42c9-81e2-1c38e60a748e
  Date: Thu, 23 Jul 2015 09:35:26 GMT

  {"NeutronError": {"message": "Request Failed: internal server error
  while processing your request.", "type": "HTTPInternalServerError",
  "detail": ""}}

  [Neutron Server Log]
  2015-07-23 18:35:26.373 DEBUG neutron.api.v2.base 
[req-f44e7756-dd17-42c9-81e2-1c38e60a748e demo 
0522fc19a56b4d7ca32a9140d3d36a08] Request body: {u'port': {u'network_id': 
u'7da5015b-4e6a-4c9f-af47-42467a4a34c5', u'allowed_address_pairs': None}} from 
(pid=24318) prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:606
  2015-07-23 18:35:26.376 ERROR neutron.api.v2.resource 
[req-f44e7756-dd17-42c9-81e2-1c38e60a748e demo 
0522fc19a56b4d7ca32a9140d3d36a08] create failed
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 396, in create
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 664, in prepare_request_body
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource 
attr_vals['validate'][rule])
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 52, in 
_validate_allowed_address_pairs
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource if 
len(address_pairs) > cfg.CONF.max_allowed_address_pair:
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource TypeError: object of 
type 'NoneType' has no len()
  2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource
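
  A minimal sketch of the missing validation implied by the traceback: treat
  a non-list allowed_address_pairs value (such as null) as a user error so
  the API can return 400 instead of hitting a TypeError. The function name
  mirrors the one in the trace; the body is illustrative, not Neutron's
  actual validator:

      MAX_ALLOWED_ADDRESS_PAIR = 10  # stands in for the config option

      def _validate_allowed_address_pairs(address_pairs, valid_values=None):
          # Returning a message string follows the convention that a
          # non-None result is turned into a 400 response.
          if not isinstance(address_pairs, list):
              return "Allowed address pairs must be a list"
          if len(address_pairs) > MAX_ALLOWED_ADDRESS_PAIR:
              return ("Exceeded maximum number of allowed address pairs: %d"
                      % MAX_ALLOWED_ADDRESS_PAIR)
          # per-pair validation would continue here

      print(_validate_allowed_address_pairs(None))  # -> an error message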

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557938] Re: [doc]support matrix of vmware for chap is wrong

2016-05-09 Thread Dave Walker
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => Fix Committed

** Changed in: nova/kilo
Milestone: None => 2015.1.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557938

Title:
  [doc]support matrix of vmware for chap is wrong

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  The support matrix says that the vmware driver supports CHAP
  authentication over iSCSI.
  In fact, the vmware driver doesn't pass authentication info to the
  vSphere API, so the feature doesn't work.

  
  Code:
  def _iscsi_add_send_target_host(self, storage_system_mor, hba_device,
                                  target_portal):
      """Adds the iscsi host to send target host list."""
      client_factory = self._session.vim.client.factory
      send_tgt = client_factory.create('ns0:HostInternetScsiHbaSendTarget')
      (send_tgt.address, send_tgt.port) = target_portal.split(':')
      LOG.debug("Adding iSCSI host %s to send targets", send_tgt.address)
      self._session._call_method(
          self._session.vim, "AddInternetScsiSendTargets",
          storage_system_mor, iScsiHbaDevice=hba_device, targets=[send_tgt])

  Doc:
  
http://docs.openstack.org/developer/nova/support-matrix.html#storage_block_backend_iscsi_auth_chap_vmware

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1557938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462484] Re: Port Details VNIC type value is not translatable

2016-05-09 Thread Dave Walker
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462484

Title:
  Port Details VNIC type value is not translatable

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  On port details, the Binding/VNIC type value is not translatable. To
  recreate the problem:
  - create a pseudo translation:

  ./run_tests.sh --makemessages
  ./run_tests.sh --pseudo de
  ./run_tests.sh --compilemessages

  start the dev server, login and change to German/Deutsch (de)

  Navigate to
  Project->Network->Networks->[Detail]->[Port Detail]

  notice at the bottom of the panel the VNIC type is not translated.

  The 3 VNIC types should be translated when displayed in Horizon
  
https://github.com/openstack/neutron/blob/master/neutron/extensions/portbindings.py#L73
  but neutron will expect these to be provided in English on API calls.

  Note that the mapping is already correct on Edit Port - the
  translations just need to be applied on the details panel.
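
  A hedged sketch of one way to make the displayed values translatable while
  still sending English values to the Neutron API (the mapping and helper
  are illustrative, not Horizon's actual code path):

      from django.utils.translation import pgettext_lazy

      VNIC_TYPE_DISPLAY = {
          'normal': pgettext_lazy('VNIC type', u'Normal'),
          'direct': pgettext_lazy('VNIC type', u'Direct'),
          'macvtap': pgettext_lazy('VNIC type', u'MacVTap'),
      }

      def display_vnic_type(api_value):
          # Fall back to the raw API value for anything unknown.
          return VNIC_TYPE_DISPLAY.get(api_value, api_value)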

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520517] Re: IPv6 networking broken by garp_master_* keepalived settings when using HA routers

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520517

Title:
  IPv6 networking broken by garp_master_* keepalived settings when using
  HA routers

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  http://git.openstack.org/cgit/openstack/neutron/commit/?id=5d38dc5
  added the "garp_master_repeat 5" and "garp_master_refresh 10" settings.
  This badly breaks networking to the point where it is completely unusable:

  First of all, this setting causes Keepalived to constantly spam five
  unsolicited neighbour advertisements every ten seconds, as shown in
  this tcpdump (the active router is fe80::f816:3eff:feb8:3857):

  ubuntu@test:~$ sudo tcpdump -i eth0 host fe80::f816:3eff:feb8:3857 
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
  10:08:09.459090 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, router 
advertisement, length 56
  10:08:12.566305 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, router 
advertisement, length 56
  10:08:15.638044 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, router 
advertisement, length 56
  10:08:18.039185 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:18.039275 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:18.039496 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:18.039581 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:18.039595 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:21.952451 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, router 
advertisement, length 56
  10:08:28.046863 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:28.046944 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:28.047045 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:28.047986 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:28.048033 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, neighbor 
advertisement, tgt is fe80::f816:3eff:feb8:3857, length 32
  10:08:30.931114 IP6 fe80::f816:3eff:feb8:3857 > ip6-allnodes: ICMP6, router 
advertisement, length 56

  That's not a problem in itself. The problem, however, is that these
  neighbour advertisements cause the instance to lose its default
  route, which stays gone until the next router advertisement packet
  arrives:

  ubuntu@test:~$ sudo ip -6 monitor route | ts
  Nov 27 10:08:09 default via fe80::f816:3eff:feb8:3857 dev eth0  proto ra  
metric 1024 
  Nov 27 10:08:18 Deleted default via fe80::f816:3eff:feb8:3857 dev eth0  proto 
ra  metric 1024  expires 27sec hoplimit 64
  Nov 27 10:08:21 default via fe80::f816:3eff:feb8:3857 dev eth0  proto ra  
metric 1024 
  Nov 27 10:08:28 Deleted default via fe80::f816:3eff:feb8:3857 dev eth0  proto 
ra  metric 1024  expires 23sec hoplimit 64
  Nov 27 10:08:30 default via fe80::f816:3eff:feb8:3857 dev eth0  proto ra  
metric 1024 

  These periods without a default route obviously badly breaks network
  connectivity for the node, which is easily observable by a simple ping
  test:

  ubuntu@test:~$ ping6 2a02:c0::1 2>&1 | ts
  Nov 27 10:08:15 PING 2a02:c0::1(2a02:c0::1) 56 data bytes
  Nov 27 10:08:15 64 bytes from 2a02:c0::1: icmp_seq=1 ttl=62 time=0.570 ms
  Nov 27 10:08:16 64 bytes from 2a02:c0::1: icmp_seq=2 ttl=62 time=0.873 ms
  Nov 27 10:08:17 64 bytes from 2a02:c0::1: icmp_seq=3 ttl=62 time=0.666 ms
  Nov 27 10:08:18 ping: sendmsg: Network is unreachable
  Nov 27 10:08:19 ping: sendmsg: Network is unreachable
  Nov 27 10:08:20 ping: sendmsg: Network is unreachable
  Nov 27 10:08:21 ping: sendmsg: Network is unreachable
  Nov 27 10:08:22 64 bytes from 2a02:c0::1: icmp_seq=8 ttl=62 time=1.42 ms
  Nov 27 10:08:23 64 bytes from 2a02:c0::1: icmp_seq=9 ttl=62 time=0.785 ms
  Nov 27 10:08:24 64 bytes from 2a02:c0::1: icmp_seq=10 ttl=62 time=0.712 ms
  Nov 27 10:08:25 64 bytes from 2a02:c0::1: icmp_seq=11 ttl=62 time=0.724 ms
  Nov 27 10:08:26 64 bytes from 2a02:c0::1: icmp_seq=12 ttl=62 time=0.864 ms
  Nov 27 10:08:27 64 bytes from 2a02:c0::1: icmp_seq=13 ttl=62 time=0.652 ms
  Nov 27 10:08:28 ping: sendmsg: Network is unreachable
  Nov 2

[Yahoo-eng-team] [Bug 1352256] Re: Uploading a new object fails with Ceph as object storage backend using RadosGW

2016-05-09 Thread Dave Walker
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352256

Title:
  Uploading a new object fails with Ceph as object storage backend using
  RadosGW

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  While uploading a new object using Horizon, with Ceph as the object
  storage backend, it fails with the error message "Error: Unable to
  upload object".

  Ceph Release : Firefly

  Error in horizon_error.log:

  
  [Wed Jul 23 09:04:46.840751 2014] [:error] [pid 30045:tid 140685813683968] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
firefly-master.ashish.com
  [Wed Jul 23 09:04:46.842984 2014] [:error] [pid 30045:tid 140685813683968] 
WARNING:urllib3.connectionpool:HttpConnectionPool is full, discarding 
connection: firefly-master.ashish.com
  [Wed Jul 23 09:04:46.843118 2014] [:error] [pid 30045:tid 140685813683968] 
REQ: curl -i http://firefly-master.ashish.com/swift/v1/new-cont-dash/test -X 
PUT -H "X-Auth-Token: 91fc8466ce17e0d22af86de9b3343b2d"
  [Wed Jul 23 09:04:46.843227 2014] [:error] [pid 30045:tid 140685813683968] 
RESP STATUS: 411 Length Required
  [Wed Jul 23 09:04:46.843584 2014] [:error] [pid 30045:tid 140685813683968] 
RESP HEADERS: [('date', 'Wed, 23 Jul 2014 09:04:46 GMT'), ('content-length', 
'238'), ('content-type', 'text/html; charset=iso-8859-1'), ('connection', 
'close'), ('server', 'Apache/2.4.7 (Ubuntu)')]
  [Wed Jul 23 09:04:46.843783 2014] [:error] [pid 30045:tid 140685813683968] 
RESP BODY: 
  [Wed Jul 23 09:04:46.843907 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843930 2014] [:error] [pid 30045:tid 140685813683968] 
411 Length Required
  [Wed Jul 23 09:04:46.843937 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843944 2014] [:error] [pid 30045:tid 140685813683968] 
Length Required
  [Wed Jul 23 09:04:46.843951 2014] [:error] [pid 30045:tid 140685813683968] 
A request of the requested method PUT requires a valid Content-length.
  [Wed Jul 23 09:04:46.843957 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843963 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843969 2014] [:error] [pid 30045:tid 140685813683968]
  [Wed Jul 23 09:04:46.844530 2014] [:error] [pid 30045:tid 140685813683968] 
Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 
411 Length Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844555 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844607 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844900 2014] [:error] [pid 30045:tid 140685813683968] 
https://bugs.launchpad.net/cloud-archive/+bug/1352256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323766] Re: Incorrect Floating IP behavior in dual stack or ipv6 only network

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323766

Title:
  Incorrect Floating IP behavior  in dual stack or ipv6 only network

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Investigation of the floatingip API revealed a few issues:
  -- If the external network is associated with multiple subnets, one IP
  address from each of the subnets is allocated, but the IP address used for
  the floating IP is picked randomly. For example, a network could have both
  an IPv6 and an IPv4 subnet; in my tests, for one such network it picked
  the IPv4 address, and for the other it picked the IPv6 address.
  -- Given that one IP is allocated from each subnet, the addresses not used
  for the floating IP are wasted.
  -- Most likely, IPv6 floating IPs (with the same mechanism as IPv4) won't
  be supported, yet the API allows IPv6 addresses to be used as floating IPs.
  -- It allows an IPv4 floating IP to be associated with a port's IPv6
  addresses, and vice versa.

  In general, the behavior/semantics involved in the floating IP API
  needs to be looked at in the context of dual stack or ipv6 only
  network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252341] Re: Horizon crashes when removing logged user from project

2016-05-09 Thread Dave Walker
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1252341

Title:
  Horizon crashes when removing logged user from project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed
Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  Horizon crashes when removing the currently logged-in user from any
  project.

  1 - Log in to Horizon with a user that has the admin role.
  2 - In the projects panel, modify the project members of any project and
  add the user you are logged in to Horizon with to that project. Save the
  modification.
  3 - Without logging out, in the projects panel, edit the project you have
  just added the logged-in user to and remove this same user from the
  project.
  4 - When the modification is saved, Horizon shows Unauthorized errors when
  trying to retrieve the user/project/image/... list.
  5 - If you log out and log in again with the same user, everything works
  fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1252341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312016] Re: nova libvirtError: Unable to add bridge brqxxx-xx port tapxxx-xx: Device or resource busy

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312016

Title:
  nova libvirtError: Unable to add bridge brqxxx-xx port tapxxx-xx:
  Device or resource busy

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Hello: My OpenStack version is 2013.1.5 (Grizzly), the plugin is
  linuxbridge, the OS is Ubuntu 12.04.3, and libvirt-bin is
  '1.1.1-0ubuntu8.9'.

  When I launch three instances, two of them succeed and one of the three
  fails to spawn.

  I checked the nova-compute log and found the following errors (it's worth
  noting:
  "libvirtError: Unable to add bridge brq233a5889-2e port tap3f81c08a-39:
  Device or resource busy")

  Has anybody seen the same problem?

  2014-04-24 14:41:58.499 ERROR nova.compute.manager 
[req-4dc590cc-9a34-460d-8c6a-4efdfb9de456 fd7179d2284247179c70db99ee1842db 
4f50d05ffb6b44a29f9b23978e40542b] [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] Instance failed to spawn
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] Traceback (most recent call last):
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1119, in _spawn
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] block_device_info)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1539, in 
spawn
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] block_device_info)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2455, in 
_create_domain_and_network
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] domain = self._create_domain(xml, 
instance=instance)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2416, in 
_create_domain
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] domain.createWithFlags(launch_flags)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] rv = execute(f,*args,**kwargs)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] rv = meth(*args,**kwargs)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 728, in createWithFlags
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]

  libvirtError: Unable to add bridge brq233a5889-2e port tap3f81c08a-39:
  Device or resource busy

  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]
  2014-04-24 14:41:58.680 AUDIT nova.compute.manager 
[req-4dc590cc-9a34-460d-8c6a-4efdfb9de456 fd7179d2284247179c70db99ee1842db 
4f50d05ffb6b44a29f9b23978e40542b] [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] Terminating instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514728] Re: insufficient service name for external process

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514728

Title:
  insufficient service name for external process

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The following external processes are registered with the external process
  manager under an uninformative service name:
  1. HA router IP
  2. DHCP dnsmasq
  3. keepalived
  4. metadata-proxy

  These monitors produce unhelpful log messages like:
  'default-service' for router with uuid xxx-xxx-xxx-xxx-xxx not found.  The
  process should not have died
  respawning 'default-service' for uuid xxx-xxx-xxx-xxx-xxx

  The 'default-service' name is not useful to a cloud administrator.
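
  A minimal sketch of the fix direction: register each external process with
  a descriptive service name so the monitor's warnings identify what died.
  The register() call shape below is an assumption for illustration, not
  necessarily the actual ProcessMonitor API:

      def register_router_service(process_monitor, router_id,
                                  monitored_process, service_name):
          # e.g. service_name='keepalived' or 'metadata-proxy' rather than
          # the catch-all 'default-service'.
          process_monitor.register(uuid=router_id,
                                   service_name=service_name,
                                   monitored_process=monitored_process)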

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523638] Re: tempest fails with No IPv4 addresses found

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523638

Title:
  tempest fails with  No IPv4 addresses found

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in tempest:
  Fix Released

Bug description:
  http://logs.openstack.org/42/250542/7/check/gate-tempest-dsvm-neutron-
  linuxbridge/3a00f8b/logs/testr_results.html.gz

  Traceback (most recent call last):
File "tempest/test.py", line 113, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/scenario/test_network_basic_ops.py", line 550, in 
test_subnet_details
  self._setup_network_and_servers(dns_nameservers=[initial_dns_server])
File "tempest/scenario/test_network_basic_ops.py", line 123, in 
_setup_network_and_servers
  floating_ip = self.create_floating_ip(server)
File "tempest/scenario/manager.py", line 842, in create_floating_ip
  port_id, ip4 = self._get_server_port_id_and_ip4(thing)
File "tempest/scenario/manager.py", line 821, in _get_server_port_id_and_ip4
  "No IPv4 addresses found in: %s" % ports)
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/unittest2/case.py",
 line 845, in assertNotEqual
  raise self.failureException(msg)
  AssertionError: 0 == 0 : No IPv4 addresses found in: []

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523780] Re: Race between HA router create and HA router delete

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523780

Title:
  Race between HA router create and HA router delete

Status in neutron:
  In Progress
Status in neutron kilo series:
  New

Bug description:
  Set more than one API worker and RPC worker, and then run the rally
  scenario test create_and_delete_routers; you may get errors such as:

  1.DBReferenceError: (IntegrityError) (1452, 'Cannot add or update a
  child row: a foreign key constraint fails
  (`neutron`.`ha_router_agent_port_bindings`, CONSTRAINT
  `ha_router_agent_port_bindings_ibfk_2` FOREIGN KEY (`router_id`)
  REFERENCES `routers` (`id`) ON DELETE CASCADE)') 'INSERT INTO
  ha_router_agent_port_bindings (port_id, router_id, l3_agent_id, state)
  VALUES (%s, %s, %s, %s)' ('xxx', 'xxx', None,
  'standby')

  (InvalidRequestError: This Session's transaction has been rolled back
  by a nested rollback() call.  To begin a new transaction, issue
  Session.rollback() first.)

  2. AttributeError: 'NoneType' object has no attribute 'config' (l3
  agent process router in router_delete function)

  3. DBError: UPDATE statement on table 'ports' expected to update 1
  row(s); 0 were matched.

  4. res = {"id": port["id"],
     TypeError: 'NoneType' object is unsubscriptable

  5. deleting the HA network while deleting the last router fails with the
  error message: "Unable to complete operation on network . There
  are one or more ports still in use on the network."

  There are a bunch of sub-bugs related to this one, basically different
  incarnations of race conditions in the interactions between the
  l3-agent and the neutron-server:

     https://bugs.launchpad.net/neutron/+bug/1499647
     https://bugs.launchpad.net/neutron/+bug/1533441
     https://bugs.launchpad.net/neutron/+bug/1533443
     https://bugs.launchpad.net/neutron/+bug/1533457
     https://bugs.launchpad.net/neutron/+bug/1533440
     https://bugs.launchpad.net/neutron/+bug/1533454
     https://bugs.launchpad.net/neutron/+bug/1533455
 https://bugs.launchpad.net/neutron/+bug/1533460

  (I suggest we use this main bug as a tracker for the whole thing,
   as reviews already reference this bug as related).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524020] Re: DVRImpact: dvr_vmarp_table_update and dvr_update_router_add_vm is called for every port update instead of only when host binding or mac-address changes occur

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524020

Title:
  DVRImpact:  dvr_vmarp_table_update and dvr_update_router_add_vm is
  called for every port update instead of only when host binding or mac-
  address changes occur

Status in neutron:
  Fix Committed
Status in neutron kilo series:
  New

Bug description:
  The DVR ARP update (dvr_vmarp_table_update) and dvr_update_router_add_vm
  are called for every update_port if the mac_address changes or when
  update_device_up is true.

  These functions should be called from _notify_l3_agent_port_update
  only when the host binding for a service port changes or when the
  mac_address for the service port changes.
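
  A minimal sketch of the intended guard in _notify_l3_agent_port_update
  (dict keys and the helper are illustrative):

      def needs_dvr_notification(original_port, updated_port):
          # Only trigger the DVR ARP/scheduling updates when something L3
          # actually cares about has changed.
          host_changed = (original_port.get('binding:host_id') !=
                          updated_port.get('binding:host_id'))
          mac_changed = (original_port['mac_address'] !=
                         updated_port['mac_address'])
          return host_changed or mac_changed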

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533441] Re: HA router can not be deleted in L3 agent after race between HA router creating and deleting

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533441

Title:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

  Exception:
  1. Unable to process HA router %s without HA port (HA router initialize)

  2. AttributeError: 'NoneType' object has no attribute 'config' (HA
  router deleting procedure)

  
  With the newest neutron code, I find an infinite loop in _safe_router_removed.
  Consider an HA router without an HA port that was placed in the l3 agent,
  usually because of the race condition.

  Infinite loop steps:
  1. an HA router deleting RPC arrives
  2. the l3 agent removes the router
  3. the RouterInfo deletes its router namespace
     (self.router_namespace.delete())
  4. the HaRouter's ha_router.delete() raises the AttributeError: 'NoneType'
     (or some other error)
  5. _safe_router_removed returns False
  6. self._resync_router(update)
  7. the router namespace no longer exists, so a RuntimeError is raised;
     go back to 5 - infinite loop between 5 and 7
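
  A simplified sketch of this loop, with method names and structure assumed
  from the description above rather than copied from the real L3 agent:

    # Hedged sketch of the resync loop: removal keeps failing, so the same
    # update is requeued forever. Names are simplified assumptions.
    def _safe_router_removed(self, router_id):
        try:
            self._router_removed(router_id)   # steps 3-4: namespace already
                                              # gone, HaRouter.delete() raises
        except Exception:
            return False                      # step 5: report failure

    def _process_router_update(self, update):
        if not self._safe_router_removed(update.id):
            self._resync_router(update)       # step 6: requeue; next attempt
                                              # fails again at step 7 -> loop 5-7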

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533454] Re: L3 agent unable to update HA router state after race between HA router creating and deleting

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533454

Title:
  L3 agent unable to update HA router state after race between HA router
  creating and deleting

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The router L3 HA binding process does not take into account the fact
  that the port it is binding to the agent can be concurrently deleted.

  Details:

  When the neutron server deletes all the resources of an
  HA router, the L3 agent cannot be aware of that, so a race
  can happen in a procedure like this:
  1. The neutron server deletes all resources of an HA router.
  2. The RPC fanout reaches l3 agent 1, where the
     HA router was in the master state.
  3. On l3 agent 2 the 'backup' router sets itself to master
     and sends the neutron server an HA router state change notification.
  4. PortNotFound is raised in the function that updates HA router states.
  (The DB error seems to no longer exist.)

  How do steps 2 and 3 happen?
  Consider that l3 agent 2 has many more HA routers than l3 agent 1,
  or any other reason that causes l3 agent 2 to get/process the deleting
  RPC later than l3 agent 1. Then l3 agent 1 removing the HA router's
  keepalived process will soon be detected by the backup router on
  l3 agent 2 via the VRRP protocol. At this point the router deleting RPC
  is still in the RouterUpdate queue, or in some step of the HA router
  deleting procedure, and router_info still holds 'the' router info.
  So l3 agent 2 runs the state change procedure, i.e. it notifies
  the neutron server to update the router state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528031] Re: 'NetworkNotFound' exception during listing ports

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528031

Title:
  'NetworkNotFound' exception during listing ports

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  There is a problem: when I run tests in parallel, one or two of them can fail.
  As I see in the logs, one thread is deleting a network while a second thread
  is listing all ports, and the second thread gets a 'NetworkNotFound' exception.

  Part of neutron service logs is:

  2015-12-18 06:29:05.151 INFO neutron.wsgi 
[req-4d303e7d-ae31-47b5-a644-552fceeb03ef user-0a50ad96 project-ce45a55a] 
52.90.96.102 - - [18/Dec/2015 06:29:05] "DELETE 
/v2.0/networks/d2d2481a-4c20-452f-8088-6e6815694ac0.json HTTP/1.1" 204 173 
0.426808
  2015-12-18 06:29:05.173 ERROR neutron.policy 
[req-a406e696-6791-4345-8b04-215ca313ea67 user-0a50ad96 project-ce45a55a] 
Policy check error while calling >!
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy Traceback (most recent 
call last):
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/policy.py", line 258, in __call__
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy fields=[parent_field])
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 713, in get_network
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy result = 
super(Ml2Plugin, self).get_network(context, id, None)
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 385, in get_network
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy network = 
self._get_network(context, id)
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_common.py", line 188, in 
_get_network
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy raise 
n_exc.NetworkNotFound(net_id=id)
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy NetworkNotFound: Network 
d2d2481a-4c20-452f-8088-6e6815694ac0 could not be found.
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy 
  2015-12-18 06:29:05.175 INFO neutron.api.v2.resource 
[req-a406e696-6791-4345-8b04-215ca313ea67 user-0a50ad96 project-ce45a55a] index 
failed (client error): Network d2d2481a-4c20-452f-8088-6e6815694ac0 could not 
be found.
  2015-12-18 06:29:05.175 INFO neutron.wsgi 
[req-a406e696-6791-4345-8b04-215ca313ea67 user-0a50ad96 project-ce45a55a] 
52.90.96.102 - - [18/Dec/2015 06:29:05] "GET 
/v2.0/ports.json?tenant_id=63f912ca152048c6a6b375784d90bd37 HTTP/1.1" 404 359 
0.311871

  
  Answer from Kevin Benton (in mailing list):
  Ah, I believe what is happening is that the network is being deleted after 
the port has been retrieved from the database during the policy check. The 
policy check retrieves the port's network to be able to enforce the 
network_owner lookup: 
https://github.com/openstack/neutron/blob/master/etc/policy.json#L6

  So order of events seems to be:

  port list API call received
  ports retrieved from db
  network delete request is processed
  ports processed by policy engine
  policy engine triggers network lookup and hits 404

  
  This appears to be a legitimate bug. Maybe we need to find a way to cache the 
network at port retrieval time for the policy engine.
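
  A rough sketch of that caching idea (the plugin calls and the cache attribute
  name are illustrative assumptions, not the actual neutron implementation):

    # Hedged sketch: fetch each port's network once, while the ports are being
    # read, and stash it on the port dict so the later policy check does not
    # need a second get_network() that can race with a concurrent delete.
    def get_ports_with_cached_networks(plugin, context, filters=None):
        ports = plugin.get_ports(context, filters=filters)
        networks = {}
        for port in ports:
            net_id = port['network_id']
            if net_id not in networks:
                networks[net_id] = plugin.get_network(context, net_id)
            port['_cached_network'] = networks[net_id]
        return ports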

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528967] Re: Horizon doesn't create new scoped token when user role is removed

2016-05-09 Thread Dave Walker
*** This bug is a duplicate of bug 1252341 ***
https://bugs.launchpad.net/bugs/1252341

** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528967

Title:
  Horizon doesn't create new scoped token when user role is removed

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Committed

Bug description:
  When a user logs into Horizon, an unscoped token is created, from which
  a scoped token is obtained. While logged into Horizon, I remove myself
  from a project which is not the currently active project. This results
  in all my scoped tokens being invalidated. I have some API calls in
  the middleware that require authorization, and they fail because the
  token is invalid. Horizon throws an Unauthorized error (see attachment)
  and the only way to recover from this is to clear cookies, log out and
  log back in again.

  Horizon should immediately obtain a new scoped token if the previous token
  is invalidated. Alternatively, keystone should not invalidate all
  tokens (for all projects) when a user is removed from one project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1528967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534652] Re: Host machine exposed to tenant networks via IPv6

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534652

Title:
  Host machine exposed to tenant networks via IPv6

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  When creating a new interface, Neutron creates the interface and brings
  the link up without disabling the default IPv6 binding. By default Linux
  assigns IPv6 link-local addresses to all interfaces; this is different
  behavior from IPv4, where an administrator must explicitly configure an
  address on the interface.

  This is significantly exposed in LinuxBridgeManager ensure_vlan() and
  ensure_vxlan(), where a new VLAN or VXLAN interface is created and its
  link set up before being enslaved in the bridge. In the case of a compute
  node joining an existing network, there is a time window in which the
  VLAN or VXLAN interface is created and has connectivity to the tenant
  network before it has been enslaved in the bridge. Under normal
  circumstances this time window is less than the time needed to perform
  IPv6 duplicate address detection, but under high load this assumption
  may not hold.

  I recommend explicitly disabling IPv6 via sysctl on each interface
  which will be attached to a bridge prior to bringing the interface link
  up. This is already done for the bridge interfaces themselves, but
  should be done for all Neutron-configured interfaces in the default
  namespace.

  This issue was referenced in https://bugs.launchpad.net/neutron/+bug/1459856
  A related issue is being addressed in Nova:
  https://review.openstack.org/#/c/198054/
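
  A minimal sketch of the recommended mitigation (the sysctl path is the
  standard Linux per-interface knob; wiring it into Neutron's interface setup
  is an assumption):

    # Hedged sketch: turn off IPv6 on the device before setting its link up,
    # so no link-local address is ever configured on it.
    import subprocess

    def disable_ipv6_then_link_up(device_name):
        subprocess.check_call(
            ['sysctl', '-w',
             'net.ipv6.conf.%s.disable_ipv6=1' % device_name])
        subprocess.check_call(['ip', 'link', 'set', device_name, 'up'])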

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1534652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538387] Re: fdb_chg_ip_tun throwing exception because fdb_entries not in correct format

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1538387

Title:
  fdb_chg_ip_tun throwing exception because fdb_entries not in correct
  format

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  I've been trying to track down failures in the DVR multinode job.  I'm
  now tripping over this one.

  For context I've been focusing on a single change, but if you see a
  failure in the gate-tempest-dsvm-neutron-dvr-multinode-full job you'll
  probably be able to find similar info.  This is the change:

  http://logs.openstack.org/77/17/4/check/gate-tempest-dsvm-neutron-
  dvr-multinode-full/5abca7b/logs/

  The screen-q-agt log shows a traceback here:

  http://logs.openstack.org/77/17/4/check/gate-tempest-dsvm-neutron-
  dvr-multinode-
  full/5abca7b/logs/screen-q-agt.txt.gz#_2016-01-18_10_11_29_715

  
  2016-01-18 10:11:29.724 12932 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py",
 line 312, in fdb_chg_ip_tun
  2016-01-18 10:11:29.724 12932 ERROR oslo_messaging.rpc.dispatcher 
mac_ip.mac_address,
  2016-01-18 10:11:29.724 12932 ERROR oslo_messaging.rpc.dispatcher 
AttributeError: 'list' object has no attribute 'mac_address'
  2016-01-18 10:11:29.724 12932 ERROR oslo_messaging.rpc.dispatcher

  The info passed to fdb_chg_ip_tun() should have a "PortInfo"
  namedtuple as data, but from the line before we can see it doesn't:

  DEBUG neutron.plugins.ml2.drivers.l2pop.rpc_manager.l2population_rpc
  [req-671e8634-c753-4002-acfd-68515dd44f29 None None]
  
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent
  method fdb_chg_ip_tun called with arguments (,
  
, {u'c4ae0757-e3e5-419c-b2ba-4d7817964237':
  {u'10.209.192.28': {u'before': [[u'fa:16:3e:8d:2e:48',
  u'2003:0:0:1::1']]}}}, '10.208.193.94', {u'7ca0dcf2-fb63-4959-92ee-
  cc757da8d120':

  So from this it's clear that _unmarshall_fdb_entries() either hasn't
  been called, or didn't do anything.

  Looking over in screen-q-svc.log for the info before the RPC call
  finds:

  DEBUG neutron.plugins.ml2.drivers.l2pop.rpc [req-
  f32790a5-0160-47b9-89b4-763b9c23bc08 tempest-
  TestGettingAddress-2071048693 tempest-TestGettingAddress-1817548879]
  Fanout notify l2population agents at q-agent-notifier the message
  update_fdb_entries with {'chg_ip': {u'c4ae0757-e3e5-419c-b2ba-
  4d7817964237': {u'10.208.193.94': {'before':
  [PortInfo(mac_address=u'fa:16:3e:8d:2e:48',
  ip_address=u'2003:0:0:1::1')] _notification_fanout
  /opt/stack/new/neutron/neutron/plugins/ml2/drivers/l2pop/rpc.py:47

  This is the message right before _marshall_fdb_entries() was called to
  convert the PortInfo into [, ] pairs, and from the above it
  looks like it did.

  I'm just starting to look at this now, but maybe someone more familiar
  with l2pop has a guess at what's broken.
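
  For reference, a toy illustration of the round-trip I'd expect, with the
  PortInfo fields taken from the log above and the helper bodies being
  simplified assumptions rather than the real l2pop code:

    # Hedged sketch: the server flattens PortInfo namedtuples into [mac, ip]
    # lists for the RPC payload, and the agent should rebuild PortInfo before
    # touching .mac_address / .ip_address.
    import collections

    PortInfo = collections.namedtuple('PortInfo', ['mac_address', 'ip_address'])

    def marshall(port_infos):
        return [[p.mac_address, p.ip_address] for p in port_infos]

    def unmarshall(pairs):
        return [PortInfo(mac_address=m, ip_address=i) for m, i in pairs]

    before = [PortInfo(mac_address='fa:16:3e:8d:2e:48',
                       ip_address='2003:0:0:1::1')]
    assert unmarshall(marshall(before)) == before
    # The traceback shows the agent iterating the marshalled lists directly,
    # i.e. unmarshall() was never applied, so .mac_address fails on a list.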

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1538387/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549394] Re: neutron-sanity-check --no* BoolOpts don't work

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549394

Title:
  neutron-sanity-check --no* BoolOpts don't work

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  neutron-sanity-check allows the passing of --config-file and --config-
  dir to automatically set which tests are run based on neutron's
  configuration. Since the test options are BoolOpts, they automatically
  register --no* inverse options. This implies that one could pass
  --config-file /etc/neutron/l3_agent.ini and then disable the
  dnsmasq_version check with --nodnsmasq_version.

  neutron-sanity-check uses set_override() to override the test
  configuration, which also overrides the CLI. Using set_default()
  should fix the issue.
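
  A small sketch of the difference with oslo.config (set_override, set_default
  and clear_override are real oslo.config calls; registering the option here is
  just for illustration):

    # Hedged sketch: a BoolOpt automatically gets a --no<name> CLI inverse.
    # set_override() wins over everything, including the CLI, so
    # --nodnsmasq_version is silently ignored; set_default() only changes the
    # default and lets an explicit CLI value take effect.
    from oslo_config import cfg

    cfg.CONF.register_cli_opt(cfg.BoolOpt('dnsmasq_version', default=False))

    # What the sanity check effectively does today (CLI value is clobbered):
    cfg.CONF.set_override('dnsmasq_version', True)

    # What the fix suggests (an explicit --nodnsmasq_version still wins):
    cfg.CONF.clear_override('dnsmasq_version')
    cfg.CONF.set_default('dnsmasq_version', True)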

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545695] Re: L3 agent: traceback is suppressed on floating ip setup failure

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1545695

Title:
  L3 agent: traceback is suppressed on floating ip setup failure

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The following traceback says nothing about the actual exception and makes it
  hard to debug issues:

  2016-02-10 05:26:54.025 682 ERROR neutron.agent.l3.router_info [-] L3 agent 
failure to setup floating IPs
  2016-02-10 05:26:54.025 682 TRACE neutron.agent.l3.router_info Traceback 
(most recent call last):
  2016-02-10 05:26:54.025 682 TRACE neutron.agent.l3.router_info File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 604, 
in process_external
  2016-02-10 05:26:54.025 682 TRACE neutron.agent.l3.router_info fip_statuses = 
self.configure_fip_addresses(interface_name)
  2016-02-10 05:26:54.025 682 TRACE neutron.agent.l3.router_info File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 268, 
in configure_fip_addresses
  2016-02-10 05:26:54.025 682 TRACE neutron.agent.l3.router_info raise 
n_exc.FloatingIpSetupException('L3 agent failure to setup '
  2016-02-10 05:26:54.025 682 TRACE neutron.agent.l3.router_info 
FloatingIpSetupException: L3 agent failure to setup floating IPs

  We need to log the actual exception with its traceback before re-raising.
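
  A minimal sketch of the change (the surrounding method is paraphrased from
  the traceback above, not copied from the agent; LOG is the module's logger,
  and LOG.exception records the active traceback):

    # Hedged sketch: record the real failure before raising the generic
    # FloatingIpSetupException, so the root cause is visible in the logs.
    def configure_fip_addresses(self, interface_name):
        try:
            return self.process_floating_ip_addresses(interface_name)
        except Exception:
            LOG.exception('L3 agent failure to setup floating IPs')
            raise n_exc.FloatingIpSetupException(
                'L3 agent failure to setup floating IPs')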

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1545695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550623] Re: Functional ARPSpoofTestCase's occasionally fail

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550623

Title:
  Functional ARPSpoofTestCase's occasionally fail

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  We occasionally get failures in the gate like the one below.
  Unfortunately they are difficult to reproduce locally.

  ft32.3: 
neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_allowed_address_pairs(native)_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  DEBUG [oslo_policy._cache_handler] Reloading cached file 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
 DEBUG [oslo_policy.policy] Reloaded policy file: 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
  }}}

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_ovs_flows.py", line 202, in 
test_arp_spoof_allowed_address_pairs
  net_helpers.assert_ping(self.src_namespace, self.dst_addr, count=2)
File "neutron/tests/common/net_helpers.py", line 102, in assert_ping
  dst_ip])
File "neutron/agent/linux/ip_lib.py", line 885, in execute
  log_fail_as_error=log_fail_as_error, **kwargs)
File "neutron/agent/linux/utils.py", line 140, in execute
  raise RuntimeError(msg)
  RuntimeError: Exit code: 1; Stdin: ; Stdout: PING 192.168.0.2 (192.168.0.2) 
56(84) bytes of data.

  --- 192.168.0.2 ping statistics ---
  2 packets transmitted, 0 received, 100% packet loss, time 1006ms

  ; Stderr:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1550623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546110] Re: DB error causes router rescheduling loop to fail

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546110

Title:
  DB error causes router rescheduling loop to fail

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  In the router rescheduling looping task, the db call to get the down bindings
  is done outside of the try/except block, which may cause the task to fail (see
  traceback below). We need to put the db operation inside the try/except; a
  sketch of the fix follows the traceback.

  2016-02-15T10:44:44.259995+00:00 err: 2016-02-15 10:44:44.250 15419 ERROR 
oslo.service.loopingcall [req-79bce4c3-2e81-446c-8b37-6d30e3a964e2 - - - - -] 
Fixed interval looping call 
'neutron.services.l3_router.l3_router_plugin.L3RouterPlugin.reschedule_routers_from_down_agents'
 failed
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 113, in 
_run_loop
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 
101, in reschedule_routers_from_down_agents
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall down_bindings = 
self._get_down_bindings(context, cutoff)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_dvrscheduler_db.py", line 460, 
in _get_down_bindings
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall context, cutoff)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 
149, in _get_down_bindings
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return 
query.all()
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2399, in all
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return list(self)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2516, in 
__iter__
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return 
self._execute_and_instances(context)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2529, in 
_execute_and_instances
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
close_with_result=True)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2520, in 
_connection_from_session
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall **kw)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 882, in 
connection
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
execution_options=execution_options)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 889, in 
_connection_for_bind
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall conn = 
engine.contextual_connect(**kw)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2039, in 
contextual_connect
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
self._wrap_pool_connect(self.pool.connect, None),
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2078, in 
_wrap_pool_connect
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall e, dialect, self)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1401, in 
_handle_dbapi_exception_noconnection
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
util.raise_from_cause(newraise, exc_info)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 199, in 
raise_from_cause
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
reraise(type(exception), exception, tb=exc_tb)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2074, in 
_wrap_pool_connect
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return fn()
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sq

[Yahoo-eng-team] [Bug 1552960] Re: Tempest and Neutron duplicate tests

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552960

Title:
  Tempest and Neutron duplicate tests

Status in neutron:
  In Progress
Status in neutron kilo series:
  New

Bug description:
  Problem statement:

  1) Tests overlap between Tempest and Neutron repos - 264 tests last I 
checked. The effect is:
  1.1) Confusion amongst QA & dev contributors and reviewers. I'm writing a 
test, where should it go? Someone just proposed a Tempest patch for a new 
Neutron API, what should I do with this patch?
  1.2) Wasteful from a computing resources point of view - The same tests are 
being run multiple times for every Neutron patchset.
  2) New APIs (Subnet pools, address scopes, QoS, RBAC, port security, service 
flavors and availability zones), are not covered by Tempest tests. Consumers 
have to adapt and run both the Tempest tests and the Neutron tests in two 
separate runs. This causes a surprising amount of grief.

  Proposed solution:

  For problem 1, we eliminate the overlap. We do this by defining a set
  of tests that will live in Tempest, and another set of tests that will
  live in Neutron. More information may be found here:
  https://review.openstack.org/#/c/280427/. After deciding on the line
  in the sand, we will delete any tests in Neutron that should continue
  to live in Tempest. Some Neutron tests were modified after they were
  copied from Tempest, these modifications will have to be examined and
  then proposed to Tempest. Afterwards these tests may be removed from
  Neutron, eliminating the overlap from the Neutron side. Once this is
  done, overlapping tests may be deleted from Tempest.

  For problem 2, we will develop a Neutron Tempest plugin. This will be
  tracked in a separate bug. Note that there's already a patch for this
  up for review: https://review.openstack.org/#/c/274023/

  * The work is also being tracked here:
  https://etherpad.openstack.org/p/neutron-tempest-defork

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555850] Re: excessive "no bound segment for port warnings" in server log

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555850

Title:
  excessive "no bound segment for port warnings" in server log

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The server logs are filled with the warnings below in a normal tempest
  run. This is easy to trigger in a normal workflow by updating a port
  which is not yet bound to a host. For example, the following will
  result in a warning:

  neutron port-create net --name=myport
  neutron port-update myport --allowed-address-pairs type=dict list=true 
mac_address=01:02:03:04:05:06,ip_address=1.1.1.1/32



  
  2016-03-09 20:06:55.986 18501 WARNING neutron.plugins.ml2.plugin 
[req-40c0c19d-41fb-41b3-bf53-a7025116c3b4 tempest-PortsIpV6TestJSON-136844784 
-] In _notify_port_updated(), no bound segment for port 
1438eb2e-e084-43e8-b8bd-552408bd8b02 on network 
293447f0-5fbc-4cb7-8521-0c57ab3ab800
  2016-03-09 20:06:58.922 18502 WARNING neutron.plugins.ml2.plugin 
[req-0c734344-da3c-4eca-b162-a0b3189e378e tempest-PortsIpV6TestJSON-136844784 
-] In _notify_port_updated(), no bound segment for port 
f1a92580-9894-4bff-b2e2-feddcbf555d3 on network 
ab152352-b335-4a25-8458-675d48666cf1
  2016-03-09 20:06:59.667 18502 WARNING neutron.plugins.ml2.plugin 
[req-2e7c62c6-e846-473c-9ad3-1b06db91fdc1 tempest-PortsIpV6TestJSON-136844784 
-] In _notify_port_updated(), no bound segment for port 
f1a92580-9894-4bff-b2e2-feddcbf555d3 on network 
ab152352-b335-4a25-8458-675d48666cf1
  2016-03-09 20:07:11.848 18502 WARNING neutron.plugins.ml2.plugin 
[req-ccd3058e-211b-4a42-9ca6-2200fde4f8e7 tempest-PortsIpV6TestJSON-136844784 
-] In _notify_port_updated(), no bound segment for port 
2e13552e-e9cb-479f-aea7-dba2613399a4 on network 
293447f0-5fbc-4cb7-8521-0c57ab3ab800
  2016-03-09 20:07:16.215 18502 WARNING neutron.plugins.ml2.plugin 
[req-0375d8af-8af2-46e2-bd9e-bdbcd095d29d tempest-PortsIpV6TestJSON-136844784 
-] In _notify_port_updated(), no bound segment for port 
40faa9e6-a606-48bc-91c7-ba31d586997a on network 
293447f0-5fbc-4cb7-8521-0c57ab3ab800

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551593] Re: functional test failures caused by failure to setup OVS bridge

2016-05-09 Thread Dave Walker
*** This bug is a duplicate of bug 1470234 ***
https://bugs.launchpad.net/bugs/1470234

** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551593

Title:
  functional test failures caused by failure to setup OVS bridge

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Committed

Bug description:
  Right now we get random functional test failures and it seems to be
  related to the fact that we only log and error and continue when a
  flow fails to be inserted into a bridge:
  http://logs.openstack.org/90/283790/10/check/gate-neutron-dsvm-
  
functional/ab54e0a/logs/neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_doesnt_block_ipv6_native_.log.txt.gz#_2016-03-01_01_47_17_903

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555842] Re: Excessive "network has been deleted" warnings in dhcp agent log

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555842

Title:
  Excessive "network has been deleted" warnings in dhcp agent log

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The DHCP agent log is filled with warnings about networks being deleted.
  These happen all of the time in a normal lifecycle; they are not
  warning-worthy!

  
  2016-03-09 20:05:52.756 18546 WARNING neutron.agent.dhcp.agent 
[req-2abe83bf-b702-4b23-9ad2-495ab3148f87 
tempest-ExtraDHCPOptionsTestJSON-1227029378 -] Network 
1d26186d-8469-4b66-9f38-efd5ece31750 has been deleted.
  2016-03-09 20:05:52.826 18546 WARNING neutron.agent.dhcp.agent 
[req-2abe83bf-b702-4b23-9ad2-495ab3148f87 
tempest-ExtraDHCPOptionsTestJSON-1227029378 -] Network 
1d26186d-8469-4b66-9f38-efd5ece31750 has been deleted.
  2016-03-09 20:05:57.370 18546 WARNING neutron.agent.dhcp.agent 
[req-5fec9816-d7c5-45a3-a729-c27959849e0c 
tempest-ExtraDHCPOptionsTestJSON-1227029378 -] Network 
1d26186d-8469-4b66-9f38-efd5ece31750 has been deleted.
  2016-03-09 20:05:59.678 18546 WARNING neutron.agent.dhcp.agent 
[req-dbed1988-8379-4f20-b1f7-49b390156074 tempest-PortsTestJSON-491096991 -] 
Network b1cfc910-6f38-4b31-9135-180f8af1fa30 has been deleted.
  2016-03-09 20:05:59.755 18546 WARNING neutron.agent.dhcp.agent 
[req-b83a65ea-44d6-4a59-b4ab-936766b77815 tempest-PortsTestJSON-491096991 -] 
Network b1cfc910-6f38-4b31-9135-180f8af1fa30 has been deleted.
  2016-03-09 20:06:02.807 18546 WARNING neutron.agent.dhcp.agent 
[req-21a71a3a-86ac-4c51-b80e-dd063399f78a 
tempest-RoutersNegativeTest-2043192433 -] Network 
fae1f7e3-17eb-46b4-a6fb-489423c5ef3a has been deleted.
  2016-03-09 20:06:02.972 18546 WARNING neutron.agent.dhcp.agent 
[req-2e2a5d83-98e8-4ecb-b952-23d6bfe7be10 
tempest-RoutersNegativeTest-2043192433 -] Network 
86e6f4f6-44d8-47fe-a99d-09706c2bca2b has been deleted.
  2016-03-09 20:06:03.080 18546 WARNING neutron.agent.dhcp.agent 
[req-7122bf8d-d791-4861-b631-c98c612d14b5 
tempest-RoutersNegativeTest-2043192433 -] Network 
84757435-3913-48f6-a6ec-2afead3d8aca has been deleted.
  2016-03-09 20:06:03.281 18546 WARNING neutron.agent.dhcp.agent 
[req-f92e4880-da9c-411e-935a-da24ab1ce3f9 
tempest-DvrRoutersNegativeTest-1147844329 -] Network 
31a7c63c-7d56-45e8-a16b-dc2f260e9662 has been deleted.
  2016-03-09 20:06:03.429 18546 WARNING neutron.agent.dhcp.agent 
[req-f92e4880-da9c-411e-935a-da24ab1ce3f9 
tempest-DvrRoutersNegativeTest-1147844329 -] Network 
31a7c63c-7d56-45e8-a16b-dc2f260e9662 has been deleted.
  2016-03-09 20:06:19.922 18546 WARNING neutron.agent.dhcp.agent 
[req-d55228c9-4334-4002-b802-70b61e89c4b7 
tempest-RoutersNegativeIpV6Test-790755908 -] Network 
f0c50548-5fcb-44b4-9a0b-fd9129f90e0a has been deleted.
  2016-03-09 20:06:19.980 18546 WARNING neutron.agent.dhcp.agent 
[req-24a1762c-91a7-4684-b63f-d13154df4acc 
tempest-RoutersNegativeIpV6Test-790755908 -] Network 
753d50bc-9c8d-4214-9f33-b38ebedd8861 has been deleted.
  2016-03-09 20:06:20.070 18546 WARNING neutron.agent.dhcp.agent 
[req-880a6a7f-b75c-4779-9cd4-7f95fcd25e50 tempest-FloatingIPTestJSON-65531998 
-] Network 7ea76559-f824-441b-b4d4-73721c734766 has been deleted.
  2016-03-09 20:06:20.377 18546 WARNING neutron.agent.dhcp.agent 
[req-aa5e0fc6-8dd7-4ede-bc4a-50c871587bdf tempest-FloatingIPTestJSON-65531998 
-] Network cf36968e-4c9d-462c-ab75-e61af8e27f0c has been deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557640] Re: DHCP agent logging is causing performance issues

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557640

Title:
  DHCP agent logging is causing performance issues

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When a new port gets added or removed, the DHCP agent currently dumps
  the entirety of the hosts file into the logs. This is a problem in some
  deployments we have encountered, some of which have upwards of
  1 ports.

  Change-Id: I3ad7864eeb2f959549ed356a1e34fa18804395cc removed logging
  calls from inside the loop. It also added two new log calls: one at
  the beginning of the loop that builds the hosts file in memory and one
  at the end, that also includes the full content. Those could be useful
  pieces of debugging information. However, adding the host file
  contents can get expensive when there are large numbers of ports.

  For large numbers of ports, it can take more than one hour for the
  agent to sync up. The log output contributes a significant amount of
  that time.

  This was introduced in Liberty. One reviewer at the time had already
  expressed concerns about the impact of the log call.

  Will propose a patch to keep the log message, but remove the file
  contents.
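
  A sketch of what the patch would log instead (names and the entry layout are
  illustrative assumptions; the point is to log a summary rather than the file
  body):

    # Hedged sketch: keep a useful breadcrumb without dumping the whole dnsmasq
    # hosts file, which can be enormous on networks with many ports.
    import io
    import logging

    LOG = logging.getLogger(__name__)

    def output_hosts_file(dhcp_entries, filename):
        LOG.debug('Building host file: %s', filename)
        buf = io.StringIO()
        for mac, hostname, ip in dhcp_entries:          # one line per port
            buf.write(u'%s,%s,%s\n' % (mac, hostname, ip))
        with open(filename, 'w') as f:
            f.write(buf.getvalue())
        # Log counts and sizes, not buf.getvalue().
        LOG.debug('Done building host file %s (%d entries, %d bytes)',
                  filename, len(dhcp_entries), len(buf.getvalue()))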

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553595] Re: test_external_network_visibility intermittent failure

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553595

Title:
  test_external_network_visibility intermittent failure

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Very odd failure:

  http://logs.openstack.org/79/288279/3/gate/gate-neutron-dsvm-
  api/300ee95/testr_results.html.gz

  ft33.1: 
neutron.tests.api.test_networks.NetworksIpV6TestJSON.test_external_network_visibility[id-af774677-42a9-4e4b-bb58-16fe6a5bc1ec,smoke]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2016-03-05 17:07:49,598 11488 INFO [tempest.lib.common.rest_client] 
Request (NetworksIpV6TestJSON:test_external_network_visibility): 200 GET 
http://127.0.0.1:9696/v2.0/networks?router%3Aexternal=True 0.232s
  2016-03-05 17:07:49,599 11488 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: None
  Response - Headers: {'status': '200', 'content-location': 
'http://127.0.0.1:9696/v2.0/networks?router%3Aexternal=True', 'content-type': 
'application/json; charset=UTF-8', 'connection': 'close', 
'x-openstack-request-id': 'req-7c15efb9-e07d-47de-8f49-e77dc2059f57', 
'content-length': '1199', 'date': 'Sat, 05 Mar 2016 17:07:49 GMT'}
  Body: {"networks": [{"status": "ACTIVE", "router:external": true, 
"availability_zone_hints": [], "availability_zones": ["nova"], "qos_policy_id": 
null, "subnets": ["1ee8f3fc-1957-46c6-8a7c-6a5335342871", 
"068121cc-6ed9-4bdb-8813-35fe689642c2"], "shared": false, "tenant_id": 
"e118b21bf7a74b36a7e1339918290567", "created_at": "2016-03-05T16:53:27", 
"tags": [], "ipv6_address_scope": null, "updated_at": "2016-03-05T16:53:27", 
"is_default": true, "admin_state_up": true, "ipv4_address_scope": null, 
"port_security_enabled": true, "mtu": 1450, "id": 
"85a04141-b614-406d-b7d8-912c2a37bc4b", "name": "public"}, {"status": "ACTIVE", 
"router:external": true, "availability_zone_hints": [], "availability_zones": 
["nova"], "qos_policy_id": null, "subnets": 
["d3ea9b6d-a20e-48c0-b7ec-50f6239c5199"], "shared": true, "tenant_id": 
"d6562d45e82f4a85a30dc0cec714e04d", "created_at": "2016-03-05T17:07:31", 
"tags": [], "ipv6_address_scope": null, "updated_at": "2016-03-05T17:07:31", 
"is_default": fals
 e, "admin_state_up": true, "ipv4_address_scope": 
"978d5509-cfa9-4753-9ff3-6bb11fdb6f57", "port_security_enabled": true, "mtu": 
1450, "id": "a005c6f8-1438-42aa-a86c-68d04796d2e9", "name": 
"sharednetwork--1158192641"}]}
  2016-03-05 17:07:49,947 11488 INFO [tempest.lib.common.rest_client] 
Request (NetworksIpV6TestJSON:test_external_network_visibility): 200 GET 
http://127.0.0.1:9696/v2.0/subnets 0.348s
  2016-03-05 17:07:49,948 11488 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: None
  Response - Headers: {'status': '200', 'content-location': 
'http://127.0.0.1:9696/v2.0/subnets', 'content-type': 'application/json; 
charset=UTF-8', 'connection': 'close', 'x-openstack-request-id': 
'req-7a2b800a-eeb9-4c4d-92c8-1e2bf556fb89', 'content-length': '1641', 'date': 
'Sat, 05 Mar 2016 17:07:49 GMT'}
  Body: {"subnets": [{"name": "", "enable_dhcp": true, "network_id": 
"a005c6f8-1438-42aa-a86c-68d04796d2e9", "tenant_id": 
"d6562d45e82f4a85a30dc0cec714e04d", "created_at": "2016-03-05T17:07:35", 
"dns_nameservers": [], "updated_at": "2016-03-05T17:07:35", "gateway_ip": 
"8.0.0.1", "ipv6_ra_mode": null, "allocation_pools": [{"start": "8.0.0.2", 
"end": "8.0.0.14"}], "host_routes": [], "ip_version": 4, "ipv6_address_mode": 
null, "cidr": "8.0.0.0/28", "id": "d3ea9b6d-a20e-48c0-b7ec-50f6239c5199", 
"subnetpool_id": "b5058565-3ce7-448c-a581-3411f1aa764b"}, {"name": 
"tempest-BaseTestCase-467862474-subnet", "enable_dhcp": true, "network_id": 
"00d56d28-c7c2-4059-b3f5-146e60110b67", "tenant_id": 
"f3ef1b7cfa324fb29d4ea00646a1bb61", "created_at": "2016-03-05T17:07:37", 
"dns_nameservers": [], "updated_at": "2016-03-05T17:07:37", "gateway_ip": 
"10.100.0.1", "ipv6_ra_mode": null, "allocation_pools": [{"start": 
"10.100.0.2", "end": "10.100.0.14"}], "host_routes": [], "ip_version": 4, 
"ipv6_addr
 ess_mode": null, "cidr": "10.100.0.0/28", "id": 
"08548e7c-5e95-4371-8694-1d4ceba7c2e1", "subnetpool_id": null}, {"name": "", 
"enable_dhcp": true, "network_id": "d527821a-86b1-4bcc-be1f-7231c8640a60", 
"tenant_id": "f3ef1b7cfa324fb29d4ea00646a1bb61", "created_at": 
"2016-03-05T17:07:47", "dns_nameservers": [], "updated_at": 
"2016-03-05T17:07:47", "gateway_ip": "2003:0:0:::1", "ipv6_ra_mode": null, 
"allocation_pools": [{"start": "2003:0:0:::2", "end": 
"2003::::::"}], "host_routes": [], "ip_version": 6

[Yahoo-eng-team] [Bug 1558397] Re: functional job fails due to missing netcat

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558397

Title:
  functional job fails due to missing netcat

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  A good build:

  http://logs.openstack.org/39/293239/3/check/gate-neutron-dsvm-
  functional/f1284e9/logs/dpkg-l.txt.gz

  A bad build:

  http://logs.openstack.org/87/293587/1/check/gate-neutron-dsvm-
  functional/53d6bee/logs/dpkg-l.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554695] Re: network not found warnings in test runs

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554695

Title:
  network not found warnings in test runs

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  A neutron server log from a normal test run is filled with entries
  like the following:

  2016-03-08 10:08:32.269 18894 WARNING
  neutron.api.rpc.handlers.dhcp_rpc [req-a55cec8d-37ee-
  46b7-97f3-aadf91bcd512 - -] Network
  1fd1dfd5-8d24-4016-b8e4-032ec8ef3ce1 could not be found, it might have
  been deleted concurrently.

  
  They are completely normal during network creation/deletion so it's not a 
warning condition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554696] Re: Neutron server log filled with "device requested by agent not found"

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554696

Title:
  Neutron server log filled with "device requested by agent not found"

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The neutron server logs from a normal test run are filled with the
  following entries:

  2016-03-08 10:07:29.265 18894 WARNING neutron.plugins.ml2.rpc [req-
  c5cf3153-b01f-4be7-88f6-730e28fa4d09 - -] Device 91993890-6352-4488
  -9e1f-1a419fa17bc1 requested by agent ovs-agent-devstack-trusty-ovh-
  bhs1-8619597 not found in database

  
  This occurs whenever an agent requests details about a recently deleted port. 
It's not a valid warning condition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558721] Re: neutron-rootwrap-xen-dom0 not properly closing XenAPI sessions

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558721

Title:
  neutron-rootwrap-xen-dom0 not properly closing XenAPI sessions

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Hello,

  When using OpenStack Liberty with XenServer, neutron is not properly
  closing its XenAPI sessions. Since it creates these so rapidly, the
  XenServer host eventually exceeds its maximum allowed number of
  connections:

  Mar 17 11:39:05 compute3 xapi:
  [debug|compute3.openstack.lab.eco.rackspace.com|25 db_gc|DB GC
  D:bb694b976766|db_gc] Number of disposable sessions in group
  'external' in database (401/401) exceeds limit (400): will delete the
  oldest

  This occurs roughly once per minute, with many sessions being
  invalidated. The effect is that any long-running hypervisor operations
  (for example a live-migration) will fail with an "unauthorized" error,
  as their session was invalidated while they were still running:

  2016-03-17 11:43:34.483 14310 ERROR nova.virt.xenapi.vmops Failure: 
['INTERNAL_ERROR', 
'Storage_interface.Internal_error("Http_client.Http_error(\\"401\\", \\"{ frame 
= false; method = POST; uri = /services/SM;
  query = [ session_id=OpaqueRef:8663a5b7-928e-6ef5-e312-9f430b553c7f ]; 
content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [  ]; task 
= ; subtask_of = ; content-type = ; host = ; user_agent = xe
  n-api-libs/1.0 }\\")")']

  The fix is to add a line to neutron-rootwrap-xen-dom0 to have it
  properly close the sessions.

  Before:

  def run_command(url, username, password, user_args, cmd_input):
      try:
          session = XenAPI.Session(url)
          session.login_with_password(username, password)
          host = session.xenapi.session.get_this_host(session.handle)
          result = session.xenapi.host.call_plugin(
              host, 'netwrap', 'run_command',
              {'cmd': json.dumps(user_args),
               'cmd_input': json.dumps(cmd_input)})
          return json.loads(result)
      except Exception as e:
          traceback.print_exc()
          sys.exit(RC_XENAPI_ERROR)

  After:

  def run_command(url, username, password, user_args, cmd_input):
      try:
          session = XenAPI.Session(url)
          session.login_with_password(username, password)
          host = session.xenapi.session.get_this_host(session.handle)
          result = session.xenapi.host.call_plugin(
              host, 'netwrap', 'run_command',
              {'cmd': json.dumps(user_args),
               'cmd_input': json.dumps(cmd_input)})
          session.xenapi.session.logout()
          return json.loads(result)
      except Exception as e:
          traceback.print_exc()
          sys.exit(RC_XENAPI_ERROR)

  
  After making this change, the logs still show the sessions being rapidly
  created, but they also show them being destroyed. The "exceeds limit" error
  no longer occurs, and live-migrations now succeed.

  
  Regards,

  Alex Oughton

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558658] Re: Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP requests

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558658

Title:
  Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP
  requests

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in OpenStack Security Advisory:
  Triaged

Bug description:
  The IptablesFirewallDriver does not prevent spoofing another instance's
  or a router's MAC and/or IP addresses. The rule to permit DHCP
  discovery and request messages:

  ipv4_rules += [comment_rule('-p udp -m udp --sport 68 --dport 67 '
  '-j RETURN', comment=ic.DHCP_CLIENT)]

  is too permissive: it does not enforce the source MAC or IP address.
  This is the IPv4 case of public bug
  https://bugs.launchpad.net/neutron/+bug/1502933, and a solution was
  previously mentioned in June 2013 in
  https://bugs.launchpad.net/neutron/+bug/1427054.

  If L2population is not used, an instance can spoof the Neutron
  router's MAC address and cause the switches to learn a MAC move,
  allowing the instance to intercept other instances' traffic, potentially
  belonging to other tenants if this is a shared network.

  The solution is to permit this DHCP traffic only from the
  instance's IP address and the unspecified IPv4 address 0.0.0.0/32
  rather than from any IPv4 source; additionally, the source MAC address
  should be restricted to MAC addresses assigned to the instance's
  Neutron port.
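
  A hedged sketch of the tightened rules described above, written in the same
  style as the rule quoted earlier (the port dict layout follows the usual
  Neutron convention; -s and -m mac --mac-source are standard iptables matches):

    # Hedged sketch: allow DHCP client traffic only from the port's own MAC
    # and from its fixed IPs plus 0.0.0.0/32 (for the initial DISCOVER),
    # instead of from any source address.
    def dhcp_client_rules(port):
        mac = port['mac_address']
        allowed_ips = ([ip['ip_address'] for ip in port['fixed_ips']]
                       + ['0.0.0.0/32'])
        return ['-s %s -m mac --mac-source %s '
                '-p udp -m udp --sport 68 --dport 67 -j RETURN'
                % (ip, mac.upper()) for ip in allowed_ips]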

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563028] Re: Routes version 2.3 broke the way we register routes

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563028

Title:
  Routes version 2.3 broke the way we register routes

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  Routes v2.3, which just came out, included this change

  
https://github.com/bbangert/routes/commit/0a417004be7e2d950bdcd629ccf24cf9f56ef817

  which changed submap.connect() in a way that broke the way Trove
  registers routes.

  We now need to provide a routename and a path as independent
  arguments.

  I tried to make sense of the documentation at
  http://routes.readthedocs.org/en/v2.3/setting_up.html#submappers and
  got nowhere in a hurry, but reading the code for connect() provided a
  possible solution.

  This will likely need to go to stable/mitaka as well.
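
  For illustration, a small sketch of the shape of the fix, assuming the Routes
  submapper API (the controller, route name and path here are placeholders, not
  the actual Trove or Neutron routes):

    # Hedged sketch: with Routes >= 2.3 the submapper's connect() expects the
    # route name and the path as separate positional arguments.
    import routes

    mapper = routes.Mapper()
    with mapper.submapper(controller='instances') as submap:
        # Older code often passed just the path; Routes 2.3 wants
        # connect(<route name>, <path>, ...).
        submap.connect('instances_index', '/instances', action='index',
                       conditions={'method': ['GET']})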

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561046] Re: If there is a /var/lib/neutron/ha_confs/.pid then l3 agent fails to spawn a keepalived process for that router

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561046

Title:
  If there is a /var/lib/neutron/ha_confs/.pid then l3 agent
  fails to spawn a keepalived process for that router

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  If the .pid file for the previous keepalived process (located in
  /var/lib/neutron/ha_confs/.pid) already exists then the L3
  agent fails to spawn a keepalived process for that router.

  For example, upon neutron node shutdown and restart the processes are
  assigned new PIDs that can be the same as those previously assigned to
  some of the keepalived processes. The latter are captured in the PID files,
  and once keepalived starts, it detects that there is a running process
  with that PID and reports "daemon is already running".

  Steps to reproduce:
  1)  Pick a router that you want to make display this issue;  record the 
router_id
  2)  kill the two processes denoted in these two files: 
/lib/neutron/ha_confs/.pid and 
/lib/neutron/ha_confs/.pid-vrrp
  3)  Make sure that no keepalived process comes back for this router
  4) Now pick out an existing process ID - anything that's really running -
and put that process ID into the PID files.  For example, a background sleep
process running as pid 12345 can be put into .pid file and
.pid-vrrp.

  Bug valid with keepalived version 1.2.13 and 1.2.19.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562467] Re: DVR logic in OVS doesn't handle CSNAT ofport change

2016-05-09 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562467

Title:
  DVR logic in OVS doesn't handle CSNAT ofport change

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  If the ofport of a port changes due to it being quickly
  unplugged/plugged (i.e. within a polling interval), the OVS agent will
  not update the ofport in its DVR cache of local port info, so the port
  will fail to be wired correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540411] Re: kilo: ValueError: git history requires a target version of pbr.version.SemanticVersion(2015.1.4), but target version is pbr.version.SemanticVersion(2015.1.3)

2016-02-08 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540411

Title:
  kilo: ValueError: git history requires a target version of
  pbr.version.SemanticVersion(2015.1.4), but target version is
  pbr.version.SemanticVersion(2015.1.3)

Status in neutron:
  In Progress
Status in neutron kilo series:
  New

Bug description:
  http://logs.openstack.org/57/266957/1/gate/gate-tempest-dsvm-neutron-
  linuxbridge/ed15bbf/logs/devstacklog.txt.gz#_2016-01-31_05_55_23_989

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22error%20in%20setup%20command%3A%20Error%20parsing%5C%22%20AND%20message%3A%5C%22ValueError%3A%20git%20history%20requires%20a%20target%20version%20of%20pbr.version.SemanticVersion(2015.1.4)%2C%20but%20target%20version%20is%20pbr.version.SemanticVersion(2015.1.3)%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

  This hits multiple projects; it's a known issue, and this is just a bug
  for tracking the failures in elastic-recheck.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249065] Re: Nova throws 400 when attempting to add floating ip (instance.info_cache.network_info is empty)

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249065

Title:
  Nova throws 400 when attempting to add floating ip
  (instance.info_cache.network_info is empty)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Ran into this problem in check-tempest-devstack-vm-neutron

   Traceback (most recent call last):
 File "tempest/scenario/test_snapshot_pattern.py", line 74, in test_snapshot_pattern
   self._set_floating_ip_to_server(server, fip_for_server)
 File "tempest/scenario/test_snapshot_pattern.py", line 62, in _set_floating_ip_to_server
   server.add_floating_ip(floating_ip)
 File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 108, in add_floating_ip
   self.manager.add_floating_ip(self, address, fixed_address)
 File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 465, in add_floating_ip
   self._action('addFloatingIp', server, {'address': address})
 File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 993, in _action
   return self.api.client.post(url, body=body)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 234, in post
   return self._cs_request(url, 'POST', **kwargs)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 213, in _cs_request
   **kwargs)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 195, in _time_request
   resp, body = self.request(url, method, **kwargs)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 189, in request
   raise exceptions.from_response(resp, body, url, method)
   BadRequest: No nw_info cache associated with instance (HTTP 400) (Request-ID: req-9fea0363-4532-4ad1-af89-114cff68bd89)

  Full console logs here: http://logs.openstack.org/27/55327/3/check
  /check-tempest-devstack-vm-neutron/8d26d3c/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287757] Re: Optimization: Don't prune events on every get

2016-01-21 Thread Dave Walker
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1287757

Title:
  Optimization:  Don't prune events on every get

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) kilo series:
  Fix Released

Bug description:
  _prune_expired_events_and_get always locks the backend. Store the time
  of the oldest event so that the prune process can be skipped if none
  of the events have timed out.

  (decided at keystone midcycle - 2015/07/17) -- MorganFainberg
  The easiest solution is to do the prune on issuance of a new revocation
event instead of on the get.
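
  A minimal sketch of the skip-if-nothing-expired idea, using hypothetical
  names rather than keystone's actual revocation backend:

  import time

  class RevocationEvents(object):
      def __init__(self, ttl):
          self._ttl = ttl
          self._events = []     # oldest first: (issued_at, event)
          self._oldest = None   # cached issue time of the oldest event

      def add(self, event, issued_at=None):
          issued_at = issued_at if issued_at is not None else time.time()
          self._events.append((issued_at, event))
          if self._oldest is None:
              self._oldest = issued_at

      def list(self):
          # Only take the (locking) prune path when something can have expired.
          if self._oldest is not None and self._oldest < time.time() - self._ttl:
              self._prune()
          return [event for _, event in self._events]

      def _prune(self):
          cutoff = time.time() - self._ttl
          self._events = [(t, e) for t, e in self._events if t >= cutoff]
          self._oldest = self._events[0][0] if self._events else None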

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1287757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284202] Re: nova --debug secgroup-list --all-tenant 1 does not show all tenant when Neutron is used

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284202

Title:
  nova --debug secgroup-list --all-tenant 1 does not show all tenant
  when Neutron is used

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  As admin, I cannot list all neutron security groups with nova.

  # neutron security-group-list
  
+--+-+--+
  | id   | name| description
  |
  
+--+-+--+
  | 0060bb2a-a685-4445-b1a5-89d0c4f5f226 | default | default
  |
  | 00c42289-69fb-4d0b-a69b-cce89b89fefb | default | default
  |
  | 00e73187-cfb7-4362-86e5-f2310ace5266 | default | default
  |
  | 0149af4c-4521-4e89-8e26-49dbff96494b | default | default
  |
  | 039b91b1-daf6-4e8c-a815-c137f3a56975 | default | default
  |
  | 03c1640e-a715-4b4b-b69e-ab02c247c72b | default | default
  |
  | 0679924c-70a8-499b-88f5-663c520bf6d1 | sec_grp--3193851| 
desc--722700139  |
  | 0a223200-56a0-4eb9-a933-013211082be7 | default | default
  |
  | 0a4fcf8b-dc8c-408a-994e-c563345e6e20 | default | default
  |
  | 0b97a8f7-0582-4814-b914-b0571ddd4746 | default | default
  |
  | 0db18431-a54a-4b59-913a-85a9542bcb3c | default | default
  |
  | 1024926f-40ee-4fdf-9637-a14d6aed1d66 | default | default
  |
  | 119d588f-9ce8-4dbe-a711-3bd2de3327a4 | default | default
  |
  | 13d49793-dd7c-4504-bbe6-96ff0ffac1d9 | default | default
  |
  | 161a0b3c-334e-4a4f-a411-db70eb6ab26d | sec_grp--876102254  | desc--49581304 
  |
  | 18645f71-76c5-440a-9198-27a406f5635e | default | default
  |
  | 18cbadeb-687a-4001-8b4b-c41900947ecb | default | default
  |
  | 18d5badf-7e3e-4ba5-a172-eb7f60b3fe49 | default | default
  |
  | 1a846b20-ec19-427f-aca6-2bb3fae50f2d | default | default
  |
  | 1da062a2-33a2-41d4-be25-c061b1593b31 | default | default
  |
  | 2252c82d-8624-4ed3-9d4e-c533848be734 | default | default
  |
  | 238a40e8-d7a1-4def-a9b4-0a9a475ec97a | default | default
  |
  | 26aadab6-6e31-4871-937f-9ab367f970c5 | default | default
  |
  | 26e08be5-7c5d-4e20-b7d5-dd95fe481edd | default | default
  |
  | 26eaef82-b627-4ed8-ac71-c80718c5d3f7 | default | default
  |
  | 28acea35-c0b4-4735-ae06-2f9e643d084b | default | default
  |
  | 305a4649-fb63-4daa-9548-f4d62ff53f20 | default | default
  |
  | 348f7291-223f-4fad-babd-1a1bfa1bde87 | default | default
  |
  | 386b0d4a-f561-480d-96e9-0ffe9810f999 | default | default
  |
  | 3b3a0261-461d-4e95-836f-bb3c7610c6f1 | default | default
  |
  | 3fb562dd-891f-4619-af12-8faa53372d35 | default | default
  |
  | 4027da48-0e5d-4a65-abfd-102345b14b30 | default | default
  |
  | 411f79ea-754a-4ce2-a678-35f20f5de532 | default | default
  |
  | 42243252-4f8a-464d-8002-c1e6c9c8592b | default | default
  |
  | 45283e79-5698-4852-bfe3-20e84ec61e4f | default | default
  |
  | 498987d7-1a4c-4a81-af45-5704142fbd7b | default | default
  |
  | 4b05cc35-f9e6-493f-a520-b2e883cb2305 | default | default
  |
  | 4b3bd3ca-fa60-4d78-a207-78e421aeee78 | default | default
  |
  | 4c0c1393-70a3-46d0-a748-792e00297e7d | default | default
  |
  | 4c5f102d-0509-4ef8-8132-12d4bba8a536 | default | default
  |
  | 4de94b11-5bbf-41ad-ab6f-8291c14df5c3 | default | default
  |
  | 4ea81807-6627-4182-aa21-3feb1315b79c | default | default
  |
  | 4f3efebe-2a22-4c9a-8d65-dd0859c84979 | default | default
  |
  | 543cfe02-457f-4399-887f-3c1f2307c2e3 | default | default
  |
  | 5462f168-324f-44e3-a81e-0f3444099d19 | default | default
  |
  | 554c7ecd-7876-4f80-9096-41b0f1e7498b | default | default
  |
  | 55b20025-f031-4ab3-83a5-ae84e7d162b3 | default | default
  |
  | 56523131-0929-4b7c-a647-d19452579e54 | default | default
  |
  | 571b707b-718d-428f-8e9f-40815be756a5 | default | default

[Yahoo-eng-team] [Bug 1288039] Re: live-migration cinder boot volume target_lun id incorrect

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288039

Title:
  live-migration cinder boot volume target_lun id incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When nova goes to cleanup _post_live_migration on the source host, the
  block_device_mapping has incorrect data.

  I can reproduce this 100% of the time with a cinder iSCSI backend,
  such as 3PAR.

  This is a Fresh install on 2 new servers with no attached storage from Cinder 
and no VMs.
  I create a cinder volume from an image. 
  I create a VM booted from that Cinder volume.  That vm shows up on host1 with 
a LUN id of 0.
  I live migrate that vm.   The vm moves to host 2 and has a LUN id of 0.   The 
LUN on host1 is now gone.

  I create another cinder volume from image.
  I create another VM booted from the 2nd cinder volume.  The vm shows up on 
host1 with a LUN id of 0.  
  I live migrate that vm.  The VM moves to host 2 and has a LUN id of 1.  
  _post_live_migration is called on host1 to clean up, and gets failures, because 
it's asking cinder to delete the volume
  on host1 with a target_lun id of 1, which doesn't exist.  It's supposed to be 
asking cinder to detach LUN 0.

  First migrate
  HOST2
  2014-03-04 19:02:07.870 WARNING nova.compute.manager 
[req-24521cb1-8719-4bc5-b488-73a4980d7110 admin admin] pre_live_migrate: 
{'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 
'mount_device': u'vda', 'connection_info': {u'd
  river_volume_type': u'iscsi', 'serial': 
u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260'
  , u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 
'device_type': u'disk', 'delete_on_termination': False}]}
  HOST1
  2014-03-04 19:02:16.775 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi',
   u'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': 
{u'target_discovered': True, u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}



  Second Migration
  This is in _post_live_migration on the host1.  It calls libvirt's driver.py 
post_live_migration with the volume information returned from the new volume on 
host2, hence the target_lun = 1.   It should be calling libvirt's driver.py to 
clean up the original volume on the source host, which has a target_lun = 0.
  2014-03-04 19:24:51.626 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi', u'serial': 
u'f0087595-804d-4bdb-9bad-0da2166313ea', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 1, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}
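
  A conceptual sketch of the cleanup direction described above; function and
  parameter names are hypothetical, not nova's actual signatures:

  def live_migrate(instance, source_bdms, connect_on_dest, disconnect_on_source):
      # Capture the *source* host's connection_info (target_lun 0 in the example)
      # before the destination attach produces a new mapping (target_lun 1).
      saved_source_info = [bdm['connection_info'] for bdm in source_bdms]

      connect_on_dest(instance)   # may return a different target_lun/portal

      # Post-migration cleanup on the source must use the saved source-side
      # info, not the refreshed mapping that now describes the destination.
      for connection_info in saved_source_info:
          disconnect_on_source(instance, connection_info)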

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387543] Re: [OSSA 2015-015] Resize/delete combo allows to overload nova-compute (CVE-2015-3241)

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387543

Title:
  [OSSA 2015-015] Resize/delete combo allows to overload nova-compute
  (CVE-2015-3241)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  If a user creates an instance, resizes it to a larger flavor, and then
  deletes that instance, the migration process does not stop. This allows
  the user to repeat the operation many times, overloading the affected
  compute nodes beyond the user's quota.

  Affected installations: the most drastic effect happens with 'raw-disk'
  instances without live migration, where the whole raw disk (the full
  size of the flavor) is copied during migration.

  If the user deletes the instance, this does not terminate the rsync/scp,
  which keeps the disk backing file open regardless of its removal by
  nova-compute.

  Because rsync/scp of large disks is rather slow, a malicious user has
  enough time to repeat the operation a few hundred times, depleting disk
  space on the compute nodes, heavily loading the management network, and
  so on.

  Proposed solution: abort the migration (kill rsync/scp) as soon as the
  instance is deleted.

  Affected installations: Havana, Icehouse, probably Juno (not tested).
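
  A hedged sketch of the proposed mitigation; the helper names are
  assumptions, not nova's implementation:

  import subprocess
  import time

  def copy_disk_abortable(cmd, instance_is_deleted, poll_interval=5):
      """Run the disk copy (e.g. an rsync/scp command) but abort it as soon
      as the instance is reported deleted."""
      proc = subprocess.Popen(cmd)
      while proc.poll() is None:
          if instance_is_deleted():
              proc.terminate()   # stop the transfer instead of letting it finish
              proc.wait()
              raise RuntimeError('migration aborted: instance was deleted')
          time.sleep(poll_interval)
      return proc.returncode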

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392527] Re: [OSSA 2015-017] Deleting instance while resize instance is running leads to unuseable compute nodes (CVE-2015-3280)

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392527

Title:
  [OSSA 2015-017] Deleting instance while resize instance is running
  leads to unuseable compute nodes (CVE-2015-3280)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Steps to reproduce:
  1) Create a new instance, waiting until its status goes to ACTIVE state
  2) Call the resize API
  3) Delete the instance immediately after the task_state is “resize_migrated” or vm_state is “resized”
  4) Repeat 1 through 3 in a loop

  I have kept the attached program running for 4 hours; all instances
  created are deleted (nova list returns an empty list), but I noticed that
  instance directories with the name “<instance_uuid>_resize” are not
  deleted from the instance path of the compute nodes (mainly from the
  source compute nodes where the instance was running before the resize). If
  I keep this program running for a couple more hours (depending on the
  number of compute nodes), it completely uses up the entire disk of
  the compute nodes (based on the disk_allocation_ratio parameter
  value). Later, the nova scheduler doesn't select these compute nodes for
  launching new VMs and starts reporting the error "No valid hosts found".

  Note: Even the periodic tasks don't clean up these orphan instance
  directories from the instance path.
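
  A hypothetical cleanup sketch; the directory layout and the
  instance_exists helper are assumptions, not nova's actual periodic task:

  import os
  import shutil

  def purge_orphan_resize_dirs(instances_path, instance_exists):
      """Remove '<uuid>_resize' directories left behind for deleted instances."""
      for name in os.listdir(instances_path):
          if not name.endswith('_resize'):
              continue
          uuid = name[:-len('_resize')]
          if not instance_exists(uuid):
              shutil.rmtree(os.path.join(instances_path, name),
                            ignore_errors=True)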

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402760] Re: All user tokens are considered revoked on it's group role revocation

2016-01-21 Thread Dave Walker
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1402760

Title:
  All user tokens are considered revoked on it's group role revocation

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) kilo series:
  Fix Released

Bug description:
  The case for the bug:
  - User authenticates and receives a token scoped to project1
  - User authenticates and receives a token scoped to project2
  - User joins the group
  - Group is granted a role on project1
  - The group's role grant on project1 is revoked

  Result:
  All user tokens are considered revoked.

  Analysis:
  The revoke model lacks proper token-by-group revocation; it is done
through revocation by user, which results in the described effect.
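
  An illustration-only sketch of the intended scoping; the field names are
  assumptions, not keystone's revocation model:

  def token_matches_event(token, event):
      if event.get('group_id'):
          # A group-grant revocation should only hit tokens scoped to that
          # project and held by a member of the group ...
          return (token['project_id'] == event['project_id']
                  and event['group_id'] in token['group_ids'])
      if event.get('user_id'):
          # ... whereas revocation by user (the current behaviour) matches
          # every token of that user, which is the over-revocation described.
          return token['user_id'] == event['user_id']
      return False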

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1402760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423772] Re: During live-migration Nova expects identical IQN from attached volume(s)

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423772

Title:
  During live-migration Nova expects identical IQN from attached
  volume(s)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When attempting a live-migration of an instance with one or more
  attached volumes, Nova expects the IQN (and by-path device) to be
  exactly the same when it attaches the volume(s) on the new host. This
  conflicts with Cinder settings such as "hp3par_iscsi_ips", which allow
  for multiple IPs for the purpose of load balancing.

  Example:
  An instance on Host A has a volume attached at 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  An attempt is made to migrate the instance to Host B.
  Cinder sends the request to attach the volume to the new host.
  Cinder gives the new host 
"/dev/disk/by-path/ip-10.10.120.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  Nova looks for the volume on the new host at the old location 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"

  The following error appears in n-cpu in this case:

  2015-02-19 17:09:05.574 ERROR nova.virt.libvirt.driver [-] [instance: 
b6fa616f-4e78-42b1-a747-9d081a4701df] Live Migration failure: Failed to open 
file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5426, in 
_live_migration
  recover_method(context, instance, dest, block_migration)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5393, in 
_live_migration
  CONF.libvirt.live_migration_bandwidth)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, 
in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, 
in proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, 
in execute
  six.reraise(c, e, tb)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, 
in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in 
migrateToURI2
  if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
  libvirtError: Failed to open file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Removing descriptor: 3
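
  A conceptual sketch of the path mismatch, using the values from the
  example above; the device_path helper is hypothetical, not nova's code:

  def device_path(connection_info):
      data = connection_info['data']
      return ('/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s'
              % (data['target_portal'], data['target_iqn'], data['target_lun']))

  source = {'data': {'target_portal': '10.10.220.244:3260',
                     'target_iqn': 'iqn.2000-05.com.3pardata:22210002ac002a13',
                     'target_lun': 2}}
  dest = {'data': {'target_portal': '10.10.120.244:3260',
                   'target_iqn': 'iqn.2000-05.com.3pardata:22210002ac002a13',
                   'target_lun': 2}}

  # Nova reuses the source path when wiring the destination, but that path
  # does not exist there, hence the "No such file or directory" above.
  assert device_path(source) != device_path(dest)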

  
  When looking at the nova DB, this is the state of block_device_mapping prior 
to the migration attempt:

  mysql> select * from block_device_mapping where instance_uuid='b6fa616f-4e78-42b1-a747-9d081a4701df' and deleted=0;
  +-+-+++-+---+-+--+-+---+---+--+-+-+--+--+-+--++--+
  | created_at | updated_at | deleted_at | id | device_name | delete_on_termination | snapshot_id | volume_id | volume_size | no_device | connection_info




 

[Yahoo-eng-team] [Bug 1398086] Re: nova servers pagination does not work with deleted marker

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398086

Title:
  nova servers pagination does not work with deleted marker

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Nova does not paginate correctly if the marker is a deleted server.

  I am trying to get all of the servers for a given tenant. In total
  (i.e. active, deleted, error, etc.) there are 405 servers.

  If I query the API without a marker and with a limit larger (for example, 500)
  than the total number of servers I get all of them, i.e. the following query
  correctly returns 405 servers:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=500"

  However, if I try to paginate over them, doing:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100"

  I get the first 100 with a link to the next page. If I try to follow
  it:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100&marker=foobar"

  I am always getting a "badRequest" error saying that the marker is not found. 
I
  guess this is because of these lines in "nova/db/sqlalchemy/api.py"

  2000     # paginate query
  2001     if marker is not None:
  2002         try:
  2003             marker = _instance_get_by_uuid(context, marker, session=session)
  2004         except exception.InstanceNotFound:
  2005             raise exception.MarkerNotFound(marker)

  The function "_instance_get_by_uuid" only returns machines that are not
  deleted; therefore it fails to locate the marker if it is a deleted
  server.
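
  A minimal in-memory illustration of the pagination idea (not nova's DB
  code): the marker has to be looked up among all rows, including
  soft-deleted ones.

  def paginate(rows, limit, marker_uuid=None):
      if marker_uuid is not None:
          # Search every row, deleted or not, so a deleted marker does not
          # raise MarkerNotFound.
          idx = next((i for i, row in enumerate(rows)
                      if row['uuid'] == marker_uuid), None)
          if idx is None:
              raise LookupError('marker %s not found' % marker_uuid)
          rows = rows[idx + 1:]
      return rows[:limit]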

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436187] Re: 'AttributeError' is getting raised while unshelving instance booted from volume

2016-01-21 Thread Dave Walker
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436187

Title:
  'AttributeError' is getting raised while unshelving instance booted
  from volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  An ‘AttributeError’ exception is raised while unshelving an instance
  which is booted from a volume.

  Steps to reproduce:
  
  1.Create bootable volume
  2.Create instance from bootable volume
  3.Shelve instance
  4.Try to unshelve instance

  Error log on n-cpu service:
  ---

  2015-03-24 23:32:13.646 ERROR nova.compute.manager 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] Instance failed to spawn
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] Traceback (most recent call last):
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4368, in _unshelve_instance
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] block_device_info=block_device_info)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2342, in spawn
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] block_device_info)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/blockinfo.py", line 622, in get_disk_info
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] instance=instance)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]   File 
"/opt/stack/nova/nova/virt/libvirt/blockinfo.py", line 232, in 
get_disk_bus_for_device_type
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] disk_bus = 
image_meta.get('properties', {}).get(key)
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a] AttributeError: 'NoneType' object has no 
attribute 'get'
  2015-03-24 23:32:13.646 TRACE nova.compute.manager [instance: 
183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a]
  2015-03-24 23:32:13.649 DEBUG oslo_concurrency.lockutils 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] Lock 
"183c8ba3-3074-47aa-bc0c-bd9cea9c6d2a" released by "do_unshelve_instance" :: 
held 1.182s from (pid=11271) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:456
  2015-03-24 23:32:13.650 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
9c227430eaf34c64b94f36661ef2ec8f from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.650 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
7329362a2cab48968ce31760bcac8628. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.696 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
d2388c787036413a9bcf95f55e38027b from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.696 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
c466ff4a11574ff3a1032e85f3d9bd87. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.746 DEBUG oslo_messaging._drivers.amqpdriver 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] MSG_ID is 
0367b08fd7dd428ab8ef494bb42f1499 from (pid=11271) _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
  2015-03-24 23:32:13.746 DEBUG oslo_messaging._drivers.amqp 
[req-da05280c-be61-4705-a0c9-08ccf3c1d245 demo demo] UNIQUE_ID is 
b7395e5e66da4a47ba4132568713d4c4. from (pid=11271) _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
  2015-03-24 23:32:13.789 DEBUG nova.openstack.common.periodic_task 
[req-db2bb34f-1f3d-4ac0-99d0-f6fe78f8393d None None] Running periodic task 
ComputeManager._poll_volume_usage from (pid=11271) run_periodic_tasks 
/opt/stack/nova/nova/openstack/common/periodic_task.py:219
  2015-03-24 23:32:13.789 DEBUG nova.openstack.common.loopin
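
  The AttributeError above comes from image_meta being None for a
  volume-booted instance; a defensive sketch of the failing lookup follows
  (hypothetical, not nova's actual fix):

  def get_disk_bus_for_device_type(image_meta, key='hw_disk_bus'):
      # image_meta may be None when the instance was booted from a volume,
      # so fall back to an empty dict before reading 'properties'.
      properties = (image_meta or {}).get('properties', {})
      return properties.get(key)

  print(get_disk_bus_for_device_type(None))   # None instead of AttributeError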
