[Yahoo-eng-team] [Bug 1609097] Re: vif_port_id of ironic port is not updated after neutron port-delete

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609097

Title:
  vif_port_id of ironic port is not updated after neutron port-delete

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Steps to reproduce
  ==================
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  2. Show the ironic port. It has vif_port_id in extra with the id of the neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+
  3. Delete neutron port:
  neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  4. It is gone from interface list:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  +------------+---------+--------+--------------+----------+
  | Port State | Port ID | Net ID | IP addresses | MAC Addr |
  +------------+---------+--------+--------------+----------+
  +------------+---------+--------+--------------+----------+
  5. ironic port still has vif_port_id with neutron's port id:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+

  Expected result
  ===============
  The ironic port should not have vif_port_id in the extra field.

  Actual result
  =============
  The ironic port still has vif_port_id with the id of the deleted neutron port.

  This can be confusing when a user wants to get the list of unused ports
  of an ironic node. vif_port_id should be removed after neutron
  port-delete.
  Corresponding bug filed on neutron side 
https://bugs.launchpad.net/neutron/+bug/1606229
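
  A possible manual workaround (not from the original report): the ironic
  CLI's JSON-patch 'remove' operation can clear the stale field once the
  neutron port is gone. A small Python wrapper as a sketch:

      import subprocess

      PORT = '735fcaf5-145d-4125-8701-365c58c6b796'
      # drop the stale extra/vif_port_id left behind by the deleted port
      subprocess.check_call(
          ['ironic', 'port-update', PORT, 'remove', 'extra/vif_port_id'])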

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1609097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615908] Re: dummy BDM record if reserve_block_device_name timeout

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615908

Title:
  dummy BDM record if reserve_block_device_name timeout

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When attaching a volume, nova-api will initiate a rpc call to nova-
  compute to run reserve_block_device_name:

    def _attach_volume(self, context, instance, volume_id, device,
                       disk_bus, device_type):
        """Attach an existing volume to an existing instance.

        This method is separated to make it possible for cells version
        to override it.
        """
        # NOTE(vish): This is done on the compute host because we want
        #             to avoid a race where two devices are requested at
        #             the same time. When db access is removed from
        #             compute, the bdm will be created here and we will
        #             have to make sure that they are assigned atomically.
        volume_bdm = self.compute_rpcapi.reserve_block_device_name(
            context, instance, device, volume_id, disk_bus=disk_bus,
            device_type=device_type)
        try:
            volume = self.volume_api.get(context, volume_id)
            self.volume_api.check_attach(context, volume, instance=instance)
            self.volume_api.reserve_volume(context, volume_id)
            self.compute_rpcapi.attach_volume(context, instance=instance,
                volume_id=volume_id, mountpoint=device, bdm=volume_bdm)
        except Exception:
            with excutils.save_and_reraise_exception():
                volume_bdm.destroy()

        return volume_bdm.device_name

  
  If a timeout occurs, a dummy BDM record is left in the database. As a
  result, you will see an attached volume when you run nova show, which is
  wrong.
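
  A minimal sketch (not nova code; delete_orphan is a hypothetical helper)
  of the missing cleanup: the RPC call itself can time out after the
  compute host has created the BDM, so the API side could catch the
  timeout and remove the orphan record:

      import oslo_messaging

      def reserve_bdm_with_cleanup(compute_rpcapi, bdm_api, context,
                                   instance, device, volume_id,
                                   disk_bus, device_type):
          try:
              return compute_rpcapi.reserve_block_device_name(
                  context, instance, device, volume_id,
                  disk_bus=disk_bus, device_type=device_type)
          except oslo_messaging.MessagingTimeout:
              # hypothetical helper: drop any BDM row the compute manager
              # created before the reply was lost
              bdm_api.delete_orphan(context, instance.uuid, volume_id)
              raise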

  The trace:
  ---
  2016-08-03 10:29:29.929 4508 ERROR nova.api.openstack 
[req-9036ab02-c49e-408e-9914-1627175e9158 a3e789e51d6243e493483d12593757a6 
597854e23bfe46abb6178f786af12391 - - -] Caught error: Timed out waiting for a 
reply to message ID ddf15f1d53764aa090920c64852e4fba
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack Traceback (most recent 
call last):
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
req.get_response(self.application)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
resp(environ, start_response)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 634, in __call__
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 554, in _call_app
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
resp(environ, start_response)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
resp(environ, start_response)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/routes/middleware.py", line 131, in __call__
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
resp(environ, start_response)
  2016-08-03 10:29:29.929 4508 TRACE 

[Yahoo-eng-team] [Bug 1657441] Re: Remove the set device_owner when attaching subports

2017-09-26 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657441

Title:
  Remove the set device_owner when attaching subports

Status in neutron:
  Expired

Bug description:
  This is not required by business logic as pointed out in the commit
  message of https://review.openstack.org/#/c/368289/, and in some
  cases can lead to races.

  There may be other projects using neutron trunk ports, such as kuryr,
  that can trigger the port/subport creation and therefore want to use
  the device_owner to indicate that the kuryr service is the one managing
  those subports.

  To give an example of the possible race: since trunk_add_subport
  internally calls update_port to set the device owner, a caller such as
  kuryr that attaches a subport to a port and then sets the device_owner
  to kuryr can end up with the wrong device_owner. Even though the calls
  are triggered in this order:
  1.- trunk_add_subport (internally calls update_port)
  2.- update_port

  the external update_port may execute between trunk_add_subport and its
  internal call to update_port, so device_owner ends up set to
  trunk:subport instead of kuryr.
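
  The clobbering can be reduced to a toy sequence (plain Python, not
  neutron code), collapsing the interleaving into its effective order:

      port = {'device_owner': ''}

      # trunk_add_subport starts attaching the subport ...
      port['device_owner'] = 'kuryr'          # kuryr's update_port races in
      port['device_owner'] = 'trunk:subport'  # internal update_port lands last

      assert port['device_owner'] == 'trunk:subport'  # kuryr's value is lost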

  Possible solutions are:
  - Revert the commit https://review.openstack.org/#/c/368289/. We have
  already had some discussion about this in the patch
  https://review.openstack.org/#/c/419028/
  - Make setting device_owner optional, based on the value of
  TRUNK_SUBPORT_OWNER
  - Define the scope of device_owner to clarify how this attribute should
  be used within/outside neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1651712] Re: failed to start VM on disabled port_security_enabled network

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1651712

Title:
  failed to start VM on disabled port_security_enabled network

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Starting a VM on a network with port_security_enabled disabled fails:
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager
  [req-ee15cc56-ef1d-4e25-889d-4634804fae57 ff5a300a13f846a08f47c08a3b14f162
  3d0d66439f3640c79007c0ea842f - - -] Instance failed network setup after 1
  attempt(s)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager Traceback (most recent 
call last):
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
1397, in _allocate_network_async
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
bind_host_id=bind_host_id)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 861, in allocate_for_instance
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager security_group_ids)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 801, in _create_ports_for_instance
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager neutron, instance, 
created_port_ids)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
220, in __exit__
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager self.force_reraise()
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 783, in _create_ports_for_instance
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager raise 
exception.SecurityGroupCannotBeApplied()
  2016-12-21 17:36:01.533 7 ERROR nova.compute.manager 
SecurityGroupCannotBeApplied: Network requires port_security_enabled and subnet 
associated in order to apply security groups.
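
  A toy sketch (a simplified assumption, not the actual nova code path) of
  the check that raises above: with port security disabled on the network,
  nova refuses to apply any security group, including the 'default' group
  it requests on its own:

      class SecurityGroupCannotBeApplied(Exception):
          pass

      def create_ports_for_instance(network, security_group_ids):
          if not network['port_security_enabled'] and security_group_ids:
              raise SecurityGroupCannotBeApplied()
          return 'port created'

      # nova implicitly requests the 'default' group, so this raises:
      create_ports_for_instance({'port_security_enabled': False}, ['default'])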

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1651712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643684] Re: _check_requested_image uses incorrect bdm info to validate target volume when booting from boot from volume instance snapshot image

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1643684

Title:
  _check_requested_image uses incorrect bdm info to validate target
  volume when booting from boot from volume instance snapshot image

Status in OpenStack Compute (nova):
  Expired

Bug description:
  This is a complicated one.

  Start with a boot from volume instance. Instance uses a 1G volume, and
  a tiny image. Make a snapshot of this, which results in a volume
  snapshot plus a image in glance. Image details:

  # openstack image show a9a2a4af-0383-47a0-85e6-c3211f11459e
  +------------------+------------------------------------------------------------+
  | Field            | Value                                                      |
  +------------------+------------------------------------------------------------+
  | checksum         | d41d8cd98f00b204e9800998ecf8427e                           |
  | container_format | bare                                                       |
  | created_at       | 2016-11-21T18:13:23Z                                       |
  | disk_format      | qcow2                                                      |
  | file             | /v2/images/a9a2a4af-0383-47a0-85e6-c3211f11459e/file       |
  | id               | a9a2a4af-0383-47a0-85e6-c3211f11459e                       |
  | min_disk         | 10                                                         |
  | min_ram          | 0                                                          |
  | name             | testsnap                                                   |
  | owner            | 6fe9193745c64a44ab30d6bd1c5cb8bb                           |
  | properties       | base_image_ref='', bdm_v2='True',                          |
  |                  | block_device_mapping='[{"guest_format": null,              |
  |                  | "boot_index": 0, "delete_on_termination": false,           |
  |                  | "no_device": null, "snapshot_id":                          |
  |                  | "5186f403-bf91-49d4-b3db-c273e087fcfa",                    |
  |                  | "device_name": "/dev/vda", "disk_bus": "virtio",           |
  |                  | "image_id": null, "source_type": "snapshot",               |
  |                  | "tag": null, "device_type": "disk", "volume_id": null,     |
  |                  | "destination_type": "volume", "volume_size": 1}]',         |
  |                  | owner_specified.shade.md5='133eae9fb1c98f45894a4e60d87366  |
  |                  | 19', owner_specified.shade.object='images/cirros',         |
  |                  | owner_specified.shade.sha256='f11286e2bd317ee1a1d0469a6b1  |
  |                  | 82b33bda4af6f35ba224ca49d75752c81e20a',                    |
  |                  | root_device_name='/dev/vda'                                |
  | protected        | False                                                      |
  | schema           | /v2/schemas/image                                          |
  | size             | 0                                                          |
  | status           | active                                                     |
  | tags             |                                                            |
  | updated_at       | 2016-11-21T18:13:24Z                                       |
  | virtual_size     | None                                                       |
  | visibility       | private                                                    |
  +------------------+------------------------------------------------------------+

  First, try booting just using the image:

  # openstack server create --image a9a2a4af-0383-47a0-85e6-c3211f11459e 
--flavor 2 --nic net-id=832c3099-a589-4f05-8f70-c5af5b2f852b lolwhut
  Volume is smaller than the minimum size specified in image metadata. Volume 
size is 1073741824 bytes, minimum size is 10737418240 bytes. (HTTP 400) 
(Request-ID: req-97cc0a55-7d5b-4e14-8f4b-e8a501f96f11)

  Nova is saying that the minimum size is 10G, but the requested bdm
  size is 1. I'm assuming that's coming from the instance data's
  block_device_mapping key, which has volume_size of 1.
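
  The numbers in the 400 response line up with that reading; a quick check
  in plain Python (values taken from the output above):

      GiB = 1024 ** 3
      min_disk_gb = 10        # the image's min_disk
      bdm_volume_size_gb = 1  # volume_size from block_device_mapping
      assert bdm_volume_size_gb * GiB == 1073741824   # 'Volume size is'
      assert min_disk_gb * GiB == 10737418240         # 'minimum size is'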

  Now, try doing this where you're also requesting to boot from volume,
  of size 15 (this was done via horizon):

  2016-11-21 19:50:28.398 12516 DEBUG nova.api.openstack.wsgi [req-
  4481446f-e026-4e83-b07a-1acdfa08194f 4921001dd4944f1396f7e6d64717f044
  6fe9193745c64a44ab30d6bd1c5cb8bb - default default] Action: 'create',
  calling method: >, body: {"server": {"name": "jlkwhat", "imageRef": "",
  "availability_zone": "ci", "key_name": "turtle-key", "flavorRef": "2",
  "OS-DCF:diskConfig": "AUTO", "max_count": 1,
  "block_device_mapping_v2": [{"boot_index": "0", "uuid":
  "a9a2a4af-0383-47a0-85e6-c3211f11459e", "volume_size": 15,
  

[Yahoo-eng-team] [Bug 1662869] Re: Multiple attempts to detach and disconnect volumes during rebuild

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662869

Title:
  Multiple attempts to detach and disconnect volumes during rebuild

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description
  ===========
  The following was noticed during a CI run for 
https://review.openstack.org/#/c/383859/ :

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_17_41_994

  This is due to rebuild calling for two separate detach/disconnects of
  a volume when using the libvirt virt driver, once in
  _rebuild_default_impl in the compute layer and a second time in
  cleanup within the virt driver :

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2653 - 
_rebuild_default_impl
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L989 
- cleanup
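
  One way to make the second pass harmless would be an idempotent
  disconnect; a minimal sketch (the names are illustrative, not the nfs
  driver's actual ones):

      class VolumeNotMounted(Exception):
          pass

      def _do_disconnect(connection_info, instance):
          # stand-in for the real umount; simulates the second attempt
          raise VolumeNotMounted()

      def disconnect_volume(connection_info, instance):
          try:
              _do_disconnect(connection_info, instance)
          except VolumeNotMounted:
              pass  # already detached by _rebuild_default_impl; no ERROR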

  In the logs req-e976fee4-51df-4119-b505-5d68f4583186 tracks the
  rebuild attempt. We see the first attempt to umount succeed here :

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ALL#_2017-02-07_19_17_39_904

  We then see the second attempt here and again an ERROR is logged as we
  don't find the mount to be in use :

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ALL#_2017-02-07_19_17_41_993

  Steps to reproduce
  ==================
  Rebuild an instance with volumes attached

  Expected result
  ===============
  Only one attempt is made to detach and disconnect each volume from the 
original instance.

  Actual result
  =============
  Two attempts are made to detach and disconnect each volume from the original 
instance.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 https://review.openstack.org/#/c/383859/ - but it should reproduce
  against master.

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 n/a

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 n/a

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1662869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631918] Re: _determine_version_cap fails with MessagingTimeout when starting nova-compute

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1631918

Title:
  _determine_version_cap fails with MessagingTimeout when starting nova-
  compute

Status in OpenStack Compute (nova):
  Expired

Bug description:
  On a fresh deployment, there are issues when starting up nova-compute,
  before nova-conductor has started responding to RPC requests.

  The first is a MessagingTimeout in the _determine_version_cap call,
  that is triggered by creating the ComputeManager class.

  This causes the process to exit, but it doesn't seem to fully exit the
  process.

  It seems like this happens only when CONF.upgrade_levels.compute =
  "auto"

  This was spotted in this OSA change:
  https://review.openstack.org/#/c/367752
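
  One possible mitigation (a sketch only; get_min_service_version is a
  hypothetical stand-in for the conductor query): retry instead of dying
  on the first MessagingTimeout while nova-conductor is still starting:

      import time

      import oslo_messaging

      def determine_version_cap(rpcapi, context, retries=30, interval=10):
          for _ in range(retries):
              try:
                  # hypothetical stand-in for the conductor version query
                  return rpcapi.get_min_service_version(context)
              except oslo_messaging.MessagingTimeout:
                  time.sleep(interval)  # conductor may still be starting
          raise RuntimeError('nova-conductor never answered')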

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1631918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674003] Re: Race condition cause instance system_updates to be lost

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1674003

Title:
  Race condition cause instance system_updates to be lost

Status in OpenStack Compute (nova):
  Expired

Bug description:
  We ran into an issue using a March ocata build. We have some
  system_metadata that we need to save very early in a VM's life.
  Previously we did this during scheduling. After the switch to cells
  v2, we now listen for the compute.instance.create.start and add the
  key to the instance's system_metadata then. The problem is that
  because of how nova.objects.Instance.save() works when saving metadata
  there is a race condition that causes some of the system_metadata to
  be lost.

  
  Basic setup of the instance.save() problem:
  test_uuid = <uuid of an existing instance>
  inst_ref_1 = nova.objects.Instance.get_by_uuid(context, test_uuid)
  inst_ref_2 = nova.objects.Instance.get_by_uuid(context, test_uuid)

  inst_ref_1.system_metadata.update({'key1': 'val1'})
  inst_ref_2.system_metadata.update({'key2': 'val2'})
  (Note: You need to read or update inst_ref_2.system_metadata at least once 
before calling inst_ref_1.save() the first time otherwise the lazy load on 
inst_ref_2.system_metadata will pick up inst_ref_1's change and hide the issue.)
  inst_ref_1.save()
  (Note: can check db before the next save to confirm the first save worked)
  inst_ref_2.save()

  Afterward, nova.objects.Instance.get_by_uuid(context, 
test_uuid).system_metadata returns {'key2': 'val2'} instead of the desired 
{'key1': 'val1', 'key2': 'val2'}
  Watching the db also reflects that the key1 was present after 
inst_ref_1.save() but was then removed and replaced with key2 after 
inst_ref_2.save().

  The issue is the flow of Instance.save(). It eventually calls 
nova.db.sqlalchemy.api._instance_metadata_update_in_place(). That method 
assumes if a key is found in the db but is not in the passed metadata dict, 
that it should delete the key from the db.
  So in the example above, because the inst_ref_2.system_metadata dictionary 
does not have the key added by inst_ref_1.save(), the inst_ref2.save() is 
deleting the entry added by inst_ref_1.save().
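
  That delete-on-absence behaviour reduces to a toy (plain Python,
  simplified from the description above):

      def metadata_update_in_place(db_rows, new_meta):
          for key in list(db_rows):
              if key not in new_meta:
                  del db_rows[key]     # key1 is dropped here
          db_rows.update(new_meta)
          return db_rows

      db = {'key1': 'val1'}            # state after inst_ref_1.save()
      print(metadata_update_in_place(db, {'key2': 'val2'}))  # {'key2': 'val2'}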

  
  Issue this creates:
  nova.compute.manager._build_and_run_instance() starts by sending the 
compute.instance.create.start notification. Immediately after that a recent 
unrelated change 
(https://github.com/openstack/nova/commit/6d8b58dc6f1cbda8d664b3487674f87049491c74)
 calls instance.system_metadata.update({'boot_roles': 
','.join(context.roles)}). The first instance.save() in 
_build_and_run_instance() is called as a side effect of 'with 
rt.instance_claim(context, instance, node, limits)'. (FWIW it's also called 
again very shortly after that in _build_and_run_instance() itself when vm_state 
and task_state are set).

  This creates the race condition mentioned at the top. Our listener,
  which gets the compute.instance.create.start notification, also attempts
  to update the instance's system_metadata. The listener has to create
  its own reference to the same instance, so depending on which instance
  reference's save() is called first (the one in our listener or the one
  from _build_and_run_instance()), one of the updates to system_metadata
  gets lost.

  Expected result:
  Independent Instance.save() calls don't wipe out each other's
  non-conflicting key changes.

  Actual result:
  They do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1674003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665798] Re: nova-compute is not setting MTU at all when following the plug_ovs_bridge() call path

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665798

Title:
  nova-compute is not setting MTU at all when following the
  plug_ovs_bridge() call path

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Here is the specific code path I was talking about:

  
https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L531

  As you can see in the code,

  1. self.plug_ovs_hybrid(instance, vif) ->
  self._plug_bridge_with_port() which will take care of creation of
  Linux bridge and setting of MTU properly.

  2. however, self.plug_ovs_bridge(instance, vif) itself does nothing, so
  the MTU is not being set/honored at all. The end result is that the MTU
  of the VM's tap interface remains 1500 instead of honoring the MTU of
  the corresponding neutron network. Note that in this code path we don't
  need to create the Linux bridge and veth pair at all, but we still need
  to set the MTU properly for the VM tap interface.

  We believe this is an oversight in this MTU feature patch:
  https://review.openstack.org/#/c/285710/
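
  A hedged sketch of the missing step (not the actual patch; whether the
  network model carries the MTU here is an assumption): even without
  creating the bridge/veth pair, the tap MTU could be set from the neutron
  network:

      from oslo_concurrency import processutils

      def plug_ovs_bridge_with_mtu(instance, vif):
          mtu = vif['network'].get_meta('mtu')  # assumes MTU is in the model
          dev = ('tap' + vif['id'])[:14]        # conventional tap device name
          if mtu:
              processutils.execute('ip', 'link', 'set', dev, 'mtu',
                                   str(mtu), run_as_root=True)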

  Let us know if you need more info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668475] Re: The disk size of the flavor of instance booted from volume is bigger than the volume's size

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1668475

Title:
  The disk size of the flavor of instance booted from volume is bigger
  than the volume's size

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The instance booted from volume chooses a flavor, but the disk size of
  the flavoe is bigger than the actual size of volume. finally, the
  instance is created successfully, but the disk size of the volume is
  different from the the disk size of the instance in the hypervisor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1668475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705700] Re: live migration does not work after volume migration

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705700

Title:
  live migration does not work after volume migration

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hi,

  $subj

  [#]> rpm -qi openstack-nova-common.noarch
  Name: openstack-nova-common
  Epoch   : 1
  Version : 15.0.0
  Release : 1.el7

  STR:
  -configure 2x compute nodes with cinder storage and LVM back-ends;
  -create bootable volume from image;
  -create instance and use bootable volume;
  -migrate volume from current node to another;
  -try to perform live migration

  Actual Result:
  -live migration fails:
  -
  2017-07-21 11:33:00.554 4552 ERROR nova.virt.libvirt.driver 
[req-792a37bf-3a2e-4976-84ad-cc308eb1ffbf - - - - -] [instance: 
5dd0b6ab-8743-42a2-af7a-38f1fb10dedf] Live Migration failure: missing source 
information for device vda
  2017-07-21 11:33:00.599 4552 ERROR nova.virt.libvirt.driver 
[req-792a37bf-3a2e-4976-84ad-cc308eb1ffbf - - - - -] [instance: 
5dd0b6ab-8743-42a2-af7a-38f1fb10dedf] Migration operation has aborted

  
  Jul 21 11:33:00 compute-02 nova-compute: Traceback (most recent call last):
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 457, in 
fire_timers
  Jul 21 11:33:00 compute-02 nova-compute: timer()
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 58, in __call__
  Jul 21 11:33:00 compute-02 nova-compute: cb(*args, **kw)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 168, in _do_send
  Jul 21 11:33:00 compute-02 nova-compute: waiter.switch(result)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
  Jul 21 11:33:00 compute-02 nova-compute: result = function(*args, **kwargs)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 1087, in context_wrapper
  Jul 21 11:33:00 compute-02 nova-compute: return func(*args, **kwargs)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6150, in 
_live_migration_operation
  Jul 21 11:33:00 compute-02 nova-compute: instance=instance)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  Jul 21 11:33:00 compute-02 nova-compute: self.force_reraise()
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Jul 21 11:33:00 compute-02 nova-compute: six.reraise(self.type_, self.value, 
self.tb)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6143, in 
_live_migration_operation
  Jul 21 11:33:00 compute-02 nova-compute: 
bandwidth=CONF.libvirt.live_migration_bandwidth)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 623, in 
migrate
  Jul 21 11:33:00 compute-02 nova-compute: destination, params=params, 
flags=flags)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
  Jul 21 11:33:00 compute-02 nova-compute: result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
  Jul 21 11:33:00 compute-02 nova-compute: rv = execute(f, *args, **kwargs)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
  Jul 21 11:33:00 compute-02 nova-compute: six.reraise(c, e, tb)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
  Jul 21 11:33:00 compute-02 nova-compute: rv = meth(*args, **kwargs)
  Jul 21 11:33:00 compute-02 nova-compute: File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
  Jul 21 11:33:00 compute-02 nova-compute: if ret == -1: raise libvirtError 
('virDomainMigrateToURI3() failed', dom=self)
  Jul 21 11:33:00 compute-02 nova-compute: libvirtError: missing source 
information for device vda
  -

  Here is the disk description when the instance was created
  --
  (libvirt <disk> XML stripped by the mailing-list archive; only the
  volume serial 4c0b28d5-fc0b-4907-b563-b47b521bc945 survived)
  --

  Here is the disk description during migration:
  --
  (libvirt <disk> XML stripped; the message is truncated here)

[Yahoo-eng-team] [Bug 1697820] Re: Failure to create ResourceProvider resets status and numatopology in table compute_nodes

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697820

Title:
  Failure to create ResourceProvider resets status and numatopology in
  table compute_nodes

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The resource tracker calls update_available_resource periodically to
  update resources in the DB. Each time, update_available_resource first
  calls _init_compute_node to initialize the resources and write them to
  the DB (with stats and numa_topology empty / at their init values). At
  the end of update_available_resource, stats and numa_topology are filled
  with the right values and the _update() function is called, which is
  supposed to finally write the resources to the DB.

  But in _update(), the resources are written to the DB only when the
  local resources have changed (via self._resource_change(compute_node)),
  which leaves the stats and numa_topology columns in compute_nodes
  holding the init or empty values.

  You can see the empty or init values of stats and numa_topology in the
  compute_nodes table whenever the resources of nova-compute have not
  changed for a while.
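
  The sequence reduces to a toy (plain Python, a simplified assumption
  about the flow described above):

      db = {}
      last_written = {'stats': {'cpu': 8}}  # unchanged since last period

      def init_compute_node():
          db['stats'] = {}                  # init/empty value hits the DB

      def update(node):
          if node != last_written:          # no change -> final write skipped
              db.update(node)

      init_compute_node()
      update({'stats': {'cpu': 8}})
      print(db)                             # {'stats': {}} stays in the DB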

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688038] Re: test_rescued_vm_add_remove_security_group fails with "InstanceNotRescuable: Instance 4869b462-c3cf-4437-8c94-1d0dcd5fff8b cannot be rescued: Driver Error: failed to

2017-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1688038

Title:
  test_rescued_vm_add_remove_security_group fails with
  "InstanceNotRescuable: Instance 4869b462-c3cf-4437-8c94-1d0dcd5fff8b
  cannot be rescued: Driver Error: failed to connect to monitor socket:
  No such process"

Status in OpenStack Compute (nova):
  Expired

Bug description:
  http://logs.openstack.org/73/461473/3/check/gate-tempest-dsvm-neutron-
  full-ubuntu-xenial/a277636/console.html#_2017-05-02_18_16_55_064917

  http://logs.openstack.org/73/461473/3/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/a277636/logs/screen-n-cpu.txt.gz#_May_02_17_49_07_771248

  May 02 17:49:07.771248 ubuntu-xenial-rax-ord-8683720 nova-compute[23706]: 
ERROR oslo_messaging.rpc.server [req-f281fc56-b69f-4e1a-a6d0-752871138ace 
tempest-ServerRescueTestJSON-709146246 tempest-ServerRescueTestJSON-709146246] 
Exception during message handling

ERROR oslo_messaging.rpc.server Traceback (most recent call last):

ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
157, in _process_incoming

ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)

ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
213, in dispatch

ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, 
ctxt, args)

ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _do_dispatch

ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)

ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 77, in wrapped

ERROR oslo_messaging.rpc.server function_name, call_dict, binary)

ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__

ERROR oslo_messaging.rpc.server self.force_reraise()

ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise

ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)

ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 68, in wrapped

ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)

ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 187, in decorated_function

ERROR oslo_messaging.rpc.server LOG.warning(msg, e, instance=instance)

ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__

ERROR oslo_messaging.rpc.server self.force_reraise()

ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise

ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)

ERROR oslo_messaging.rpc.server   File 

[Yahoo-eng-team] [Bug 1718819] Re: The linked page does not exist

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/507339
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fdd29a207a430ec9c9f74afd093b70b0c0b2a765
Submitter: Jenkins
Branch:master

commit fdd29a207a430ec9c9f74afd093b70b0c0b2a765
Author: Swaminathan Vasudevan 
Date:   Mon Sep 25 20:08:41 2017 -0700

Fix the link to the rally docs in README.rst

The links to the rally docs are invalid.
This patch fixes it.

Change-Id: I8713a8cdf317d385e770528111b0ac376f89391f
Closes-Bug: #1718819


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718819

Title:
  The linked page does not exist

Status in neutron:
  Fix Released

Bug description:
  In the neutron project's /neutron/TESTING.rst, the linked page
  "http://rally.readthedocs.io" does not exist yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719769] [NEW] Occasional network interruption with mark=1 in conntrack

2017-09-26 Thread Jesse
Public bug reported:

If a VM port's security group rules update frequently and network traffic
is heavy, OvS security group flows can wrongly set the conntrack mark to 1
and block the VM's network connectivity.

Suppose there are 2 VMs, VM A (192.168.111.234) and VM B (192.168.111.233),
and B allows ping from A. We ping B from A continuously.
There will be one conntrack entry on VM B's compute host:
icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697
src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=0 zone=1
use=2

I simulate this issue because it is hard to reproduce in the normal way.
There is one precondition to notice:
if SG rules change on a port, the SG flows for this port are recreated.
Although all SG flows for the port are handed to OvS in one go by
'ovs-ofctl add-flows', the flows are actually installed one by one.

It is hard to reproduce this issue without hacking the code, so I disable
the security group defer in the code to simulate it (change the code here:
https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py#L132)

Then I start neutron-openvswitch-agent with a breakpoint on
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/openvswitch_firewall/firewall.py#L1004

Now we get a mark=1 conntrack entry on VM B's compute host:
icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697
src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=1 zone=1
use=1

Even after the port's security group flows are re-added later, this mark=1
conntrack entry is not deleted until it times out.

In our OpenStack production environment we hit this issue and the network
of a vital system was disconnected. The root cause is that the VM port's
security group rules change frequently and the VM's network traffic is
heavy.

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719769

Title:
  Occasional network interruption with mark=1 in conntrack

Status in neutron:
  In Progress

Bug description:
  If a VM port's security group rules update frequently and network traffic
  is heavy, OvS security group flows can wrongly set the conntrack mark to 1
  and block the VM's network connectivity.

  Suppose there are 2 VMs, VM A (192.168.111.234) and VM B (192.168.111.233),
  and B allows ping from A. We ping B from A continuously.
  There will be one conntrack entry on VM B's compute host:
  icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697
  src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=0 zone=1
  use=2

  I simulate this issue because it is hard to reproduce in the normal way.
  There is one precondition to notice:
  if SG rules change on a port, the SG flows for this port are recreated.
  Although all SG flows for the port are handed to OvS in one go by
  'ovs-ofctl add-flows', the flows are actually installed one by one.

  It is hard to reproduce this issue without hacking the code, so I disable
  the security group defer in the code to simulate it (change the code here:
  https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py#L132)

  Then I start neutron-openvswitch-agent with a breakpoint on
  https://github.com/openstack/neutron/blob/master/neutron/agent/linux/openvswitch_firewall/firewall.py#L1004

  Now we get a mark=1 conntrack entry on VM B's compute host:
  icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697
  src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=1 zone=1
  use=1

  Even after the port's security group flows are re-added later, this mark=1
  conntrack entry is not deleted until it times out.

  In our OpenStack production environment we hit this issue and the network
  of a vital system was disconnected. The root cause is that the VM port's
  security group rules change frequently and the VM's network traffic is
  heavy.
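
  As an operational stopgap (not part of the fix in progress), the blocked
  entries can be flushed with conntrack-tools; a sketch shelling out from
  Python, assuming zone 1 as in the output above:

      import subprocess

      def flush_blocked_conntrack(src, dst, zone=1):
          # delete entries the OvS firewall wrongly left with mark=1
          subprocess.check_call(['conntrack', '-D', '--mark', '1',
                                 '-s', src, '-d', dst, '-w', str(zone)])

      flush_blocked_conntrack('192.168.111.234', '192.168.111.233')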

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718282] Re: Port update failed with 500 when trying to recreate default security group

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/505390
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=017185496c9f3f30b4b819ead86dd5bfe5a72597
Submitter: Jenkins
Branch:master

commit 017185496c9f3f30b4b819ead86dd5bfe5a72597
Author: Kevin Benton 
Date:   Tue Sep 19 12:41:47 2017 -0700

Ensure default security group before port update

The default security group can be deleted and updating
a port will recreate it. However, we should do this in
the BEFORE_UPDATE event handler rather than waiting for
it to happen inside of the port update transaction which
violates the transaction semantics of the security group
callbacks.

Closes-Bug: #1718282
Change-Id: I1ce8b558b0a831adcebead512d97554173423955


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718282

Title:
  Port update failed with 500 when trying to recreate default security
  group

Status in neutron:
  Fix Released

Bug description:
  On port update, default security group may be missing. In this case,
  port update will first create the group, then proceed to port object.
  The problem is that when it recreates the group, it uses AFTER_UPDATE
  event, which contradicts the transactional semantics of
  _ensure_default_security_group_handler.
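
  A hedged sketch of the committed approach (registry API as in
  neutron-lib; the handler body is illustrative): recreate the default
  group from a BEFORE_UPDATE subscriber so it runs outside the port-update
  transaction:

      from neutron_lib.callbacks import events, registry, resources

      def ensure_default_sg_handler(resource, event, trigger, **kwargs):
          context = kwargs.get('context')
          # look up the tenant's default security group here and create it
          # if missing, before the port-update transaction opens

      registry.subscribe(ensure_default_sg_handler, resources.PORT,
                         events.BEFORE_UPDATE)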

  Logs wise, we get this in neutron-server log:

  Sep 14 12:03:03.604813 ubuntu-xenial-2-node-rax-dfw-10932230 neutron-
  server[30503]: WARNING neutron.plugins.ml2.ovo_rpc [None req-
  71600acd-c114-4dbd-a599-a9126fae14fb tempest-
  NetworkDefaultSecGroupTest-1846858447 tempest-
  NetworkDefaultSecGroupTest-1846858447] This handler is supposed to
  handle AFTER events, as in 'AFTER it's committed', not BEFORE.
  Offending resource event: security_group, after_create. Location:

  And then later:

  Sep 14 12:03:04.038599 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1332, in 
update_port
  Sep 14 12:03:04.038761 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
context.session.expire(port_db)
  Sep 14 12:03:04.038924 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1533, 
in expire
  Sep 14 12:03:04.039083 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
self._expire_state(state, attribute_names)
  Sep 14 12:03:04.039243 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1536, 
in _expire_state
  Sep 14 12:03:04.039406 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
self._validate_persistent(state)
  Sep 14 12:03:04.041280 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1976, 
in _validate_persistent
  Sep 14 12:03:04.041453 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
state_str(state))
  Sep 14 12:03:04.041658 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
InvalidRequestError: Instance '' is not persistent 
within this Session

  Logs can be found in: http://logs.openstack.org/21/504021/1/check
  /gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-
  nv/c6647c4/logs/screen-q-svc.txt.gz#_Sep_14_12_03_04_041658

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585858] Re: InterfaceDetachFailed: resources.server: Failed to detach interface

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/507635
Committed: 
https://git.openstack.org/cgit/openstack/heat/commit/?id=166ac7869fbef4bc88ee5d26e72c7e01d5e581e2
Submitter: Jenkins
Branch:master

commit 166ac7869fbef4bc88ee5d26e72c7e01d5e581e2
Author: Zane Bitter 
Date:   Tue Sep 26 13:26:13 2017 -0400

Increase interface detach polling period

The gate has started failing due to interface detaches timing out.
Examining the logs, it looks like Nova's interface detach retry takes about
6s to run one attempt. Heat, on the other hand, does 10 retries at 0.5s
intervals. So if Nova has to retry then Heat will fail.

Increase Heat's polling interval to make it more likely that if Nova
succeeds, Heat will see it.

Change-Id: Ie74980a3f806b8c17e4e494ae979725b0078f135
Closes-Bug: #1585858
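
A quick check of the numbers in the commit message (plain Python):

    heat_window = 10 * 0.5   # seconds: 10 retries at 0.5s intervals
    nova_attempt = 6         # seconds per Nova detach attempt (from logs)
    print(heat_window < nova_attempt)  # True: one Nova retry outlasts Heat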


** Changed in: heat
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585858

Title:
  InterfaceDetachFailed: resources.server: Failed to detach interface

Status in OpenStack Heat:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  It seems the number of attempts to detach interfaces is not enough in
  some cases.

  
heat_integrationtests.functional.test_create_update.UpdateStackTest.test_stack_update_with_replacing_userdata
  2016-05-25 10:37:16.151 | 2016-05-25 10:37:16.020 | 
-
  2016-05-25 10:37:16.151 | 2016-05-25 10:37:16.022 | 
  2016-05-25 10:37:16.152 | 2016-05-25 10:37:16.025 | Captured traceback:
  2016-05-25 10:37:16.152 | 2016-05-25 10:37:16.031 | ~~~
  2016-05-25 10:37:16.152 | 2016-05-25 10:37:16.034 | Traceback (most 
recent call last):
  2016-05-25 10:37:16.152 | 2016-05-25 10:37:16.038 |   File 
"/opt/stack/new/heat/heat_integrationtests/functional/test_create_update.py", 
line 534, in test_stack_update_with_replacing_userdata
  2016-05-25 10:37:16.152 | 2016-05-25 10:37:16.040 | 
parameters=parms_updated)
  2016-05-25 10:37:16.153 | 2016-05-25 10:37:16.044 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 402, in 
update_stack
  2016-05-25 10:37:16.153 | 2016-05-25 10:37:16.046 | 
self._wait_for_stack_status(**kwargs)
  2016-05-25 10:37:16.153 | 2016-05-25 10:37:16.048 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 334, in 
_wait_for_stack_status
  2016-05-25 10:37:16.153 | 2016-05-25 10:37:16.051 | fail_regexp):
  2016-05-25 10:37:16.153 | 2016-05-25 10:37:16.054 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 295, in 
_verify_status
  2016-05-25 10:37:16.153 | 2016-05-25 10:37:16.056 | 
stack_status_reason=stack.stack_status_reason)
  2016-05-25 10:37:16.154 | 2016-05-25 10:37:16.058 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
UpdateStackTest-2129656241/3c284793-623d-47bd-901a-6d71e75bdbca is in 
UPDATE_FAILED status due to 'InterfaceDetachFailed: resources.server: Failed to 
detach interface (bdbc4a8c-114b-42c0-a49a-a27311f5f2e2) from server 
(259f3fff-6e38-4722-9a51-7c7166d78812)'

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1585858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719275] Re: Linux Bridge bridge_mappings, remove unnecessary logic to retrieve bridge name

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/507030
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bb550de3d78e58bb811dc5ad1da69ddb356a3f9d
Submitter: Jenkins
Branch:master

commit bb550de3d78e58bb811dc5ad1da69ddb356a3f9d
Author: Rodolfo Alonso Hernandez 
Date:   Mon Sep 25 10:27:40 2017 +0100

Linux Bridge, remove unnecessary logic to retrieve bridge name

In [1], the function "get_existing_bridge_name" retrieves the value of a
dict using the method get. Before this, it checks if the key value used is
"True". This check is not needed using the dictionary "get" method.

[1] 
https://github.com/openstack/neutron/blob/11.0.0.0rc3/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L155

Change-Id: Iba020c6b297228ae48bbd2a19f540b0152570317
Closes-Bug: #1719275


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719275

Title:
  Linux Bridge bridge_mappings, remove unnecessary logic to retrieve
  bridge name

Status in neutron:
  Fix Released

Bug description:
  In [1], the function "get_existing_bridge_name" retrieves the value of
  a dict using the method get. Before this, it checks if the key value
  used is "True". This check is not needed using the dictionary "get"
  method.

  [1]
  
https://github.com/openstack/neutron/blob/11.0.0.0rc3/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L155
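
  The simplification, as a toy example (illustrative names, not the
  agent's exact code):

      bridge_mappings = {'physnet1': 'br-eth1'}

      # before: redundant truthiness check in front of the lookup
      def get_existing_bridge_name_old(physical_network):
          if physical_network:
              return bridge_mappings.get(physical_network)

      # after: dict.get already copes with a missing or falsy key
      def get_existing_bridge_name(physical_network):
          return bridge_mappings.get(physical_network)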

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352256] Please test proposed package

2017-09-26 Thread Ryan Beisner
Hello Ashish, or anyone else affected,

Accepted horizon into kilo-proposed. The package will build now and be
available in the Ubuntu Cloud Archive in a few hours, and then in the
-proposed repository.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:kilo-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-kilo-needed to verification-kilo-done. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-kilo-failed. In either case, details of your testing
will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/kilo
   Status: Fix Released => Fix Committed

** Tags added: verification-kilo-needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352256

Title:
  Uploading a new object fails with Ceph as object storage backend using
  RadosGW

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  While uploading a new object using Horizon, with Ceph as the object
  storage backend, the upload fails with the error message "Error: Unable
  to upload object"

  Ceph Release : Firefly

  Error in horizon_error.log:

  
  [Wed Jul 23 09:04:46.840751 2014] [:error] [pid 30045:tid 140685813683968] INFO:urllib3.connectionpool:Starting new HTTP connection (1): firefly-master.ashish.com
  [Wed Jul 23 09:04:46.842984 2014] [:error] [pid 30045:tid 140685813683968] WARNING:urllib3.connectionpool:HttpConnectionPool is full, discarding connection: firefly-master.ashish.com
  [Wed Jul 23 09:04:46.843118 2014] [:error] [pid 30045:tid 140685813683968] REQ: curl -i http://firefly-master.ashish.com/swift/v1/new-cont-dash/test -X PUT -H "X-Auth-Token: 91fc8466ce17e0d22af86de9b3343b2d"
  [Wed Jul 23 09:04:46.843227 2014] [:error] [pid 30045:tid 140685813683968] RESP STATUS: 411 Length Required
  [Wed Jul 23 09:04:46.843584 2014] [:error] [pid 30045:tid 140685813683968] RESP HEADERS: [('date', 'Wed, 23 Jul 2014 09:04:46 GMT'), ('content-length', '238'), ('content-type', 'text/html; charset=iso-8859-1'), ('connection', 'close'), ('server', 'Apache/2.4.7 (Ubuntu)')]
  [Wed Jul 23 09:04:46.843783 2014] [:error] [pid 30045:tid 140685813683968] RESP BODY: (Apache 411 error page, HTML tags stripped in this digest) 411 Length Required / Length Required / A request of the requested method PUT requires a valid Content-length.
  [Wed Jul 23 09:04:46.844530 2014] [:error] [pid 30045:tid 140685813683968] Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length Required  [first 60 chars of response]
  [Wed Jul 23 09:04:46.844555 2014] [:error] [pid 30045:tid 140685813683968] http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length Required  [first 60 chars of response]
  [Wed Jul 23 09:04:46.844607 2014] [:error] [pid 30045:tid 140685813683968] http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length Required  [first 60 chars of response]
  [Wed Jul 23 09:04:46.844900 2014] [:error] [pid 30045:tid 140685813683968]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1352256/+subscriptions
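
For reference, the 411 means the PUT arrived without a Content-Length header.
A minimal client-side sketch of sending one explicitly (endpoint and token
illustrative, assuming python-swiftclient):

  import swiftclient

  conn = swiftclient.client.Connection(
      preauthurl='http://firefly-master.ashish.com/swift/v1',  # illustrative
      preauthtoken='91fc8466ce17e0d22af86de9b3343b2d')         # illustrative

  data = b'hello'
  # Bytes plus an explicit content_length make swiftclient send a
  # Content-Length header instead of a chunked PUT, which this
  # Apache/RadosGW setup rejects with 411.
  conn.put_object('new-cont-dash', 'test', contents=data,
                  content_length=len(data))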

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382079] Please test proposed package

2017-09-26 Thread Ryan Beisner
Hello Thiago, or anyone else affected,

Accepted horizon into kilo-proposed. The package will build now and be
available in the Ubuntu Cloud Archive in a few hours, and then in the
-proposed repository.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:kilo-proposed
  sudo apt-get update

Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-kilo-needed to verification-kilo-done. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-kilo-failed. In either case, details of your testing
will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/kilo
   Status: Fix Released => Fix Committed

** Tags removed: verification-kilo-done
** Tags added: verification-kilo-needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382079

Title:
  [SRU] Project selector not working

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Vivid:
  Won't Fix
Status in horizon source package in Wily:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * Not able to switch projects by the project dropdown list.

  [Test Case]

  1 - enable Identity V3 in local_settings.py
  2 - Log in on Horizon
  3 - make sure that the SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  [Regression Potential]

   * None

  When you try to select a new project on the project dropdown, the
  project doesn't change. The commit below introduced this bug on
  Horizon's master and passed the test verifications.

  
https://github.com/openstack/horizon/commit/16db58fabad8934b8fbdfc6aee0361cc138b20af

  From what I've found so far, the context received in the decorator
  seems to be the old context, with the token for the previous project.
  When you take the decorator out, the "can_access" function receives
  the correct context, with the token for the new project.

  Steps to reproduce:

  1 - Enable Identity V3 (to have a huge token)
  2 - Log in on Horizon (lots of permissions loaded on session)
  3 - Certify that you SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  The project shall remain the same.
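
  For reference, a sketch of the local_settings.py pieces involved in this
  reproduction, assuming the "SESSION_BACKEND" referred to above is Django's
  SESSION_ENGINE setting (values illustrative, Keystone URL an assumption):

    # Cookie-based sessions: the whole session, token data included,
    # is serialized into the signed cookie.
    SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'

    # Identity v3 (larger tokens, more data pushed into the session).
    OPENSTACK_API_VERSIONS = {'identity': 3}
    OPENSTACK_KEYSTONE_URL = 'http://controller:5000/v3'  # illustrative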

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1382079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719730] Re: Reschedule after the late affinity check fails with "'NoneType' object is not iterable"

2017-09-26 Thread Matt Riedemann
Since https://review.openstack.org/#/c/469037 was made in pike, this is
a regression in the pike release.

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => High

** Changed in: nova
   Importance: Undecided => High

** Tags added: affinity requestspec reschedule server-groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1719730

Title:
  Reschedule after the late affinity check fails with "'NoneType' object
  is not iterable"

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  Ran into this while hacking on something locally and running the
  server groups functional tests:

  
  ==
  Failed 1 tests - output below:
  ==

  
nova.tests.functional.test_server_group.ServerGroupTestV215.test_rebuild_with_anti_affinity
  
---

  Captured pythonlogging:
  ~~~
  19:45:29,525 ERROR [nova.scheduler.utils] Error from last host: host2 (node 
host2): ['Traceback (most recent call last):\n', '  File 
"nova/compute/manager.py", line 1831, in _do_build_and_run_instance\n
filter_properties)\n', '  File "nova/compute/manager.py", line 2061, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 'RescheduledException: Build of instance 
c249e39f-0d38-40ce-860d-6c72cdeba436 was re-scheduled: Build of instance 
c249e39f-0d38-40ce-860d-6c72cdeba436 was re-scheduled: Anti-affinity instance 
group policy was violated.\n']
  19:45:29,526 WARNING [nova.scheduler.utils] Failed to 
compute_task_build_instances: 'NoneType' object is not iterable
  19:45:29,527 WARNING [nova.scheduler.utils] Setting instance to ERROR state.

  
  Two instances are being booted simultaneously and both land on the same host, 
so the second one will fail the late affinity check and raise a 
RescheduledException to be rescheduled to another host. But conductor fails to 
do that because the 'group_members' key doesn't exist in filter_properties and 
an attempt to make a list out of it fails [1].

  In the past, code [2] added 'group_members' to filter_properties to
  handle affinity, and a more recent change removed most of it but missed
  'group_members' [3]. So nothing ever sets
  filter_properties['group_members'], but RequestSpec.from_primitives()
  expects it to be there and blows up trying to make a list from None.

  
  [1] 
https://github.com/openstack/nova/blob/ad6d339/nova/objects/request_spec.py#L205
 
  [2] https://review.openstack.org/#/c/148277
  [3] https://review.openstack.org/#/c/469037
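
  A minimal sketch of the crash and one defensive variant (the helper name
  is hypothetical; the merged fix may differ):

    def group_members_from(filter_properties):
        members = filter_properties.get('group_members')
        # Buggy pattern described above: list(None) raises
        #   TypeError: 'NoneType' object is not iterable
        # return list(members)
        return list(members or [])

    assert group_members_from({}) == []
    assert group_members_from({'group_members': ('a', 'b')}) == ['a', 'b']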

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1719730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719730] [NEW] Reschedule after the late affinity check fails with "'NoneType' object is not iterable"

2017-09-26 Thread melanie witt
Public bug reported:

Ran into this while hacking on something locally and running the server
groups functional tests:


==
Failed 1 tests - output below:
==

nova.tests.functional.test_server_group.ServerGroupTestV215.test_rebuild_with_anti_affinity
---

Captured pythonlogging:
~~~
19:45:29,525 ERROR [nova.scheduler.utils] Error from last host: host2 (node 
host2): ['Traceback (most recent call last):\n', '  File 
"nova/compute/manager.py", line 1831, in _do_build_and_run_instance\n
filter_properties)\n', '  File "nova/compute/manager.py", line 2061, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 'RescheduledException: Build of instance 
c249e39f-0d38-40ce-860d-6c72cdeba436 was re-scheduled: Build of instance 
c249e39f-0d38-40ce-860d-6c72cdeba436 was re-scheduled: Anti-affinity instance 
group policy was violated.\n']
19:45:29,526 WARNING [nova.scheduler.utils] Failed to 
compute_task_build_instances: 'NoneType' object is not iterable
19:45:29,527 WARNING [nova.scheduler.utils] Setting instance to ERROR state.


Two instances are being booted simultaneously and both land on the same host, 
so the second one will fail the late affinity check and raise a 
RescheduledException to be rescheduled to another host. But conductor fails to 
do that because the 'group_members' key doesn't exist in filter_properties and 
an attempt to make a list out of it fails [1].

In the past, code [2] added 'group_members' to filter_properties to
handle affinity, and a more recent change removed most of it but missed
'group_members' [3]. So nothing ever sets
filter_properties['group_members'], but RequestSpec.from_primitives()
expects it to be there and blows up trying to make a list from None.


[1] 
https://github.com/openstack/nova/blob/ad6d339/nova/objects/request_spec.py#L205
 
[2] https://review.openstack.org/#/c/148277
[3] https://review.openstack.org/#/c/469037

** Affects: nova
 Importance: High
 Status: Confirmed

** Affects: nova/pike
 Importance: High
 Status: Confirmed


** Tags: affinity requestspec reschedule server-groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1719730

Title:
  Reschedule after the late affinity check fails with "'NoneType' object
  is not iterable"

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  Ran into this while hacking on something locally and running the
  server groups functional tests:

  
  ==
  Failed 1 tests - output below:
  ==

  
nova.tests.functional.test_server_group.ServerGroupTestV215.test_rebuild_with_anti_affinity
  
---

  Captured pythonlogging:
  ~~~
  19:45:29,525 ERROR [nova.scheduler.utils] Error from last host: host2 (node 
host2): ['Traceback (most recent call last):\n', '  File 
"nova/compute/manager.py", line 1831, in _do_build_and_run_instance\n
filter_properties)\n', '  File "nova/compute/manager.py", line 2061, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 'RescheduledException: Build of instance 
c249e39f-0d38-40ce-860d-6c72cdeba436 was re-scheduled: Build of instance 
c249e39f-0d38-40ce-860d-6c72cdeba436 was re-scheduled: Anti-affinity instance 
group policy was violated.\n']
  19:45:29,526 WARNING [nova.scheduler.utils] Failed to 
compute_task_build_instances: 'NoneType' object is not iterable
  19:45:29,527 WARNING [nova.scheduler.utils] Setting instance to ERROR state.

  
  Two instances are being booted simultaneously and both land on the same host, 
so the second one will fail the late affinity check and raise a 
RescheduledException to be rescheduled to another host. But conductor fails to 
do that because the 'group_members' key doesn't exist in filter_properties and 
an attempt to make a list out of it fails [1].

  In the past, code [2] added 'group_members' to filter_properties to
  handle affinity, and a more recent change removed most of it but missed
  'group_members' [3]. So nothing ever sets
  filter_properties['group_members'], but RequestSpec.from_primitives()
  expects it to be there and blows up trying to make a list from None.

  
  [1] 
https://github.com/openstack/nova/blob/ad6d339/nova/objects/request_spec.py#L205
 
  [2] https://review.openstack.org/#/c/148277
  [3] https://review.openstack.org/#/c/469037

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1719730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1719711] [NEW] iptables failed to apply when binding a port with AGENT.debug_iptables_rules enabled

2017-09-26 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/21/504021/2/check/gate-tempest-dsvm-neutron-
scenario-linuxbridge-ubuntu-xenial-nv/e47a3f3/testr_results.html.gz


Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/neutron/tests/tempest/scenario/test_security_groups.py",
 line 127, in test_two_sec_groups
num_servers=1, security_groups=security_groups_list)
  File 
"/opt/stack/new/neutron/neutron/tests/tempest/scenario/test_security_groups.py",
 line 54, in create_vm_testing_sec_grp
const.SERVER_STATUS_ACTIVE)
  File "tempest/common/waiters.py", line 76, in wait_for_server_status
server_id=server_id)
tempest.exceptions.BuildErrorException: Server 
e1120d99-f0eb-43eb-a38b-847843a838b5 failed to build and is in ERROR status
Details: {u'message': u'Build of instance e1120d99-f0eb-43eb-a38b-847843a838b5 
aborted: Failed to allocate the network(s), not rescheduling.', u'code': 500, 
u'created': u'2017-09-26T09:23:42Z'}

In linuxbridge agent log: http://logs.openstack.org/21/504021/2/check
/gate-tempest-dsvm-neutron-scenario-linuxbridge-ubuntu-xenial-
nv/e47a3f3/logs/screen-q-agt.txt.gz?level=TRACE#_Sep_26_09_16_30_623747

Sep 26 09:16:30.623747 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR neutron.agent.linux.iptables_manager 
[None req-78fc6bc1-a089-4d5f-91d8-e5191e45978c None None] IPTables Rules did 
not converge. Diff: # Generated by iptables_manager
Sep 26 09:16:30.623936 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: *filter
Sep 26 09:16:30.624117 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: -D neutron-linuxbri-ibc1a22b9-e 6
Sep 26 09:16:30.624316 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: -I neutron-linuxbri-ibc1a22b9-e 6 -p 1 -j 
RETURN
Sep 26 09:16:30.624482 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: COMMIT
Sep 26 09:16:30.624955 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: # Completed by iptables_manager
Sep 26 09:16:30.635308 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent [None 
req-78fc6bc1-a089-4d5f-91d8-e5191e45978c None None] Error in agent loop. 
Devices info: {'current': set(['tapbc1a22b9-ef', 'tapc9488f0f-ae', 
'tape2d2e245-96', 'tap93881b27-41', 'tapb265ee77-37', 'tapbadc6b64-69', 
'tapa813220a-1d', 'tapa376782a-75', 'tap395ccf4d-c9', 'tapca94a412-e7', 
'tap58f740f2-aa', 'tapb2444941-9f']), 'timestamps': {'tap93881b27-41': 56, 
'tapc9488f0f-ae': 62, 'tape2d2e245-96': 11, 'tapbc1a22b9-ef': 68, 
'tapb265ee77-37': 9, 'tapbadc6b64-69': 55, 'tapa813220a-1d': 66, 
'tapa376782a-75': 65, 'tap395ccf4d-c9': 67, 'tapca94a412-e7': 6, 
'tap58f740f2-aa': 59, 'tapb2444941-9f': 10}, 'removed': set([]), 'added': 
set([]), 'updated': set([])}: IpTablesApplyException: IPTables Rules did not 
converge. Diff: # Generated by iptables_manager
Sep 26 09:16:30.636316 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: *filter
Sep 26 09:16:30.636510 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: -D neutron-linuxbri-ibc1a22b9-e 6
Sep 26 09:16:30.636700 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: -I neutron-linuxbri-ibc1a22b9-e 6 -p 1 -j 
RETURN
Sep 26 09:16:30.636898 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: COMMIT
Sep 26 09:16:30.637075 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: # Completed by iptables_manager
Sep 26 09:16:30.637269 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call 
last):
Sep 26 09:16:30.637683 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 453, in daemon_loop
Sep 26 09:16:30.637962 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent sync = 
self.process_network_devices(device_info)
Sep 26 09:16:30.638211 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 157, in 
wrapper
Sep 26 09:16:30.638373 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent result = f(*args, **kwargs)
Sep 26 09:16:30.638538 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 200, in process_network_devices
Sep 26 09:16:30.638728 ubuntu-xenial-ovh-gra1-11134533 
neutron-linuxbridge-agent[24363]: ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 

[Yahoo-eng-team] [Bug 1719714] [NEW] Excessive logging of "We're on a Pike compute host in a deployment with all Pike compute hosts."

2017-09-26 Thread Matt Riedemann
Public bug reported:

There are two issues with this log message from the resource tracker:

Sep 26 18:44:37 devstack nova-compute[30351]: DEBUG
nova.compute.resource_tracker [None req-992d494e-d328-4204-bcfe-
80d926cf0a65 demo demo] We're on a Pike compute host in a deployment
with all Pike compute hosts. Skipping auto-correction of allocations.
{{(pid=30351) _update_usage_from_instance
/opt/stack/nova/nova/compute/resource_tracker.py:1071}}

1. If you're in Queens, you don't have Pike compute hosts. So the
message is misleading.

2. The message gets logged once per instance on a given compute host
every minute when the update_instance_allocation periodic task runs.

We should fix the message in #1 to say something other than Pike
specifically, and fix #2 to log that once per periodic run, rather than
once per instance per periodic.
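
A hypothetical sketch of the second fix, hoisting the message out of the
per-instance loop so it is logged once per periodic run (names illustrative,
not the actual resource tracker code):

  import logging

  LOG = logging.getLogger(__name__)

  def update_allocations(instances, all_computes_current):
      if all_computes_current:
          # Logged once per periodic run, not once per instance.
          LOG.debug('All compute hosts are current; skipping '
                    'auto-correction of allocations this cycle.')
      for instance in instances:
          _update_usage_from_instance(instance)

  def _update_usage_from_instance(instance):
      pass  # resource accounting elided; no per-instance log here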

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: compute low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1719714

Title:
  Excessive logging of "We're on a Pike compute host in a deployment
  with all Pike compute hosts."

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  There are two issues with this log message from the resource tracker:

  Sep 26 18:44:37 devstack nova-compute[30351]: DEBUG
  nova.compute.resource_tracker [None req-992d494e-d328-4204-bcfe-
  80d926cf0a65 demo demo] We're on a Pike compute host in a deployment
  with all Pike compute hosts. Skipping auto-correction of allocations.
  {{(pid=30351) _update_usage_from_instance
  /opt/stack/nova/nova/compute/resource_tracker.py:1071}}

  1. If you're in Queens, you don't have Pike compute hosts. So the
  message is misleading.

  2. The message gets logged once per instance on a given compute host
  every minute when the update_instance_allocation periodic task runs.

  We should fix the message in #1 to say something other than Pike
  specifically, and fix #2 to log that once per periodic run, rather
  than once per instance per periodic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1719714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710589] Re: rally sla failure / internal error on load

2017-09-26 Thread Ihar Hrachyshka
** Changed in: neutron
   Importance: High => Undecided

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Importance: Medium => Undecided

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1710589

Title:
  rally sla failure / internal error on load

Status in networking-midonet:
  New

Bug description:
  On the networking-midonet gate, the max_seconds_per_iteration validation is
failing badly for the following scenarios.
  These scenarios are currently disabled because of an unrelated issue [1],
but there is an attempt to revive them [2].
  IIRC, these scenarios used to work; I don't remember when that was, though
(maybe the Newton timeframe).

  NetworkPlugin.create_ports
  NetworkPlugin.create_routers
  NetworkPlugin.create_subnets
  NetworkPlugin.create_subnets_routers_interfaces

  [1] bug 1670577
  [2] https://review.openstack.org/#/c/492374/

  It's probably due to retriable exceptions like the following, caused by
  contention.

  eg. http://logs.openstack.org/74/492374/3/check/gate-networking-
  midonet-rally-dsvm-ml2-ubuntu-xenial/d1aa9b9/logs/screen-q-svc.txt.gz

  Aug 14 06:08:16.188519 ubuntu-xenial-citycloud-kna1-10421138 neutron-
  server[14391]: DEBUG neutron.db.api [None req-
  6c986099-0e8d-4181-a488-c73d348f2ef5 c_rally_1a0448a8_y9LQrXNX
  c_rally_1a0448a8_QH0dDRH4] Retry wrapper got retriable exception:
  UPDATE statement on table 'standardattributes' expected to update 1
  row(s); 0 were matched. {{(pid=14479) wrapped
  /opt/stack/new/neutron/neutron/db/api.py:129}}

  Aug 14 05:46:27.279500 ubuntu-xenial-citycloud-kna1-10421138 neutron-
  server[14391]: DEBUG neutron.db.api [None req-6134c462-06e4-493f-
  8ff1-9b206dfda3dd c_rally_edf0ce7f_CaBvNcQe c_rally_edf0ce7f_ONZDWSHE]
  Retry wrapper got retriable exception: Failed to create a duplicate
  IpamAllocation: for attribute(s) ['PRIMARY'] with value(s)
  2.224.0.54-29522f59-ea05-4904-b034-fa3555da8ade {{(pid=14479) wrapped
  /opt/stack/new/neutron/neutron/db/api.py:129}}

  Aug 14 05:39:37.960137 ubuntu-xenial-citycloud-kna1-10421138 neutron-
  server[14391]: DEBUG neutron.db.api [None req-f3ecc8f3-2c7a-4b3c-ac8c-
  abe2766e36f4 c_rally_6464adb0_pXKwRzJz c_rally_6464adb0_uyMbSZqP]
  Retry wrapper got retriable exception: Failed to create a duplicate
  DefaultSecurityGroup: for attribute(s) ['PRIMARY'] with value(s)
  bc3b6a26a56646e7b098b9419d17c0d1 {{(pid=14480) wrapped
  /opt/stack/new/neutron/neutron/db/api.py:129}}

  Even without max_seconds_per_iteration, it sometimes times out,
  or causes an internal error once the retry limit is reached.

  eg. http://logs.openstack.org/74/492374/4/check/gate-networking-
  midonet-rally-dsvm-ml2-ubuntu-
  xenial/5f6c27d/logs/screen-q-svc.txt.gz?level=TRACE

  Aug 14 13:54:59.896408 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api [None 
req-c66e03cb-f36a-494b-9f83-fcb51dd0a420 c_rally_732dd379_PVser3Et 
c_rally_732dd379_0ywBv9Oh] DB exceeded retry limit.: StaleDataError: UPDATE 
statement on table 'standardattributes' expected to update 1 row(s); 0 were 
matched.
  Aug 14 13:54:59.896764 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api Traceback (most recent call last):
  Aug 14 13:54:59.896968 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  Aug 14 13:54:59.897135 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api return f(*args, **kwargs)
  Aug 14 13:54:59.897300 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 129, in wrapped
  Aug 14 13:54:59.897465 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api LOG.debug("Retry wrapper got 
retriable exception: %s", e)
  Aug 14 13:54:59.897653 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Aug 14 13:54:59.897819 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api self.force_reraise()
  Aug 14 13:54:59.897970 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Aug 14 13:54:59.898135 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api six.reraise(self.type_, 
self.value, self.tb)
  Aug 14 13:54:59.898300 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 125, in wrapped
  Aug 14 13:54:59.898458 ubuntu-xenial-citycloud-la1-10426313 
neutron-server[13990]: ERROR oslo_db.api return 

[Yahoo-eng-team] [Bug 1716005] Re: validate doc links as part of a release

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/505327
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fc6e9a71d80aaed4a804c4922cc2094b9e6d1857
Submitter: Jenkins
Branch:master

commit fc6e9a71d80aaed4a804c4922cc2094b9e6d1857
Author: Boden R 
Date:   Tue Sep 19 10:37:29 2017 -0600

add doc link validation to release checklist and tox

This patch updates our doc conf.py to support the linkcheck builder in
addition to adding a new 'linkcheck' target in tox to run the builder.
Also the release checklist is updated suggesting the linkcheck tox
target be run prior to a release.

Change-Id: Ia7c282b7331f0b624bb3324f27dfec223cf414f7
Closes-Bug: #1716005


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716005

Title:
  validate doc links as part of a release

Status in neutron:
  Fix Released

Bug description:
  Today we have no validation of links (internal, relative or static) as
  part of our doc build. As a result we can end up with dead links over
  time that are typically noticed by our users... Less than optimal.

  As part of the comments in [1], it was suggested we try to validate
  links in the gate. While sounding simple it actually becomes more
  complex [2] given eventlet usage, considerations for periodic job,
  etc..

  This bug is to track the work to add some sort of validation during
  our build, perhaps as a periodic job.

  
  [1] https://review.openstack.org/#/c/500095/
  [2] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/121833.html
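
  For reference, the general shape of such a tox target (an illustrative
  sketch, not the exact merged configuration):

    # tox.ini
    [testenv:linkcheck]
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b linkcheck doc/source doc/build/linkcheck

  Running 'tox -e linkcheck' before cutting a release then fails on dead
  links.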

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1716005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704975] Re: Nova API floating IP delete fails

2017-09-26 Thread Marcus Klein
Reopen as this still happens.

The steps to reproduce are described in detail in the bug description.
Nova's read of the floating IP from Neutron needs to happen while the VM is
still alive and Nova has not yet started deleting it. The subsequent request
to delete the floating IP needs to happen after Nova has removed the VM.

This happens for us in production using automated tests with Kitchen.
kitchen destroy schedules the deletion of the VM in Nova. After kitchen
destroy we issue the delete command for the floating IP using old openstack
clients that still send this request to the Nova daemon and not to the
Neutron daemon.

Maybe this is easily reproducible if you add breakpoints in Nova between
fetching the floating IP information from Neutron and sending the delete
request. The other breakpoint needs to be right before the VM is destroyed
in Nova. The steps to pass the breakpoints have now been described multiple
times.

** Changed in: nova
   Status: Expired => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704975

Title:
  Nova API floating IP delete fails

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  This happens with OpenStack Newton.
  We use some outdated openstack clients that do not provide the "floating ip
delete" command. Therefore we use the "ip floating delete" command.

  The floating IP is allocated using the "ip floating create" command. The 
floating IP is then associated to a newly started virtual machine.
  After some automated tests, the command to delete the virtual machine is 
sent. To keep the project clean, the floating IP is deleted afterwards using 
the "ip floating delete" command. This command fails from time to time due to 
some race condition.

  This race condition seems to be the following. Nova API requests
  information about the floating IP from neutron:

  GET /v2.0/floatingips/3aeb2a7e-ac37-4712-ac69-80dc83a060e6.json HTTP/1.1
  Host: controller:9696
  Connection: keep-alive
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-neutronclient
  X-Auth-Token: 
gABZbYLBVmPuNg6lSJzyQEuUbSfAcUgJTWDYrYaRJM2ZVzfRSpKPINNfl61M7Ohdfm-1lfx03_RqXhCGBRUKrPYgBpeKWCmiF-hsd2CActr6LMw3dbOHPrrPl8JOvVQ36caRyiDSFa4xY2getQjAfitJgdQknspRUdzpJAU8jvPxxHPSXvfrAWM8J1M2NzDnJKs0-JnZmYK5NlbJDcRMgfLtknzTGJJs6TnhqaDts_i234RsWf8

  {"floatingip": {"router_id": "aaab3136-d551-4e23-a62a-d16f89d34ec5",
  "status": "ACTIVE", "description": "", "updated_at":
  "2017-07-18T03:37:33Z", "dns_domain": "", "floating_network_id":
  "f3d64d76-a4e5-473b-878f-78b032ab89df", "fixed_ip_address":
  "192.168.3.10", "floating_ip_address": "10.50.0.62",
  "revision_number": 2, "port_id":
  "4b082eae-a527-45e6-8603-0d0882098e39", "id": "3aeb2a7e-
  ac37-4712-ac69-80dc83a060e6", "dns_name": "", "created_at":
  "2017-07-18T03:37:04Z", "tenant_id":
  "7829521236b143d2a6778e09ba588ec0", "project_id":
  "7829521236b143d2a6778e09ba588ec0"}}

  In the response the floating IP still seems to be associated to the
  virtual machine - but the request to delete the virtual machine was
  already sent.

  Nova API requests information about the port, too:

  GET /v2.0/ports/4b082eae-a527-45e6-8603-0d0882098e39.json HTTP/1.1
  Host: controller:9696
  Connection: keep-alive
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-neutronclient
  X-Auth-Token: 
gABZbYLBVmPuNg6lSJzyQEuUbSfAcUgJTWDYrYaRJM2ZVzfRSpKPINNfl61M7Ohdfm-1lfx03_RqXhCGBRUKrPYgBpeKWCmiF-hsd2CActr6LMw3dbOHPrrPl8JOvVQ36caRyiDSFa4xY2getQjAfitJgdQknspRUdzpJAU8jvPxxHPSXvfrAWM8J1M2NzDnJKs0-JnZmYK5NlbJDcRMgfLtknzTGJJs6TnhqaDts_i234RsWf8

  {"NeutronError": {"message": "Port
  4b082eae-a527-45e6-8603-0d0882098e39 could not be found.", "type":
  "PortNotFound", "detail": ""}}

  But the virtual machine and its floating IP association seem to have
  been deleted in the meantime. This makes the Nova API fail to delete
  the floating IP:

  2017-07-18 05:38:41.798 7170 INFO nova.api.openstack.wsgi 
[req-b8f7414b-6398-4738-b660-fac693a187ab 
807ca03b46124ca28309d06a8e66d25e7b7adfe872a6f58903552439304fa173 
7829521236b143d2a6778e09ba588ec0 - 27a851d6a3de43b8a4a4900dfa0c3141 
27a851d6a3de43b8a4a4900dfa0c3141] HTTP exception thrown: Floating IP not found 
for ID 3aeb2a7e-ac37-4712-ac69-80dc83a060e6
  2017-07-18 05:38:41.800 7170 INFO nova.osapi_compute.wsgi.server 
[req-b8f7414b-6398-4738-b660-fac693a187ab 
807ca03b46124ca28309d06a8e66d25e7b7adfe872a6f58903552439304fa173 
7829521236b143d2a6778e09ba588ec0 - 27a851d6a3de43b8a4a4900dfa0c3141 
27a851d6a3de43b8a4a4900dfa0c3141] 10.20.30.237 "GET 
/v2.1/7829521236b143d2a6778e09ba588ec0/os-floating-ips/3aeb2a7e-ac37-4712-ac69-80dc83a060e6
 HTTP/1.1" status: 404 len: 466 time: 0.2575810

  openstack ip floating delete 3aeb2a7e-ac37-4712-ac69-80dc83a060e6

  2017-07-18 11:32:50.867 7171 INFO nova.osapi_compute.wsgi.server 

[Yahoo-eng-team] [Bug 1711466] Re: No way to configure timesyncd specifically

2017-09-26 Thread Scott Moser
** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: New => Confirmed

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1711466

Title:
  No way to configure timesyncd specifically

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Ubuntu (starting from Xenial) uses timesyncd by default as an NTP
  client.

  When configuring NTP, cloud-init installs and configures the 'ntp'
  daemon instead of configuring Ubuntu's default NTP client. In Ubuntu
  Core, however, it configures 'timesyncd' (obviously), because Ubuntu
  Core doesn't support 'ntp'.

  First, it would be nice to have consistency between both Ubuntu and
  Ubuntu Core and configure timesyncd (as it is the default anyway).

  Second, let's imagine the use case where an image is configured with
  NTP and you have an NTP snap, or a snap that also needs to run NTP.

  In Ubuntu Core this works nicely because the OS uses timesyncd as its
  time source, while the snap provides NTP services to its clients.

  However, in Ubuntu this doesn't work nicely: 'ntp' is running on the
  host as an NTP client, and the snap shipping NTP needs to provide NTP
  services, but it won't be able to while the host OS *also* has 'ntp'
  installed.

  As such, it would be nice to either keep consistency (and configure
  timesyncd for both Ubuntu Core and Ubuntu), or have the ability to
  specifically configure 'timesyncd' instead of 'ntpd'.
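
  For illustration, the kind of cloud-config this asks for could look like
  the sketch below; 'pools' and 'enabled' follow the existing ntp module
  schema, while a client selector such as 'ntp_client' is exactly what is
  requested here (hypothetical at the time of this report):

    #cloud-config
    ntp:
      enabled: true
      # hypothetical selector instead of hardcoding the 'ntp' daemon:
      ntp_client: systemd-timesyncd
      pools:
        - 0.pool.ntp.org
        - 1.pool.ntp.org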

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1711466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708460] Re: [RFE] Reaction to network congestion for qos

2017-09-26 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708460

Title:
   [RFE] Reaction to network congestion for qos

Status in neutron:
  New

Bug description:
  Problem description:
  
  In a multi-tenant environment, some VMs could consume the available
bandwidth and cause network congestion for those that share the same network
resources. To improve the quality of service, openstack implements a service
to limit the bandwidth of VM ports and classify traffic using the DSCP field.
However, there is still no mechanism for policing specific VMs that consume a
lot of resources and generate network congestion.

  
  API effects:
  
  By adding a new rule to the QoS extension [1] of neutron, we could detect
and react dynamically to congestion. This rule would police VMs causing
congestion by limiting the allowed bandwidth on their ports.

  
  Proposal:
  *
  Our proposal is to use the Explicit Congestion Notification (ECN) [2] bits
as the marking mechanism: network devices nearing congestion will mark
packets as Congestion Experienced (CE). Neutron will detect these packets and
start monitoring the behavior of the VM causing the congestion. When the
congestion exceeds a threshold, neutron will react by policing this VM using
the bandwidth limit rule of the QoS extension. Neutron will keep track of the
congestion rate and free the policed VMs when the congestion is over.
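
  As an illustration, the reaction could reuse the existing QoS CLI surface
  along these lines (policy name and values hypothetical):

    openstack network qos policy create congestion-police
    openstack network qos rule create --type bandwidth-limit \
        --max-kbps 1000 congestion-police
    # Attach to the port of a VM observed emitting CE-marked traffic,
    # and detach it again once the congestion rate drops below the threshold.
    openstack port set --qos-policy congestion-police $PORT_ID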

  References:
  ***
  [1] QoS https://docs.openstack.org/ocata/networking-guide/config-qos.html
  [2] ECN in IP https://tools.ietf.org/html/rfc3168

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719606] [NEW] Incorrect subnets quota check in admin networks panel

2017-09-26 Thread wei.ying
Public bug reported:

In the admin networks table row actions, a subnet can be created, but the
subnet quota check retrieves the subnet quota of the current tenant. It
should instead use the tenant of the selected network.

In addition, the create-subnet table action in the network details tab
lacks a quota check.

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => wei.ying (wei.yy)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1719606

Title:
  Incorrect subnets quota check in admin networks panel

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the admin networks table row actions, a subnet can be created, but
  the subnet quota check retrieves the subnet quota of the current
  tenant. It should instead use the tenant of the selected network.

  In addition, the create-subnet table action in the network details tab
  lacks a quota check.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1719606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717917] Re: test_resize_server_error_and_reschedule_was_failed failing due to missing notification

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/504930
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0262e4146d63a0d7b3fbb1942a57a7a136820aec
Submitter: Jenkins
Branch:master

commit 0262e4146d63a0d7b3fbb1942a57a7a136820aec
Author: Balazs Gibizer 
Date:   Mon Sep 18 14:33:13 2017 +0200

stabilize test_resize_server_error_and_reschedule_was_failed

The test_resize_server_error_and_reschedule_was_failed notification sample
test only waits for the instance to go to ERROR state and then asserts that
two notifications are emitted. However, the second notification,
compute.exception, is only emitted after the instance is put into ERROR
state. This makes the test unstable.

This patch makes the test stable by waiting for the compute.exception
notification to arrive before asserting the received notifications.

Change-Id: Ia5311ffc12784987c138b127e43cfc52019cb3ea
Closes-Bug: #1717917


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717917

Title:
  test_resize_server_error_and_reschedule_was_failed failing due to
  missing notification

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The test_resize_server_error_and_reschedule_was_failed case failed in
  Jenkins a couple of times [1]. It seems that the test only waits for
  the instance to go to ERROR state, but the compute.exception
  notification is emitted after that, which makes the test racy.

  [1]
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22testtools.matchers._impl.MismatchError%3A%202%20!%3D%201%3A%20Unexpected%20number%20of%20notifications%3A%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1717917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719561] [NEW] Instance action's updated_at isn't updated when the action is created or an action event is updated.

2017-09-26 Thread Yikun Jiang
Public bug reported:

Description
===
When we perform an operation on an instance, an instance action (such as
'create') is recorded in the 'instance_actions' table, and sub-events (such
as compute__do_build_and_run_instance) are recorded in the
'instance_actions_events' table.

We need to update the instance action's updated_at when instance action
events are created.

Steps to reproduce
==
1. Create an instance
nova boot --image 81e58b1a-4732-4255-b4f8-c844430485d2 --flavor 1 yikun

2. Look up record in instance_actions and instance_actions_events
mysql> select * from instance_actions\G
*** 1. row ***
   created_at: 2017-09-25 07:16:07
   updated_at: NULL--->  here 
   deleted_at: NULL
   id: 48
   action: create
instance_uuid: fdd52ec6-100b-4a25-a5db-db7c5ad17fa8
   request_id: req-511dee3e-8951-4360-b72b-3a7ec091e7c8
  user_id: 1687f2a66222421790475760711e40e5
   project_id: 781b620d86534d549dd64902674c0f69
   start_time: 2017-09-25 07:16:05
  finish_time: NULL
  message: NULL
  deleted: 0


mysql> select * from instance_actions_events\G
*** 1. row ***
   created_at: 2017-09-25 07:16:07
   updated_at: 2017-09-25 07:16:22
   deleted_at: NULL
   id: 1
   action: create
instance_uuid: fdd52ec6-100b-4a25-a5db-db7c5ad17fa8
   request_id: req-511dee3e-8951-4360-b72b-3a7ec091e7c8
  user_id: 1687f2a66222421790475760711e40e5
   project_id: 781b620d86534d549dd64902674c0f69
   start_time: 2017-09-25 07:16:05
  finish_time: NULL
  message: NULL
  deleted: 0

  
Expected result
===
The instance action's updated_at should be updated when instance action
events are started or finished, or when the instance action is created.

Actual result
=
The instance action's updated_at is never set.
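
A hypothetical sketch of the requested behavior, with plain data structures
standing in for the DB layer:

  import datetime

  def record_action_event(action_row, events, event_row):
      # Recording an event also touches the parent action's updated_at,
      # which this report says is currently left NULL.
      now = datetime.datetime.utcnow()
      event_row['created_at'] = now
      events.append(event_row)
      action_row['updated_at'] = now  # the missing update

  action = {'action': 'create', 'updated_at': None}
  events = []
  record_action_event(action, events,
                      {'event': 'compute__do_build_and_run_instance'})
  assert action['updated_at'] is not None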

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1719561

Title:
  Instance action's updated_at isn't updated when the action is created
  or an action event is updated.

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When we perform an operation on an instance, an instance action (such as
  'create') is recorded in the 'instance_actions' table, and sub-events
  (such as compute__do_build_and_run_instance) are recorded in the
  'instance_actions_events' table.

  We need to update the instance action's updated_at when instance action
  events are created.

  Steps to reproduce
  ==
  1. Create an instance
  nova boot --image 81e58b1a-4732-4255-b4f8-c844430485d2 --flavor 1 yikun

  2. Look up record in instance_actions and instance_actions_events
  mysql> select * from instance_actions\G
  *** 1. row ***
 created_at: 2017-09-25 07:16:07
 updated_at: NULL--->  here 
 deleted_at: NULL
 id: 48
 action: create
  instance_uuid: fdd52ec6-100b-4a25-a5db-db7c5ad17fa8
 request_id: req-511dee3e-8951-4360-b72b-3a7ec091e7c8
user_id: 1687f2a66222421790475760711e40e5
 project_id: 781b620d86534d549dd64902674c0f69
 start_time: 2017-09-25 07:16:05
finish_time: NULL
message: NULL
deleted: 0

  
  mysql> select * from instance_actions_events\G
  *** 1. row ***
 created_at: 2017-09-25 07:16:07
 updated_at: 2017-09-25 07:16:22
 deleted_at: NULL
 id: 1
 action: create
  instance_uuid: fdd52ec6-100b-4a25-a5db-db7c5ad17fa8
 request_id: req-511dee3e-8951-4360-b72b-3a7ec091e7c8
user_id: 1687f2a66222421790475760711e40e5
 project_id: 781b620d86534d549dd64902674c0f69
 start_time: 2017-09-25 07:16:05
finish_time: NULL
message: NULL
deleted: 0

  
  Expected result
  ===
  The instance action's updated_at should be updated when instance action
  events are started or finished, or when the instance action is created.

  Actual result
  =
  The instance action's updated_at is never set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1719561/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713442] Re: Update documentation with new image states

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/493436
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=824daf51dc0e5dfd23108ca221f9774e2b8acc38
Submitter: Jenkins
Branch:master

commit 824daf51dc0e5dfd23108ca221f9774e2b8acc38
Author: Rui Yuan Dou 
Date:   Mon Aug 14 13:44:49 2017 +0800

Update image statuses doc for latest change

1.correct typo
2.add new state 'uploading' and 'importing'

Change-Id: I9962fb5f450f13c6a92bce826767ee880e7e7afe
Closes-bug: 1713442


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1713442

Title:
  Update documentation with new image states

Status in Glance:
  Fix Released

Bug description:
  Two new image statuses are introduced in 2.6, 'uploading' and
  'importing'. The image status doc page needs to be updated:
  https://docs.openstack.org/glance/latest/user/statuses.html.

  Glance specs: https://specs.openstack.org/openstack/glance-
  specs/specs/mitaka/approved/image-import/image-import-refactor.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1713442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1695299] Re: Glance installation fails if password contains '@' symbol

2017-09-26 Thread Erno Kuvaja
** Changed in: glance
   Status: New => In Progress

** Changed in: glance
   Importance: Undecided => High

** Also affects: glance/pike
   Importance: Undecided
   Status: New

** Also affects: glance/queens
   Importance: High
 Assignee: Aizuddin Zali (mymzbe)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1695299

Title:
  Glance installation fails if password contains '@' symbol

Status in Glance:
  In Progress
Status in Glance pike series:
  New
Status in Glance queens series:
  In Progress

Bug description:
  I was doing a fresh installation of devstack today and had set the
  admin password to "Test@321" in the local.conf file. The installation
  started failing for Glance, and when I looked at the backtrace, it
  appears the '@' is URL-quoted to '%40' and things start failing from
  there.

  Either this needs to be fixed, or a proper note needs to be added to
  the devstack setup docs stating that one cannot use the '@' symbol in
  passwords.

  ubuntu@openstack:~/devstack$ glance --version
  2.6.0
  ubuntu@openstack:~/devstack$

  CRITICAL glance [-] Unhandled error: ValueError: invalid interpolation syntax 
in 'mysql+pymysql://root:Test%40321@127.0.0.1/glance?charset=utf8' at position 
27
  ERROR glance Traceback (most recent call last):
  ERROR glance   File "/usr/local/bin/glance-manage", line 10, in 
  ERROR glance sys.exit(main())
  ERROR glance   File "/opt/stack/glance/glance/cmd/manage.py", line 452, in 
main
  ERROR glance return CONF.command.action_fn()
  ERROR glance   File "/opt/stack/glance/glance/cmd/manage.py", line 291, in 
sync
  ERROR glance self.command_object.sync(CONF.command.version)
  ERROR glance   File "/opt/stack/glance/glance/cmd/manage.py", line 117, in 
sync
  ERROR glance alembic_migrations.place_database_under_alembic_control()
  ERROR glance   File 
"/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/__init__.py", line 
73, in place_database_under_alembic_control
  ERROR glance a_config = get_alembic_config()
  ERROR glance   File 
"/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/__init__.py", line 
37, in get_alembic_config
  ERROR glance config.set_main_option('sqlalchemy.url', str(engine.url))
  ERROR glance   File 
"/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 218, in 
set_main_option
  ERROR glance self.set_section_option(self.config_ini_section, name, value)
  ERROR glance   File 
"/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 245, in 
set_section_option
  ERROR glance self.file_config.set(section, name, value)
  ERROR glance   File "/usr/lib/python2.7/ConfigParser.py", line 752, in set
  ERROR glance "position %d" % (value, tmp_value.find('%')))
  ERROR glance ValueError: invalid interpolation syntax in 
'mysql+pymysql://root:Test%40321@127.0.0.1/glance?charset=utf8' at position 27
  ERROR glance
  +lib/glance:init_glance:1  exit_trap
  +./stack.sh:exit_trap:492  local r=1
  ++./stack.sh:exit_trap:493  jobs -p
  +./stack.sh:exit_trap:493  jobs=
  +./stack.sh:exit_trap:496  [[ -n '' ]]
  +./stack.sh:exit_trap:502  kill_spinner
  +./stack.sh:kill_spinner:388   '[' '!' -z '' ']'
  +./stack.sh:exit_trap:504  [[ 1 -ne 0 ]]
  +./stack.sh:exit_trap:505  echo 'Error on exit'
  Error on exit
  +./stack.sh:exit_trap:506  generate-subunit 1496417170 632 
fail
  +./stack.sh:exit_trap:507  [[ -z /opt/stack/logs ]]
  +./stack.sh:exit_trap:510  
/home/ubuntu/devstack/tools/worlddump.py -d /opt/stack/logs
  World dumping... see /opt/stack/logs/worlddump-2017-06-02-153642.txt for 
details
  +./stack.sh:exit_trap:516  exit 1
  ubuntu@openstack:~/devstack$
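
  A minimal sketch of one possible workaround (not necessarily the eventual
  fix): Python 2's SafeConfigParser treats '%' as interpolation syntax, so
  the URL-quoted password ('@' becomes '%40') has to be escaped before it
  is stored:

    import ConfigParser  # Python 2, as in the traceback above

    url = 'mysql+pymysql://root:Test%40321@127.0.0.1/glance?charset=utf8'

    cp = ConfigParser.SafeConfigParser()
    cp.add_section('alembic')
    # cp.set('alembic', 'sqlalchemy.url', url)  # ValueError at position 27
    cp.set('alembic', 'sqlalchemy.url', url.replace('%', '%%'))  # ok

  glance-manage could apply the same escaping before handing the URL to
  alembic's config.set_main_option().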

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1695299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697588] Re: update tempest plugin after removal of cred manager aliases

2017-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/507012
Committed: 
https://git.openstack.org/cgit/openstack/ironic-inspector/commit/?id=7571be01d1928490bb47a02a916c7ad8f45156f5
Submitter: Jenkins
Branch:master

commit 7571be01d1928490bb47a02a916c7ad8f45156f5
Author: Luong Anh Tuan 
Date:   Mon Sep 25 14:58:10 2017 +0700

Replace the usage of 'admin_manager' with 'os_admin'

In tempest, the alias 'admin_manager' has been moved to 'os_admin' in
version Pike, and it will be removed in version Queens [1].

[1]I5f7164f7a7ec5d4380ca22885000caa0183a0bf7

Change-Id: Ic29cb510a558ceee832fbfae7853106decffbb41
Closes-bug: 1697588
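
The rename is mechanical; a hypothetical one-line illustration inside a
tempest plugin test:

  def get_admin_servers_client(test):
      # Before Pike:  test.admin_manager.servers_client
      # Pike onward:  test.os_admin.servers_client
      return test.os_admin.servers_client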


** Changed in: ironic-inspector
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697588

Title:
  update tempest plugin after removal of cred manager aliases

Status in Ironic:
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in tap-as-a-service:
  Fix Released

Bug description:
  Update after tempest change I5f7164f7a7ec5d4380ca22885000caa0183a0bf7

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1697588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719516] [NEW] Networking Option 1: Provider networks in neutron

2017-09-26 Thread SiKu
Public bug reported:

this:

  service_plugins = router

no router
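
For reference, the controller configuration this Option 1 page intends is
along these lines (a sketch of the guide's /etc/neutron/neutron.conf
instructions; provider networks only, so no L3/router service plugin):

  [DEFAULT]
  core_plugin = ml2
  service_plugins =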

This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 11.0.1.dev84 on 2017-09-23 19:36
SHA: 5b0191f5241b0ce70ed63952d01aaa1255c60b08
Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/controller-install-option1-ubuntu.rst
URL: 
https://docs.openstack.org/neutron/pike/install/controller-install-option1-ubuntu.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719516

Title:
  Networking Option 1: Provider networks in neutron

Status in neutron:
  New

Bug description:
  this:

  service_plugins = router

  no router

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 11.0.1.dev84 on 2017-09-23 19:36
  SHA: 5b0191f5241b0ce70ed63952d01aaa1255c60b08
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/controller-install-option1-ubuntu.rst
  URL: 
https://docs.openstack.org/neutron/pike/install/controller-install-option1-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp