[Yahoo-eng-team] [Bug 1662699] Re: API documentation and behavior do not match for booting with attached volumes

2017-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/430497
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e34f05edb2efc79bfdd8e73cca8fa02ea6ef2d60
Submitter: Jenkins
Branch: master

commit e34f05edb2efc79bfdd8e73cca8fa02ea6ef2d60
Author: Matt Riedemann 
Date:   Tue Feb 7 20:28:13 2017 -0500

Allow None for block_device_mapping_v2.boot_index

The legacy v2 API allowed None for the boot_index [1]. It
allowed this implicitly because the API code would convert
the block_device_mapping_v2 dict from the request into a
BlockDeviceMapping object, which has a boot_index field that
is nullable (allows None).

The API reference documentation [2] also says:

"To disable a device from booting, set the boot index
to a negative value or use the default boot index value,
which is None."

It appears that with the move to v2.1 and request schema
validation, the boot_index schema was erroneously set to
not allow None for a value, which is not backward compatible
with the v2 API behavior.

This change fixes the schema to allow boot_index=None again
and adds a test to show it working.

This should not require a microversion bump since it's fixing
a regression in the v2.1 API which worked in the v2 API and
is already handled throughout Nova's block device code.

Closes-Bug: #1662699

[1] https://github.com/openstack/nova/blob/13.0.0/nova/compute/api.py#L1268
[2] http://developer.openstack.org/api-ref/compute/#create-server

Change-Id: Ice78a0982bcce491f0c9690903ed2c6b6aaab1be
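
For context, the fix amounts to letting the schema accept null in addition
to integer-like values; a minimal sketch of the shape involved (this is
illustrative only, not the exact nova schema code):

    # illustrative fragment; the real schema lives in
    # nova/api/openstack/compute/schemas/block_device_mapping.py
    boot_index = {
        'type': ['integer', 'string', 'null'],
        'pattern': '^-?[0-9]+$',  # integer-like strings stay valid
    }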


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662699

Title:
  API documentation and behavior do not match for booting with attached
  volumes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Confirmed
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  Description
  ===========

  The documentation for block device mapping in
  http://docs.openstack.org/developer/nova/block_device_mapping.html
  #block-device-mapping-v2 indicates that to attach a volume to a new
  instance without booting from that volume a boot_index value of None
  can be passed. Shade was doing this, and it works against some clouds
  (at least DreamHost's iad2 cloud, "dreamcompute", which is running
  either mitaka or newton). It does not work against nova at least as of
  9ae5b2306b7a7cc9e77c77292256b13926920ead launched with devstack.

  Steps to reproduce
  ==================

  1. Run devstack.
  2. Download the downpour git repo from https://github.com/dhellmann/downpour
  3. Use the tiny.yml playbook in the demo directory to launch an instance (see 
the README for some setup instructions).

  Expected Result
  ===============

  One new instance named downpour-demo-tiny booted ephemeral with a new
  volume named downpour-demo-tiny attached to it.

  Actual Result
  =============

  Error message: Invalid input for field/attribute boot_index. Value:
  None. None is not of type 'integer'

  Additional Info
  ===============

  IRC logs from 7 Feb 2017 in #openstack-nova

  [Tue 05:29:35 PM]mriedem: so - I've got this question about boot 
from volume
  [Tue 05:30:51 PM]mriedem: 
http://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
 says that block_device_mapping_v2.boot_index takes a string and also that None 
should be used for volumes not used as boot volumes
  [Tue 05:32:00 PM]was just packing up...
  [Tue 05:32:20 PM]mriedem: but we just got an error "Invalid 
input for field/attribute boot_index. Value: None. None is not of type 'integer'"
  [Tue 05:32:24 PM]mordred: 
http://docs.openstack.org/developer/nova/block_device_mapping.html might be 
helpful
  [Tue 05:32:33 PM]mriedem: yup. it also says None is valid
  [Tue 05:34:01 PM]i feel like something was just recently changed 
in the json schema there, is this master?
  [Tue 05:34:17 PM]dhellmann: you were doing master devstack?
  [Tue 05:34:30 PM]mriedem: I think so yeah?
  [Tue 05:34:43 PM]  mordred : yes, master devstack (from a 
few hours ago)
  [Tue 05:34:45 PM] mordred goes to look at json schema
  [Tue 05:34:51 PM]   
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/block_device_mapping.py#L56
  [Tue 05:35:28 PM]fantastic
  [Tue 05:36:06 PM]what's the image and dest type?
  [Tue 05:36:19 PM]source_type and destination_type i mean
  [Tue 05:37:01 PM]because 
https://github.com/openstack/nova/blob/master/nova/block_device.py#L198
  [Tue 05:37:02 PM]  it's a qcow2 ubuntu image
  [Tue 

[Yahoo-eng-team] [Bug 1661258] Re: Deleted ironic node has an inventory in nova_api database

2017-02-08 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661258

Title:
  Deleted ironic node has an inventory in nova_api database

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Running latest devstack, ironic and nova, I get the following error
  when I request an instance:

  | fault| {"message": "Node 
6cc8803d-4e77-4948-b653-663d8d5e52b7 could not be found. (HTTP 404)", "code": 
500, "details": "  File \"/opt/stack/nova/nova/compute/manager.py\", line 1780, 
in _do_build_and_run_instance |
  |  | filter_properties)   


 |
  |  |   File 
\"/opt/stack/nova/nova/compute/manager.py\", line 2016, in 
_build_and_run_instance 
|
  |  | instance_uuid=instance.uuid, 
reason=six.text_type(e))

 |
  |  | ", "created": 
"2017-02-02T13:42:01Z"} 

|

  On ironic side, this node was indeed deleted, it is also deleted from
  nova.compute_nodes table:

  The deleted row, shown here as column: value since the table is too wide
  to reproduce:

  created_at:            2017-02-02 12:20:27
  updated_at:            2017-02-02 13:20:15
  deleted_at:            2017-02-02 13:21:15
  id:                    2
  service_id:            NULL
  vcpus:                 1
  memory_mb:             1536
  local_gb:              10
  vcpus_used:            0
  memory_mb_used:        0
  local_gb_used:         0
  hypervisor_type:       ironic
  hypervisor_version:    1
  cpu_info:              (empty)
  disk_available_least:  10
  free_ram_mb:           1536
  free_disk_gb:          10
  current_workload:      0
  running_vms:           0
  hypervisor_hostname:   6cc8803d-4e77-4948-b653-663d8d5e52b7
  deleted:               2
  host_ip:               192.168.122.22
  supported_instances:   [["x86_64", "baremetal", "hvm"]]
  pci_stats:             {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
  metrics:               []
  extra_resources:       NULL
  stats:                 {"cpu_arch": "x86_64"}
  numa_topology:         NULL
  host:                  ubuntu
  ram_allocation_ratio:  1
  cpu_allocation_ratio:  0
  uuid:                  035be695-0797-44b3-930b-42349e40579e
  disk_allocation_ratio: 0

  But in nova_api.inventories it's still there:

  | created_at          | updated_at | id | resource_provider_id | resource_class_id | total | reserved | min_unit | max_unit | step_size | allocation_ratio |
  | 2017-02-02 13:20:14 | NULL       | 13 | 2                    | 0                 | 1     | 0        | 1        | 1        | 1         | 16               |
  | 2017-02-02 13:20:14 | NULL       | 14 | 2                    | 1                 | 1536  | 0        | 1        | 1536     | 1         | 1                |
  | 2017-02-02 13:20:14 | NULL       | 15 | 2                    | 2                 | 10    | 0        | 1        | 10       | 1         | 1                |

  nova_api.resource_providers bit:
  | created_at          | updated_at          | id | uuid                                 | name                                 | generation | can_host |
  | 2017-02-02 12:20:27 | 2017-02-02 13:20:14 | 2  | 035be695-0797-44b3-930b-42349e40579e | 6cc8803d-4e77-4948-b653-663d8d5e52b7 | 7          | 0        |

  Waiting for resource tracker run did not help, node's been deleted for
  ~30 minutes already and the inventory is still there.
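
  A hedged example of the kind of query used to spot the orphan (table and
  column names taken from the nova_api dumps above):

      SELECT i.id, i.resource_class_id, i.total
      FROM inventories i
      JOIN resource_providers rp ON rp.id = i.resource_provider_id
      WHERE rp.name = '6cc8803d-4e77-4948-b653-663d8d5e52b7';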

  Code versions:
  Devstack commit debc695ddfc8b7b2aeb53c01c624e15f69ed9fa2 Updated from 
generate-devstack-plugins-list.
  Nova commit 5dad7eaef7f8562425cce6b233aed610ca2d3148 Merge "doc: update the 
man page entry for nova-manage db sync"
  Ironic commit 5071b99835143ebcae876432e2982fd27faece10 Merge 

[Yahoo-eng-team] [Bug 1658060] Re: FirewallNotFound exceptions when deleting the firewall in FWaaS-DVR

2017-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/429923
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=6bf84e7afbf07ab33907150d59d0b33d053240b6
Submitter: Jenkins
Branch: master

commit 6bf84e7afbf07ab33907150d59d0b33d053240b6
Author: Cedric Brandily 
Date:   Tue Feb 7 00:16:08 2017 +0100

Do not complain in firewall_group_deleted if the FW is already deleted

Currently firewall_group_deleted[1] crashes if the firewall is already
deleted or is deleted concurrently during the firewall_deleted call. We
should avoid such behavior, as there is no reason to crash if someone
already did the job for us (i.e. deleted the FW).

Moreover, such a crash is costly because it triggers a service sync on the
FWaaS l3-reference agent (at least). Typically, on an L3-DVR deployment,
all firewall_deleted calls except the first one will fail, so nearly every
L3-DVR agent will perform a FWaaS service sync.

This change updates firewall_group_deleted in order to succeed if the
firewall is already deleted or if the firewall is deleted concurrently.

[1] neutron.services.firewall.fwaas_plugin_v2.FirewallCallbacks

Change-Id: Ic0b228896c8129205224417506bb06471e432955
Closes-Bug: #1658060
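
The change boils down to treating FirewallNotFound as success in the RPC
callback; a minimal sketch, with method and exception names assumed from
this report rather than copied from the actual patch:

    def firewall_deleted(self, context, firewall_id, **kwargs):
        try:
            # only the first agent's callback finds the DB object
            self.plugin.delete_db_firewall_object(context, firewall_id)
        except f_exc.FirewallNotFound:
            # another agent already deleted it; nothing left to do
            pass
        return True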


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658060

Title:
  FirewallNotFound exceptions when deleting the firewall in FWaaS-DVR

Status in neutron:
  Fix Released

Bug description:
  We have four nodes, and we deploy both the FWaaS and DVR services.
  When deleting the firewall, we always get three FirewallNotFound
  exceptions. At present, we believe that, in a DVR environment, every
  node runs an L3 agent service, so a single plugin corresponds to
  multiple agents. Each agent calls back to the plugin's
  firewall_deleted() (neutron_fwaas/services/firewall/fwaas_plugin.py)
  to delete the instance in the DB, but only the first agent succeeds.

  How to reproduce:
  - first create a firewall applied to a DVR router
  - then delete it

  $ neutron router-show test-fwaas
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | distributed   | True |
  | external_gateway_info |  |
  | ha| False|
  | id| cfa3e65e-d101-4cc7-80e5-39daf72c6572 |
  | name  | test-fwaas   |
  | routes|  |
  | status| ACTIVE   |
  | tenant_id | fc170b1b8a9a467b9e1a63d85ced5a86 |
  +---+--+
  $ neutron firewall-create --name fw --router test-fwaas policy
  Created a new firewall:
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | description|  |
  | firewall_policy_id | 1eb3fff7-240f-4f9d-adf6-766e2cad7f59 |
  | id | afd38a9e-cf0a-4667-94e0-853a888fd981 |
  | name   | fw   |
  | router_ids | cfa3e65e-d101-4cc7-80e5-39daf72c6572 |
  | status | CREATED  |
  | tenant_id  | fc170b1b8a9a467b9e1a63d85ced5a86 |
  ++--+
  $ neutron firewall-show fw
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | description|  |
  | firewall_policy_id | 1eb3fff7-240f-4f9d-adf6-766e2cad7f59 |
  | id | afd38a9e-cf0a-4667-94e0-853a888fd981 |
  | name   | fw   |
  | router_ids | cfa3e65e-d101-4cc7-80e5-39daf72c6572 |
  | status | ACTIVE   |
  | tenant_id  | fc170b1b8a9a467b9e1a63d85ced5a86 |
  ++--+
  $ neutron firewall-delete fw

  $ less neutron-service_error.log
  2017-01-20 19:46:11.593 19338 ERROR oslo_messaging.rpc.dispatcher 
[req-c4af8425-b05a-4c4e-98e0-4dabe0057df7 ] Exception during message handling: 
Firewall 

[Yahoo-eng-team] [Bug 1663087] [NEW] multiprovidernet updates are ignored

2017-02-08 Thread YAMAMOTO Takashi
Public bug reported:

updates of "segments" attributes are accepted and ignored. (at least by ML2)
while it can raise an exception to be consistent with providernet,
i guess it's better to make it allow_put=False to reflect the reality.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1663087

Title:
  multiprovidernet updates are ignored

Status in neutron:
  New

Bug description:
  updates of "segments" attributes are accepted and ignored. (at least by ML2)
  while it can raise an exception to be consistent with providernet,
  i guess it's better to make it allow_put=False to reflect the reality.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1663087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663077] [NEW] [ipv6] when slaac is set up as ipv6_address_mode, ipv6-icmp packets are rejected by iptables

2017-02-08 Thread Andrey Grebennikov
Public bug reported:

Mitaka and Newton

Setting up a subnet with IPv6 addressing for a provider network (a baremetal
external router provides RA).
The router advertisement packets are expected to reach the instance so that
the instance can pick up the subnet from the external router.

What happens:
Iptables on the compute node is only set up to allow certain types of ipv6-icmp:

-A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 130 -j 
RETURN
-A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 131 -j 
RETURN
-A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 132 -j 
RETURN
-A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j 
RETURN
-A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 136 -j 
RETURN

while the RA type is 134.
The list of allowed types most likely has to be extended in the Neutron
constants, or some deeper logic has to be implemented.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1663077

Title:
  [ipv6] when slaac is set up as ipv6_address_mode, ipv6-icmp packets
  are rejected by iptables

Status in neutron:
  New

Bug description:
  Mitaka and Newton

  Setting up a subnet with IPv6 addressing for a provider network (a baremetal
  external router provides RA).
  The router advertisement packets are expected to reach the instance so that
  the instance can pick up the subnet from the external router.

  What happens:
  Iptables on the compute node is only set up to allow certain types of 
ipv6-icmp:

  -A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 130 -j 
RETURN
  -A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 131 -j 
RETURN
  -A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 132 -j 
RETURN
  -A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j 
RETURN
  -A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 136 -j 
RETURN

  while the RA type is 134.
  The list of allowed types most likely has to be extended in the Neutron
  constants, or some deeper logic has to be implemented.
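
  A hedged illustration of the missing rule, in the same form as the rules
  quoted above (chain name copied from that example; ICMPv6 type 134 is
  router advertisement):

  -A neutron-linuxbri-i4d4602ea-3 -p ipv6-icmp -m icmp6 --icmpv6-type 134 -j RETURN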

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1663077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499751] Re: OpenStack (nova boot exactly) allows only one SSH key.

2017-02-08 Thread Jon Gjengset
*** This bug is a duplicate of bug 917850 ***
https://bugs.launchpad.net/bugs/917850

** This bug has been marked a duplicate of bug 917850
   assignment of multiple keypair to instances

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499751

Title:
  OpenStack (nova boot exactly) allows only one SSH key.

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  ii  nova-api 1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - API frontend
  ii  nova-cert1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - certificate management
  ii  nova-common  1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - common files
  ii  nova-conductor   1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - conductor service
  ii  nova-consoleauth 1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy  1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler   1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - virtual machine scheduler
  ii  python-nova  1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute Python libraries
  ii  python-novaclient1:2.22.0-0ubuntu1~cloud0 
 all  client library for OpenStack Compute API

  Problem was described at 
https://ask.openstack.org/en/question/82224/is-it-possible-to-create-instance-with-multiple-ssh-keys/
  Looks like OpenStack (nova) allows specifying only one SSH key when an
  instance is created. I believe an array of strings should be supported
  instead of a single string, since authorized_keys allows more than one
  SSH key. Workarounds like using a merged key are not always an option,
  as they scale poorly (see the example in the question linked above).

  Wishlist/enhancement request of course.
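
  For what it's worth, a hedged sketch of a user-data workaround:
  cloud-config accepts a list of keys here, independent of the single
  nova keypair (key material elided):

  #cloud-config
  ssh_authorized_keys:
    - ssh-rsa AAAA... user1@host
    - ssh-rsa AAAA... user2@host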

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1636679] Re: don't know what key-value of volume qos specs in horizon to fill

2017-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427470
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=479fbd552da1bc16a87b0f116d09915e5c39ef87
Submitter: Jenkins
Branch: master

commit 479fbd552da1bc16a87b0f116d09915e5c39ef87
Author: milan potdar 
Date:   Mon Jan 23 16:09:07 2017 +

Add info on key-value of volume QoS spec

Give user the information on value to input for key-value,
when they click on 'manage spec' in:
- admin->system->volume->volume Types

Change-Id: Ib08d2de800370dc47ed113e72c00bec353468a38
Closes-bug: #1636679


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1636679

Title:
  don't know what key-value of volume qos specs in horizon to fill

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When I want to manage volume QoS specs, I don't know what key-value
  pairs I should enter. How about giving the user some examples in the UI?

  Reproduction Steps :-
  1. Login to Horizon Dashboard
  2. Go to : Admin -> System -> Volumes -> Volume Types
  3. Click on "Manage Specs"
  4. It is unclear what key-value pairs should be entered (see the example below).
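
  For example (assuming the libvirt front-end consumer), typical QoS specs
  look like:

  Key: total_iops_sec     Value: 5000
  Key: total_bytes_sec    Value: 104857600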

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1636679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663054] [NEW] neutron-dynamic-routing unit test failure

2017-02-08 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/11/430511/1/check/gate-neutron-dynamic-
routing-python35/20b9815/console.html


2017-02-08 03:16:52.841759 | running testr
2017-02-08 03:16:52.841776 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
2017-02-08 03:16:52.841790 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
2017-02-08 03:16:52.841804 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
2017-02-08 03:16:52.841813 | OS_LOG_CAPTURE=1 \
2017-02-08 03:16:52.841828 | ${PYTHON:-python} -m subunit.run discover -t ./ \
2017-02-08 03:16:52.841856 | 
${OS_TEST_PATH:-./neutron_dynamic_routing/tests/unit} \
2017-02-08 03:16:52.841863 | --list 
2017-02-08 03:16:52.841873 | --- import errors ---
2017-02-08 03:16:52.841895 | Failed to import test module: 
neutron_dynamic_routing.tests.unit.db.test_bgp_db
2017-02-08 03:16:52.841909 | Traceback (most recent call last):
2017-02-08 03:16:52.841960 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/.tox/py35/lib/python3.5/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
2017-02-08 03:16:52.841983 | module = self._get_module_from_name(name)
2017-02-08 03:16:52.842024 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/.tox/py35/lib/python3.5/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
2017-02-08 03:16:52.842034 | __import__(name)
2017-02-08 03:16:52.842072 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/neutron_dynamic_routing/tests/unit/db/test_bgp_db.py",
 line 43, in <module>
2017-02-08 03:16:52.842102 | 
l3_dvr_ha_scheduler_db.L3_DVR_HA_scheduler_db_mixin):
2017-02-08 03:16:52.842133 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/.tox/py35/lib/python3.5/abc.py",
 line 133, in __new__
2017-02-08 03:16:52.842162 | cls = super().__new__(mcls, name, bases, 
namespace)
2017-02-08 03:16:52.842191 | TypeError: Cannot create a consistent method 
resolution
2017-02-08 03:16:52.842214 | order (MRO) for bases L3_DVRsch_db_mixin, 
L3_DVR_HA_scheduler_db_mixin, L3_HA_NAT_db_mixin
2017-02-08 03:16:52.842236 | 
2017-02-08 03:16:52.842262 | Failed to import test module: 
neutron_dynamic_routing.tests.unit.db.test_bgp_dragentscheduler_db
2017-02-08 03:16:52.842286 | Traceback (most recent call last):
2017-02-08 03:16:52.842349 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/.tox/py35/lib/python3.5/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
2017-02-08 03:16:52.842369 | module = self._get_module_from_name(name)
2017-02-08 03:16:52.842411 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/.tox/py35/lib/python3.5/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
2017-02-08 03:16:52.842421 | __import__(name)
2017-02-08 03:16:52.842462 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/neutron_dynamic_routing/tests/unit/db/test_bgp_dragentscheduler_db.py",
 line 30, in <module>
2017-02-08 03:16:52.842481 | from neutron_dynamic_routing.tests.unit.db 
import test_bgp_db
2017-02-08 03:16:52.842518 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/neutron_dynamic_routing/tests/unit/db/test_bgp_db.py",
 line 43, in <module>
2017-02-08 03:16:52.842536 | 
l3_dvr_ha_scheduler_db.L3_DVR_HA_scheduler_db_mixin):
2017-02-08 03:16:52.842569 |   File 
"/home/jenkins/workspace/gate-neutron-dynamic-routing-python35/.tox/py35/lib/python3.5/abc.py",
 line 133, in __new__
2017-02-08 03:16:52.842610 | cls = super().__new__(mcls, name, bases, 
namespace)
2017-02-08 03:16:52.842627 | TypeError: Cannot create a consistent method 
resolution
2017-02-08 03:16:52.842653 | order (MRO) for bases L3_DVRsch_db_mixin, 
L3_DVR_HA_scheduler_db_mixin, L3_HA_NAT_db_mixin
2017-02-08 03:16:52.842666 | The test run didn't actually run any tests
2017-02-08 03:16:52.865550 | ERROR: InvocationError: '/bin/sh 
tools/pretty_tox.sh '
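
The failure is the classic inconsistent-MRO error; a minimal standalone
reproduction with illustrative class names:

    # C lists A before B, but B subclasses A, so A must also come after B;
    # no linearization satisfies both constraints.
    class A(object): pass
    class B(A): pass
    class C(A, B): pass  # TypeError: Cannot create a consistent method resolution order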

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
Milestone: None => ocata-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1663054

Title:
  neutron-dynamic-routing unit test failure

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/11/430511/1/check/gate-neutron-dynamic-
  routing-python35/20b9815/console.html

  
  2017-02-08 03:16:52.841759 | running testr
  2017-02-08 03:16:52.841776 | 
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  2017-02-08 03:16:52.841790 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  2017-02-08 03:16:52.841804 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
  2017-02-08 03:16:52.841813 | OS_LOG_CAPTURE=1 \
  2017-02-08 03:16:52.841828 | 

[Yahoo-eng-team] [Bug 1663049] [NEW] Local routes are not configured correctly in /etc/network/interfaces

2017-02-08 Thread Joshua Griffiths
Public bug reported:

For Debian-based platforms, additional, local routes are not added
correctly in /etc/network/interfaces.

For example, I may have interface `eth0` with address 192.168.0.10/24
but I need to add a *local* route to 192.168.1.0/24 where this subnet is
available on the same physical network - NOT via a gateway.

In this case, my route would look like:
{
"network": "192.168.1.0",
"netmask": "255.255.255.0",
"gateway": "0.0.0.0"
}

The gateway of "0.0.0.0" signifies that this is a local route and a
gateway is not required.

I would expect that /etc/network/interfaces configuration for this route looks 
like one of the below:
---
post-up route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0
post-up ip route add 192.168.1.0/24 dev eth0
---

However, the configuration looks like:
---
post-up route add -net 192.168.1.0 netmask 255.255.255.0 gw 0.0.0.0
---

The `dev eth0` is especially important here as the route table needs to
know which physical interface the subnet is available on.

Whilst this may seem like a very unlikely scenario, this sort of routing
is much more common when the IP addresses are on public subnets and it's
preferable to have them communicate locally rather than via the gateway.

Additionally, it seems that using OpenStack's DHCP to configure the
networking instead does achieve the desired behaviour.

This is working correctly on RHEL-based platforms but I've not tested
any other distros.

For reference, the relevant code is in
`cloudinit.net.eni.Renderer._render_route`
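
A minimal sketch of the kind of guard needed there (variable names are
assumed for illustration, not copied from the actual cloud-init code):

    # a 0.0.0.0 (or absent) gateway marks a link-local route, which must
    # be bound to the device instead of routed via a gateway
    if route.get('gateway') in (None, '0.0.0.0'):
        content = 'post-up route add -net %s netmask %s dev %s' % (
            route['network'], route['netmask'], iface)
    else:
        content = 'post-up route add -net %s netmask %s gw %s' % (
            route['network'], route['netmask'], route['gateway'])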

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1663049

Title:
  Local routes are not configured correctly in /etc/network/interfaces

Status in cloud-init:
  New

Bug description:
  For Debian-based platforms, additional, local routes are not added
  correctly in /etc/network/interfaces.

  For example, I may have interface `eth0` with address 192.168.0.10/24
  but I need to add a *local* route to 192.168.1.0/24 where this subnet
  is available on the same physical network - NOT via a gateway.

  In this case, my route would look like:
  {
  "network": "192.168.1.0",
  "netmask": "255.255.255.0",
  "gateway": "0.0.0.0"
  }

  The gateway of "0.0.0.0" signifies that this is a local route and a
  gateway is not required.

  I would expect that /etc/network/interfaces configuration for this route 
looks like one of the below:
  ---
  post-up route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0
  post-up ip route add 192.168.1.0/24 dev eth0
  ---

  However, the configuration looks like:
  ---
  post-up route add -net 192.168.1.0 netmask 255.255.255.0 gw 0.0.0.0
  ---

  The `dev eth0` is especially important here as the route table needs
  to know which physical interface the subnet is available on.

  Whilst this may seem like a very unlikely scenario, this sort of
  routing is much more common when the IP addresses are on public
  subnets and it's preferable to have them communicate locally rather
  than via the gateway.

  Additionally, it seems that using OpenStack's DHCP to configure the
  networking instead does achieve the desired behaviour.

  This is working correctly on RHEL-based platforms but I've not tested
  any other distros.

  For reference, the relevant code is in
  `cloudinit.net.eni.Renderer._render_route`

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1663049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663045] [NEW] Arch distro fails to write network config with empty dns-nameservers

2017-02-08 Thread Jon Gjengset
Public bug reported:

In distros/arch.py, the network configuration is created using

'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '')

However, when dns-nameservers is None, this causes both cloud-init and
cloud-init-local to fail with

failed run of stage init-local

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 513, in 
status_wrap
ret = functor(name, args)
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 254, in 
main_init
init.apply_network_config(bring_up=not args.local)
  File "/usr/lib/python2.7/site-packages/cloudinit/stages.py", line 641, in 
apply_network
return self.distro.apply_network_config(netcfg, bring_up=bring_up)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
154, in app
netconfig, bring_up=bring_up)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
143, in _ap
return self.apply_network(contents, bring_up=bring_up)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
125, in app
dev_names = self._write_network(settings)
  File "/usr/lib/python2.7/site-packages/cloudinit/distros/arch.py", line 67, 
in _write_n
'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '')
TypeError: 'NoneType' object is not iterable


The fix proposed in
https://bbs.archlinux.org/viewtopic.php?pid=1662566#p1662566 seems to
work for me, namely replacing the line with

'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '') if
info.get('dns-nameservers') != None else None
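
A slightly tidier equivalent of that guard (untested sketch; the settings
dict name is assumed, and `is not None` is the idiomatic spelling):

    dns = info.get('dns-nameservers')
    # netctl expects an array literal such as ('8.8.8.8' '8.8.4.4')
    settings['DNS'] = (str(tuple(dns)).replace(',', '')
                       if dns is not None else None)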

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1663045

Title:
  Arch distro fails to write network config with empty dns-nameservers

Status in cloud-init:
  New

Bug description:
  In distros/arch.py, the network configuration is created using

  'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '')

  However, when dns-nameservers is None, this causes both cloud-init and
  cloud-init-local to fail with

  failed run of stage init-local
  
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 513, in 
status_wrap
  ret = functor(name, args)
File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 254, in 
main_init
  init.apply_network_config(bring_up=not args.local)
File "/usr/lib/python2.7/site-packages/cloudinit/stages.py", line 641, in 
apply_network
  return self.distro.apply_network_config(netcfg, bring_up=bring_up)
File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
154, in app
  netconfig, bring_up=bring_up)
File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
143, in _ap
  return self.apply_network(contents, bring_up=bring_up)
File "/usr/lib/python2.7/site-packages/cloudinit/distros/__init__.py", line 
125, in app
  dev_names = self._write_network(settings)
File "/usr/lib/python2.7/site-packages/cloudinit/distros/arch.py", line 67, 
in _write_n
  'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '')
  TypeError: 'NoneType' object is not iterable
  

  The fix proposed in
  https://bbs.archlinux.org/viewtopic.php?pid=1662566#p1662566 seems to
  work for me, namely replacing the line with

  'DNS': str(tuple(info.get('dns-nameservers'))).replace(',', '') if
  info.get('dns-nameservers') != None else None

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1663045/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663036] [NEW] api-ref: delete server async postcondition doc is missing some info

2017-02-08 Thread Matt Riedemann
Public bug reported:

http://developer.openstack.org/api-ref/compute/?expanded=stop-server-os-
stop-action-detail,delete-server-detail#delete-server

"With correct permissions, you can see the server status as"

AS WHAT?!

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: api-ref low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663036

Title:
  api-ref: delete server async postcondition doc is missing some info

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://developer.openstack.org/api-ref/compute/?expanded=stop-server-
  os-stop-action-detail,delete-server-detail#delete-server

  "With correct permissions, you can see the server status as"

  AS WHAT?!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662298] Re: Plugin policy files should not require copying into horizon's conf dir

2017-02-08 Thread Gary W. Smith
Closing (invalidating)

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
 Assignee: Gary W. Smith (gary-w-smith) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1662298

Title:
  Plugin policy files should not require copying into horizon's conf dir

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The POLICY_FILES_PATH variable currently only accepts a single string
  that indicates where all policy files must reside.  Most plugins
  operate with services whose policy files are not already in horizon,
  so that installing plugins requires that policy files be manually
  copied into this single dir (openstack_dashboard/conf).

  Instead, POLICY_FILES_PATH should support a list of directories, so
  that plugins can add their own policy directory to the
  POLICY_FILES_PATH.

  To that end, two new pluggable settings, ADD_POLICY_FILES_PATH and
  ADD_POLICY_FILES, should be added so that plugins can register their
  policy's location and service/filename, which horizon would then add
  to the POLICY_FILES_PATH and POLICY_FILES settings.
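
  A hedged sketch of how a plugin's enabled file might use the proposed
  settings (the ADD_* names come from this report; they do not exist in
  horizon yet):

      import os
      # e.g. in openstack_dashboard/local/enabled/_9000_myplugin.py
      ADD_POLICY_FILES_PATH = os.path.join(os.path.dirname(__file__), 'conf')
      ADD_POLICY_FILES = {'myservice': 'myservice_policy.json'}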

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1662298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624743] Re: Project image table: admin user sees images which are not shared with me

2017-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/375170
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b01bf0f9a16b6aa48f73bf046e9ef51287cb40cc
Submitter: Jenkins
Branch: master

commit b01bf0f9a16b6aa48f73bf046e9ef51287cb40cc
Author: Brad Pokorny 
Date:   Thu Sep 22 14:27:35 2016 -0700

Make shared image text less confusing for Glance v2

When using Glance v2 and logged in as an admin, the images
panel now shows all the images in the cloud. This is the
way the Glance v2 list API works, but it changed the behavior
from v1. In Horizon, we can't tell whether non-public images
that aren't owned by the current project are shared or just from
some other project without making multiple API calls. This
patch makes the text of the images less confusing when using
Glance v2, so that it no longer claims the images are "Shared
with Project".

Change-Id: I2859e104de78a6a633b0e1a2ff30dde674b4bdee
Closes-Bug: #1624743


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624743

Title:
  Project image table: admin user sees images which are not shared with
  me

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The image table of the *Project* image panel should list public
  images, private images owned by the current project, and private
  images shared with the current project.

  However, when a user logs in as a user with admin role, private images
  which are owned by another project and NOT shared with the current
  project are listed in the image table of the project image panel.

  This behavior is confusing and incompatible with the existing
  behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660444] Re: Glance stable/mitaka docs cannot be built

2017-02-08 Thread Ian Cordasco
** Changed in: glance/mitaka
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1660444

Title:
  Glance stable/mitaka docs cannot be built

Status in Glance:
  Invalid
Status in Glance mitaka series:
  Fix Released

Bug description:
  The Glance stable/mitaka docs job has been failing consistently for
  over a week now. An example of a failure is here:
  http://logs.openstack.org/periodic-stable/periodic-glance-docs-
  mitaka/384794b/

  (Quoted below for posterity)

  2017-01-30 06:10:21.562396 | Installing collected packages: six, pep8, pbr, 
pyflakes, mccabe, flake8, hacking, pytz, Babel, PyYAML, stevedore, smmap2, 
gitdb2, GitPython, bandit, coverage, extras, python-mimeparse, linecache2, 
traceback2, argparse, unittest2, testtools, fixtures, mox3, funcsigs, mock, 
Pygments, docutils, MarkupSafe, Jinja2, sphinx, requests, python-subunit, 
testrepository, testresources, testscenarios, psutil, requestsexceptions, 
wrapt, positional, iso8601, keystoneauth1, appdirs, os-client-config, 
debtcollector, oslotest, PyMySQL, psycopg2, pysendfile, qpid-python, pycparser, 
cffi, xattr, futures, python-swiftclient, oslosphinx, dulwich, reno
  2017-01-30 06:10:21.562433 |   Found existing installation: six 1.10.0
  2017-01-30 06:10:21.562461 | Uninstalling six-1.10.0:
  2017-01-30 06:10:21.562495 |   Successfully uninstalled six-1.10.0
  2017-01-30 06:10:21.562524 |   Rolling back uninstall of six
  2017-01-30 06:10:21.562542 | Exception:
  2017-01-30 06:10:21.562573 | Traceback (most recent call last):
  2017-01-30 06:10:21.562659 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pip/basecommand.py",
 line 215, in main
  2017-01-30 06:10:21.562691 | status = self.run(options, args)
  2017-01-30 06:10:21.562779 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pip/commands/install.py",
 line 342, in run
  2017-01-30 06:10:21.562808 | prefix=options.prefix_path,
  2017-01-30 06:10:21.562895 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pip/req/req_set.py",
 line 784, in install
  2017-01-30 06:10:21.562915 | **kwargs
  2017-01-30 06:10:21.563004 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pip/req/req_install.py",
 line 851, in install
  2017-01-30 06:10:21.563052 | self.move_wheel_files(self.source_dir, 
root=root, prefix=prefix)
  2017-01-30 06:10:21.563147 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pip/req/req_install.py",
 line 1064, in move_wheel_files
  2017-01-30 06:10:21.563174 | isolated=self.isolated,
  2017-01-30 06:10:21.563263 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pip/wheel.py",
 line 247, in move_wheel_files
  2017-01-30 06:10:21.563286 | prefix=prefix,
  2017-01-30 06:10:21.563377 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pip/locations.py",
 line 140, in distutils_scheme
  2017-01-30 06:10:21.563407 | d = Distribution(dist_args)
  2017-01-30 06:10:21.563494 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/setuptools/dist.py",
 line 320, in __init__
  2017-01-30 06:10:21.563527 | _Distribution.__init__(self, attrs)
  2017-01-30 06:10:21.563575 |   File "/usr/lib/python2.7/distutils/dist.py", 
line 287, in __init__
  2017-01-30 06:10:21.563628 | self.finalize_options()
  2017-01-30 06:10:21.563722 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/setuptools/dist.py",
 line 386, in finalize_options
  2017-01-30 06:10:21.563759 | ep.require(installer=self.fetch_build_egg)
  2017-01-30 06:10:21.563850 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2318, in require
  2017-01-30 06:10:21.563914 | items = working_set.resolve(reqs, env, 
installer, extras=self.extras)
  2017-01-30 06:10:21.563998 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 862, in resolve
  2017-01-30 06:10:21.564036 | new_requirements = 
dist.requires(req.extras)[::-1]
  2017-01-30 06:10:21.564121 |   File 
"/home/jenkins/workspace/periodic-glance-docs-mitaka/.tox/venv/local/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2562, in requires
  2017-01-30 06:10:21.564150 | dm = self._dep_map
  2017-01-30 06:10:21.564235 |   File 

[Yahoo-eng-team] [Bug 1662820] Re: test_volume_swap failed in Kaminario Cinder Driver CI

2017-02-08 Thread Matt Riedemann
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662820

Title:
  test_volume_swap failed in Kaminario Cinder Driver CI

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The following test is failing in our Kaminario Cinder Driver CI:
  tempest.api.compute.admin.test_volume_swap.TestVolumeSwap.test_volume_swap 
[421.842922s] ... FAILED

  The following is the traceback in n-cpu.log:
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager 
[req-3dcb5fb5-ca83-4e80-bfae-dbc96cc4d0de tempest-TestVolumeSwap-1094009596 
tempest-TestVolumeSwap-1094009596] [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] Failed to swap volume 
a86302ea-9104-4084-9825-b863156f4964 for 48910757-0f3b-47ba-a278-e36ddf62d415
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] Traceback (most recent call last):
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 4982, in _swap_volume
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] resize_to)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1303, in swap_volume
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] self._swap_volume(guest, disk_dev, 
conf.source_path, resize_to)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1264, in _swap_volume
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] dev.abort_job(pivot=True)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 704, in abort_job
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] 
self._guest._domain.blockJobAbort(self._disk, flags=flags)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] rv = execute(f, *args, **kwargs)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] six.reraise(c, e, tb)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] rv = meth(*args, **kwargs)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 650, in blockJobAbort
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] if ret == -1: raise libvirtError 
('virDomainBlockJobAbort() failed', dom=self)
  2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] libvirtError: Requested operation is not 
valid: pivot of disk 'vdb' requires an active copy job


  Other details:
  virt_type = qemu

  
  $ dpkg -l | grep qemu
  ii  ipxe-qemu
1.0.0+git-2013.c3d1e78-2ubuntu1.1 all  PXE boot firmware - ROM 
images for qemu
  ii  qemu-keymaps 2.0.0+dfsg-2ubuntu1.31   
 all  QEMU keyboard maps
  ii  qemu-system  2.0.0+dfsg-2ubuntu1.31   
 amd64QEMU full system emulation binaries
  ii  qemu-system-arm  2.0.0+dfsg-2ubuntu1.31   
 amd64QEMU full system emulation binaries (arm)
  ii  

[Yahoo-eng-team] [Bug 1661454] Re: Inadequate Japanese translation for "Browse" on App Catalog tab

2017-02-08 Thread Eddie Ramirez
Invalidating this bug.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1661454

Title:
  Inadequate Japanese translation for "Browse" on App Catalog tab

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in openstack i18n:
  New

Bug description:
  Japanese translation for "Browse" is not adequate for this context. It
  is currently translated into 探索, but it should be 参照 or ブラウズ, but
  unable to locate the string from Zanata

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1661454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Question #452737]: Have a picture and can't find out what it is

2017-02-08 Thread Daniela Dornbush
New question #452737 on anvil:
https://answers.launchpad.net/anvil/+question/452737

I have what we believe to be an anvil but can't find any information about it.
It's flat on the top, about 8 1/2" x 9 1/2". It's metal and green. It is
about 7" high and sits on a stand about 28" tall. I can send a picture.

-- 
You received this question notification because your team Yahoo!
Engineering Team is an answer contact for anvil.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661797] Re: identify lxd-nova platform to enable Openstack datasource

2017-02-08 Thread Scott Moser
** Also affects: nova-lxd
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: nova-lxd
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => High

** Description changed:

  nova-lxd uses the Openstack Network metadata service.
  
  In an effort to avoid polling metadata services in cloud-init we will disable
  attempts to reach the MD without positive identification of the cloud.
  
- We need to be able to positively identify that the container we are running 
+ We need to be able to positively identify that the container we are running
  inside should have access to an openstack metadata service so we can
  safely assume it will be there.
  
  How can we positively identify that a container is running in nova-lxd?
  Is there anything in the environment (possibly pid 1 environ?) that we
  can look at?
  
- One way I could see doing t his would be for lxd-nova to put 
-CLOUD_PLATFORM='openstack-nova'
+ One way I could see doing t his would be for lxd-nova to put
+    CLOUD_PLATFORM='openstack-nova'
  inside the pid 1 environment.  then cloud-init can look at /proc/1/environ
  and pick that out.
  
  Open to other ideas, and would love it if there was something we could
  do.
  
  Related bugs
-  bug 1660385: Alert user of Ec2 Datasource on lookalike cloud
+  bug 1660385: Alert user of Ec2 Datasource on lookalike cloud
+  bug 1661797: identify lxd-nova platform to enable Openstack datasource 
+  bug 1661693: identify brightbox platform to enable Ec2 datasource

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1661797

Title:
  identify lxd-nova platform to enable Openstack datasource

Status in cloud-init:
  Confirmed
Status in nova-lxd:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  nova-lxd uses the Openstack Network metadata service.

  In an effort to avoid polling metadata services in cloud-init we will disable
  attempts to reach the MD without positive identification of the cloud.

  We need to be able to positively identify that the container we are running
  inside should have access to an openstack metadata service so we can
  safely assume it will be there.

  How can we positively identify that a container is running in nova-lxd?
  Is there anything in the environment (possibly pid 1 environ?) that we
  can look at?

  One way I could see doing this would be for lxd-nova to put
     CLOUD_PLATFORM='openstack-nova'
  inside the pid 1 environment.  then cloud-init can look at /proc/1/environ
  and pick that out.

  Open to other ideas, and would love it if there was something we could
  do.

  Related bugs
   bug 1660385: Alert user of Ec2 Datasource on lookalike cloud
   bug 1661797: identify lxd-nova platform to enable Openstack datasource 
   bug 1661693: identify brightbox platform to enable Ec2 datasource
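
  A minimal sketch of the proposed detection (CLOUD_PLATFORM is the
  variable proposed above, not an existing interface):

      def platform_from_pid1():
          # pid 1's environment is a NUL-separated list of KEY=VALUE pairs
          with open('/proc/1/environ', 'rb') as f:
              raw = f.read().split(b'\0')
          env = dict(kv.split(b'=', 1) for kv in raw if b'=' in kv)
          return env.get(b'CLOUD_PLATFORM', b'').decode()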

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1661797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662762] Re: Authentication for LDAP user fails at MFA rule check

2017-02-08 Thread Lance Bragstad
** Description changed:

  I have an OpenStack master with an LDAP server configured (fernet token
  provider). With the new changes around MFA rules
  (https://blueprints.launchpad.net/keystone/+spec/per-user-auth-plugin-
  reqs), I see that the authentication (POST /token) call fails at
- https://github.com/openstack/keystone/blob/master/keystone/auth/core.py#L377
+ 
https://github.com/openstack/keystone/blob/029476272fb869c6413aa4e70f4cae6f890e598f/keystone/auth/core.py#L377
  
- def check_auth_methods_against_rules(self, user_id, auth_methods):   
- user_ref = self.identity_api.get_user(user_id)
- mfa_rules = user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
+ def check_auth_methods_against_rules(self, user_id, auth_methods):
+ user_ref = self.identity_api.get_user(user_id)
+ mfa_rules = user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
  
  In the last line the code flow expects user_ref to always have an
  options attribute and this is not present for LDAP users due to which we
  get the below and authentication fails
  
  INFO keystone.common.wsgi [req-279e9036-6c6a-4fc8-9dfe-1d219931195c - - - - 
-] POST https://ip9-114-192-140.pok.stglabs.ibm.com:5000/v3/auth/tokens
  ERROR keystone.common.wsgi [req-279e9036-6c6a-4fc8-9dfe-1d219931195c - - - - 
-] 'options'
  ERROR keystone.common.wsgi Traceback (most recent call last):
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in 
__call__
  ERROR keystone.common.wsgi result = method(req, **params)
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 132, in 
authenticate_for_token
  ERROR keystone.common.wsgi auth_context['user_id'], method_names_set):
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/auth/core.py", line 377, in 
check_auth_methods_against_rules
  ERROR keystone.common.wsgi mfa_rules = 
user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
  ERROR keystone.common.wsgi KeyError: 'options'
  
- 
  dikonoor> dstanek:I am trying to understand if 'options' is a mandatory 
attribute in user_ref.
   dstanek: and how it gets populated
   dikonoor: it appears that it is mandatory and that we only added it 
to the SQL model
   i think maybe the LDAP model should always have an empty options 
dictionary as an attribute
   morgan: ^ does that sound correct?
   dstanek:morgan: either an empty options attribute should be added 
or the MFA rule check code above must be modified to make it 
user_ref.get('options') ..Let me go ahead and open a defect for this
   dikonoor: i prefer empty to the models look the same
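
A minimal sketch of the defensive variant discussed above (the empty-options
fix in the LDAP model was the direction preferred on IRC):

    # tolerate identity drivers whose user_ref lacks an 'options' key
    mfa_rules = user_ref.get('options', {}).get(ro.MFA_RULES_OPT.option_name, [])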

** Description changed:

  I have an OpenStack master with an LDAP server configured (fernet token
  provider). With the new changes around MFA rules
  (https://blueprints.launchpad.net/keystone/+spec/per-user-auth-plugin-
  reqs), I see that the authentication (POST /token) call fails at
  
https://github.com/openstack/keystone/blob/029476272fb869c6413aa4e70f4cae6f890e598f/keystone/auth/core.py#L377
  
  def check_auth_methods_against_rules(self, user_id, auth_methods):
  user_ref = self.identity_api.get_user(user_id)
  mfa_rules = user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
  
  In the last line the code flow expects user_ref to always have an
  options attribute and this is not present for LDAP users due to which we
  get the below and authentication fails
  
  INFO keystone.common.wsgi [req-279e9036-6c6a-4fc8-9dfe-1d219931195c - - - - 
-] POST https://ip9-114-192-140.pok.stglabs.ibm.com:5000/v3/auth/tokens
  ERROR keystone.common.wsgi [req-279e9036-6c6a-4fc8-9dfe-1d219931195c - - - - 
-] 'options'
  ERROR keystone.common.wsgi Traceback (most recent call last):
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in 
__call__
  ERROR keystone.common.wsgi result = method(req, **params)
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 132, in 
authenticate_for_token
  ERROR keystone.common.wsgi auth_context['user_id'], method_names_set):
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/auth/core.py", line 377, in 
check_auth_methods_against_rules
  ERROR keystone.common.wsgi mfa_rules = 
user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
  ERROR keystone.common.wsgi KeyError: 'options'
  
- dikonoor> dstanek:I am trying to understand if 'options' is a mandatory 
attribute in user_ref.
-  dstanek: and how it gets populated
-  dikonoor: it appears that it is mandatory and that we only added it 
to the SQL model
-  i think maybe the LDAP model should always have an empty options 
dictionary as an attribute
-  morgan: ^ does that sound correct?
-  dstanek:morgan: either an empty options attribute should be added 
or the MFA rule check code above must be modified to make it 

[Yahoo-eng-team] [Bug 1662900] Re: test_volume_swap fails on the gate

2017-02-08 Thread Michal Dulko
Moving this to Nova, as it looks [1] like it's caused by failures in
Nova's services.

[1] http://logs.openstack.org/49/414549/4/check/gate-tempest-dsvm-
neutron-full-ubuntu-
xenial/58aecae/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-02-07_10_06_17_855

** Project changed: cinder => nova-project

** Project changed: nova-project => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662900

Title:
  test_volume_swap fails on the gate

Status in OpenStack Compute (nova):
  New

Bug description:
  test_volume_swap started to fail often [1] with:

  Captured traceback-1:
  ~
  Traceback (most recent call last):
File "tempest/common/waiters.py", line 202, in wait_for_volume_status
  raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Volume 5a071400-4b93-4045-aab9-457c7125e9ab failed to reach 
available status (current in-use) within the required time (196 s).

  Logstash tells us that this started between 3 and 4 February.

  [1]
  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22deprecated%5C%22%20AND%20loglevel:%5C%22WARNING%5C%22%20AND%20build_branch:%5C%22master%5C%22
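
  For context, a rough sketch of the kind of polling loop tempest's
  wait_for_volume_status implements (names and defaults are illustrative,
  not tempest's exact code):

    import time

    class TimeoutException(Exception):
        pass

    def wait_for_volume_status(client, volume_id, status='available',
                               build_timeout=196, build_interval=1):
        # Poll the volume until it reaches the wanted status, raising
        # the timeout error quoted in the traceback above otherwise.
        start = time.time()
        volume = client.show_volume(volume_id)['volume']
        while volume['status'] != status:
            if time.time() - start >= build_timeout:
                raise TimeoutException(
                    'Volume %s failed to reach %s status (current %s) '
                    'within the required time (%s s).'
                    % (volume_id, status, volume['status'], build_timeout))
            time.sleep(build_interval)
            volume = client.show_volume(volume_id)['volume']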

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1662900/+subscriptions



[Yahoo-eng-team] [Bug 1662900] [NEW] test_volume_swap fails on the gate

2017-02-08 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

test_volume_swap started to fail often [1] with:

Captured traceback-1:
~
Traceback (most recent call last):
  File "tempest/common/waiters.py", line 202, in wait_for_volume_status
raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: Volume 5a071400-4b93-4045-aab9-457c7125e9ab failed to reach 
available status (current in-use) within the required time (196 s).

Logstash tells us that this started between 3 and 4 February.

[1]
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22deprecated%5C%22%20AND%20loglevel:%5C%22WARNING%5C%22%20AND%20build_branch:%5C%22master%5C%22

** Affects: nova
 Importance: Undecided
 Status: New

-- 
test_volume_swap fails on the gate
https://bugs.launchpad.net/bugs/1662900
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1662911] [NEW] v3 API create_user does not use default_project_id

2017-02-08 Thread Doug Hellmann
Public bug reported:

The v3 call to create a user doesn't use the default_project_id argument
except to validate it.

https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L918-L919

This caused problems when updating grenade to allow the ocata->pike
tests to run, because the user was not set up with a default role as it
had been under v2.

https://review.openstack.org/#/c/427916/1

http://logs.openstack.org/16/427916/1/check/gate-grenade-dsvm-neutron-ubuntu-xenial/56b7a7d/logs/apache/keystone.txt.gz?level=WARNING#_2017-02-08_13_48_57_247
http://logs.openstack.org/16/427916/1/check/gate-grenade-dsvm-neutron-ubuntu-xenial/56b7a7d/logs/grenade.sh.txt.gz#_2017-02-08_13_48_54_600
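
For illustration, the validation-only behaviour next to a hedged sketch of
what honouring the argument could look like (the create_grant call, the
member_role_id parameter and the standalone function shape are assumptions
for this example, not keystone's actual fix):

    def create_user(identity_api, resource_api, assignment_api,
                    user_ref, member_role_id):
        default_project_id = user_ref.get('default_project_id')
        # Current behaviour: the project id is only checked for existence.
        if default_project_id is not None:
            resource_api.get_project(default_project_id)

        user = identity_api.create_user(user_ref)

        # Hypothetical: also grant a default role on that project, which
        # is roughly what the v2 default-tenant behaviour provided.
        if default_project_id is not None:
            assignment_api.create_grant(role_id=member_role_id,
                                        user_id=user['id'],
                                        project_id=default_project_id)
        return user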

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1662911

Title:
  v3 API create_user does not use default_project_id

Status in OpenStack Identity (keystone):
  New

Bug description:
  The v3 call to create a user doesn't use the default_project_id
  argument except to validate it.

  
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L918-L919

  This caused problems when updating grenade to allow the ocata->pike
  tests to run, because the user was not set up with a default role as
  it had been under v2.

  https://review.openstack.org/#/c/427916/1

  
http://logs.openstack.org/16/427916/1/check/gate-grenade-dsvm-neutron-ubuntu-xenial/56b7a7d/logs/apache/keystone.txt.gz?level=WARNING#_2017-02-08_13_48_57_247
  
http://logs.openstack.org/16/427916/1/check/gate-grenade-dsvm-neutron-ubuntu-xenial/56b7a7d/logs/grenade.sh.txt.gz#_2017-02-08_13_48_54_600

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1662911/+subscriptions



[Yahoo-eng-team] [Bug 1649446] Re: Non-Admin Access to Revocation Events

2017-02-08 Thread Frode Nordahl
Fix proposed on branch: master
Review: https://review.openstack.org/#/c/428759/

** Also affects: keystone (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Changed in: keystone (Juju Charms Collection)
   Status: New => In Progress

** Changed in: keystone (Juju Charms Collection)
 Assignee: (unassigned) => Frode Nordahl (fnordahl)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649446

Title:
  Non-Admin Access to Revocation Events

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in keystone package in Juju Charms Collection:
  In Progress

Bug description:
  With the default Keystone policy any authed user can list all revocation 
events for the cluster:
  https://github.com/openstack/keystone/blob/master/etc/policy.json#L179

  This can be done by directly calling the API as such:
  curl -g -i -X GET http://localhost/identity/v3/OS-REVOKE/events -H "Accept: 
application/json" -H "X-Auth-Token: "

  and this will provide you with a normal revocation event list (see
  attachment).

  This will allow a user, over time, to collect a list of user_ids and
  project_ids. The project_ids aren't particularly useful, but the
  user_ids can be used to lock people out of their accounts. Or, if rate
  limiting is not set up (a bad idea) or is somehow bypassed, it would
  allow someone to brute-force access to those ids.

  Knowing the ids is no worse than knowing the usernames, but as a non-
  admin you shouldn't have access to such a list anyway.

  It is also worth noting that OpenStack policy files are rife with
  these blank policy rules, not just Keystone. Some are safe and
  intended to be accessible by any authed user, others are checked at
  the code layer, but there may be other rules that are unsafe to expose
  to any authed user and as such should actually default to
  "rule:admin_required" or something other than blank.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649446/+subscriptions



[Yahoo-eng-team] [Bug 1605832] Re: no 8021q support

2017-02-08 Thread Jakub Libosvar
Solved by using Ubuntu images

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605832

Title:
  no 8021q support

Status in CirrOS:
  New
Status in neutron:
  Invalid

Bug description:
  Apologies if this is not the right place for a feature request.

  In OpenStack Neutron we are developing a feature to allow VMs to
  send tagged VLANs, and we would like end-to-end testing support for it
  (all of which is currently based on cirros). However, Cirros doesn't
  appear to support creating VLAN interfaces:

  $ sudo ip link add link eth0 name eth0.99 type vlan id 99
  ip: RTNETLINK answers: Operation not supported

  
  Is it possible to have the 8021q kernel module loaded into cirros, or would 
that require too much space?

  
  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1605832/+subscriptions



[Yahoo-eng-team] [Bug 1662869] [NEW] Multiple attempts to detach and disconnect volumes during rebuild

2017-02-08 Thread Lee Yarwood
Public bug reported:

Description
===
The following was noticed during a CI run for 
https://review.openstack.org/#/c/383859/ :

http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
devstack-plugin-nfs-
nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_17_41_994

This is due to rebuild calling for two separate detach/disconnects of a
volume when using the libvirt virt driver, once in _rebuild_default_impl
in the compute layer and a second time in cleanup within the virt driver:

https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2653 - 
_rebuild_default_impl
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L989 
- cleanup
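
One way to avoid the duplicate work would be to make the disconnect
idempotent; a hedged sketch follows (the 'connected' bookkeeping flag is
invented for this example and is not nova's actual fix):

    def disconnect_volume_once(virt_driver, connection_info, instance):
        # Skip volumes that an earlier code path (for example
        # _rebuild_default_impl) has already disconnected.
        if not connection_info.get('connected', True):
            return
        virt_driver.disconnect_volume(connection_info, instance)
        connection_info['connected'] = False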

In the logs req-e976fee4-51df-4119-b505-5d68f4583186 tracks the rebuild
attempt. We see the first attempt to umount succeed here:

http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
devstack-plugin-nfs-
nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ALL#_2017-02-07_19_17_39_904

We then see the second attempt here and again an ERROR is logged as we
don't find the mount to be in use:

http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
devstack-plugin-nfs-
nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ALL#_2017-02-07_19_17_41_993

Steps to reproduce
==
Rebuild an instance with volumes attached

Expected result
===
Only one attempt is made to detach and disconnect each volume from the original 
instance.

Actual result
=
Two attempts are made to detach and disconnect each volume from the original 
instance.

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

   https://review.openstack.org/#/c/383859/ - but it should reproduce
against master.

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   Libvirt

3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   n/a

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   n/a

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662869

Title:
  Multiple attempts to detach and disconnect volumes during rebuild

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The following was noticed during a CI run for 
https://review.openstack.org/#/c/383859/ :

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_17_41_994

  This is due to rebuild calling for two separate detach/disconnects of
  a volume when using the libvirt virt driver, once in
  _rebuild_default_impl in the compute layer and a second time in
  cleanup within the virt driver:

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2653 - 
_rebuild_default_impl
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L989 
- cleanup

  In the logs req-e976fee4-51df-4119-b505-5d68f4583186 tracks the
  rebuild attempt. We see the first attempt to umount succeed here:

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ALL#_2017-02-07_19_17_39_904

  We then see the second attempt here and again an ERROR is logged as we
  don't find the mount to be in use:

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ALL#_2017-02-07_19_17_41_993

  Steps to reproduce
  ==
  Rebuild an instance with volumes attached

  Expected result
  ===
  Only one attempt is made to detach and disconnect each volume from the 
original instance.

  Actual result
  =
  Two attempts are made to detach and disconnect each volume from the original 
instance.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 https://review.openstack.org/#/c/383859/ - but it should reproduce
  against master.

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 n/a

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 n/a

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1662869/+subscriptions


[Yahoo-eng-team] [Bug 1662867] [NEW] update_available_resource_for_node racing instance deletion

2017-02-08 Thread Lee Yarwood
Public bug reported:

Description
===
The following trace was seen multiple times during a CI run for 
https://review.openstack.org/#/c/383859/ :

http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_10_25_548
http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_15_26_004

In the first example, a request to terminate the instance 60b7cb32
appears to race an existing run of the
update_available_resource_for_node periodic task:

req-fa96477b-34d2-4ab6-83bf-24c269ed7c28

http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
devstack-plugin-nfs-
nv/a4c1057/logs/screen-n-cpu.txt.gz?#_2017-02-07_19_10_25_478

req-dc60ed89-d3da-45f6-b98c-8f57c767d751

http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
devstack-plugin-nfs-
nv/a4c1057/logs/screen-n-cpu.txt.gz?#_2017-02-07_19_10_25_548

Steps to reproduce
==
Delete an instance while update_available_resource_for_node is running

Expected result
===
Either swallow the exception and move on or lock instances in such a way that 
they can't be removed while this periodic task is running.

Actual result
=
update_available_resource_for_node fails and stops.
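
A minimal sketch of the first option, tolerating the race inside the
periodic task (the exception type is nova's public InstanceNotFound; the
helper names are illustrative):

    from nova import exception

    def update_usage_for_instances(resource_tracker, context, instances):
        for instance in instances:
            try:
                resource_tracker._update_usage_from_instance(context,
                                                             instance)
            except exception.InstanceNotFound:
                # The instance was deleted while the periodic task was
                # running; skip it instead of aborting the whole run.
                continue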

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

   https://review.openstack.org/#/c/383859/ - but it should reproduce
against master.

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   Libvirt

3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   n/a

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   n/a

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662867

Title:
  update_available_resource_for_node racing instance deletion

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The following trace was seen multiple times during a CI run for 
https://review.openstack.org/#/c/383859/ :

  
http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_10_25_548
  
http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_15_26_004

  In the first example, a request to terminate the instance 60b7cb32
  appears to race an existing run of the
  update_available_resource_for_node periodic task:

  req-fa96477b-34d2-4ab6-83bf-24c269ed7c28

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?#_2017-02-07_19_10_25_478

  req-dc60ed89-d3da-45f6-b98c-8f57c767d751

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-
  devstack-plugin-nfs-
  nv/a4c1057/logs/screen-n-cpu.txt.gz?#_2017-02-07_19_10_25_548

  Steps to reproduce
  ==
  Delete an instance while update_available_resource_for_node is running

  Expected result
  ===
  Either swallow the exception and move on or lock instances in such a way that 
they can't be removed while this periodic task is running.

  Actual result
  =
  update_available_resource_for_node fails and stops.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 https://review.openstack.org/#/c/383859/ - but it should reproduce
  against master.

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 n/a

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 n/a

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1662867/+subscriptions



[Yahoo-eng-team] [Bug 1596829] Re: String interpolation should be delayed at logging calls

2017-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/430348
Committed: 
https://git.openstack.org/cgit/openstack/python-manilaclient/commit/?id=8d67ca5cf470b1a0e339cd519aa2ad1a6b044292
Submitter: Jenkins
Branch:master

commit 8d67ca5cf470b1a0e339cd519aa2ad1a6b044292
Author: Gábor Antal 
Date:   Tue Feb 7 17:32:27 2017 +0100

Handle log message interpolation by the logger

According to OpenStack Guideline[1], logged string message should be
interpolated by the logger.

[1]: 
http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
Closes-Bug: #1596829

Change-Id: I0c4a2a1cce98dbf78dd30850951466cd01491cfc


** Changed in: python-manilaclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596829

Title:
  String interpolation should be delayed at logging calls

Status in congress:
  Fix Released
Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in heat:
  New
Status in Ironic:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in Glance Client:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-troveclient:
  In Progress

Bug description:
  String interpolation should be delayed to be handled by the logging
  code, rather than being done at the point of the logging call.

  Wrong: LOG.debug('Example: %s' % 'bad')
  Right: LOG.debug('Example: %s', 'good')

  See the following guideline.

  * http://docs.openstack.org/developer/oslo.i18n/guidelines.html
  #adding-variables-to-log-messages

  The rule for it should be added to hacking checks.
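
  For illustration, a rough sketch of such a hacking check (the message
  and regex are assumptions; project hacking modules follow this
  logical-line generator pattern):

    import re

    # Matches LOG.<level>('...' % ...), i.e. interpolation performed at
    # the call site instead of being delayed to the logger.
    delayed_interpolation_re = re.compile(
        r"LOG\.(error|warning|info|critical|exception|debug)"
        r"\(\s*(['\"]).*?\2\s*%")

    def check_delayed_string_interpolation(logical_line):
        if delayed_interpolation_re.search(logical_line):
            yield (0, "string interpolation should be delayed to be "
                      "handled by the logging code")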

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1596829/+subscriptions



[Yahoo-eng-team] [Bug 1662102] Re: Enhance tag mechanism

2017-02-08 Thread Alexandra Settle
Copying across action items from duplicate bug #1662644:

Armando Migliaccio (armando-migliaccio) wrote 12 hours ago: #1
We should probably make additions to the base stuff already captured here:

http://docs.openstack.org/mitaka/networking-guide/ops-resource-tags.html

We should review in-tree developer documentation to see whether there's
anything missing.

Changed in neutron:
importance: Undecided → Wishlist
assignee:   nobody → Hirofumi Ichihara (ichihara-hirofumi)
Hirofumi Ichihara (ichihara-hirofumi) wrote 4 hours ago: #2
Yeah, I'll update devref in Neutron tree.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662102

Title:
  Enhance tag mechanism

Status in neutron:
  New
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/413662
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit b56f008f3a01e5dbbf5b0744a9286a8302c3326a
  Author: Hirofumi Ichihara 
  Date:   Thu Jan 19 13:52:39 2017 +0900

  Enhance tag mechanism
  
  This patch enhances the tag mechanism for subnet, port, subnetpool,
  router resources. The tag-ext as new extension is added so that
  tag supports their resources.
  
  APIImpact: Adds tag support to subnet, port, subnetpool, router
  DocImpact: allow users to set tags on some resources
  
  Change-Id: I3ab8c2f47f283bee7219f39f20b07361b8e0c5f1
  Closes-Bug: #1661608
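
  For the docs update, the calls added by tag-ext follow the existing
  network-tag HTTP pattern described in the networking guide linked
  above; a short example against a port (endpoint, token and IDs are
  illustrative):

    import requests

    NEUTRON = 'http://controller:9696/v2.0'    # illustrative endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}  # illustrative token
    port = 'PORT_UUID'

    # Add a single tag to a port (tag-ext extends the same calls to
    # subnet, subnetpool and router).
    requests.put('%s/ports/%s/tags/red' % (NEUTRON, port),
                 headers=HEADERS)

    # Replace the whole tag set in one call.
    requests.put('%s/ports/%s/tags' % (NEUTRON, port),
                 headers=HEADERS, json={'tags': ['red', 'blue']})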

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662102/+subscriptions



[Yahoo-eng-team] [Bug 1662644] Re: Enhance tag mechanism

2017-02-08 Thread Alexandra Settle
*** This bug is a duplicate of bug 1662102 ***
https://bugs.launchpad.net/bugs/1662102

Yes, this has been passed and merged.

I'll have to mark this one as a duplicate for manuals:
https://bugs.launchpad.net/openstack-manuals/+bug/1662102

** This bug has been marked a duplicate of bug 1662102
   Enhance tag mechanism

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662644

Title:
  Enhance tag mechanism

Status in neutron:
  New
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/429621
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit fbf40fe1baac06ce570b660a0a4118e2030c668d
  Author: Hirofumi Ichihara 
  Date:   Thu Jan 19 13:52:39 2017 +0900

  Enhance tag mechanism
  
  This patch enhances the tag mechanism for subnet, port, subnetpool,
  router resources. The tag-ext as new extension is added so that
  tag supports their resources.
  
  APIImpact: Adds tag support to subnet, port, subnetpool, router
  DocImpact: allow users to set tags on some resources
  
  Change-Id: I3ab8c2f47f283bee7219f39f20b07361b8e0c5f1
  Closes-Bug: #1661608
  (cherry picked from commit b56f008f3a01e5dbbf5b0744a9286a8302c3326a)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662644/+subscriptions



[Yahoo-eng-team] [Bug 1662821] [NEW] provider bridge is not created in controller node/newton/ubuntu 16.04

2017-02-08 Thread Sothy
Public bug reported:

Hello,
I am running Newton OpenStack on Ubuntu 16.04. I have one controller 
node and one compute node. I created a network on the provider network as 
shown in 
http://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-provider.html

However, I didn't see any provider bridge on my controller. 
ifconfig shows only one tap interface, and I have one netns 
(qdhcp-d6aee39b-8a97-4a69-98c7-9d94093f54af)

I ping from the qdhcp namespace with the following command:

sudo ip netns exec qdhcp-d6aee39b-8a97-4a69-98c7-9d94093f54af ping 203.0.113.111
That is the IP address of the VM. 

The message "Destination Host Unreachable" is shown.

I tried to debug it. I found the provider bridge is not created on the
controller node. tcpdump shows ARP requests, but the provider interface
does not show anything.

In contrast, the compute node has a provider bridge. I am using the Linux
bridge agent on both the controller and compute nodes, and both are running.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662821

Title:
  provider bridge is not created in controller node/newton/ubuntu 16.04

Status in neutron:
  New

Bug description:
  Hello,
  I am running Newton OpenStack on Ubuntu 16.04. I have one controller 
node and one compute node. I created a network on the provider network as 
shown in 
http://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-provider.html

  However, I didn't see any provider bridge on my controller. 
  ifconfig shows only one tap interface, and I have one netns 
(qdhcp-d6aee39b-8a97-4a69-98c7-9d94093f54af)

  I ping from the qdhcp namespace with the following command:

  sudo ip netns exec qdhcp-d6aee39b-8a97-4a69-98c7-9d94093f54af ping 
203.0.113.111
  That is the IP address of the VM. 

  The message "Destination Host Unreachable" is shown.

  I tried to debug it. I found the provider bridge is not created on the
  controller node. tcpdump shows ARP requests, but the provider
  interface does not show anything.

  In contrast, the compute node has a provider bridge. I am using the
  Linux bridge agent on both the controller and compute nodes, and both
  are running.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662821/+subscriptions



[Yahoo-eng-team] [Bug 1662820] [NEW] test_volume_swap failed in Kaminario Cinder Driver CI

2017-02-08 Thread nikesh
Public bug reported:

The following test is failing in our Kaminario Cinder Driver CI:
tempest.api.compute.admin.test_volume_swap.TestVolumeSwap.test_volume_swap 
[421.842922s] ... FAILED

The following traceback appears in the n-cpu.log:
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager 
[req-3dcb5fb5-ca83-4e80-bfae-dbc96cc4d0de tempest-TestVolumeSwap-1094009596 
tempest-TestVolumeSwap-1094009596] [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] Failed to swap volume 
a86302ea-9104-4084-9825-b863156f4964 for 48910757-0f3b-47ba-a278-e36ddf62d415
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] Traceback (most recent call last):
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 4982, in _swap_volume
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] resize_to)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1303, in swap_volume
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] self._swap_volume(guest, disk_dev, 
conf.source_path, resize_to)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1264, in _swap_volume
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] dev.abort_job(pivot=True)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 704, in abort_job
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] 
self._guest._domain.blockJobAbort(self._disk, flags=flags)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] rv = execute(f, *args, **kwargs)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] six.reraise(c, e, tb)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] rv = meth(*args, **kwargs)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 650, in blockJobAbort
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] if ret == -1: raise libvirtError 
('virDomainBlockJobAbort() failed', dom=self)
2017-02-08 07:45:53.873 31516 ERROR nova.compute.manager [instance: 
b9f0e4a3-0913-42fd-ab98-c3bd2541c7b5] libvirtError: Requested operation is not 
valid: pivot of disk 'vdb' requires an active copy job
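
The error means blockJobAbort() was asked to pivot before the block copy
job had caught up; a minimal sketch of the wait-then-pivot pattern with
the libvirt-python API (the readiness test is simplified compared to
nova's driver code):

    import time

    import libvirt

    def pivot_when_ready(dom, disk, timeout=60):
        # Wait for the block copy job on `disk` to catch up, then pivot
        # the domain onto the new source.
        deadline = time.time() + timeout
        while time.time() < deadline:
            info = dom.blockJobInfo(disk, 0)
            # Empty dict: no job; cur == end: the copy has caught up.
            if info and info['cur'] == info['end']:
                dom.blockJobAbort(
                    disk, libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)
                return
            time.sleep(0.5)
        raise RuntimeError('block copy job on %s never became ready' % disk)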


Other details:
virt_type = qemu


$ dpkg -l | grep qemu
ii  ipxe-qemu           1.0.0+git-2013.c3d1e78-2ubuntu1.1  all    PXE boot firmware - ROM images for qemu
ii  qemu-keymaps        2.0.0+dfsg-2ubuntu1.31             all    QEMU keyboard maps
ii  qemu-system         2.0.0+dfsg-2ubuntu1.31             amd64  QEMU full system emulation binaries
ii  qemu-system-arm     2.0.0+dfsg-2ubuntu1.31             amd64  QEMU full system emulation binaries (arm)
ii  qemu-system-common  2.0.0+dfsg-2ubuntu1.31             amd64  QEMU full system emulation binaries (common files)
ii  qemu-system-mips    2.0.0+dfsg-2ubuntu1.31             amd64  QEMU full system emulation binaries (mips)
ii  qemu-system-misc    2.0.0+dfsg-2ubuntu1.31             amd64  QEMU full system emulation binaries 

[Yahoo-eng-team] [Bug 1662804] [NEW] Agent is failing to process HA router if initialize() fails

2017-02-08 Thread venkata anil
Public bug reported:

When the HA router initialize() function fails for some reason (rabbitmq
restart or no ha_port), the keepalived_manager or KeepalivedInstance
won't be configured. In this case, _process_router_if_compatible fails
with an exception, and _resync_router(update) will try to process this
router again in a loop. As initialize() is only attempted once (the
attempt that failed), every retry of _process_router_if_compatible will
fail again (there is no keepalived manager or instance) and the router
is never configured (see the trace below).
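
A hedged sketch of one possible guard, retrying initialization from the
router's process() path when the keepalived manager is missing (the retry
placement and the use of agent.process_monitor are assumptions, not the
merged fix):

    def process(self, agent):
        if self.keepalived_manager is None:
            # initialize() failed earlier (e.g. rabbitmq restart or a
            # missing ha_port); retry it so the resync loop can make
            # progress instead of raising AttributeError on every pass.
            self.initialize(agent.process_monitor)
        super(HaRouter, self).process(agent)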

2017-02-06 18:34:18.539 26120 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qrouter-114a72fe-02ae-4b87-a2e7-70f962df0951', 'ip', '-o', 'link', 'show', 
'qr-e6
3406e1-e7'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:101
2017-02-06 18:34:18.544 26120 DEBUG neutron.agent.linux.utils [-]
Command: ['ip', 'netns', 'exec', 
u'qrouter-114a72fe-02ae-4b87-a2e7-70f962df0951', 'ip', '-o', 'link', 'show', 
u'qr-e63406e1-e7']
Exit code: 0
 execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:156
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info [-] 'NoneType' 
object has no attribute 'get_process'
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 359, in call
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 744, 
in process
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info 
self._process_internal_ports(agent.pd)
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 394, 
in _process_internal_ports
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info 
self.internal_network_added(p)
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 275, in 
internal_network_added
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info 
self._disable_ipv6_addressing_on_interface(interface_name)
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 235, in 
_disable_ipv6_addressing_on_interface
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info if 
self._should_delete_ipv6_lladdr(ipv6_lladdr):
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 217, in 
_should_delete_ipv6_lladdr
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info if 
manager.get_process().active:
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info 
AttributeError: 'NoneType' object has no attribute 'get_process'
2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent [-] Failed to 
process compatible router '114a72fe-02ae-4b87-a2e7-70f962df0951'
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 506, in 
_process_router_update
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 445, in 
_process_router_if_compatible
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent 
self._process_updated_router(router)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 459, in 
_process_updated_router
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent ri.process(self)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 377, in 
process
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 362, in call
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent self.logger(e)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)