[Yahoo-eng-team] [Bug 1580843] Re: wrong id returned when execute "image import"

2016-05-13 Thread Wenjun Wang
Thanks for wangxiyuan's comments; it's not a bug.

** Changed in: glance
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1580843

Title:
  wrong id returned when execute "image import"

Status in Glance:
  Invalid

Bug description:
  The CMD I used to import an image:

  glance --os-image-api-version 2 task-create --type import --input
  '{"import_from":
  "http://10.43.176.8/images/cirros-0.3.2-x86_64-disk.img","import_from_format":
  "qcow2","image_properties": {"disk_format":
  "qcow2","container_format": "bare"}}'

  This CMD returned as follows:

  +------------+------------------------------------------------------------+
  | Property   | Value                                                      |
  +------------+------------------------------------------------------------+
  | created_at | 2016-05-12T02:58:50Z                                       |
  | id         | 2e7e9564-95fe-4e87-8d32-072c8574e8c5                       |
  | input      | {"image_properties": {"container_format": "bare",          |
  |            | "disk_format": "qcow2"}, "import_from_format": "qcow2",    |
  |            | "import_from":                                             |
  |            | "http://10.43.176.8/images/cirros-0.3.2-x86_64-disk.img"}  |
  | message    |                                                            |
  | owner      | d8f2596f60b4481e83a99f6644619fe5                           |
  | result     | None                                                       |
  | status     | pending                                                    |
  | type       | import                                                     |
  | updated_at | 2016-05-12T02:58:50Z                                       |
  +------------+------------------------------------------------------------+

  But I can't find an image with ID 2e7e9564-95fe-4e87-8d32-072c8574e8c5; the
  image list I got is:
  +--+-+
  | ID   | Name|
  +--+-+
  | cf7e0ec2-bf81-4f86-97f2-14d884a57bc2 | |
  | 7136811f-e47e-4832-9088-0ab5f0990ebd | |
  | 3ce6c411-1bd6-490c-b7e6-a70319e244e2 | cirros-0.3.4-x86_64-uec |
  | 133a819a-9cac-4466-8d9c-33ea17438abd | cirros-0.3.4-x86_64-uec-kernel  |
  | 98c4f810-3489-4b57-959a-2625b75d55c8 | cirros-0.3.4-x86_64-uec-ramdisk |
  +--+-+

  I'm sure that image with ID "7136811f-e47e-4832-9088-0ab5f0990ebd" is
  a new one.

  I got these results on Mitaka.
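For reference, the behavior behind wangxiyuan's resolution is that `task-create` returns a *task* ID, not an image ID; the new image's ID only appears in the task's `result` field once the import succeeds. A minimal sketch of extracting it (the `"image_id"` key is the v2 tasks API convention; treat the exact payload shape here as an assumption):

```python
def image_id_from_task(task):
    """Return the imported image's ID from a v2 task dict, or None.

    The top-level "id" is the *task* ID (2e7e9564-... in the report
    above) and never matches any image.  Once status reaches
    "success", the task's "result" carries the image ID.
    """
    if task.get("status") != "success":
        return None  # still pending/processing, or the import failed
    return (task.get("result") or {}).get("image_id")


# The pending task from the bug report: no image ID yet.
pending = {"id": "2e7e9564-95fe-4e87-8d32-072c8574e8c5",
           "status": "pending", "result": None}

# The same task after the import finished (hypothetical result payload).
done = dict(pending, status="success",
            result={"image_id": "7136811f-e47e-4832-9088-0ab5f0990ebd"})

print(image_id_from_task(pending))  # None
print(image_id_from_task(done))     # 7136811f-e47e-4832-9088-0ab5f0990ebd
```

So the "wrong id" in the report is expected: polling `task-show` until success and reading the result is how the image ID is obtained.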

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1580843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261485] Re: Error messages do not convey problem or resolution

2016-05-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1261485

Title:
  Error messages do not convey problem or resolution

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Error messages frequently do not indicate why the error occurred or
  how the user should address the problem to avoid the error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1261485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581325] Re: test_dhcp_agent_main_agent_manager fails in UT

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/315498
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9e663477f5d486f1625372c9798142758fbfec90
Submitter: Jenkins
Branch: master

commit 9e663477f5d486f1625372c9798142758fbfec90
Author: Davanum Srinivas 
Date:   Thu May 12 07:36:01 2016 -0400

Avoid testing oslo.service library internals

oslo.service===1.10.0 added a few kwargs with proper defaults. In
our tests we seem to be testing library internals, causing the tests
to fail with the new library version.

As a workaround to unblock the gate, add modified test code
to pass with both the old 1.9.0 and 1.10.0 of the oslo.service
library.

Change-Id: Ife69c319409a4214658bc94498108b8b3c5db8cb
Closes-Bug: #1581325


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581325

Title:
  test_dhcp_agent_main_agent_manager fails in UT

Status in neutron:
  Fix Released

Bug description:
  We can see a UT failure [1]. oslo.service 1.10.0 caused this error.

  ft8.16: neutron.tests.unit.agent.dhcp.test_agent.TestDhcpAgent.test_dhcp_agent_main_agent_manager_StringException:
  Empty attachments:
    pythonlogging:''
    stderr
    stdout

  Traceback (most recent call last):
    File "neutron/tests/unit/agent/dhcp/test_agent.py", line 287, in test_dhcp_agent_main_agent_manager
      mock.call().wait()])
    File "/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 969, in assert_has_calls
      ), cause)
    File "/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/six.py", line 718, in raise_from
      raise value
  AssertionError: Calls not found.
  Expected: [call(),
   call().launch_service(),
   call().wait()]
  Actual: [call(, restart_method='reload'),
   call().launch_service(, workers=1),
   call().wait()]

  [1]: http://logs.openstack.org/92/314492/1/gate/gate-neutron-python27/c2878bb/testr_results.html.gz
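The mismatch above can be reproduced with `unittest.mock` alone. A strict `assert_has_calls` pinned to the old call shape fails as soon as the library starts passing kwargs internally; matching the argument values with `mock.ANY` tolerates both library versions while still checking the call sequence. This is a sketch of the workaround idea, not the actual committed test:

```python
from unittest import mock

# Simulate what the newer oslo.service does internally: the same call
# sequence, but with arguments/kwargs the old test never expected.
launcher_cls = mock.MagicMock()
launcher = launcher_cls(mock.sentinel.conf, restart_method='reload')
launcher.launch_service(mock.sentinel.service, workers=1)
launcher.wait()

# The old, strict expectation no longer matches:
try:
    launcher_cls.assert_has_calls(
        [mock.call(), mock.call().launch_service(), mock.call().wait()])
    strict_passed = True
except AssertionError:
    strict_passed = False
print(strict_passed)  # False

# Matching values with mock.ANY still verifies the call sequence
# without pinning the test to the library's internal defaults:
launcher_cls.assert_has_calls([
    mock.call(mock.ANY, restart_method=mock.ANY),
    mock.call().launch_service(mock.ANY, workers=mock.ANY),
    mock.call().wait(),
])
```

The general lesson from the commit message applies: asserting on a dependency's internal call signatures couples tests to that dependency's version.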

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580831] Re: Unit test: the path of block_devs is wrong in kilo.

2016-05-13 Thread yuyafei
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580831

Title:
  Unit test: the path of block_devs is wrong in kilo.

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  In kilo, the path of block_devs is
  ['/dev/disks/by-path/%s-iscsi-%s-lun-2' % (location, iqn)], but the real
  path is ['/dev/disk/by-path/%s-iscsi-%s-lun-2' % (location, iqn)] in the
  nova unit test case:
  nova.tests.unit.virt.libvirt.test_volume.LibvirtVolumeTestCase.test_libvirt_kvm_volume_with_multipath_still_in_use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1580831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581246] Re: Ironic driver: _cleanup_deploy is called with incorrect parameters

2016-05-13 Thread Matt Riedemann
Yeah, it doesn't raise that up because it's in an
excutils.save_and_reraise_exception context.

** Tags added: ironic

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581246

Title:
  Ironic driver: _cleanup_deploy is called with incorrect parameters

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  stable/mitaka release.
  If an error happens in _generate_configdrive, the Ironic driver fails
  cleanup because of:

  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 7f8769b3-145a-4b81-8175-e6aa648e1c2a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in _build_resources
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 7f8769b3-145a-4b81-8175-e6aa648e1c2a]     yield resources
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 7f8769b3-145a-4b81-8175-e6aa648e1c2a]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 7f8769b3-145a-4b81-8175-e6aa648e1c2a]     block_device_info=block_device_info)
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 7f8769b3-145a-4b81-8175-e6aa648e1c2a]   File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 748, in spawn
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 7f8769b3-145a-4b81-8175-e6aa648e1c2a]     flavor=flavor)
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 7f8769b3-145a-4b81-8175-e6aa648e1c2a] TypeError: _cleanup_deploy() takes exactly 4 arguments (6 given)

  Call:
  https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/ironic/driver.py#L747
  Function definition:
  https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/ironic/driver.py#L374
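The failure mode is plain Python arity checking: the stable/mitaka call site passes six arguments to a method whose signature accepts only four, so the cleanup path itself blows up. A minimal reproduction with hypothetical parameter names (not the driver's real signature):

```python
# Hypothetical 4-parameter signature, standing in for the mitaka
# _cleanup_deploy definition:
def _cleanup_deploy(context, node, instance, network_info):
    pass


message = ""
try:
    # A call site that still passes two extra arguments that an older
    # signature accepted (names here are illustrative only):
    _cleanup_deploy("ctxt", "node", "instance", "netinfo", "flavor", True)
except TypeError as exc:
    message = str(exc)
print(message)
```

Python 2 phrases the error as "takes exactly 4 arguments (6 given)", matching the trace above; Python 3 reports "takes 4 positional arguments but 6 were given". Either way, because the call happens during error cleanup, the secondary TypeError masks the real cause of the failed deploy.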

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581350] Re: baremetal apis raises 500 InternalServerError if ironic baremetal service is not configured or reachable

2016-05-13 Thread Matt Riedemann
I'm almost inclined to just mark this won't fix since it's a proxy API
to Ironic and in Newton we're going to be deprecating all of these.

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581350

Title:
  baremetal apis raises 500 InternalServerError if ironic baremetal
  service is not configured or reachable

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  The baremetal APIs raise a 500 InternalServerError if the ironic
  baremetal service is not configured or reachable.

  Steps to reproduce
  ==================

  Command:
  nova baremetal-node-list

  Actual result
  =============
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: req-663fbe2c-81b6-4264-9e02-efe5283e5f8f)

  Command:
  nova baremetal-node-show 1

  Actual result
  =============
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: req-898b8986-ecd3-4d13-a819-1bcd0cf703c8)

  Expected result
  ===============
  It should return a 503 status code if the ironic baremetal service is not
  configured or reachable.

  n-API LOG:

  2016-05-13 06:34:14.337 ERROR nova.api.openstack.extensions [req-898b8986-ecd3-4d13-a819-1bcd0cf703c8 admin admin] Unexpected exception in API method
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 90, in index
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions     icli = _get_ironic_client()
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 61, in _get_ironic_client
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions     icli = ironic_client.get_client(CONF.ironic.api_version, **kwargs)
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/ironicclient/client.py", line 137, in get_client
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions     raise exc.AmbiguousAuthSystem(exception_msg)
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions AmbiguousAuthSystem: Must provide Keystone credentials or user-defined endpoint and token
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions
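The fix the reporter expected is a thin mapping layer: catch the client's configuration error where the client is built and translate it to 503, instead of letting it escape to the generic 500 handler. A sketch with stand-in names (neither the helper below nor this exact handling is nova's actual code):

```python
class AmbiguousAuthSystem(Exception):
    """Stand-in for ironicclient's 'Must provide Keystone credentials
    or user-defined endpoint and token' configuration error."""


def get_ironic_client(configured=False):
    # Mimics ironic_client.get_client() blowing up when the [ironic]
    # config section is missing or the service is unreachable.
    if not configured:
        raise AmbiguousAuthSystem("Must provide Keystone credentials "
                                  "or user-defined endpoint and token")
    return object()


def baremetal_node_list(configured=False):
    try:
        client = get_ironic_client(configured)
    except AmbiguousAuthSystem:
        # Translate the config/connectivity failure into 503 instead of
        # letting it bubble up as an unexpected 500.
        return 503, "Ironic baremetal service is not configured or reachable"
    return 200, []


print(baremetal_node_list(configured=False)[0])  # 503
print(baremetal_node_list(configured=True)[0])   # 200
```

Since the proxy APIs were slated for deprecation in Newton, Won't Fix is a reasonable trade-off even though the mapping itself is small.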

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581656] Re: Broken link to filtering section in Show floating ip details.

2016-05-13 Thread Brian Haley
Can I close this bug then?

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581656

Title:
  Broken link to filtering section in Show floating ip details.

Status in neutron:
  Invalid

Bug description:
  In repo:
  https://github.com/openstack/api-site/blob/master/api-ref/source/networking/v2-ext/layer3-ext.inc
  In the operation "Show floating IP details" here:
  http://developer.openstack.org/api-ref-networking-v2-ext.html#showFloatingIp
  it offers a link to the filtering section which is broken. The link should point to:
  https://wiki.openstack.org/wiki/Neutron/APIv2-specification#Filtering_and_Column_Selection
  The "List floating IP" operation links correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581669] [NEW] Preview Page: Modal location is off

2016-05-13 Thread Diana Whitten
Public bug reported:

Preview Page: Modal location is off

There is JavaScript in horizon that is forcing a hardcoded top offset
for the modal.  This is causing shenanigans on the theme preview page
for the lower sections.

** Affects: horizon
 Importance: Medium
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress


** Tags: branding

** Attachment added: "Screen Shot 2016-05-13 at 1.28.22 PM.png"
   https://bugs.launchpad.net/bugs/1581669/+attachment/4662447/+files/Screen%20Shot%202016-05-13%20at%201.28.22%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581669

Title:
  Preview Page: Modal location is off

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Preview Page: Modal location is off

  There is JavaScript in horizon that is forcing a hardcoded top offset
  for the modal.  This is causing shenanigans on the theme preview page
  for the lower sections.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581667] [NEW] AttributeError - 'NoneType' object has no attribute 'lower'

2016-05-13 Thread Adriano
Public bug reported:

When a service is created with region = "None", an error occurs when
listing services in Horizon.


How to test:
> create any service with region = None

Example:

+----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------+
| ID                               | Region    | Service Name | Service Type   | Enabled | Interface | URL                                      |
+----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------+
| 01089a905fec48ef957e77281c9aec92 | RegionOne | nova_legacy  | compute_legacy | True    | public    | http://10.0.99.85:8774/v2/$(project_id)s |
| 1238174be563469fa1fd34fec099bcf8 | RegionOne | cinder       | volume         | True    | admin     | http://10.0.99.85:8776/v1/$(project_id)s |
| 22303c8d5b0340cda534f0fd33fd2ac3 | None      | nova         | image          | True    | public    | http://10.0.99.85:500/v1/$(project_id)s  |
+----------------------------------+-----------+--------------+----------------+---------+-----------+------------------------------------------+

> go to http://localhost/dashboard/admin/info/

Horizon error:

File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/templatetags/context_selection.py", line 100, in
AttributeError: 'NoneType' object has no attribute 'lower'
2016-05-11 16:59:02.554680 key=lambda x: x.lower()),
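The traceback comes from sorting region names with `key=lambda x: x.lower()` when one region is `None`. A defensive key that coerces `None` to an empty string avoids the crash; this is a sketch of the general fix, not the actual Horizon patch:

```python
# Region names as Horizon might receive them, including the None region
# created in the reproduction above:
regions = ["RegionOne", None, "RegionTwo"]

# The original key crashes on the None region:
try:
    sorted(regions, key=lambda x: x.lower())
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'lower'

# Coercing None to "" makes the sort key total; None sorts first:
ordered = sorted(regions, key=lambda x: (x or "").lower())
print(ordered)  # [None, 'RegionOne', 'RegionTwo']
```

The same pattern (or filtering out `None` regions before sorting) works anywhere a case-insensitive sort can meet missing values.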

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581667

Title:
  AttributeError - 'NoneType' object has no attribute 'lower'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When create a service with region = "None", occurs Error list services
  without horizon.

  
  how to test
  > create a service any with the region = None

  Example:

  
+--+---+--++-+---+--+
  | ID   | Region| Service Name | Service Type  
 | Enabled | Interface | URL  |
  
+--+---+--++-+---+--+
  | 01089a905fec48ef957e77281c9aec92 | RegionOne | nova_legacy  | 
compute_legacy | True| public| http://10.0.99.85:8774/v2/$(project_id)s 
|
  | 1238174be563469fa1fd34fec099bcf8 | RegionOne | cinder   | volume
 | True| admin | http://10.0.99.85:8776/v1/$(project_id)s |
  | 22303c8d5b0340cda534f0fd33fd2ac3 | None  | nova | image 
 | True| public| http://10.0.99.85:500/v1/$(project_id)s  |
  
-

  > go to http://localhost/dashboard/admin/info/

  horizon error

  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/templatetags/context_selection.py",
 line 100, in 
  AttributeError: 'NoneType' object has no attribute 'lower'
  2016-05-11 16:59:02.554680 key=lambda x: x.lower()),

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581660] [NEW] Neutron-LBaaSv2: Fix network_resources path

2016-05-13 Thread Franklin Naval
Public bug reported:

In neutron_lbaas/tests/tempest/v2/scenario/base.py,
network_resources changed upstream in tempest, causing errors like this:
http://logs.openstack.org/17/316217/1/check/gate-neutron-lbaasv2-dsvm-scenario/b042f1d/console.html#_2016-05-13_18_53_10_113

related change: https://review.openstack.org/#/c/311221/

** Affects: neutron
 Importance: Undecided
 Assignee: Franklin Naval (franknaval)
 Status: In Progress

** Project changed: octavia => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581660

Title:
  Neutron-LBaaSv2: Fix network_resources path

Status in neutron:
  In Progress

Bug description:
  In neutron_lbaas/tests/tempest/v2/scenario/base.py,
  network_resources changed upstream in tempest, causing errors like this:
  http://logs.openstack.org/17/316217/1/check/gate-neutron-lbaasv2-dsvm-scenario/b042f1d/console.html#_2016-05-13_18_53_10_113

  related change: https://review.openstack.org/#/c/311221/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581660] [NEW] Neutron-LBaaSv2: Fix network_resources path

2016-05-13 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In neutron_lbaas/tests/tempest/v2/scenario/base.py,
network_resources changed upstream in tempest, causing errors like this:
http://logs.openstack.org/17/316217/1/check/gate-neutron-lbaasv2-dsvm-scenario/b042f1d/console.html#_2016-05-13_18_53_10_113

related change: https://review.openstack.org/#/c/311221/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Neutron-LBaaSv2: Fix network_resources path
https://bugs.launchpad.net/bugs/1581660
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581656] [NEW] Broken link to filtering section in Show floating ip details.

2016-05-13 Thread cat lookabaugh
Public bug reported:

In repo:
https://github.com/openstack/api-site/blob/master/api-ref/source/networking/v2-ext/layer3-ext.inc
In the operation "Show floating IP details" here:
http://developer.openstack.org/api-ref-networking-v2-ext.html#showFloatingIp
it offers a link to the filtering section which is broken. The link should point to:
https://wiki.openstack.org/wiki/Neutron/APIv2-specification#Filtering_and_Column_Selection
The "List floating IP" operation links correctly.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581656

Title:
  Broken link to filtering section in Show floating ip details.

Status in neutron:
  New

Bug description:
  In repo:
  https://github.com/openstack/api-site/blob/master/api-ref/source/networking/v2-ext/layer3-ext.inc
  In the operation "Show floating IP details" here:
  http://developer.openstack.org/api-ref-networking-v2-ext.html#showFloatingIp
  it offers a link to the filtering section which is broken. The link should point to:
  https://wiki.openstack.org/wiki/Neutron/APIv2-specification#Filtering_and_Column_Selection
  The "List floating IP" operation links correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576586] Re: Link "Download key pair" regenerates keypair

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/311999
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=7bc53a7e77194fa7dd278de6f3f933c191f43cf2
Submitter: Jenkins
Branch: master

commit 7bc53a7e77194fa7dd278de6f3f933c191f43cf2
Author: zhangguoqing 
Date:   Tue May 3 07:36:03 2016 +

rename link "Download key pair" to "Re-generate key pair"

Change-Id: I4885abf32a0ed8fe0edcfa56191d5932f7835b00
Closes-Bug: #1576586


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1576586

Title:
  Link "Download key pair" regenerates keypair

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  To reproduce the problem pass the following steps:

  - login into Horizon
  - click Project
  - click Compute
  - click Access & Security
  - click Key Pairs
  - click Create Key Pair
  - enter key pair name
  - click Generate (in Safari download starts automatically)
  - click Download key pair

  Expected result:

  - Downloads folder contains two copy of the same private key

  Actual result:

  - Downloads folder contains two different private keys

  Workaround:

  - generate public key for each of private keys and determine which one is 
valid
   
  Impact:

  - user can not get SSH access to new compute nodes

  Description of the environment:

  - Liberty OpenStack

  Ways to solve:

  - rename link  "Download key pair" to "Re-generate key pair".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1576586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581635] [NEW] combobox no value default or none

2016-05-13 Thread Adriano
Public bug reported:

how to reproduce the error:

> go url horizon ---> http://localhost/dashboard/identity/users/
> view -> any user
> Verify Project ID  = None 
> Edit any Project ID = None 
> Go combobox "Primary Project"
> Click combobox

The combobox does not have a "None" or "default" option, which causes
confusion for the user.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ux

** Tags added: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581635

Title:
  combobox no value default or none

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  how to reproduce the error:

  > go url horizon ---> http://localhost/dashboard/identity/users/
  > view -> any user
  > Verify Project ID  = None 
  > Edit any Project ID = None 
  > Go combobox "Primary Project"
  > Click combobox

  The combobox does not have a "None" or "default" option, which causes
  confusion for the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580409] Re: misspell words in glance

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/314851
Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=a9c2e1166105a5176671d198896fe55c3c19ea34
Submitter: Jenkins
Branch: master

commit a9c2e1166105a5176671d198896fe55c3c19ea34
Author: zhufl 
Date:   Wed May 11 10:43:46 2016 +0800

Correct some misspelt words in glance

There are some misspelt words in glance,
specfied (specified)
pluging (plugin)
cient (client)
siganture (signature)
propertys (properties)
delted (deleted)
collumn (column)
pacakge (package)
sigle (single)
logile (logfile)

Change-Id: Iedb4f4199c88bdad42d80475906635148bb9df83
Closes-Bug: #1580409


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1580409

Title:
  misspell words in glance

Status in Glance:
  Fix Released

Bug description:
  There are some misspelled words in glance:
  specfied (specified)glance/async/flows/ovf_process.py
  pluging (plugin)glance/api/v2/image_data.py
  cient (client)  glance/api/v2/images.py
  siganture (signature)   glance/common/signature_utils.py
  propertys (properties)  glance/tests/unit/v1/test_api.py
  delted (deleted)glance/tests/unit/v2/test_registry_client.py
  collumn (column)glance/tests/unit/test_migrations.py
  pacakge (package)   glance/tests/unit/async/flows/test_ovf_process.py
  sigle (single)  glance/tests/unit/test_glare_plugin_loader.py
  logile (logline)glance/tests/utils.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1580409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552960] Re: Tempest and Neutron duplicate tests

2016-05-13 Thread Assaf Muller
** No longer affects: neutron/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552960

Title:
  Tempest and Neutron duplicate tests

Status in neutron:
  In Progress

Bug description:
  Problem statement:

  1) Tests overlap between Tempest and Neutron repos - 264 tests last I 
checked. The effect is:
  1.1) Confusion amongst QA & dev contributors and reviewers. I'm writing a 
test, where should it go? Someone just proposed a Tempest patch for a new 
Neutron API, what should I do with this patch?
  1.2) Wasteful from a computing resources point of view - The same tests are 
being run multiple times for every Neutron patchset.
  2) New APIs (Subnet pools, address scopes, QoS, RBAC, port security, service 
flavors and availability zones), are not covered by Tempest tests. Consumers 
have to adapt and run both the Tempest tests and the Neutron tests in two 
separate runs. This causes a surprising amount of grief.

  Proposed solution:

  For problem 1, we eliminate the overlap. We do this by defining a set
  of tests that will live in Tempest, and another set of tests that will
  live in Neutron. More information may be found here:
  https://review.openstack.org/#/c/280427/. After deciding on the line
  in the sand, we will delete any tests in Neutron that should continue
  to live in Tempest. Some Neutron tests were modified after they were
  copied from Tempest, these modifications will have to be examined and
  then proposed to Tempest. Afterwards these tests may be removed from
  Neutron, eliminating the overlap from the Neutron side. Once this is
  done, overlapping tests may be deleted from Tempest.

  For problem 2, we will develop a Neutron Tempest plugin. This will be
  tracked in a separate bug. Note that there's already a patch for this
  up for review: https://review.openstack.org/#/c/274023/

  * The work is also being tracked here:
  https://etherpad.openstack.org/p/neutron-tempest-defork

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500468] Re: [UI] Cluster Node Process list display is unsightly

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/310476
Committed: https://git.openstack.org/cgit/openstack/sahara-dashboard/commit/?id=3e2d96b5a48733c90224950282974a708c58005d
Submitter: Jenkins
Branch: master

commit 3e2d96b5a48733c90224950282974a708c58005d
Author: Mikhail Lelyakin 
Date:   Wed Apr 27 17:00:37 2016 +0300

Fix node processes view in node groups

In tab "Node Groups" list of node processes is
rather ugly. This change fix this bug.

Change-Id: Iaa3e0bae2b43765af261bb0ceb8a881d6a480b5e
Closes-bug: 1500468


** Changed in: sahara
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500468

Title:
  [UI] Cluster Node Process list display is unsightly

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Sahara:
  Fix Released

Bug description:
  ** Low Priority **

  Under Data Processing -> Clusters ->  to see the
  details page, then click on the Node Groups tab.

  The alignment of the list of node processes is rather ugly.  The dots
  for the list appear to the left of everything else in the node group
  details.  It seems like the list should be indented with respect to
  the Node Processes label.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557498] Re: Using logging in the serial console worker blocks Nova

2016-05-13 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress => Won't Fix

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/liberty
   Importance: Undecided => High

** Changed in: nova/liberty
 Assignee: (unassigned) => Lucian Petrut (petrutlucian94)

** Changed in: nova
 Assignee: Lucian Petrut (petrutlucian94) => (unassigned)

** Changed in: nova
   Importance: High => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557498

Title:
  Using logging in the serial console worker blocks Nova

Status in OpenStack Compute (nova):
  Won't Fix
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in os-win:
  Fix Released

Bug description:
  The worker used by Nova to log instance serial console output can log
  an exception message.

  The issue is that logging a message from a different thread causes
  Nova to hang. It seems that the logging file handler causes issues
  when greenthreads and multiple native threads are used at the same
  time and the native threads log messages.
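
A minimal stdlib-only illustration (not Nova code) of why this mix is risky: every stdlib logging handler serializes emit() behind a single per-handler lock, so all threads that log through it funnel through that lock, and a holder that is never rescheduled would block the rest.

```python
# Illustration only, not Nova code: the stdlib logging handler guards
# emit() with one per-handler lock. When eventlet greenthreads and
# native threads both log through it, a holder that is never
# rescheduled leaves every other logging thread stuck in acquire().
import io
import logging
import threading

handler = logging.StreamHandler(io.StringIO())
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False  # keep output on our handler only

# Two native threads funnel through the same handler lock.
threads = [threading.Thread(target=log.info, args=("msg %d", i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(handler.lock is not None)  # every record passed through this lock
```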

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1557498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581583] [NEW] test_update_disallowed_attributes tests incorrectly

2016-05-13 Thread Niall Bunting
Public bug reported:

test_update_disallowed_attributes in
glance/tests/unit/v2/test_images_resource.py does not work as
intended. It does not test whether the attributes are disallowed, but
instead falls back to a validation that happens later in the code.
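
A hypothetical sketch of the pitfall (invented names, not Glance code): a test that only expects *some* error can be satisfied by a later validation layer, so it never proves the disallowed-attribute check itself ran. Asserting on the specific message pins the failure to the intended check.

```python
# Hypothetical illustration, not Glance code: update() runs an
# attribute check first and a schema validation afterwards. A test
# that merely expects "any ValueError" can pass even if
# check_allowed() is broken, because validate_schema() may raise too.
FORBIDDEN = {'id', 'status', 'created_at'}

def check_allowed(attrs):
    bad = FORBIDDEN & set(attrs)
    if bad:
        raise ValueError('forbidden attributes: %s' % sorted(bad))

def validate_schema(attrs):
    # Later validation that also rejects unexpected keys.
    if any(not k.islower() for k in attrs):
        raise ValueError('schema violation')

def update(attrs):
    check_allowed(attrs)
    validate_schema(attrs)

# The robust assertion: match the message, not just the exception type.
try:
    update({'id': 'abc'})
except ValueError as exc:
    assert 'forbidden' in str(exc)
```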

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

** Summary changed:

- check_allowed does not work
+ test_update_disallowed_attributes does not work

** Summary changed:

- test_update_disallowed_attributes does not work
+ test_update_disallowed_attributes tests incorrectly

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1581583

Title:
  test_update_disallowed_attributes tests incorrectly

Status in Glance:
  New

Bug description:
  test_update_disallowed_attributes in
  glance/tests/unit/v2/test_images_resource.py

  Does not work as intended. It does not test if the attributes are
  disallowed but actually falls back to a validation that happens later
  on in the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1581583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581580] [NEW] Heavy cpu load seen when keepalived state change server gets wsi_default_pool_size requests at same time

2016-05-13 Thread venkata anil
Public bug reported:

With wsgi_default_pool_size=100 [1], if the keepalived state change
server gets 100 requests at the same time, heavy CPU load is seen
while processing them, making the network node unresponsive. For each
request, the keepalived state change server spawns a new metadata
proxy process (i.e. neutron-ns-metadata-proxy). During heavy CPU load,
the "top" command shows many metadata proxy processes in the "running"
state at the same time (see the attachment).

When wsgi_default_pool_size=8, the state change server spawns 8
metadata proxy processes at a time ("top" shows 8 metadata proxy
processes in the "running" state at a time), CPU load is lower, and
the metadata proxy processes (for example, 100) are spawned for all
requests without failures.

We can keep wsgi_default_pool_size=100 for the neutron API server and
use a separate configuration option for UnixDomainWSGIServer (for
example, CONF.unix_domain_wsgi_default_pool_size).

neutron/agent/linux/utils.py
class UnixDomainWSGIServer(wsgi.Server):

    def _run(self, application, socket):
        """Start a WSGI service in a new green thread."""
        logger = logging.getLogger('eventlet.wsgi.server')
        eventlet.wsgi.server(socket,
                             application,
                             max_size=CONF.unix_domain_wsgi_default_pool_size,
                             protocol=UnixDomainHttpProtocol,
                             log=logger)

[1]
https://github.com/openstack/neutron/commit/9d573387f1e33ce85269d3ed9be501717eed4807
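
The effect of a smaller pool can be sketched with a stdlib-only bounded worker pool (an illustration, not the eventlet-based Neutron code): with max_workers=8, at most 8 of a 100-request burst are in flight at once.

```python
# Illustration only (stdlib, not eventlet/Neutron): a bounded pool
# caps concurrency the way a smaller wsgi pool size would cap the
# number of metadata-proxy spawns running at the same time.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 8          # mirrors the report's experiment
NUM_REQUESTS = 100     # mirrors the wsgi_default_pool_size=100 burst

peak = 0
current = 0
lock = threading.Lock()

def handle_state_change(_req):
    global peak, current
    with lock:
        current += 1
        peak = max(peak, current)
    time.sleep(0.005)  # stand-in for spawning a metadata proxy
    with lock:
        current -= 1

with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    list(pool.map(handle_state_change, range(NUM_REQUESTS)))

print(peak)  # never exceeds POOL_SIZE
```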

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: l3-ha

** Attachment added: "top.gif"
   https://bugs.launchpad.net/bugs/1581580/+attachment/4662237/+files/top.gif

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581580

Title:
  Heavy cpu load seen when keepalived state change server gets
  wsi_default_pool_size requests at same time

Status in neutron:
  New

Bug description:
  With wsgi_default_pool_size=100 [1], if the keepalived state change
  server gets 100 requests at the same time, heavy CPU load is seen
  while processing them, making the network node unresponsive. For
  each request, the keepalived state change server spawns a new
  metadata proxy process (i.e. neutron-ns-metadata-proxy). During
  heavy CPU load, the "top" command shows many metadata proxy
  processes in the "running" state at the same time (see the
  attachment).

  When wsgi_default_pool_size=8, the state change server spawns 8
  metadata proxy processes at a time ("top" shows 8 metadata proxy
  processes in the "running" state at a time), CPU load is lower, and
  the metadata proxy processes (for example, 100) are spawned for all
  requests without failures.

  We can keep wsgi_default_pool_size=100 for the neutron API server
  and use a separate configuration option for UnixDomainWSGIServer
  (for example, CONF.unix_domain_wsgi_default_pool_size).

  neutron/agent/linux/utils.py
  class UnixDomainWSGIServer(wsgi.Server):

      def _run(self, application, socket):
          """Start a WSGI service in a new green thread."""
          logger = logging.getLogger('eventlet.wsgi.server')
          eventlet.wsgi.server(socket,
                               application,
                               max_size=CONF.unix_domain_wsgi_default_pool_size,
                               protocol=UnixDomainHttpProtocol,
                               log=logger)

  [1]
  https://github.com/openstack/neutron/commit/9d573387f1e33ce85269d3ed9be501717eed4807

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553595] Re: test_external_network_visibility intermittent failure

2016-05-13 Thread Matt Riedemann
** Changed in: neutron/kilo
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553595

Title:
  test_external_network_visibility intermittent failure

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Won't Fix

Bug description:
  Very odd failure:

  http://logs.openstack.org/79/288279/3/gate/gate-neutron-dsvm-
  api/300ee95/testr_results.html.gz

  ft33.1: 
neutron.tests.api.test_networks.NetworksIpV6TestJSON.test_external_network_visibility[id-af774677-42a9-4e4b-bb58-16fe6a5bc1ec,smoke]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2016-03-05 17:07:49,598 11488 INFO [tempest.lib.common.rest_client] 
Request (NetworksIpV6TestJSON:test_external_network_visibility): 200 GET 
http://127.0.0.1:9696/v2.0/networks?router%3Aexternal=True 0.232s
  2016-03-05 17:07:49,599 11488 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: None
  Response - Headers: {'status': '200', 'content-location': 
'http://127.0.0.1:9696/v2.0/networks?router%3Aexternal=True', 'content-type': 
'application/json; charset=UTF-8', 'connection': 'close', 
'x-openstack-request-id': 'req-7c15efb9-e07d-47de-8f49-e77dc2059f57', 
'content-length': '1199', 'date': 'Sat, 05 Mar 2016 17:07:49 GMT'}
  Body: {"networks": [{"status": "ACTIVE", "router:external": true, 
"availability_zone_hints": [], "availability_zones": ["nova"], "qos_policy_id": 
null, "subnets": ["1ee8f3fc-1957-46c6-8a7c-6a5335342871", 
"068121cc-6ed9-4bdb-8813-35fe689642c2"], "shared": false, "tenant_id": 
"e118b21bf7a74b36a7e1339918290567", "created_at": "2016-03-05T16:53:27", 
"tags": [], "ipv6_address_scope": null, "updated_at": "2016-03-05T16:53:27", 
"is_default": true, "admin_state_up": true, "ipv4_address_scope": null, 
"port_security_enabled": true, "mtu": 1450, "id": 
"85a04141-b614-406d-b7d8-912c2a37bc4b", "name": "public"}, {"status": "ACTIVE", 
"router:external": true, "availability_zone_hints": [], "availability_zones": 
["nova"], "qos_policy_id": null, "subnets": 
["d3ea9b6d-a20e-48c0-b7ec-50f6239c5199"], "shared": true, "tenant_id": 
"d6562d45e82f4a85a30dc0cec714e04d", "created_at": "2016-03-05T17:07:31", 
"tags": [], "ipv6_address_scope": null, "updated_at": "2016-03-05T17:07:31", 
"is_default": fals
 e, "admin_state_up": true, "ipv4_address_scope": 
"978d5509-cfa9-4753-9ff3-6bb11fdb6f57", "port_security_enabled": true, "mtu": 
1450, "id": "a005c6f8-1438-42aa-a86c-68d04796d2e9", "name": 
"sharednetwork--1158192641"}]}
  2016-03-05 17:07:49,947 11488 INFO [tempest.lib.common.rest_client] 
Request (NetworksIpV6TestJSON:test_external_network_visibility): 200 GET 
http://127.0.0.1:9696/v2.0/subnets 0.348s
  2016-03-05 17:07:49,948 11488 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'X-Auth-Token': 
'', 'Accept': 'application/json'}
  Body: None
  Response - Headers: {'status': '200', 'content-location': 
'http://127.0.0.1:9696/v2.0/subnets', 'content-type': 'application/json; 
charset=UTF-8', 'connection': 'close', 'x-openstack-request-id': 
'req-7a2b800a-eeb9-4c4d-92c8-1e2bf556fb89', 'content-length': '1641', 'date': 
'Sat, 05 Mar 2016 17:07:49 GMT'}
  Body: {"subnets": [{"name": "", "enable_dhcp": true, "network_id": 
"a005c6f8-1438-42aa-a86c-68d04796d2e9", "tenant_id": 
"d6562d45e82f4a85a30dc0cec714e04d", "created_at": "2016-03-05T17:07:35", 
"dns_nameservers": [], "updated_at": "2016-03-05T17:07:35", "gateway_ip": 
"8.0.0.1", "ipv6_ra_mode": null, "allocation_pools": [{"start": "8.0.0.2", 
"end": "8.0.0.14"}], "host_routes": [], "ip_version": 4, "ipv6_address_mode": 
null, "cidr": "8.0.0.0/28", "id": "d3ea9b6d-a20e-48c0-b7ec-50f6239c5199", 
"subnetpool_id": "b5058565-3ce7-448c-a581-3411f1aa764b"}, {"name": 
"tempest-BaseTestCase-467862474-subnet", "enable_dhcp": true, "network_id": 
"00d56d28-c7c2-4059-b3f5-146e60110b67", "tenant_id": 
"f3ef1b7cfa324fb29d4ea00646a1bb61", "created_at": "2016-03-05T17:07:37", 
"dns_nameservers": [], "updated_at": "2016-03-05T17:07:37", "gateway_ip": 
"10.100.0.1", "ipv6_ra_mode": null, "allocation_pools": [{"start": 
"10.100.0.2", "end": "10.100.0.14"}], "host_routes": [], "ip_version": 4, 
"ipv6_addr
 ess_mode": null, "cidr": "10.100.0.0/28", "id": 
"08548e7c-5e95-4371-8694-1d4ceba7c2e1", "subnetpool_id": null}, {"name": "", 
"enable_dhcp": true, "network_id": "d527821a-86b1-4bcc-be1f-7231c8640a60", 
"tenant_id": "f3ef1b7cfa324fb29d4ea00646a1bb61", "created_at": 
"2016-03-05T17:07:47", "dns_nameservers": [], "updated_at": 
"2016-03-05T17:07:47", "gateway_ip": "2003:0:0:::1", "ipv6_ra_mode": null, 
"allocation_pools": [{"start": "2003:0:0:::2", "end": 
"2003::::::"}], "host_routes": [], "ip_version": 6, 

[Yahoo-eng-team] [Bug 1521823] Re: reboot test fails in gate-grenade-dsvm-multinode with missing disk path

2016-05-13 Thread Matt Riedemann
Fixed by infra.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521823

Title:
  reboot test fails in gate-grenade-dsvm-multinode with missing disk
  path

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/44/245344/4/gate/gate-grenade-dsvm-
  
multinode/1840523/logs/subnode-2/old/screen-n-cpu.txt.gz?level=TRACE#_2015-12-01_23_58_04_999

  2015-12-01 23:58:04.999 ERROR nova.compute.manager 
[req-8e76a5ba-f89f-4bfe-8218-aeaa580a6e13 
tempest-ServerActionsTestJSON-475123373 
tempest-ServerActionsTestJSON-884217194] [instance: 
e50d2ceb-2be4-4c1a-a190-4aa9ab160af9] Cannot reboot instance: [Errno 2] No such 
file or directory: 
'/opt/stack/data/nova/instances/e50d2ceb-2be4-4c1a-a190-4aa9ab160af9/disk'
  2015-12-01 23:58:05.513 ERROR oslo_messaging.rpc.dispatcher 
[req-8e76a5ba-f89f-4bfe-8218-aeaa580a6e13 
tempest-ServerActionsTestJSON-475123373 
tempest-ServerActionsTestJSON-884217194] Exception during message handling: 
[Errno 2] No such file or directory: 
'/opt/stack/data/nova/instances/e50d2ceb-2be4-4c1a-a190-4aa9ab160af9/disk'
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/exception.py", line 89, in wrapped
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher payload)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/exception.py", line 72, in wrapped
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 350, in decorated_function
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance=instance)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 323, in decorated_function
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 400, in decorated_function
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 378, in decorated_function
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 366, in decorated_function
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-12-01 23:58:05.513 31663 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/old/nova/nova/compute/manager.py", 

[Yahoo-eng-team] [Bug 1525901] Re: Agents report as started before neutron recognizes as active

2016-05-13 Thread Assaf Muller
** Changed in: neutron/kilo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525901

Title:
  Agents report as started before neutron recognizes as active

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  In HA, there is a potential race condition between the openvswitch
  agent and other agents that "own", depend on or manipulate ports. As
  the neutron server resumes on a failover it will not immediately be
  aware of openvswitch agents that have also been activated on failover
  and act as though there are no active openvswitch agents (this is an
  example, it most likely affects other L2 agents). If an agent such as
  the L3 agent starts and begins resync before the neutron server is
  aware of the active openvswitch agent, ports for the routers on that
  agent will be marked as "binding_failed". Currently this is a
  "terminal" state for the port as neutron does not attempt to rebind
  failed bindings on the same host.

  Unfortunately, the neutron agents do not provide even a best-effort
  deterministic indication to the outside service manager (systemd,
  pacemaker, etc...) that it has fully initialized and the neutron
  server should be aware that it is active. Agents should follow the
  same pattern as wsgi based services and notify systemd after it can be
  reasonably assumed that the neutron server should be aware that it is
  alive. That way service startup order logic or constraints can
  properly start an agent that is dependent on other agents *after*
  neutron should be aware that the required agents are active.
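
A minimal sketch of the sd_notify readiness pattern the report asks agents to adopt (stdlib only; the environment-variable protocol follows sd_notify(3), but this helper is illustrative, not Neutron code):

```python
# Sketch of systemd readiness notification, per sd_notify(3).
# An agent would call notify_ready() only after it can reasonably
# assume the neutron server already sees it as alive.
import os
import socket

def notify_ready():
    addr = os.environ.get('NOTIFY_SOCKET')
    if not addr:
        return False  # not launched with systemd Type=notify
    if addr.startswith('@'):
        # A leading '@' denotes an abstract-namespace AF_UNIX socket.
        addr = '\0' + addr[1:]
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(b'READY=1', addr)
    finally:
        sock.close()
    return True
```

With Type=notify units, systemd defers starting dependent services until READY=1 arrives, which is exactly the ordering guarantee the report wants between the L2 and L3 agents.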

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500615] Re: Large Ops scenario is taking too long

2016-05-13 Thread Matt Riedemann
We dropped the large ops job.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500615

Title:
  Large Ops scenario is taking too long

Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  gate-tempest-dsvm-large-ops error rate is spiking since the last 24
  hours (http://goo.gl/G9Zazy) with the following stacktrace :

  2015-09-28 15:02:50.954 | Traceback (most recent call last):
  2015-09-28 15:02:50.954 |   File "tempest/test.py", line 127, in wrapper
  2015-09-28 15:02:50.954 | return f(self, *func_args, **func_kwargs)
  2015-09-28 15:02:50.954 |   File "tempest/scenario/test_large_ops.py", line 
138, in test_large_ops_scenario_3
  2015-09-28 15:02:50.954 | self._large_ops_scenario()
  2015-09-28 15:02:50.954 |   File "tempest/scenario/test_large_ops.py", line 
123, in _large_ops_scenario
  2015-09-28 15:02:50.954 | self.nova_boot()
  2015-09-28 15:02:50.954 |   File "tempest/scenario/test_large_ops.py", line 
119, in nova_boot
  2015-09-28 15:02:50.954 | self._wait_for_server_status('ACTIVE')
  2015-09-28 15:02:50.954 |   File "tempest/scenario/test_large_ops.py", line 
81, in _wait_for_server_status
  2015-09-28 15:02:50.955 | server['id'], status)
  2015-09-28 15:02:50.955 |   File "tempest/common/waiters.py", line 95, in 
wait_for_server_status
  2015-09-28 15:02:50.955 | raise exceptions.TimeoutException(message)
  2015-09-28 15:02:50.955 | tempest.exceptions.TimeoutException: Request timed 
out

  http://logs.openstack.org/23/226923/5/gate/gate-tempest-dsvm-large-
  ops/826845d/console.html#_2015-09-28_15_02_50_955

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1500615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501558] Re: nova-net: libvirtError: Error while building firewall: Some rules could not be created for interface: Unable to update the kernel

2016-05-13 Thread Matt Riedemann
We worked around this in devstack:

https://github.com/openstack-
dev/devstack/commit/7860f2ba3189b0361693c8ee9c65d8d03fb115d6

Using:

https://review.openstack.org/#/c/246581/

** Changed in: nova
   Status: Confirmed => Fix Released

** Changed in: nova
 Assignee: (unassigned) => Chet Burgess (cfb-n)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501558

Title:
  nova-net: libvirtError: Error while building firewall: Some rules
  could not be created for interface: Unable to update the kernel

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Just started seeing this show up in juno jobs:

  http://logs.openstack.org/39/229639/2/check/gate-tempest-dsvm-full-
  juno/a0bb0c0/logs/screen-n-net.txt.gz?level=TRACE#_2015-09-30_22_21_27_400

  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 393, in 
_associate_floating_ip
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
do_associate()
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 272, in inner
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 386, in do_associate
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
interface=interface)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 370, in do_associate
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
interface, fixed['network'])
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/l3.py", line 114, in add_floating_ip
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
l3_interface_id, network)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/linux_net.py", line 784, in 
ensure_floating_forward
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
ensure_ebtables_rules(*floating_ebtables_rules(fixed_ip, network))
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 272, in inner
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/linux_net.py", line 1649, in 
ensure_ebtables_rules
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 
_execute(*cmd, run_as_root=True)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/linux_net.py", line 1229, in _execute
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher return 
utils.execute(*cmd, **kwargs)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/utils.py", line 187, in execute
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher return 
processutils.execute(*cmd, **kwargs)
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/processutils.py", line 222, in 
execute
  2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1466696] Re: Cells: Race between instance 'unlock' and 'stop' can cause 'stop' to fail

2016-05-13 Thread Matt Riedemann
** Changed in: nova
   Status: Confirmed => Won't Fix

** Tags removed: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466696

Title:
  Cells: Race between instance 'unlock' and 'stop' can cause 'stop' to
  fail

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Observed in the tempest-dsvm-cells job during
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_lock_unlock_server

  The test locks an instance, attempts to stop it, makes sure that
  fails, unlocks it, attempts to stop it, and makes sure that succeeds.

  The problem happens during the succession of actions "unlock" and
  "stop". The "unlock" does an instance.save() of the locked state at
  the top cell which will sync to the child. If the "stop" request
  reaches the child cell before the instance.save() state locked = False
  syncs to the child cell, the "stop" will fail with the following trace
  in screen-n-cell-child.txt:

  2015-06-18 19:09:23.852 ERROR nova.cells.messaging 
[req-6b3584bb-b52a-41ba-8e88-000e14ba6ec6 ServerActionsTestJSON-1466672685 
ServerActionsTestJSON-639331338] Error processing message locally
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging Traceback (most 
recent call last):
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/cells/messaging.py", line 200, in _process_locally
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/cells/messaging.py", line 1256, in 
_process_message_locally
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging return 
fn(message, **message.method_kwargs)
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/cells/messaging.py", line 850, in stop_instance
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging 
clean_shutdown=clean_shutdown)
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/cells/messaging.py", line 839, in 
_call_compute_api_with_obj
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging return fn(ctxt, 
instance, *args, **kwargs)
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging   File 
"/opt/stack/new/nova/nova/compute/api.py", line 214, in inner
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging raise 
exception.InstanceIsLocked(instance_uuid=instance.uuid)
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging InstanceIsLocked: 
Instance cb2485ba-e3e5-4668-869b-f145e5f28a1a is locked
  2015-06-18 19:09:23.852 16161 ERROR nova.cells.messaging

  Logstash query: message:"InstanceIsLocked" AND tags:"screen-n-cell-
  child.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSW5zdGFuY2VJc0xvY2tlZFwiIEFORCB0YWdzOlwic2NyZWVuLW4tY2VsbC1jaGlsZC50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNDY3NDMyOTQ0Mn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477534] Re: neutron tempest check failed due to neutron-debug "RTNETLINK answers: File exists" error

2016-05-13 Thread Matt Riedemann
We haven't seen this in the gate in a long time so marking it invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477534

Title:
  neutron tempest check failed due to neutron-debug "RTNETLINK answers:
  File exists" error

Status in neutron:
  Invalid

Bug description:
  http://logs.openstack.org/50/204950/3/check/gate-tempest-dsvm-neutron-
  full/a579098/logs/devstacklog.txt.gz

  
  2015-07-23 11:19:46.643 | + neutron-debug --os-tenant-name admin 
--os-username admin --os-password secretadmin probe-create --device-owner 
compute 24f15c79-75fa-4cf3-b2be-2d9dfd6a80ae
  2015-07-23 11:19:49.567 | 2015-07-23 11:19:49.567 10994 WARNING 
oslo_config.cfg [-] Option "use_namespaces" from group "DEFAULT" is deprecated 
for removal.  Its value may be silently ignored in the future.
  2015-07-23 11:19:50.148 | ++ _get_net_id private
  2015-07-23 11:19:50.149 | ++ neutron --os-tenant-name admin --os-username 
admin --os-password secretadmin net-list
  2015-07-23 11:19:50.149 | ++ grep private
  2015-07-23 11:19:50.149 | ++ awk '{print $2}'
  2015-07-23 11:19:51.224 | + 
private_net_id=2abd5092-7c33-4f94-989a-e40402230e16
  2015-07-23 11:19:51.225 | + neutron-debug --os-tenant-name admin 
--os-username admin --os-password secretadmin probe-create --device-owner 
compute 2abd5092-7c33-4f94-989a-e40402230e16
  2015-07-23 11:19:53.674 | 2015-07-23 11:19:53.674 11120 WARNING 
oslo_config.cfg [-] Option "use_namespaces" from group "DEFAULT" is deprecated 
for removal.  Its value may be silently ignored in the future.
  2015-07-23 11:19:54.168 | 2015-07-23 11:19:54.168 11120 ERROR 
neutron.agent.linux.utils [-] 
  2015-07-23 11:19:54.169 | Command: ['ip', 'netns', 'exec', 
u'qprobe-a3b65685-6d97-4831-958e-590b2c6a4121', 'ip', '-6', 'addr', 'add', 
'fd4b:a41b:a53d:0:f816:3eff:fe4a:e2b5/64', 'scope', 'global', 'dev', 
u'tapa3b65685-6d']
  2015-07-23 11:19:54.169 | Exit code: 2
  2015-07-23 11:19:54.169 | Stdin: 
  2015-07-23 11:19:54.169 | Stdout: 
  2015-07-23 11:19:54.169 | Stderr: RTNETLINK answers: File exists
  2015-07-23 11:19:54.169 | 
  2015-07-23 11:19:54.169 | 2015-07-23 11:19:54.169 11120 ERROR 
neutronclient.shell [-] 
  2015-07-23 11:19:54.169 | Command: ['ip', 'netns', 'exec', 
u'qprobe-a3b65685-6d97-4831-958e-590b2c6a4121', 'ip', '-6', 'addr', 'add', 
'fd4b:a41b:a53d:0:f816:3eff:fe4a:e2b5/64', 'scope', 'global', 'dev', 
u'tapa3b65685-6d']
  2015-07-23 11:19:54.169 | Exit code: 2
  2015-07-23 11:19:54.169 | Stdin: 
  2015-07-23 11:19:54.169 | Stdout: 
  2015-07-23 11:19:54.169 | Stderr: RTNETLINK answers: File exists
  2015-07-23 11:19:54.169 | 
  2015-07-23 11:19:54.230 | + exit_trap

  Logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUlRORVRMSU5LIGFuc3dlcnM6IEZpbGUgZXhpc3RzXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9kZXZzdGFja2xvZy50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNzY1MzMxNzQ2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477534/+subscriptions



[Yahoo-eng-team] [Bug 1580790] Re: Metadata display widget should use case insensitive matching

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/315295
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=802ec1f77a2c4aa40804d2f1e46cdc28a757f6d6
Submitter: Jenkins
Branch:master

commit 802ec1f77a2c4aa40804d2f1e46cdc28a757f6d6
Author: Travis Tripp 
Date:   Wed May 11 18:06:27 2016 -0600

Change Metadata Display widget to case insensitive

The metadata-display should match properties case-insensitively.
I've found that if you pass in extra metadata properties to
Glance v1 at create time, it takes all the properties and stores
them as lower case. So when you create an image with metadata of
FOO=BAR, it will store as foo=bar.

There are some properties, such as "CIM_PASD_InstructionSet", that when
set at the same time as creating the image get changed to
"cim_pasd_instructionset". When such a property is subsequently retrieved
from Glance and displayed in the metadata-display widget, the widget
doesn't recognize it and won't show it.

To test, you need: https://review.openstack.org/#/c/236042/

Create an Image using the CIM Instruction Set metadata.

Go to image row and expand. You won't see the metadata.

Apply this patch git review -x 315295

Then do the same steps.  You'll see the metadata displayed:

http://imgur.com/eMfLd9H

Change-Id: I5127283e90505f3580af6afea3eb992a91b8dfc8
Closes-Bug: 1580790


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1580790

Title:
  Metadata display widget should use case insensitive matching

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The metadata-display should match properties case-insensitively.
  I've found that if you pass in extra metadata properties to Glance v1
  at create time, it takes all the properties and stores them as lower
  case. So when you create an image with metadata of FOO=BAR, it will
  store as foo=bar.

  Maybe when we move to Glance v2 that won't be a problem?

  There are some properties, such as "CIM_PASD_InstructionSet", that when
  set at the same time as creating the image get changed to
  "cim_pasd_instructionset". When such a property is subsequently retrieved
  from Glance and displayed in the metadata-display widget, the widget
  doesn't recognize it and won't show it.

  See:

  https://review.openstack.org/#/c/236042/

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/rest/glance.py#L189-L190
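  The widget-side fix comes down to comparing property names
  case-insensitively. A minimal Python sketch of the idea (the function and
  names here are illustrative; Horizon's actual widget is AngularJS):

```python
def match_defined_property(defined_names, stored_key):
    """Map a key as stored by Glance v1 (lowercased, e.g. 'foo' for
    FOO=BAR) back to the mixed-case name used by the metadata
    definition. Returns the defined name, or None if nothing matches."""
    by_lower = {name.lower(): name for name in defined_names}
    return by_lower.get(stored_key.lower())

# Glance v1 lowercased the stored key, but we can still match it:
print(match_defined_property(["CIM_PASD_InstructionSet"],
                             "cim_pasd_instructionset"))
```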

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1580790/+subscriptions



[Yahoo-eng-team] [Bug 1581382] Re: nova migration-list --status returns no results

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/315965
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=11ba939f546a981f9faf429051722393e802
Submitter: Jenkins
Branch:master

commit 11ba939f546a981f9faf429051722393e802
Author: Matthew Booth 
Date:   Fri May 13 09:49:42 2016 +0100

Fix migration query with unicode status

Running 'nova migration-list --status ' from the command line
results in the status being passed to the db query as unicode.

Resolves-bug: #1581382
Change-Id: I6033a84d0255a86295a5d5261641a2a235c436c9


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581382

Title:
  nova migration-list --status returns no results

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  'nova migration-list --status  ' returns no results. On further
  investigation, this is because this status is passed down to
  db.migration_get_all_by_filters() as unicode, which doesn't handle it
  correctly.
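  The pitfall behind this is a status filter whose type check is too narrow
  for Python 2, where CLI input arrives as unicode rather than str. A hedged
  sketch of the more robust pattern (the function below is illustrative, not
  nova's actual db API):

```python
def filter_by_status(migrations, status):
    # Accept a single status value or a sequence of them. A narrow
    # isinstance(status, str) check would, on Python 2, reject unicode
    # input from the CLI and silently drop the filter.
    if isinstance(status, (list, tuple, set)):
        wanted = set(status)
    else:
        wanted = {status}
    return [m for m in migrations if m["status"] in wanted]

rows = [{"id": 1, "status": "migrating"}, {"id": 2, "status": "done"}]
print(filter_by_status(rows, "migrating"))        # single value
print(filter_by_status(rows, ["done", "error"]))  # sequence of values
```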

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581382/+subscriptions



[Yahoo-eng-team] [Bug 1408814] Re: Image could not be found after upload. The image may have been deleted during the upload.

2016-05-13 Thread Matt Riedemann
We haven't seen this in a long time so marking it invalid.

** Changed in: glance
   Status: Confirmed => Invalid

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1408814

Title:
  Image could not be found after upload. The image may have been deleted
  during the upload.

Status in Cinder:
  Invalid
Status in Glance:
  Invalid

Bug description:
  Saw this fail a tempest check job:

  http://logs.openstack.org/81/142281/3/gate/gate-tempest-dsvm-neutron-
  full/960927f/

  The error in the console says "Details: Volume c96a0a8c-b5b4-42b5
  -a1cd-f537d5eee094 failed to reach available status (current:
  creating) within the required time (196 s)."

  And I find in the screen-c-vol.txt, the volume does get created
  successfully, but it's a few minutes after the test failure:

  http://logs.openstack.org/81/142281/3/gate/gate-tempest-dsvm-neutron-
  full/960927f/logs/screen-c-vol.txt.gz#_2015-01-08_17_34_23_438

  I tried a logstash query on the errors in screen-c-vol.txt and find 18
  hits in the last 7 days, check and gate, all failures:

  message:"could not be found after upload. The image may have been
  deleted during the upload" AND tags:"screen-c-vol.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kIGFmdGVyIHVwbG9hZC4gVGhlIGltYWdlIG1heSBoYXZlIGJlZW4gZGVsZXRlZCBkdXJpbmcgdGhlIHVwbG9hZFwiIEFORCB0YWdzOlwic2NyZWVuLWMtdm9sLnR4dFwiIiwiZmllbGRzIjpbImJ1aWxkX3Nob3J0X3V1aWQiLCJidWlsZF9zdGF0dXMiLCJidWlsZF9xdWV1ZSJdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjA3NTU3MTUxMTZ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1408814/+subscriptions



[Yahoo-eng-team] [Bug 1251521] Re: Volume detach in tempest fails because libvirt refuses connections

2016-05-13 Thread Matt Riedemann
Haven't seen this in a long time, so marking invalid.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251521

Title:
  Volume detach in tempest fails because libvirt refuses connections

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I just experienced this on https://review.openstack.org/#/c/55492/. It
  looks to me like the detach volume fails because libvirt has become
  unavailable:

  2013-11-15 00:49:45.034 29876 DEBUG nova.openstack.common.rpc.amqp [-] 
received {u'_context_roles': [u'_member_'], u'_context_request_id': 
u'req-0fdc657c-fdb3-4aef-96c6-c7d3c
  6f18b33', u'_context_quota_class': None, u'_context_user_name': 
u'tempest.scenario.manager-tempest-1652099598-user', u'_context_project_name': 
u'tempest.scenario.manager-temp
  est-1652099598-tenant', u'_context_service_catalog': [{u'endpoints_links': 
[], u'endpoints': [{u'adminURL': 
u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u're
  gion': u'RegionOne', u'internalURL': 
u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u'serviceName': 
u'cinder', u'id': u'1a7219a8e4e543909f4b2a497810fa7c', u'pu
  blicURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347'}], 
u'type': u'volume', u'name': u'cinder'}], u'_context_tenant': 
u'49a55ed418d44af8b8104157045e8347', u
  '_context_auth_token': '', u'args': {u'instance': {u'vm_state': 
u'active', u'availability_zone': None, u'terminated_at': None, u'ephemeral_gb': 
0, u'instance_type_
  id': 6, u'user_data': None, u'cleaned': False, u'vm_mode': None, 
u'deleted_at': None, u'reservation_id': u'r-2ivdfgiz', u'id': 101, 
u'security_groups': [{u'deleted_at': None,
   u'user_id': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', u'description': u'default', 
u'deleted': False, u'created_at': u'2013-11-15T00:48:31.00', u'updated_at': 
None, u'project_
  id': u'49a55ed418d44af8b8104157045e8347', u'id': 90, u'name': u'default'}], 
u'disable_terminate': False, u'root_device_name': u'/dev/vda', u'display_name': 
u'scenario-server-
  -tempest-1998782633', u'uuid': u'9dbd99d4-09d7-43df-b8de-c6e65043e012', 
u'default_swap_device': None, u'info_cache': {u'instance_uuid': 
u'9dbd99d4-09d7-43df-b8de-c6e65043e012
  ', u'deleted': False, u'created_at': u'2013-11-15T00:48:31.00', 
u'updated_at': u'2013-11-15T00:49:18.00', u'network_info': 
[{u'ovs_interfaceid': None, u'network': {u'
  bridge': u'br100', u'label': u'private', u'meta': {u'tenant_id': None, 
u'should_create_bridge': True, u'bridge_interface': u'eth0'}, u'id': 
u'd00c22d4-05b0-4c71-86ba-1c5d60b4
  45bd', u'subnets': [{u'ips': [{u'meta': {}, u'type': u'fixed', 
u'floating_ips': [{u'meta': {}, u'type': u'floating', u'version': 4, 
u'address': u'172.24.4.225'}], u'version':
   4, u'address': u'10.1.0.4'}], u'version': 4, u'meta': {u'dhcp_server': 
u'10.1.0.1'}, u'dns': [{u'meta': {}, u'type': u'dns', u'version': 4, 
u'address': u'8.8.4.4'}], u'route
  s': [], u'cidr': u'10.1.0.0/24', u'gateway': {u'meta': {}, u'type': 
u'gateway', u'version': 4, u'address': u'10.1.0.1'}}, {u'ips': [], u'version': 
None, u'meta': {u'dhcp_serv
  er': None}, u'dns': [], u'routes': [], u'cidr': None, u'gateway': {u'meta': 
{}, u'type': u'gateway', u'version': None, u'address': None}}]}, u'devname': 
None, u'qbh_params': 
  None, u'meta': {}, u'address': u'fa:16:3e:57:00:d4', u'type': u'bridge', 
u'id': u'c7cdc48e-2ef3-43b8-9d8f-88c644afac78', u'qbg_params': None}], 
u'deleted_at': None}, u'hostna
  me': u'scenario-server--tempest-1998782633', u'launched_on': 
u'devstack-precise-check-rax-ord-658168.slave.openstack.org', 
u'display_description': u'scenario-server--tempest-
  1998782633', u'key_data': u'ssh-rsa 
B3NzaC1yc2EDAQABAAABAQDWN4HLjQWmJu2prhyp8mSkcVOx3W4dhK6GB1L4upm83DU7Ogj3Tg2cTuMqmO4bIt3gJv+BZB16auiyq5w+SEK8VVSuTresc7dD5qW7dej+bD
  
aF6w/gLsEbP8s0rOvMo93esqF0Cwt7WyqpBXsRr8DEjdPDkJL9fRjFuuGz6sjpM9qAiKd7e1v37y+z39T2y7PoJA5241b0QDG5H6uHNdrCwxIaWxtX5+ac2kUJSxS7FjjtACPgsoBD0tltcpaEQaxmQANdAm4hkhe1rTpP7vfSrmEN
  I0ZrwSjre2ZbWLA0IcM3JJwmsXWzXdPvjNC+GVqWmltugTNH77vOfwTbec+x Generated by 
Nova\n', u'deleted': False, u'config_drive': u'', u'power_state': 1, 
u'default_ephemeral_device': No
  ne, u'progress': 0, u'project_id': u'49a55ed418d44af8b8104157045e8347', 
u'launched_at': u'2013-11-15T00:48:50.00', u'scheduled_at': 
u'2013-11-15T00:48:31.00', u'node'
  : u'devstack-precise-check-rax-ord-658168.slave.openstack.org', 
u'ramdisk_id': u'258c4266-6e2b-4aca-ae56-b495ea8d8b5f', u'access_ip_v6': None, 
u'access_ip_v4': None, u'kernel
  _id': u'47643b93-2bc5-49b3-8c2d-83791ee8a0e2', u'key_name': 
u'scenario-keypair--tempest-1179721946', u'updated_at': 
u'2013-11-15T00:49:16.00', u'host': u'devstack-precise
  -check-rax-ord-658168.slave.openstack.org', u'user_id': 
u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', u'system_metadata': 

[Yahoo-eng-team] [Bug 1312016] Re: nova libvirtError: Unable to add bridge brqxxx-xx port tapxxx-xx: Device or resource busy

2016-05-13 Thread Matt Riedemann
** Changed in: neutron/kilo
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312016

Title:
  nova libvirtError: Unable to add bridge brqxxx-xx port tapxxx-xx:
  Device or resource busy

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Won't Fix

Bug description:
  Hello: My OpenStack version is 2013.1.5 (Grizzly), the plugin is
  linuxbridge, the OS is Ubuntu 12.04.3, and libvirt-bin is '1.1.1-0ubuntu8.9'.

  When I launch three instances, two spawn successfully and one of the
  three fails to spawn.

  When I check the nova-compute log, I find the following errors:

  (it's worth noting that:
  "libvirtError: Unable to add bridge brq233a5889-2e port tap3f81c08a-39: 
Device or resource busy")

  Has anybody hit the same problem?

  2014-04-24 14:41:58.499 ERROR nova.compute.manager 
[req-4dc590cc-9a34-460d-8c6a-4efdfb9de456 fd7179d2284247179c70db99ee1842db 
4f50d05ffb6b44a29f9b23978e40542b] [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] Instance failed to spawn
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] Traceback (most recent call last):
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1119, in _spawn
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] block_device_info)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1539, in 
spawn
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] block_device_info)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2455, in 
_create_domain_and_network
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] domain = self._create_domain(xml, 
instance=instance)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2416, in 
_create_domain
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] domain.createWithFlags(launch_flags)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] rv = execute(f,*args,**kwargs)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] rv = meth(*args,**kwargs)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 728, in createWithFlags
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]

  libvirtError: Unable to add bridge brq233a5889-2e port tap3f81c08a-39:
  Device or resource busy

  2014-04-24 14:41:58.499 60306 TRACE nova.compute.manager [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1]
  2014-04-24 14:41:58.680 AUDIT nova.compute.manager 
[req-4dc590cc-9a34-460d-8c6a-4efdfb9de456 fd7179d2284247179c70db99ee1842db 
4f50d05ffb6b44a29f9b23978e40542b] [instance: 
496c546b-4afc-4b48-9984-08c42cbe36d1] Terminating instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312016/+subscriptions



[Yahoo-eng-team] [Bug 1581553] Re: Maas 2.0 Deployment Failing on arm64 , Xenial

2016-05-13 Thread Andres Rodriguez
Hi Sean,

Again, this is not a problem in MAAS. So why would cloud-init report the
time is in the future. What happens if you just don't try to ensure the
clocks are the same? The error in deployment above is telling me that
the clocks are not synced.

Cloud-init only fixes the clock-skew so there are no authentication
issues when sending information back to MAAS. Now, I don't see your
installation log (you can grab it from the WebUI).

Also, MAAS doesn't do NTP as of now, but may do in the future.

I'm marking this as "invalid" for MAAS, and will open a task for curtin
and cloud-init. However, please do attach your installation log.

** Changed in: maas
   Status: New => Invalid

** Also affects: curtin
   Importance: Undecided
   Status: New

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1581553

Title:
  Maas 2.0 Deployment Failing on arm64 , Xenial

Status in cloud-init:
  New
Status in curtin:
  New
Status in MAAS:
  Invalid

Bug description:
  mmm, let me try this again :)

  We have been testing maas 2.0 Beta 4 on some enablement hardware and
  have been seeing a problem in which during enlistment, a clock skew is
  reported on the host console:

  [ 288.753993] cloud-init[1464]: Success
  [ 290.910846] cloud-init[1464]: updated clock skew to 7853431
  [ 290.911437] cloud-init[1464]: request to 
http://10.246.48.112/MAAS/metadata//2012-03-01/ failed. sleeping 1.: HTTP Error 
401: OK
  [ 290.911929] cloud-init[1464]: Success
  [ 292.752177] cloud-init[1464]: updated clock skew to 7853431
  [ 292.752746] cloud-init[1464]: request to 
http://10.246.48.112/MAAS/metadata//2012-03-01/ failed. sleeping 1.: HTTP Error 
401: OK
  [ 292.753234] cloud-init[1464]: Success
  [ 337.916546] cloud-init[1464]: updated clock skew to 7853431
  [ 337.917122] cloud-init[1464]: request to http://10.246.48.112/

  This happens a number of times, and as you mentioned, this is cloud-init
  fixing the clock skew.

  The enlistment will complete and we are able to successfully finish
  commissioning the Host.   The host will appear in a ready state via
  the MAAS UI.

  Now, before I go further: as mentioned earlier, I always ensure that
  both host and client dates and times match in UEFI prior to starting
  enlistment. If anything, the times on all of the hosts are offset by
  +/- 2 minutes.

  Moving onto deployment:

  Deploying Xenial on these hosts is where I get stuck. Due to the clock
  skew (which cloud-init fixes), tar reports timestamps approximately one
  month in the future while copying the disk image, and this eventually
  causes the deployment to fail with a timeout (tar is still extracting
  the root image).

  Does setting the NTP host in the MAAS settings have any effect on this
  (for example, if ntp.ubuntu.com were unavailable)?

  We have been triaging this on our end, but would like some insight from
  the MAAS team.

  
  [   19.353487] cloud-init[1207]: Cloud-init v. 0.7.7 running 'init' at Thu, 11 Feb 2016 16:28:06 +. Up 19.03 seconds.
  [   19.368566] cloud-init[1207]: ci-info: Net device info
  [   19.388533] cloud-init[1207]: ci-info: ++---+--+-+---+---+
  [   19.408484] cloud-init[1207]: ci-info: |   Device   |   Up  |   Address| Mask| Scope | Hw-Address|
  [   19.428484] cloud-init[1207]: ci-info: ++---+--+-+---+---+
  [   19.80] cloud-init[1207]: ci-info: | enP2p1s0f1 | False |  .   |  .  |   .   | 40:8d:5c:ba:b9:10 |
  [   19.464490] cloud-init[1207]: ci-info: | lo |  True |  127.0.0.1   |  255.0.0.0  |   .   | . |
  [   19.484480] cloud-init[1207]: ci-info: | lo |  True |   ::1/128|  .  |  host | . |
  [   19.500475] cloud-init[1207]: ci-info: | enP2p1s0f3 | False |  .   |  .  |   .   | 40:8d:5c:ba:b9:12 |
  [   19.520503] cloud-init[1207]: ci-info: | enP2p1s0f2 |  True | 10.246.48.3  | 255.255.0.0 |   .   | 40:8d:5c:ba:b9:11 |
  [   19.536497] cloud-init[1207]: ci-info: | enP2p1s0f2 |  True | fe80::428d:5cff:feba:b911/64 |  .  |  link | 40:8d:5c:ba:b9:11 |
  [   19.556478] cloud-init[1207]: ci-info: ++---+--+-+---+---+
  [   19.576514] cloud-init[1207]: ci-info: Route IPv4 info+
  [   19.592494] cloud-init[1207]: ci-info: 

[Yahoo-eng-team] [Bug 1557791] Re: WebSSO tries to call get_domain for Federated domain

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293172
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b1f7fc442f5a8c911d07f910fb0042138711e127
Submitter: Jenkins
Branch:master

commit b1f7fc442f5a8c911d07f910fb0042138711e127
Author: daniel-a-nguyen 
Date:   Tue Mar 15 16:20:08 2016 -0700

Bypass get_domain call to keystone api

Warning messages in horizon and keystone logs regarding the 'Federated' 
domain
indicate that a call to get_domain is failing.

This fix will allow horizon to not attempt calls to retrieve domains
that do not exist.  The 'Federated' domain is a virtual domain that has
no record in the keystone database.

Change-Id: Ic3225815d12472d37c4105b656c5bc75b529c359
Closes-Bug: #1557791


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1557791

Title:
  WebSSO tries to call get_domain for Federated domain

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Keystone WebSSO authenticates to horizon using a virtual domain
  named 'Federated' as defined by the integration.  There is no physical
  record of this domain in the Keystone database and the API call to
  get_domain will not return a domain.

  The suggested fix for this bug is to bypass the unnecessary call based
  on the domain name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1557791/+subscriptions



[Yahoo-eng-team] [Bug 1581398] Re: no need to handle 501 in API layer for some server actions l

2016-05-13 Thread jichenjc
invalid bug

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581398

Title:
  no need to handle 501 in API layer for some server actions l

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  currently we have 
  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/pause_server.py#L52
  to handle 501 error 

  but actually,

  https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L655

  it's a cast, so no exception is ever reported back to the API layer

  
  If I change the code so the libvirt driver raises the error directly, which should map to a 501 (nova/virt/libvirt/driver.py):

  def pause(self, instance):
  """Pause VM instance."""
  raise NotImplementedError

  DEBUG (session:225) REQ: curl -g -i -X POST 
http://192.168.122.105:8774/v2.1/611bfd117d714430ac2c927f7e50163c/servers/721a02a5-ba2c-49a3-9fd5-ddcaec487c62/action
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.18" -H 
"X-Auth-Token: {SHA1}933b15a27b80d8a5f090aba7d67640c065c2808c" -d '{"pause": 
null}'
  DEBUG (connectionpool:387) "POST 
/v2.1/611bfd117d714430ac2c927f7e50163c/servers/721a02a5-ba2c-49a3-9fd5-ddcaec487c62/action
 HTTP/1.1" 202 0

  still get 202
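  The reason the API still answers 202 is the RPC dispatch semantics: pause
  is sent as a cast (fire-and-forget), so a NotImplementedError raised in
  the driver never propagates back to the HTTP layer. A toy illustration of
  the call/cast difference (this is a sketch, not oslo.messaging):

```python
import queue
import threading

class FakeRPC:
    """Toy RPC dispatcher: 'call' waits for the worker and re-raises
    its exception, while 'cast' fires and forgets, so an error raised
    on the worker side never reaches the caller."""

    def __init__(self):
        self.tasks = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            fn, done = self.tasks.get()
            try:
                fn()
                done.put(None)
            except Exception as exc:
                done.put(exc)

    def call(self, fn):
        # Synchronous: block until the worker finishes, re-raise errors.
        done = queue.Queue()
        self.tasks.put((fn, done))
        exc = done.get()
        if exc is not None:
            raise exc

    def cast(self, fn):
        # Fire-and-forget: return immediately; the result queue is
        # never read, so the worker-side exception is dropped.
        self.tasks.put((fn, queue.Queue()))

def pause():
    # Stand-in for a driver that does not implement the operation.
    raise NotImplementedError

rpc = FakeRPC()
rpc.cast(pause)  # returns at once -- the API layer has already sent 202
try:
    rpc.call(pause)
except NotImplementedError:
    print("call propagated the error")
```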

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581398/+subscriptions



[Yahoo-eng-team] [Bug 1564193] Re: Tutorials related to class Meta should use new style

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/299818
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a7da1dae4d005d104b88d471eb30c9da79c870ad
Submitter: Jenkins
Branch:master

commit a7da1dae4d005d104b88d471eb30c9da79c870ad
Author: guoshan 
Date:   Thu Mar 31 17:12:12 2016 +0800

Tutorials related to class Meta should use new style

The tutorials still use the old style class declaration.
class MyTable(DataTable):
    name = Column('name')
    email = Column('email')

    class Meta:
        name = "my_table"
        table_actions = (MyAction, MyOtherAction)
        row_actions = (MyAction,)
It should use new style (inherit from 'object').

Change-Id: Icad23f6359f5f4866c5819f351f5c5cf1c3521fd
closes-bug:#1564193


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1564193

Title:
  Tutorials related to class Meta should use new style

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The tutorials still use the old style class declaration.

  class MyTable(DataTable):
      name = Column('name')
      email = Column('email')

      class Meta:
          name = "my_table"
          table_actions = (MyAction, MyOtherAction)
          row_actions = (MyAction,)

  It should use new style (inherit from 'object').
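  On Python 2, the new-style declaration simply means having the classes
  inherit from object. A minimal sketch in plain Python, without Horizon's
  actual DataTable machinery (the Column and table classes here are
  stand-ins):

```python
class Column(object):
    """Minimal stand-in for horizon.tables.Column."""
    def __init__(self, attr):
        self.attr = attr

class MyTable(object):  # stand-in for horizon's DataTable
    name = Column('name')
    email = Column('email')

    class Meta(object):  # new style: inherit from object
        name = "my_table"
        table_actions = ()
        row_actions = ()

# On Python 2 the explicit base makes Meta a new-style class; on
# Python 3 all classes are new-style and the base is a no-op.
print(MyTable.Meta.name)
```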

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1564193/+subscriptions



[Yahoo-eng-team] [Bug 1543149] Re: Reserve host pages on compute nodes

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/292499
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d52ceaf269ae64575c48aa45002aa4fc5cfb2a86
Submitter: Jenkins
Branch:master

commit d52ceaf269ae64575c48aa45002aa4fc5cfb2a86
Author: Sahid Orentino Ferdjaoui 
Date:   Mon Feb 8 09:37:37 2016 -0500

virt: reserved number of mempages on compute host

Users need to mark some amount of pages as reserved for third-party
components.

The most common use case for huge/large pages is NFV. In the current
state of that feature we can't guarantee the necessary amount of pages
to allow OVS-DPDK to run properly on the compute node, which results in
the instance failing to boot on an otherwise well-selected compute
node. OVS-DPDK needs 1 GB hugepages reserved. Since Nova does not take
into account the pages reserved for OVS-DPDK, the process cannot
acquire the necessary memory, which results in a failed boot.

This commit adds a new option, 'reserved_huge_pages', which takes a
list of strings selecting, per host NUMA node and page size, how many
pages to reserve. It also updates NUMAPageTopology to contain a
reserved memory pages attribute, which helps compute the pages
available on the host for scheduling/claiming resources.

Change-Id: Ie04d6362a4e99dcb2504698fc831a366ba746b44
Closes-Bug: #1543149


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543149

Title:
  Reserve host pages on compute nodes

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In some use cases we may want to prevent Nova from using some amount of
  hugepages on compute nodes (for example, when using ovs-dpdk). We should
  provide an option, 'reserved_memory_pages', which gives a way to
  determine the number of pages we want to reserve for third-party
  components.
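  For reference, the option merged for this bug takes per-NUMA-node entries
  in nova.conf; the values below are an illustrative sketch (reserving four
  1 GB pages on host NUMA node 0, e.g. for OVS-DPDK), not a recommendation:

```ini
[DEFAULT]
# Format: node:<host NUMA node id>,size:<page size>,count:<pages to reserve>
reserved_huge_pages = node:0,size:1GB,count:4
```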

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543149/+subscriptions



[Yahoo-eng-team] [Bug 1581442] [NEW] Number of volumes progress bar shows wrong value

2016-05-13 Thread Mounika
Public bug reported:

When I try to create a volume, the Project > Volumes form shows a
progress bar with the wrong value.
I see that 0 volumes are available, but the progress bar shows a green
coloured bar.

Expected Behaviour: when there are 0 volumes, the progress bar should be
grey and not partially green.

I have attached a screenshot for reference.

** Affects: horizon
 Importance: Undecided
 Assignee: Mounika (mounika-mounika1)
 Status: New

** Attachment added: "Screenshot from 2016-05-13 15:44:00.png"
   
https://bugs.launchpad.net/bugs/1581442/+attachment/4661981/+files/Screenshot%20from%202016-05-13%2015%3A44%3A00.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581442

Title:
  Number of volumes progress bar shows wrong value

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I try to create a volume, the Project > Volumes form shows a
  progress bar with the wrong value.
  I see that 0 volumes are available, but the progress bar shows a
  green coloured bar.

  Expected Behaviour: when there are 0 volumes, the progress bar should
  be grey and not partially green.

  I have attached a screenshot for reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581430] [NEW] alignment of back button in create subnet form is not proper

2016-05-13 Thread Mounika
Public bug reported:

The alignment of the back button in the create subnet form is
improper: the back button is very far from the next button, and the
cancel button is missing.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581430

Title:
  alignment of back button in create subnet form is not proper

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The alignment of the back button in the create subnet form is
  improper: the back button is very far from the next button, and the
  cancel button is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581415] [NEW] No previous pagination tag in admin instances page

2016-05-13 Thread Mounika
Public bug reported:

When I set Items Per Page in settings to 1 (or a number smaller than
the number of instances) and then navigate to Admin > Instances, I
can't find a "prev" pagination link on the Instances page.
This makes navigating through multiple instances tedious.

Expected behaviour: the Instances page should have a "prev" link that
allows users to view previous results.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581415

Title:
  No previous pagination tag in admin instances page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I set Items Per Page in settings to 1 (or a number smaller than
  the number of instances) and then navigate to Admin > Instances, I
  can't find a "prev" pagination link on the Instances page.
  This makes navigating through multiple instances tedious.

  Expected behaviour: the Instances page should have a "prev" link
  that allows users to view previous results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581398] [NEW] no need to handle 501 in API layer for some server actions

2016-05-13 Thread jichenjc
Public bug reported:

Currently we have
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/pause_server.py#L52
to handle a 501 error.

But in fact,

https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L655

is a cast, so no exception will ever be reported back to the API layer.


To demonstrate, change the libvirt driver (nova/virt/libvirt/driver.py)
so that pause raises directly:

    def pause(self, instance):
        """Pause VM instance."""
        raise NotImplementedError

DEBUG (session:225) REQ: curl -g -i -X POST 
http://192.168.122.105:8774/v2.1/611bfd117d714430ac2c927f7e50163c/servers/721a02a5-ba2c-49a3-9fd5-ddcaec487c62/action
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.18" -H 
"X-Auth-Token: {SHA1}933b15a27b80d8a5f090aba7d67640c065c2808c" -d '{"pause": 
null}'
DEBUG (connectionpool:387) "POST 
/v2.1/611bfd117d714430ac2c927f7e50163c/servers/721a02a5-ba2c-49a3-9fd5-ddcaec487c62/action
 HTTP/1.1" 202 0

I still get a 202.
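
The behaviour is easy to see in isolation. Below is a minimal, purely
illustrative stand-in for an RPC cast (not nova's actual rpcapi code):
the caller returns immediately, so the NotImplementedError raised on
the "compute" side never reaches the API layer that already answered
202.

```python
import queue
import threading

class FakeRpcApi:
    """Illustrative fire-and-forget RPC, loosely mimicking a cast."""

    def __init__(self):
        self._inbox = queue.Queue()
        self.errors = []
        threading.Thread(target=self._serve, daemon=True).start()

    def cast(self, method):
        # Enqueue and return immediately -- the caller never sees the result.
        self._inbox.put(method)

    def _serve(self):
        while True:
            method = self._inbox.get()
            try:
                method()
            except NotImplementedError as exc:
                # The exception surfaces only on the worker side; by now the
                # API layer has already returned HTTP 202 to the client.
                self.errors.append(exc)
            finally:
                self._inbox.task_done()

def pause_instance():
    raise NotImplementedError

api = FakeRpcApi()
api.cast(pause_instance)  # returns at once -> API would answer 202 here
api._inbox.join()         # wait only so we can inspect what happened
print(len(api.errors))    # 1 -- the error exists, but the caller never saw it
```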

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581398

Title:
  no need to handle 501 in API layer for some server actions

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently we have
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/pause_server.py#L52
  to handle a 501 error.

  But in fact,

  https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L655

  is a cast, so no exception will ever be reported back to the API
  layer.

  
  To demonstrate, change the libvirt driver
  (nova/virt/libvirt/driver.py) so that pause raises directly:

      def pause(self, instance):
          """Pause VM instance."""
          raise NotImplementedError

  DEBUG (session:225) REQ: curl -g -i -X POST 
http://192.168.122.105:8774/v2.1/611bfd117d714430ac2c927f7e50163c/servers/721a02a5-ba2c-49a3-9fd5-ddcaec487c62/action
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.18" -H 
"X-Auth-Token: {SHA1}933b15a27b80d8a5f090aba7d67640c065c2808c" -d '{"pause": 
null}'
  DEBUG (connectionpool:387) "POST 
/v2.1/611bfd117d714430ac2c927f7e50163c/servers/721a02a5-ba2c-49a3-9fd5-ddcaec487c62/action
 HTTP/1.1" 202 0

  I still get a 202.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581398/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581382] [NEW] nova migration-list --status returns no results

2016-05-13 Thread Matthew Booth
Public bug reported:

'nova migration-list --status  ' returns no results. On further
investigation, this is because this status is passed down to
db.migration_get_all_by_filters() as unicode, which doesn't handle it
correctly.
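
A sketch of the failure mode (names here are hypothetical, not nova's
actual code): under Python 2, unicode was a type distinct from str, so
a filter helper doing a strict str type check silently drops a unicode
status and the query matches nothing. The subclass below merely
simulates that type distinction:

```python
class Py2Unicode(str):
    """Stand-in for Python 2's separate unicode type (illustrative)."""

def build_filters(status):
    """Hypothetical filter builder showing the buggy exact-type check."""
    filters = {}
    if type(status) is str:  # excludes the 'unicode' type entirely
        filters['status'] = status
    return filters

print(build_filters('migrating'))              # {'status': 'migrating'}
print(build_filters(Py2Unicode('migrating')))  # {} -> no rows match
```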

** Affects: nova
 Importance: Undecided
 Assignee: Matthew Booth (mbooth-9)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581382

Title:
  nova migration-list --status returns no results

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  'nova migration-list --status  ' returns no results. On further
  investigation, this is because this status is passed down to
  db.migration_get_all_by_filters() as unicode, which doesn't handle it
  correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581367] [NEW] Changed SAN iscsi IP addresses in connection_info can prevent VM startup

2016-05-13 Thread yuyafei
Public bug reported:

The iscsi IP addresses of the SAN are stored in nova's
block-device-mapping table after the volume is connected and are never
re-validated down the line. Changing the iscsi IP addresses of the SAN
will prevent the instance from booting, as the stale connection info
ends up in the instance's XML. We should check whether the iscsi IP
addresses of the SAN have changed before starting the VM, or we need a
way to restore those VMs whose SAN iscsi IP addresses have changed.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- The iscsi IP addresses of SAN are stored in nova's block-device-mapping table 
- after connecting volume and are never re-validated down the line. Changing 
the 
- iscsi IP adresses of SAN will prevent the instance from booting as the stale
- connection info will enter the instance's XML. We should check weather the 
- iscsi IP addresses of SAN are changed before startup VM, or we need to a way 
to
- restore those VMs whose iscsi IP addresses of SAN are changed.
+ The iscsi IP addresses of SAN are stored in nova's block-device-mapping
+ table after connecting volume and are never re-validated down the line.
+ Changing the iscsi IP adresses of SAN will prevent the instance from
+ booting as the stale connection info will enter the instance's XML. We
+ should check weather the  iscsi IP addresses of SAN are changed before
+ startup VM, or we need to a way to restore those VMs whose iscsi IP
+ addresses of SAN are changed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581367

Title:
  Changed SAN iscsi IP addresses in connection_info can prevent VM
  startup

Status in OpenStack Compute (nova):
  New

Bug description:
  The iscsi IP addresses of the SAN are stored in nova's
  block-device-mapping table after the volume is connected and are
  never re-validated down the line. Changing the iscsi IP addresses of
  the SAN will prevent the instance from booting, as the stale
  connection info ends up in the instance's XML. We should check
  whether the iscsi IP addresses of the SAN have changed before
  starting the VM, or we need a way to restore those VMs whose SAN
  iscsi IP addresses have changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581350] [NEW] baremetal apis raises 500 InternalServerError if ironic baremetal service is not configured or reachable

2016-05-13 Thread Dinesh Bhor
Public bug reported:

Baremetal APIs raise a 500 InternalServerError if the ironic baremetal
service is not configured or reachable.

Steps to reproduce
==

Command:
nova baremetal-node-list

Actual result
=
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-663fbe2c-81b6-4264-9e02-efe5283e5f8f)

Command:
nova baremetal-node-show 1

Actual result
=
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-898b8986-ecd3-4d13-a819-1bcd0cf703c8)

Expected result
===
It should return a 503 status code if the ironic baremetal service is not
configured or reachable.

n-API LOG:

2016-05-13 06:34:14.337 ERROR nova.api.openstack.extensions 
[req-898b8986-ecd3-4d13-a819-1bcd0cf703c8 admin admin] Unexpected exception in 
API method
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions return f(*args, 
**kwargs)
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 90, in 
index
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions icli = 
_get_ironic_client()
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 61, in 
_get_ironic_client
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions icli = 
ironic_client.get_client(CONF.ironic.api_version, **kwargs)
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/ironicclient/client.py", line 137, in 
get_client
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions raise 
exc.AmbiguousAuthSystem(exception_msg)
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions 
AmbiguousAuthSystem: Must provide Keystone credentials or user-defined endpoint 
and token
2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions
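
One plausible shape for the fix, sketched with hypothetical names
(nova's actual patch may differ): wrap client construction and
translate the failure into a 503 instead of letting it escape as an
unhandled 500.

```python
class ServiceUnavailable(Exception):
    """Illustrative stand-in for an HTTP 503 API exception."""
    status_code = 503

def get_ironic_client_or_503(factory):
    # Wrap client construction so a misconfigured or unreachable Ironic
    # surfaces as 503 rather than an unhandled 500.
    try:
        return factory()
    except Exception as exc:  # e.g. ironicclient's AmbiguousAuthSystem
        raise ServiceUnavailable(
            "Ironic service is not configured or reachable: %s" % exc)

def broken_factory():
    # Simulates ironicclient failing without Keystone credentials.
    raise RuntimeError("Must provide Keystone credentials or "
                       "user-defined endpoint and token")

try:
    get_ironic_client_or_503(broken_factory)
except ServiceUnavailable as exc:
    caught = exc

print(caught.status_code)  # 503
```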

** Affects: nova
 Importance: Undecided
 Assignee: Dinesh Bhor (dinesh-bhor)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581350

Title:
  baremetal apis raises 500 InternalServerError if ironic baremetal
  service is not configured or reachable

Status in OpenStack Compute (nova):
  New

Bug description:
  Baremetal APIs raise a 500 InternalServerError if the ironic
  baremetal service is not configured or reachable.

  Steps to reproduce
  ==

  Command:
  nova baremetal-node-list

  Actual result
  =
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-663fbe2c-81b6-4264-9e02-efe5283e5f8f)

  Command:
  nova baremetal-node-show 1

  Actual result
  =
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-898b8986-ecd3-4d13-a819-1bcd0cf703c8)

  Expected result
  ===
  It should return a 503 status code if the ironic baremetal service is not
  configured or reachable.

  n-API LOG:

  2016-05-13 06:34:14.337 ERROR nova.api.openstack.extensions 
[req-898b8986-ecd3-4d13-a819-1bcd0cf703c8 admin admin] Unexpected exception in 
API method
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 90, in 
index
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions icli = 
_get_ironic_client()
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 61, in 
_get_ironic_client
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions icli = 
ironic_client.get_client(CONF.ironic.api_version, **kwargs)
  2016-05-13 06:34:14.337 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/ironicclient/client.py", line 137, in 
get_client
  2016-05-13 

[Yahoo-eng-team] [Bug 1581348] [NEW] Can't delete a v4 csnat port when there is a v6 router interface attached

2016-05-13 Thread Hong Hui Xiao
Public bug reported:

Reproduce:
1) I enable DVR in devstack. After installation, there is a DVR, an ipv4+ipv6 
router gateway in DVR, an ipv4 router interface in DVR, and an ipv6 router 
interface in DVR.

2) I want to delete the v4 subnet, so I delete the ipv4 router interface.
[fedora@normal-dvr devstack]$ neutron router-interface-delete router1 
private-subnet
Removed interface from router router1.

3) I try to delete the v4 subnet, but the neutron server tells me the subnet
can't be deleted because one or more ports are still using it.
[fedora@normal-dvr devstack]$ neutron subnet-delete private-subnet
Unable to complete operation on subnet d0282930-95ca-4f64-9ae9-8c22be9cb3ab: 
One or more ports have an IP allocation from this subnet.

4) Checking the port list, I found the csnat port is still there.
[fedora@normal-dvr devstack]$ neutron port-list
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| bf042acf-40d5-4503-b62e-7389a6fc9bca |      | fa:16:3e:47:a5:40 | {"subnet_id": "d0282930-95ca-4f64-9ae9-8c22be9cb3ab", "ip_address": "10.0.0.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

5) But looking into the snat namespace, there is no such port.


Now I can't delete the subnet, because the port is there. I can't
delete the port, because it has the device owner
network:router_centralized_snat. I can't even attach the subnet back to
the DVR; the neutron server tells me: Router already has a port on
subnet.

This problem does not reproduce if there is no ipv6 subnet attached to
the DVR.

Expected: ipv4 should be usable regardless of whether an ipv6 subnet is
attached to the DVR.

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: ipv6 l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: l3-dvr-backlog

** Tags added: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581348

Title:
  Can't delete a v4 csnat port when there is a v6 router interface
  attached

Status in neutron:
  New

Bug description:
  Reproduce:
  1) I enable DVR in devstack. After installation, there is a DVR, an ipv4+ipv6 
router gateway in DVR, an ipv4 router interface in DVR, and an ipv6 router 
interface in DVR.

  2) I want to delete the v4 subnet, so I delete the ipv4 router interface.
  [fedora@normal-dvr devstack]$ neutron router-interface-delete router1 
private-subnet
  Removed interface from router router1.

  3) I try to delete the v4 subnet, but the neutron server tells me the subnet
  can't be deleted because one or more ports are still using it.
  [fedora@normal-dvr devstack]$ neutron subnet-delete private-subnet
  Unable to complete operation on subnet d0282930-95ca-4f64-9ae9-8c22be9cb3ab: 
One or more ports have an IP allocation from this subnet.

  4) Checking the port list, I found the csnat port is still there.
  [fedora@normal-dvr devstack]$ neutron port-list
  
  +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                       |
  +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
  | bf042acf-40d5-4503-b62e-7389a6fc9bca |      | fa:16:3e:47:a5:40 | {"subnet_id": "d0282930-95ca-4f64-9ae9-8c22be9cb3ab", "ip_address": "10.0.0.3"} |
  +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

  5) But looking into the snat namespace, there is no such port.

  
  Now I can't delete the subnet, because the port is there. I can't
  delete the port, because it has the device owner
  network:router_centralized_snat. I can't even attach the subnet back
  to the DVR; the neutron server tells me: Router already has a port on
  subnet.

  This problem does not reproduce if there is no ipv6 subnet

[Yahoo-eng-team] [Bug 1581349] [NEW] DhcpBase etc need to be moved to neutron-lib for out-of-tree dhcp drivers

2016-05-13 Thread YAMAMOTO Takashi
Public bug reported:

an example of out-of-tree dhcp drivers:
https://github.com/openstack/networking-midonet/blob/1963bcf4cf357647aad2e6362f8fea57dce60b57/midonet/neutron/agent/midonet_driver.py#L24

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581349

Title:
  DhcpBase etc need to be moved to neutron-lib for out-of-tree dhcp
  drivers

Status in neutron:
  New

Bug description:
  an example of out-of-tree dhcp drivers:
  
https://github.com/openstack/networking-midonet/blob/1963bcf4cf357647aad2e6362f8fea57dce60b57/midonet/neutron/agent/midonet_driver.py#L24

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581325] [NEW] test_dhcp_agent_main_agent_manager fails in UT

2016-05-13 Thread Hirofumi Ichihara
Public bug reported:

We can see a UT failure[1]. oslo.service 1.9.0 caused this error.

ft8.16: 
neutron.tests.unit.agent.dhcp.test_agent.TestDhcpAgent.test_dhcp_agent_main_agent_manager_StringException:
 Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutron/tests/unit/agent/dhcp/test_agent.py", line 287, in 
test_dhcp_agent_main_agent_manager
mock.call().wait()])
  File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 969, in assert_has_calls
), cause)
  File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/six.py",
 line 718, in raise_from
raise value
AssertionError: Calls not found.
Expected: [call(),
 call().launch_service(),
 call().wait()]
Actual: [call(, 
restart_method='reload'),
 call().launch_service(, 
workers=1),
 call().wait()]

[1]: http://logs.openstack.org/92/314492/1/gate/gate-neutron-
python27/c2878bb/testr_results.html.gz
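
The mismatch is purely about the extra keyword arguments the new
oslo.service passes. A minimal reproduction of the assertion pattern
(illustrative only, not the actual neutron test):

```python
from unittest import mock

launcher = mock.Mock()
# Newer library code now passes extra keyword arguments:
launcher(mock.sentinel.service, restart_method='reload')

matched = True
try:
    # The old expectation carries no kwargs, so it no longer matches.
    launcher.assert_has_calls([mock.call(mock.sentinel.service)])
except AssertionError:
    matched = False

print(matched)  # False -> "Calls not found", as in the gate failure
```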

** Affects: neutron
 Importance: Undecided
 Assignee: Hirofumi Ichihara (ichihara-hirofumi)
 Status: In Progress


** Tags: gate-failure unittest

** Summary changed:

- test_dhcp_agent_main_agent_manager failes in UT
+ test_dhcp_agent_main_agent_manager fails in UT

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581325

Title:
  test_dhcp_agent_main_agent_manager fails in UT

Status in neutron:
  In Progress

Bug description:
  We can see a UT failure[1]. oslo.service 1.9.0 caused this error.

  ft8.16: 
neutron.tests.unit.agent.dhcp.test_agent.TestDhcpAgent.test_dhcp_agent_main_agent_manager_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "neutron/tests/unit/agent/dhcp/test_agent.py", line 287, in 
test_dhcp_agent_main_agent_manager
  mock.call().wait()])
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 969, in assert_has_calls
  ), cause)
File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/six.py",
 line 718, in raise_from
  raise value
  AssertionError: Calls not found.
  Expected: [call(),
   call().launch_service(),
   call().wait()]
  Actual: [call(, 
restart_method='reload'),
   call().launch_service(, 
workers=1),
   call().wait()]

  [1]: http://logs.openstack.org/92/314492/1/gate/gate-neutron-
  python27/c2878bb/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564154] Re: allow hiding ng help-panel for workflows

2016-05-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/299705
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=7515f1e717376105d95668fc68f84eeda6ca791e
Submitter: Jenkins
Branch:master

commit 7515f1e717376105d95668fc68f84eeda6ca791e
Author: Cindy Lu 
Date:   Thu May 12 14:18:03 2016 -0700

allow hiding ng help-button for workflow steps

the help toggle button to trigger the help-panel should be hidden
for steps that don't need it

To test: remove 'helpUrl' from one of the steps in
launch-instance-workflow.service.js

Change-Id: I4d34113f85dcc5434b0188a229003094c21b7017
Closes-Bug: #1564154


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1564154

Title:
  allow hiding ng help-panel for workflows

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Not all steps need a help panel; we should have a way to hide it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1564154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp