[Yahoo-eng-team] [Bug 1590250] [NEW] vm console showing blank screen

2016-06-07 Thread rohit
Public bug reported:

Hi,
I installed OpenStack using devstack on a new Ubuntu machine. The installation
completed properly and a VM was instantiated successfully, but when I try to
access the console of the VM instance, it simply shows a blank screen.
I checked the novnc server; it is running and receiving requests.
I am running the following services:

disable_service n-net
enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta
disable_service n-spice
enable_service n-novnc
disable_service n-xvnc
enable_service n-sproxy
disable_service tempest

This problem occurs on Liberty as well as on Mitaka.

Could this problem also be related to the system hardware?
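
As a first debugging step, the console URL that nova generates can be fetched
directly and opened in a browser, bypassing the dashboard; a minimal sketch
using python-novaclient (the credentials, auth URL and server name below are
placeholders for your environment):

    # Fetch the noVNC console URL so the proxy path can be tested directly.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    server = nova.servers.find(name='test-vm')
    console = server.get_vnc_console('novnc')
    print(console['console']['url'])  # open this URL directly in a browser

If that URL works when opened directly, the problem is more likely in the
dashboard proxying than in the hypervisor's VNC server.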

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590250

Title:
  vm console showing blank screen

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,
  I installed OpenStack using devstack on a new Ubuntu machine. The
installation completed properly and a VM was instantiated successfully, but
when I try to access the console of the VM instance, it simply shows a blank
screen.
  I checked the novnc server; it is running and receiving requests.
  I am running the following services:

  disable_service n-net
  enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta
  disable_service n-spice
  enable_service n-novnc
  disable_service n-xvnc
  enable_service n-sproxy
  disable_service tempest

  This problem occurs on Liberty as well as on Mitaka.

  Could this problem also be related to the system hardware?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586066] Re: handle oslo.log verbose deprecation

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324124
Committed: 
https://git.openstack.org/cgit/openstack/tacker/commit/?id=0d9f84c4517d9774550e978e2cde857acd504e7a
Submitter: Jenkins
Branch:master

commit 0d9f84c4517d9774550e978e2cde857acd504e7a
Author: Sridhar Ramaswamy 
Date:   Wed Jun 1 22:50:00 2016 +

oslo: remove usage of oslo.log verbose option

The option was deprecated a long time ago, and will be removed in one of
the next library releases, which will render tacker broken if we keep
using the option.

More details:
http://lists.openstack.org/pipermail/openstack-dev/2016-May/095166.html

Change-Id: Iebd08194a600d3537df7a5ee7ab735e8f0a38899
Closes-Bug: #1586066


** Changed in: tacker
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586066

Title:
  handle oslo.log verbose deprecation

Status in neutron:
  Fix Released
Status in tacker:
  Fix Released
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  In https://review.openstack.org/#/c/314573/ the verbose option was
  deleted.

  Time for projects to do the same.
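
  For projects doing the same cleanup, the change is usually just to stop
  referencing the option; a minimal sketch of oslo.log setup that never
  touches verbose (the service name is a placeholder):

      # Configure oslo.log without the deprecated 'verbose' option;
      # verbosity is driven by 'debug' and 'default_log_levels' instead.
      from oslo_config import cfg
      from oslo_log import log as logging

      CONF = cfg.CONF
      logging.register_options(CONF)   # registers debug, log-file, etc.
      CONF(project='tacker')           # parse config files / CLI args
      logging.setup(CONF, 'tacker')    # note: no CONF.verbose anywhere

      LOG = logging.getLogger(__name__)
      LOG.info('service started')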

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590116] Re: test_list_pagination_page_reverse_with_href_links failure in gate

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326711
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e68121b2658b77df94879c3cafee68663eda2c8e
Submitter: Jenkins
Branch:master

commit e68121b2658b77df94879c3cafee68663eda2c8e
Author: Ihar Hrachyshka 
Date:   Tue Jun 7 21:23:44 2016 +0200

Match filter criteria when constructing URI for href based iteration

Without that, we compare apples to oranges (expected results excluding
shared networks and actual results including them).

Change-Id: Ia9b8b1e60acad54110a549da3b327820f2a1ec45
Closes-Bug: #1590116


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590116

Title:
  test_list_pagination_page_reverse_with_href_links failure in gate

Status in neutron:
  Fix Released

Bug description:
  Logs: http://logs.openstack.org/56/300056/7/check/gate-neutron-dsvm-
  api/8d75c73/logs/testr_results.html.gz

  Traceback (most recent call last):
File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_networks.py", 
line 144, in test_list_pagination_page_reverse_with_href_links
  self._test_list_pagination_page_reverse_with_href_links()
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 484, 
in inner
  return f(self, *args, **kwargs)
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 475, 
in inner
  return f(self, *args, **kwargs)
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 686, 
in _test_list_pagination_page_reverse_with_href_links
  self.assertSameOrder(expected_resources, reversed(resources))
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 508, 
in assertSameOrder
  self.assertEqual(len(original), len(actual))
File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
411, in assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 7 != 8

  
  The reason for the failure is that while we correctly limit the results used
for comparison to shared=False when fetching expected results with
list_networks(), we miss the shared=False filter when constructing the URI for
next/previous href iteration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589381] Re: There is an error in help info of default_notification_exchange

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326290
Committed: 
https://git.openstack.org/cgit/openstack/oslo.messaging/commit/?id=8674f738b882f38fd965099e2df826d13f2cf08e
Submitter: Jenkins
Branch:master

commit 8674f738b882f38fd965099e2df826d13f2cf08e
Author: liu-lixiu 
Date:   Tue Jun 7 12:22:08 2016 +0800

Modify info of default_notification_exchange

There is a redundant 'for' in the help info of
default_notification_exchange; delete it.

Change-Id: I258bfe7fbc06f45398f0800ac69159dc313f2ff2
Closes-Bug: #1589381


** Changed in: oslo.messaging
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1589381

Title:
  There is an error in help info of default_notification_exchange

Status in OpenStack Identity (keystone):
  Invalid
Status in oslo.messaging:
  Fix Released

Bug description:
  version: mitaka master

  question:
  # Exchange name for for sending notifications (string value)
  #default_notification_exchange = ${control_exchange}_notification

  should be:
  Exchange name for sending notifications (string value)
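
  For reference, a sketch of how the corrected option could be declared with
  oslo.config (the default mirrors the config sample above, where
  ${control_exchange} is substituted; this is illustrative, not the library's
  exact code):

      from oslo_config import cfg

      # Corrected help string: a single "for".
      default_notification_exchange = cfg.StrOpt(
          'default_notification_exchange',
          default='${control_exchange}_notification',
          help='Exchange name for sending notifications')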

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1589381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590028] Re: Angular LI Required Icon isn't Brand Primary Color

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326521
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=1333729e9265d80f68edc76fb39c2e1ec5351c65
Submitter: Jenkins
Branch:master

commit 1333729e9265d80f68edc76fb39c2e1ec5351c65
Author: Diana Whitten 
Date:   Tue Jun 7 07:38:23 2016 -0700

Angular LI Required Icon isn't Brand Primary Color

Closes-bug: #1590028

Change-Id: I976793eb642613cfa5052944b4af56fb4c5040cc


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590028

Title:
  Angular LI Required Icon isn't Brand Primary Color

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  https://i.imgur.com/KmI0NEw.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590179] [NEW] fernet memcache performance regression

2016-06-07 Thread Brant Knudson
Public bug reported:


Fernet token validation performance got worse in Mitaka than in Liberty. This
is because it no longer uses memcache to cache the token.
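
A rough sketch of the kind of caching that restores the Liberty behaviour,
memoizing validation results for a short TTL the way a memcache-backed cache
would (names here are illustrative, not keystone's actual API):

    import time

    _cache = {}
    TTL = 300  # seconds

    def cached_validate(token_id, validate_token):
        # 'validate_token' stands in for the expensive fernet path
        # (decrypt the token and rebuild its authorization context).
        now = time.time()
        hit = _cache.get(token_id)
        if hit and now - hit[0] < TTL:
            return hit[1]                # cache hit: skip the rebuild
        data = validate_token(token_id)
        _cache[token_id] = (now, data)
        return data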

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: fernet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590179

Title:
  fernet memcache performance regression

Status in OpenStack Identity (keystone):
  New

Bug description:
  
  Fernet token validation performance got worse in Mitaka than in Liberty.
This is because it no longer uses memcache to cache the token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590164] [NEW] Can't initialize magic-search query programmatically

2016-06-07 Thread Tyr Johanson
Public bug reported:

The current magic search query can't be initialized from Angular. This
is needed in order to pre-filter the search results.

** Affects: horizon
 Importance: Undecided
 Assignee: Tyr Johanson (tyr-6)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590164

Title:
  Can't initialize magic-search query programmatically

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The current magic search query can't be initialized from Angular. This
  is needed in order to pre-filter the search results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324244
Committed: 
https://git.openstack.org/cgit/openstack/octavia/commit/?id=1896dfed5cc6ecbed818a13595af72cfe071b73b
Submitter: Jenkins
Branch:master

commit 1896dfed5cc6ecbed818a13595af72cfe071b73b
Author: ZhaoBo 
Date:   Thu Jun 2 14:46:43 2016 +0800

Update nova api version to 2.1

The nova team has decided to remove the nova v2 API code completely, and it
has merged: https://review.openstack.org/#/c/311653/

We should bump to v2.1 ASAP.

Change-Id: Iee5582b5a74bceead2484684b214fca685dbaede
Closes-Bug: #1588171


** Changed in: octavia
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in heat:
  In Progress
Status in neutron:
  In Progress
Status in octavia:
  Fix Released
Status in python-openstackclient:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it
will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to v2.1 ASAP.
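
  For projects consuming the compute API through python-novaclient, the bump
  is a one-line change; a minimal sketch ('sess' is assumed to be an existing
  keystoneauth1 Session):

      from novaclient import client

      # Request the v2.1 API instead of the removed v2.
      nova = client.Client('2.1', session=sess)  # was: client.Client('2', ...)
      print(nova.servers.list())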

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525806] Re: An incorrect value for block_device_mapping_v2 causes HTTP 500 response when creating a VM instance

2016-06-07 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Changed in: nova/mitaka
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525806

Title:
  An incorrect value for block_device_mapping_v2 causes HTTP 500
  response when creating a VM instance

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  An incorrect value for block_device_mapping_v2 causes an HTTP 500 response
when creating a VM instance.
  The input should be validated so that an HTTP 500 response is not returned.
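
  A sketch of the kind of schema tightening that would turn these requests
  into 400s at validation time (an illustrative jsonschema fragment, not
  nova's actual request schema):

      import jsonschema

      # Constrain the two fields from this report so bad input fails
      # validation instead of reaching the compute layer.
      bdm_v2_schema = {
          'type': 'object',
          'properties': {
              'destination_type': {'enum': ['volume', 'local']},
              'volume_size': {'type': 'integer', 'minimum': 1},
          },
      }

      try:
          jsonschema.validate({'destination_type': '', 'volume_size': ''},
                              bdm_v2_schema)
      except jsonschema.ValidationError as e:
          print('would map to HTTP 400: %s' % e.message)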

  [How to reproduce]
  a) destination_type is ""(an empty string)
  Execute the following command(REST API).
  curl -g -i --cacert "/opt/stack/data/CA/int-ca/ca-chain.pem" -X POST 
http://10.0.2.15:8774/v2.1/e7e043ffac8d4325b2872bd2b53cce2b/os-volumes_boot -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}00abb28e025a6770fc13d70fc6a41e327bca90d6" -d '{"server": {"name": 
"server1", "imageRef": "", "block_device_mapping_v2": [{"boot_index": "0", 
"uuid": "4115a0d1-eee2-4c3e-847d-e50250a989a3", "volume_size": "1", 
"source_type": "image", "destination_type": "", "delete_on_termination": 
false}], "flavorRef": "1", "max_count": 1, "min_count": 1}}'

  The response is as follows:
  --
  HTTP/1.1 500 Internal Server Error
  X-Openstack-Nova-Api-Version: 2.6
  Vary: X-OpenStack-Nova-API-Version
  Content-Length: 194
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-29fb2efe-eda8-43dd-8ea1-5f73b86f6171
  Date: Mon, 14 Dec 2015 07:17:24 GMT

  {"computeFault": {"message": "Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}
  --

  b) destination_type is neither 'volume' nor 'local'
  Execute the following command(REST API).
  curl -g -i --cacert "/opt/stack/data/CA/int-ca/ca-chain.pem" -X POST 
http://10.0.2.15:8774/v2.1/e7e043ffac8d4325b2872bd2b53cce2b/os-volumes_boot -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}7add7a5e501cc287f6043d83144ea24a69134ae7" -d '{"server": {"name": 
"server1", "imageRef": "", "block_device_mapping_v2": [{"boot_index": "0", 
"uuid": "4115a0d1-eee2-4c3e-847d-e50250a989a3", "volume_size": "1", 
"source_type": "image", "destination_type": "X", "delete_on_termination": 
false}], "flavorRef": "1", "max_count": 1, "min_count": 1}}'

  The response is as follows:
  --
  HTTP/1.1 500 Internal Server Error
  X-Openstack-Nova-Api-Version: 2.6
  Vary: X-OpenStack-Nova-API-Version
  Content-Length: 194
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-f3644722-ba2c-49bf-9db0-badfd7dffa30
  Date: Mon, 14 Dec 2015 07:30:02 GMT

  {"computeFault": {"message": "Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}
  --

  c) volume_size is ""(an empty string)
  Execute the following command(REST API).
  curl -g -i --cacert "/opt/stack/data/CA/int-ca/ca-chain.pem" -X POST 
http://10.0.2.15:8774/v2.1/e7e043ffac8d4325b2872bd2b53cce2b/os-volumes_boot -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}85cc81c3f710561ddd640ce26c41990703d925ce" -d '{"server": {"name": 
"server1", "imageRef": "", "block_device_mapping_v2": [{"boot_index": "0", 
"uuid": "4115a0d1-eee2-4c3e-847d-e50250a989a3", "volume_size": "", 
"source_type": "image", "destination_type": "volume", "delete_on_termination": 
false}], "flavorRef": "1", "max_count": 1, "min_count": 1}}'

  The response is as follows:
  --
  HTTP/1.1 500 Internal Server Error
  X-Openstack-Nova-Api-Version: 2.1
  Vary: X-OpenStack-Nova-API-Version
  Content-Length: 194
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-46cb61d3-c110-4bbb-9248-3ebe0f909c23
  Date: Mon, 14 Dec 2015 07:36:27 GMT

  {"computeFault": {"message": "Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}
  --

  d) volume_size is 0
  Execute the following command(REST API).
  curl -g -i --cacert "/opt/stack/data/CA/int-ca/ca-chain.pem" -X POST 
http://10.0.2.15:8774/v2.1/e7e043ffac8d4325b2872bd2b53cce2b/os-volumes_boot -H 
"User-Agent: 

[Yahoo-eng-team] [Bug 1183523] Re: db-archiving fails to clear some deleted rows from instances table

2016-06-07 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1183523

Title:
  db-archiving fails to clear some deleted rows from instances table

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Downstream bug report from Red Hat Bugzilla against Grizzly:
  https://bugzilla.redhat.com/show_bug.cgi?id=960644

  In unit tests, db-archiving moves all 'deleted' rows to the shadow
  tables.  However, in the real-world test, some deleted rows got stuck
  in the instances table.

  I suspect a bug in the way we deal with foreign key constraints.
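
  A sketch of the suspected constraint problem: rows in child tables must be
  archived before their parents, otherwise deleting from 'instances' is
  blocked by foreign keys (illustrative SQLAlchemy, not nova's archive code;
  the table list only loosely mirrors the schema):

      from sqlalchemy import create_engine, MetaData

      engine = create_engine('sqlite:///nova.db')  # placeholder URL
      meta = MetaData()
      meta.reflect(bind=engine)

      # Archive referencing tables first, 'instances' last.
      for name in ['instance_metadata', 'instance_faults', 'instances']:
          table = meta.tables[name]
          shadow = meta.tables['shadow_' + name]
          with engine.begin() as conn:
              rows = conn.execute(
                  table.select().where(table.c.deleted != 0)).fetchall()
              for row in rows:
                  conn.execute(shadow.insert().values(**dict(row)))
              conn.execute(table.delete().where(table.c.deleted != 0))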

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1183523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576713] Re: Network metadata fails to state correct mtu

2016-06-07 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
 Assignee: (unassigned) => Dr. Jens Rosenboom (j-rosenboom-j)

** Changed in: nova/mitaka
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576713

Title:
  Network metadata fails to state correct mtu

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Scenario:

  Instance is booted on Neutron tenant network with ML2 OVS driver and
  encapsulation. The MTU for that network is automatically calculated as
  1450. Instance has --config-drive=true set.

  Result:

  In /openstack/latest/network_data.json we get:

   "links": [{"ethernet_mac_address": "fa:16:3e:36:96:c8", "mtu": null,
  "type": "ovs", "id": "tapb989c3aa-5c", "vif_id": "b989c3aa-5c1f-
  4d2b-8711-b96c66604902"}]

  Expected:

  Have "mtu": "1450" instead.

  Environment:

  OpenStack Mitaka on Ubuntu 16.04
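
  Until a fix lands, a consumer of the config drive can guard against the
  null; a small sketch of reading network_data.json defensively (the 1500
  fallback is this sketch's assumption, not something the metadata format
  guarantees):

      import json

      # Treat a null mtu as "unset" and fall back to a local default.
      with open('/mnt/config/openstack/latest/network_data.json') as f:
          network_data = json.load(f)

      for link in network_data.get('links', []):
          mtu = link.get('mtu') or 1500
          print('%s: mtu %d' % (link['id'], mtu))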

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1576713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497484] Re: image-create does not respect the force_raw_images setting

2016-06-07 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497484

Title:
  image-create does not respect the force_raw_images setting

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Instance snapshots of instances sourced from, e.g., QCOW2 images will
  be created in the image service as "qcow2" and then switched to "raw"
  in an update step.

  Use case:

  We decided to drop QCOW2 support from certain product configurations,
  as force_raw_images is enabled by default, and the conversion overhead
  made for a sub-wonderful customer experience.

  After dropping QCOW2 from the acceptable list of image formats from
  Glance, clients could no longer make instance snapshots from instances
  that were spawned from QCOW2 images, despite the fact that the backing
  store was not QCOW2.

  Steps to Reproduce:

  1. Upload a QCOW2 image into Glance
  2. Update Nova/Glance configs to disable QCOW2 images and enable 
force_raw_images
  3. Boot an instance against the QCOW2 image
  4. Create a snapshot of the instance

  Expected behavior:

  A snapshot of the instance

  Actual results:
  ERROR (BadRequest): 400 Bad Request
  Invalid disk format 'qcow2' for image.
  (HTTP 400) (HTTP 400) (Request-ID: req-8e8d8d51-8e0c-4033-bb84-774d2ed1f90a)
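
  A sketch of the behaviour this report asks for: derive the snapshot's
  disk_format from what the backing store actually is when force_raw_images
  is set (illustrative only, not nova's snapshot path):

      def snapshot_disk_format(source_image_format, force_raw_images):
          # The backing file was converted to raw at spawn time, so the
          # snapshot uploaded to glance is raw as well.
          if force_raw_images:
              return 'raw'
          return source_image_format

      assert snapshot_disk_format('qcow2', force_raw_images=True) == 'raw'
      assert snapshot_disk_format('qcow2', force_raw_images=False) == 'qcow2'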

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434710] Re: [Launch Instance Fix] Use Magic Search in each step instead of basic search bar

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317554
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=26359bd00f38452eb7551da55e296680aac38e34
Submitter: Jenkins
Branch:master

commit 26359bd00f38452eb7551da55e296680aac38e34
Author: Matt Borland 
Date:   Tue May 17 09:04:30 2016 -0600

Use Magic-Search for Security Groups step in Launch Instance

Security Groups was the last table in the standard Launch Instance steps
that didn't use Magic Search for filtering.  This patch establishes the
use of Magic Search and also removes the search bar from the table to
promote accessibility.

The filters are only for Name and Description right now, but this allows
additional filters to be added as necessary.

Change-Id: I5e6098752d1b1d5c9736c103796b533d656092a6
Closes-Bug: 1434710


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1434710

Title:
  [Launch Instance Fix] Use Magic Search in each step instead of basic
  search bar

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The source, flavor, network, and access & security steps in the Launch
  Instance workflow currently use a basic search bar (Smart-Table
  filtering). Use Magic Search instead for client-side faceted search.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1434710/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590133] [NEW] help text for cpu_allocation_ratio is wrong

2016-06-07 Thread Chris Friesen
Public bug reported:

In stable/mitaka in resource_tracker.py the help text for the
cpu_allocation_ratio config option reads in part:

 'NOTE: This can be set per-compute, or if set to 0.0, the value '
 'set on the scheduler node(s) will be used '
 'and defaulted to 16.0'),

However, there is no longer any value set on the scheduler node(s).
They use the per-compute-node value set in resource_tracker.py.

Instead, if the value is 0.0 then ComputeNode._from_db_object() will
convert the value to 16.0.  This ensures that the scheduler filters see
a value of 16.0 by default.

In Newton the plan appears to be to change the default value to an
explicit 16.0 (and presumably updating the help text) but that doesn't
help the already-released Mitaka code.
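
The conversion the help text fails to describe happens when the object is
loaded; a simplified sketch of the 0.0-to-16.0 defaulting (mirroring, not
quoting, ComputeNode._from_db_object):

    # A stored ratio of 0.0 is rewritten to the hard-coded default when
    # the ComputeNode object is loaded from the database, so scheduler
    # filters always see 16.0.
    def effective_cpu_allocation_ratio(db_value):
        if db_value in (None, 0.0):
            return 16.0
        return db_value

    assert effective_cpu_allocation_ratio(0.0) == 16.0
    assert effective_cpu_allocation_ratio(4.0) == 4.0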

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590133

Title:
  help text for cpu_allocation_ratio is wrong

Status in OpenStack Compute (nova):
  New

Bug description:
  In stable/mitaka in resource_tracker.py the help text for the
  cpu_allocation_ratio config option reads in part:

   'NOTE: This can be set per-compute, or if set to 0.0, the value '
   'set on the scheduler node(s) will be used '
   'and defaulted to 16.0'),

  However, there is no longer any value set on the scheduler node(s).
  They use the per-compute-node value set in resource_tracker.py.

  Instead, if the value is 0.0 then ComputeNode._from_db_object() will
  convert the value to 16.0.  This ensures that the scheduler filters
  see a value of 16.0 by default.

  In Newton the plan appears to be to change the default value to an
  explicit 16.0 (and presumably updating the help text) but that doesn't
  help the already-released Mitaka code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589502] Re: Request Mitaka release for networking-bagpipe

2016-06-07 Thread Thomas Morin
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589502

Title:
  Request Mitaka release for networking-bagpipe

Status in BaGPipe:
  New
Status in neutron:
  New

Bug description:
  Can you please do a release of networking-bagpipe from the master branch?

  Commit: 870d281eeb707fbb6c4de431d764cebb586f872e
  Version: 4.0.0  (first release, but number chosen to be in sync with 
networking-bgpvpn)

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-bagpipe/+bug/1589502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590117] [NEW] Segment Extension get_plugin method should be a classmethod

2016-06-07 Thread Brandon Logan
Public bug reported:

There isn't any reason to have it as an instance method, as it's only
returning a constant.
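
The requested change in miniature (class and constant names below are
placeholders, not the actual extension code):

    SEGMENTS = 'segments'

    class SegmentExtensionBefore(object):
        def get_plugin_type(self):      # needs an instance for no reason
            return SEGMENTS

    class SegmentExtensionAfter(object):
        @classmethod
        def get_plugin_type(cls):       # callable without instantiating
            return SEGMENTS

    print(SegmentExtensionAfter.get_plugin_type())  # no instance required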

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590117

Title:
  Segment Extension get_plugin method should be a classmethod

Status in neutron:
  New

Bug description:
  There isn't any reason to have it as an instance method, as it's only
  returning a constant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590116] [NEW] test_list_pagination_page_reverse_with_href_links failure in gate

2016-06-07 Thread Ihar Hrachyshka
Public bug reported:

Logs: http://logs.openstack.org/56/300056/7/check/gate-neutron-dsvm-
api/8d75c73/logs/testr_results.html.gz

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_networks.py", 
line 144, in test_list_pagination_page_reverse_with_href_links
self._test_list_pagination_page_reverse_with_href_links()
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 484, in 
inner
return f(self, *args, **kwargs)
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 475, in 
inner
return f(self, *args, **kwargs)
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 686, in 
_test_list_pagination_page_reverse_with_href_links
self.assertSameOrder(expected_resources, reversed(resources))
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 508, in 
assertSameOrder
self.assertEqual(len(original), len(actual))
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
411, in assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 7 != 8


The reason for the failure is that while we correctly limit the results used
for comparison to shared=False when fetching expected results with
list_networks(), we miss the shared=False filter when constructing the URI
for next/previous href iteration.
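
The gist of the fix: carry the same filter criteria into the href used for
the next/previous page, so both sides paginate over the same set; a
simplified sketch ('build_page_uri' is an illustrative helper, not neutron's
code):

    from six.moves.urllib.parse import urlencode

    def build_page_uri(base, filters, marker, limit, page_reverse=False):
        params = dict(filters)           # e.g. {'shared': False}
        params.update({'marker': marker, 'limit': limit,
                       'page_reverse': page_reverse})
        return base + '?' + urlencode(params)

    print(build_page_uri('/v2.0/networks', {'shared': False},
                         marker='uuid-1', limit=2))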

** Affects: neutron
 Importance: High
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590116

Title:
  test_list_pagination_page_reverse_with_href_links failure in gate

Status in neutron:
  In Progress

Bug description:
  Logs: http://logs.openstack.org/56/300056/7/check/gate-neutron-dsvm-
  api/8d75c73/logs/testr_results.html.gz

  Traceback (most recent call last):
File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_networks.py", 
line 144, in test_list_pagination_page_reverse_with_href_links
  self._test_list_pagination_page_reverse_with_href_links()
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 484, 
in inner
  return f(self, *args, **kwargs)
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 475, 
in inner
  return f(self, *args, **kwargs)
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 686, 
in _test_list_pagination_page_reverse_with_href_links
  self.assertSameOrder(expected_resources, reversed(resources))
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 508, 
in assertSameOrder
  self.assertEqual(len(original), len(actual))
File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
411, in assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 7 != 8

  
  The reason for the failure is that while we correctly limit the results used
for comparison to shared=False when fetching expected results with
list_networks(), we miss the shared=False filter when constructing the URI for
next/previous href iteration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586594] Re: in nova/compute/manager.py line 4894 'an network' should be 'a network'

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/322422
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=52aea257e9552276545ada67a8cbe07216f8ee52
Submitter: Jenkins
Branch:master

commit 52aea257e9552276545ada67a8cbe07216f8ee52
Author: QunyingRan 
Date:   Sat May 28 17:56:34 2016 +0800

Modify 'an network' to 'a network'

in nova/compute/manager.py line 4894 'an network' should be 'a network'

Change-Id: I3491b4fcee2ffbc8f6daa3ad9aa5964e8cfb97cb
Closes-Bug: #1586594


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586594

Title:
  in nova/compute/manager.py line 4894 'an network' should be 'a
  network'

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In nova/compute/manager.py line 4894,  'an network' in document should
  be 'a network'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1586594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589960] Re: avoid one unnecessary _get_power_state call

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326431
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6211543493418509d5d1ace1a1ef55a3eebcd6b2
Submitter: Jenkins
Branch:master

commit 6211543493418509d5d1ace1a1ef55a3eebcd6b2
Author: jichenjc 
Date:   Mon May 2 10:59:40 2016 +0800

Avoid unnecessary _get_power_state call

The result of _get_power_state can be reused; there is no need to call
it twice when there is no state change.

Change-Id: I3c495031d98b35734f37139ac1b1c3a4d25d0a8f
Closes-Bug: 1589960


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589960

Title:
  avoid one unnecessary _get_power_state call

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L877

  calls _get_power_state inside the function at
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1042

  and then we call it again at

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L878

  There is actually no state change in the _retry_reboot function, so we can
  reuse the state in a variable and avoid a mock in the tests.
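
  The shape of the fix, in miniature (simplified, with placeholder function
  names standing in for the manager.py code paths):

      def handle_init_instance(instance, get_power_state, retry_reboot):
          # Fetch the power state once and hand it to the retry logic
          # instead of querying the driver a second time.
          current_power_state = get_power_state(instance)
          return retry_reboot(instance, current_power_state)

      # Toy usage with stub callables:
      print(handle_init_instance({'uuid': 'fake'},
                                 lambda inst: 'running',
                                 lambda inst, state: (False, None)))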

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590104] [NEW] network config from datasource overrides network config from system

2016-06-07 Thread Scott Moser
Public bug reported:

Network configuration in system config should override configuration provided
by a datasource.
The order of precedence should be (from lowest to highest):
  datasource
  system config
  kernel command line

When Juju creates LXC containers, it wants to be in control of networking and
does not want cloud-init to configure networking either from the datasource
(LXC's template-provided NoCloud) or from the fallback.
It specifies that configuration directly in /etc/network/interfaces.
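
A sketch of the desired merge order (a plain-Python illustration; cloud-init's
real config merging is considerably richer):

    # Later sources win: with this ordering, system config beats the
    # datasource and the kernel command line beats both, matching the
    # precedence requested above.
    def merged_network_config(datasource_cfg, system_cfg, cmdline_cfg):
        merged = {}
        for cfg in (datasource_cfg, system_cfg, cmdline_cfg):
            merged.update(cfg or {})
        return merged

    print(merged_network_config({'config': 'from-datasource'},
                                {'config': 'disabled'},  # e.g. system config
                                None))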

ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: cloud-init 0.7.7~bzr1212-0ubuntu1
ProcVersionSignature: Ubuntu 4.4.0-23.41-generic 4.4.10
Uname: Linux 4.4.0-23-generic x86_64
ApportVersion: 2.20.1-0ubuntu2.1
Architecture: amd64
Date: Tue Jun  7 18:16:09 2016
PackageArchitecture: all
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
SourcePackage: cloud-init
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: cloud-init
 Importance: High
 Status: Confirmed

** Affects: cloud-init (Ubuntu)
 Importance: High
 Status: Confirmed

** Affects: cloud-init (Ubuntu Xenial)
 Importance: High
 Status: Confirmed

** Affects: cloud-init (Ubuntu Yakkety)
 Importance: High
 Status: Confirmed


** Tags: amd64 apport-bug uec-images xenial

** Attachment added: "script to reproduce issue and patch the container"
   
https://bugs.launchpad.net/bugs/1590104/+attachment/4679073/+files/test-no-networking

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => High

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: High
   Status: Confirmed

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1590104

Title:
  network config from datasource overrides network config from system

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  Network configuration in system config should override configuration
provided by a datasource.
  The order of precedence should be (from lowest to highest):
    datasource
    system config
    kernel command line

  When Juju creates LXC containers, it wants to be in control of networking
and does not want cloud-init to configure networking either from the
datasource (LXC's template-provided NoCloud) or from the fallback.
  It specifies that configuration directly in /etc/network/interfaces.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: cloud-init 0.7.7~bzr1212-0ubuntu1
  ProcVersionSignature: Ubuntu 4.4.0-23.41-generic 4.4.10
  Uname: Linux 4.4.0-23-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.1
  Architecture: amd64
  Date: Tue Jun  7 18:16:09 2016
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1590104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522101] Re: In liberty, tempest identity v2 test failed with Invalid input for external_gateway_info. Reason: '' is not a valid UUID.

2016-06-07 Thread Castulo J. Martinez
I could not reproduce the problem. I ran the test in
"tempest.api.identity.test_extension" in a Liberty environment and in an
environment with master code, and the test passed in both cases. I will
mark this as Invalid; please feel free to reopen it if you are able to
reproduce it.

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1522101

Title:
  In liberty, tempest identity v2 test failed with Invalid input for
  external_gateway_info. Reason: '' is not a valid UUID.

Status in OpenStack Identity (keystone):
  Invalid
Status in tempest:
  Invalid

Bug description:
  
https://github.com/openstack/tempest/blob/d97c374caa821ec4e653cf32eb8fa8d211fc1517/tempest/common/dynamic_creds.py#L228
  It creates networks based on the credential type. Most probably we need to
change some CONF setting so as not to use a separate network for testing.

  Log:
  2015-12-01 20:13:57.712 19398 DEBUG tempest_lib.common.rest_client 
[req-16b5b381-4472-486d-ad97-e8e6781b9d1a ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json'}
  Body: 
  Response - Headers: {'status': '200', 'content-length': '4594', 'vary': 
'X-Auth-Token', 'server': 'Apache/2.4.10 (Debian)', 'connection': 'close', 
'date': 'Tue, 01 Dec 2015 20:15:09 GMT', 'content-type': 'application/json', 
'x-openstack-request-id': 'req-16b5b381-4472-486d-ad97-e8e6781b9d1a'}
  Body: {"access": {"token": {"issued_at": 
"2015-12-01T20:15:09.293214", "expires": "2015-12-02T00:15:09Z", "id": 
"b99921fe01034f738673d7d5c30288bf", "tenant": {"description": "Bootstrap 
accounts created via keystone deploy", "enabled": true, "id": 
"f2dda17e833a4c3b81dec61d527cecf7", "name": "admin"}, "audit_ids": 
["GZ0CrmK5TBifimJd9nqOTw"]}, "serviceCatalog": [{"endpoints": [{"adminURL": 
"http://192.168.245.9:8070/v2.0", "region": "region1", "internalURL": 
"http://192.168.245.9:8070/v2.0", "id": "18008fa6220f40bc86b1c34390bd11c4", 
"publicURL": "https://myhelion.test:8070/v2.0"}], "endpoints_links": [], 
"type": "monitoring", "name": "monasca"}, {"endpoints": [{"adminURL": 
"http://192.168.245.9:9696/", "region": "region1", "internalURL": 
"http://192.168.245.9:9696/", "id": "2f501a53bebc470f9664979d084792b1", 
"publicURL": "https://myhelion.test:9696/"}], "endpoints_links": [], "type": 
"network", "name": "neutron"}, {"endpoints": [{"adminURL": 
"http://192.168.245.9:8776/v2/f2dda17e833a4c3b81dec61d527cecf7", "region": 
"region1", "internalURL": 
"http://192.168.245.9:8776/v2/f2dda17e833a4c3b81dec61d527cecf7", "id": 
"3b2c82c02aa34d8ebe8f1ab7b66804c9", "publicURL": 
"https://myhelion.test:8776/v2/f2dda17e833a4c3b81dec61d527cecf7"}], 
"endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": 
[{"adminURL": "http://192.168.245.9:9292", "region": "region1", "internalURL": 
"http://192.168.245.9:9292", "id": "6e0bddb23a634492b6eaaa2cecee7104", 
"publicURL": "https://myhelion.test:9292"}], "endpoints_links": [], "type": 
"image", "name": "glance"}, {"endpoints": [{"adminURL": 
"http://192.168.245.9:21131/v1", "region": "region1", "internalURL": 
"http://192.168.245.9:21131/v1", "id": "0f367c10315f4d319ba5797308740776", 
"publicURL": "https://myhelion.test:21131/v1"}], "endpoints_links": [], "type": 
"hp-catalog", "name": "sherpa"}, {"endpoints": [{"adminURL": 
"http://192.168.245.9:8777/", "region": "region1", "internalURL": 
"http://192.168.245.9:8777/", "id": "977b357f525a44029b9a767775d5e98c", 
"publicURL": "https://myhelion.test:8777/"}], "endpoints_links": [], "type": 
"metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": 
"http://192.168.245.9:8776/v1/f2dda17e833a4c3b81dec61d527cecf7", "region": 
"region1", "internalURL": 
"http://192.168.245.9:8776/v1/f2dda17e833a4c3b81dec61d527cecf7", "id": 
"06645b9beae24b68abe26fcada6ead8b", "publicURL": 
"https://myhelion.test:8776/v1/f2dda17e833a4c3b81dec61d527cecf7"}], 
"endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": 
[{"adminURL": "http://192.168.245.9:8004/v1/f2dda17e833a4c3b81dec61d527cecf7", 
"region": "region1", "internalURL": 
"http://192.168.245.9:8004/v1/f2dda17e833a4c3b81dec61d527cecf7", "id": 
"df4bb1789d524426b39f546c6a66113c", "publicURL": 
"https://myhelion.test:8004/v1/f2dda17e833a4c3b81dec61d527cecf7"}], 
"endpoints_links": [], "type": "orchestration", "name": "heat"}, {"endpoints": 
[{"adminURL": 
"http://192.168.245.9:8080/v1/AUTH_f2dda17e833a4c3b81dec61d527cecf7", "region": 
"region1", "internalURL": 
"http://192.168.245.9:8080/v1/AUTH_f2dda17e833a4c3b81dec61d527cecf7", "id": 
"36f0b1e4661643a6af99647a8ab45a75", "publicURL": 
"https://myhelion.test:8080/v1/AUTH_f2dda17e833a4c3b81dec61d527cecf7"}], 
"endpoints_links": [], "type": "object-store", "name": "swift"}, {"endpoints": 
[{"adminURL": 

[Yahoo-eng-team] [Bug 1590103] [NEW] ng launch instance tables can use hzNoItems directive

2016-06-07 Thread Cindy Lu
Public bug reported:

We can reduce code duplication by using the hzNoItems directive to show
a message if there are no items in the table.

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590103

Title:
  ng launch instance tables can use hzNoItems directive

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We can reduce code duplication by using the hzNoItems directive to
  show a message if there are no items in the table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501735] Re: Network interface allocation corrupts instance info cache

2016-06-07 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
 Assignee: (unassigned) => Mark Goddard (mgoddard)

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/liberty
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501735

Title:
  Network interface allocation corrupts instance info cache

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  Allocation of network interfaces for an instance can result in
  corruption of the instance info cache in Nova. The result is that the
  cache may contain duplicate entries for network interfaces. This can
  cause failure to boot nodes, as seen with the Libvirt driver.

  Seen on Ubuntu / devstack / commit
  b0013d93ffeaed53bc28d9558def26bdb7041ed7.

  The issue can be reproduced using an instance with a large number of
  interfaces, for example using the heat stack in the attached YAML file
  heat-stack-many-interfaces.yaml. For improved reproducibility, add a
  short sleep in nova.network.neutronv2.api.API.allocate_for_instance,
  just before the call to self.get_instance_nw_info.

  This issue was found by SecurityFun23 when testing the fix for bug
  #1467581.

  The problem appears to be that in
  nova.network.neutronv2.api.API.allocate_for_instance, after the
  Neutron API calls to create/update ports, but before the instance info
  cache is  updated in get_instance_nw_info, it is possible for another
  request to refresh the instance info cache. This will cause the
  new/updated ports to be added to the cache as they are discovered in
  Neutron. Then, the original request resumes, and unconditionally adds
  the new interfaces to the cache. This results in duplicate entries.
  The most likely candidate for another request is probably Neutron
  network-change notifications, which are triggered by the port
  update/create operation. The allocation of multiple interfaces makes the
  problem more likely to occur, as Neutron API requests are made serially
  for each of the ports, allowing time for the notifications to arrive.

  The perceived problem in a more visual form:

  Request:
  - Allocate interfaces for an instance 
(nova.network.neutronv2.api.API.allocate_for_instance)
  - n x Neutron API port create/updates
  --
  Notification:
  - External event notification from Neutron - network-changed 
(nova.compute.manager.ComputeManager.external_instance_event)
  - Refresh instance network cache (network_api.get_instance_nw_info)
  - Query ports for device in Neutron
  - Add new ports to instance info cache
  ---
  Request:
  - Refresh instance network cache with new interfaces (get_instance_nw_info)
  - Unconditionally add duplicate interfaces to cache.
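
  One way to make the final cache write resilient to the interleaving above
  is to merge by VIF id instead of appending unconditionally; a sketch of
  the idea (not the actual fix):

      def merge_vifs(cached_vifs, new_vifs):
          # Key by VIF id so a concurrent refresh that already added the
          # new ports cannot produce duplicate entries.
          by_id = dict((vif['id'], vif) for vif in cached_vifs)
          for vif in new_vifs:
              by_id[vif['id']] = vif      # replace instead of append
          return list(by_id.values())

      cache = [{'id': 'b989c3aa-5c1f', 'mac': 'fa:16:3e:36:96:c8'}]
      print(merge_vifs(cache, [{'id': 'b989c3aa-5c1f',
                                'mac': 'fa:16:3e:36:96:c8'}]))  # one entry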

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590091] [NEW] bug in handling of ISOLATE thread policy

2016-06-07 Thread Chris Friesen
Public bug reported:

I'm running stable/mitaka in devstack.  I've got a small system with 2
pCPUs, both marked as available for pinning.  They're two cores of a
single processor, no threads.  "virsh capabilities" shows:

  [XML output stripped by the list archive; it showed a single NUMA cell with
  two cores and no sibling threads]

It is my understanding that I should be able to boot up an instance with
two dedicated CPUs and a thread policy of ISOLATE, since I have two
physical cores and no threads.  (Is this correct?)

Unfortunately, the NUMATopology filter fails my host.  The problem is in
_pack_instance_onto_cores():

if (instance_cell.cpu_thread_policy ==
fields.CPUThreadAllocationPolicy.ISOLATE):
# make sure we have at least one fully free core
if threads_per_core not in sibling_sets:
return

pinning = _get_pinning(1,  # we only want to "use" one thread per core
   sibling_sets[threads_per_core],
   instance_cell.cpuset)


Right before the call to _get_pinning() we have the following:

(Pdb) instance_cell.cpu_thread_policy
u'isolate'
(Pdb) threads_per_core
1
(Pdb) sibling_sets 
defaultdict(<type 'list'>, {1: [CoercedSet([0, 1])], 2: [CoercedSet([0, 1])]})
(Pdb) sibling_sets[threads_per_core]
[CoercedSet([0, 1])]
(Pdb) instance_cell.cpuset
CoercedSet([0, 1])

In this code snippet, _get_pinning() returns None, causing the filter to
fail the host.  Tracing a bit further in, in _get_pinning() we have the
following line:

if threads_no * len(sibling_set) < len(instance_cores):
return

Coming into this line of code the variables look like this:

(Pdb) threads_no
1
(Pdb) sibling_set
[CoercedSet([0, 1])]
(Pdb) len(sibling_set)
1
(Pdb) instance_cores
CoercedSet([0, 1])
(Pdb) len(instance_cores)
2

So the test evaluates to True, and we bail out.

I don't think this is correct, we should be able to schedule on this
host.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590091

Title:
  bug in handling of ISOLATE thread policy

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm running stable/mitaka in devstack.  I've got a small system with 2
  pCPUs, both marked as available for pinning.  They're two cores of a
  single processor, no threads.  "virsh capabilities" shows:

  [XML output stripped by the list archive; it showed a single NUMA cell with
  two cores and no sibling threads]

  It is my understanding that I should be able to boot up an instance
  with two dedicated CPUs and a thread policy of ISOLATE, since I have
  two physical cores and no threads.  (Is this correct?)

  Unfortunately, the NUMATopology filter fails my host.  The problem is
  in _pack_instance_onto_cores():

  if (instance_cell.cpu_thread_policy ==
  fields.CPUThreadAllocationPolicy.ISOLATE):
  # make sure we have at least one fully free core
  if threads_per_core not in sibling_sets:
  return

  pinning = _get_pinning(1,  # we only want to "use" one thread per core
 sibling_sets[threads_per_core],
 instance_cell.cpuset)

  
  Right before the call to _get_pinning() we have the following:

  (Pdb) instance_cell.cpu_thread_policy
  u'isolate'
  (Pdb) threads_per_core
  1
  (Pdb) sibling_sets 
  defaultdict(<type 'list'>, {1: [CoercedSet([0, 1])], 2: [CoercedSet([0, 1])]})
  (Pdb) sibling_sets[threads_per_core]
  [CoercedSet([0, 1])]
  (Pdb) instance_cell.cpuset
  CoercedSet([0, 1])

  In this code snippet, _get_pinning() returns None, causing the filter
  to fail the host.  Tracing a bit further in, in _get_pinning() we have
  the following line:

  if threads_no * len(sibling_set) < len(instance_cores):
  return

  Coming into this line of code the variables look like this:

  (Pdb) threads_no
  1
  (Pdb) sibling_set
  [CoercedSet([0, 1])]
  (Pdb) len(sibling_set)
  1
  (Pdb) instance_cores
  CoercedSet([0, 1])
  (Pdb) len(instance_cores)
  2

  So the test evaluates to True, and we bail out.

  I don't think this is correct, we should be able to schedule on this
  host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503453] Re: unavailable ironic nodes being scheduled to

2016-06-07 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Jay Faulkner (jason-oldos)

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Tags removed: mitaka-backport-potential
** Tags added: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1503453

Title:
  unavailable ironic nodes being scheduled to

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  When the compute resource tracker checks nodes, the ironic driver
  checks the node against a list of states that it should return
  resources for. This is to prevent nodes in various ironic states, like
  our cleaning process, that are not available from being scheduled to
  by nova.

  The logic around this check (
  
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L334-L351
  ) looks for existing instances on the node, and if they aren't found
  it then looks at the conditions for returning the node as unavailable.

  The problem is when you have an orphaned instance on your node, one
  which ironic sees as present but nova does not (usually nova lists it
  as having been deleted).

  The instance detection will return true, causing the memory_mb_used
  and memory_mb values to be set to the retrieved value from
  instance_info['memory_mb'].

  The check for _node_resources_unavailable will not run as it is an
  elif. This means that even if this node is in maintenance state, we
  won't notice and return all zeros for resources as we normally would.

  Once the resource tracker calls _update_usage_from_instance, it will
  not find an instance associated with the node from nova's point of
  view and will return all of the memory as available instead, causing
  builds to be scheduled to this node.

  Ironic will then fail the build attempt due to it showing an instance
  already associated with the node.
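
  A simplified, self-contained sketch of that control flow (names are
  abridged from nova/virt/ironic/driver.py; the real code is richer):

     def node_memory(node, instance_info, node_unavailable):
         # returns (memory_mb, memory_mb_used) as the resource tracker
         # will see them
         if node.instance_uuid:
             # an orphaned instance lands here: ironic still reports it,
             # so the elif below is never evaluated
             return instance_info['memory_mb'], instance_info['memory_mb']
         elif node_unavailable:
             # maintenance/cleaning: report no resources at all
             return 0, 0
         return node.properties['memory_mb'], 0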

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1503453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586950] Re: ImportError: No module named novadocker.virt.docker.driver

2016-06-07 Thread Sean Dague
The removal of import_object_ns is intentional. If you want to use an
out of tree driver (which really isn't encouraged), you must make that a
namespace package which lives in nova.virt.
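
A hypothetical nova.conf sketch of what that means in practice (the module
and class names here are made up; the key point is that the value is
resolved relative to the nova.virt package):

   [DEFAULT]
   # requires the driver to be installed as the namespace package
   # nova.virt.mydriver
   compute_driver = mydriver.MyDriver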

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586950

Title:
  ImportError: No module named novadocker.virt.docker.driver

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-cpu.pid; fg || echo "n-cpu failed to start" | tee "/opt/stack/status/stack/n-cpu.failure"
  [1] 61162
  /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
  2016-05-30 00:12:07.120 WARNING oslo_reports.guru_meditation_report [-] Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
  2016-05-30 00:12:07.305 INFO nova.virt.driver [-] Loading compute driver 'novadocker.virt.docker.driver.DockerDriver'
  2016-05-30 00:12:07.306 ERROR nova.virt.driver [-] Unable to load the virtualization driver
  2016-05-30 00:12:07.306 TRACE nova.virt.driver Traceback (most recent call last):
  2016-05-30 00:12:07.306 TRACE nova.virt.driver   File "/opt/stack/nova/nova/virt/driver.py", line 1624, in load_compute_driver
  2016-05-30 00:12:07.306 TRACE nova.virt.driver     virtapi)
  2016-05-30 00:12:07.306 TRACE nova.virt.driver   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 44, in import_object
  2016-05-30 00:12:07.306 TRACE nova.virt.driver     return import_class(import_str)(*args, **kwargs)
  2016-05-30 00:12:07.306 TRACE nova.virt.driver   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in import_class
  2016-05-30 00:12:07.306 TRACE nova.virt.driver     __import__(mod_str)
  2016-05-30 00:12:07.306 TRACE nova.virt.driver ImportError: No module named novadocker.virt.docker.driver
  2016-05-30 00:12:07.306 TRACE nova.virt.driver
  n-cpu failed to start

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1586950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587401] Re: Helper method to change status of port to abnormal state is needed in ml2.

2016-06-07 Thread Miguel Angel Ajo
I agree. In some cases there could even be errors: an l2 agent
extension may eventually be unable to handle a setting, and while
the port would be working, some of its characteristics would not
have been applied.

** Changed in: neutron
   Status: New => Opinion

** Changed in: neutron
   Status: Opinion => Confirmed

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1587401

Title:
  Helper method to change status of port to abnormal state is needed in
  ml2.

Status in neutron:
  Confirmed

Bug description:
  Some mechanism drivers cooperate with another backend (an SDN controller).
  In that case, a driver may want to change the status of a port so that
  the user can recognize that processing for the port failed when a call to
  the backend fails.

  However, there is currently no helper function in PortContext to change
  the status of a port to an abnormal state.
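
  A hypothetical sketch of what drivers are forced to do today instead
  (BackendError and _call_backend are stand-ins, not real names; note the
  reliance on the private _plugin_context attribute, which is exactly what
  a public helper would avoid):

     from neutron import manager

     class BackendError(Exception):
         pass

     class MyMechanismDriver(object):
         def create_port_postcommit(self, context):
             try:
                 self._call_backend(context.current)
             except BackendError:
                 plugin = manager.NeutronManager.get_plugin()
                 plugin.update_port_status(context._plugin_context,
                                           context.current['id'],
                                           'ERROR')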

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1587401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590041] [NEW] DVR: regression with router rescheduling

2016-06-07 Thread Oleg Bondarev
Public bug reported:

L3 agent may not fully process a dvr router being rescheduled to it, which 
leads to loss of external connectivity.
The reason is that with commit 9dc70ed77e055677a4bd3257a0e9e24239ed4cce dvr 
edge router now creates the snat_namespace object in its constructor, while 
some logic in the module still checks for the existence of this object: for 
example, external_gateway_updated() will not fully process the router if the 
snat_namespace object exists.

The proposal is to revert commit
9dc70ed77e055677a4bd3257a0e9e24239ed4cce and then make another attempt to
fix bug 1557909.
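
A heavily abridged sketch of the conflict (names loosely follow
neutron/agent/l3/dvr_edge_router.py):

   class SnatNamespace(object):
       pass

   class DvrEdgeRouter(object):
       def __init__(self):
           # since 9dc70ed7 the object always exists after construction
           self.snat_namespace = SnatNamespace()

       def external_gateway_updated(self, ex_gw_port):
           # pre-existing logic reads "object exists" as "namespace is
           # already fully set up", so the setup path is never taken
           if self.snat_namespace:
               return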

** Affects: neutron
 Importance: High
 Assignee: Oleg Bondarev (obondarev)
 Status: In Progress


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590041

Title:
  DVR: regression with router rescheduling

Status in neutron:
  In Progress

Bug description:
  L3 agent may not fully process a dvr router being rescheduled to it, which 
leads to loss of external connectivity.
  The reason is that with commit 9dc70ed77e055677a4bd3257a0e9e24239ed4cce dvr 
edge router now creates the snat_namespace object in its constructor, while 
some logic in the module still checks for the existence of this object: for 
example, external_gateway_updated() will not fully process the router if the 
snat_namespace object exists.

  The proposal is to revert commit
  9dc70ed77e055677a4bd3257a0e9e24239ed4cce and then make another attempt
  to fix bug 1557909.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590042] [NEW] wsgi and eventlet logs flooding on large deployments

2016-06-07 Thread Kam Nasim
Public bug reported:

Based on observation in one of our larger lab deployments, keystone
/keystone-all.log is seeing on average multiple log entries per second. These
are logged at INFO verbosity which is the default logging level in
keystone.conf

Most frequently seen logs:
   INFO keystone.common.wsgi [-] GET /auth/tokens?

   INFO eventlet.wsgi.server [-] <ip> - - [<timestamp>] "GET /v<N>/auth/tokens HTTP/<N>" <status> <bytes> <time>

Other high runner logs:
   INFO keystone.common.wsgi [-] POST /tokens?POST /tokens

   INFO eventlet.wsgi.server [-] <ip> - - [<timestamp>] "POST /v<N>/tokens HTTP/<N>" <status> <bytes> <time>

   INFO keystone.common.wsgi [-] GET /?
   INFO eventlet.wsgi.server [-] <ip> - - [<timestamp>] "GET / HTTP/<N>" <status> <bytes>


These log entries cause frequent churn in keystone-all.log.

We have a small fix in place to move GET/POST logs from
keystone.common.wsgi and eventlet.wsgi.server to debug logs and to
enable these logs when the verbosity is set to debug in keystone.conf
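
For comparison, oslo.log can achieve a similar effect without code changes
by raising the level of the noisy loggers in keystone.conf; a sketch (the
entries below should be appended to the existing default_log_levels list,
not replace it):

   [DEFAULT]
   default_log_levels = keystone.common.wsgi=WARN,eventlet.wsgi.server=WARN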

** Affects: keystone
 Importance: Undecided
 Assignee: Kam Nasim (knasim-wrs)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Kam Nasim (knasim-wrs)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590042

Title:
  wsgi and eventlet logs flooding on large deployments

Status in OpenStack Identity (keystone):
  New

Bug description:
  Based on observation in one of our larger lab deployments, keystone
  /keystone-all.log is seeing on average multiple log entries per second.
  These are logged at INFO verbosity which is the default logging level
  in keystone.conf

  Most frequently seen logs:
     INFO keystone.common.wsgi [-] GET /auth/tokens?

     INFO eventlet.wsgi.server [-] <ip> - - [<timestamp>] "GET /v<N>/auth/tokens HTTP/<N>" <status> <bytes> <time>

  Other high runner logs:
     INFO keystone.common.wsgi [-] POST /tokens?POST /tokens

     INFO eventlet.wsgi.server [-] <ip> - - [<timestamp>] "POST /v<N>/tokens HTTP/<N>" <status> <bytes> <time>

     INFO keystone.common.wsgi [-] GET /?
     INFO eventlet.wsgi.server [-] <ip> - - [<timestamp>] "GET / HTTP/<N>" <status> <bytes>

  
  These log entries cause frequent churn in keystone-all.log.

  We have a small fix in place to move GET/POST logs from
  keystone.common.wsgi and eventlet.wsgi.server to debug logs and to
  enable these logs when the verbosity is set to debug in keystone.conf

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590035] [NEW] Adds RemoteFX support to the Hyper-V driver

2016-06-07 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/42529
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit a39710244a8f26b0d9d80bffa41c04c84b133f17
Author: Adelina Tuvenie 
Date:   Tue Dec 22 17:41:55 2015 +0200

Adds RemoteFX support to the Hyper-V driver

Microsoft RemoteFX enhances the visual experience in RDP connections,
including providing access to virtualized instances of a physical GPU to
multiple guests running on Hyper-V.

In order to use RemoteFX in Hyper-V 2012 R2, one or more DirectX 11 capable
display adapters must be present and the RDS-Virtualization server feature
must be installed.

This patch enables RemoteFX on Hyper-V Server / Windows Server 2012 R2
and above.

Co-Authored-By: Adelina Tuvenie 
Co-Authored-By: Claudiu Belu 

DocImpact

Implements: blueprint hyper-v-remotefx

Change-Id: I91fabd5bce2564d48957e8aec1e50ff2625e5747
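
For doc writers, a minimal nova.conf sketch of how this is expected to be
switched on (the option name is taken from the blueprint and should be
treated as an assumption until verified against the merged code):

   [hyperv]
   # requires the RDS-Virtualization feature and at least one
   # DirectX 11 capable display adapter on the compute host
   enable_remotefx = True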

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590035

Title:
  Adds RemoteFX support to the Hyper-V driver

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/42529
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit a39710244a8f26b0d9d80bffa41c04c84b133f17
  Author: Adelina Tuvenie 
  Date:   Tue Dec 22 17:41:55 2015 +0200

  Adds RemoteFX support to the Hyper-V driver
  
  Microsoft RemoteFX enhances the visual experience in RDP connections,
  including providing access to virtualized instances of a physical GPU to
  multiple guests running on Hyper-V.
  
  In order to use RemoteFX in Hyper-V 2012 R2, one or more DirectX 11 
capable
  display adapters must be present and the RDS-Virtualization server feature
  must be installed.
  
  This patch enables RemoteFX on Hyper-V Server / Windows Server 2012 R2
  and above.
  
  Co-Authored-By: Adelina Tuvenie 
  Co-Authored-By: Claudiu Belu 
  
  DocImpact
  
  Implements: blueprint hyper-v-remotefx
  
  Change-Id: I91fabd5bce2564d48957e8aec1e50ff2625e5747

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590035/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434103] Re: SQL schema downgrades are no longer supported

2016-06-07 Thread Sergey Belous
** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
 Assignee: (unassigned) => Sergey Belous (sbelous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434103

Title:
  SQL schema downgrades are no longer supported

Status in Ceilometer:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in neutron:
  Fix Released
Status in octavia:
  In Progress
Status in Sahara:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Triaged

Bug description:
  Approved cross-project spec: https://review.openstack.org/152337

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1434103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588305] Re: config reserved_huge_pages, nova-compute start failed

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324379
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8093b764a9430a4e1c8cf2d73068f08117fec90e
Submitter: Jenkins
Branch:master

commit 8093b764a9430a4e1c8cf2d73068f08117fec90e
Author: zte-hanrong 
Date:   Thu Jun 2 19:23:46 2016 +0800

Fix nova-compute start failed when reserved_huge_pages has value.

The problem is due to this change:
https://review.openstack.org/#/c/292499/20/nova/conf/virt.py

The code item_type=types.Dict is not correct; modify it to
item_type=types.Dict().

Modify the usage discription of config of reserved_huge_pages.
Usage of oslo_config.types.Dict is key:value pairs separated by
commas.

Add unit test because this is a complicated option that might not work
like people expect.

Closes-Bug:#1588305
Change-Id: I06490866f24617cf99764ede73a1938c2d7b7b5c
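
A minimal sketch of the difference (oslo.config calls the item type on
each raw value, so passing the Dict class makes the raw string land in
Dict's value_type argument):

   from oslo_config import cfg, types

   # broken: Dict (the class) gets called as Dict('node:0,...'), so the
   # string becomes value_type and "value_type must be callable" is raised
   # bad = cfg.MultiOpt('reserved_huge_pages', item_type=types.Dict)

   # fixed: a Dict *instance* parses each entry as comma-separated
   # key:value pairs, e.g. node:0,size:2048,count:4
   good = cfg.MultiOpt('reserved_huge_pages', item_type=types.Dict())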


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588305

Title:
  config reserved_huge_pages, nova-compute start failed

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I set this value in nova.conf
  reserved_huge_pages = node=0,size=2048,count=4

  
  nova-compute restart failed.

  Log is as follow:

  2016-06-02 18:56:04.521 CRITICAL nova [req-
  e9dd76d9-4a4b-4571-bb88-78d751f74274 None None] TypeError: value_type
  must be callable

  2016-06-02 18:56:04.521 TRACE nova Traceback (most recent call last):
  2016-06-02 18:56:04.521 TRACE nova File "/usr/bin/nova-compute", line 10, in 

  2016-06-02 18:56:04.521 TRACE nova sys.exit(main())
  2016-06-02 18:56:04.521 TRACE nova File 
"/opt/stack/nova/nova/cmd/compute.py", line 76, in main
  2016-06-02 18:56:04.521 TRACE nova service.wait()
  2016-06-02 18:56:04.521 TRACE nova File "/opt/stack/nova/nova/service.py", 
line 491, in wait
  2016-06-02 18:56:04.521 TRACE nova _launcher.wait()
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 309, in wait
  2016-06-02 18:56:04.521 TRACE nova status, signo = 
self._wait_for_exit_or_signal()
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 284, in 
_wait_for_exit_or_signal
  2016-06-02 18:56:04.521 TRACE nova self.conf.log_opt_values(LOG, 
logging.DEBUG)
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2525, in 
log_opt_values
  2016-06-02 18:56:04.521 TRACE nova _sanitize(opt, getattr(group_attr, 
opt_name)))
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2946, in __getattr__
  2016-06-02 18:56:04.521 TRACE nova return self._conf._get(name, self._group)
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2567, in _get
  2016-06-02 18:56:04.521 TRACE nova value = self._do_get(name, group, 
namespace)
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2604, in _do_get
  2016-06-02 18:56:04.521 TRACE nova return 
convert(opt._get_from_namespace(namespace, group_name))
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2595, in convert
  2016-06-02 18:56:04.521 TRACE nova self._substitute(value, group, namespace), 
opt)
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2671, in 
_convert_value
  2016-06-02 18:56:04.521 TRACE nova return [opt.type(v) for v in value]
  2016-06-02 18:56:04.521 TRACE nova File 
"/usr/lib/python2.7/site-packages/oslo_config/types.py", line 478, in __init__
  2016-06-02 18:56:04.521 TRACE nova raise TypeError('value_type must be 
callable')
  2016-06-02 18:56:04.521 TRACE nova TypeError: value_type must be callable
  2016-06-02 18:56:04.521 TRACE nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1588305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590028] [NEW] Angular LI Required Icon isn't Brand Primary Color

2016-06-07 Thread Diana Whitten
Public bug reported:

https://i.imgur.com/KmI0NEw.png

** Affects: horizon
 Importance: Medium
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590028

Title:
  Angular LI Required Icon isn't Brand Primary Color

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  https://i.imgur.com/KmI0NEw.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589998] [NEW] cloud-init runs in single user mode

2016-06-07 Thread David Quattlebaum
Public bug reported:

When I choose single user mode by editing the boot params and adding
"single", I would expect cloud-init and all the other cloud* services to
see that fact and not run.

I consider this a bug.

Is there a workaround to make it not run?
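
One possible workaround, assuming a systemd-based image (unit names can
vary by distro and cloud-init version), is to mask the units in the image
or from the recovery shell:

   systemctl mask cloud-init-local cloud-init cloud-config cloud-final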

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1589998

Title:
  cloud-init runs in single user mode

Status in cloud-init:
  New

Bug description:
  When I choose single user mode by editing the boot params and adding
  "single", I would expect cloud-init and all the other cloud* services to
  see that fact and not run.

  I consider this a bug.

  Is there a workaround to make it not run?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1589998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576046] Re: test_create_router_port_and_fail_create_postcommit makes networking-odl py27 and py34 fail

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/310682
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ee9f86c3f8bde27abb82bd474ad1d746c94e3f96
Submitter: Jenkins
Branch:master

commit ee9f86c3f8bde27abb82bd474ad1d746c94e3f96
Author: Rui Zang 
Date:   Thu Apr 28 14:14:00 2016 +0800

Mock mechanism manager instead of the test driver

test_create_router_port_and_fail_create_postcommit mocks
the test driver's create_port_postcommit method. But the
mechanism drivers are not always ['logger', 'test'] in all
circumstances. For networking-odl, the mechanism driver is
opendaylight. So the mocking does not take effect in
networking-odl and fails this test case. Other stadium
ML2 drivers may have the same issue.

Change-Id: Iaeaa1ea177f6fdcc81f47d505855c5f699971a6b
Closes-Bug: #1576046
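
A minimal sketch of the approach (assumes the mock library and neutron's
ml2 test fixtures):

   import mock

   from neutron.plugins.ml2 import managers

   # patch the mechanism *manager*, which is always in the call path,
   # rather than one named driver that a stadium project may not load
   with mock.patch.object(managers.MechanismManager,
                          'create_port_postcommit',
                          side_effect=Exception('postcommit failed')):
       pass  # exercise router-port creation and assert cleanup here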


** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1576046

Title:
  test_create_router_port_and_fail_create_postcommit  makes networking-
  odl py27 and py34 fail

Status in neutron:
  Fix Released

Bug description:
  New test case introduced to neutron lately by
   
https://github.com/openstack/neutron/commit/cc3ba38641e5a414ec4408b13ecb7bd80ea343ff
   Caused networking-odl py27/py34 gate checking failure, as shown in  
  https://review.openstack.org/#/c/309847/

  The first hunch is the mock for  create_port_postcommit of the test
  driver does not take effect since networking-odl over-writes
  self._mechanism_drivers to ['opendaylight']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1576046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576048] Re: "live_migration" call in Liberty doesn't have backwards compatibility with Kilo

2016-06-07 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Medium => High

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/liberty
   Importance: Undecided => High

** Changed in: nova/mitaka
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576048

Title:
  "live_migration" call in Liberty doesn't have backwards compatibility
  with Kilo

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  "live_migration" call in Liberty doesn't have backwards compatibility
  with Kilo.

  Liberty control plane and Kilo compute nodes.

  Doing live-migration and getting the error below on the compute node

  TypeError: live_migration() got an unexpected keyword argument
  'migration'

  Workaround(On controller): http://paste.openstack.org/show/495612/

  Environment
  ==
  Libvirt+KVM, Ceph for VMs
  Liberty - Mirantis OpenStack 8.0 (2015.2)
  Kilo - 2015.1.3 tag

  Steps to reproduce
  ===
  1) Install Liberty control plane (api, conductor, scheduler, etc.)
  2) Install Kilo compute
  3) Add to nova.conf on controller
[upgrade_levels]
compute=kilo
  4) Try "nova live-migration VM"

  Expected result
  =
  Migration will succeed

  Actual result
  ==
  Traceback on compute node
  http://paste.openstack.org/show/495541/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1576048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588003] Re: Skip host to guest CPU compatibility check for emulated (QEMU "TCG" mode) guests during live migration

2016-06-07 Thread Alan Pevec
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588003

Title:
  Skip host to guest CPU compatibility check for emulated (QEMU "TCG"
  mode) guests  during live migration

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed

Bug description:
  The _compare_cpu() method of Nova's libvirt driver performs guest vCPU 
  model to destination host CPU model comparison (during live migration) 
  even in the case of emulated (QEMU "TCG" mode) guests, where the CPU
  instructions are emulated completely in software, and no hardware
  acceleration, such as KVM is involved.

  From nova/virt/libvirt/driver.py:

 [...]
 5464 def _compare_cpu(self, guest_cpu, host_cpu_str, instance):
 5465 """Check the host is compatible with the requested CPU
 [...][...]
 5481 if CONF.libvirt.virt_type not in ['qemu', 'kvm']:
 5482 return
 5483

  The proposal is to also skip the comparison for the 'qemu' case above.
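
  In effect the proposal reduces the guard to KVM only; a sketch of the
  changed fragment:

     if CONF.libvirt.virt_type not in ['kvm']:
         return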

  Fix for master branch is here:

  https://review.openstack.org/#/c/323467/ -- 
  libvirt: Skip CPU compatibility check for emulated guests

  
  This bug is for stable branch backports: Mitaka and Liberty.

  [Thanks: Daniel P. Berrange for the pointer.]

  
  Related context and references
  --

  (a) This upstream discussion thread where using the custom CPU model 
  ("gate64") is causing live migration CI jobs to fail.

  http://lists.openstack.org/pipermail/openstack-dev/2016-May/095811.html 
  -- "[gate] [nova] live migration, libvirt 1.3, and the gate"

  (b) Gate DevStack change to avoid setting the custom CPU model in 
  nova.conf

  https://review.openstack.org/#/c/320925/4 -- don't set libvirt 
  cpu_model

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1588003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589969] [NEW] [qos][postgresql] neutron qos-bandwidth-limit-rule-create failed

2016-06-07 Thread Dongcan Ye
Public bug reported:

Neutron version is Liberty and db backend is PostgreSQL.

Using following command to create qos ratelimit:
$ neutron qos-policy-create bw-limiter
$ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 \
  --max-burst-kbps 300

ERROR log in neutron server:
http://paste.openstack.org/show/508633/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589969

Title:
  [qos][postgresql] neutron qos-bandwidth-limit-rule-create failed

Status in neutron:
  New

Bug description:
  Neutron version is Liberty and db backend is PostgreSQL.

  Using following command to create qos ratelimit:
  $ neutron qos-policy-create bw-limiter
  $ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 \
--max-burst-kbps 300

  ERROR log in neutron server:
  http://paste.openstack.org/show/508633/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1589969/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589967] [NEW] Ipsec site connection form should support id when item has no name

2016-06-07 Thread guoshan
Public bug reported:

If an ipsec policy/ike policy/vpn service is created without a name, the
dashboard table shows its id instead of a name.
However, if I want to use one of the above (only an id, no name) to create
an Ipsec site connection, the form select shows a blank entry.
It should fall back to the id.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1589967

Title:
  Ipsec site connection form should support id when item has no name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If an ipsec policy/ike policy/vpn service is created without a name, the
  dashboard table shows its id instead of a name.
  However, if I want to use one of the above (only an id, no name) to create
  an Ipsec site connection, the form select shows a blank entry.
  It should fall back to the id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1589967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589960] [NEW] avoid one unnecessary _get_power_state call

2016-06-07 Thread jichenjc
Public bug reported:

https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L877

calls a function that already runs _get_power_state internally, at
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1042

then we call it again

https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L878


Actually there is no state change inside the _retry_reboot function,
so we can reuse the state in a variable and avoid a mock in the test.

** Affects: nova
 Importance: Low
 Assignee: jichenjc (jichenjc)
 Status: In Progress


** Tags: compute

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

** Tags added: compute

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589960

Title:
  avoid one unnecessary _get_power_state call

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L877

  calls a function that already runs _get_power_state internally, at
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1042

  then we call it again

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L878

  
  Actually there is no state change inside the _retry_reboot function,
  so we can reuse the state in a variable and avoid a mock in the test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583754] Re: Style: Default Theme: Should Support Dropup Menus

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/318932
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2679c3864a2520e312c9be9113a5b2ecf13a839f
Submitter: Jenkins
Branch:master

commit 2679c3864a2520e312c9be9113a5b2ecf13a839f
Author: Diana Whitten 
Date:   Thu May 19 12:10:22 2016 -0700

Default theme lacks support for dropup menus

The default theme assumes ALL dropdowns are drop'down' menus, however
Bootstrap supports dropup menus as well.  This simplies the logic in
the theme's dropdown.scss file in order to support all Bootstrap
types.

Also, added the dropup to the theme preview page for ease of testing.

Closes-bug: #1583754
Change-Id: Ib05bc59c35371dc8b2291d4a0522cf4c52047813


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583754

Title:
  Style: Default Theme: Should Support Dropup Menus

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Bootstrap supports dropup menus in addition to dropdown menus, seen
  here: http://getbootstrap.com/components/#dropdowns

  Right now, our little dropdown arrow only works for drop'down' menus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588378] Re: Cancelled live migration are reported as in progress

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324615
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=58cb7e56dbc9e42cd742c6ad9ffd1c62bc2ef0b7
Submitter: Jenkins
Branch:master

commit 58cb7e56dbc9e42cd742c6ad9ffd1c62bc2ef0b7
Author: Andrea Rosa 
Date:   Thu Jun 2 16:27:15 2016 +0100

Cancelled live migration are not in progress

Since we have introduced the new API for aborting a running live
migration we have introduced a new state called "cancelled" which is
applied to all the aborted live migration job in the libvirt driver.
This new status is not filtered by the sqlalchemy query used to get the
list of the all migration in progress for host and node.

Change-Id: I219591297f73c4bb8b1d97aaf298681c0421d1ae
Closes-bug: #1588378
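
A sketch of the query-side change (the surrounding function is
migration_get_in_progress_by_host_and_node; the status list is abridged
here and may not match the tree exactly):

   query = query.filter(~models.Migration.status.in_(
       ['confirmed', 'reverted', 'error', 'failed', 'completed',
        'cancelled']))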


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588378

Title:
  Cancelled live migration are reported as in progress

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  With the introduction of the API for aborting a running live-migration 
(https://review.openstack.org/277971) we have introduced a new status for the 
aborted live migration jobs. This new status called "cancelled" is not filtered 
by the sqlalchemy query used to return the list of migration in progress: 
  
https://github.com/openstack/nova/blob/87dc738763d6a7a10409e14b878f5cdd39ba805e/nova/db/sqlalchemy/api.py#L4851

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1588378/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461459] Re: Allow disabling the evacuate cleanup mechanism in compute manager

2016-06-07 Thread Alexandra Settle
As suggested, the peril warning was successfully added in the docs but only 
noted as a partial bug.
I think this suffices as a fix.

** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461459

Title:
  Allow disabling the evacuate cleanup mechanism in compute manager

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/174779
  commit 6f1f9dbc211356a3d0e2d46d3a984d7ceee79ca6
  Author: Tony Breeds 
  Date:   Tue Jan 27 11:17:54 2015 -0800

  Allow disabling the evacuate cleanup mechanism in compute manager
  
  This mechanism attempts to destroy any locally-running instances on
  startup if instance.host != self.host. The assumption is that the
  instance has been evacuated and is safely running elsewhere. This is
  a dangerous assumption to make, so this patch adds a configuration
  variable to disable this behavior if it's not desired.
  
  Note that disabling it may have implications for the case where
  instances *were* evacuated, given potential shared resources.
  To counter that problem, this patch also makes _init_instance()
  skip initialization of the instance if it appears to be owned
  by another host, logging a prominent warning in that case.
  
  As a result, if you have destroy_after_evacuate=False and you start
  a nova compute with an incorrect hostname, or run it twice from
  another host, then the worst that will happen is you get log
  warnings about the instances on the host being ignored. This should
  be an indication that something is wrong, but still allow for
  fixing it without any loss. If the configuration option is disabled
  and a legitimate evacuation does occur, simply enabling it and then
  restarting the compute service will cause the cleanup to occur.
  
  This is added to the workarounds config group because it is really
  only relevant while evacuate is fundamentally broken in this way.
  It needs to be refactored to be more robust, and once that is done,
  this should be able to go away.
  
  Conflicts:
  nova/compute/manager.py
  nova/tests/unit/compute/test_compute.py
  nova/tests/unit/compute/test_compute_mgr.py
  nova/utils.py
  
  NOTE: In nova/utils.py a new section has been introduced but
  only the option addessed by this backport has been included.
  
  DocImpact: New configuration option, and peril warning
  Partial-Bug: #1419785
  (cherry picked from commit 922148ac45c5a70da8969815b4f47e3c758d6974)
  
  -- squashed with commit --
  
  Create a 'workarounds' config group.
  
  This group is for very specific reasons.
  
  If you're:
  - Working around an issue in a system tool (e.g. libvirt or qemu) where 
the fix
is in flight/discussed in that community.
  - The tool can be/is fixed in some distributions and rather than patch 
the code
those distributions can trivially set a config option to get the 
"correct"
behavior.
  This is a good place for your workaround.
  
  (cherry picked from commit b1689b58409ab97ef64b8cec2ba3773aacca7ac5)
  
  --
  
  Change-Id: Ib9a3c72c096822dd5c65c905117ae14994c73e99
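
  For reference, a nova.conf sketch of the knob this change introduces:

     [workarounds]
     # skip the startup cleanup; instances whose instance.host no longer
     # matches this host are ignored with a prominent warning instead of
     # being destroyed
     destroy_after_evacuate = False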

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536226] Re: Not all .po files compiled

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319260
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=16d0cdba47a0ad976f1500cd91c420860c3ad149
Submitter: Jenkins
Branch:master

commit 16d0cdba47a0ad976f1500cd91c420860c3ad149
Author: Sven Anderson 
Date:   Fri May 20 15:54:27 2016 +0200

Let setup.py compile_catalog process all language files

Two years ago the translation files have been split into several
files, separating the log messages of different log levels from each
other, like X.pot, X-log-warning.pot, X-log-info.pot, and so on.
However, the setup.py command `compile_catalogs`, that comes from the
babel package and compiles the corresponding .po files into .mo
files, only supported one file per python package.  This means that
during packaging `compile_catalogs` never compiled the X-log-*.po
files, so the corresponding translations were always missing.

Since babel 2.3 the domain can be set to a space separated list of
domains.  This change adds the additional log level files to the
domain list.

The obsolete check that .po and .pot files are valid is removed from
tox.ini.

Change-Id: I1f0bfb181e2b84ac6dd0ce61881cd2cc4400bdcb
Closes-Bug: #1536226
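
The resulting setup.cfg shape looks roughly like this (a sketch; the exact
domain names follow the split described above and should be checked against
the repository):

   [compile_catalog]
   directory = keystone/locale
   domain = keystone keystone-log-critical keystone-log-error keystone-log-info keystone-log-warning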


** Changed in: keystone
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536226

Title:
  Not all .po files compiled

Status in Cinder:
  New
Status in Glance:
  In Progress
Status in heat:
  New
Status in OpenStack Identity (keystone):
  Fix Released
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack i18n:
  New

Bug description:
  python setup.py compile_catalog only compiles one .po file per
  language to a .mo file. By default  is the project
  name, that is nova.po. This means all other nova-log-*.po files are
  never compiled. The only way to get setup.py compile the other files
  is calling it several times with different domains set, like for
  instance `python setup.py --domain nova-log-info` and so on. Since
  this is not usual, it can be assumed that the usual packages don't
  contain all the .mo files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1536226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589916] [NEW] "glance location-add" failed when url is "cinder://volume-id"

2016-06-07 Thread yuyafei
Public bug reported:

The version is mitaka.

Glance Configuration:
show_image_direct_url=True
show_multiple_locations=True.

Steps:
1. Upload an image (cirros-0.3.1-x86_64-disk.img, 
f71dff58-36ca-46ea-8258-0f3c9a4cd747);
2. Create a volume (id: 123fb906-bed5-4b55-8a82-1f2e6bed424b) from the
image (the backend is fujitsu; other settings are unchanged);
3. Add a location to the image(url:http), success;
#glance location-add --url 
http://10.43.176.8/images/cirros-0.3.1-x86_64-disk.img  
f71dff58-36ca-46ea-8258-0f3c9a4cd747
4. Add a location to the image (url:cinder://volume-id), failed;
#glance location-add --url cinder://123fb906-bed5-4b55-8a82-1f2e6bed424b  
f71dff58-36ca-46ea-8258-0f3c9a4cd747
400 Bad Request
Invalid location
(HTTP 400)

The glance-api log is:
2016-06-08 01:38:04.265 DEBUG eventlet.wsgi.server [-] (30577) accepted 
('10.43.203.135', 58926) from (pid=30577) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:868
2016-06-08 01:38:04.267 DEBUG glance.api.middleware.version_negotiation [-] 
Determining version of request: GET /versions Accept: */* from (pid=30577) 
process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:46
2016-06-08 01:38:04.269 INFO eventlet.wsgi.server [-] 10.43.203.135 - - 
[08/Jun/2016 01:38:04] "GET /versions HTTP/1.1" 200 793 0.001778
2016-06-08 01:38:04.373 DEBUG eventlet.wsgi.server [-] (30577) accepted 
('10.43.203.135', 58929) from (pid=30577) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:868
2016-06-08 01:38:04.374 DEBUG glance.api.middleware.version_negotiation [-] 
Determining version of request: PATCH 
/v2/images/f71dff58-36ca-46ea-8258-0f3c9a4cd747 Accept: */* from (pid=30577) 
process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:46
2016-06-08 01:38:04.375 DEBUG glance.api.middleware.version_negotiation [-] 
Using url versioning from (pid=30577) process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:58
2016-06-08 01:38:04.375 DEBUG glance.api.middleware.version_negotiation [-] 
Matched version: v2 from (pid=30577) process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:70
2016-06-08 01:38:04.376 DEBUG glance.api.middleware.version_negotiation [-] new 
path /v2/images/f71dff58-36ca-46ea-8258-0f3c9a4cd747 from (pid=30577) 
process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:71
2016-06-08 01:38:04.604 INFO eventlet.wsgi.server 
[req-fe0ec689-75f0-4f11-b7ff-692ec84c3a2d 346ce385360c43588f48349ed8f4159e 
97330b92c2144c0ea9b8826038d3abe3] 10.43.203.135 - - [08/Jun/2016 01:38:04] 
"PATCH /v2/images/f71dff58-36ca-46ea-8258-0f3c9a4cd747 HTTP/1.1" 400 254 
0.229389
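
One thing worth ruling out (an assumption, not confirmed by this report):
cinder:// locations are only accepted when the cinder store is enabled in
glance-api.conf, e.g.:

   [glance_store]
   stores = file,http,cinder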

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- "glance location-add" failed when url is cinder
+ "glance location-add" failed when url is "cinder://volume-id"

** Description changed:

  The version is mitaka.
  
  Glance Configuration:
  show_image_direct_url=True
  show_multiple_locations=True.
  
  Steps:
  1. Upload a image (cirros-0.3.1-x86_64-disk.img, 
f71dff58-36ca-46ea-8258-0f3c9a4cd747);
  2. Create a volume(id:123fb906-bed5-4b55-8a82-1f2e6bed424b) from the 
image(backend is fujitsu, others same);
  3. Add a location to the image(url:http), success;
  #glance location-add --url 
http://10.43.176.8/images/cirros-0.3.1-x86_64-disk.img  
f71dff58-36ca-46ea-8258-0f3c9a4cd747
- 4. Add a location to the image(url:cinder) failed;
+ 4. Add a location to the image(url:cinder//volume-id) failed;
  #glance location-add --url cinder://123fb906-bed5-4b55-8a82-1f2e6bed424b  
f71dff58-36ca-46ea-8258-0f3c9a4cd747
  400 Bad Request
  Invalid location
  (HTTP 400)
  
  The glance-api log is:
  2016-06-08 01:38:04.265 DEBUG eventlet.wsgi.server [-] (30577) accepted 
('10.43.203.135', 58926) from (pid=30577) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:868
  2016-06-08 01:38:04.267 DEBUG glance.api.middleware.version_negotiation [-] 
Determining version of request: GET /versions Accept: */* from (pid=30577) 
process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:46
  2016-06-08 01:38:04.269 INFO eventlet.wsgi.server [-] 10.43.203.135 - - 
[08/Jun/2016 01:38:04] "GET /versions HTTP/1.1" 200 793 0.001778
  2016-06-08 01:38:04.373 DEBUG eventlet.wsgi.server [-] (30577) accepted 
('10.43.203.135', 58929) from (pid=30577) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:868
  2016-06-08 01:38:04.374 DEBUG glance.api.middleware.version_negotiation [-] 
Determining version of request: PATCH 
/v2/images/f71dff58-36ca-46ea-8258-0f3c9a4cd747 Accept: */* from (pid=30577) 
process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:46
  2016-06-08 01:38:04.375 DEBUG glance.api.middleware.version_negotiation [-] 
Using url versioning from (pid=30577) process_request 
/opt/stack/glance/glance/api/middleware/version_negotiation.py:58
  2016-06-08 01:38:04.375 DEBUG 

[Yahoo-eng-team] [Bug 1589521] Re: AttributeError: type object 'BaseNetworkTest' has no attribute '_try_delete_resource'

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326150
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=a59df892d3f90f97c967a46b6c18e595d60a9b85
Submitter: Jenkins
Branch:master

commit a59df892d3f90f97c967a46b6c18e595d60a9b85
Author: Ken'ichi Ohmichi 
Date:   Mon Jun 6 14:44:51 2016 -0700

Use call_and_ignore_notfound_exc directly

This patch makes fwaas_client use call_and_ignore_notfound_exc
directly because the client only uses it.

Closes-Bug: #1589521
Change-Id: I3abd9049560ee507b3610ab482c697a239f13a3b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589521

Title:
  AttributeError: type object 'BaseNetworkTest' has no attribute
  '_try_delete_resource'

Status in neutron:
  Fix Released
Status in tap-as-a-service:
  In Progress

Bug description:
  Neutron API job broken by a change in tempest:
  https://review.openstack.org/#/c/277907/

  ...and fwaas tempest plugin using internal symbols from tempest.

  Logs: http://logs.openstack.org/41/325141/3/gate/gate-neutron-dsvm-
  api/281611d/console.html#_2016-06-06_12_27_35_286

  2016-06-06 12:27:35.198 | Failed to import test module: 
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions
  2016-06-06 12:27:35.237 | Traceback (most recent call last):
  2016-06-06 12:27:35.242 |   File "/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in _find_test_path
  2016-06-06 12:27:35.245 | module = self._get_module_from_name(name)
  Non-zero exit code (2) from test listing.
  error: testr failed (3)
  2016-06-06 12:27:35.280 |   File 
"/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in 
_get_module_from_name
  2016-06-06 12:27:35.281 | __import__(name)
  2016-06-06 12:27:35.285 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 23, in 
  2016-06-06 12:27:35.286 | from 
neutron_fwaas.tests.tempest_plugin.tests.api import base
  2016-06-06 12:27:35.286 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/base.py",
 line 21, in 
  2016-06-06 12:27:35.286 | class 
BaseFWaaSTest(fwaas_client.FWaaSClientMixin, base.BaseNetworkTest):
  2016-06-06 12:27:35.286 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/base.py",
 line 22, in BaseFWaaSTest
  2016-06-06 12:27:35.286 | _delete_wrapper = 
base.BaseNetworkTest._try_delete_resource
  2016-06-06 12:27:35.286 | AttributeError: type object 'BaseNetworkTest' has 
no attribute '_try_delete_resource'
  2016-06-06 12:27:35.286 | 
  2016-06-06 12:27:35.286 | Failed to import test module: 
neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas
  2016-06-06 12:27:35.286 | Traceback (most recent call last):
  2016-06-06 12:27:35.287 |   File 
"/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in 
_find_test_path
  2016-06-06 12:27:35.287 | module = self._get_module_from_name(name)
  2016-06-06 12:27:35.287 |   File 
"/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in 
_get_module_from_name
  2016-06-06 12:27:35.287 | __import__(name)
  2016-06-06 12:27:35.287 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/scenario/test_fwaas.py",
 line 21, in 
  2016-06-06 12:27:35.287 | from 
neutron_fwaas.tests.tempest_plugin.tests.scenario import base
  2016-06-06 12:27:35.287 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/scenario/base.py",
 line 27, in 
  2016-06-06 12:27:35.287 | manager.NetworkScenarioTest):
  2016-06-06 12:27:35.287 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/scenario/base.py",
 line 28, in FWaaSScenarioTest
  2016-06-06 12:27:35.287 | _delete_wrapper = 
manager.NetworkScenarioTest.delete_wrapper
  2016-06-06 12:27:35.287 | AttributeError: type object 'NetworkScenarioTest' 
has no attribute 'delete_wrapper'
  2016-06-06 12:27:35.287 | The test run didn't actually run any tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1589521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589888] [NEW] Ipsec site connection should support id

2016-06-07 Thread guoshan
Public bug reported:

If a VPN service is created without a name, it still shows in the dashboard.
It is then impossible to select this vpn in the Ipsec site connection form.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1589888

Title:
  Ipsec site connection should support id

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If a VPN service is created without a name, it still shows in the dashboard.
  It is then impossible to select this vpn in the Ipsec site connection form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1589888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589891] [NEW] OVS agent dies if neutron-server is unavailable

2016-06-07 Thread Dmitry Mescheryakov
Public bug reported:

On Mitaka we observed the following stacktrace in OVS agent, after which
it died: http://paste.openstack.org/show/508444/

That happened during neutron-server partial unavailability, when some
part of RPC requests timed out while others succeeded. The nature if the
server unavailability is a different issue, but I think that an agent
should survive no matter what.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589891

Title:
  OVS agent dies if neutron-server is unavailable

Status in neutron:
  New

Bug description:
  On Mitaka we observed the following stacktrace in OVS agent, after
  which it died: http://paste.openstack.org/show/508444/

  That happened during neutron-server partial unavailability, when some
  part of RPC requests timed out while others succeeded. The nature of
  the server unavailability is a different issue, but I think that an
  agent should survive no matter what.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1589891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580440] Re: neutron purge - executing command on non existing tenant returns wrong message

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321012
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=37677ff861de75f78ecdf752a6931e13a44ee537
Submitter: Jenkins
Branch:master

commit 37677ff861de75f78ecdf752a6931e13a44ee537
Author: John Davidge 
Date:   Wed May 25 14:55:09 2016 +0100

[network] Improve neutron purge

This patch makes additions to the neutron purge page to clarify
that the command can be used with tenants that have been deleted
or otherwise do not exist in keystone.

It also documents the expected output when no supported resources
are found.

backport: mitaka
Co-Authored-By: Matt Kassawara 
Closes-Bug: 1580440
Change-Id: I562eb081630e124d690c27ae16d13f9793c5cafe


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580440

Title:
  neutron purge - executing command on non existing tenant returns wrong
  message

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  I executed the "neutron purge" command with a non-existing tenant ID and
  received the following:

  neutron purge 25a1c11e26354d7dbb5b204eb1310f33
  Purging resources: 100% complete.
  The following resources could not be deleted: 1 network

  
  We do not have that tenant ID, so the message should be:

  No tenant with the specified ID was found.


  python-neutron-8.0.0-1.el7ost.noarch
  openstack-neutron-8.0.0-1.el7ost.noarch
  python-neutron-lib-0.0.2-1.el7ost.noarch
  openstack-neutron-metering-agent-8.0.0-1.el7ost.noarch
  openstack-neutron-ml2-8.0.0-1.el7ost.noarch
  openstack-neutron-openvswitch-8.0.0-1.el7ost.noarch
  python-neutronclient-4.1.1-2.el7ost.noarch
  openstack-neutron-common-8.0.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589880] [NEW] report state failed

2016-06-07 Thread kaka
Public bug reported:

Description:
=
I set the master database to read_only=on when switching the master nova
database to the slave; after that, I checked the nova service status:
# nova-manage service list
Binary           Host    Zone      Status   State  Updated_At
nova-consoleauth 11_120  internal  enabled  XXX    2016-06-07 08:28:46
nova-conductor   11_120  internal  enabled  XXX    2016-06-07 08:28:45
nova-cert        11_120  internal  enabled  XXX    2016-05-17 08:12:10
nova-scheduler   11_120  internal  enabled  XXX    2016-05-17 08:12:24
nova-compute     11_121  bx        enabled  XXX    2016-06-07 08:28:49
nova-compute     11_122  bx        enabled  XXX    2016-06-07 08:28:42
=

Steps to reproduce
=
# mysql
MariaDB [nova]> set global read_only=on;
=

Environment

Version: Liberty
openstack-nova-conductor-12.0.0-1.el7.noarch

Logs


2016-05-12 11:01:20.343 9198 ERROR oslo.service.loopingcall 
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall [-] Fixed interval 
looping call 'nova.servicegroup.drivers.db.DbDriver._report_state' failed
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 113, in 
_run_loop
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 87, in 
_report_state
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
service.service_ref.save()
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 213, in 
wrapper
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall return fn(self, 
*args, **kwargs)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/objects/service.py", line 250, in save
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall db_service = 
db.service_update(self._context, self.id, updates)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/db/api.py", line 153, in service_update
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall return 
IMPL.service_update(context, service_id, values)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 146, in wrapper
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall ectxt.value = 
e.inner_exc
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 136, in wrapper
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall return f(*args, 
**kwargs)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 532, in 
service_update
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
service_ref.update(values)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 490, in 
__exit__
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall self.rollback()
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, 
in __exit__
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
compat.reraise(exc_type, exc_value, exc_tb)
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 487, in 
__exit__
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall self.commit()
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 392, in 
commit
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
self._prepare_impl()
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 372, in 
_prepare_impl
2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
self.session.flush()
2016-05-12 
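
A minimal defensive sketch of what I would expect here (hypothetical
code, not nova's actual fix): catch database errors in the report-state
callback so the fixed interval looping call survives a temporarily
read-only database.

    from oslo_db import exception as db_exc
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def _report_state(self, service):
        try:
            service.service_ref.report_count += 1
            service.service_ref.save()
        except db_exc.DBError:
            # e.g. MariaDB running with read_only=on; log and retry on
            # the next interval instead of letting the loop die
            LOG.exception("Unable to report service state; will retry")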

[Yahoo-eng-team] [Bug 1540239] Re: libvirtError "Failed to open file '/dev/mapper/mpathzy': No such file or directory" when creating an instance with Rally

2016-06-07 Thread Takashi NATSUME
Kilo is already EOL.
If this bug can be reproduced in Liberty or later, reopen this report or 
create a new one.


** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540239

Title:
  libvirtError "Failed to open file '/dev/mapper/mpathzy': No such file
  or directory" when creating  a intance with Rally

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The nova version is Kilo.
  When using Rally to test instances booting from volume and then being 
deleted, with times set to 12 and concurrency set to 5:
  An instance named instance1 is terminating while another instance named 
instance2 is booting from an image (creating a new volume).
  If instance1 disconnects its volume between the moment instance2 connects 
its volume and the moment instance2 writes its XML to disk, and supposing 
there are no other instances on the compute node, Nova will disconnect from 
the iSCSI portal because no other multipath devices remain on the node. This 
also disconnects instance1's iSCSI session, leaving no session or multipath 
device on the compute node. An error then occurs while trying to launch 
instance1 with its XML: "libvirtError: Failed to open file 
'/dev/mapper/mpathzy': No such file or directory"
   
   Steps:
  1. Prepare an OpenStack compute node with no instances or volumes.
  2. Install Rally.
  3. Modify the test case: boot from image (create a new volume) and specify 
the compute node. Set times to 12 and concurrency to 5.
  (rally)[root@control07 my_senarios]# cat boot-from-volume-and-delete.json 
  {
      "NovaServersVolumeArgs.boot_server_from_volume_and_delete": [
          {
              "args": {
                  "flavor": {
                      "name": "m1.tiny"
                  },
                  "image": {
                      "id": "e77a600c-8f93-4b8d-bd7a-5fe160ecec08"
                  },
                  "volume_args": {
                      "volume_type": "fujitpp"
                  },
                  "nics": [{"net-id": "8738a337-9445-49f5-8157-6ec005f355db"}],
                  "volume_size": 10,
                  "availability_zone": "nova:compute04"
              },
              "runner": {
                  "type": "constant",
                  "times": 12,
                  "concurrency": 5
              },
              "context": {
                  "users": {
                      "tenants": 3,
                      "users_per_tenant": 3
                  }
              }
          }
      ]
  }
  4. Run the rally.

  Expected result:
  Rally runs successfully.

  Actual result:
  Several instances failed to launch with an error such as "libvirtError: 
Failed to open file '/dev/mapper/mpathzy': No such file or directory"
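
  One way to avoid this kind of race, sketched below, is to serialize
  volume connect/disconnect on the compute node so a terminating
  instance cannot tear down the shared iSCSI session while another
  instance is between connecting its volume and defining its XML. The
  class and method bodies here are placeholders, not nova's actual code.

      from oslo_concurrency import lockutils

      class IscsiVolumeDriver(object):
          @lockutils.synchronized('connect_volume')
          def connect_volume(self, connection_info, disk_info):
              # log in to the iSCSI portal and wait for the multipath
              # device to appear (details elided)
              pass

          @lockutils.synchronized('connect_volume')
          def disconnect_volume(self, connection_info, disk_dev):
              # only log out of the portal when no other attachment
              # still uses the session or multipath device (details
              # elided)
              pass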

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1540239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463202] Re: [RFE] Create a full load balancing configuration with one API call

2016-06-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/257201
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=eafbfd23b246b2b9828cdeb00f2acf7cb0ac0aca
Submitter: Jenkins
Branch:master

commit eafbfd23b246b2b9828cdeb00f2acf7cb0ac0aca
Author: Trevor Vardeman 
Date:   Tue Mar 1 14:52:33 2016 -0600

Get Me A LB

Allows a request to be passed to the lbaas API that contains the full graph
structure of a load balancer.  This means that an entire load balancer graph
can be created in one POST to the API, or a partial graph can be.

This creates a new driver method to the driver interface that when 
implemented
will be called when a full or partial graph is attempted to be created.  If
the driver does not implement this method, then the request is not allowed.

Co-Author: Trevor Vardeman 

Closes-Bug: #1463202
Depends-On: Id3a5ddb8efded8c6ad72a7118424ec01c777318d
Change-Id: Ic16ca4cb3cb407d9e91eea74315a39bf86c654f3


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463202

Title:
  [RFE] Create a full load balancing configuration with one API call

Status in neutron:
  Fix Released

Bug description:
  There have been many requests to allow the API to expose the ability
  to create a load balancer, listener, pool, member, and health monitor
  in one single API request.  Some reasons for this:

  1) Drivers will know immediately up front what resources will be
  required to satisfy the end configuration.  They can make more
  efficient/optimized decisions based on this.

  2) It's not a good UX to have a user make at minimum 4 API requests
  before they can actually have a load balancer.

  3) Reducing the number of API requests will improve performance of the
  API server at large scale, and of other services consuming the same
  API.
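
  As an illustration, a single request could carry the whole graph in
  one body. The endpoint path and exact field names below are
  assumptions for the sketch, not the final API:

      POST /v2.0/lbaas/graphs
      {
          "graph": {
              "loadbalancer": {
                  "name": "lb1",
                  "vip_subnet_id": "SUBNET_UUID",
                  "listeners": [{
                      "protocol": "HTTP",
                      "protocol_port": 80,
                      "default_pool": {
                          "protocol": "HTTP",
                          "lb_algorithm": "ROUND_ROBIN",
                          "members": [
                              {"address": "10.0.0.5", "protocol_port": 80}
                          ],
                          "healthmonitor": {
                              "type": "HTTP",
                              "delay": 5,
                              "timeout": 3,
                              "max_retries": 2
                          }
                      }
                  }]
              }
          }
      }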

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589812] Re: Live migration fails using "openstack server migrate"

2016-06-07 Thread Chinmaya Bharadwaj
The issue exists in python-openstackclient, not in nova. Moving it to
openstackclient

** Project changed: nova => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589812

Title:
  Live migration fails using "openstack server migrate"

Status in python-openstackclient:
  New

Bug description:
  Description
  ===

  Attempting to migrate a server using the "openstack server migrate"
  command fails if the server lives on shared storage (like Ceph).

  The problem is that nova/compute/api.py:live_migrate() expects a
  "block_migration" boolean whereas
  openstackclient/compute/v2/server.py:take_action() passes a
  "shared_migration" boolean.

  Steps to reproduce
  ==

  Ensure the instance is on shared storage and attempt to migrate it:

  [root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5
  --shared-migration --wait testlm

  Expected result
  ===

  [root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5 
--shared-migration --wait testlm
  Complete

  Actual result
  =

  [root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5 
--shared-migration --wait testlm
  phsospc2n4 is not on local storage: Block migration can not be used with 
shared storage. (HTTP 400) (Request-ID: 
req-7d535333-4220-457b-af81-0b90290bf84d)

  
  Environment
  ===

  python-openstackclient-2.2.0-1.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1589812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566972] Re: Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes the issue

2016-06-07 Thread Thomas Goirand
** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566972

Title:
  Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes
  the issue

Status in neutron:
  Confirmed

Bug description:
  When running the unit tests (when building the Debian package for
  Neutron Mitaka RC3), Neutron fails more than 500 unit tests. Upgrading
  from SQLAlchemy 1.0.11 to 1.0.12 fixed the issue.

  Example of failed run:
  https://mitaka-jessie.pkgs.mirantis.com/job/neutron/37/consoleFull

  Moving forward, upgrading the global-requirements.txt to SQLAlchemy
  1.0.12 may not be possible, so probably it'd be nice to fix the issue
  in Neutron.

  FYI, in Debian, I don't really mind, as Debian Sid has version 1.0.12,
  and that's where I upload. For the (non-official) backports to Debian
  Jessie and Ubuntu Trusty, I did a backport of 1.0.12, and that is
  fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589821] [NEW] cleanup_incomplete_migrations periodic task regression with commit 099cf53925c0a0275325339f21932273ee9ce2bc

2016-06-07 Thread Rajesh Tailor
Public bug reported:

 Patch [1] changes the instance filtering condition in the periodic task
"cleanup_incomplete_migrations" introduced in [2], in such a way that it
introduces a new issue [3].

After change [1] landed, the filtering logic changed so that all instances
on the current host are selected, which is not expected.

We should select only those instances whose uuid appears in the migration
records.


[1] https://review.openstack.org/#/c/256102/
[2] https://review.openstack.org/#/c/219299/
[3] https://bugs.launchpad.net/nova/+bug/1586309
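
A minimal sketch of the intended filtering (object and field names
modeled on nova's compute manager, but treat the details as
assumptions):

    migrations = objects.MigrationList.get_by_filters(
        context, {'host': self.host, 'status': 'error',
                  'migration_type': 'resize'})
    inst_uuids = set(m.instance_uuid for m in migrations)

    # Select only instances that appear in the migration records,
    # not every instance on the current host:
    instances = objects.InstanceList.get_by_filters(
        context, {'uuid': list(inst_uuids), 'deleted': True,
                  'soft_deleted': False})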

** Affects: nova
 Importance: Undecided
 Assignee: Rajesh Tailor (ratailor)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Rajesh Tailor (ratailor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589821

Title:
  cleanup_incomplete_migrations periodic task regression with commit
  099cf53925c0a0275325339f21932273ee9ce2bc

Status in OpenStack Compute (nova):
  In Progress

Bug description:
   Patch [1] changes the instance filtering condition in the periodic task
  "cleanup_incomplete_migrations" introduced in [2], in such a way that
  it introduces a new issue [3].

  After change [1] landed, the filtering logic changed so that all
  instances on the current host are selected, which is not expected.

  We should select only those instances whose uuid appears in the
  migration records.


  [1] https://review.openstack.org/#/c/256102/
  [2] https://review.openstack.org/#/c/219299/
  [3] https://bugs.launchpad.net/nova/+bug/1586309

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589812] [NEW] Live migration fails using "openstack server migrate"

2016-06-07 Thread Leland Lucius
Public bug reported:

Description
===

Attempting to migrate a server using the "openstack server migrate"
command fails if the server lives on shared storage (like Ceph).

The problem is that nova/compute/api.py:live_migrate() expects a
"block_migration" boolean whereas
openstackclient/compute/v2/server.py:take_action() passes a
"shared_migration" boolean.

Steps to reproduce
==

Ensure the instance is on shared storage and attempt to migrate it:

[root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5
--shared-migration --wait testlm

Expected result
===

[root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5 
--shared-migration --wait testlm
Complete

Actual result
=

[root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5 
--shared-migration --wait testlm
phsospc2n4 is not on local storage: Block migration can not be used with shared 
storage. (HTTP 400) (Request-ID: req-7d535333-4220-457b-af81-0b90290bf84d)


Environment
===

python-openstackclient-2.2.0-1.el7.noarch

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: "One possible solution"
   
https://bugs.launchpad.net/bugs/1589812/+attachment/4678657/+files/livemigration.diff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589812

Title:
  Live migration fails using "openstack server migrate"

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  Attempting to migrate a server using the "openstack server migrate"
  command fails if the server lives on shared storage (like Ceph).

  The problem is that nova/compute/api.py:live_migrate() expects a
  "block_migration" boolean whereas
  openstackclient/compute/v2/server.py:take_action() passes a
  "shared_migration" boolean.

  Steps to reproduce
  ==

  Ensure the instance is on shared storage and attempt to migrate it:

  [root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5
  --shared-migration --wait testlm

  Expected result
  ===

  [root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5 
--shared-migration --wait testlm
  Complete

  Actual result
  =

  [root@phsospc2n2 ~]# openstack server migrate --live phsospc2n5 
--shared-migration --wait testlm
  phsospc2n4 is not on local storage: Block migration can not be used with 
shared storage. (HTTP 400) (Request-ID: 
req-7d535333-4220-457b-af81-0b90290bf84d)

  
  Environment
  ===

  python-openstackclient-2.2.0-1.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528637] Re: (Import Refactor) Mark the `/file` endpoint as deprecated

2016-06-07 Thread Flavio Percoco
Marking as invalid, as we've decided not to deprecate this endpoint.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1528637

Title:
  (Import Refactor) Mark the `/file` endpoint as deprecated

Status in Glance:
  Invalid

Bug description:
  This is a sub-task for the image import process work:
  https://review.openstack.org/#/c/232371/

  One of the goals of this spec is to improve the image import process
  and allow for other background operations to be executed when the
  image data is added.  To do this without breaking the existing API,
  we're adding a new `/stage` endpoint that supersedes the existing
  `/file` endpoint where the image data is currently uploaded.
  Eventually, we'd like to completely deprecate (and perhaps delete)
  that endpoint. Definitely disable it.

  This lite spec is to mark that endpoint as deprecated and recommend
  deployers to disable it and have users move forward to the new
  `/stage` endpoint.
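
  For illustration, the current upload and the proposed staged upload
  would look roughly like this; the `/stage` path comes from the spec,
  and the exact request shape is an assumption:

      # today: upload data straight to the image
      curl -X PUT -H "X-Auth-Token: $TOKEN" \
           -H "Content-Type: application/octet-stream" \
           --data-binary @image.qcow2 \
           $GLANCE_URL/v2/images/$IMAGE_ID/file

      # proposed: stage the data, then run the import as a second step
      curl -X PUT -H "X-Auth-Token: $TOKEN" \
           -H "Content-Type: application/octet-stream" \
           --data-binary @image.qcow2 \
           $GLANCE_URL/v2/images/$IMAGE_ID/stage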

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1528637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589804] [NEW] Notification by MESSAGE_PATH only displays once

2016-06-07 Thread Kenji Ishii
Public bug reported:

In Mitaka, Horizon has a feature to display messages when a user logs in.
At the moment, each message is shown only once, as a popup that disappears 
after a few seconds. In addition, the user cannot view these messages again 
without logging in again.
We need to provide a way to see these messages whenever the user wants to.
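
A sketch of the configuration involved (the setting name here is
MESSAGES_PATH as in the Mitaka feature; the path and file layout are
examples, not requirements):

    # local_settings.py
    # Directory scanned for *.json message files; each file is shown
    # once as a popup right after login.
    MESSAGES_PATH = '/etc/openstack-dashboard/messages'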

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1589804

Title:
  Notification by MESSAGE_PATH only displays once

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Mitaka, Horizon has a feature to display messages when a user logs in.
  At the moment, each message is shown only once, as a popup that disappears 
after a few seconds. In addition, the user cannot view these messages again 
without logging in again.
  We need to provide a way to see these messages whenever the user wants to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1589804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp