[Yahoo-eng-team] [Bug 1529866] Re: Image owner is not properly set

2016-05-18 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1529866

Title:
  Image owner is not properly set

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  I am running Horizon (ii  openstack-dashboard
  2:8.0.0-3) liberty on Debian Stretch. When a user creates an image in
  Horizon, the owner of the image is always left NULL, and the image is
  invisible in the dashboard. However, if I create the image via the CLI
  with "glance image-create", the owner is set correctly.

  Also, I tried manually changing the owner to the user, but the
  dashboard still does not show that image. The list returned by "glance
  image-list" shows the image correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1529866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582304] Re: Some v1 tests fail when http_proxy is enabled

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/316965
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=6f71b25ba28a532d3c17b3a5ee9e19c0ad7ecee2
Submitter: Jenkins
Branch:master

commit 6f71b25ba28a532d3c17b3a5ee9e19c0ad7ecee2
Author: Stuart McLaren 
Date:   Mon May 16 15:58:13 2016 +

Allow tests to run when http proxy is set

We have support in tox.ini for passing http proxy variables:

passenv = *_proxy *_PROXY

but when the glance tests are run with a proxy enabled the following v1
tests would fail:

 v1.test_api.TestGlanceAPI.test_add_location_with_nonempty_body
 v1.test_api.TestGlanceAPI.test_add_copy_from_with_nonempty_body
 v1.test_api.TestGlanceAPI.test_add_location_with_invalid_location_on_conflict_image_size
 v1.test_api.TestGlanceAPI.test_add_copy_from_with_location
 v1.test_api.TestGlanceAPI.test_download_service_unavailable

Change the url used in the tests so that the tests can be run when
a http_proxy is enabled.

Closes-bug: 1582304
Change-Id: Ic9d324ad98768c9388fed0ce6af27805f9a5863f


** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1582304

Title:
  Some v1 tests fail when http_proxy is enabled

Status in Glance:
  Fix Released

Bug description:
  If you set http_proxy=http://google.com:80 and run the tests with

  tox -epy27

  the following tests will fail

   v1.test_api.TestGlanceAPI.test_add_location_with_nonempty_body
   v1.test_api.TestGlanceAPI.test_add_copy_from_with_nonempty_body
   v1.test_api.TestGlanceAPI.test_add_location_with_invalid_location_on_conflict_image_size
   v1.test_api.TestGlanceAPI.test_add_copy_from_with_location
   v1.test_api.TestGlanceAPI.test_download_service_unavailable
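  The mechanism is easy to see outside glance: Python's stdlib picks the
  proxy up from the environment, so any test request to a host not exempted
  by no_proxy gets routed to the proxy. The commit above fixes the tests by
  changing the URLs they use; the sketch below (stdlib only, not glance
  code) just demonstrates how the environment variables steer requests:

```python
import os
import urllib.request

# Simulate the environment from the bug report: every HTTP request is
# routed through the proxy unless the host is listed in no_proxy.
os.environ["http_proxy"] = "http://google.com:80"
os.environ["no_proxy"] = "127.0.0.1,localhost"

proxies = urllib.request.getproxies_environment()
print(proxies["http"])  # http://google.com:80 -- where test traffic would go

# Hosts in no_proxy bypass the proxy; the stdlib exposes the check directly.
print(urllib.request.proxy_bypass_environment(
    "127.0.0.1", proxies={"no": "127.0.0.1,localhost"}))   # True
print(urllib.request.proxy_bypass_environment(
    "example.com", proxies={"no": "127.0.0.1,localhost"}))  # False
```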

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1582304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582705] Re: Make agent interface plugging utilize network MTU

2016-05-18 Thread Hirofumi Ichihara
This fix was provided in https://bugs.launchpad.net/neutron/+bug/1552089

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582705

Title:
  Make agent interface plugging utilize network MTU

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/305782
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit c55aba1dba31c6730c80db0118286f1f9e84cd9b
  Author: Kevin Benton 
  Date:   Mon Feb 22 16:41:45 2016 -0800

  Make agent interface plugging utilize network MTU
  
  This changes the 'plug' and 'plug_new' interfaces of the
  LinuxInterfaceDriver to accept an MTU argument. It then
  updates the dhcp agent and l3 agent to pass the MTU that
  is set on the network that the port belongs to. This allows
  it to take into account the overhead calculations that are
  done for encapsulation types.
  
  It's necessary for the L3 agent to have the MTU because it
  must recognize when fragmentation is needed so it can fragment
  or generate an ICMP error.
  
  It's necessary for the DHCP agent to have the MTU so it doesn't
  interfere when it plugs into a bridge with a larger than 1500
  MTU (the bridge would reduce its MTU to match the agent).
  
  If an operator sets 'network_device_mtu', the value of that
  will be used instead to preserve previous behavior.
  
  Conflicts:
neutron/agent/l3/dvr_edge_ha_router.py
neutron/agent/l3/dvr_edge_router.py
neutron/agent/l3/ha_router.py
neutron/agent/linux/interface.py
neutron/tests/functional/agent/l3/test_dvr_router.py
neutron/tests/functional/agent/test_dhcp_agent.py
  
  Additional modifications for Liberty:
  - test_dvr_router_lifecycle_ha_with_snat_with_fips_nmtu renamed into
test_dvr_router_lifecycle_without_ha_with_snat_with_fips_nmtu,
  - the test validates DVR without HA.
  
  Reason for the change: Liberty does not support DVR + HA routers (the
  test raises DvrHaRouterNotSupported without those modifications).
  
  Closes-Bug: #1549470
  Closes-Bug: #1542108
  Closes-Bug: #1542475
  DocImpact: Neutron agents now support arbitrary MTU
 configurations on each network (including
 jumbo frames). This is accomplished by checking
 the MTU value defined for each network on which
 it is wiring VIFs.
  Co-Authored-By: Matt Kassawara 
  (cherry picked from commit 4df8d9a7016ab20fce235833d792b89309ec98a7)
  
  ===
  
  Also squashing in the following fix to pass unit tests for midonet
  interface driver:
  
  Support interface drivers that don't support mtu parameter for plug_new
  
  The method signature before Mitaka did not have the mtu= parameter. We
  should continue supporting the old signature, since it can be used in
  out of tree interface drivers. The class is part of public neutron API,
  so we should make an effort to not break out of tree code.
  
  Local modifications:
  - don't issue a deprecation warning in minor release update.
  
  Change-Id: I8e0c07c76fd0b4c55b66c20ebe29cdb7c07d6f27
  Closes-Bug: #1570392
  (cherry picked from commit 8a86ba1d014a5e758c0569aaf16cfe92492cc7f1)
  
  ===
  
  Change-Id: Ic091fa78dfd133179c71cbc847bf955a06cb248a
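  The compatibility shim described above (supporting out-of-tree drivers
  whose plug_new predates the mtu= parameter) can be sketched generically:
  inspect the driver's signature and only pass mtu when it is accepted.
  Class and method names mirror the commit, but the bodies are
  illustrative, not neutron's actual code:

```python
import inspect

class OldDriver:
    # Pre-Mitaka signature: no mtu parameter.
    def plug_new(self, network_id, port_id, device_name):
        return (network_id, port_id, device_name)

class NewDriver:
    # Current signature with the optional mtu parameter.
    def plug_new(self, network_id, port_id, device_name, mtu=None):
        return mtu

def plug(driver, network_id, port_id, device_name, mtu):
    # Only pass mtu= to drivers whose signature accepts it, so old
    # out-of-tree drivers keep working unchanged.
    params = inspect.signature(driver.plug_new).parameters
    if "mtu" in params:
        return driver.plug_new(network_id, port_id, device_name, mtu=mtu)
    return driver.plug_new(network_id, port_id, device_name)

print(plug(NewDriver(), "net", "port", "tap0", 9000))  # 9000
print(plug(OldDriver(), "net", "port", "tap0", 9000))  # ('net', 'port', 'tap0')
```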

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582764] Re: neutron-dynamic-routing agent is inactive

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317550
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=2823484406692ee1769c62fb75313e9a406d1769
Submitter: Jenkins
Branch:master

commit 2823484406692ee1769c62fb75313e9a406d1769
Author: Na 
Date:   Tue May 17 07:56:05 2016 -0700

Fix the issue about BGP dragent reports state failed

The failure was caused by importing the wrong Python lib.

Change-Id: I4900dfc4c798f1eb29d86076a620a390d408d204
Closes-Bug: #1582764


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582764

Title:
  neutron-dynamic-routing agent is inactive

Status in neutron:
  Fix Released

Bug description:
  After I enable neutron-bgp-dragent, from neutron-server, the agent
  state is always inactive.

  steve@steve-devstack-test:~/devstack$ neutron agent-list
  +--------------------------------------+---------------------------+---------------------+-------------------+-------+----------------+---------------------------+
  | id                                   | agent_type                | host                | availability_zone | alive | admin_state_up | binary                    |
  +--------------------------------------+---------------------------+---------------------+-------------------+-------+----------------+---------------------------+
  | 0c21a829-4fd6-4375-8e65-36db4dc434ac | DHCP agent                | steve-devstack-test | nova              | :-)   | True           | neutron-dhcp-agent        |
  | 0f9d6886-910d-4af4-b248-673b22eb9e78 | Metadata agent            | steve-devstack-test |                   | :-)   | True           | neutron-metadata-agent    |
  | 5908a304-b9d9-4e8c-a0af-96a066a7c87e | Open vSwitch agent        | steve-devstack-test |                   | :-)   | True           | neutron-openvswitch-agent |
  | ae74e375-6a75-4ebe-b85c-6628d2baf02f | L3 agent                  | steve-devstack-test | nova              | :-)   | True           | neutron-l3-agent          |
  | dbd9900e-9d16-444d-afc4-8d0035df5ed5 | BGP dynamic routing agent | steve-devstack-test |                   | xxx   | True           | neutron-bgp-dragent       |
  +--------------------------------------+---------------------------+---------------------+-------------------+-------+----------------+---------------------------+
  steve@steve-devstack-test:~/devstack$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583391] [NEW] Swift UI delete of files with similar names breaks

2016-05-18 Thread Richard Jones
Public bug reported:

If you attempt to delete two swift files (objects) which are named
"spam" and "spammer" then the first will fail because Horizon's swift
api code layer attempts to determine if the object has any folder
contents. Yep. And because of the way swift "folders" are implemented
(string prefix matching) the result from swift will be "yep, there's two
matches for that prefix" so the Horizon code swift_delete_object()
throws up a conflict error (folder not empty).
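The prefix behaviour is easy to model. In the toy sketch below (a plain
list standing in for a swift container listing, not Horizon's actual
swift_delete_object code), a bare prefix query for "spam" also matches
"spammer", which is what trips the folder-contents check; restricting the
prefix to name + "/" (the pseudo-folder delimiter) would not:

```python
# Toy model of a swift container listing; swift pseudo-folders are just
# shared name prefixes, there is no real directory object.
objects = ["spam", "spammer", "dir/child"]

def apparent_folder_contents(name, listing):
    # The buggy check: any other object sharing the prefix looks like
    # folder contents, so deleting "spam" reports a spurious conflict.
    return [o for o in listing if o.startswith(name) and o != name]

def real_folder_contents(name, listing):
    # Matching only "name/" finds true pseudo-children.
    return [o for o in listing if o.startswith(name + "/")]

print(apparent_folder_contents("spam", objects))  # ['spammer'] -> false conflict
print(real_folder_contents("spam", objects))      # [] -> safe to delete
print(real_folder_contents("dir", objects))       # ['dir/child']
```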

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  If you attempt to delete two swift files (objects) which are named
  "spam" and "spammer" then the first will fail because Horizon's swift
  api code layer attempts to determine if the object has any folder
  contents. Yep. And because of the way swift "folders" are implemented
  (string prefix matching) the result from swift will be "yep, there's two
- matches for that prefix" so the Horizon code throws up a conflict error
- (folder not empty).
+ matches for that prefix" so the Horizon code swift_delete_object()
+ throws up a conflict error (folder not empty).

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583391

Title:
  Swift UI delete of files with similar names breaks

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If you attempt to delete two swift files (objects) which are named
  "spam" and "spammer" then the first will fail because Horizon's swift
  api code layer attempts to determine if the object has any folder
  contents. Yep. And because of the way swift "folders" are implemented
  (string prefix matching) the result from swift will be "yep, there's
  two matches for that prefix" so the Horizon code swift_delete_object()
  throws up a conflict error (folder not empty).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581246] Re: Ironic driver: _cleanup_deploy is called with incorrect parameters

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/316336
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d2875b78b5746bfcb082a7c5385375d704518581
Submitter: Jenkins
Branch:master

commit d2875b78b5746bfcb082a7c5385375d704518581
Author: Matt Riedemann 
Date:   Fri May 13 22:28:24 2016 -0400

ironic: fix call to _cleanup_deploy on config drive failure

The call to _cleanup_deploy when config drive generation failed
during spawn didn't match the method signature. This was missed
in unit testing because the assertion on the mock of that method
matched the actual call, but not the actual method signature.

This fixes the call and also fixes the test by auto-spec'ing the
_cleanup_deploy method in the mock so that it validates the actual
function signature is called correctly.

In order to use autospec properly here, the mock has to be on the
driver object rather than the class.

Change-Id: Ic2c096ef846f11f94aa828222c927ed7d03051c9
Closes-Bug: #1581246


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1581246

Title:
  Ironic driver: _cleanup_deploy is called with incorrect parameters

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  stable/mitaka release.
  If an error happens in _generate_configdrive, the Ironic driver fails
cleanup because of:

  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 
7f8769b3-145a-4b81-8175-e6aa648e1c2a]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in 
_build_resources
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 
7f8769b3-145a-4b81-8175-e6aa648e1c2a] yield resources
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 
7f8769b3-145a-4b81-8175-e6aa648e1c2a]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in 
_build_and_run_instance
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 
7f8769b3-145a-4b81-8175-e6aa648e1c2a] block_device_info=block_device_info)
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 
7f8769b3-145a-4b81-8175-e6aa648e1c2a]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 748, in 
spawn
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 
7f8769b3-145a-4b81-8175-e6aa648e1c2a] flavor=flavor)
  2016-05-12 22:44:55.295 9282 ERROR nova.compute.manager [instance: 
7f8769b3-145a-4b81-8175-e6aa648e1c2a] TypeError: _cleanup_deploy() takes 
exactly 4 arguments (6 given)

  Call
  
https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/ironic/driver.py#L747
  Function definition
  
https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/ironic/driver.py#L374
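  The fix's testing angle generalizes: a plain mock accepts any call
signature, so an assertion can "pass" against a call the real method would
reject. autospec=True makes the mock enforce the real signature (and, as
the commit notes, it must be applied to the driver object, not the class).
A minimal sketch with an assumed parameter list, not nova's actual code:

```python
from unittest import mock

class Driver:
    # Illustrative signature; the real _cleanup_deploy parameters differ.
    def _cleanup_deploy(self, context, node, instance, network_info=None):
        pass

driver = Driver()

# Without autospec, a mock happily accepts any arity, so a unit test can
# "verify" a call that would raise TypeError in production.
with mock.patch.object(driver, "_cleanup_deploy") as m:
    driver._cleanup_deploy(1, 2, 3, 4, 5)  # wrong arity, but no error
    assert m.called

# With autospec=True the mock validates against the real signature, so the
# bad call fails in the test instead of at runtime.
with mock.patch.object(driver, "_cleanup_deploy", autospec=True):
    try:
        driver._cleanup_deploy(1, 2, 3, 4, 5)
    except TypeError as exc:
        print("caught:", exc)
```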

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1581246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554464] Re: Radware lbaas v2 driver should not treat listener without default pool

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289917
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=3f7593ec1618a01a320f66ee53d0005de568d73b
Submitter: Jenkins
Branch:master

commit 3f7593ec1618a01a320f66ee53d0005de568d73b
Author: Evgeny Fedoruk 
Date:   Tue Mar 8 05:30:26 2016 -0800

Do not consider listeners without default pool

Don't send listeners without default pool to back-end system

Change-Id: Iaee577732a585b56bfe108fadde0b84dbc1c2e8d
Closes-Bug: 1554464


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554464

Title:
  Radware lbaas v2 driver should not treat listener without default pool

Status in neutron:
  Fix Released

Bug description:
  The Radware LBaaS v2 driver should not consider a listener with no default pool.
  If a pool is deleted and it is the default pool of a listener, do not send the
listener to the back end.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577537] Re: Neutron-LBaaS v2: Update/Delete Pools Admin Tests

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/306182
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=509fa6d52f563f11874f7000ecb25e00be945b65
Submitter: Jenkins
Branch:master

commit 509fa6d52f563f11874f7000ecb25e00be945b65
Author: Franklin Naval 
Date:   Thu Apr 14 18:14:36 2016 -0500

Neutron-LBaaS: Update/Delete Pools Admin Tests

* admin tests around update and delete listeners

Change-Id: I8ee03efd01001eaf0434f80ae219388c64c3fe0c
Closes-Bug: #1577537


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577537

Title:
   Neutron-LBaaS v2: Update/Delete Pools Admin Tests

Status in neutron:
  Fix Released

Bug description:
  * create admin tests around update and delete listeners

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1577537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582911] Re: Relaxed validation for v2 doesn't accept null for user_data like legacy v2 does (liberty)

2016-05-18 Thread Matt Riedemann
Looks like it's still an issue for v2.1 in Newton; it appears we have no
compat-mode check for this:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/user_data.py

** Changed in: nova
   Status: New => Confirmed

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova/liberty
   Importance: Undecided => High

** Changed in: nova/mitaka
   Importance: Undecided => High

** Changed in: nova/liberty
   Status: New => Confirmed

** Changed in: nova/mitaka
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582911

Title:
  Relaxed validation for v2 doesn't accept null for user_data like
  legacy v2 does (liberty)

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) liberty series:
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  Description
  ===
  When moving to the relaxed validation [1] implementation of the v2 API under 
the v2.1 code base, a 'nova boot' request with "user_data": null fails with the 
error:

Returning 400 to user: Invalid input for field/attribute user_data.
  Value: None. None is not of type 'string'

  Under the legacy v2 code base, such a request is allowed.

  
  Steps to reproduce
  ==
  Using the legacy v2 code base under Liberty, make a nova boot call using the 
following json payload:

  {
"server": {
  "name": "mgdlibertyBBC",
  "flavorRef": "1",
  "imageRef": "626ce751-744f-4830-9d38-5e9e4f70fe3f",
  "user_data": null,
  "metadata": {
"created_by": "mdorman"
  },
  "security_groups": [
{
  "name": "default"
}
  ],
  "availability_zone": "glbt1-dev-lab-zone-1,glbt1-dev-lab-zone-2,",
  "key_name": "lm126135-mdorm"
}
  }

  The request succeeds and the instance is created.

  However, using the v2 implementation from the v2.1 code base with the
  same json payload fails:

  2016-05-17 12:47:02.336 18296 DEBUG nova.api.openstack.wsgi [req-
  6d5d4100-7c0c-4ffa-a40c-4a086a473293 mdorman
  40e94f951b704545885bdaa987a25154 - - -] Returning 400 to user: Invalid
  input for field/attribute user_data. Value: None. None is not of type
  'string' __call__ /usr/lib/python2.7/site-
  packages/nova/api/openstack/wsgi.py:1175

  
  Expected result
  ===
  The behavior of the v2 API in the v2.1 code base should be exactly the same 
as the legacy v2 code base.

  
  Actual result
  =
  Request fails under v2.1 code base, but succeeds under legacy v2 code base.

  
  Environment
  ===
  Liberty, 12.0.3 tag (stable/liberty branch on 4/13/2016.  Latest commit 
6fdf1c87b1149e8b395eaa9f4cbf27263cf96ac6)

  
  Logs & Configs
  ==
  Paste config used for legacy v2 code base (request succeeds):

  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /v1.1: openstack_compute_api_legacy_v2
  /v2: openstack_compute_api_legacy_v2
  /v2.1: openstack_compute_api_v21

  Paste config used for v2.1 code base (request fails):

  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /: oscomputeversions
  /v1.1: openstack_compute_api_v21_legacy_v2_compatible
  /v2: openstack_compute_api_v21_legacy_v2_compatible
  /v2.1: openstack_compute_api_v21

  
  [1]  
http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/api-relax-validation.html
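  The schema mismatch can be modelled without nova's validator: JSON
Schema's 'string' type rejects JSON null (Python None), so a user_data
schema of {'type': 'string'} turns the legacy-accepted "user_data": null
into a 400, whereas {'type': ['string', 'null']} would accept both. A
hand-rolled sketch of just the type check (not the actual jsonschema
machinery):

```python
import base64

# Minimal model of a JSON Schema type union check.
def validates(value, types):
    type_map = {"string": str, "null": type(None)}
    return any(isinstance(value, type_map[t]) for t in types)

print(validates(None, ["string"]))           # False -> the 400 in this bug
print(validates(None, ["string", "null"]))   # True  -> legacy v2 behavior
print(validates(base64.b64encode(b"#!/bin/sh").decode(), ["string"]))  # True
```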

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583346] [NEW] eslint produces quote-prop warnings

2016-05-18 Thread Matt Borland
Public bug reported:

eslint currently produces quote-prop warnings, i.e. it complains that:

{a: 'apple'}

should be

{'a': 'apple'}

On IRC we just decided that this warning is not something we want to
follow in Horizon.
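
The rule's actual name in eslint is "quote-props"; the usual way to stop
following it is to disable it in the project's eslint config (exact file
name and location in Horizon's tree assumed here):

```json
{
  "rules": {
    "quote-props": "off"
  }
}
```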

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583346

Title:
  eslint produces quote-prop warnings

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  eslint currently produces quote-prop warnings, i.e. it complains
  that:

  {a: 'apple'}

  should be

  {'a': 'apple'}

  On IRC we just decided that this warning is not something we want to
  follow in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583156] Re: nova image-delete HTTP exception thrown: Unexpected API Error.

2016-05-18 Thread Matt Riedemann
*** This bug is a duplicate of bug 1552533 ***
https://bugs.launchpad.net/bugs/1552533

What version of python-glanceclient do you have installed?

** Changed in: nova
   Status: New => Incomplete

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** This bug has been marked a duplicate of bug 1552533
   AttributeError: 'SessionClient' object has no attribute 'last_request_id'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583156

Title:
  nova image-delete HTTP exception thrown: Unexpected API Error.

Status in OpenStack Compute (nova):
  Incomplete
Status in python-glanceclient:
  New

Bug description:
  when I did:   nova image-delete


  2016-05-18 20:27:05.446 ERROR nova.api.openstack.extensions [req-0d41d500-38d8-4e4f-b1e2-91a8ec0ec965 admin demo] Unexpected exception in API method
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/images.py", line 87, in show
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     image = self._image_api.get(context, id)
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/api.py", line 93, in get
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     show_deleted=show_deleted)
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/glance.py", line 282, in show
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     include_locations=include_locations)
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/glance.py", line 512, in _translate_from_glance
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     include_locations=include_locations)
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/glance.py", line 596, in _extract_attributes
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     output[attr] = getattr(image, attr) or 0
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/glanceclient/openstack/common/apiclient/base.py", line 490, in __getattr__
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     self.get()
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/glanceclient/openstack/common/apiclient/base.py", line 512, in get
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     {'x_request_id': self.manager.client.last_request_id})
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions AttributeError: 'HTTPClient' object has no attribute 'last_request_id'
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions
  2016-05-18 20:27:05.448 INFO nova.api.openstack.wsgi [req-0d41d500-38d8-4e4f-b1e2-91a8ec0ec965 admin demo] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583253] Re: Rename Raw backend to Flat

2016-05-18 Thread Matt Riedemann
This is the source of the doc:

http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc
/config-reference/source/compute/hypervisor-kvm.rst

Note it doesn't mention rbd or ploop which are other valid image types.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => Confirmed

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583253

Title:
  Rename Raw backend to Flat

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  Confirmed

Bug description:
  https://review.openstack.org/279626
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit cff4d78a4a7ddbb18cb875c66b3c56f04a6caea3
  Author: Matthew Booth 
  Date:   Thu Apr 7 13:40:58 2016 +0100

  Rename Raw backend to Flat
  
  As mentioned in a comment (which this patch deletes), calling this
  class 'Raw' was confusing, because it is not always raw. It is also a
  source of bugs, because some code assumes that the format is always
  raw, which it is not. This patch does not fix those bugs.
  
  We rename it to 'Flat', which describes it accurately. We also add
  doctext describing what it does.
  
  DocImpact
  
  The config option libvirt.images_type gets an additional value:
  'flat'. The effect of this is identical to setting the value 'raw'.
  
  Change-Id: I93f0a2cc568b60c2b3f7509449167f03c3f30fb5

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583280] Re: Fuel 8 : Not able to attach volume to instance

2016-05-18 Thread Kevin Nguyen
It turned out the issue was trying to attach a volume to an instance on
an offline compute node. Marking bug as invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583280

Title:
  Fuel 8 : Not able to attach volume to instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description:
  Used the command line to attach a volume to an instance; the attach failed.

  Steps to reproduce:
  cinder create 2 --display-name volume1
  nova volume-attach caab638d-d73b-45cc-909e-b99e5bf856f7 
1409db6a-a148-47c4-8e5a-449c94271357 auto

  
  Symptom: attach volume failed.

  nova volume-attach caab638d-d73b-45cc-909e-b99e5bf856f7 
1409db6a-a148-47c4-8e5a-449c94271357 auto
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-a939cf42-d840-4180-9aba-89d628861303)
  [root@fuel8-controller1 ~(keystone_pepsi)]# nova volume-attach 
caab638d-d73b-45cc-909e-b99e5bf856f7 1409db6a-a148-47c4-8e5a-449c94271357 auto
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-76837fac-a4a0-4a56-988f-f781e2ecd196)

  Expected result: the volume is successfully attached to the instance.

  Actual result: the volume was not attached to the instance

  Environment:
  Fuel 8 (3 controller nodes + 2 compute nodes + 3 storage nodes).

  dpkg -l |grep nova
  ii  nova-api 2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - compute API frontend
  ii  nova-cert2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - certificate manager
  ii  nova-common  2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - common files
  ii  nova-conductor   2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - conductor service
  ii  nova-consoleauth 2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - Console Authenticator
  ii  nova-consoleproxy2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - NoVNC proxy
  ii  nova-objectstore 2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - object store
  ii  nova-scheduler   2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova  2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - libraries
  ii  python-novaclient2:2.30.2-1~u14.04+mos3   
  all  client library for OpenStack Compute API

  Storage type : LVM
  cinder --version
  1.4.0


  Thanks
  Kevin

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583280/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583319] [NEW] UX: Duplicated view title under Settings section

2016-05-18 Thread Eddie Ramirez
Public bug reported:

Go to Settings -> User Settings OR -> Change Password

There are two  elements:

: page-header
: modal-title

Both elements contain the same content and are shown in the same area. I
think only one of them is necessary.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Title Change Password.png"
   
https://bugs.launchpad.net/bugs/1583319/+attachment/4665779/+files/Title%20Change%20Password.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583319

Title:
  UX: Duplicated view title under Settings section

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Go to Settings -> User Settings OR -> Change Password

  There are two  elements:

  : page-header
  : modal-title

  Both elements contain the same content and are shown in the same area. I
  think only one of them is necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583299] [NEW] Add a Common Flow Classifier in Neutron

2016-05-18 Thread cathy Hong Zhang
Public bug reported:

[Existing problem]
Currently, multiple Stadium features inside Neutron need flow classifier 
functionality. In the future there could be more Neutron features that will 
need this functionality. Instead of each feature creating its own FC API and 
resource, we should have one common FC API and resource inside Neutron that can 
be used by all the networking features in OpenStack (inside or outside of 
Neutron stadium). This will avoid a lot of redundancy and maintenance issues.

[Proposal]
Currently the features that need FC are: Tap as a service, SFC, QoS, FW, 
BGP/VPN, GBP. There has been a meeting discussion in the Austin Summit 
(https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit). The 
interested party has a follow-on meeting discussion on 5/17/2016 
(http://eavesdrop.openstack.org/meetings/network_common_flow_classifier/2016/network_common_flow_classifier.2016-05-17-17.02.html).
 

According to the consensus of the meeting, a new RFE for flow classifier
should be submitted in neutron-core and we will develop FC as a RFE over
neutron-core.

A general guideline on the FC design is that it should provide a
superset of FC rules used by existing features and the FC rules should
be easy to extend for new features in the future.
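As a rough illustration only, a "superset of FC rules" shared across the listed consumers could be modelled along these lines. Every field name below is hypothetical; no common API or schema has been agreed yet:

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class FlowClassifier:
    """Hypothetical common FC resource covering match fields that the
    listed consumers (SFC, QoS, FWaaS, Tap-aaS, BGP/VPN, GBP) typically
    need. Purely a sketch, not an agreed design."""
    name: str
    ethertype: str = "IPv4"
    protocol: Optional[str] = None                      # e.g. "tcp", "udp", "icmp"
    source_ip_prefix: Optional[str] = None
    destination_ip_prefix: Optional[str] = None
    source_port_range: Optional[Tuple[int, int]] = None
    destination_port_range: Optional[Tuple[int, int]] = None
    # Extension point so future features can add matches without
    # changing the core schema.
    extra_matches: dict = field(default_factory=dict)

fc = FlowClassifier(
    name="http-in",
    protocol="tcp",
    destination_port_range=(80, 80),
)
```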

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583299

Title:
  Add a Common Flow Classifier in Neutron

Status in neutron:
  New

Bug description:
  [Existing problem]
  Currently, multiple Stadium features inside Neutron need flow classifier 
functionality. In the future there could be more Neutron features that will 
need this functionality. Instead of each feature creating its own FC API and 
resource, we should have one common FC API and resource inside Neutron that can 
be used by all the networking features in OpenStack (inside or outside of 
Neutron stadium). This will avoid a lot of redundancy and maintenance issues.

  [Proposal]
  Currently the features that need FC are: Tap as a service, SFC, QoS, FW, 
BGP/VPN, GBP. There has been a meeting discussion in the Austin Summit 
(https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit). The 
interested party has a follow-on meeting discussion on 5/17/2016 
(http://eavesdrop.openstack.org/meetings/network_common_flow_classifier/2016/network_common_flow_classifier.2016-05-17-17.02.html).
 

  According to the consensus of the meeting, a new RFE for flow
  classifier should be submitted in neutron-core and we will develop FC
  as a RFE over neutron-core.

  A general guideline on the FC design is that it should provide a
  superset of FC rules used by existing features and the FC rules should
  be easy to extend for new features in the future.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583289] [NEW] api: page_reverse does not work if limit passed

2016-05-18 Thread Ihar Hrachyshka
Public bug reported:

In API, if page_reverse is passed with limit, then the result is not in
reversed order. This is because common_db_mixin mistakenly applies
.reverse() on the result from the database, breaking the order as
returned by the database backend.
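The faulty pattern can be reproduced outside of Neutron: if the database backend already honours the requested sort direction, reversing the (limited) result a second time restores the original order. A minimal standalone sketch with invented helper names, not the actual common_db_mixin code:

```python
def query_sorted(rows, key, page_reverse=False):
    """Simulate a DB backend that honours the requested sort direction."""
    return sorted(rows, key=lambda r: r[key], reverse=page_reverse)

def get_collection_buggy(rows, key, limit=None, page_reverse=False):
    # Bug: the backend already sorted in reverse, but the caller
    # reverses the limited result again, restoring ascending order.
    result = query_sorted(rows, key, page_reverse)[:limit]
    if page_reverse:
        result = list(reversed(result))
    return result

def get_collection_fixed(rows, key, limit=None, page_reverse=False):
    # Fix: trust the order that the database backend returned.
    return query_sorted(rows, key, page_reverse)[:limit]

rows = [{"id": i} for i in range(5)]
```

With limit=2 and page_reverse=True the fixed version returns the two highest ids in descending order, while the buggy version hands them back ascending.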

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583289

Title:
  api: page_reverse does not work if limit passed

Status in neutron:
  In Progress

Bug description:
  In API, if page_reverse is passed with limit, then the result is not
  in reversed order. This is because common_db_mixin mistakenly applies
  .reverse() on the result from the database, breaking the order as
  returned by the database backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583284] [NEW] disconnect_volume calls are made during a remote rebuild of a volume backed instance

2016-05-18 Thread Lee Yarwood
Public bug reported:

Description
===
disconnect_volume calls are made during a remote rebuild of a volume backed 
instance

Steps to reproduce
==
- Evacuate a volume backed instance.
- disconnect_volume is called for each previously attached volume on the now 
remote node rebuilding the instance.

Expected result
===
disconnect_volume is not called unless the instance was previously running on 
the current host.

Actual result
=
disconnect_volume is called regardless of the instance previously running on 
the current host.

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

   Multinode devstack

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)

   libvirt + KVM

3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   LVM/iSCSI

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)
  
   N/A
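The expected behaviour boils down to a guard: only tear down volume connections if this host actually held them. A hedged sketch of that condition (function and argument names are invented for illustration, not the actual Nova code):

```python
def should_disconnect_volumes(previous_host, current_host):
    """During a rebuild, disconnect previously attached volumes only if
    this host held the previous connections. In an evacuation the
    rebuild runs on a *different* host, so there is nothing local to
    tear down and disconnect_volume must not be called."""
    return previous_host == current_host

# In-place rebuild on the same host: disconnecting is correct.
same_host = should_disconnect_volumes("compute-1", "compute-1")

# Evacuation: instance previously ran on compute-1, rebuild on compute-2.
evacuated = should_disconnect_volumes("compute-1", "compute-2")
```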

** Affects: nova
 Importance: Undecided
 Assignee: Lee Yarwood (lyarwood)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583284

Title:
  disconnect_volume calls are made during a remote rebuild of a volume
  backed instance

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  disconnect_volume calls are made during a remote rebuild of a volume backed 
instance

  Steps to reproduce
  ==
  - Evacuate a volume backed instance.
  - disconnect_volume is called for each previously attached volume on the now 
remote node rebuilding the instance.

  Expected result
  ===
  disconnect_volume is not called unless the instance was previously running on 
the current host.

  Actual result
  =
  disconnect_volume is called regardless of the instance previously running on 
the current host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 Multinode devstack

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)

 libvirt + KVM

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 LVM/iSCSI

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583280] [NEW] Fuel 8 : Not able to attach volume to instance

2016-05-18 Thread Kevin Nguyen
Public bug reported:

Description: 
Used the command line to attach a volume to an instance, and the attach failed.

Steps to reproduce:
cinder create 2 --display-name volume1
nova volume-attach caab638d-d73b-45cc-909e-b99e5bf856f7 
1409db6a-a148-47c4-8e5a-449c94271357 auto


Symptom: attaching the volume failed.

nova volume-attach caab638d-d73b-45cc-909e-b99e5bf856f7 
1409db6a-a148-47c4-8e5a-449c94271357 auto
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-a939cf42-d840-4180-9aba-89d628861303)
[root@fuel8-controller1 ~(keystone_pepsi)]# nova volume-attach 
caab638d-d73b-45cc-909e-b99e5bf856f7 1409db6a-a148-47c4-8e5a-449c94271357 auto
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-76837fac-a4a0-4a56-988f-f781e2ecd196)

Expected result: the volume is successfully attached to the instance.

Actual result: the volume was not attached to the instance

Environment:
Fuel 8 (3 controller nodes + 2 compute nodes + 3 storage nodes).

dpkg -l |grep nova
ii  nova-api 2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - compute API frontend
ii  nova-cert2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - certificate manager
ii  nova-common  2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - common files
ii  nova-conductor   2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - conductor service
ii  nova-consoleauth 2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - Console Authenticator
ii  nova-consoleproxy2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - NoVNC proxy
ii  nova-objectstore 2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - object store
ii  nova-scheduler   2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - virtual machine scheduler
ii  python-nova  2:12.0.0-1~u14.04+mos43
all  OpenStack Compute - libraries
ii  python-novaclient2:2.30.2-1~u14.04+mos3 
all  client library for OpenStack Compute API

Storage type : LVM
cinder --version
1.4.0


Thanks
Kevin

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "Log was collected from : sosreport -o openstack_nova 
--batch"
   
https://bugs.launchpad.net/bugs/1583280/+attachment/4665709/+files/sosreport-fuel8-controller1.irvineqa.local-20160518174012.tar.xz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583280

Title:
  Fuel 8 : Not able to attach volume to instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Description: 
  Used the command line to attach a volume to an instance, and the attach failed.

  Steps to reproduce:
  cinder create 2 --display-name volume1
  nova volume-attach caab638d-d73b-45cc-909e-b99e5bf856f7 
1409db6a-a148-47c4-8e5a-449c94271357 auto

  
  Symptom: attaching the volume failed.

  nova volume-attach caab638d-d73b-45cc-909e-b99e5bf856f7 
1409db6a-a148-47c4-8e5a-449c94271357 auto
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-a939cf42-d840-4180-9aba-89d628861303)
  [root@fuel8-controller1 ~(keystone_pepsi)]# nova volume-attach 
caab638d-d73b-45cc-909e-b99e5bf856f7 1409db6a-a148-47c4-8e5a-449c94271357 auto
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-76837fac-a4a0-4a56-988f-f781e2ecd196)

  Expected result: the volume is successfully attached to the instance.

  Actual result: the volume was not attached to the instance

  Environment:
  Fuel 8 (3 controller nodes + 2 compute nodes + 3 storage nodes).

  dpkg -l |grep nova
  ii  nova-api 2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - compute API frontend
  ii  nova-cert2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - certificate manager
  ii  nova-common  2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - common files
  ii  nova-conductor   2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - conductor service
  ii  nova-consoleauth 2:12.0.0-1~u14.04+mos43  
  all  OpenStack Compute - Console Authenticator
  ii  

[Yahoo-eng-team] [Bug 1583272] [NEW] port_unbound(): net_uuid None not in local_vlan_map

2016-05-18 Thread Igor Meneguitte Ávila
Public bug reported:

Hi,

I use OpenStack Kilo 2015.1.3 on Ubuntu 14.04 with LB + OVS (3 nodes -
controller, network and compute).

nova --version 2.22.0
neutron --version 2.3.11

nova list

| d2230162-be17-4a17-89b8-da8f17ff4406 | vm-cirros-sec-global-http4 | ACTIVE | 
- | Running | demo-net=192.168.1.25, 30.0.0.103 |
| 29a86d69-5058-49dc-bfba-79415082af39 | vm-ubuntu-sec-global-http | ACTIVE | - 
| Running | demo-net=192.168.1.22, 30.0.0.102 |

ping 30.0.0.102
PING 30.0.0.102 (30.0.0.102) 56(84) bytes of data.
64 bytes from 30.0.0.102: icmp_seq=1 ttl=62 time=3.05 ms
64 bytes from 30.0.0.102: icmp_seq=2 ttl=62 time=2.34 ms

 wget 30.0.0.102
--2016-05-16 15:14:02-- http://30.0.0.102/
Connecting to 30.0.0.102:80... connected.
HTTP request sent, awaiting response...

VM receives the wget request, but does not respond. Apache 2 is
installed on the VM.

ssh igoravila@30.0.0.102

VM does not respond SSH.

nova secgroup-list-rules global_http
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-+---+-+---+--+
| tcp | 443 | 443 | 0.0.0.0/0 | |
| udp | 53 | 53 | 0.0.0.0/0 | |
| tcp | 22 | 22 | 0.0.0.0/0 | |
| tcp | 80 | 80 | 0.0.0.0/0 | |
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-+---+-+---+--+

openvswitch-agent.log (netowrk node):

2016-05-16 14:39:15.872 1103 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-1d9fe87f-8c8a-4d29-afcf-51badf621837 ] Configuration for device 
aa0ffc40-690a-4861-a20b-a057d60b363d completed.
2016-05-16 14:39:16.097 1103 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-1d9fe87f-8c8a-4d29-afcf-51badf621837 ] Port 
fed2747d-f68c-4f17-b57e-785f17155254 updated. Details: {u'profile': {}, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'162fdcc2-74f1-4f9b-b7f1-b782acb38dbf', u'segmentation_id': None, 
u'device_owner': u'neutron:LOADBALANCER', u'physical_network': u'external', 
u'mac_address': u'fa:16:3e:25:25:11', u'device': 
u'fed2747d-f68c-4f17-b57e-785f17155254', u'port_security_enabled': True, 
u'port_id': u'fed2747d-f68c-4f17-b57e-785f17155254', u'fixed_ips': 
[{u'subnet_id': u'7641d89b-2356-4bd2-a01d-33679d03c601', u'ip_address': 
u'30.0.0.107'}], u'network_type': u'flat'}
2016-05-16 14:39:16.099 1103 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-1d9fe87f-8c8a-4d29-afcf-51badf621837 ] Assigning 2 as local vlan for 
net-id=162fdcc2-74f1-4f9b-b7f1-b782acb38dbf
2016-05-16 14:39:16.582 1103 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-1d9fe87f-8c8a-4d29-afcf-51badf621837 ] Configuration for device 
fed2747d-f68c-4f17-b57e-785f17155254 completed.
2016-05-16 14:52:08.632 1103 INFO neutron.agent.securitygroups_rpc 
[req-ad9f1b57-c3d1-41b9-a6c1-f040353ff9ee ] Security group member updated 
[u'e5d46ebd-67c7-4cad-aa68-b276367b77af']
2016-05-16 14:52:09.279 1103 INFO neutron.agent.common.ovs_lib 
[req-1d9fe87f-8c8a-4d29-afcf-51badf621837 ] Port 
9ba102a6-a9d6-4385-87b0-656e58282a20 not present in bridge br-int
2016-05-16 14:52:09.279 1103 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-1d9fe87f-8c8a-4d29-afcf-51badf621837 ] port_unbound(): net_uuid None not 
in local_vlan_map
2016-05-16 14:52:09.280 1103 INFO neutron.agent.securitygroups_rpc 
[req-1d9fe87f-8c8a-4d29-afcf-51badf621837 ] Remove device filter for 
[u'9ba102a6-a9d6-4385-87b0-656e58282a20']

Any ideas?

Regards,

Igor Meneguitte Ávila

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583272

Title:
  port_unbound(): net_uuid None not in local_vlan_map

Status in neutron:
  New

Bug description:
  Hi,

  I use OpenStack Kilo 2015.1.3 on Ubuntu 14.04 with LB + OVS (3 nodes
  - controller, network and compute).

  nova --version 2.22.0
  neutron --version 2.3.11

  nova list

  | d2230162-be17-4a17-89b8-da8f17ff4406 | vm-cirros-sec-global-http4 | ACTIVE 
| - | Running | demo-net=192.168.1.25, 30.0.0.103 |
  | 29a86d69-5058-49dc-bfba-79415082af39 | vm-ubuntu-sec-global-http | ACTIVE | 
- | Running | demo-net=192.168.1.22, 30.0.0.102 |

  ping 30.0.0.102
  PING 30.0.0.102 (30.0.0.102) 56(84) bytes of data.
  64 bytes from 30.0.0.102: icmp_seq=1 ttl=62 time=3.05 ms
  64 bytes from 30.0.0.102: icmp_seq=2 ttl=62 time=2.34 ms

   wget 30.0.0.102
  --2016-05-16 15:14:02-- http://30.0.0.102/
  Connecting to 30.0.0.102:80... connected.
  HTTP request sent, awaiting response...

  VM receives the wget request, but does not respond. Apache 2 is
  installed on the VM.

  ssh igoravila@30.0.0.102

  VM does not respond SSH.

  nova secgroup-list-rules global_http
  +-+---+-+---+--+
  | IP Protocol | From Port | To Port | IP Range | Source Group |
  

[Yahoo-eng-team] [Bug 1583266] [NEW] watch_log_file = true badness

2016-05-18 Thread Kevin Fox
Public bug reported:

We had:
watch_log_file = true

Set on our neutron agents. We have DVR enabled and were seeing
significant slowdowns on fip changes. Kevin Benton and I narrowed it
down to watch_log_file = true. It seems to be using blocking calls that
prevent eventlet from performing asyncio properly. Setting it to false
immediately resolved the issue.

Can:
 1. The bug be fixed
 2. The gate checks be updated to set watch_log_file = true by default so that 
it's tested in the future?

Please?


Further details:
Mitaka RDO on CentOS 7.2. DVR enabled, L3HA disabled.

Thanks,
Kevin
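The failure mode described above — a blocking call starving a cooperative scheduler — can be demonstrated with a small asyncio analogy (eventlet greenthreads suffer the same way; the watcher and handler names below are illustrative, not Neutron code):

```python
import asyncio
import time

async def blocking_watcher():
    # Simulates a log-file watcher that uses a blocking call: the
    # whole event loop stalls for the duration of the call.
    time.sleep(0.2)

async def cooperative_watcher():
    # A well-behaved watcher yields control back to the loop instead.
    await asyncio.sleep(0.2)

async def agent_work():
    # Stands in for latency-sensitive agent work, e.g. a FIP update.
    return "done"

async def measure(watcher):
    """Start the watcher, then time how long the agent work takes."""
    start = time.monotonic()
    task = asyncio.ensure_future(watcher())
    await asyncio.sleep(0)   # let the watcher get scheduled first
    await agent_work()
    elapsed = time.monotonic() - start
    await task
    return elapsed

blocked = asyncio.run(measure(blocking_watcher))
unblocked = asyncio.run(measure(cooperative_watcher))
```

With the blocking watcher the unrelated agent work is delayed by the full blocking interval; with the cooperative one it completes almost immediately.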

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583266

Title:
  watch_log_file = true badness

Status in neutron:
  New

Bug description:
  We had:
  watch_log_file = true

  Set on our neutron agents. We have DVR enabled and were seeing
  significant slowdowns on fip changes. Kevin Benton and I narrowed it
  down to watch_log_file = true. It seems to be using blocking calls
  that prevent eventlet from performing asyncio properly. Setting it to
  false immediately resolved the issue.

  Can:
   1. The bug be fixed
   2. The gate checks be updated to set watch_log_file = true by default so 
that it's tested in the future?

  Please?

  
  Further details:
  Mitaka RDO on CentOS 7.2. DVR enabled, L3HA disabled.

  Thanks,
  Kevin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558772] Re: Magic-Search shouldn't exist inside of table structure

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317829
Committed: 
https://git.openstack.org/cgit/openstack/magnum-ui/commit/?id=3bd93a817a5f6d8c3afe70b13d97e46b927b78bf
Submitter: Jenkins
Branch:master

commit 3bd93a817a5f6d8c3afe70b13d97e46b927b78bf
Author: shu-mutou 
Date:   Wed May 18 13:36:30 2016 +0900

Move magic-search bar out of the table structure

This patch moves the magic-search bar to out of the table structure.
Also this patch allows much flexibility in layout.

Change-Id: I0312d79143515b0a82a2b17776fbb502a36b6b03
Closes-Bug: #1558772


** Changed in: magnum-ui
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558772

Title:
  Magic-Search shouldn't exist inside of table structure

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum UI:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Committed

Bug description:
  Currently, the way the Angular Magic-Search directive works, it
  requires being placed in the context of a smart-table.  This is not
  ideal and causes trouble with formatting.

  A good solution would allow the search bar directive to be placed
  outside of the table structure in the markup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583253] [NEW] Rename Raw backend to Flat

2016-05-18 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/279626
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit cff4d78a4a7ddbb18cb875c66b3c56f04a6caea3
Author: Matthew Booth 
Date:   Thu Apr 7 13:40:58 2016 +0100

Rename Raw backend to Flat

As mentioned in a comment (which this patch deletes), calling this
class 'Raw' was confusing, because it is not always raw. It is also a
source of bugs, because some code assumes that the format is always
raw, which it is not. This patch does not fix those bugs.

We rename it to 'Flat', which describes it accurately. We also add
doctext describing what it does.

DocImpact

The config option libvirt.images_type gets an additional value:
'flat'. The effect of this is identical to setting the value 'raw'.

Change-Id: I93f0a2cc568b60c2b3f7509449167f03c3f30fb5

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583253

Title:
  Rename Raw backend to Flat

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/279626
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit cff4d78a4a7ddbb18cb875c66b3c56f04a6caea3
  Author: Matthew Booth 
  Date:   Thu Apr 7 13:40:58 2016 +0100

  Rename Raw backend to Flat
  
  As mentioned in a comment (which this patch deletes), calling this
  class 'Raw' was confusing, because it is not always raw. It is also a
  source of bugs, because some code assumes that the format is always
  raw, which it is not. This patch does not fix those bugs.
  
  We rename it to 'Flat', which describes it accurately. We also add
  doctext describing what it does.
  
  DocImpact
  
  The config option libvirt.images_type gets an additional value:
  'flat'. The effect of this is identical to setting the value 'raw'.
  
  Change-Id: I93f0a2cc568b60c2b3f7509449167f03c3f30fb5

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583011] Re: Ryu 4.2 appears broken with python 3, lock Ryu version at 4.0

2016-05-18 Thread Armando Migliaccio
This is not strictly speaking a Neutron issue.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583011

Title:
  Ryu 4.2 appears broken with python 3, lock Ryu version at 4.0

Status in neutron:
  Invalid

Bug description:
  Just saw a gate failure in neutron-dynamic-routing where the python34
  check job fails with the following error
  http://paste.openstack.org/show/497444/.  The gate job ran against Ryu
  4.2 and failed, but the check job ran against Ryu 4.0. It appears a
  recent change in Ryu
  (https://github.com/osrg/ryu/commit/7d42aecb8d6b4e91e4704fabb1d9eca1d873c148)
  failed to add an implementation of __hash__() in the class RouteFamily
  to go along with an implementation of __eq__(). If we cap Ryu at
  version 4.0 while a fix is made to Ryu we don't have this problem.
  Neutron currently specifies the Ryu version as ryu>=3.30.
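The underlying Python 3 behaviour is general: a class that defines __eq__ without __hash__ has __hash__ set to None and becomes unhashable. A minimal illustration (RouteFamily here is a stand-in class, not Ryu's actual implementation):

```python
class RouteFamily:
    """Stand-in for a class that gained __eq__ but not __hash__."""
    def __init__(self, afi, safi):
        self.afi = afi
        self.safi = safi

    def __eq__(self, other):
        return (self.afi, self.safi) == (other.afi, other.safi)

# Python 3 sets __hash__ to None when a class defines __eq__,
# so instances can no longer go into sets or be dict keys.
assert RouteFamily.__hash__ is None

class HashableRouteFamily(RouteFamily):
    # The fix: pair __eq__ with a consistent __hash__.
    def __hash__(self):
        return hash((self.afi, self.safi))
```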

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583235] [NEW] changes in an instance's description are not dynamically reflected

2016-05-18 Thread Lauren Taylor
Public bug reported:

Issue:
A change in an instance's "description" attribute is not dynamically reflected.

Steps to Reproduce:
1. Update the editable description attribute of the instance using the API
PUT /v2.1/​{tenant_id}​/servers/​{server_id}​

Expected Result:
Instance Description field changes should be dynamically reflected.
A notification should be sent on the instance to indicate a change in the 
instance's description.

Actual Result:
Instance Description field changes are not dynamically reflected.
No notification is sent.
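The expected behaviour amounts to emitting a notification whenever an editable field actually changes value. A hedged sketch of that pattern (the update helper and notifier interface are invented for illustration, not Nova's actual code):

```python
def update_instance(instance, updates, notify):
    """Apply editable-field updates to an instance dict and emit one
    notification covering every field whose value actually changed."""
    changed = {}
    for fld, new_value in updates.items():
        if instance.get(fld) != new_value:
            changed[fld] = (instance.get(fld), new_value)
            instance[fld] = new_value
    if changed:
        notify("instance.update", changed)
    return instance

events = []
instance = {"id": "abc", "description": "old text"}
update_instance(instance, {"description": "new text"},
                lambda event, payload: events.append((event, payload)))
# A no-op update (same value) should not emit a second notification.
update_instance(instance, {"description": "new text"},
                lambda event, payload: events.append((event, payload)))
```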

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583235

Title:
  changes in an instance's description are not dynamically reflected

Status in OpenStack Compute (nova):
  New

Bug description:
  Issue:
  A change in an instance's "description" attribute is not dynamically 
reflected.

  Steps to Reproduce:
  1. Update the editable description attribute of the instance using the API
  PUT /v2.1/​{tenant_id}​/servers/​{server_id}​

  Expected Result:
  Instance Description field changes should be dynamically reflected.
  A notification should be sent on the instance to indicate a change in the 
instance's description.

  Actual Result:
  Instance Description field changes are not dynamically reflected.
  No notification is sent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366501] Re: No update mechanism to report host storage port changes

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366501

Title:
  No update mechanism to report host storage port changes

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  *Description*
  When a volume is being attached to an instance, the compute transfers the 
host ports to the storage driver through cinder inside the 
initialize_connection call. The compute will only pass online ports and will 
not report any ports that are down (linkdown).

  *Issue*
  If a port that was down at attach time comes up afterwards, or if ports 
were physically added at some point, there is no update mechanism to inform 
the storage driver about them so that the driver can utilize them.

  *Solution*
  There should be an update mechanism that runs periodically and reports host 
changes to the driver through cinder.
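The proposed periodic update could reduce to a set difference between the ports reported last time and those online now. A minimal sketch with hypothetical names (the real mechanism would push non-empty diffs to the driver via Cinder):

```python
def detect_port_changes(previously_reported, currently_online):
    """Return (added, removed) host storage ports since the last
    report. A periodic task would call this and, for non-empty diffs,
    issue an update to the storage driver through Cinder."""
    prev, curr = set(previously_reported), set(currently_online)
    return sorted(curr - prev), sorted(prev - curr)

# Example: a port that was linkdown at attach time has since come up.
added, removed = detect_port_changes(
    ["wwpn:10:00:00:00"],
    ["wwpn:10:00:00:00", "wwpn:10:00:00:01"],
)
```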

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363568] Re: Nova scheduler no longer has access to requested_networks

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363568

Title:
  Nova scheduler no longer has access to requested_networks

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  With the switch to nova-conductor being responsible for building the
  instances, the scheduler's select_destinations no longer has access to
  the requested networks.

  That is, when schedule_run_instance() was called in the nova-
  scheduler's process space (i.e., as it was in Icehouse), it had the
  ability to interrogate the networks being requested by the user to
  make more intelligent placement decisions.

  This precludes schedulers from making placement decisions that are
  affected by the networks being requested at deploy time (i.e., because
  the networks aren't associated with the VMs in any way at deploy
  time).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284708] Re: n-cpu under load does not update its status

2016-05-18 Thread Markus Zoeller (markus_z)
** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284708

Title:
  n-cpu under load does not update its status

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  screen-n-sch.txt:
  2014-02-24 22:43:41.502 WARNING nova.scheduler.filters.compute_filter 
[req-ff34935c-c472-47df-ac4a-1286a7944b17 demo demo]  ram:6577 disk:75776 
io_ops:9 instances:14 has not been heard from in a while
  2014-02-24 22:43:41.502 INFO nova.filters 
[req-ff34935c-c472-47df-ac4a-1286a7944b17 demo demo] Filter ComputeFilter 
returned 0 hosts
  2014-02-24 22:43:41.503 WARNING nova.scheduler.driver 
[req-ff34935c-c472-47df-ac4a-1286a7944b17 demo demo] [instance: 
b5a607f0-5280-4033-ba8f-087884d41d28] Setting instance to ERROR state.

  The tempest stress runner with the following example
  https://github.com/openstack/tempest/blob/master/tempest/stress/etc
  /server-create-destroy-test.json can cause this kind of load.

  ./tempest/stress/run_stress.py -t tempest/stress/etc/server-create-
  destroy-test.json -n 1024 -S

  The example config uses only 8 threads. If you would like to increase the
  number of threads, you may need to increase the demo user's quota or
  enable use_tenant_isolation.

  tempest.log:
  INFO: Statistics (per process):
  INFO:  Process 0 (ServerCreateDestroyTest): Run 103 actions (0 failed)
  INFO:  Process 1 (ServerCreateDestroyTest): Run 101 actions (0 failed)
  INFO:  Process 2 (ServerCreateDestroyTest): Run 101 actions (0 failed)
  INFO:  Process 3 (ServerCreateDestroyTest): Run 100 actions (0 failed)
  INFO:  Process 4 (ServerCreateDestroyTest): Run 102 actions (2 failed)
  INFO:  Process 5 (ServerCreateDestroyTest): Run 102 actions (1 failed)
  INFO:  Process 6 (ServerCreateDestroyTest): Run 101 actions (0 failed)
  INFO:  Process 7 (ServerCreateDestroyTest): Run 101 actions (0 failed)
  INFO: Summary:
  INFO - 2014-02-24 22:44:22,713.713 INFO: Run 811 actions (3 failed)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580467] Re: instance boot with invalid availability zone raises 500 InternalServerError

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/315979
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=26dcd0675a584693d923cdb2aec3cc3a591cb897
Submitter: Jenkins
Branch:master

commit 26dcd0675a584693d923cdb2aec3cc3a591cb897
Author: dineshbhor 
Date:   Wed May 11 10:53:58 2016 +

Return HTTP 400 on boot for invalid availability zone

Currently 'nova boot' fails with a 500 InternalServerError
if you pass an invalid availability zone.

Caught InvalidInput exception and raised HTTPBadRequest
exception to return 400 status code.

APIImpact: Return 400 status code for invalid availability
zone.

Change-Id: I7b730e71abbcbcf9ee1f537a84646243e9a2da7c
Closes-Bug: #1580467

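The fix pattern described in the commit message can be sketched as follows. The exception classes are minimal stand-ins for nova.exception.InvalidInput and webob.exc.HTTPBadRequest, and parse_availability_zone is a simplified illustration, not nova's actual parser:

```python
class InvalidInput(Exception):
    """Stand-in for nova.exception.InvalidInput."""

class HTTPBadRequest(Exception):
    """Stand-in for webob.exc.HTTPBadRequest (HTTP 400)."""
    status_code = 400

def parse_availability_zone(value):
    # Simplified: nova rejects zone strings it cannot parse.
    if value is None or value.count(":") > 2:
        raise InvalidInput("Unable to parse availability_zone")
    return value

def create(availability_zone):
    try:
        return parse_availability_zone(availability_zone)
    except InvalidInput as exc:
        # Before the fix this propagated and surfaced as a 500;
        # translating it here returns a proper 400 to the client.
        raise HTTPBadRequest(str(exc))
```

The key point is that the translation happens at the API layer, so the internal exception type never leaks out as an "Unexpected API Error".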

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580467

Title:
  instance boot with invalid availability zone raises 500
  InternalServerError

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  instance boot with invalid availability zone raises 500
  InternalServerError

  Steps to reproduce
  ==

  Command:
  nova boot --flavor 1 --image 9c1618d0-a6ca-4db6-b2a9-38a9a5cc38b4 
--availability-zone invalidzone instance1

  Actual result
  =
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-dcc9cfd6-1ea6-4e18-9acd-faccf54fdfd4)

  Expected result
  ===
  It should give 400 HTTPBadRequest with proper error message.

  n-API LOG:

  2016-05-11 08:59:37.467 ERROR nova.api.openstack.extensions 
[req-dcc9cfd6-1ea6-4e18-9acd-faccf54fdfd4 admin admin] Unexpected exception in 
API method
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 583, in create
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions 
availability_zone, host, node = parse_az(context, availability_zone)
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 517, in parse_availability_zone
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions 
reason="Unable to parse availability_zone")
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions InvalidInput: 
Invalid input received: Unable to parse availability_zone
  2016-05-11 08:59:37.467 TRACE nova.api.openstack.extensions

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1580467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325304] Re: hypervisors.statistics().running_vms count includes shutdown vms

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1325304

Title:
  hypervisors.statistics().running_vms count includes shutdown vms

Status in OpenStack Compute (nova):
  Opinion
Status in python-novaclient:
  Invalid

Bug description:
  Nova client reports:

  In [13]: from novaclient.v1_1 import client

  In [14]: from django.conf import settings

  In [15]: nt = client.Client(settings.OS_USERNAME, settings.OS_PASSWORD,
 settings.OS_TENANT_NAME, settings.OS_AUTH_URL,
 service_type="compute")

  In [16]: nt.hypervisors.statistics().running_vms
  Out[16]: 12

  DB reports:

  mysql> select hostname, availability_zone, vm_state from instances where 
vm_state != 'deleted';
  +----------------+-------------------+----------+
  | hostname       | availability_zone | vm_state |
  +----------------+-------------------+----------+
  | js1            | nova              | active   |
  | js2            | nova              | active   |
  | js3            | nova              | active   |
  | cirros1        | nova              | stopped  |
  | js4            | nova              | stopped  |
  | js5            | nova              | stopped  |
  | jstest1east    | east-zone         | stopped  |
  | jstest1west    | NULL              | active   |
  | randgen-mpv8sw | NULL              | active   |
  | randgen-fbjk98 | NULL              | stopped  |
  | randgen-tvcl9t | NULL              | active   |
  | stratus        | NULL              | active   |
  +----------------+-------------------+----------+
  12 rows in set (0.00 sec)

  Either the field name is misleading, or the data is not being filtered
  properly. As a suggestion, it would be nice to have both a total and a
  running VM count.
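The suggested split into a total and a running count is easy to derive from the non-deleted instance records; a sketch using the rows from the DB output above:

```python
# Derive both counts from non-deleted instance rows (hostname, vm_state),
# instead of one ambiguous "running_vms" figure that also counts
# shutdown VMs. Data mirrors the bug report's DB output.
instances = [
    ("js1", "active"), ("js2", "active"), ("js3", "active"),
    ("cirros1", "stopped"), ("js4", "stopped"), ("js5", "stopped"),
    ("jstest1east", "stopped"), ("jstest1west", "active"),
    ("randgen-mpv8sw", "active"), ("randgen-fbjk98", "stopped"),
    ("randgen-tvcl9t", "active"), ("stratus", "active"),
]
total_vms = len(instances)
running_vms = sum(1 for _, state in instances if state == "active")
```

With the data above, total_vms is 12 (what the statistics call reports today as "running_vms") while only 7 instances are actually active.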

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1325304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328939] Re: Setting instance default_ephemeral_device in Ironic driver should be more intelligent

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328939

Title:
  Setting instance default_ephemeral_device in Ironic driver should be
  more intelligent

Status in Ironic:
  Triaged
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The instance default_ephemeral_device value needs to be set within the
  nova driver to the partition where the ephemeral partition is created.
  We currently hard code this value to /dev/sda1 to duplicate the old
  nova-bm behavior. While this makes things work for TripleO [1], we
  should do something smarter to determine the true partition value to
  set (e.g., a Cirros image value should be /dev/vda1).

  We could consider using something like udev by-label names (e.g.,
  /dev/disk/by-label/NNN). This obviously adds a requirement on udev.

  [1] https://bugs.launchpad.net/ironic/+bug/1324286

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1328939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369563] Re: Keep tracking image association when create volume from image

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369563

Title:
  Keep tracking image association when create volume from image

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When booting an instance from a volume created from an image, the
  instance should reference the image used to create the volume.

  nova show currently displays the following:
  ...
  | image| Attempt to boot from volume - no 
image supplied  |
  ...

  Resources:
   - 
http://docs.openstack.org/user-guide/content/create_volume_from_image_and_boot.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369705] Re: Specify exact CPU topology

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369705

Title:
  Specify exact CPU topology

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The work done in https://blueprints.launchpad.net/nova/+spec/virt-
  driver-vcpu-topology to allow setting CPU topology on guests
  implements what might be called a "best attempt" algorithm -- it
  *tries* to give you the topology requested, but does not do so
  reliably. Absolute upper bounds can be set, but it's not possible to
  specify an *exact* topology.

  This seems to violate the principle of least surprise. As such, there
  should be a spec key that forces the topology specs to be *exact*, and
  produces an error if the requested topology cannot be satisfied.
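The "exact" semantics requested here amount to a simple validation step: the product of sockets, cores and threads must equal the vCPU count, or the request fails. A hedged sketch; the helper name is hypothetical, not an existing nova spec key:

```python
def validate_exact_topology(vcpus, sockets, cores, threads):
    """Fail loudly if the requested CPU topology cannot be satisfied
    exactly, instead of silently falling back to a best attempt."""
    if sockets * cores * threads != vcpus:
        raise ValueError(
            "exact topology %dx%dx%d does not match %d vCPUs"
            % (sockets, cores, threads, vcpus))
    return sockets, cores, threads
```

For example, 2 sockets x 2 cores x 2 threads satisfies an 8-vCPU flavor exactly, while any non-matching product would be rejected rather than approximated.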

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384187] Re: Nova admin user not able to list the resources from all other users other than "nova list"

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384187

Title:
  Nova admin user not able to list the resources from all other users
  other than "nova list"

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  While using nova commands as an admin user, we see that apart from
  "nova list --all-tenants", resources such as image-list, keypair-list,
  flavor-list, or any other resources used by other users cannot be
  displayed.

  Listing all resources from all users is an important use case, since it
  allows the admin user to display any resource from any user and then
  update or delete it.

  Hence there should be a provision to list resources as "--all-tenants"
  for all the resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391029] Re: compute manager update_available_resource interval needs to be configurable

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391029

Title:
  compute manager update_available_resource interval needs to be
  configurable

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The nova compute manager.update_available_resource() periodic task does
  not have a configurable interval. In some cases there may be multiple
  compute services in one process, so multiple periodic tasks run in that
  process, and CPU cycles are mostly occupied by this task because it also
  queries resources from the virtualization layer.

  It would be better to make this periodic task's interval configurable.
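Making the interval configurable would mean feeding a config option into the periodic task's spacing. The sketch below uses a simplified stand-in for a spacing-aware periodic-task decorator (nova's real one lives in oslo.service); the option name update_resources_interval is hypothetical:

```python
# Simplified stand-in for a spacing-aware periodic-task decorator; a
# task runner would invoke the function every `spacing` seconds.

def periodic_task(spacing):
    def decorator(fn):
        fn._periodic_spacing = spacing
        return fn
    return decorator

# Hypothetical option; in practice this would be read from nova.conf.
UPDATE_RESOURCES_INTERVAL = 120

@periodic_task(spacing=UPDATE_RESOURCES_INTERVAL)
def update_available_resource():
    """Expensive: also queries the virtualization layer for resources."""
```

With a configurable spacing, deployments running several compute services in one process could lengthen the interval to stop this task from dominating CPU time.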

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399573] Re: allows to configure disk driver IO policy

2016-05-18 Thread Markus Zoeller (markus_z)
FWIW, this got proposed for Newton with [1], but it is unlikely that it
will be approved for this release as the backlog of work items in Nova
is too big at the moment. Maybe a release later. I'm closing this bug
report as the effort will be driven by the blueprint.

References:
[1] https://review.openstack.org/#/c/230968/

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399573

Title:
  allows to configure disk driver IO policy

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  libvirt allows configuring the disk I/O policy with io=native or
  io=threads, which according to this email clearly improves performance:

  https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html

  We should provide the ability to configure this, as we do for
  disk_cachemode
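For reference, the setting in question is the io attribute on the disk <driver> element of the libvirt domain XML. A sketch of emitting it alongside the cache mode; only the attribute names come from libvirt's schema, the helper itself is illustrative:

```python
import xml.etree.ElementTree as ET

def disk_driver_xml(cache="none", io="native"):
    # Per libvirt's domain XML, io may be "native" or "threads".
    drv = ET.Element("driver", name="qemu", type="qcow2",
                     cache=cache, io=io)
    return ET.tostring(drv, encoding="unicode")
```

Exposing a config knob analogous to disk_cachemode would let operators pick the policy per deployment instead of taking libvirt's default.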

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582103] Re: nova-novncproxy fails to start: NoSuchOptError: no such option in group DEFAULT: verbose

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/316610
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=77f995811fcf32c7f017ac2cbb18e4344661dd70
Submitter: Jenkins
Branch:master

commit 77f995811fcf32c7f017ac2cbb18e4344661dd70
Author: Emilien Macchi 
Date:   Mon May 16 09:24:34 2016 +0200

baseproxy: stop requiring CONF.verbose

Option "verbose" from group "DEFAULT" was deprecated for removal during
Mitaka, and has been removed during Newton.

It has been dropped by oslo.config and nova-novncproxy now fails to
start:
NoSuchOptError: no such option in group DEFAULT: verbose

This patch aims to stop requiring CONF.verbose that does not exist
anymore.

Change-Id: I9533666db73390f28656ffcb1e84fadd51321e91
Closes-Bug: #1582103


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582103

Title:
  nova-novncproxy fails to start: NoSuchOptError: no such option in
  group DEFAULT: verbose

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  nova-novncproxy process fails to start, because of this error:
  NoSuchOptError: no such option in group DEFAULT: verbose

  
  Steps to reproduce
  ==

  1) deploy OpenStack Nova from master and oslo-config 3.9.0
  2) do not configure verbose option in nova.conf, it's deprecated
  3) start nova-novncproxy

  
  Expected result
  ===
  nova-novncproxy should start without error.

  Actual result
  =

  nova-novncproxy starts with error:

  CRITICAL nova [-] NoSuchOptError: no such option in group DEFAULT: verbose
  ERROR nova Traceback (most recent call last):
  ERROR nova   File "/usr/bin/nova-novncproxy", line 10, in 
  ERROR nova sys.exit(main())
  ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", 
line 41, in main
  ERROR nova port=CONF.vnc.novncproxy_port)
  ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", 
line 59, in proxy
  ERROR nova verbose=CONF.verbose,
  ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 
2189, in __getattr__
  ERROR nova raise NoSuchOptError(name)
  ERROR nova NoSuchOptError: no such option in group DEFAULT: verbose

  Environment
  ===

  Nova was deployed by Puppet OpenStack CI using RDO packaging from
  trunk (current master).

  List of packages:
  
http://logs.openstack.org/20/316520/2/check/gate-puppet-openstack-integration-3-scenario001-tempest-centos-7/f2c0699/logs/rpm-qa.txt.gz

  Nova logs: http://logs.openstack.org/20/316520/2/check/gate-puppet-
  openstack-integration-3-scenario001-tempest-centos-7/f2c0699/logs/nova
  /nova-novncproxy.txt.gz

  Nova config: http://logs.openstack.org/20/316520/2/check/gate-puppet-
  openstack-integration-3-scenario001-tempest-
  centos-7/f2c0699/logs/etc/nova/nova.conf.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397879] Re: User is not informed if reboot VM fail

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397879

Title:
  User is not informed if reboot VM fail

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The current workflow of rebooting an instance is:

  1. Reboot an active instance. If succeed, instance is still in active
  state.

  2. Reboot an active instance. If fail, and power_state is running,
  instance is still in active state.

  3. Reboot an active instance. If fail, and power_state is not running,
  instance will become error state.

  4. Reboot an error instance. If succeed, instance will become active
  state.

  5. Reboot an error instance. If fail, instance is still in error
  state.

  
  #1 and #2 are completely opposite results from the user's perspective. In
  particular, a user should always be informed if an operation did not
  complete as expected. However, the instance information output by the Nova
  API does not distinguish #1 from #2, so the user is unable to tell whether
  the reboot operation succeeded or failed.
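The five cases in the description form a small state machine; encoding them makes the ambiguity visible. This is a sketch of the described behaviour, not nova code:

```python
def next_vm_state(current, reboot_succeeded, power_running):
    """Next vm_state for the five reboot cases in the description."""
    if current == "active":
        if reboot_succeeded:
            return "active"  # case 1
        # cases 2 and 3: outcome depends on power_state after the failure
        return "active" if power_running else "error"
    if current == "error":
        # cases 4 and 5
        return "active" if reboot_succeeded else "error"
    raise ValueError("unexpected vm_state: %s" % current)
```

Cases 1 and 2 both yield "active", which is exactly why the user cannot distinguish a successful reboot from a failed one by inspecting the instance state.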

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400233] Re: nova should support the user assigning a specified host for cold migrate

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400233

Title:
  nova should support the user assigning a specified host for cold
  migrate

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  In cold migration, the destination host can only be chosen by the
  scheduler. If the user wants to assign a specific host, the existing
  processing logic cannot fulfil this request, so Nova should support
  this function.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402514] Re: Nova API os-floating-ips doesn't support all_tenants

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402514

Title:
  Nova API os-floating-ips doesn't support all_tenants

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The nova client command supports --all-tenants
  (http://docs.openstack.org/cli-reference/content/novaclient_commands.html)
  but the Nova API service doesn't support it, even for admin.
  Here is an example:

  stack@stack-cnt11:~/workspace/python-congressclient$ . 
../tripleo-incubator/overcloudrc-user  # user: demo
  stack@stack-cnt11:~/workspace/python-congressclient$ nova floating-ip-create
  +------------+-----------+----------+---------+
  | Ip         | Server Id | Fixed Ip | Pool    |
  +------------+-----------+----------+---------+
  | 192.0.2.49 | -         | -        | ext-net |
  +------------+-----------+----------+---------+
  stack@stack-cnt11:~/workspace/python-congressclient$ nova floating-ip-list  
--all-tenants
  +------------+-----------+------------+---------+
  | Ip         | Server Id | Fixed Ip   | Pool    |
  +------------+-----------+------------+---------+
  | 192.0.2.46 | -         | 172.16.0.6 | ext-net |
  | 192.0.2.49 | -         | -          | ext-net |
  +------------+-----------+------------+---------+
  stack@stack-cnt11:~/workspace/python-congressclient$ . 
../tripleo-incubator/overcloudrc  # admin
  stack@stack-cnt11:~/workspace/python-congressclient$ nova floating-ip-list  
--all-tenants
  +------------+-----------+----------+---------+
  | Ip         | Server Id | Fixed Ip | Pool    |
  +------------+-----------+----------+---------+
  | 192.0.2.48 | -         | -        | ext-net |
  +------------+-----------+----------+---------+

  This impacts Congress Nova Driver to populate Floating IPs
  (https://bugs.launchpad.net/congress/+bug/1376462).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403276] Re: Some image properties request may override flavor request

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", an easily-obtainable queue of older
requests that have come in. If you decide to work on this, consider
using a blueprint [1] (maybe with a spec [2]). I also recommend reading
[3] if you haven't already.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403276

Title:
  Some image properties request may override flavor request

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently, if the image properties include hw_serial_port_count,
  hw_cpu_max_sockets, hw_cpu_max_cores or hw_cpu_max_threads, but the
  flavor does not specify the corresponding request, the image request
  will be used to boot the instance.

  This is a potential issue: if the flavor makes no such statement, the
  image request should fail. Otherwise, the image becomes a shortcut
  around the flavor's resource request.
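A minimal sketch of the precedence the reporter argues for (illustrative Python only, not nova's actual code; the key names mirror the properties mentioned above):

```python
# Hedged sketch, not nova code: if the image requests a topology limit
# such as hw_cpu_max_cores but the flavor sets no corresponding limit,
# reject the request instead of silently honouring the image.
def resolve_max_cores(flavor_extra_specs, image_props):
    flavor_max = flavor_extra_specs.get("hw:cpu_max_cores")
    image_max = image_props.get("hw_cpu_max_cores")
    if image_max is None:
        return flavor_max  # nothing requested by the image
    if flavor_max is None:
        # The image asks for a limit the flavor never granted.
        raise ValueError("image requests hw_cpu_max_cores "
                         "but the flavor sets no limit")
    # Never let the image exceed what the flavor allows.
    return min(int(flavor_max), int(image_max))
```

Under this rule the image can only narrow, never widen, what the flavor grants.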

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407939] Re: _poll_volume_usage and _heal_instance_info_cache should allow default 0 value

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

NOTE: 
We are currently working on improving the help text of all config options (bp 
"centralize-config-options-newton") and these options will also get a proper 
help text then. 

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407939

Title:
  _poll_volume_usage and _heal_instance_info_cache should allow default
  0 value

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently the periodic tasks _poll_volume_usage and
  _heal_instance_info_cache do not allow CONF.volume_usage_poll_interval
  and CONF.heal_instance_info_cache_interval, respectively, to be 0;
  otherwise the function is not executed.

  oslo already checks _periodic_spacing: if it is < 0, the periodic task
  is disabled, and if it is 0, the task uses the default spacing (60),
  so there is no need to check it again in the nova manager.
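The spacing rules described above can be sketched as follows (a simplified illustration, not the oslo implementation):

```python
# Assumption: 60 seconds is the default spacing mentioned in the report.
DEFAULT_SPACING = 60

def effective_spacing(interval):
    """Return the task spacing in seconds, or None if disabled."""
    if interval < 0:
        return None              # < 0: periodic task is disabled
    if interval == 0:
        return DEFAULT_SPACING   # = 0: default spacing applies
    return interval              # > 0: use the configured value
```

Since 0 already maps to a sane default, the extra zero-check in the nova manager is redundant.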

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417943] Re: add locked status for servers response

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417943

Title:
  add locked status for servers response

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  We could add the locked status to the server_list response so that
  this attribute is available in Horizon.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/tables.py#L740-L741

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1417943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425583] Re: Volume cannot be attached via VirtIO while booting from ISO

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425583

Title:
  Volume cannot be attached via VirtIO while booting from ISO

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I am using a Nova version 2014.2.2 setup with VirtIO. I am trying to
  attach a volume via VirtIO driver to a machine with an ISO boot image.
  However, nova does not create a configuration for the volume attaching
  it via VirtIO but rather via IDE. The following steps are causing the
  problem.

  First I create a new instance, which works fine:

  nova boot --flavor 1 --image gentoo.iso --nic net-
  id=bea32667-0d4d-4546-877a-9cff5551d164 test-vm

  Then I try to attach a volume via VirtIO. I will explain, why I use
  vdb later:

  nova volume-attach test-vm 28c2ce69-4430-4216-8758-3fe4a3d8d322
  /dev/vdb

  The volume is then attached as /dev/hdb to the instance instead of
  /dev/vdb. The log shows the following:

  2015-02-25 16:44:04.028 10549 DEBUG nova.compute.utils [req-934ddef9
  -103a-4ad0-b05c-12f0857b9174 None] Using /dev/hd instead of /dev/vd
  get_next_device_name /usr/lib64/python2.7/site-
  packages/nova/compute/utils.py:174

  When I try to attach the volume to /dev/vda it fails completely,
  because then Nova tries to attach it to /dev/hda that it assumes to be
  already occupied by the cdrom emulation with the ISO image.

  I looked into the utils.py file and could not really figure out the
  reason why the prefix is created based on the root device name. So I
  am not sure whether this is a bug or intended functionality. In the
  latter case I would appreciate an explanation especially how to
  enforce the attachment via VirtIO only for the one device in question.

  Thank you and best regards

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408563] Re: New spawned instance power state should be synced immediately if power state is not running

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408563

Title:
  New spawned instance power state should be synced immediately if power
  state is not running

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  A newly spawned instance's power state should be synced immediately if
  the power state is not running. If it is not, the vm state does not
  match the power state until the periodic task runs a sync operation,
  and some operations based on that state are not correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410949] Re: list servers should notify user when invalid search options are passed

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410949

Title:
  list servers should notify user when invalid search options are passed

Status in OpenStack Compute (nova):
  Opinion
Status in python-novaclient:
  Invalid

Bug description:
  Given a bank of instances:
  =

  ID | Server
  8cf19a86-c523-4155-9bce-f78837be8c7d | test1.example.com
  b15ef5bf-35c0-4c83-acd9-f8129fcc364b | test2.example.com
  e0787cd2-5781-4007-b42f-ebee6a0bf1ec | test3.example.com

  The following (simplified) code:
  =

  from novaclient.client import Client 
  ...
  nova = Client(2, session=sess)
  ...
  # intentionally take the first item returned in list
  server = nova.servers.list(detailed=True, search_opts={'id': 'FAKE-UUID'})[0]

  Expected:
  
  server.id = null  (because no server would be returned)

  Actual:
  ==
  server.id = '8cf19a86-c523-4155-9bce-f78837be8c7d'

  I am not sure if it is returning the next closest server, the last
  server built, etc. All I know is that I am not getting back the server
  I expected.
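Until the API or client rejects unknown search options, a defensive client-side check avoids acting on the wrong server. `Server` and `find_server_by_id` below are illustrative stand-ins, not novaclient classes:

```python
class Server:
    """Minimal stand-in for a novaclient server object."""
    def __init__(self, id, name):
        self.id = id
        self.name = name

def find_server_by_id(servers, wanted_id):
    """Return the server whose id matches exactly, or None."""
    for server in servers:
        if server.id == wanted_id:
            return server
    return None

servers = [
    Server("8cf19a86-c523-4155-9bce-f78837be8c7d", "test1.example.com"),
    Server("b15ef5bf-35c0-4c83-acd9-f8129fcc364b", "test2.example.com"),
    Server("e0787cd2-5781-4007-b42f-ebee6a0bf1ec", "test3.example.com"),
]
# A bogus id yields None instead of the first server in the bank.
assert find_server_by_id(servers, "FAKE-UUID") is None
```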

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1410949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416015] Re: Add 'user_id' to REST os-simple-tenant-usage output

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416015

Title:
  Add 'user_id' to REST os-simple-tenant-usage output

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Hi,

  Request to add 'user_id' to os-simple-tenant-usage REST output.
  Purpose is to give tenants a bit more auditing capability as to which
  user created and terminated instances. If there is not a better way to
  accomplish this, I believe the patch below will do the trick.

  Thanks,
  -Steve

  --- nova/api/openstack/compute/contrib/simple_tenant_usage.py 2015-01-29 
02:05:53.322814055 +
  +++ nova/api/openstack/compute/contrib/simple_tenant_usage.py.patch   
2015-01-29 02:02:04.136577506 +
  @@ -164,6 +164,7 @@ class SimpleTenantUsageController(object
   info['vcpus'] = instance.vcpus
   
   info['tenant_id'] = instance.project_id
  +info['user_id'] = instance.user_id
   
   # NOTE(mriedem): We need to normalize the start/end times back
   # to timezone-naive so the response doesn't change after the

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498277] Re: Error in admin Network Details page when dhcp_agent_scheduler not enabled

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/313100
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e33cbd6d6cfcdfe893a1d1e4ddf3bef7b8c929c2
Submitter: Jenkins
Branch:master

commit e33cbd6d6cfcdfe893a1d1e4ddf3bef7b8c929c2
Author: Yosef Hoffman 
Date:   Mon May 16 10:12:10 2016 -0400

Don’t error if dhcp_agent_scheduler not enabled

First check to see if dhcp_agent_scheduler is enabled before attempting
to list dhcp agents hosting network. If it is not enabled, just return
an empty list. Also updated tests as necessary.

Change-Id: I5f14a5c364f8f1f362f788fb005fb773cae7b9f4
Closes-Bug: #1498277


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1498277

Title:
  Error in admin Network Details page when dhcp_agent_scheduler not
  enabled

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  If the  Neutron dhcp_agent_scheduler extension is not enabled, when
  viewing the Network Details page for a network under Admin > System >
  Networks, an error message is thrown that says "Unable to list dhcp
  agents hosting network". It should not be an error if there are
  intentionally no DHCP agents.
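The fix in the commit message amounts to checking for the extension before listing agents. An illustrative sketch (the client methods here are stand-ins, not Horizon's actual API wrappers):

```python
def list_dhcp_agents(client, network_id):
    """Return DHCP agents hosting the network, or [] when the
    dhcp_agent_scheduler extension is not enabled."""
    if not client.is_extension_supported("dhcp_agent_scheduler"):
        return []  # no agents by design, not an error
    return client.list_dhcp_agents_hosting_network(network_id)

class FakeClient:
    """Stub client simulating a deployment without the extension."""
    def is_extension_supported(self, alias):
        return False
    def list_dhcp_agents_hosting_network(self, network_id):
        raise RuntimeError("would be an error response from Neutron")

# With the extension absent, the call degrades to an empty list.
assert list_dhcp_agents(FakeClient(), "net-1") == []
```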

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1498277/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419069] Re: Network Performance Problem with GRE using Openvswitch

2016-05-18 Thread Markus Zoeller (markus_z)
@Eren: If I understand your comment #14 correctly, the issue(s)
described in this bug report are solved, I'm setting it to "Fix
released". If this is not the case, it makes sense to open a new bug
report which describes only one specific issue (not multiple at once).

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419069

Title:
  Network Performance Problem with GRE using Openvswitch

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We are having GRE performance issues with Juno installation. From VM
  to network node, we can only get 3Gbit on 10Gbit interface. Finally, I
  tracked and solved the issue but that requires patches to nova and
  neutron-plugin-openvswitch. I am reporting this bug to find a clean
  solution instead of a hack.

  The issue is caused by the MTU setting and the lack of multiqueue net
  support in kvm. As the official openstack documentation suggests, the
  MTU is 1500 by default. This creates a bottleneck in VMs: it is only
  possible to process 3Gbit of network traffic with a 1500 MTU and
  without MQ support enabled in KVM.

  What I did to solve the issue:
  1- Set physical interface (em1) mtu to 9000
  2- Set network_device_mtu = 8950 in nova and neutron.conf (both on 
compute/network nodes)
  3- Set br-int mtu to 8950 manually
  4- Set br-tun mtu to 8976 manually
  5- Set VM MTU to be 8950 in dnsmasq-neutron.conf
  6- Patch nova config code to add  element in 
libvirt.xml
  7- Run "ethtool -L eth0 combined 4" in VMs
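The MTU values in steps 1-5 are consistent with plain GRE-over-IPv4 overhead. A quick check (assumption: a 20-byte outer IPv4 header plus the 4-byte base GRE header, with no optional GRE fields):

```python
PHYSICAL_MTU = 9000   # step 1: em1
OUTER_IP_HEADER = 20  # outer IPv4 header added by the tunnel
GRE_HEADER = 4        # base GRE header, no key/checksum options

br_tun_mtu = PHYSICAL_MTU - OUTER_IP_HEADER - GRE_HEADER
assert br_tun_mtu == 8976  # matches step 4

# The report uses 8950 for the VMs and br-int (steps 2, 3 and 5),
# leaving 26 bytes of headroom below br-tun rather than cutting exact.
headroom = br_tun_mtu - 8950
assert headroom == 26
```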

  With network_device_mtu setting, tap/qvo/qvb in compute nodes and
  internal legs in the router/dhcp namespace in network node can be set
  automatically. However, it only solves half of the problem. I still
  need to set mtu to br-int and br-tun interfaces.

  To enable MQ support in KVM, I needed to patch nova. Currently, there
  is no possible way to set queues in libvirt.xml. Without MQ support,
  even if jumbo frames are enabled, VMs are limited to 5Gbit. This is
  because the [vhost-] process is bound to one CPU and the network load
  cannot be distributed to other CPUs. When MQ is enabled, the [vhost-]
  threads can be distributed across cores, which gives 9.3Gbit
  performance.

  I am adding my ugly hacks just to give some idea on code change. I
  know that it is not a right way. Let's discuss how to properly address
  this issue.

  Should I open another related bug to nova as this issue needs a change
  in nova code as well?

  Note: this is a different bug than
  https://bugs.launchpad.net/bugs/1252900 affecting Juno release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1419069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421471] Re: os-simple-tenant-usage performs poorly with many instances

2016-05-18 Thread Markus Zoeller (markus_z)
It's been a while since the performance was measured and there is no
activity around this bug report. I'm closing it as "Opinion". If this
issue is still observed with the latest release, the report can be
reopened.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421471

Title:
  os-simple-tenant-usage performs poorly with many instances

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The SQL underlying the os-simple-tenant-usage API call results in very
  slow operations when the database has many (20,000+) instances. In
  testing, the objects.InstanceList.get_active_by_window_joined call in
  
nova/api/openstack/compute/contrib/simple_tenant_usage.py:SimpleTenantUsageController._tenant_usages_for_period
  takes 24 seconds to run.

  Some basic timing analysis has shown that the initial query in
  nova/db/sqlalchemy/api.py:instance_get_active_by_window_joined runs in
  *reasonable* time (though still 5-6 seconds) and the bulk of the time
  is spent in the subsequent _instances_fill_metadata call which pulls
  in system_metadata info by using a SELECT with an IN clause containing
  the 20,000 uuids listed, resulting in execution times over 15 seconds.
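One common mitigation (a generic sketch, not the actual nova fix) is to bound the size of each IN clause by chunking the uuid list:

```python
def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

uuids = ["uuid-%d" % n for n in range(20000)]
batches = list(chunked(uuids, 1000))
# 20,000 uuids become 20 bounded queries instead of one huge IN clause.
assert len(batches) == 20
assert sum(len(b) for b in batches) == 20000
```

Each batch then feeds one `SELECT ... WHERE uuid IN (...)` of bounded size, keeping statement parsing and planning cheap.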

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558614] Re: The QoS notification_driver is just a service_provider, and we should look into moving to that

2016-05-18 Thread Miguel Angel Ajo
After thinking about it, they are not the same thing: service providers
were designed to offer multiple backends to pick from at resource
creation time.

This is not the case for the QoS plugin, where the notification driver
is just a plug into the backend implementation; the plugin removes the
burden of DB manipulation and provides QoS policy objects to the
notification driver as they change.

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558614

Title:
  The QoS notification_driver is just a service_provider, and we should
  look into moving to that

Status in neutron:
  Opinion

Bug description:
  The notification_driver parameter for QoS is just a service provider
  that is then called from the QoS plugin when a policy is created,
  changed, or deleted.

  We should look into moving into the standard naming of
  "service_providers" and deprecate the other.

  
  
https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/qos_base.py#L17

  
https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/manager.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438338] Re: servers api should return security group ids instead of names

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438338

Title:
  servers api should return security group ids instead of names

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Creating this from https://bugs.launchpad.net/python-
  novaclient/+bug/1394462

  In nova-network security group names can't be duplicated, but in
  neutron, they can. For this reason, it would be nice to return server
  security groups as ids instead of names.

  Here's is a sample request and response showing the current state:
  "security_groups": [{"name": "default"}],

  DEBUG (connectionpool:415) "GET 
/v2/038c717809174199a297f4ef774e6852/servers/d2b729b8-a626-4050-a756-d5a450c99811
 HTTP/1.1" 200 1757
  DEBUG (session:223) RESP: [200] date: Mon, 30 Mar 2015 17:57:50 GMT 
content-length: 1757 content-type: application/json x-compute-request-id: 
req-d6c33e18-cf62-4848-88ff-e57b64bd55e3
  RESP BODY: {"server": {"status": "ACTIVE", "updated": "2015-03-25T19:04:47Z", 
"hostId": "cfca2250a844c76f4dd5ba369b2550ad3fb07d545e58e395b2271486", 
"OS-EXT-SRV-ATTR:host": "vagrant-ubuntu-trusty-64.localdomain", "addresses": 
{"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0d:0c:b3", "version": 4, 
"addr": "10.0.0.2", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": 
"http://10.0.2.15:8774/v2/038c717809174199a297f4ef774e6852/servers/d2b729b8-a626-4050-a756-d5a450c99811;,
 "rel": "self"}, {"href": 
"http://10.0.2.15:8774/038c717809174199a297f4ef774e6852/servers/d2b729b8-a626-4050-a756-d5a450c99811;,
 "rel": "bookmark"}], "key_name": null, "image": {"id": 
"d0ddfda2-dbdc-48ae-b65e-27ca407d32ce", "links": [{"href": 
"http://10.0.2.15:8774/038c717809174199a297f4ef774e6852/images/d0ddfda2-dbdc-48ae-b65e-27ca407d32ce;,
 "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": 
"active", "OS-EXT-SRV-ATTR:instance_name": "instance-0001", 
"OS-SRV-USG:launched_at": "2015-03-
 25T19:04:47.00", "OS-EXT-SRV-ATTR:hypervisor_hostname": 
"vagrant-ubuntu-trusty-64.localdomain", "flavor": {"id": "42", "links": 
[{"href": "http://10.0.2.15:8774/038c717809174199a297f4ef774e6852/flavors/42;, 
"rel": "bookmark"}]}, "id": "d2b729b8-a626-4050-a756-d5a450c99811", 
"security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at": null, 
"OS-EXT-AZ:availability_zone": "nova", "user_id": 
"0771ff29994b428fa15dfb4ec1b6bc7d", "name": 
"ServerActionsTestJSON-instance-367700261", "created": "2015-03-25T19:04:35Z", 
"tenant_id": "a0c8d64c558b42d5a7b32a229c9f9a3e", "OS-DCF:diskConfig": "MANUAL", 
"os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", 
"progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "True", "metadata": 
{}}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438338/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507761] Re: qos wrong units in max-burst-kbps option (per-second is wrong)

2016-05-18 Thread Miguel Angel Ajo
As per review discussion, this can't really be done until we had API
microversioning, deferring until then.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507761

Title:
  qos wrong units in max-burst-kbps option (per-second is wrong)

Status in neutron:
  Won't Fix

Bug description:
  In neutron in qos bw limit rule table in database and in API extension
  parameter "max-burst-kbps" has got wrong units suggested. Burst should
  be given in kb (kilobits) instead of kbps (kilobits per second)

  
  example of ovs configuration:
  http://openvswitch.org/support/config-cookbooks/qos-rate-limiting/ it is "a 
parameter to the policing algorithm to indicate the maximum amount of data (in 
Kb) that this interface can send beyond the policing rate."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507654] Re: Use VersionedObjectSerializer for RPC push/pull interfaces

2016-05-18 Thread Miguel Angel Ajo
moved to won't fix until other endpoints use OVOs, and then we can
reconsider this.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507654

Title:
  Use VersionedObjectSerializer for RPC push/pull interfaces

Status in neutron:
  Won't Fix

Bug description:
  Instead of reimplementing the serialization in neutron, allow
  oslo.versionedobjects to handle it by using their own serializer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442004] Re: instance group data model allows multiple polices

2016-05-18 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this,
consider using a blueprint [1] (maybe with a spec [2]). I also recommend
reading [3] if you have not done so yet.

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442004

Title:
  instance group data model allows multiple polices

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently only two policies are available, and only one can be used
  with a server group.

  $  nova server-group-create name "affinity" "anti-affinity"
  ERROR (BadRequest): Invalid input received: Conflicting policies configured! 
(HTTP 400) (Request-ID: req-1af553f8-5fd6-4227-870b-be963aad2b62)
  $  nova server-group-create name "affinity" "affinity"
  ERROR (BadRequest): Invalid input received: Duplicate policies configured! 
(HTTP 400) (Request-ID: req-4b697798-89ec-48e1-9840-5e627c08657b)

  The 
https://review.openstack.org/#/c/168372/1/specs/liberty/approved/soft-affinity-for-server-group.rst,cm,
  contains two additional policy names, but

  "These new soft-affinity and soft-anti-affinity policies are mutually
  exclusive with each other and with the other existing server-group
  policies."

  I suggest removing the 'instance_group_policy' table and adding a
  'policy' field to the 'instance_groups' table.
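The two errors in the CLI output above correspond to straightforward validation; a sketch (illustrative only, not nova's code):

```python
# Policies that cannot be combined with each other, per the errors above.
MUTUALLY_EXCLUSIVE = {"affinity", "anti-affinity"}

def validate_policies(policies):
    """Mirror the duplicate/conflict checks shown in the CLI output."""
    if len(policies) != len(set(policies)):
        raise ValueError("Duplicate policies configured!")
    if len(MUTUALLY_EXCLUSIVE.intersection(policies)) > 1:
        raise ValueError("Conflicting policies configured!")
    return True

assert validate_policies(["affinity"]) is True
```

Since validation only ever admits a single effective policy, a scalar 'policy' column suffices.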

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583184] [NEW] [RFE] bgp-dynamic-routing router association

2016-05-18 Thread YAMAMOTO Takashi
Public bug reported:

for some use cases and/or backend implementations,
it's more straightforward to associate a bgp speaker with a router
rather than with networks, as the current api does.

cf. https://review.openstack.org/#/c/317028/  (midonet vendor extension)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-bgp rfe

** Tags added: l3-bgp

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583184

Title:
  [RFE] bgp-dynamic-routing router association

Status in neutron:
  New

Bug description:
  for some use cases and/or backend implementations,
  it's more straightforward to associate a bgp speaker with a router
  rather than with networks, as the current api does.

  cf. https://review.openstack.org/#/c/317028/  (midonet vendor
  extension)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583181] [NEW] Unable to create instance from volume

2016-05-18 Thread Lluis Gifre
Public bug reported:

Scenario: OpenStack Liberty on Ubuntu 14.04.

Unable to create instance from volume.
I'm using the same flavor used to create the original instance from image.

Horizon reports:
Volume is smaller than the minimum size specified in image metadata.
Volume size is 1073741824 bytes, minimum size is 3221225472 bytes.
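The numbers in that message decode to a 1 GiB volume checked against a 3 GiB min_disk; an arithmetic check only (note the puzzle: step 3 below requested a 5 GB device, yet the recorded volume is 1 GiB):

```python
GiB = 2 ** 30
volume_bytes = 1073741824    # reported volume size
min_disk_bytes = 3221225472  # reported minimum from image metadata

assert volume_bytes == 1 * GiB
assert min_disk_bytes == 3 * GiB
# The volume is smaller than min_disk, so the request is rejected.
assert volume_bytes < min_disk_bytes
```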

Steps to reproduce the issue:

1. Upload image
   name=myImg
   source=Image Location
   
location=https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
   format=QCOW2
   architecture=AMD64
   minimumDisk=3 GB
   minimumRAM=1024 MB
   copyData=True

2. Create flavor
   name=myFlv
   vCPUs=1
   RAM=1024 MB
   RootDisk=5 GB
   Ephemeral Disk=0 GB
   Swap Disk=0 MB

3. Create instance
   name=inst1
   flavor=myFlv
   instanceCount=1
   instanceBootSource=Boot from image (creates new volume)
   image=myImg
   deviceSize=5 GB
>> creates volume 753a90bd-889e-4fe1-9083-ace513402f97

4. Terminate instance inst1

5. Create instance
   name=inst2
   flavor=myFlv
   instanceCount=1
   instanceBootSource=Boot from volume
   volume=753a90bd-889e-4fe1-9083-ace513402f97

>> Throws Error 1:
Volume is smaller than the minimum size specified in image metadata.
Volume size is 1073741824 bytes, minimum size is 3221225472 bytes.
(HTTP 400) (Request-ID: req-8e65f375-06e8-4b24-a714-052febd891c1)

>> Throws Error 2:
Error: Unable to launch instance named "test".

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583181

Title:
  Unable to create instance from volume

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Scenario: OpenStack Liberty on Ubuntu 14.04.

  Unable to create instance from volume.
  I'm using the same flavor used to create the original instance from image.

  Horizon reports:
  Volume is smaller than the minimum size specified in image metadata.
  Volume size is 1073741824 bytes, minimum size is 3221225472 bytes.

  Steps to reproduce the issue:

  1. Upload image
 name=myImg
 source=Image Location
 
location=https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
 format=QCOW2
 architecture=AMD64
 minimumDisk=3 GB
 minimumRAM=1024 MB
 copyData=True

  2. Create flavor
 name=myFlv
 vCPUs=1
 RAM=1024 MB
 RootDisk=5 GB
 Ephemeral Disk=0 GB
 Swap Disk=0 MB

  3. Create instance
 name=inst1
 flavor=myFlv
 instanceCount=1
 instanceBootSource=Boot from image (creates new volume)
 image=myImg
 deviceSize=5 GB
  >> creates volume 753a90bd-889e-4fe1-9083-ace513402f97

  4. Terminate instance inst1

  5. Create instance
 name=inst2
 flavor=myFlv
 instanceCount=1
 instanceBootSource=Boot from volume
 volume=753a90bd-889e-4fe1-9083-ace513402f97

  >> Throws Error 1:
  Volume is smaller than the minimum size specified in image metadata.
  Volume size is 1073741824 bytes, minimum size is 3221225472 bytes.
  (HTTP 400) (Request-ID: req-8e65f375-06e8-4b24-a714-052febd891c1)

  >> Throws Error 2:
  Error: Unable to launch instance named "test".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374999] Re: iSCSI volume detach does not correctly remove the multipath device descriptors

2016-05-18 Thread Ryan Beisner
This bug was fixed in the package nova - 1:2015.1.4-0ubuntu2
---

 nova (1:2015.1.4-0ubuntu2) trusty-kilo; urgency=medium
 .
   * d/p/fix-iscsi-detach.patch (LP: #1374999)
 - Clear latest path for last remaining iscsi disk to ensure
   disk is properly removed.
 .
 nova (1:2015.1.4-0ubuntu1) trusty-kilo; urgency=medium
 .
   * New upstream stable release (LP: #1580334).
   * d/p/skip-proxy-test.patch: Skip test_ssl_server and test_two_servers as
 they are hitting ProxyError during package builds.
 .
 nova (1:2015.1.3-0ubuntu1) trusty-kilo; urgency=medium
 .
   * New upstream stable release (LP: #1559215).
 .
 nova (1:2015.1.2-0ubuntu2) vivid; urgency=medium
 .
   * d/control: Bump oslo.concurrency to >= 1.8.2 (LP: #1518016).
 .
 nova (1:2015.1.2-0ubuntu1) vivid; urgency=medium
 .
   * Resynchronize with stable/kilo (68e9359) (LP: #1506058):
 - [68e9359] Fix quota update in init_instance on nova-compute restart
 - [d864603] Raise InstanceNotFound when save FK constraint fails
 - [db45b1e] Give instance default hostname if hostname is empty
 - [61f119e] Relax restrictions on server name
 - [2e731eb] Remove unnecessary 'context' param from quotas reserve method
 call
 - [5579928] Updated from global requirements
 - [08d1153] Don't expect meta attributes in object_compat that aren't in 
the
 db obj
 - [5c6f01f] VMware: pass network info to config drive.
 - [17b5052] Allow to use autodetection of volume device path
 - [5642b17] Delete orphaned instance files from compute nodes
 - [8110cdc] Updated from global requirements
 - [1f5b385] Hyper-V: Fixes serial port issue on Windows Threshold
 - [24251df] Handle FC LUN IDs greater 255 correctly on s390x architectures
 - [dcde7e7] Update obj_reset_changes signatures to match
 - [e16fcfa] Unshelving volume backed instance fails
 - [8fccffd] Make pagination tolerate a deleted marker
 - [587092c] Fix live-migrations usage of the wrong connector information
 - [8794b93] Don't check flavor disk size when booting from volume
 - [c1ad497] Updated from global requirements
 - [0b37312] Hyper-V: Removes old instance dirs after live migration
 - [2d571b1] Hyper-V: Fixes live migration configdrive copy operation
 - [07506f5] Hyper-V: Fix SMBFS volume attach race condition
 - [60356bf] Hyper-V: Fix missing WMI namespace issue on Windows 2008 R2
 - [83fb8cc] Hyper-V: Fix virtual hard disk detach
 - [6c857c2] Updated from global requirements
 - [0313351] Compute: replace incorrect instance object with dict
 - [9724d50] Don't pass the service catalog when making glance requests
 - [b5020a0] libvirt: Kill rsync/scp processes before deleting instance
 - [3f337f8] Support host type specific block volume attachment
 - [cb2a8fb] Fix serializer supported version reporting in object_backport
 - [701c889] Execute _poll_shelved_instances only if shelved_offload_time is
 > 0
 - [eb3b1c8] Fix rebuild of an instance with a volume attached
 - [e459add] Handle unexpected clear events call
 - [8280575] Support ssh-keygen of OpenSSH 6.8
 - [9a51140] Kilo-Removing extension "OS-EXT-VIF-NET" from v2.1 ext list
 - [b3f7b77] Fix wrong check when use image in local
 - [b13726b] Fix race between resource audit and cpu pinning
* debian/patches/not-check-disk-size.patch: Dropped no longer needed.
 .
 nova (1:2015.1.1-0ubuntu2) vivid; urgency=medium
 .
   [ Corey Bryant ]
   * d/rules: Prevent dh_python2 from guessing dependencies.
 .
   [ Liang Chen ]
   * d/p/not-check-disk-size.patch: Fix booting from volume error
 when flavor disk too small (LP: #1457517)
 .
 nova (1:2015.1.1-0ubuntu1) vivid; urgency=medium
 .
   * Resynchronize with stable/kilo (d8a470d) (LP: #1481008):
 - [e6e39e1] Remove incorrect Instance 1.18 relationship for PciDevice 1.2
 - [a55ea8c] Fix the incorrect PciDeviceList version number
 - [e56aed8] Add support for forcing migrate_flavor_data
 - [ccd002b] Fix migrate_flavor_data string substitution
 - [124b501] Allow libvirt cleanup completion when serial ports already 
released
 - [4908d46] Fixed incorrect dhcp_server value during nova-network creation
 - [0cf44ff] Fixed nova-network dhcp-hostsfile update during live-migration
 - [dc6af6b] libvirt: handle code=38 + sigkill (ebusy) in destroy()
 - [6e22a8b] hypervisor support matrix: add kvm on system z in kilo release
 - [e013ebf] Fix max_number for migrate_flavor data
 - [2b5fe5d] Reduce window for allocate_fixed_ip / release_fixed_ip race in 
nova-net
 - [cd6353a] Mark ironic credential config as secret
 - [48a6217] Ensure to store context in thread local after spawn/spawn_n
 - [fc7f1ab] Store context in local store after spawn_n
 - [199f0ab] Fixes TypeError when libvirt version is 
BAD_LIBVIRT_CPU_POLICY_VERSIONS
 - [1f4088d] Add 'docker' to the 

[Yahoo-eng-team] [Bug 1382079] Re: [SRU] Project selector not working

2016-05-18 Thread Ryan Beisner
This bug was fixed in the package horizon - 1:2015.1.4-0ubuntu2
---

 horizon (1:2015.1.4-0ubuntu2) trusty-kilo; urgency=medium
 .
   * d/p/remove-can-access-caching.patch (LP: #1382079): Remove session
 caching of can_access call results which was disabling the project
 selector.


** Changed in: cloud-archive/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382079

Title:
  [SRU] Project selector not working

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Vivid:
  Won't Fix
Status in horizon source package in Wily:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * Not able to switch projects by the project dropdown list.

  [Test Case]

  1 - enable Identity V3 in local_settings.py
  2 - Log in on Horizon
  3 - make sure that the SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  [Regression Potential]

   * None

  When you try to select a new project on the project dropdown, the
  project doesn't change. The commit below introduced this bug on
  Horizon's master and passed test verification.

  
https://github.com/openstack/horizon/commit/16db58fabad8934b8fbdfc6aee0361cc138b20af

  From what I've found so far, the context being received in the
  decorator seems to be the old context, with the token to the previous
  project. When you take the decorator out, the context received by the
  "can_access" function receives the correct context, with the token to
  the new project.

  Steps to reproduce:

  1 - Enable Identity V3 (to have a huge token)
  2 - Log in on Horizon (lots of permissions loaded on session)
  3 - Certify that you SESSION_BACKEND is "signed_cookies"
  4 - Try to change project on the dropdown

  The project shall remain the same.
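A toy reproduction of the caching pitfall the fix removes (function and key names are illustrative, not Horizon's code): memoising can_access results in the session means an answer computed with the old project's token keeps being returned after the user switches projects.

```python
def can_access(session, policy_check, rule, token):
    # Session-cached policy answer: once a rule is answered with the
    # old project's token, the stale result survives a project switch.
    cache = session.setdefault("can_access", {})
    if rule in cache:
        return cache[rule]
    cache[rule] = policy_check(rule, token)
    return cache[rule]
```

Calling this a second time with a token scoped to the new project returns the stale cached answer without ever re-evaluating the policy, which matches the behaviour described above.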

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1382079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583156] [NEW] nova image-delete HTTP exception thrown: Unexpected API Error.

2016-05-18 Thread SHI Peiqi
Public bug reported:

when I did:   nova image-delete


2016-05-18 20:27:05.446 ERROR nova.api.openstack.extensions [req-0d41d500-38d8-4e4f-b1e2-91a8ec0ec965 admin demo] Unexpected exception in API method
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/images.py", line 87, in show
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     image = self._image_api.get(context, id)
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/api.py", line 93, in get
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     show_deleted=show_deleted)
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/glance.py", line 282, in show
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     include_locations=include_locations)
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/glance.py", line 512, in _translate_from_glance
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     include_locations=include_locations)
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/glance.py", line 596, in _extract_attributes
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     output[attr] = getattr(image, attr) or 0
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/glanceclient/openstack/common/apiclient/base.py", line 490, in __getattr__
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     self.get()
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/glanceclient/openstack/common/apiclient/base.py", line 512, in get
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     {'x_request_id': self.manager.client.last_request_id})
2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions AttributeError: 'HTTPClient' object has no attribute 'last_request_id'
2016-05-18 20:27:05.448 INFO nova.api.openstack.wsgi [req-0d41d500-38d8-4e4f-b1e2-91a8ec0ec965 admin demo] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
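The crash happens because this glanceclient code path assumes every client object exposes a last_request_id attribute. A defensive lookup along these lines (a sketch of one possible fix, using a hypothetical stand-in class, not the actual patch) avoids the AttributeError:

```python
class LegacyHTTPClient(object):
    """Stand-in for an old glanceclient HTTPClient that predates the
    last_request_id attribute (hypothetical class for illustration)."""


def safe_last_request_id(client):
    # getattr with a default returns None instead of raising the
    # AttributeError shown in the traceback when the attribute is
    # missing on older client objects.
    return getattr(client, "last_request_id", None)
```

The caller can then pass the possibly-None request id through unchanged instead of crashing the API method.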

** Affects: nova
 Importance: Undecided
 Assignee: SHI Peiqi (uestc-shi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => SHI Peiqi (uestc-shi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583156

Title:
  nova image-delete HTTP exception thrown: Unexpected API Error.

Status in OpenStack Compute (nova):
  New

Bug description:
  when I did:   nova image-delete


  2016-05-18 20:27:05.446 ERROR nova.api.openstack.extensions [req-0d41d500-38d8-4e4f-b1e2-91a8ec0ec965 admin demo] Unexpected exception in API method
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/images.py", line 87, in show
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions     image = self._image_api.get(context, id)
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/image/api.py", line 93, in get
  2016-05-18 20:27:05.446 TRACE nova.api.openstack.extensions

[Yahoo-eng-team] [Bug 1583142] [NEW] Roles inheritance for groups is not visible in user's role assignments

2016-05-18 Thread Dmitri
Public bug reported:

If I applied role inheritance to a group GR-A in scope of project PR-A:

/v3/OS-
INHERIT/projects/PR-A/groups/GR-A/roles/ROLE-A/inherited_to_projects

this role assignment is listed in the result of:

/v3/role_assignments?scope.project.id=PR-A&group.id=GR-A


but is not in the result of:

/v3/role_assignments?scope.project.id=PR-A&user.id=USR-A

whereby USR-A is a member of the group GR-A.

BUT it is part of result of the query:

/v3/role_assignments?scope.project.id=SUB-PR-A&user.id=USR-A

whereby SUB-PR-A is a child of PR-A.

I think the inherited roles assignment should be valid in the project
scope of PR-A for both groups and users.
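The reporter's expectation can be modelled in a few lines (names and data shapes are illustrative, not Keystone's implementation): filtering assignments by a user and a project should also surface inherited roles granted on that project to any group the user belongs to.

```python
def expected_effective_assignments(assignments, user, user_groups, project):
    """Return assignments visible to `user` on `project`, counting
    grants made either directly to the user or to one of the user's
    groups.  Toy model of the behaviour the bug asks for."""
    result = []
    for a in assignments:
        in_scope = a["project"] == project
        via_user = a.get("user") == user
        via_group = a.get("group") in user_groups
        if in_scope and (via_user or via_group):
            result.append(a)
    return result
```

Under this model, the inherited ROLE-A grant to GR-A on PR-A appears for USR-A on PR-A itself, not only on the child project SUB-PR-A.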

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: assignment keystone os-inherit role

** Description changed:

  If I applied role inheritance to a group GR-A in scope of project PR-A:
- {code}
- /v3/OS-INHERIT/projects/PR-A/groups/GR-A/roles/ROLE-A/inherited_to_projects 
- {code}
+ 
+ /v3/OS-
+ INHERIT/projects/PR-A/groups/GR-A/roles/ROLE-A/inherited_to_projects
+ 
  this role assignment is listed in the result of:
- {code}
+ 
  /v3/role_assignments?scope.project.id=PR-A=GR-A
- {code}
+ 
  
  but is not in the result of:
- {code}
+ 
  /v3/role_assignments?scope.project.id=PR-A=USR-A
- {code}
+ 
  whereby USR-A is a member of the group GR-A.
- {code}
+ 
  BUT it is part of result of the query:
- {code}
+ 
  /v3/role_assignments?scope.project.id=SUB-PR-A=USR-A
- {code}
+ 
  whereby SUB-PR-A is a child of PR-A.
  
  I think the inherited roles assignment should be valid in the project
  scope of PR-A for both groups and users.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1583142

Title:
  Roles inheritance for groups is not visible in user's role assignments

Status in OpenStack Identity (keystone):
  New

Bug description:
  If I applied role inheritance to a group GR-A in scope of project
  PR-A:

  /v3/OS-
  INHERIT/projects/PR-A/groups/GR-A/roles/ROLE-A/inherited_to_projects

  this role assignment is listed in the result of:

  /v3/role_assignments?scope.project.id=PR-A=GR-A

  
  but is not in the result of:

  /v3/role_assignments?scope.project.id=PR-A=USR-A

  whereby USR-A is a member of the group GR-A.

  BUT it is part of result of the query:

  /v3/role_assignments?scope.project.id=SUB-PR-A=USR-A

  whereby SUB-PR-A is a child of PR-A.

  I think the inherited roles assignment should be valid in the project
  scope of PR-A for both groups and users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1583142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583145] [NEW] live-migration monitoring is not working properly

2016-05-18 Thread Joseph Lanoux
Public bug reported:

Live-migration monitoring is based on the memory migration only, which
means that if a live-migration is taking a long time to migrate a disk,
only the live_migration_progress_timeout parameter will be taken into
account and will supersede the live_migration_completion_timeout
parameter. In other words, because live_migration_progress_timeout is
logically smaller than live_migration_completion_timeout, the latter will
never be used except in a case where the disk migration is fast and the
memory migration is slow.
Steps to reproduce:
- live-migrate an instance with lots of data on its ephemeral disk
- observe that the live-migration is aborted because of 
live_migration_progress_timeout only.
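The reported interaction can be sketched like this (illustrative logic, not Nova's monitor code; parameter names follow the bug report). Because "progress" is derived from memory-copy statistics only, a long disk copy looks stalled and trips the progress timeout before the completion timeout is ever consulted:

```python
def should_abort(elapsed_s, stalled_s, progress_timeout_s,
                 completion_timeout_s):
    """Decide whether to abort a live-migration.

    stalled_s: seconds since the (memory-based) progress counter last
    advanced.  A long disk transfer keeps this growing even though data
    is still moving.
    """
    if progress_timeout_s and stalled_s > progress_timeout_s:
        return "progress timeout"
    if completion_timeout_s and elapsed_s > completion_timeout_s:
        return "completion timeout"
    return None
```

With a typical configuration where the progress timeout is smaller than the completion timeout, the first branch fires for any slow disk migration, matching the behaviour observed in the steps above.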

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- live-migration monitoring is working properly
+ live-migration monitoring is not working properly

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583145

Title:
  live-migration monitoring is not working properly

Status in OpenStack Compute (nova):
  New

Bug description:
  Live-migration monitoring is based on the memory migration only, which
  means that if a live-migration is taking a long time to migrate a
  disk, only the live_migration_progress_timeout parameter will be taken
  into account and will supersede the live_migration_completion_timeout
  parameter. In other words, because live_migration_progress_timeout is
  logically smaller than live_migration_completion_timeout, the latter
  will never be used except in a case where the disk migration is fast
  and the memory migration is slow.

  Steps to reproduce:
  - live-migrate an instance with lots of data on its ephemeral disk
  - observe that the live-migration is aborted because of 
live_migration_progress_timeout only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572474] Re: [Pluggable IPAM] Deadlock on simultaneous update subnet and ip allocation from subnet

2016-05-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/309067
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=310074b2d457a6de2fdf141d4ede6b6044efc002
Submitter: Jenkins
Branch:master

commit 310074b2d457a6de2fdf141d4ede6b6044efc002
Author: Pavel Bondar 
Date:   Thu Apr 21 17:31:13 2016 +0300

Check if pool update is needed in reference driver

Commit 6ed8f45fdf529bacae32b074844aa1320b005b51 had some negative impact
on concurrent ip allocations. To make ipam driver aware of subnet
updates (mostly for thirdparty drivers) ipam driver is always called with
allocation pools even if pools are not changed.

Current way of handling that event is deleting old pools and creating
new pools. But on scale it may cause issues, because of this:
- deleting allocation pools removes availability ranges by foreign key;
- any ip allocation modifies availability range;
These events concurrently modify availability range records, causing
deadlocks.

This fix prevents deleting and recreating pools and availability ranges
in cases where allocation pools are not changed. So it eliminates
negative impact on concurrency added by always calling ipam driver on
subnet update.
This fix aims to provide backportable solution to be used with
6ed8f45fdf529bacae32b074844aa1320b005b51.

Complete solution that eliminates concurrent modifications in
availability range table is expected to be delivered with
ticket #1543094, but it will not be backportable because of the scope of
the change.

Change-Id: I29e03a79c34b150a822697f7b556ed168a57c064
Related-Bug: #1534625
Closes-Bug: #1572474
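The idea of the fix can be sketched as a pool-comparison guard (illustrative code, not the committed change): only delete and recreate allocation pools when they actually differ, so unchanged subnet updates no longer churn the availability-range rows that concurrent IP allocations also touch.

```python
def pools_changed(current_pools, requested_pools):
    """Compare stored vs requested allocation pools.

    Pools are dicts with 'start' and 'end' keys (an assumed shape for
    this sketch).  Normalising to sorted (start, end) tuples makes the
    comparison order-independent.
    """
    def norm(pools):
        return sorted((p["start"], p["end"]) for p in pools)
    return norm(current_pools) != norm(requested_pools)
```

A driver would call this before the delete/recreate step and return early when it is False, which removes the deadlock window described in the bug.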


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572474

Title:
  [Pluggable IPAM] Deadlock on simultaneous update subnet and ip
  allocation from subnet

Status in neutron:
  Fix Released

Bug description:
  Observed in logs 'Lock wait timeout exceeded; try restarting transaction' 
[1], when two requests are concurrently executed in neutron:
  - request A calls update subnet req-5f9fc363-4b22-48e0-97e2-504aa7c3dda3
  - request B calls create port on the same subnet 
req-ccd11684-ad2b-4937-a3c1-dc46aaa36b2d
  As a result, both requests failed.

  Request A tries to delete 'ipamallocationpools' for subnet_id and it 
effectively removes 'ipamavailabilityranges' by foreign key.
  Request B allocates ip and modifies av_range record in 
'ipamavailabilityranges'.
  So it looks like a collision caused by concurrent access to the
'ipamavailabilityranges' table.

  [1] http://logs.openstack.org/23/181023/68/check/gate-tempest-dsvm-
  neutron-full/a9180e0/logs/screen-q-svc.txt.gz#_2016-04-19_15_43_05_837

  StackTrace with both requests failed:
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
[req-5f9fc363-4b22-48e0-97e2-504aa7c3dda3 tempest-NetworksIpV6Test-714183411 -] 
DBAPIError exception wrapped from (pymysql.err.InternalError) (1205, u'Lock 
wait timeout exceeded; try restarting transaction') [SQL: u'DELETE FROM 
ipamallocationpools WHERE ipamallocationpools.ipam_subnet_id = 
%(ipam_subnet_id_1)s'] [parameters: {u'ipam_subnet_id_1': 
u'0b896671-8cc2-4e08-bbfe-05655e6c479c'}]
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
context)
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 158, in 
execute
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters result 
= self._query(query)
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 308, in _query
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 820, in 
query
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 

[Yahoo-eng-team] [Bug 1583107] [NEW] live-migration abortion parameters

2016-05-18 Thread Joseph Lanoux
Public bug reported:

The live_migration_downtime, live_migration_downtime_steps and
live_migration_downtime_delay default values are not honored.

Those parameters tune when a live-migration is aborted and must be higher
than the LIVE_MIGRATION_DOWNTIME_MIN, LIVE_MIGRATION_DOWNTIME_STEPS_MIN
and LIVE_MIGRATION_DOWNTIME_DELAY_MIN minimum values.

However, those parameters have default values that are higher than the
minimum values but they are overridden by the minimum ones.

Steps to reproduce:
- live-migrate an instance
- simulate a downtime
- observe that the minimum values are used instead of the default ones.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583107

Title:
  live-migration abortion parameters

Status in OpenStack Compute (nova):
  New

Bug description:
  The live_migration_downtime, live_migration_downtime_steps and
  live_migration_downtime_delay default values are not honored.

  Those parameters tune when a live-migration is aborted and must be higher
  than the LIVE_MIGRATION_DOWNTIME_MIN,
  LIVE_MIGRATION_DOWNTIME_STEPS_MIN and
  LIVE_MIGRATION_DOWNTIME_DELAY_MIN minimum values.

  However, those parameters have default values that are higher than the
  minimum values but they are overridden by the minimum ones.

  Steps to reproduce:
  - live-migrate an instance
  - simulate a downtime
  - observe that the minimum values are used instead of the default ones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583096] [NEW] ml2 supported extensions list is inaccurate

2016-05-18 Thread YAMAMOTO Takashi
Public bug reported:

depending on which mech drivers and service plugins are configured,
ml2 supported extensions list can be inaccurate.

for example, it has dhcp_agent_scheduler hardcoded.
but it isn't appropriate for (some of?) non agent based mech drivers.
(note: there are tempest test cases which expect at least one agent to
actually be registered if the extension is enabled)

another example is address-scope.  it actually involves l3
implementation. (routing decision)

similar bug for l3: https://bugs.launchpad.net/neutron/+bug/1450067
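One way to make the list accurate, sketched here with an illustrative attribute name (not the real mech-driver API), is to derive the advertised extensions from what the configured drivers actually support instead of hardcoding them:

```python
def advertised_extensions(base_extensions, mech_drivers):
    """Intersect the candidate extension list with each driver's
    declared support, so e.g. dhcp_agent_scheduler is not advertised
    when an agentless driver cannot honour it.  The
    'supported_extensions' attribute is assumed for this sketch."""
    exts = set(base_extensions)
    for driver in mech_drivers:
        # Drivers that declare nothing are treated as supporting all.
        exts &= getattr(driver, "supported_extensions", exts)
    return sorted(exts)
```

A similar filter could consult the configured service plugins for extensions like address-scope that depend on the l3 implementation.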

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583096

Title:
  ml2 supported extensions list is inaccurate

Status in neutron:
  New

Bug description:
  depending on which mech drivers and service plugins are configured,
  ml2 supported extensions list can be inaccurate.

  for example, it has dhcp_agent_scheduler hardcoded.
  but it isn't appropriate for (some of?) non agent based mech drivers.
  (note: there are tempest test cases which expect at least one agent to
  actually be registered if the extension is enabled)

  another example is address-scope.  it actually involves l3
  implementation. (routing decision)

  similar bug for l3: https://bugs.launchpad.net/neutron/+bug/1450067

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583083] [NEW] Horizon instance table should display instance owner

2016-05-18 Thread Oleg Lodygensky
Public bug reported:

Horizon instance table displays instance name, image name, IP address
etc. But not the owner, even in detailed instance view.

It would be very helpful that the instance table lists owner too.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583083

Title:
  Horizon instance table should display instance owner

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon instance table displays instance name, image name, IP address
  etc. But not the owner, even in detailed instance view.

  It would be very helpful that the instance table lists owner too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308839] Re: ProcessExecutionError exception is not defined in exception.py now

2016-05-18 Thread Martin Pitt
There is an SRU waiting in the trusty-proposed queue for this. Please
clarify in which Ubuntu release(s) this is already fixed, or upload the
fix to yakkety, so that the trusty SRU can proceed.

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308839

Title:
  ProcessExecutionError exception is not defined in exception.py now

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  New
Status in nova source package in Trusty:
  New

Bug description:
  ubuntu@devstack-master:/opt/stack/nova$ grep -r 
exception.ProcessExecutionError *  
  nova/virt/libvirt/volume.py:except exception.ProcessExecutionError as 
exc:

  commit 5e016846708ef62c92dcf607f03c67c36ce5c23f has been fixed all
  other wrong used places, but this one is added after this change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357368] Re: Source side post Live Migration Logic cannot disconnect multipath iSCSI devices cleanly

2016-05-18 Thread Martin Pitt
There is an SRU waiting in the trusty-proposed queue for this. Please
clarify in which Ubuntu release(s) this is already fixed, or upload the
fix to yakkety, so that the trusty SRU can proceed.

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357368

Title:
  Source side post Live Migration Logic cannot disconnect multipath
  iSCSI devices cleanly

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in nova package in Ubuntu:
  New
Status in nova source package in Trusty:
  New

Bug description:
  When a volume is attached to a VM in the source compute node through
  multipath, the related files in /dev/disk/by-path/ are like this

  stack@ubuntu-server12:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.50:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.a5-lun-24
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-24

  The information on its corresponding multipath device is like this
  stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:24 sdl 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:24 sdj 8:144 active undef running

  But when the VM is migrated to the destination, the related
  information is like the following example since we CANNOT guarantee
  that all nodes are able to access the same iSCSI portals and the same
  target LUN number. And the information is used to overwrite
  connection_info in the DB before the post live migration logic is
  executed.

  stack@ubuntu-server13:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b5-lun-100
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-100

  stack@ubuntu-server13:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:100 sdf 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:100 sdg 8:144 active undef running

  As a result, if the post live migration logic on the source side uses
  the portal IP, target IQN and LUN number to find the devices to clean
  up, it may use 192.168.3.51, iqn.1992-04.com.emc:cx.fnm00124500890.a5
  and 100. However, the correct values are 192.168.3.50,
  iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 24.

  Similar philosophy in (https://bugs.launchpad.net/nova/+bug/1327497)
  can be used to fix it: Leverage the unchanged multipath_id to find
  correct devices to delete.
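
  The suggested approach can be sketched as below; the helper name and the
  shape of the device records are illustrative, not nova's actual API:

```python
def find_devices_to_clean(multipath_id, list_multipath_devices):
    """Match on the multipath WWID, which stays constant across hosts,
    instead of the (portal IP, target IQN, LUN) triple, which can differ
    between the source and destination connection_info.

    list_multipath_devices is a hypothetical callable returning dicts
    shaped like {"wwid": ..., "paths": [...]}.
    """
    for dev in list_multipath_devices():
        if dev["wwid"] == multipath_id:
            return dev["paths"]
    return []

# Data from this report: the WWID 3600601602ba03400921130967724e411 is
# identical on both hosts even though the LUN numbers differ (24 vs 100).
source_devices = [{"wwid": "3600601602ba03400921130967724e411",
                   "paths": ["sdj", "sdl"]}]
paths = find_devices_to_clean("3600601602ba03400921130967724e411",
                              lambda: source_devices)
```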

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475411] Re: During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2016-05-18 Thread Martin Pitt
There is an SRU waiting in the trusty-proposed queue for this. Please
clarify in which Ubuntu release(s) this is already fixed, or upload the
fix to yakkety, so that the trusty SRU can proceed.

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  New
Status in nova source package in Trusty:
  New

Bug description:
  The post_live_migration step for Nova libvirt driver is currently
  making a bad assumption about the source and destination connector
  information. The destination connection info may be different from the
  source which ends up causing LUNs to be left dangling on the source as
  the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occuring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins 
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge "Port crypto to Python 3"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583069] [NEW] neutron.tests.unit.agent.ovsdb.native.test_connection raises exception in different thread

2016-05-18 Thread Jakub Libosvar
Public bug reported:

neutron.tests.unit.agent.ovsdb.native.test_connection raises following
exception due to mock object passed to poll() function.

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 804, in __bootstrap_inner
self.run()
  File "/usr/lib64/python2.7/threading.py", line 757, in run
self.__target(*self.__args, **self.__kwargs)
  File "neutron/agent/ovsdb/native/connection.py", line 110, in safe_run
self.run()
  File "neutron/agent/ovsdb/native/connection.py", line 122, in run
self.poller.fd_wait(self.txns.alert_fileno, poller.POLLIN)
  File "/usr/lib/python2.7/site-packages/ovs/poller.py", line 122, in fd_wait
self.poll.register(fd, events)
  File "/usr/lib/python2.7/site-packages/ovs/poller.py", line 55, in register
assert isinstance(fd, int)
AssertionError

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jakub Libosvar (libosvar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583069

Title:
  neutron.tests.unit.agent.ovsdb.native.test_connection raises exception
  in different thread

Status in neutron:
  New

Bug description:
  neutron.tests.unit.agent.ovsdb.native.test_connection raises following
  exception due to mock object passed to poll() function.

  Exception in thread Thread-1:
  Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 804, in __bootstrap_inner
  self.run()
File "/usr/lib64/python2.7/threading.py", line 757, in run
  self.__target(*self.__args, **self.__kwargs)
File "neutron/agent/ovsdb/native/connection.py", line 110, in safe_run
  self.run()
File "neutron/agent/ovsdb/native/connection.py", line 122, in run
  self.poller.fd_wait(self.txns.alert_fileno, poller.POLLIN)
File "/usr/lib/python2.7/site-packages/ovs/poller.py", line 122, in fd_wait
  self.poll.register(fd, events)
File "/usr/lib/python2.7/site-packages/ovs/poller.py", line 55, in register
  assert isinstance(fd, int)
  AssertionError
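
  The assertion fires because the mocked transaction queue hands a Mock object
  to ovs.poller where a real file descriptor (an int) is expected. A minimal
  reproduction, and the usual remedy of stubbing the attribute with an int:

```python
from unittest import mock

def register(fd):
    # Same guard as ovs/poller.py: only integer fds are accepted.
    assert isinstance(fd, int)
    return fd

txns = mock.Mock()
# txns.alert_fileno is itself a Mock, so registering it fails exactly
# as in the traceback above.
failed = False
try:
    register(txns.alert_fileno)
except AssertionError:
    failed = True

# Giving the mock a real integer fileno avoids the crash in the thread.
txns.alert_fileno = 0
ok = register(txns.alert_fileno)
```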

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552708] Re: OpenStack Compute node linux bridge error in configuration

2016-05-18 Thread Darragh O'Reilly
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552708

Title:
  OpenStack Compute node linux bridge error in configuration

Status in neutron:
  Invalid

Bug description:
  On the virtual machines we have two network interfaces: eth0
  (10.70.70.0/24) and eth1 (10.71.71.0/24).

  I use eth1 for management and eth0 for public access.

  The DHCP reply packets are not able to reach the TAP interface on the
  compute node because the bridge sends the packet back to eth0.

  Eth0 tells the bridge that it has the instance's interface, so the
  packet is sent back to eth0. Eth0 sends it again to the bridge
  interface and they get into a loop. In the end the packet gets
  dropped.

  I am using only Linux bridge, no OVS.

  Here is the brctl showmacs output from the bridge on the compute node:

  $ brctl showmacs brq466a96cb-7a
  port no mac addr            is local?   ageing timer
2 00:0c:29:18:5d:36   no 0.26
2 00:0c:29:a0:61:0a   yes0.00
2 00:0c:29:a0:61:0a   yes0.00
2 00:19:56:be:2d:ad   no 1.85
2 00:19:56:be:2d:ae   no 1.85
2 64:9e:f3:35:b0:37   no 0.00
2 fa:16:3e:76:b2:65   no18.17
2 fa:16:3e:f4:33:9c   no11.16
1 fe:16:3e:76:b2:65   yes0.00
1 fe:16:3e:76:b2:65   yes0.00

  The interfaces are in promiscuous mode! I cannot understand why eth0
  is registering the instance on its port, port 2!

  I used tcpdump to verify the presence of the DHCP reply, and it does
  arrive at the bridge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583024] Re: neutron-heat: ERROR: HEAT-E99001

2016-05-18 Thread Michal Adamczyk
** Also affects: magnum
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583024

Title:
  neutron-heat: ERROR: HEAT-E99001

Status in Magnum:
  New
Status in neutron:
  New
Status in puppet-magnum:
  New

Bug description:
  Hi,

  I am using Mitaka release on RDO (CentOS 7.2). I am using Puppet to
  deploy OpenStack.

  As magnum interacts with heat,neutron I am addressing this bug here,
  sorry if this is a wrong place.

  I have the following error while creating swarm bay in magnum:

  ERROR: HEAT-E99001 Service neutron is not available for resource type
  OS::Neutron::HealthMonitor, reason: Service endpoint not in service
  catalog.

  I can create lb, it's working, etc.

  AFAIK magnum supports only lbaasv1. More info inside this thread:
  http://lists.openstack.org/pipermail/openstack-dev/2016-May/094714.html

  Question: does lbaas v1 plugin work in Mitaka?

  I did enable lbaasv2 for Mitaka as apparently this is the only
  supported version according to the docs:
  http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum/+bug/1583024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520008] Re: JSON Encoder crashes on settings with translations: "ValueError: Circular reference detected"

2016-05-18 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Invalid

** Changed in: horizon
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1520008

Title:
  JSON Encoder crashes on settings with translations: "ValueError:
  Circular reference detected"

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Description:

  Horizon uses the REST_API_REQUIRED_SETTINGS key in the
  local_settings.py configuration file In order to make the contained
  configuration available to the the client side angular code.

  Some configurations include ugettext objects which fail to serialize
  when passing through http.

  eg. 
  OPENSTACK_IMAGE_BACKEND = {
  'image_formats': [
  ('aki', _('AKI - Amazon Kernel Image'))
  ]
  }

  Steps to Reproduce:
  * Add 'OPENSTACK_IMAGE_BACKEND' to the 'REST_API_REQUIRED_SETTINGS' key in 
the local_settings.py.
  * call the settings endpoint: http://localhost:8000/api/settings
  * Django throws a 500 server exception with the following stack trace: see 
attached

  Impact:
  As a workaround, we are setting configuration settings as constants, instead 
of getting it from the settings service. See 
  
https://review.openstack.org/#/c/236042/22/openstack_dashboard/static/app/core/images/images.module.js

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1520008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583028] [NEW] [fullstack] Add new tests for router functionality

2016-05-18 Thread venkata anil
Public bug reported:

 Add fullstack tests for following router(legacy, HA, DVR, HA with DVR)
use cases

 1) test east west traffic
 2) test snat and floatingip

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583028

Title:
  [fullstack] Add new tests for router functionality

Status in neutron:
  New

Bug description:
   Add fullstack tests for following router(legacy, HA, DVR, HA with
  DVR) use cases

   1) test east west traffic
   2) test snat and floatingip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583024] [NEW] neutron-heat: ERROR: HEAT-E99001

2016-05-18 Thread Michal Adamczyk
Public bug reported:

Hi,

I am using Mitaka release on RDO (CentOS 7.2). I am using Puppet to
deploy OpenStack.

As magnum interacts with heat,neutron I am addressing this bug here,
sorry if this is a wrong place.

I have the following error while creating swarm bay in magnum:

ERROR: HEAT-E99001 Service neutron is not available for resource type
OS::Neutron::HealthMonitor, reason: Service endpoint not in service
catalog.

I can create lb, it's working, etc.

AFAIK magnum supports only lbaasv1. More info inside this thread:
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094714.html

Question: does lbaas v1 plugin work in Mitaka?

I did enable lbaasv2 for Mitaka as apparently this is the only
supported version according to the docs:
http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html
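
For reference, enabling LBaaS v2 in Mitaka generally means loading the v2
service plugin in neutron.conf; the plugin path below follows the Mitaka
networking guide and should be checked against the installed neutron-lbaas
version:

```ini
[DEFAULT]
# LBaaS v2 service plugin (append to any existing service_plugins);
# v1 and v2 cannot be enabled at the same time.
service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
```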

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: puppet-magnum
 Importance: Undecided
 Status: New


** Tags: heat lbaas magnum neutron

** Also affects: puppet-magnum
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583024

Title:
  neutron-heat: ERROR: HEAT-E99001

Status in neutron:
  New
Status in puppet-magnum:
  New

Bug description:
  Hi,

  I am using Mitaka release on RDO (CentOS 7.2). I am using Puppet to
  deploy OpenStack.

  As magnum interacts with heat,neutron I am addressing this bug here,
  sorry if this is a wrong place.

  I have the following error while creating swarm bay in magnum:

  ERROR: HEAT-E99001 Service neutron is not available for resource type
  OS::Neutron::HealthMonitor, reason: Service endpoint not in service
  catalog.

  I can create lb, it's working, etc.

  AFAIK magnum supports only lbaasv1. More info inside this thread:
  http://lists.openstack.org/pipermail/openstack-dev/2016-May/094714.html

  Question: does lbaas v1 plugin work in Mitaka?

  I did enabled lbaasv2 for Mitaka as apparently this is the only
  supported version according to the docs:
  http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583025] [NEW] Warning in extract_messages tox environment

2016-05-18 Thread Zhenguo Niu
Public bug reported:

When running the extract_messages tox environment the following warning
is issued:

WARNING:test command found but not installed in testenv
  cmd: /bin/rm
  env: /home/jenkins/workspace/gate-horizon-pep8/.tox/pep8
Maybe you forgot to specify a dependency? See also the whitelist_externals 
envconfig setting.
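
The warning points at the usual tox remedy: declare the external command in
whitelist_externals for that environment. A sketch of the tox.ini change
(the environment name is assumed):

```ini
[testenv:extract_messages]
# Tell tox that rm is an expected external command for this environment,
# silencing the "test command found but not installed in testenv" warning.
whitelist_externals = rm
```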

** Affects: horizon
 Importance: Undecided
 Assignee: Zhenguo Niu (niu-zglinux)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Zhenguo Niu (niu-zglinux)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583025

Title:
  Warning in extract_messages tox environment

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When running the extract_messages tox environment the following
  warning is issued:

  WARNING:test command found but not installed in testenv
cmd: /bin/rm
env: /home/jenkins/workspace/gate-horizon-pep8/.tox/pep8
  Maybe you forgot to specify a dependency? See also the whitelist_externals 
envconfig setting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582706] Re: Help message for --request-format is confusing

2016-05-18 Thread Hirofumi Ichihara
This is in client side.

@Sharat Sharma: Yes, it was deprecated in Mitaka cycle[1]. You can just
remove it in Newton cycle.

[1]:https://review.openstack.org/#/c/144439/

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Wishlist

** Changed in: python-neutronclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582706

Title:
  Help message for --request-format is confusing

Status in python-neutronclient:
  Confirmed

Bug description:
   --request-format {json}
  DEPRECATED! Only JSON request format is supported.

  This is confusing. It would be more helpful if it read "XML DEPRECATED!
  Only JSON request format is supported."

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1582706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582937] Re: OVS agent should have an option to skip OpenFlow controller clean-up

2016-05-18 Thread YAMAMOTO Takashi
** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582937

Title:
  OVS agent should have an option to skip OpenFlow controller clean-up

Status in neutron:
  Opinion

Bug description:
  Currently the OVS agent will cleanup all OpenFlow controller
  configuration when it starts up in ofctl mode. If other services have
  configured a OpenFlow controller, then that configuration will be
  removed as a side-effect. Thus, it would be useful to have an option
  to skip this cleanup if ofctl is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580927] Re: spans beyond the subnet for /31 and /32 in IPam

2016-05-18 Thread wujun
I think it is not a bug. When you need to create a point-to-point connection via 
a subnet, /30 is the recommended CIDR.
In a given subnet, the first IP address is considered the network ID and the 
last IP address is considered the broadcast address; only the other IP 
addresses can be allocated.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580927

Title:
  spans beyond the subnet for /31 and /32 in IPam

Status in neutron:
  Invalid

Bug description:
  summary: When needing to create a point-to-point connection via a
  subnet, generally a /31 is the recommended CIDR. Neutron supports
  /31 via disabling DHCP and the gateway on a subnet. However, IPam does
  not provide the allocation pool of the subnet properly and a VM cannot
  be created.

  Steps to reproduce

  root@ubuntu:~# neutron subnet-create  --disable-dhcp --no-gateway 
--cidr=10.14.0.20/31 --name bug-subnet 69c5342a-5526-4257-880a-f8fd2e633de9
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  |  |
  | cidr  | 10.14.0.20/31|
  | dns_nameservers   |  |
  | enable_dhcp   | False|
  | gateway_ip|  |
  | host_routes   |  |
  | id| 63ce4e26-9838-4fa3-b2d5-e59f88f5b7ce |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | bug-subnet   |
  | network_id| 69c5342a-5526-4257-880a-f8fd2e633de9 |
  | subnetpool_id |  |
  | tenant_id | ca02fc470acc4a27b468dff32ee850b2 |
  +---+--+
  root@ubuntu:~# neutron subnet-update --allocation-pool 
start=10.14.0.20,end=10.14.0.21 bug-subnet
  The allocation pool 10.14.0.20-10.14.0.21 spans beyond the subnet cidr 
10.14.0.20/31.

  Recommended Fix:

  in db/ipam_backend_mixin.py :: function: validate_allocation_pools
  ~~lines: 276

      if start_ip < subnet_first_ip or end_ip > subnet_last_ip:
          LOG.info(_LI("Found pool larger than subnet "
                       "CIDR:%(start)s - %(end)s"),
                   {'start': start_ip, 'end': end_ip})
          raise n_exc.OutOfBoundsAllocationPool(
              pool=ip_pool,
              subnet_cidr=subnet_cidr)

  This if block should have a special case for IPv4 /31 and /32, using
  "<= and >=":
      start_ip <= subnet_first_ip or end_ip >= subnet_last_ip
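
  Python's stdlib ipaddress module illustrates why /31 needs special-casing:
  in a /31 there is no separate network/broadcast address (RFC 3021), so the
  pool boundaries coincide with, rather than exceed, the subnet boundaries:

```python
import ipaddress

net = ipaddress.ip_network("10.14.0.20/31")
first = net.network_address     # 10.14.0.20
last = net.broadcast_address    # 10.14.0.21

# The pool from the report covers exactly the two usable /31 addresses,
# so it does not "span beyond" the subnet: start equals the first
# address and end equals the last, neither exceeds them.
pool_start = ipaddress.ip_address("10.14.0.20")
pool_end = ipaddress.ip_address("10.14.0.21")
spans_beyond = pool_start < first or pool_end > last
```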

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1578520] Re: attache many port as soon as possible, some ports was lost in "instance_info_caches"

2016-05-18 Thread bailin.zhang
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1578520

Title:
  attache many port as soon as possible, some ports was lost in
  "instance_info_caches"

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In version mitaka

  For example:
  1. A VM has two ports.
  2. Create two new ports.
  3. Attach the new ports to the VM as quickly as possible.
  4. The two new ports were attached to the VM, but there are only three
  ports in the "network_info" of table "instance_info_caches".

  Detailed information: http://paste.openstack.org/show/496176/

  When attaching a port, the update of table "instance_info_caches" is
  delayed. If another port is attached quickly, the current
  instance_info_caches does not contain the first port, so the first
  port is lost.
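
  The described failure is a classic lost update on the cache row; as a
  sketch (plain Python, not nova code), the interleaving looks like:

```python
# Simulated instance_info_caches row.
cache = {"network_info": ["port-a", "port-b"]}

def read_cache():
    return list(cache["network_info"])

def write_cache(ports):
    cache["network_info"] = ports

# Two concurrent attaches each read the cache before either has written.
snapshot1 = read_cache()             # ['port-a', 'port-b']
snapshot2 = read_cache()             # ['port-a', 'port-b'], soon stale

write_cache(snapshot1 + ["port-c"])  # first attach lands
write_cache(snapshot2 + ["port-d"])  # second attach overwrites it

# 'port-c' is lost: only three ports remain, matching the report.
result = cache["network_info"]
```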

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1578520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583011] [NEW] Ryu 4.2 appears broken with python 3, lock Ryu version at 4.0

2016-05-18 Thread Ryan Tidwell
Public bug reported:

Just saw a gate failure in neutron-dynamic-routing where the python34
check job fails with the following error
http://paste.openstack.org/show/497444/.  The gate job ran against Ryu
4.2 and failed, but the check job ran against Ryu 4.0. It appears a
recent change in Ryu
(https://github.com/osrg/ryu/commit/7d42aecb8d6b4e91e4704fabb1d9eca1d873c148)
failed to add an implementation of __hash__() in the class RouteFamily
to go along with an implementation of __eq__(). If we cap Ryu at version
4.0 while a fix is made to Ryu we don't have this problem. Neutron
currently specifies the Ryu version as ryu>=3.30.
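
The Python 3 breakage is easy to reproduce: a class that defines __eq__
without __hash__ has its hash set to None in Python 3, so instances cannot go
into sets or dict keys. RouteFamily below is a toy stand-in, not Ryu's class:

```python
class RouteFamily:
    # Toy stand-in for Ryu's RouteFamily: __eq__ defined, __hash__ not.
    def __init__(self, afi, safi):
        self.afi, self.safi = afi, safi

    def __eq__(self, other):
        return (self.afi, self.safi) == (other.afi, other.safi)

try:
    {RouteFamily(1, 1)}           # Python 3: TypeError, unhashable type
    unhashable = False
except TypeError:
    unhashable = True

# The fix is a matching __hash__ implementation:
class FixedRouteFamily(RouteFamily):
    def __hash__(self):
        return hash((self.afi, self.safi))

fixed = {FixedRouteFamily(1, 1)}  # now works as a set member
```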

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583011

Title:
  Ryu 4.2 appears broken with python 3, lock Ryu version at 4.0

Status in neutron:
  New

Bug description:
  Just saw a gate failure in neutron-dynamic-routing where the python34
  check job fails with the following error
  http://paste.openstack.org/show/497444/.  The gate job ran against Ryu
  4.2 and failed, but the check job ran against Ryu 4.0. It appears a
  recent change in Ryu
  (https://github.com/osrg/ryu/commit/7d42aecb8d6b4e91e4704fabb1d9eca1d873c148)
  failed to add an implementation of __hash__() in the class RouteFamily
  to go along with an implementation of __eq__(). If we cap Ryu at
  version 4.0 while a fix is made to Ryu we don't have this problem.
  Neutron currently specifies the Ryu version as ryu>=3.30.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583004] [NEW] Incorrect availability zone shown by nova show command

2016-05-18 Thread Praveen Kumar Dubey
Public bug reported:

Nova show command shows the availability zone as "AZ1" but the nova
database shows it as 'nova' after live migrating the VM to another node
which is in a different availability zone. This is for the Juno release.

nova show a5bbc4fa-7ffb-42c3-b60e-b54885227bdd | grep availability_zone
| OS-EXT-AZ:availability_zone  | AZ1   |


node:~# mysql -e "use nova; select availability_zone from instances where 
uuid='a5bbc4fa-7ffb-42c3-b60e-b54885227bdd';"
+---+
| availability_zone |
+---+
| nova  |
+---+

This is causing a problem when trying to resize the VM from m1.large to
m1.xlarge flavor. The node (with availability zone 'nova') doesn't have
enough memory left. After live migrating this VM to another node (with
availability zone 'AZ1'), Nova still tries to launch it on the same node
(with availability zone 'nova') during the resize procedure. After
changing the DB value to 'AZ1', the VM was resized successfully.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583004

Title:
  Incorrect availability zone shown by nova show command

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova show command shows the availability zone as "AZ1" but the nova
  database shows it as 'nova' after live migrating the VM to another
  node which is in a different availability zone. This is for the Juno
  release.

  nova show a5bbc4fa-7ffb-42c3-b60e-b54885227bdd | grep availability_zone
  | OS-EXT-AZ:availability_zone  | AZ1   |

  
  node:~# mysql -e "use nova; select availability_zone from instances where 
uuid='a5bbc4fa-7ffb-42c3-b60e-b54885227bdd';"
  +---+
  | availability_zone |
  +---+
  | nova  |
  +---+

  This is causing a problem when trying to resize the VM from m1.large
  to m1.xlarge flavor. The node (with availability zone 'nova') doesn't
  have enough memory left. After live migrating this VM to another node
  (with availability zone 'AZ1'), Nova still tries to launch it on the
  same node (with availability zone 'nova') during the resize procedure.
  After changing the DB value to 'AZ1', the VM was resized successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp