[Yahoo-eng-team] [Bug 1630449] [NEW] nova-api - newton - python34 - TypeError: memoryview: str object does not have the buffer interface

2016-10-04 Thread Matthew Thode
Public bug reported:

Log file is attached, but it looks like nova still isn't ready for
python3

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-api py34 fail log"
   
https://bugs.launchpad.net/bugs/1630449/+attachment/4754721/+files/nova-py34-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630449

Title:
  nova-api - newton - python34 - TypeError: memoryview: str object does
  not have the buffer interface

Status in OpenStack Compute (nova):
  New

Bug description:
  Log file is attached, but it looks like nova still isn't ready for
  python3
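  For readers triaging this, a minimal standalone reproduction of this
  class of error on Python 3 (illustrative only; not taken from the
  attached log):

```python
# On Python 3, memoryview() requires a bytes-like object; passing a str
# raises exactly the TypeError named in this bug's title.
data = b"payload"
mv = memoryview(data)            # bytes expose the buffer interface
assert mv.tobytes() == b"payload"

try:
    memoryview("payload")        # str does not have the buffer interface
    raised = False
except TypeError:
    raised = True
```

  The usual fix on the nova side is to encode str to bytes (e.g.
  value.encode('utf-8')) before handing data to buffer-based APIs.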

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630446] [NEW] postgresql online_data_migrations fail for m->n

2016-10-04 Thread Matthew Thode
Public bug reported:

Info is in the log file attached.  Will be opening another bug for
possibly related failures.

I should mention that db sync went fine

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova pg live migration log"
   
https://bugs.launchpad.net/bugs/1630446/+attachment/4754719/+files/nova-pg-live-migrate-fail.log

** Description changed:

  Info is in the log file attached.  Will be opening another bug for
  possibly related failures.
+ 
+ I should mention that db sync went fine

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630446

Title:
  postgresql online_data_migrations fail for m->n

Status in OpenStack Compute (nova):
  New

Bug description:
  Info is in the log file attached.  Will be opening another bug for
  possibly related failures.

  I should mention that db sync went fine

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630446/+subscriptions



[Yahoo-eng-team] [Bug 1629862] Re: DevStack: stack.sh stuck at neutron-ovs-cleanup for a long time

2016-10-04 Thread venkatamahesh
** Project changed: networking-sfc => neutron

** Description changed:

  stack.sh hanging at neutron-ovs-cleanup step while running devstack
- master with networking-sfc master
+ master with neutron master

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629862

Title:
  DevStack: stack.sh stuck at neutron-ovs-cleanup for a long time

Status in neutron:
  New

Bug description:
  stack.sh hanging at neutron-ovs-cleanup step while running devstack
  master with neutron master

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1629862/+subscriptions



[Yahoo-eng-team] [Bug 1630448] [NEW] postgres newton post upgrade failure DBAPIError exception wrapped from (psycopg2.ProgrammingError) column build_requests.instance_uuid does not exist

2016-10-04 Thread Matthew Thode
Public bug reported:

This could be related to https://bugs.launchpad.net/nova/+bug/1630446
but I am reporting it here because it might not be :D

The error was encountered after an m->n migration; db sync went fine.  The
error log is attached.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova pg post migrate log"
   
https://bugs.launchpad.net/bugs/1630448/+attachment/4754720/+files/nova-pg-post-migrate-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630448

Title:
  postgres newton post upgrade failure DBAPIError exception wrapped from
  (psycopg2.ProgrammingError) column build_requests.instance_uuid does
  not exist

Status in OpenStack Compute (nova):
  New

Bug description:
  This could be related to https://bugs.launchpad.net/nova/+bug/1630446
  but I am reporting it here because it might not be :D

  The error was encountered after an m->n migration; db sync went fine.
  The error log is attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630448/+subscriptions



[Yahoo-eng-team] [Bug 1629862] [NEW] DevStack: stack.sh stuck at neutron-ovs-cleanup for a long time

2016-10-04 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

stack.sh hanging at neutron-ovs-cleanup step while running devstack
master with networking-sfc master

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
DevStack: stack.sh stuck at neutron-ovs-cleanup for a long time
https://bugs.launchpad.net/bugs/1629862
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1623102] Re: FWaaSv2 - Error message about 'Rule association' is wrong

2016-10-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/373964
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=9727aacea293f0fd221956471e0a2da2ec8bbf26
Submitter: Jenkins
Branch: master

commit 9727aacea293f0fd221956471e0a2da2ec8bbf26
Author: Yushiro FURUKAWA 
Date:   Wed Sep 21 18:34:04 2016 +0900

Fix an argument for an exception message

This commit fixes an argument for following exception messages and also
removes unnecessary whitespace in FirewallRuleNotAssociatedWithPolicy.

  * FirewallRuleNotAssociatedWithPolicy
  * FirewallRuleAlreadyAssociated

Change-Id: If227e250cacfac37735d1b20a24f40c28e637c6e
Closes-Bug: #1623102


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623102

Title:
  FWaaSv2 - Error message about 'Rule association' is wrong

Status in neutron:
  Fix Released

Bug description:
  When try to firewall_policy insert_rule or remove_rule with invalid
  request, following error message is displayed:

  [Error message]
  {
    "NeutronError": {
      "message": "Operation cannot be performed since Firewall Rule 19230148-740b-4546-9d9a-ab0c50178369 is already associated with FirewallPolicy ",
      "type": "FirewallRuleAlreadyAssociated",
      "detail": ""
    }
  }

  or

  {
    "NeutronError": {
      "message": "Firewall Rule 19230148-740b-4546-9d9a-ab0c50178369 is not associated  with Firewall Policy .",
      "type": "FirewallRuleNotAssociatedWithPolicy",
      "detail": ""
    }
  }

  In fact, the placeholder should be the ID or name of the
  firewall_policy.
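  To illustrate the kind of fix under review, a simplified sketch of
  passing the missing argument so the message placeholder gets filled
  (class and field names here are simplified illustrations, not the
  actual neutron-fwaas code):

```python
# Hypothetical simplification of a neutron-style exception class: the bug
# occurs when firewall_policy_id is not supplied, leaving the placeholder
# blank in the rendered message.
class FirewallRuleAlreadyAssociated(Exception):
    message = ("Operation cannot be performed since Firewall Rule "
               "%(firewall_rule_id)s is already associated with "
               "FirewallPolicy %(firewall_policy_id)s")

    def __init__(self, **kwargs):
        # Filling every placeholder produces a complete error message.
        super(FirewallRuleAlreadyAssociated, self).__init__(self.message % kwargs)

err = FirewallRuleAlreadyAssociated(
    firewall_rule_id="19230148-740b-4546-9d9a-ab0c50178369",
    firewall_policy_id="e84a79af-b16d-4e2d-a36e-ad3cff41dbd3")
```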

  [How to reproduce]
  $ source devstack/openrc admin admin
  $ export TOKEN=`openstack token issue| grep ' id '| get_field 2`
  $ curl -s -X PUT -H "content-type:application/json" \
      -d '{"firewall_rule_id": "19230148-740b-4546-9d9a-ab0c50178369"}' \
      -H "x-auth-token:$TOKEN" \
      localhost:9696/v2.0/fwaas/firewall_policies/e84a79af-b16d-4e2d-a36e-ad3cff41dbd3/insert_rule

  or

  $ curl -s -X PUT -H "content-type:application/json" \
      -d '{"firewall_rule_id": "19230148-740b-4546-9d9a-ab0c50178369"}' \
      -H "x-auth-token:$TOKEN" \
      localhost:9696/v2.0/fwaas/firewall_policies/e84a79af-b16d-4e2d-a36e-ad3cff41dbd3/remove_rule

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623102/+subscriptions



[Yahoo-eng-team] [Bug 1630439] [NEW] linuxbridge-agent fails to start on python3.4

2016-10-04 Thread Matthew Thode
Public bug reported:

I'll attach a log with the failure, but to my eyes these look like
py2-to-py3 porting errors (things missed in the conversion).

It starts fine in python2.7.

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "failure log"
   
https://bugs.launchpad.net/bugs/1630439/+attachment/4754701/+files/neutron-34-lb-fail.log

** Description changed:

  I'll attach a log with the failure, but to my eyes they seem like py2to3
  errors (things missed or something)
+ 
+ starts fine in python2.7

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630439

Title:
  linuxbridge-agent fails to start on python3.4

Status in neutron:
  New

Bug description:
  I'll attach a log with the failure, but to my eyes these look like
  py2-to-py3 porting errors (things missed in the conversion).

  It starts fine in python2.7.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630439/+subscriptions



[Yahoo-eng-team] [Bug 1630434] [NEW] policy.v3cloudsample.json doesn't allow domain admin list role assignments on project

2016-10-04 Thread John Lin
Public bug reported:

My OpenStack version is Mitaka.

With an admin domain-scoped token, a domain admin cannot list role
assignments on the project in the domain. The error messages are:

{
    "error": {
        "code": 403,
        "message": "You are not authorized to perform the requested action: identity:list_role_assignments",
        "title": "Forbidden"
    }
}

I am currently using a workaround: adding include_subtree=true to use
"identity:list_role_assignments_for_tree".
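A sketch of how the workaround request can be built (the endpoint, port,
and project ID below are placeholders, not values from this report):

```python
from urllib.parse import urlencode

# Adding include_subtree=true makes keystone evaluate the
# "identity:list_role_assignments_for_tree" policy rule instead of
# "identity:list_role_assignments".
def role_assignments_url(base, project_id, include_subtree=False):
    params = {"scope.project.id": project_id}
    if include_subtree:
        params["include_subtree"] = "true"
    return base + "/v3/role_assignments?" + urlencode(params)

url = role_assignments_url("http://keystone:5000", "proj-123",
                           include_subtree=True)
```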

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1630434

Title:
  policy.v3cloudsample.json doesn't allow domain admin list role
  assignments on project

Status in OpenStack Identity (keystone):
  New

Bug description:
  My OpenStack version is Mitaka.

  With an admin domain-scoped token, a domain admin cannot list role
  assignments on the project in the domain. The error messages are:

  {
      "error": {
          "code": 403,
          "message": "You are not authorized to perform the requested action: identity:list_role_assignments",
          "title": "Forbidden"
      }
  }

  I am currently using a workaround: adding include_subtree=true to use
  "identity:list_role_assignments_for_tree".

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1630434/+subscriptions



[Yahoo-eng-team] [Bug 1630427] Re: singledispatch is missing from requirements

2016-10-04 Thread Matthew Thode
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630427

Title:
  singledispatch is missing from requirements

Status in neutron:
  Invalid

Bug description:
  singledispatch is used by neutron-db-manage when running in python2.7.
  I had to manually install it (after packaging newton neutron).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630427/+subscriptions



[Yahoo-eng-team] [Bug 1630435] [NEW] make the assignment backend default to sql

2016-10-04 Thread Steve Martinelli
Public bug reported:

Currently, we do not provide a default for the assignment driver:

https://github.com/openstack/keystone/blob/master/keystone/conf/assignment.py#L18-L28

Which results in a deprecation message:

Deprecated: Use of the identity driver config to automatically configure
the same assignment driver has been deprecated, in the "O" release, the
assignment driver will need to be explicitly configured if different
than the default (SQL).

Some background... once upon a time, there was an LDAP backend for
assignment, it was removed in the M release. We had logic built-in so
deployers needed to only specify one backend (identity or assignment)
and we would default to the one they picked. This is no longer the case
since we only have a single assignment backend.
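A hedged sketch of what explicitly configuring the driver looks like in
keystone.conf (section and option name follow the deprecation message;
"sql" is assumed as the value, per the stated default):

```ini
[assignment]
driver = sql
```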

** Affects: keystone
 Importance: High
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1630435

Title:
  make the assignment backend default to sql

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  Currently, we do not provide a default for the assignment driver:

  
https://github.com/openstack/keystone/blob/master/keystone/conf/assignment.py#L18-L28

  Which results in a deprecation message:

  Deprecated: Use of the identity driver config to automatically
  configure the same assignment driver has been deprecated, in the "O"
  release, the assignment driver will need to be explicitly configured
  if different than the default (SQL).

  Some background... once upon a time, there was an LDAP backend for
  assignment, it was removed in the M release. We had logic built-in so
  deployers needed to only specify one backend (identity or assignment)
  and we would default to the one they picked. This is no longer the
  case since we only have a single assignment backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1630435/+subscriptions



[Yahoo-eng-team] [Bug 1630429] [NEW] Disabling subnetpool feature

2016-10-04 Thread kesper
Public bug reported:

I am facing some issues with subnetpools, so I don't want to use this
feature, but I couldn't find any option to disable it. Is there any
way I can do this?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630429

Title:
  Disabling subnetpool feature

Status in neutron:
  New

Bug description:
  I am facing some issues with subnetpools, so I don't want to use this
  feature, but I couldn't find any option to disable it. Is there any
  way I can do this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630429/+subscriptions



[Yahoo-eng-team] [Bug 1630427] [NEW] singledispatch is missing from requirements

2016-10-04 Thread Matthew Thode
Public bug reported:

singledispatch is used by neutron-db-manage when running in python2.7.
I had to manually install it (after packaging newton neutron).
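For context: functools.singledispatch is in the standard library from
Python 3.4 onward, but on Python 2.7 it is provided by the separate
"singledispatch" backport on PyPI, which is why a missing requirements
entry only bites py2 installs. A minimal usage sketch (illustrative, not
neutron-db-manage's actual code):

```python
# On Python 2.7 this import would instead be:
#   from singledispatch import singledispatch
from functools import singledispatch

@singledispatch
def render(value):
    # Default implementation, used for any unregistered type.
    return str(value)

@render.register(list)
def _(value):
    # Specialized implementation selected when the argument is a list.
    return ", ".join(str(v) for v in value)

result = render([1, 2])
```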

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630427

Title:
  singledispatch is missing from requirements

Status in neutron:
  New

Bug description:
  singledispatch is used by neutron-db-manage when running in python2.7.
  I had to manually install it (after packaging newton neutron).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630427/+subscriptions



[Yahoo-eng-team] [Bug 1630420] [NEW] test_rescue_config_drive (libvirt driver unit test) isn't mocking genisoimage

2016-10-04 Thread Augustina Ragwitz
Public bug reported:

I was running unit tests on a bare bones vm that didn't have genisoimage
installed and the test_rescue_config_drive test failed.

==
Failed 1 tests - output below:
==

nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_rescue_config_drive
---

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/virt/libvirt/test_driver.py", line 16420, in test_rescue_config_drive
    instance, exists=lambda name: name != 'disk.config.rescue')
  File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
    return func(*args, **keywargs)
  File "nova/tests/unit/virt/libvirt/test_driver.py", line 16374, in _test_rescue
    network_info, image_meta, rescue_password)
  File "nova/virt/libvirt/driver.py", line 2531, in rescue
    self._create_domain(xml, post_xml_callback=gen_confdrive)
  File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1062, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1128, in _mock_call
    ret_val = effect(*args, **kwargs)
  File "nova/tests/unit/virt/libvirt/test_driver.py", line 16368, in fake_create_domain
    post_xml_callback()
  File "nova/virt/libvirt/driver.py", line 3130, in _create_configdrive
    cdb.make_drive(config_drive_local_path)
  File "nova/virt/configdrive.py", line 143, in make_drive
    self._make_iso9660(path, tmpdir)
  File "nova/virt/configdrive.py", line 97, in _make_iso9660
    run_as_root=False)
  File "nova/utils.py", line 296, in execute
    return processutils.execute(*cmd, **kwargs)
  File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 363, in execute
    env=env_variables)
  File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/subprocess.py", line 54, in __init__
    subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
  File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory


When I installed genisoimage, the test passed.

genisoimage is the default value for mkisofs_cmd (configurable). It's
called in the _make_iso9660 method for creating an image. Besides the
issue of shelling out to a process going beyond the scope of what a unit
test should cover, this also creates a hard dependency on genisoimage.

Other areas in the code mock the call to genisoimage. This test should do 
something similar.
https://github.com/openstack/nova/blob/master/nova/tests/unit/test_configdrive2.py#L49
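A minimal sketch of the suggested approach: replace the shell-out with a
mock so the unit test never executes genisoimage. The helper below is
hypothetical (nova's real code path goes through processutils.execute),
but the mocking pattern is the same:

```python
from unittest import mock

# Hypothetical stand-in for the _make_iso9660 shell-out: the command
# runner is injected so a test can replace it with a Mock instead of
# depending on genisoimage being installed.
def make_iso(execute, out_path, src_dir):
    execute('genisoimage', '-o', out_path, src_dir)

fake_execute = mock.Mock()
make_iso(fake_execute, '/tmp/disk.config.rescue', '/tmp/cd')

# The test asserts the command would have been run, without running it.
fake_execute.assert_called_once_with(
    'genisoimage', '-o', '/tmp/disk.config.rescue', '/tmp/cd')
```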

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt testing

** Tags added: libvirt testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630420

Title:
  test_rescue_config_drive (libvirt driver unit test) isn't mocking
  genisoimage

Status in OpenStack Compute (nova):
  New

Bug description:
  I was running unit tests on a bare bones vm that didn't have
  genisoimage installed and the test_rescue_config_drive test failed.

  ==
  Failed 1 tests - output below:
  ==

  
nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_rescue_config_drive
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/virt/libvirt/test_driver.py", line 16420, in 
test_rescue_config_drive
  instance, exists=lambda name: name != 'disk.config.rescue')
File 
"/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", 
line 1305, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/virt/libvirt/test_driver.py", line 16374, in 
_test_rescue
  network_info, image_meta, rescue_password)
File "nova/virt/libvirt/driver.py", line 2531, in rescue
  self._create_domain(xml, post_xml_callback=gen_confdrive)
File 
"/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", 
line 1062, in __call__
  return _mock_self._mock_call(*args, **kwargs)
File 
"/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", 
line 1128, in _mock_call
  ret_val = effect(*args, **kwargs)
File 

[Yahoo-eng-team] [Bug 1630416] [NEW] Issues on admin create network modal

2016-10-04 Thread Ying Zuo
Public bug reported:

Steps to reproduce:
1. go to admin/system/networks panel
2. click create network
3. click the submit button without putting in any data 

You will notice these issues on the modal:

1. The Segmentation ID field is marked as required, but there is no
error message for it. Also, there should be a validator to check that
the value is a whole number.

2. The value in the Admin State field is changed from "UP" to "True".

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630416

Title:
  Issues on admin create network modal

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. go to admin/system/networks panel
  2. click create network
  3. click the submit button without putting in any data 

  You will notice these issues on the modal:

  1. The Segmentation ID field is marked as required, but there is no
  error message for it. Also, there should be a validator to check that
  the value is a whole number.

  2. The value in the Admin State field is changed from "UP" to "True".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1630416/+subscriptions



[Yahoo-eng-team] [Bug 1630412] [NEW] Associate floating IP pop up shows all the existing ports in the 'port to be associated to' dropdown

2016-10-04 Thread kiran-vemuri
Public bug reported:

Description:

When I try to associate a floating IP by clicking an instance's 'Actions'
dropdown and selecting 'Associate Floating IP', the "Port to be associated"
dropdown in the pop-up lists the ports of all instances.

Expected Behaviour:
--
Since I am selecting the 'Associate Floating IP' option from a specific
instance's actions, it makes more sense to show only the ports that belong
to that instance.


Environment:
---
OpenStack Mitaka on Ubuntu 14.04 server

Reproduction Steps:
---

Steps from horizon:
1. Create a network/subnet.
2. Create 2 instances attached to the above subnet
3. Click on instance1's actions to associate floating IP and click on the 
'ports to be associated' drop down
4. it displays a list of all existing ports related to all the instances


Please refer to the attached screenshots for more details


Thanks,
Kiran Vemuri
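A sketch of the expected filtering (field names such as device_id are
assumed for illustration; this is not Horizon's actual code):

```python
# Neutron ports carry a device_id identifying the instance they attach
# to; the dropdown should show only ports for the instance being acted on.
ports = [
    {"id": "p1", "device_id": "instance-1"},
    {"id": "p2", "device_id": "instance-2"},
    {"id": "p3", "device_id": "instance-1"},
]

def ports_for_instance(ports, instance_id):
    return [p for p in ports if p["device_id"] == instance_id]

filtered = ports_for_instance(ports, "instance-1")
```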

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "HorizonScreenshot"
   
https://bugs.launchpad.net/bugs/1630412/+attachment/4754565/+files/Screen%20Shot%202016-10-04%20at%205.03.13%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630412

Title:
  Associate floating IP pop up shows all the existing ports in the 'port
  to be associated to' dropdown

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description:
  
  When I try to associate floating IP by clicking the instance 'actions' drop 
down and select 'associate floating IP'. In the pop up, in the "Port to be 
associated" drop down, I see a list of ports of all the instances.

  Expected Behaviour:
  --
  Since I am selecting the 'associate floating IP' option by clicking on a 
specific instance's actions, for a user it makes more sense to show the ports 
that are related to that specific instance.

  
  Environment:
  ---
  OpenStack Mitaka on Ubuntu 14.04 server

  Reproduction Steps:
  ---

  Steps from horizon:
  1. Create a network/subnet.
  2. Create 2 instances attached to the above subnet
  3. Click on instance1's actions to associate floating IP and click on the 
'ports to be associated' drop down
  4. it displays a list of all existing ports related to all the instances

  
  Please refer to the attached screenshots for more details

  
  Thanks,
  Kiran Vemuri

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1630412/+subscriptions



[Yahoo-eng-team] [Bug 1630410] [NEW] fixed_ips list out of order

2016-10-04 Thread Armando Migliaccio
Public bug reported:

Change [1] led to failures like [2]: the order of fixed_ips is no
longer preserved between POST and GET requests. This was taken care of
for some other attributes of the Port resource, like allowed address
pairs, but not all.

Even though the API is lax about the order of specific attributes, we
should attempt to restore the old behavior to avoid more damaging side
effects in clients that assume the list is returned in the order in
which fixed IPs are created.

[1] https://review.openstack.org/#/c/373582
[2] http://logs.openstack.org/63/377163/4/check/gate-shade-dsvm-functional-neutron/e621e3d/console.html
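An illustration of the ordering concern (the data is made up): rebuilding
the list through an unordered structure loses creation order, while an
order-preserving rebuild keeps it.

```python
# Clients may rely on fixed_ips coming back in creation order.
created_order = ["10.0.0.5", "10.0.0.3", "10.0.0.9"]

# Round-tripping through a set gives no order guarantee...
maybe_shuffled = set(created_order)

# ...whereas dict.fromkeys preserves insertion order (guaranteed on
# Python 3.7+) while still dropping duplicates.
preserved = list(dict.fromkeys(created_order))
assert preserved == created_order
```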

** Affects: neutron
 Importance: Medium
 Assignee: Kevin Benton (kevinbenton)
 Status: Confirmed


** Tags: newton-backport-potential

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
Milestone: None => ocata-1

** Tags added: newton-backport-potential

** Description changed:

  Change [1], led to failures like [2], in that the order of fixed_ips is
  no longer preserved between POST and GET requests. This was taken care
- of some some other attributes of the Port resources like allowed address
- pairs, but not for all.
+ for some other attributes of the Port resource like allowed address
+ pairs, but not all.
  
  Even though the API is lax about the order of specific attributes, we
  should attempt at restoring the old behavior to avoid more damaging side
  effects in clients that are assuming the list be returned in the order
  in which fixed IPs are created.
  
  [1] https://review.openstack.org/#/c/373582
  [2] http://logs.openstack.org/63/377163/4/check/gate-shade-dsvm-functional-neutron/e621e3d/console.html

** Description changed:

  Change [1], led to failures like [2], in that the order of fixed_ips is
  no longer preserved between POST and GET requests. This was taken care
  for some other attributes of the Port resource like allowed address
  pairs, but not all.
  
  Even though the API is lax about the order of specific attributes, we
- should attempt at restoring the old behavior to avoid more damaging side
+ should attempt to restore the old behavior to avoid more damaging side
  effects in clients that are assuming the list be returned in the order
  in which fixed IPs are created.
  
  [1] https://review.openstack.org/#/c/373582
  [2] http://logs.openstack.org/63/377163/4/check/gate-shade-dsvm-functional-neutron/e621e3d/console.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630410

Title:
  fixed_ips list out of order

Status in neutron:
  Confirmed

Bug description:
  Change [1], led to failures like [2], in that the order of fixed_ips
  is no longer preserved between POST and GET requests. This was taken
  care for some other attributes of the Port resource like allowed
  address pairs, but not all.

  Even though the API is lax about the order of specific attributes, we
  should attempt to restore the old behavior to avoid more damaging side
  effects in clients that are assuming the list be returned in the order
  in which fixed IPs are created.

  [1] https://review.openstack.org/#/c/373582
  [2] http://logs.openstack.org/63/377163/4/check/gate-shade-dsvm-functional-neutron/e621e3d/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630410/+subscriptions



[Yahoo-eng-team] [Bug 1628980] Re: Failed to add bridge: sudo: no tty present and no askpass program specified

2016-10-04 Thread Illia Polliul
From neutron-plugin-linuxbridge-agent.log, I suggest you check the
permissions on the /etc/sudoers.d/neutron_sudoers file.

Also, moving this bug to neutron project, as it doesn't have anything to
do with fuel-plugin-contrail.
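A sketch of the permission check being suggested: sudo expects drop-ins
such as /etc/sudoers.d/neutron_sudoers to be mode 0440 and owned by root.
Demonstrated on a temporary file, since inspecting the real path requires
root:

```python
import os
import stat
import tempfile

# Returns True when the file's permission bits match the expected mode
# (0440 is the conventional requirement for sudoers drop-ins).
def mode_ok(path, expected=0o440):
    return stat.S_IMODE(os.stat(path).st_mode) == expected

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o440)
ok = mode_ok(path)
os.remove(path)
```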

** Changed in: fuel-plugin-contrail
   Status: Incomplete => Invalid

** Project changed: fuel-plugin-contrail => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628980

Title:
  Failed to add bridge: sudo: no tty present and no askpass program
  specified

Status in neutron:
  Invalid

Bug description:
  Release :- Liberty
  Adding additional compute nodes

  While launching a VM using the newly added node (by selecting the host
  aggregate that includes the new compute node)

  The /etc/nova/nova-compute.log shows below error

  Failed to add bridge: sudo: no tty present and no askpass program
  specified

  From the horizon GUI the generic error of "No valid host" is
  displayed.

  Message
  No valid host was found. There are not enough hosts available.

  Code 500
  Details
  File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 739, in build_instances
    request_spec, filter_properties)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 343, in wrapped
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 52, in select_destinations
    context, request_spec, filter_properties)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 34, in select_destinations
    context, request_spec, filter_properties)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 120, in select_destinations
    request_spec=request_spec, filter_properties=filter_properties)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628980/+subscriptions



[Yahoo-eng-team] [Bug 1628980] [NEW] Failed to add bridge: sudo: no tty present and no askpass program specified

2016-10-04 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Release :- Liberty
Adding additional compute nodes

While launching a VM using the newly added node (by selecting the host
aggregate that includes the new compute node)

The /etc/nova/nova-compute.log shows the below error:

Failed to add bridge: sudo: no tty present and no askpass program
specified

From the horizon GUI, the generic error of "No valid host" is displayed.

Message
No valid host was found. There are not enough hosts available.

Code 500
Details
File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 739, in 
build_instances request_spec, filter_properties) File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 343, in 
wrapped return func(*args, **kwargs) File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 52, 
in select_destinations context, request_spec, filter_properties) File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 37, 
in __run_method return getattr(self.instance, __name)(*args, **kwargs) File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 34, in 
select_destinations context, request_spec, filter_properties) File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 120, in 
select_destinations request_spec=request_spec, 
filter_properties=filter_properties) File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call retry=self.retry) File "/usr/local/lib/python2.
 7/dist-packages/oslo_messaging/transport.py", line 90, in _send 
timeout=timeout, retry=retry) File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 470, in send retry=retry) File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 461, in _send raise result

** Affects: neutron
 Importance: Undecided
 Status: Invalid


** Tags: linuxbridge-agent neutron
-- 
Failed to add bridge: sudo: no tty present and no askpass program specified
https://bugs.launchpad.net/bugs/1628980
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1600326] Re: neutron-lbaas health monitor timeout and delay values interpreted as milliseconds

2016-10-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/380660
Committed: 
https://git.openstack.org/cgit/openstack/octavia/commit/?id=ef11747a56ed210d9ae59e7c08b08fb6b6dc604b
Submitter: Jenkins
Branch: master

commit ef11747a56ed210d9ae59e7c08b08fb6b6dc604b
Author: Paul Glass 
Date:   Fri Sep 30 21:39:52 2016 +

Switch HAProxy health check timeout to seconds

Change-Id: If8166b8e76ca6c1b15963ef99bec07ff2a6fb118
Closes-Bug: #1600326


** Changed in: octavia
   Status: In Progress => Fix Released

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600326

Title:
  neutron-lbaas health monitor timeout and delay values interpreted as
  milliseconds

Status in neutron:
  Fix Released
Status in octavia:
  Fix Released

Bug description:
  The timeout and delay values on the health monitor objects in Neutron
  LBaaS are purportedly in units of seconds, but the numeric value is
  passed all the the way down to the HAProxy configuration[1] file (in
  both the HAProxy namespace driver and Octavia) where it is interpreted
  in milliseconds:

  * 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-timeout%20check
  * https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-inter

  Due to this unit mismatch, a user may configure a pool with a
  reasonable 10 second timeout, and the service may appear to function
  normally until even a small load causes the backend servers to exceed
  a 10 millisecond timeout and then they are removed from the pool.

  A timeout value of less than one second is useful in some settings, such
  as monitoring a pool of backend servers serving static content, but the
  database field stores this value as an integer.

  1: https://github.com/openstack/neutron-
  
lbaas/blob/b322615e4869eb42ed7888a3492eae4a52f3b4db/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_proxies.j2#L72
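  The unit mismatch above can be sketched concretely (an illustrative
  function, not the actual Octavia/neutron-lbaas template code): HAProxy
  reads a bare integer in `timeout check` and `inter` as milliseconds, so
  a seconds value needs an explicit `s` suffix or a multiplication by 1000.

```python
# Minimal sketch of the seconds-vs-milliseconds mismatch (illustrative
# function, not the actual Octavia/neutron-lbaas template code).

def timeout_check_line(timeout_seconds):
    """Render an HAProxy 'timeout check' directive from a seconds value."""
    # Buggy shape: "timeout check 10" -- HAProxy reads a bare integer as
    # milliseconds, so a 10 s health-monitor timeout becomes 10 ms.
    buggy = "timeout check %d" % timeout_seconds
    # Fixed shape: an explicit unit suffix keeps the value in seconds.
    fixed = "timeout check %ds" % timeout_seconds
    return buggy, fixed

buggy, fixed = timeout_check_line(10)
print(buggy)  # timeout check 10   (10 milliseconds to HAProxy)
print(fixed)  # timeout check 10s  (10 seconds, as intended)
```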

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600326/+subscriptions



[Yahoo-eng-team] [Bug 1630350] [NEW] getting list of service providers broken with pagination

2016-10-04 Thread Ankur
Public bug reported:

Bug at Neutron Head as of (10/4)

While implementing service-provider-list in OpenStack Client, I ran into this
issue with pagination.
https://review.openstack.org/#/c/381298/4/openstack/network/v2/_proxy.py

With pagination set to True, the client returns incorrect data and breaks
OpenStack Client.

Neutron Server Trace:
2016-10-04 13:35:21.217 ERROR oslo_messaging.rpc.server [req-60a4325d-a161-4a0a-
b3bf-646561a4c0fa None None] Exception during message handling
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server Traceback (most recent c
all last):
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/p
ython2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in _process_inco
ming
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server res = self.dispatche
r.dispatch(message)
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
150, in dispatch
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
121, in _do_dispatch
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server result = func(ctxt, 
**new_args)
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 206, in 
get_external_network_id
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server net_id = 
self.plugin.get_external_network_id(context)
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/db/external_net_db.py", line 199, in 
get_external_network_id
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server raise 
n_exc.TooManyExternalNetworks()
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server 
TooManyExternalNetworks: More than one external network exists.
2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server 

When Neutron is pulled back to commit
c7610950f75352be6693ce7da1c52e87eeaf8bc0,
the client returns correct data with pagination set to True.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630350

Title:
  getting list of service providers broken with pagination

Status in neutron:
  New

Bug description:
  Bug at Neutron Head as of (10/4)

  While implementing service-provider-list in OpenStack Client, I ran into
  this issue with pagination.
  https://review.openstack.org/#/c/381298/4/openstack/network/v2/_proxy.py

  With pagination set to True, the client returns incorrect data and breaks
  OpenStack Client.

  Neutron Server Trace:
  2016-10-04 13:35:21.217 ERROR oslo_messaging.rpc.server 
[req-60a4325d-a161-4a0a-
  b3bf-646561a4c0fa None None] Exception during message handling
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server Traceback (most 
recent c
  all last):
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/p
  ython2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_inco
  ming
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server res = 
self.dispatche
  r.dispatch(message)
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
150, in dispatch
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
121, in _do_dispatch
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 206, in 
get_external_network_id
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server net_id = 
self.plugin.get_external_network_id(context)
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/db/external_net_db.py", line 199, in 
get_external_network_id
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server raise 
n_exc.TooManyExternalNetworks()
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server 
TooManyExternalNetworks: More than one external network exists.
  2016-10-04 13:35:21.217 TRACE oslo_messaging.rpc.server 

  When Neutron is pulled back to commit
  c7610950f75352be6693ce7da1c52e87eeaf8bc0,
  the client returns correct data with pagination set to True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630350/+subscriptions


[Yahoo-eng-team] [Bug 1630308] [NEW] Enable auto-complete for user name

2016-10-04 Thread Alexander Bashmakov
Public bug reported:

Currently the user name field in the OpenStack Dashboard login screen does
not allow auto-complete of previous values. This is a feature request to
enhance the UX by enabling this functionality, so that users don't have
to re-type their name every time.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630308

Title:
  Enable auto-complete for user name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently the user name field in the OpenStack Dashboard login screen
  does not allow auto-complete of previous values. This is a feature request
  to enhance the UX by enabling this functionality, so that users don't
  have to re-type their name every time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1630308/+subscriptions



[Yahoo-eng-team] [Bug 1627177] Re: Liberty to Mitaka nova list fails when force create a VM on a bad compute node that is not reachable

2016-10-04 Thread Alexandra Settle
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1627177

Title:
  Liberty to Mitaka nova list fails when force create a VM on a bad
  compute node that is not reachable

Status in OpenStack Compute (nova):
  New
Status in openstack-ansible:
  Invalid

Bug description:
  Liberty to Mitaka Upgrade

  Nova list fails when a user forces a compute creation on one of the
  nova computes that is in a bad state.

  A nova list limited to the other hypervisors works, though, and all other
  API commands work; only the full nova list fails.

  This happens when you create a VM on a bad compute node; otherwise it
  works fine.

  
  Log 

  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack 
[req-16a27bf2-bebe-4f13-a408-4b5b09129d6c 111ea8c6602e44bc8d7b9a125c86f12a 
48d9424cadf145e59c98d5ca53c54f11 - - -] Caught error: 'str' object has no 
attribute 'metadata'
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack Traceback (most recent 
call last):
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/nova/api/openstack/__init__.py",
 line 139, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack return 
req.get_response(self.application)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/request.py", 
line 1317, in send
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/request.py", 
line 1281, in call_application
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/dec.py", line 
144, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/dec.py", line 
130, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/dec.py", line 
195, in call_func
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py",
 line 467, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack response = 
req.get_response(self._app)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/request.py", 
line 1317, in send
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/request.py", 
line 1281, in call_application
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/dec.py", line 
144, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/dec.py", line 
144, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/routes/middleware.py",
 line 136, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack response = 
self.app(environ, start_response)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/dec.py", line 
144, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-13.3.3/lib/python2.7/site-packages/webob/dec.py", line 
130, in __call__
  2016-09-23 12:51:24.479 2241 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2016-09-23 12:51:24.479 2241 ERROR 

[Yahoo-eng-team] [Bug 1616793] Re: neutron availability-zone-list got a db error using postgresql

2016-10-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/381520
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5b9fe3494a80b3185b5d35dccbc71132211dd989
Submitter: Jenkins
Branch: master

commit 5b9fe3494a80b3185b5d35dccbc71132211dd989
Author: Ann Kamyshnikova 
Date:   Tue Oct 4 11:20:27 2016 +0300

Fix _list_availability_zones for PostgreSQL

For PostgreSQL  _list_availability_zones crashes if you're using
GROUP BY without object id. Use with_entities to use group_by
correctly.

Closes-bug: #1616793

Change-Id: Ibc09666bc5863a1980acd0a34d6545841a93a481
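
The shape of the fix can be sketched in plain SQL (illustrative schema, with
stdlib SQLite standing in; PostgreSQL is the backend that rejects ungrouped
SELECT columns with a grouping error, which is what the crash came from).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE agents (id INTEGER PRIMARY KEY,
                         availability_zone TEXT,
                         agent_type TEXT);
    INSERT INTO agents (availability_zone, agent_type) VALUES
        ('az1', 'L3 agent'),
        ('az1', 'L3 agent'),
        ('az2', 'DHCP agent');
""")

# Broken shape (PostgreSQL rejects it): an ORM query(Agent) selects every
# mapped column, but only some of them appear in GROUP BY:
#     SELECT agents.id, agents.availability_zone, agents.agent_type
#     FROM agents GROUP BY agents.availability_zone, agents.agent_type;
#
# Fixed shape -- what .with_entities(Agent.availability_zone,
# Agent.agent_type) produces: the SELECT list is narrowed to exactly the
# grouped columns.
rows = conn.execute("""
    SELECT availability_zone, agent_type
    FROM agents
    GROUP BY availability_zone, agent_type
    ORDER BY availability_zone
""").fetchall()
print(rows)  # [('az1', 'L3 agent'), ('az2', 'DHCP agent')]
```

SQLite tolerates the broken shape, which is why the bug only showed up on a
PostgreSQL backend.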


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1616793

Title:
  neutron availability-zone-list got a db error using postgresql

Status in neutron:
  Fix Released

Bug description:
  when use cli command "neutron availability-zone-list" with postgresql
  db backend,I got a server error,like below:

  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, in 
resource
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 148, in wrapper
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 341, in index
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource return 
self._items(request, True, parent_id)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 267, in _items
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource obj_list = 
obj_getter(request.context, **kwargs)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/agents_db.py", line 155, in 
get_availability_zones
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource context, 
filters))]
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/agents_db.py", line 132, in 
_list_availability_zones
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource 
Agent.agent_type):
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2736, in 
__iter__
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource return 
self._execute_and_instances(context)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2751, in 
_execute_and_instances
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in 
execute
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource return 
meth(self, multiparams, params)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in 
_execute_on_connection
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource return 
connection._execute_clauseelement(self, multiparams, params)
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in 
_execute_clauseelement
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource compiled_sql, 
distilled_params
  2016-08-25 13:41:17.808 29748 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1629868] Re: times out because of no dbus

2016-10-04 Thread Scott Moser
*** This bug is a duplicate of bug 1629797 ***
https://bugs.launchpad.net/bugs/1629797

I'm 95% sure that this is a dupe of bug 1629797
I'm going to mark it as such, and if we find out otherwise un-dupe it.


** This bug has been marked a duplicate of bug 1629797
   resolve service in nsswitch.conf adds 25 seconds to failed lookups before 
systemd-resolved is up

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1629868

Title:
  times out because of no dbus

Status in cloud-init:
  New
Status in MAAS:
  New
Status in cloud-init package in Ubuntu:
  New

Bug description:
  Given this command line:
  BOOT_IMAGE=ubuntu/amd64/hwe-y/yakkety/daily/boot-kernel nomodeset 
iscsi_target_name=iqn.2004-05.com.ubuntu:maas:ephemeral-ubuntu-amd64-hwe-y-yakkety-daily
 iscsi_target_ip=2001:67c:1562:8010::2:1 iscsi_target_port=3260 
iscsi_initiator=kearns ip=kearns:BOOTIF ro 
root=/dev/disk/by-path/ip-2001:67c:1562:8010::2:1:3260-iscsi-iqn.2004-05.com.ubuntu:maas:ephemeral-ubuntu-amd64-hwe-y-yakkety-daily-lun-1
 overlayroot=tmpfs 
cloud-config-url=http://maas-boot-vm-xenial.dt-maas:5240/MAAS/metadata/latest/by-id/8w4gkk/?op=get_preseed
 log_host=maas-boot-vm-xenial.dt-maas log_port=514 --- console=ttyS1 
BOOTIF=01-38:63:bb:43:b8:bc

  Where:
  root@ubuntu:~# host maas-boot-vm-xenial.dt-maas
  maas-boot-vm-xenial.dt-maas is an alias for maas-boot-vm-xenial.maas.
  maas-boot-vm-xenial.maas has address 10.246.0.5
  maas-boot-vm-xenial.maas has IPv6 address 2001:67c:1562:8010::2:1

  cloud-init takes "forever" to run, because there is a 25-second pause
  every time it tries to report status to the MAAS server.  This is
  because hostname resolution assumes a working dbus, and takes 25
  seconds to time out on connecting to dbus to get the answer.

  Related bugs:
   * bug 1611074: Reformatting of ephemeral drive fails on resize of Azure VM

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1629868/+subscriptions



[Yahoo-eng-team] [Bug 1629861] Re: L3: missing registry callback notification at Router Interface creation

2016-10-04 Thread Thomas Morin
Changed the title to reflect that the bug is in code apparently also
used by non-DVR setups.

** Summary changed:

- L3 DVR: missing registry callback notification at Router Interface creation
+ L3: missing registry callback notification at Router Interface creation

** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629861

Title:
  L3: missing registry callback notification at Router Interface
  creation

Status in networking-bgpvpn:
  New
Status in neutron:
  In Progress

Bug description:
  The code triggering a ROUTER_INTERFACE AFTER_CREATE registry
  notification is not run if the router to which an interface is added
  has no gateway connected.

  [1]
  
https://github.com/openstack/neutron/blob/930655cf57de523181b2d59bb4428b9f23991cce/neutron/db/l3_dvr_db.py#L427

  This behavior is problematic for components that need to rely on this
  functionality, and not consistent with what the non-DVR code does [2].

  [2]
  
https://github.com/openstack/neutron/blob/f4ba9ea8ac18fabac80cb0443bc18d9f950482b3/neutron/db/l3_db.py#L815
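
  The pattern at stake can be sketched with a minimal stand-in
  publish/subscribe registry (this illustration is not Neutron's actual
  neutron.callbacks module; names are simplified).

```python
# Minimal stand-in for the registry-notification pattern (illustration
# only, not Neutron's actual neutron.callbacks implementation).
from collections import defaultdict

_callbacks = defaultdict(list)

def subscribe(resource, event, fn):
    """Register fn to be invoked for (resource, event)."""
    _callbacks[(resource, event)].append(fn)

def notify(resource, event, **kwargs):
    """Invoke every callback subscribed to (resource, event)."""
    for fn in _callbacks[(resource, event)]:
        fn(resource, event, **kwargs)

seen = []
subscribe("router_interface", "after_create",
          lambda res, ev, **kw: seen.append((res, ev, kw.get("router_id"))))

# The bug: one code path (router with no gateway) returned before the
# notify call ever ran, so subscribers such as networking-bgpvpn missed
# the event.  The fix is to notify on every code path:
notify("router_interface", "after_create", router_id="r1")
print(seen)  # [('router_interface', 'after_create', 'r1')]
```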

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1629861/+subscriptions



[Yahoo-eng-team] [Bug 1630298] [NEW] Logs for Django 1.9 and 1.10 are filled with spam about 'Developer' dashboard

2016-10-04 Thread Rob Cresswell
Public bug reported:

The logs are as below, but slightly different depending on the current
test. They don't cause any actual problem, but the spam is annoying.


2016-10-04 14:28:59.700626 | Traceback (most recent call last):
2016-10-04 14:28:59.700702 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/loader_tags.py",
 line 209, in render
2016-10-04 14:28:59.700729 | return template.render(context)
2016-10-04 14:28:59.700785 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/base.py",
 line 208, in render
2016-10-04 14:28:59.700809 | return self._render(context)
2016-10-04 14:28:59.700868 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/test/utils.py",
 line 92, in instrumented_test_render
2016-10-04 14:28:59.700892 | return self.nodelist.render(context)
2016-10-04 14:28:59.700948 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/base.py",
 line 992, in render
2016-10-04 14:28:59.700971 | bit = node.render_annotated(context)
2016-10-04 14:28:59.701029 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/base.py",
 line 959, in render_annotated
2016-10-04 14:28:59.701049 | return self.render(context)
2016-10-04 14:28:59.701106 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/library.py",
 line 223, in render
2016-10-04 14:28:59.701133 | _dict = self.func(*resolved_args, 
**resolved_kwargs)
2016-10-04 14:28:59.701179 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/horizon/templatetags/horizon.py",
 line 66, in horizon_nav
2016-10-04 14:28:59.701206 | for dash in Horizon.get_dashboards():
2016-10-04 14:28:59.701249 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/horizon/base.py",
 line 748, in get_dashboards
2016-10-04 14:28:59.701273 | dashboard = self._registered(item)
2016-10-04 14:28:59.701315 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/horizon/base.py",
 line 225, in _registered
2016-10-04 14:28:59.701344 | "slug": slug})
2016-10-04 14:28:59.701375 | NotRegistered: Dashboard with slug "developer" is 
not registered.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: New


** Tags: newton-backport-potential

** Changed in: horizon
Milestone: None => ocata-1

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Low

** Tags added: newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630298

Title:
  Logs for Django 1.9 and 1.10 are filled with spam about 'Developer'
  dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The logs are as below, but slightly different depending on the current
  test. They don't cause any actual problem, but the spam is annoying.


  2016-10-04 14:28:59.700626 | Traceback (most recent call last):
  2016-10-04 14:28:59.700702 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/loader_tags.py",
 line 209, in render
  2016-10-04 14:28:59.700729 | return template.render(context)
  2016-10-04 14:28:59.700785 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/base.py",
 line 208, in render
  2016-10-04 14:28:59.700809 | return self._render(context)
  2016-10-04 14:28:59.700868 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/test/utils.py",
 line 92, in instrumented_test_render
  2016-10-04 14:28:59.700892 | return self.nodelist.render(context)
  2016-10-04 14:28:59.700948 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/base.py",
 line 992, in render
  2016-10-04 14:28:59.700971 | bit = node.render_annotated(context)
  2016-10-04 14:28:59.701029 |   File 
"/home/jenkins/workspace/gate-horizon-tox-py27dj19-ubuntu-xenial/.tox/py27dj19/local/lib/python2.7/site-packages/django/template/base.py",
 line 959, in render_annotated
  2016-10-04 14:28:59.701049 | return self.render(context)
  2016-10-04 14:28:59.701106 |   File 

[Yahoo-eng-team] [Bug 1630134] Re: Networking API v2.0 (CURRENT): Update Network Request missing the 'qos-policy-id' parameter.

2016-10-04 Thread Darek Smigiel
** Tags added: doc

** Also affects: openstack-api-site
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630134

Title:
  Networking API v2.0 (CURRENT): Update Network Request missing the
  'qos-policy-id' parameter.

Status in neutron:
  New
Status in openstack-api-site:
  New

Bug description:
  The API reference http://developer.openstack.org/api-ref/networking/v2/index.html doesn't reflect the fact that a QoS policy
  linked to an existing network can be updated:

  
  $ curl -s -H "X-Auth-Token: $OS_TOKEN" http://${OS_HOST}:9696/v2.0/qos/policies | python -mjson.tool
  {
  "policies": [
  {
  "description": "This policy limits the ports to 10Mbit max.",
  "id": "c4e80891-5d77-480f-8970-a7223fd72f4b",
  "name": "10Mbit",
  "rules": [],
  "shared": false,
  "tenant_id": "5a23535b5dda4770bccc856d0167e53f"
  }
  ]
  }

  $ curl -s -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" http://${OS_HOST}:9696/v2.0/networks/b18d3079-fcaa-41b7-8aec-0d009789fff8 | python -mjson.tool
  {
  "network": {
  "admin_state_up": false,
  "id": "b18d3079-fcaa-41b7-8aec-0d009789fff8",
  "mtu": 0,
  "name": "cristalnet",
  "port_security_enabled": true,
  "provider:network_type": "vxlan",
  "provider:physical_network": null,
  "provider:segmentation_id": 39,
  "qos_policy_id": null,
  "router:external": false,
  "shared": true,
  "status": "ACTIVE",
  "subnets": [],
  "tenant_id": "5a23535b5dda4770bccc856d0167e53f",
  "vlan_transparent": null
  }
  }

  $ curl -s \
  >   -X PUT \
  >   -H "X-Auth-Token: $OS_TOKEN" \
  >   -H "Content-Type: application/json" \
  >   -d '{"network": { "qos_policy_id": "c4e80891-5d77-480f-8970-a7223fd72f4b" } }' \
  > http://${OS_HOST}:9696/v2.0/networks/b18d3079-fcaa-41b7-8aec-0d009789fff8 | python -mjson.tool
  {
  "network": {
  "admin_state_up": false,
  "id": "b18d3079-fcaa-41b7-8aec-0d009789fff8",
  "mtu": 0,
  "name": "cristalnet",
  "port_security_enabled": true,
  "provider:network_type": "vxlan",
  "provider:physical_network": null,
  "provider:segmentation_id": 39,
  "qos_policy_id": "c4e80891-5d77-480f-8970-a7223fd72f4b",
  "router:external": false,
  "shared": true,
  "status": "ACTIVE",
  "subnets": [],
  "tenant_id": "5a23535b5dda4770bccc856d0167e53f",
  "vlan_transparent": null
  }
  }
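  The same update can be driven from Python; a minimal sketch that only builds the request body used in the curl call above (IDs copied from the transcript, no HTTP actually performed):

  ```python
  import json

  def build_network_update(qos_policy_id):
      # Request body for PUT /v2.0/networks/<network-id> that attaches
      # a QoS policy to an existing network.
      return json.dumps({"network": {"qos_policy_id": qos_policy_id}})

  body = build_network_update("c4e80891-5d77-480f-8970-a7223fd72f4b")
  print(body)  # same payload as the -d argument to curl above
  ```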

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630257] [NEW] DBDeadlock occurs during test_dualnet_multi_prefix_dhcpv6_stateless

2016-10-04 Thread Sergey Belous
Public bug reported:

The test test_dualnet_multi_prefix_dhcpv6_stateless
(tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless[compute
,id-cf1c4425-766b-45b8-be35-e2959728eb00,network]) failed with an error on
the python-openstackclient gate:

2016-10-04 13:59:59.811433 | Captured traceback:
2016-10-04 13:59:59.811450 | ~~~
2016-10-04 13:59:59.811471 | Traceback (most recent call last):
2016-10-04 13:59:59.811513 |   File "tempest/test.py", line 107, in wrapper
2016-10-04 13:59:59.811558 | return f(self, *func_args, **func_kwargs)
2016-10-04 13:59:59.811616 |   File "tempest/scenario/test_network_v6.py", 
line 256, in test_dualnet_multi_prefix_dhcpv6_stateless
2016-10-04 13:59:59.811640 | dualnet=True)
2016-10-04 13:59:59.811672 |   File "tempest/scenario/test_network_v6.py", 
line 203, in _prepare_and_test
2016-10-04 13:59:59.811696 | self.subnets_v6[i]['gateway_ip'])
2016-10-04 13:59:59.811728 |   File "tempest/scenario/test_network_v6.py", 
line 213, in _check_connectivity
2016-10-04 13:59:59.811751 | (dest, source.ssh_client.host)
2016-10-04 13:59:59.811794 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
2016-10-04 13:59:59.811818 | raise self.failureException(msg)
2016-10-04 13:59:59.811857 | AssertionError: False is not true : Timed out 
waiting for 2003::1 to become reachable from 172.24.5.14

http://logs.openstack.org/11/376311/3/check/gate-tempest-dsvm-neutron-src-python-openstackclient/04dabcd/console.html

At this time in neutron-server logs DBDeadlock occurs:

2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/managers.py", line 433, in 
_call_on_drivers
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/mech_agent.py", line 60, in 
create_port_precommit
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self._insert_provisioning_block(context)
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/mech_agent.py", line 83, in 
_insert_provisioning_block
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
provisioning_blocks.L2_AGENT_ENTITY)
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 159, in wrapped
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers return 
method(*args, **kwargs)
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/db/provisioning_blocks.py", line 74, in 
add_provisioning_component
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
context.session.add(record)
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 490, 
in __exit__
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self.rollback()
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
compat.reraise(exc_type, exc_value, exc_tb)
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 487, 
in __exit__
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self.commit()
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 392, 
in commit
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self._prepare_impl()
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 372, 
in _prepare_impl
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self.session.flush()
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2019, 
in flush
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self._flush(objects)
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2137, 
in _flush
2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
transaction.rollback(_capture_exception=True)
2016-10-04 

[Yahoo-eng-team] [Bug 1630259] [NEW] Rolling upgrade does not work well in Newton release

2016-10-04 Thread Anh Tran
Public bug reported:

I have 3 Controller nodes running in HA active/active mode, using MySQL as the shared database.
After upgrading Controller1, I started it again so that it could keep handling requests and the system would have no downtime.
But when a request was handled by Controller1, an error happened: "There is either no auth token in the request or the certificate issuer is not trusted. No auth context will be set". Keystone raised: KeyError: 'is_domain'

How to reproduce:
Follow this guide: 
http://docs.openstack.org/developer/keystone/upgrading.html#upgrading-without-downtime

# Controller1
$ sudo service apache2 stop

$ cd /opt/stack/keystone/
$ git checkout remotes/origin/stable/newton
$ git checkout -b stable/newton remotes/origin/stable/newton
$ sudo pip install -r requirements.txt --upgrade

$ keystone-manage doctor
$ keystone-manage db_sync --expand
$ keystone-manage db_sync --migrate
$ sudo python setup.py install
$ sudo service apache2 start

# Controller2 or any openstack clients
$ for i in {1..10}; do openstack neutron network list; done
...
503 Service Unavailable
The server is currently unavailable. Please try again at a later time
...

Full keystone log here: http://paste.openstack.org/show/584107/

After I upgraded all 3 Controller nodes following the same steps above
(skipping the db upgrade), the error never occurred again.

At step 9 in the guideline: "Upgrade all keystone nodes to the next release, 
and restart them one at a time..."
I think we will have downtime in this process, so I tried to upgrade
Controller1 first and then bring it online, to ensure that the system had no
downtime.
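The KeyError in the log suggests that token data produced by the old release simply lacks the 'is_domain' field that the upgraded code indexes unconditionally. A hypothetical sketch of the defensive lookup that tolerates both formats (only the key name comes from the log; the payload shape and default are assumed):

```python
# Token payload as a pre-upgrade node would produce it: no 'is_domain' key.
old_token = {"project_id": "abc123", "user_id": "def456"}

def project_is_domain(token):
    # token["is_domain"] raises KeyError on old payloads;
    # .get() with a default accepts both old and new formats.
    return token.get("is_domain", False)

assert project_is_domain(old_token) is False
assert project_is_domain({"is_domain": True}) is True
```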

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1630259

Title:
  Rolling upgrade does not work well in Newton release

Status in OpenStack Identity (keystone):
  New

Bug description:
  I have 3 Controller nodes running in HA active/active mode, using MySQL as the shared database.
  After upgrading Controller1, I started it again so that it could keep handling requests and the system would have no downtime.
  But when a request was handled by Controller1, an error happened: "There is either no auth token in the request or the certificate issuer is not trusted. No auth context will be set". Keystone raised: KeyError: 'is_domain'

  How to reproduce:
  Follow this guide: 
http://docs.openstack.org/developer/keystone/upgrading.html#upgrading-without-downtime

  # Controller1
  $ sudo service apache2 stop

  $ cd /opt/stack/keystone/
  $ git checkout remotes/origin/stable/newton
  $ git checkout -b stable/newton remotes/origin/stable/newton
  $ sudo pip install -r requirements.txt --upgrade

  $ keystone-manage doctor
  $ keystone-manage db_sync --expand
  $ keystone-manage db_sync --migrate
  $ sudo python setup.py install
  $ sudo service apache2 start

  # Controller2 or any openstack clients
  $ for i in {1..10}; do openstack neutron network list; done
  ...
  503 Service Unavailable
  The server is currently unavailable. Please try again at a later time
  ...

  Full keystone log here: http://paste.openstack.org/show/584107/

  After I upgraded all 3 Controller nodes following the same steps above
  (skipping the db upgrade), the error never occurred again.

  At step 9 in the guideline: "Upgrade all keystone nodes to the next release, 
and restart them one at a time..."
  I think we will have downtime in this process, so I tried to upgrade
  Controller1 first and then bring it online, to ensure that the system had no
  downtime.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1630259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629040] Re: Incorrect hyper-v driver capability

2016-10-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/381081
Committed: 
https://git.openstack.org/cgit/openstack/compute-hyperv/commit/?id=3050e9925eaedef6c752e995e1d90c73d5222deb
Submitter: Jenkins
Branch:master

commit 3050e9925eaedef6c752e995e1d90c73d5222deb
Author: Lucian Petrut 
Date:   Mon Oct 3 15:30:11 2016 +0300

Disable 'supports_migrate_to_same_host' HyperV driver capability

The Hyper-V driver incorrectly enables the
'supports_migrate_to_same_host' capability.

This capability seems to have been introduced having the VMWare
cluster architecture in mind, but it leads to unintended behavior
in case of the HyperV driver.

For this reason, the Hyper-V CI is failing on a recently introduced
tempest test, which asserts that the host has changed.

This change disables this driver capability.

(cherry picked from commit Ibb4f1d4e40ccc98dc297e463b127772a49207d9a)

Change-Id: I9325055e5ff0757ac50bcfe4929d4c5e6e665e41
Closes-Bug: #1629040
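The change described in the commit amounts to flipping one driver capability flag; a sketch (only the flag name comes from the commit message, the dict layout is illustrative):

```python
# Driver capabilities as a plain dict, for illustration only.
capabilities = {
    # Incorrectly enabled: lets the scheduler cold-migrate an instance
    # back onto the host it is already on.
    "supports_migrate_to_same_host": True,
}

# The fix disables the capability so a different host must be chosen.
capabilities["supports_migrate_to_same_host"] = False

assert capabilities["supports_migrate_to_same_host"] is False
```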


** Changed in: compute-hyperv
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629040

Title:
  Incorrect hyper-v driver capability

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Hyper-V driver incorrectly enables the
  'supports_migrate_to_same_host' capability.

  This capability seems to have been introduced having the VMWare
  cluster architecture in mind, but it leads to unintended behavior in
  case of the HyperV driver.

  For this reason, the Hyper-V CI is failing on the test_cold_migration
  tempest test, which asserts that the host has changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1629040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1627691] Re: non-existent namespace errors in DHCP agent

2016-10-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/376010
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=98de72e9f32090a93c9c1b6f45cc0f2385c27792
Submitter: Jenkins
Branch:master

commit 98de72e9f32090a93c9c1b6f45cc0f2385c27792
Author: Kevin Benton 
Date:   Sat Sep 24 01:53:38 2016 -0700

Don't try to delete non-existent namespace

The namespace in the DHCP agent may already have been
deleted by a previous event when _destroy_namespace_and_port
is called (e.g. all subnets deleted from network and then network
is deleted). To avoid log errors every time this happens, check
for the existence of the namespace before trying to delete it.

Closes-Bug: #1627691
Change-Id: I204ba7a0de056f13af505541d67f0acdd70fd54d
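The check-before-delete pattern from the commit can be sketched generically (the directory layout and helper name are illustrative, not neutron's actual API):

```python
import os
import shutil
import tempfile

NETNS_DIR = tempfile.mkdtemp()  # stand-in for /var/run/netns

def delete_namespace(name):
    """Remove a namespace entry only if it is still present."""
    path = os.path.join(NETNS_DIR, name)
    if not os.path.exists(path):
        # Already gone (e.g. removed by an earlier event): return quietly
        # instead of letting the removal fail and log an ERROR.
        return False
    os.rmdir(path)
    return True

os.mkdir(os.path.join(NETNS_DIR, "qdhcp-test"))
assert delete_namespace("qdhcp-test") is True   # first call removes it
assert delete_namespace("qdhcp-test") is False  # second call is a no-op
shutil.rmtree(NETNS_DIR)
```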


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627691

Title:
  non-existent namespace errors in DHCP agent

Status in neutron:
  Fix Released

Bug description:
  The DHCP agent log gets sprinkled with ERROR logs when a call to
  disable DHCP for a network happens twice. This can happen if an
  agent's port is deleted and then a delete network call for its network
  happens before it resyncs (both events call 'disable' on the driver).

  
  
http://logs.openstack.org/91/375791/9/check/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/16e9b8f/logs/screen-q-dhcp.txt.gz?level=TRACE

  2016-09-25 12:56:37.463 15334 ERROR neutron.agent.linux.utils [req-
  3882fc97-98a1-410e-8638-4de87841e5ee - -] Exit code: 1; Stdin: ;
  Stdout: ; Stderr: Cannot remove namespace file "/var/run/netns/qdhcp-
  3e0f2433-c53d-43e4-8c1e-5c6b863ad693": No such file or directory

  2016-09-25 12:56:37.463 15334 WARNING neutron.agent.linux.dhcp 
[req-3882fc97-98a1-410e-8638-4de87841e5ee - -] Failed trying to delete 
namespace: qdhcp-3e0f2433-c53d-43e4-8c1e-5c6b863ad693
  2016-09-25 12:57:24.774 15334 ERROR neutron.agent.linux.utils 
[req-113e268a-f42d-4a45-8e4e-e5530a14f43f - -] Exit code: 1; Stdin: ; Stdout: ; 
Stderr: Cannot remove namespace file 
"/var/run/netns/qdhcp-82696d3a-3ef9-4744-a685-cfd38730b541": No such file or 
directory

  2016-09-25 12:57:24.774 15334 WARNING neutron.agent.linux.dhcp 
[req-113e268a-f42d-4a45-8e4e-e5530a14f43f - -] Failed trying to delete 
namespace: qdhcp-82696d3a-3ef9-4744-a685-cfd38730b541
  2016-09-25 12:57:32.672 15334 ERROR neutron.agent.linux.utils 
[req-702f653b-f69b-485b-917f-9231622a5fae - -] Exit code: 1; Stdin: ; Stdout: ; 
Stderr: Cannot remove namespace file 
"/var/run/netns/qdhcp-4a536dcd-5047-4c39-91cb-30ab11cb3d73": No such file or 
directory

  2016-09-25 12:57:32.673 15334 WARNING neutron.agent.linux.dhcp [req-
  702f653b-f69b-485b-917f-9231622a5fae - -] Failed trying to delete
  namespace: qdhcp-4a536dcd-5047-4c39-91cb-30ab11cb3d73

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617299] Re: NFS based Nova Live Migration erratically fails

2016-10-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/366857
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1af73d1fb3169c5b3cce77d94316922496bbaf9a
Submitter: Jenkins
Branch:master

commit 1af73d1fb3169c5b3cce77d94316922496bbaf9a
Author: Tom Patzig 
Date:   Wed Sep 7 11:16:49 2016 +0200

refresh instances_path when shared storage used

When doing live migration with shared storage, it erratically happens
that the check for the shared storage test_file fails. Because the shared
volume is under heavy IO (many instances on many compute nodes) the client
does not immediately see the new content of the folder. This delay
could take up to 30s.
This can be fixed if the client is forced to refresh the directories
content, which can be achieved by 'touch' on the directory. Doing so,
the test_file is visible instantly, within ms.
The patch adds a 'touch' on instances_path in 
check_shared_storage_test_file,
before checking the existence of the file.

Change-Id: I16be39142278517f43e6eca3441a56cbc9561113
Closes-Bug: #1617299


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1617299

Title:
  NFS based Nova Live Migration erratically fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Hello,

  In our production OpenStack environment we have seen over the last few weeks that Nova VM live migrations fail.
  Currently this is only visible in our automated test environment: an automated test is started every 15 minutes, and it fails 3-4 times a day.

  On the Nova instance path we have mounted a central NetApp NFS share
  to support real Live migrations between different hypervisors.

  When we analysed the issue we found the error message and trace:
  BadRequest:  is not on shared storage: Live migration can not 
be used without shared storage except a booted from volume VM which does not 
have a local disk. (HTTP 400) (Request-ID: 
req-8e709fd1-9d72-453b-b4b1-1f26112ea3d3)
   
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/rally/task/runner.py", line 66, in 
_run_scenario_once
  getattr(scenario_inst, method_name)(**scenario_kwargs)
File 
"/usr/lib/python2.7/site-packages/rally/plugins/openstack/scenarios/nova/servers.py",
 line 640, in boot_and_live_migrate_server
  block_migration, disk_over_commit)
File "/usr/lib/python2.7/site-packages/rally/task/atomic.py", line 84, in 
func_atomic_actions
  f = func(self, *args, **kwargs)
File 
"/usr/lib/python2.7/site-packages/rally/plugins/openstack/scenarios/nova/utils.py",
 line 721, in _live_migrate
  disk_over_commit=disk_over_commit)
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 433, 
in live_migrate
  disk_over_commit)
File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 
370, in substitution
  return methods[-1].func(obj, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 
1524, in live_migrate
  'disk_over_commit': disk_over_commit})
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 
1691, in _action
  info=info, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 
1702, in _action_return_resp_and_body
  return self.api.client.post(url, body=body)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 461, in 
post
  return self._cs_request(url, 'POST', **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 436, in 
_cs_request
  resp, body = self._time_request(url, method, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 409, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 403, in 
request
  raise exceptions.from_response(resp, body, url, method)
  BadRequest:  is not on shared storage: Live migration can not 
be used without shared storage except a booted from volume VM which does not 
have a local disk. (HTTP 400) (Request-ID: 
req-8e709fd1-9d72-453b-b4b1-1f26112ea3d3)
   
  We examined the respective hypervisors for some problems with the NFS 
share/mount, but everything looks really good. Also the message log file shows 
no issues during the test timeframe.
   
  The next step was to examine the Nova code for a hint as to why Nova raises such an error.
  In the Nova code we found the procedure Nova uses to check whether there is a shared filesystem between the source and destination hypervisors.
   
  In "nova/nova/virt/libvirt/driver.py"
   
  In function 

[Yahoo-eng-team] [Bug 1533899] Re: Add support for proxies

2016-10-04 Thread Stuart Bishop
Snap layer is using @mvo's approach above.

** Changed in: layer-snap
 Assignee: (unassigned) => Stuart Bishop (stub)

** Changed in: layer-snap
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1533899

Title:
  Add support for proxies

Status in cloud-init:
  New
Status in Snap Layer:
  Fix Released
Status in Snapcraft:
  Invalid
Status in Snappy:
  Triaged
Status in snapweb:
  Invalid

Bug description:
  Currently ubuntu-core does not support proxy configuration, which is a problem, for example, for running "webdm" correctly: "Error: Get https://search.apps.ubuntu.com/api/v1/search? read: connection refused".
  I can configure a proxy with export http_proxy="http://x.x.x.x.x", but that only affects the session, not the whole system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1533899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613116] Re: openstack mitaka mirantis failed install neutron and compute

2016-10-04 Thread Ann Taraday
SQLite is not a supported backend for running migrations in production.
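For production deployments neutron expects a real database backend; a hypothetical neutron.conf fragment (hostname, user and password are placeholders) pointing neutron-db-manage at MySQL instead of SQLite:

```ini
[database]
# Use MySQL (or PostgreSQL) rather than the SQLite default when
# running neutron-db-manage upgrade in production.
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutrondb
```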

** Changed in: neutron
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1613116

Title:
  openstack mitaka mirantis failed install neutron and compute

Status in neutron:
  Opinion

Bug description:
using aptitude -t jessie-mitaka-backports install openstack-dashboard openstack-dashboard-apache ceilometer-agent-compute openstack-compute-node

  after adding repositories
deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main
deb-src http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main
deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports-nochange main
deb-src http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports-nochange main

  fresh install of Jessie 8.5.0

  creating database neutrondb: success.
  verifying database neutrondb exists: success.
  No handlers could be found for logger "oslo_config.cfg"
  INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
  INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
  INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, 
nsxv_vdr_metadata.py
  INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 
599c6a226151, neutrodb_ipam
  INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 
52c5312f6baf, Initial operations in support of address scopes
  INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 
313373c0ffee, Flavor framework
  INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 
8675309a5c4f, network_rbac
  INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 
45f955889773, quota_usage
  INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 
26c371498592, subnetpool hash
  INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 
1c844d1677f7, add order to dnsnameservers
  INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 
1b4c6e320f79, address scope support in subnetpool
  INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 
48153cb5f051, qos db changes
  INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 
9859ac9c136, quota_reservations
  INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 
34af2b5c5a59, Add dns_name to Port
  INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 
59cb5b6cf4d, Add availability zone
  INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 
13cfb89f881a, add is_default to subnetpool
  /usr/lib/python2.7/dist-packages/alembic/util/messaging.py:69: UserWarning: 
Skipping unsupported ALTER for creation of implicit constraint
warnings.warn(msg)
  INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 
32e5974ada25, Add standard attribute table
  INFO  [alembic.runtime.migration] Running upgrade 32e5974ada25 -> 
ec7fcfbf72ee, Add network availability zone
  INFO  [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> 
dce3ec7a25c9, Add router availability zone
  INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> 
c3a73f615e4, Add ip_version to AddressScope
  Traceback (most recent call last):
File "/usr/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
749, in main
  return_val |= bool(CONF.command.func(config, CONF.command.name))
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
225, in do_upgrade
  desc=branch, sql=CONF.command.sql)
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
127, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 174, in 
upgrade
  script.run_env()
File "/usr/lib/python2.7/dist-packages/alembic/script/base.py", line 397, 
in run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, 
in load_python_file
  module = load_module_py(module_id, path)
File "/usr/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in 
load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 126, in 
  run_migrations_online()
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in run_migrations_online
  context.run_migrations()

[Yahoo-eng-team] [Bug 1630161] [NEW] nova image-list is deprecated, but it should work even now

2016-10-04 Thread Attila Fazekas
Public bug reported:

On newton it looks like:
$ nova image-list
WARNING: Command image-list is deprecated and will be removed after Nova 15.0.0 
is released. Use python-glanceclient or openstackclient instead.
ERROR (VersionNotFoundForAPIMethod): API version 'API Version Major: 2, Minor: 
37' is not supported on 'list' method.

It is supposed to still be supported, since Newton is only version 14.


nova (14.0.0.0rc2.dev21)
python-novaclient (6.0.0)

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: python-novaclient
 Importance: Undecided
 Status: New

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630161

Title:
  nova image-list is deprecated, but it should work even now

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  New

Bug description:
  On newton it looks like:
  $ nova image-list
  WARNING: Command image-list is deprecated and will be removed after Nova 
15.0.0 is released. Use python-glanceclient or openstackclient instead.
  ERROR (VersionNotFoundForAPIMethod): API version 'API Version Major: 2, 
Minor: 37' is not supported on 'list' method.

  It is supposed to still be supported, since Newton is only version 14.

  
  nova (14.0.0.0rc2.dev21)
  python-novaclient (6.0.0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630141] [NEW] The flavor metadatas changes are not saved well

2016-10-04 Thread Béla Vancsics
Public bug reported:

I created a new flavor and changed its metadata twice in the dashboard (Update Metadata, e.g. CIM Processor Allocation Setting -> Instruction Set Extension -> ARM:DSP and ARM:DSP and ARM:NEON; add and remove).
If I then try to change these settings again, the changes are not saved correctly.

Steps:
1) Create a new flavor
2) Update Metadata
3) Add new Existing Metadata (e.g.: CIM Processor Allocation Setting -> 
Instruction Set Extension -> select: ARM:DSP and ARM:DSP and ARM:NEON) and save 
it
4) Update Metadata again
5) Remove the CIM Processor Allocation Setting Existing Metadata and save it

Results: the Existing Metadata is not changed (ARM:DSP, ARM:DSP and
ARM:NEON remain "active"; they are still in the Existing Metadata box)

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- The flavor metadatas changes are not saved well (in dashboard)
+ The flavor metadatas changes are not saved well

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630141

Title:
  The flavor metadatas changes are not saved well

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I created a new flavor and changed its metadata twice in the dashboard (Update Metadata, e.g. CIM Processor Allocation Setting -> Instruction Set Extension -> ARM:DSP and ARM:DSP and ARM:NEON; add and remove).
  If I then try to change these settings again, the changes are not saved correctly.

  Steps:
  1) Create a new flavor
  2) Update Metadata
  3) Add new Existing Metadata (e.g.: CIM Processor Allocation Setting -> 
Instruction Set Extension -> select ARM:DSP and ARM:NEON) and save it
  4) Update Metadata again
  5) Remove the CIM Processor Allocation Setting Existing Metadata and save it

  Result: the Existing Metadata is unchanged (ARM:DSP and ARM:NEON are
  still "active" and remain in the Existing Metadata box)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1630141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630131] [NEW] Support easy navigation from security groups tab and Edit security group instances

2016-10-04 Thread Prathyusha Vanka
Public bug reported:

After an instance is created, it is difficult to map the appropriate security 
group to it via Edit Security Groups.
An extra button in the Security Groups tab for attaching a newly created 
security group to instances would be helpful, similar to the Associate button 
in the Floating IPs tab.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "security_mapping.PNG"
   
https://bugs.launchpad.net/bugs/1630131/+attachment/4753990/+files/security_mapping.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1630131

Title:
  Support easy navigation from security groups tab and Edit security
  group instances

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After an instance is created, it is difficult to map the appropriate 
security group to it via Edit Security Groups.
  An extra button in the Security Groups tab for attaching a newly created 
security group to instances would be helpful, similar to the Associate button 
in the Floating IPs tab.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1630131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630132] [NEW] Unable to create subnet pool with less no. of IP

2016-10-04 Thread kesper
Public bug reported:

In Mitaka, if I want to create a subnet such as 10.10.1.212-10.10.1.215
with gateway 10.10.1.214 from the subnet pool 10.10.1.0/24, I can do so
via:

neutron subnet-create --allocation-pool
start=10.10.1.212,end=10.10.1.215 --gateway 10.10.1.214 --subnet-pool
demo_pool private_network 10.10.1.215/24


But in Newton the same command no longer works: I cannot combine a fixed 
range with an allocation pool, and even when I use only the subnet pool it 
takes the first available IP, so 10.10.1.213 becomes the gateway instead of 
10.10.1.214.

This started after this patch merged into devstack:

https://review.openstack.org/#/c/356026/
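
For reference, the addresses in the report are internally consistent: the 
requested gateway and allocation pool all fall inside the pool's 10.10.1.0/24 
prefix, so the Mitaka behaviour of honouring --gateway 10.10.1.214 is a valid 
request. A minimal sketch with Python's stdlib ipaddress module (addresses 
taken from the report; the check is an illustration only, not neutron code):

```python
import ipaddress

# Addresses from the bug report; this only illustrates that the
# Mitaka-style request is self-consistent, it is not neutron code.
pool_prefix = ipaddress.ip_network("10.10.1.0/24")
gateway = ipaddress.ip_address("10.10.1.214")
alloc_start = ipaddress.ip_address("10.10.1.212")
alloc_end = ipaddress.ip_address("10.10.1.215")

# Gateway and allocation pool both fit inside the subnet pool prefix.
assert gateway in pool_prefix
assert alloc_start in pool_prefix and alloc_end in pool_prefix

# The requested gateway also lies inside the allocation pool, so there
# is no obvious reason for Newton to substitute a different address.
assert alloc_start <= gateway <= alloc_end
print("requested gateway/pool fit inside", pool_prefix)
```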

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630132

Title:
  Unable to create subnet pool with less no. of IP

Status in neutron:
  New

Bug description:
  In Mitaka, if I want to create a subnet such as
  10.10.1.212-10.10.1.215, with gateway 10.10.1.214 from the subnet pool
  10.10.1.0/24, I can do so via:

  neutron subnet-create --allocation-pool
  start=10.10.1.212,end=10.10.1.215 --gateway 10.10.1.214 --subnet-pool
  demo_pool private_network 10.10.1.215/24

  
  But in Newton the same command no longer works: I cannot combine a fixed 
range with an allocation pool, and even when I use only the subnet pool it 
takes the first available IP, so 10.10.1.213 becomes the gateway instead of 
10.10.1.214.

  This started after this patch merged into devstack:

  https://review.openstack.org/#/c/356026/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630134] [NEW] Networking API v2.0 (CURRENT): Update Network Request missing the 'qos-policy-id' parameter.

2016-10-04 Thread Gilles Dubreuil
Public bug reported:

The API reference http://developer.openstack.org/api-ref/networking/v2/index.html
doesn't reflect the fact that the QoS policy linked to an existing
network can be updated:


$ curl -s -H "X-Auth-Token: $OS_TOKEN" http://${OS_HOST}:9696/v2.0/qos/policies | python -mjson.tool
{
    "policies": [
        {
            "description": "This policy limits the ports to 10Mbit max.",
            "id": "c4e80891-5d77-480f-8970-a7223fd72f4b",
            "name": "10Mbit",
            "rules": [],
            "shared": false,
            "tenant_id": "5a23535b5dda4770bccc856d0167e53f"
        }
    ]
}

$ curl -s -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" http://${OS_HOST}:9696/v2.0/networks/b18d3079-fcaa-41b7-8aec-0d009789fff8 | python -mjson.tool
{
    "network": {
        "admin_state_up": false,
        "id": "b18d3079-fcaa-41b7-8aec-0d009789fff8",
        "mtu": 0,
        "name": "cristalnet",
        "port_security_enabled": true,
        "provider:network_type": "vxlan",
        "provider:physical_network": null,
        "provider:segmentation_id": 39,
        "qos_policy_id": null,
        "router:external": false,
        "shared": true,
        "status": "ACTIVE",
        "subnets": [],
        "tenant_id": "5a23535b5dda4770bccc856d0167e53f",
        "vlan_transparent": null
    }
}

$ curl -s \
>   -X PUT \
>   -H "X-Auth-Token: $OS_TOKEN" \
>   -H "Content-Type: application/json" \
>   -d '{"network": { "qos_policy_id": "c4e80891-5d77-480f-8970-a7223fd72f4b" } }' \
>   http://${OS_HOST}:9696/v2.0/networks/b18d3079-fcaa-41b7-8aec-0d009789fff8 | python -mjson.tool
{
    "network": {
        "admin_state_up": false,
        "id": "b18d3079-fcaa-41b7-8aec-0d009789fff8",
        "mtu": 0,
        "name": "cristalnet",
        "port_security_enabled": true,
        "provider:network_type": "vxlan",
        "provider:physical_network": null,
        "provider:segmentation_id": 39,
        "qos_policy_id": "c4e80891-5d77-480f-8970-a7223fd72f4b",
        "router:external": false,
        "shared": true,
        "status": "ACTIVE",
        "subnets": [],
        "tenant_id": "5a23535b5dda4770bccc856d0167e53f",
        "vlan_transparent": null
    }
}
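
The PUT above can also be issued without curl. A minimal stdlib-only sketch 
that builds the same request (the network and policy IDs are the ones from 
this report; <OS_TOKEN> and <OS_HOST> are placeholders for your own 
environment, and the commented urllib line shows how it would actually be 
sent):

```python
import json
import urllib.request

# IDs from the curl transcript; token and host are placeholders.
NETWORK_ID = "b18d3079-fcaa-41b7-8aec-0d009789fff8"
POLICY_ID = "c4e80891-5d77-480f-8970-a7223fd72f4b"

body = json.dumps({"network": {"qos_policy_id": POLICY_ID}}).encode()
req = urllib.request.Request(
    url=f"http://<OS_HOST>:9696/v2.0/networks/{NETWORK_ID}",
    data=body,
    headers={"X-Auth-Token": "<OS_TOKEN>",
             "Content-Type": "application/json"},
    method="PUT",
)

# Against a real endpoint you would now call:
#   resp = urllib.request.urlopen(req)
print(req.get_method(), req.full_url)
print(body.decode())
```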

** Affects: neutron
 Importance: Undecided
 Status: New
