[Yahoo-eng-team] [Bug 1649965] Re: dhcp agent floods logs when full path interface_driver is used

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/418016
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6f20f1c0ac65c92019cd6d6f0935fab5bab9696d
Submitter: Jenkins
Branch: master

commit 6f20f1c0ac65c92019cd6d6f0935fab5bab9696d
Author: Kevin Benton 
Date:   Mon Jan 9 10:35:24 2017 -0800

Suppress annoying "Could not load" stevedore warnings

Override the DriverManager __init__ method to be able to
pass warn_on_missing_entrypoint=False to the
NamedExtensionManager so we don't mislead operators with
"Could not load" warning messages when plugins ultimately
load just fine by full class path.

Closes-Bug: #1649965
Change-Id: I52964905d12c0f35c86862ac04d2c6e13db80358
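The behavior behind the fix can be sketched with a plain importlib stand-in (illustrative alias table and function, not neutron's actual code; neutron uses stevedore, where the fix passes warn_on_missing_entrypoint=False through an overridden DriverManager __init__): a name that misses the alias/entry-point table can still load fine as a full dotted path, so warning at the lookup step is misleading.

```python
import importlib

# Illustrative alias table standing in for stevedore entry points.
ALIASES = {'ordered': 'collections.OrderedDict'}

def load_driver(name):
    """Resolve a short alias if present, else treat the name as a full
    dotted path -- the case that used to emit "Could not load"."""
    path = ALIASES.get(name, name)
    module_name, _, class_name = path.rpartition('.')
    return getattr(importlib.import_module(module_name), class_name)
```

Both spellings resolve to the same class, which is why the alias-miss warning misled operators.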


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649965

Title:
  dhcp agent floods logs when full path interface_driver is used

Status in neutron:
  Fix Released

Bug description:
  If interface_driver is configured with the full class path of its driver,
  every attempt to import the driver issues a warning log message.

  e.g.

  2016-12-14 16:30:20.579 145867 WARNING stevedore.named [req-c4b047d5-5332-4780-86f5-605cadba95a8 4a3d7b1190284a2ca20e72d4f9470b18 47a9c124ce514bffb655f8feecfcce1b - - -] Could not load neutron.agent.linux.interface.OVSInterfaceDriver
  2016-12-14 16:30:20.620 145867 WARNING stevedore.named [req-a3b4c6d0-1a22-4301-a6fa-261598ca4342 4a3d7b1190284a2ca20e72d4f9470b18 47a9c124ce514bffb655f8feecfcce1b - - -] Could not load neutron.agent.linux.interface.OVSInterfaceDriver
  2016-12-14 16:30:20.661 145867 WARNING stevedore.named [req-e3d5a931-4cb8-4d56-9cba-b39131b5be52 4a3d7b1190284a2ca20e72d4f9470b18 47a9c124ce514bffb655f8feecfcce1b - - -] Could not load neutron.agent.linux.interface.OVSInterfaceDriver

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656183] [NEW] Delete tags return 200 status code but api-ref says 204

2017-01-12 Thread Ghanshyam Mann
Public bug reported:

The delete tags API (DELETE metadefs/namespaces/%s/tags) actually returns a
200 status code because a ResponseSerializer is not implemented for
delete_tags in
https://github.com/openstack/glance/blob/master/glance/api/v2/metadef_namespaces.py

but the api-ref below says the status code will be 204:

http://developer.openstack.org/api-ref/image/v2/metadefs-index?expanded=delete-all-tag-definitions-detail#delete-all-tag-definitions

Tests for tags are going to be added in tempest:
https://review.openstack.org/#/c/403998/
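A minimal, hypothetical sketch of the mismatch (the class and function names below are illustrative, not Glance's actual code): a WSGI-style dispatcher falls back to its default 200 status when no response serializer is registered for an action.

```python
class Response:
    """Stand-in for a WSGI response object; frameworks default to 200."""
    def __init__(self):
        self.status_int = 200

def dispatch(action_result, serializer=None):
    resp = Response()
    if serializer is not None:
        serializer(resp, action_result)  # e.g. sets 204 for a DELETE
    return resp  # with no serializer, the 200 default leaks out

def delete_tags_serializer(resp, result):
    resp.status_int = 204  # what the api-ref promises

without = dispatch(None)                           # current behavior
with_ser = dispatch(None, delete_tags_serializer)  # documented behavior
```

Registering a serializer for delete_tags (or fixing the api-ref to say 200) would close the gap either way.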

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1656183

Title:
  Delete tags return 200 status code but api-ref says 204

Status in Glance:
  New

Bug description:
  The delete tags API (DELETE metadefs/namespaces/%s/tags) actually returns a
  200 status code because a ResponseSerializer is not implemented for
  delete_tags in
  https://github.com/openstack/glance/blob/master/glance/api/v2/metadef_namespaces.py

  but the api-ref below says the status code will be 204:

  http://developer.openstack.org/api-ref/image/v2/metadefs-index?expanded=delete-all-tag-definitions-detail#delete-all-tag-definitions

  Tests for tags are going to be added in tempest:
  https://review.openstack.org/#/c/403998/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1656183/+subscriptions



[Yahoo-eng-team] [Bug 1655014] Re: Job gate-keystone-dsvm-functional-ubuntu-xenial is broken for stable/newton

2017-01-12 Thread Steve Martinelli
** Changed in: keystone
   Status: Fix Committed => Fix Released

** Changed in: keystone
   Importance: Undecided => High

** Changed in: keystone
 Assignee: (unassigned) => Steve Martinelli (stevemar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1655014

Title:
  Job gate-keystone-dsvm-functional-ubuntu-xenial is broken for
  stable/newton

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This is a test patch https://review.openstack.org/#/c/417831/.

  Job gate-keystone-dsvm-functional-ubuntu-xenial seems to be broken,
  gate logs:
  http://logs.openstack.org/31/417831/1/check/gate-keystone-dsvm-functional-ubuntu-xenial/63e55d8/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1655014/+subscriptions



[Yahoo-eng-team] [Bug 1645553] Re: [api] relationship links result in 404

2017-01-12 Thread Steve Martinelli
** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645553

Title:
  [api] relationship links result in 404

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  The detail doc linked from the identity v3 api-ref for role
  assignments is not the right one.

  api-ref:
  http://developer.openstack.org/api-ref/identity/v3/?expanded=list-effective-role-assignments-detail

  inaccessible link:
  http://docs.openstack.org/api/openstack-identity/3/rel/role_assignments

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645553/+subscriptions



[Yahoo-eng-team] [Bug 1649574] Re: vpnaas gate jobs are failing with openswan

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/410511
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=55b0e6618f8a2221aa2ee36c45174d714f3718c6
Submitter: Jenkins
Branch: master

commit 55b0e6618f8a2221aa2ee36c45174d714f3718c6
Author: YAMAMOTO Takashi 
Date:   Wed Dec 14 11:47:36 2016 +0900

devstack: Switch the default to strongswan

Closes-Bug: #1649574
Change-Id: I955546e4c63daacf0c8b4e979917eacb3a4e29d7
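The switch can be sketched as a devstack-style default (the variable name is illustrative, not necessarily the plugin's actual one; the function name is taken from the log below): make strongswan the default IPsec package, since openswan has no installation candidate on Ubuntu Xenial, while still letting a job override it.

```shell
# Default to strongswan; openswan is no longer packaged on Ubuntu Xenial.
IPSEC_PACKAGE=${IPSEC_PACKAGE:-strongswan}

neutron_agent_vpnaas_install_agent_packages() {
    # install_package is devstack's distro-agnostic installer.
    install_package "$IPSEC_PACKAGE"
}
```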


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649574

Title:
  vpnaas gate jobs are failing with openswan

Status in neutron:
  Fix Released

Bug description:
  The following jobs are failing:

  gate-neutron-vpnaas-dsvm-api-ubuntu-xenial-nv
  gate-neutron-vpnaas-dsvm-functional-ubuntu-xenial

  e.g.
  http://logs.openstack.org/78/408878/1/check/gate-neutron-vpnaas-dsvm-functional-ubuntu-xenial/31c0c21/console.html

  2016-12-13 03:23:23.122072 | + 
/opt/stack/new/neutron-vpnaas/devstack/plugin.sh:neutron_agent_vpnaas_install_agent_packages:14
 :   install_package openswan
  2016-12-13 03:23:23.123674 | + functions-common:install_package:1285:   
update_package_repo
  2016-12-13 03:23:23.125364 | + functions-common:update_package_repo:1257 :   
NO_UPDATE_REPOS=False
  2016-12-13 03:23:23.126944 | + functions-common:update_package_repo:1258 :   
REPOS_UPDATED=True
  2016-12-13 03:23:23.128576 | + functions-common:update_package_repo:1259 :   
RETRY_UPDATE=False
  2016-12-13 03:23:23.130157 | + functions-common:update_package_repo:1261 :   
[[ False = \T\r\u\e ]]
  2016-12-13 03:23:23.131643 | + functions-common:update_package_repo:1265 :   
is_ubuntu
  2016-12-13 03:23:23.132903 | + functions-common:is_ubuntu:466   :   
[[ -z deb ]]
  2016-12-13 03:23:23.134496 | + functions-common:is_ubuntu:469   :   
'[' deb = deb ']'
  2016-12-13 03:23:23.136126 | + functions-common:update_package_repo:1266 :   
apt_get_update
  2016-12-13 03:23:23.137769 | + functions-common:apt_get_update:1059 :   
[[ True == \T\r\u\e ]]
  2016-12-13 03:23:23.139346 | + functions-common:apt_get_update:1059 :   
[[ False != \T\r\u\e ]]
  2016-12-13 03:23:23.140965 | + functions-common:apt_get_update:1060 :   
return
  2016-12-13 03:23:23.142465 | + functions-common:install_package:1286:   
real_install_package openswan
  2016-12-13 03:23:23.143939 | + functions-common:real_install_package:1271 :   
is_ubuntu
  2016-12-13 03:23:23.145398 | + functions-common:is_ubuntu:466   :   
[[ -z deb ]]
  2016-12-13 03:23:23.146901 | + functions-common:is_ubuntu:469   :   
'[' deb = deb ']'
  2016-12-13 03:23:23.148310 | + functions-common:real_install_package:1272 :   
apt_get install openswan
  2016-12-13 03:23:23.149982 | + functions-common:apt_get:1087:   
local xtrace result
  2016-12-13 03:23:23.152192 | ++ functions-common:apt_get:1088:   
set +o
  2016-12-13 03:23:23.152282 | ++ functions-common:apt_get:1088:   
grep xtrace
  2016-12-13 03:23:23.155615 | + functions-common:apt_get:1088:   
xtrace='set -o xtrace'
  2016-12-13 03:23:23.156974 | + functions-common:apt_get:1089:   
set +o xtrace
  2016-12-13 03:23:23.161695 | + functions-common:apt_get:1100:   
sudo DEBIAN_FRONTEND=noninteractive http_proxy= https_proxy= no_proxy= apt-get 
--option Dpkg::Options::=--force-confold --assume-yes install openswan
  2016-12-13 03:23:23.188474 | Reading package lists...
  2016-12-13 03:23:23.334278 | Building dependency tree...
  2016-12-13 03:23:23.335012 | Reading state information...
  2016-12-13 03:23:23.350723 | Package openswan is not available, but is 
referred to by another package.
  2016-12-13 03:23:23.350762 | This may mean that the package is missing, has 
been obsoleted, or
  2016-12-13 03:23:23.350786 | is only available from another source
  2016-12-13 03:23:23.350799 | 
  2016-12-13 03:23:23.352792 | E: Package 'openswan' has no installation 
candidate
  2016-12-13 03:23:23.356052 | + functions-common:apt_get:1104:   
result=100
  2016-12-13 03:23:23.357655 | + functions-common:apt_get:1107:   
time_stop apt-get
  2016-12-13 03:23:23.359003 | + functions-common:time_stop:2398  :   
local name
  2016-12-13 03:23:23.360321 | + functions-common:time_stop:2399  :   
local end_time
  2016-12-13 03:23:23.361866 | + functions-common:time_stop:2400  :   
local elapsed_time
  2016-12-13 03:23:23.363173 | + functions-common:time_stop:2401  :   
local total
  2016-12-13 03:23:23.364458 | + functions-common:time_stop:2402  :   
local start_time
  2016-12-13 03:23:23.365963 | + functions-common:time_stop:2404  :   
name=apt-get
  2016-12-13 03:23:23.367539 | + functions-common:time_stop:2405  :   

[Yahoo-eng-team] [Bug 1656156] [NEW] Interfaces reference old cinder client, should be openstackclient

2017-01-12 Thread Richard Jones
Public bug reported:

The following interface strings need to be updated:

 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/snapshots/_update_status.html
 9:the cinder snapshot-reset-state command.
 
 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volume_types/_associate_qos_spec.html
 8:  {% blocktrans %}This is equivalent to the cinder qos-associate 
and cinder qos-disassociate commands.{% endblocktrans %}
 
 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volume_types/_create_qos_spec.html
 10:  cinder qos-create command. Once the QoS Spec gets created,
 
 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volume_types/_create_volume_type.html
 11:  cinder type-create command. Once the volume type gets 
created,
 
 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volumes/_manage_volume.html
 11:This is equivalent to the cinder manage command.
 
 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volumes/_unmanage_volume.html
 11:This is equivalent to the cinder unmanage command.
 
 
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volumes/_update_status.html
 9:the cinder reset-state command.
 
 
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_accept_transfer.html
 6:{% trans "Ownership of a volume can be transferred from one project 
to another.  Accepting a transfer requires obtaining the Transfer ID and 
Authorization Key from the donor.  This is equivalent to the cinder 
transfer-accept command." %}
 
 
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_create_transfer.html
 6:{% trans 'Ownership of a volume can be transferred from one project 
to another. Once a volume transfer is created in a donor project, it then can 
be "accepted" by a recipient project. This is equivalent to the cinder 
transfer-create command.' %}
 
 
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_retype.html
 8:  This is equivalent to the cinder retype command.
 
 
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_upload_to_image.html
 8:  This is equivalent to the cinder upload-to-image command.

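For reference, the python-openstackclient equivalents of the cinder commands mentioned above are roughly as follows (this mapping is from memory, not from the bug; double-check against the openstackclient documentation for your release):

```
cinder reset-state           -> openstack volume set --state <state> <volume>
cinder snapshot-reset-state  -> openstack volume snapshot set --state <state> <snapshot>
cinder type-create           -> openstack volume type create <name>
cinder qos-create            -> openstack volume qos create <name>
cinder qos-associate         -> openstack volume qos associate <qos-spec> <volume-type>
cinder qos-disassociate      -> openstack volume qos disassociate <qos-spec> --volume-type <volume-type>
cinder retype                -> openstack volume set --type <new-type> <volume>
cinder transfer-create       -> openstack volume transfer request create <volume>
cinder transfer-accept       -> openstack volume transfer request accept <transfer-id>
cinder upload-to-image       -> openstack image create --volume <volume> <image-name>
```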
** Affects: horizon
 Importance: Medium
 Status: Triaged

** Changed in: horizon
   Status: New => Triaged

** Changed in: horizon
   Importance: Undecided => Medium

** Changed in: horizon
Milestone: None => ocata-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1656156

Title:
  Interfaces reference old cinder client, should be openstackclient

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  The following interface strings need to be updated:

   
openstack_dashboard/dashboards/admin/volumes/templates/volumes/snapshots/_update_status.html
   9:the cinder snapshot-reset-state command.
   
   
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volume_types/_associate_qos_spec.html
   8:  {% blocktrans %}This is equivalent to the cinder 
qos-associate and cinder qos-disassociate commands.{% 
endblocktrans %}
   
   
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volume_types/_create_qos_spec.html
   10:  cinder qos-create command. Once the QoS Spec gets created,
   
   
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volume_types/_create_volume_type.html
   11:  cinder type-create command. Once the volume type gets 
created,
   
   
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volumes/_manage_volume.html
   11:This is equivalent to the cinder manage command.
   
   
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volumes/_unmanage_volume.html
   11:This is equivalent to the cinder unmanage command.
   
   
openstack_dashboard/dashboards/admin/volumes/templates/volumes/volumes/_update_status.html
   9:the cinder reset-state command.
   
   
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_accept_transfer.html
   6:{% trans "Ownership of a volume can be transferred from one project 
to another.  Accepting a transfer requires obtaining the Transfer ID and 
Authorization Key from the donor.  This is equivalent to the cinder 
transfer-accept command." %}
   
   
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_create_transfer.html
   6:{% trans 'Ownership of a volume can be transferred from one project 
to another. Once a volume transfer is created in a donor project, it then can 
be "accepted" by a recipient project. This is equivalent to the cinder 
transfer-create command.' %}
   
   
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_retype.html
   8:  This is equivalent to the cinder retype command.
   
   

[Yahoo-eng-team] [Bug 1655833] Re: The error response for placement API doesn't work with i18n

2017-01-12 Thread Alex Xu
I think I understood that wrong; after some testing, _() probably
returns unicode as well, so that shouldn't be a problem.

Maybe exc.format_message() is really an old thing.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1655833

Title:
  The error response for placement API doesn't work with i18n

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In most of placement API code, we just put the exception into the error 
message like this
  
https://github.com/openstack/nova/blob/d57ce1dda839865c8060458760fa27dbbd7e2aee/nova/api/openstack/placement/handlers/resource_class.py#L108

  This is equivalent to str(exc), which won't work for i18n.

  We can use "exc.format_message()"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1655833/+subscriptions



[Yahoo-eng-team] [Bug 1656076] Re: The keystone server auth plugin methods could mismatch user_id in auth_context

2017-01-12 Thread Morgan Fainberg
Turns out the issue comes from the test suite not using the AuthContext
object. A new patch to ensure we are using AuthContext not a dict will
be proposed in lieu of the current fix.

** Changed in: keystone/mitaka
   Status: In Progress => Invalid

** Changed in: keystone/newton
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656076

Title:
  The keystone server auth plugin methods could mismatch user_id in
  auth_context

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Identity (keystone) mitaka series:
  Invalid
Status in OpenStack Identity (keystone) newton series:
  Invalid
Status in OpenStack Identity (keystone) ocata series:
  In Progress

Bug description:
  The keystone server blindly overwrites the auth_context.user_id in
  each auth method that is run. This means that the last auth_method
  that is run for a given authentication request dictates the user_id.

  While this is not exploitable externally without misconfiguration of
  the external plugin methods and supporting services, this is a bad
  state that could relatively easily result in someone ending up
  authenticated with the wrong user_id.

  The simplest fix will be to have the for loop in the authentication
  controller (which iterates over the methods) verify that the user_id
  does not change between the auth methods executed.

  
https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/auth/controllers.py#L550-L557
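  A minimal sketch of that guard (hypothetical names; the real loop lives
  in keystone/auth/controllers.py at the lines linked above): run each
  auth method, and fail the request outright if two methods resolve
  different user IDs.

```python
class Unauthorized(Exception):
    pass

def authenticate(auth_methods, auth_context):
    """auth_methods maps method name -> callable that may set
    auth_context['user_id']; reject any mid-request identity switch."""
    seen_user_id = None
    for name, method in auth_methods.items():
        method(auth_context)
        user_id = auth_context.get('user_id')
        if seen_user_id is None:
            seen_user_id = user_id
        elif user_id != seen_user_id:
            raise Unauthorized('auth method %r changed user_id' % name)
    return auth_context

# Two methods agreeing on the user is fine; a disagreeing one is rejected.
ok = authenticate(
    {'password': lambda c: c.update(user_id='u1'),
     'totp': lambda c: c.update(user_id='u1')}, {})
```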

  This has been marked as public security for hardening purposes, likely
  a "Class D":
  https://security.openstack.org/vmt-process.html#incident-report-taxonomy

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656076/+subscriptions



[Yahoo-eng-team] [Bug 1656144] [NEW] Please update or remove github repo

2017-01-12 Thread Jeremy Bicha
Public bug reported:

Please update this GitHub repo or remove it or at least update the
README to point to the Git repository you are using instead:

https://github.com/cloud-init/cloud-init

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1656144

Title:
  Please update or remove github repo

Status in cloud-init:
  New

Bug description:
  Please update this GitHub repo or remove it or at least update the
  README to point to the Git repository you are using instead:

  https://github.com/cloud-init/cloud-init

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1656144/+subscriptions



[Yahoo-eng-team] [Bug 1656025] Re: os-vif 1.4.0 breaks nova unit tests

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/419558
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d33a2b1cbb0fb497b8612ee66822cd48fe4acfe6
Submitter: Jenkins
Branch: master

commit d33a2b1cbb0fb497b8612ee66822cd48fe4acfe6
Author: Matt Riedemann 
Date:   Thu Jan 12 11:49:52 2017 -0500

Make unit tests work with os-vif 1.4.0

The expected VIFHostUser object in this test is setting the
vif_name field on the object which didn't actually exist until
version 1.1 of that object which is being released in os-vif 1.4.0.

The test passes against os-vif 1.3.0 and VIFHostUser 1.0 today
because the obj_to_primitive() routine will not include anything
that's not a field on the object, which is vif_name in this case.

But when moving to os-vif 1.4.0, we're hitting a failure: the
expected object has vif_name set but the actual object doesn't,
because although the vif_name field is defined, it's not actually
used yet in the code, so it's not set in the primitive and our
object comparison fails.

Change-Id: I1c27726d583a41ab69d9eab23e8484e7e047942d
Closes-Bug: #1656025
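A simplified model of the failure (illustrative classes, not oslo.versionedobjects itself): serialization emits only fields that were explicitly set, so two objects with identical field definitions can produce different primitives.

```python
class FakeVIF:
    """Toy versioned object: 'fields' declares what may exist, but
    obj_to_primitive() only emits what was actually set."""
    fields = ('address', 'vif_name')

    def __init__(self, **kwargs):
        self._set_fields = {k: v for k, v in kwargs.items()
                            if k in self.fields}

    def obj_to_primitive(self):
        return dict(self._set_fields)

expected = FakeVIF(address='aa:bb:cc', vif_name='nicdc065497-3c')
actual = FakeVIF(address='aa:bb:cc')  # vif_name declared, never set
```

The fix in the commit above adjusts the expected object rather than the serializer, since the unset field is correct behavior.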


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656025

Title:
  os-vif 1.4.0 breaks nova unit tests

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here in the patch that bumps upper-constraints to use os-vif
  1.4.0:

  http://logs.openstack.org/21/418421/4/check/gate-cross-nova-python27-db-ubuntu-xenial/376a0f3/console.html#_2017-01-11_10_21_37_885392

  'vif_name': u'nicdc065497-3c' is a new field in 1.4.0:

  https://review.openstack.org/#/c/390225/

  The nova unit tests are using a strict expected representation of the
  vif object at version 1.0 so the new field breaks things.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656025/+subscriptions



[Yahoo-eng-team] [Bug 1656127] [NEW] 404 error on contributor docs pages

2017-01-12 Thread Anne Gentle
Public bug reported:

Go to
http://docs.openstack.org/developer/neutron/devref/template_model_sync_test.html
and there's a broken link to oslo.db docs for test_migrations,
http://docs.openstack.org/developer/oslo.db/api/sqlalchemy/test_migrations.html.
Other broken links and referring pages include:


{"url": "http://docs.openstack.org/newton/networking-guide/scenario_legacy_ovs.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/layer3.html"},
{"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario4b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
{"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario3b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
{"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario1b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
{"url": "http://docs.openstack.org/newton/networking-guide/scenario_legacy_lb.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},

While we're working hard to get redirects in place, better to get the
"real" link in there when you can.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc

** Summary changed:

- 404 error on contributor docs page
+ 404 error on contributor docs pages

** Description changed:

  Go to
  http://docs.openstack.org/developer/neutron/devref/template_model_sync_test.html
  and there's a broken link to oslo.db docs for test_migrations,
- http://docs.openstack.org/developer/oslo.db/api/sqlalchemy/test_migrations.html
+ http://docs.openstack.org/developer/oslo.db/api/sqlalchemy/test_migrations.html.
+ Other broken links and referring pages include:
+ 
+ {"url": "http://docs.openstack.org/newton/networking-guide/scenario_legacy_ovs.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/layer3.html"},
+ {"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario4b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
+ {"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario3b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
+ {"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario1b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
+ {"url": "http://docs.openstack.org/newton/networking-guide/scenario_legacy_lb.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
+ 
+ While we're working hard to get redirects in place, better to get the
+ "real" link in there when you can.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656127

Title:
  404 error on contributor docs pages

Status in neutron:
  New

Bug description:
  Go to
  
http://docs.openstack.org/developer/neutron/devref/template_model_sync_test.html
  and there's a broken link to oslo.db docs for test_migrations,
  
http://docs.openstack.org/developer/oslo.db/api/sqlalchemy/test_migrations.html.
  Other broken links and referring pages include:

  
  {"url": "http://docs.openstack.org/newton/networking-guide/scenario_legacy_ovs.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/layer3.html"},
  {"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario4b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
  {"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario3b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
  {"url": "http://docs.openstack.org/newton/networking-guide/deploy_scenario1b.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},
  {"url": "http://docs.openstack.org/newton/networking-guide/scenario_legacy_lb.html", "status": 404, "referer": "http://docs.openstack.org/developer/neutron/devref/linuxbridge_agent.html"},

  While we're working hard to get redirects in place, better to get the
  "real" link in there when you can.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656127/+subscriptions



[Yahoo-eng-team] [Bug 1656076] Re: The keystone server auth plugin methods could mismatch user_id in auth_context

2017-01-12 Thread Morgan Fainberg
** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

** Also affects: keystone/newton
   Importance: Undecided
   Status: New

** Also affects: keystone/mitaka
   Importance: Undecided
   Status: New

** Also affects: keystone/ocata
   Importance: Medium
 Assignee: Morgan Fainberg (mdrnstm)
   Status: Triaged

** Changed in: keystone/newton
   Status: New => Triaged

** Changed in: keystone/newton
   Importance: Undecided => Medium

** Changed in: keystone/mitaka
   Status: New => Triaged

** Changed in: keystone/mitaka
   Importance: Undecided => Medium

** Changed in: keystone/mitaka
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

** Changed in: keystone/newton
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656076

Title:
  The keystone server auth plugin methods could mismatch user_id in
  auth_context

Status in OpenStack Identity (keystone):
  Triaged
Status in OpenStack Identity (keystone) mitaka series:
  Triaged
Status in OpenStack Identity (keystone) newton series:
  Triaged
Status in OpenStack Identity (keystone) ocata series:
  Triaged

Bug description:
  The keystone server blindly overwrites the auth_context.user_id in
  each auth method that is run. This means that the last auth_method
  that is run for a given authentication request dictates the user_id.

  While this is not exploitable externally without misconfiguration of
  the external plugin methods and supporting services, this is a bad
  state that could relatively easily result in someone ending up
  authenticated with the wrong user_id.

  The simplest fix will be to have the for loop in the authentication
  controller (which iterates over the methods) verify that the user_id
  does not change between the auth methods executed.

  
https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/auth/controllers.py#L550-L557

  This has been marked as public security for hardening purposes, likely
  a "Class D":
  https://security.openstack.org/vmt-process.html#incident-report-taxonomy

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656076/+subscriptions



[Yahoo-eng-team] [Bug 1656124] [NEW] Broken link to nova/devref in http://docs.openstack.org/developer/nova/how_to_get_involved.html#how-to-do-great-nova-spec-reviews

2017-01-12 Thread Anne Gentle
Public bug reported:

Looks like one of the links in
http://docs.openstack.org/developer/nova/how_to_get_involved.html#how-to-do-great-nova-spec-reviews
needs to be updated.

That, or some redirect is happening so that you can't ever get to a
/devref/ listing. The link is
http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html#when-is-a-blueprint-needed.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc

** Tags added: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656124

Title:
  Broken link to nova/devref in
  http://docs.openstack.org/developer/nova/how_to_get_involved.html#how-to-do-great-nova-spec-reviews

Status in OpenStack Compute (nova):
  New

Bug description:
  Looks like one of the links in
  http://docs.openstack.org/developer/nova/how_to_get_involved.html#how-to-do-great-nova-spec-reviews
  needs to be updated.

  That, or some redirect is happening so that you can't ever get to a
  /devref/ listing. The link is
  http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html#when-is-a-blueprint-needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655182] Re: keystone-manage mapping_engine tester problems

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/418165
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=f2d0f8c9ab38172a6e37b02339eac59da911435c
Submitter: Jenkins
Branch:master

commit f2d0f8c9ab38172a6e37b02339eac59da911435c
Author: John Dennis 
Date:   Tue Nov 29 11:36:32 2016 -0500

Fix keystone-manage mapping_engine tester

There were several problems with keystone-manage mapping_engine

* It aborts with a backtrace because of the wrong number of arguments
  passed to the RuleProcessor, it was missing the mapping_id
  parameter.

* Error messages related to input data were cryptic and imprecise.

* The --engine-debug option did not work.

A fake mapping_id is now generated and passed to the RuleProcessor.

If invalid data was passed, it was nearly impossible to determine what
was causing the error. The command takes two input files, but which
file contained the error? At what line? Why? For example, I was
consistently getting this error:

Error while parsing line: '{': need more than 1 value to unpack

and had no idea what was wrong; the JSON looked valid to me. It turns
out the assertion file is not formatted as JSON (yes, this is
documented in the help message, but given that the rules are JSON
formatted and the RuleProcessor expects a dict for the assertion_data,
it is reasonable to assume the data in the assertion file is also
formatted as a JSON object).

A note was added to the documentation in mapping_combinations.rst, in
the section suggesting use of the keystone-manage mapping_engine
tester, alerting the reader to the expected file formats.

The MappingEngineTester class was refactored slightly to allow each
method to know which file it is operating on and emit error messages
that identify the file. In addition to the pathname, the error message
now includes the offending line number. As a bonus, it no longer fails
on a blank line. The error message now looks like this:

assertion file input.txt at line 4 expected 'key: value' but found 'foo' 
see help for file format

The mapping_engine.LOG.logger level is now explicitly set to DEBUG
when --engine-debug is passed (instead of mistakenly assuming it
defaulted to DEBUG); otherwise it is set to WARN.

Closes-Bug: 1655182
Signed-off-by: John Dennis 
Change-Id: I2dea0f38b127ec185b79bfe06dd6a212da75cbca
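The improved error reporting described in the commit can be sketched as follows. This is an illustrative reimplementation with invented names, not the actual keystone-manage code:

```python
def parse_assertion_file(pathname, text):
    """Parse 'key: value' assertion lines, naming the file and line on error."""
    data = {}
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.strip()
        if not line:
            # Tolerate blank lines instead of failing on them.
            continue
        key, sep, value = line.partition(':')
        if not sep or not key.strip():
            # Identify the offending file and line number, as the fix does.
            raise ValueError(
                "assertion file %s at line %d expected 'key: value' "
                "but found %r, see help for file format"
                % (pathname, lineno, line))
        data[key.strip()] = value.strip()
    return data
```

With this shape, a malformed line such as 'foo' produces an error naming the input file and line number instead of the cryptic "need more than 1 value to unpack".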


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1655182

Title:
  keystone-manage mapping_engine tester problems

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  There are several problems with keystone-manage mapping_engine
  
  * It aborts with a backtrace because of the wrong number of arguments
passed to the RuleProcessor
  
  * The --engine-debug option does not work.
  
  * Error messages related to input data are cryptic and imprecise.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1655182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656101] [NEW] delete swift container does not work

2017-01-12 Thread Eric Peterson
Public bug reported:

When trying to delete a container within swift, we get a pop-up to
confirm we want it deleted.

If you act very quickly, you can confirm "yes" and delete the container.
If you are slow, you get redirected back to the overview / home page.

** Affects: horizon
 Importance: High
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1656101

Title:
  delete swift container does not work

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  When trying to delete a container within swift, we get a pop-up to
  confirm we want it deleted.

  If you act very quickly, you can confirm "yes" and delete the
  container. If you are slow, you get redirected back to the overview /
  home page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1656101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614822] Re: api-ref: security-group-rules api missing request parameters table.

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/357620
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=79edae50298724881cb60f6008e6f812d96eedad
Submitter: Jenkins
Branch:master

commit 79edae50298724881cb60f6008e6f812d96eedad
Author: Nguyen Phuong An 
Date:   Fri Aug 19 12:54:14 2016 +0700

api-ref: Adding request parameter for sec-grp-rule

This patch adds request parameters table for security-group-rule
api.

Change-Id: Ie5f66855567d0e459b958288d2782e4d3b63b1a8
Partially-Implements: blueprint neutron-in-tree-api-ref
Co-Authored-By: Anindita Das 
Closes-Bug: #1614822


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614822

Title:
  api-ref: security-group-rules api missing request parameters table.

Status in neutron:
  Fix Released

Bug description:
  security-group-rule api missing request parameter table in
  http://developer.openstack.org/api-ref/networking/v2/index.html
  #security-group-rules-security-group-rules

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656075] [NEW] DiscoveryFailure when trying to get resource providers from the scheduler report client

2017-01-12 Thread Matt Riedemann
Public bug reported:

I noticed this in a TripleO job:

http://logs.openstack.org/04/419604/1/check/gate-puppet-openstack-integration-4-scenario004-tempest-centos-7/5d95a8c/logs/nova/nova-compute.txt.gz#_2017-01-12_18_53_43_459

2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
[req-59098025-7c99-41b2-aaa9-0e5714770b3a - - - - -] Error updating resources 
for node centos-7-osic-cloud1-s3500-6631948.
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager Traceback (most recent 
call last):
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6537, in 
update_available_resource_for_node
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
rt.update_available_resource(context, nodename)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 540, 
in update_available_resource
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager return f(*args, 
**kwargs)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 564, 
in _update_available_resource
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
self._init_compute_node(context, resources)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 451, 
in _init_compute_node
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
self.scheduler_client.update_resource_stats(self.compute_node)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 60, 
in update_resource_stats
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
self.reportclient.update_resource_stats(compute_node)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, 
in __run_method
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager return 
getattr(self.instance, __name)(*args, **kwargs)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 476, 
in update_resource_stats
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
compute_node.hypervisor_hostname)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 296, 
in _ensure_resource_provider
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager rp = 
self._get_resource_provider(uuid)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 53, in 
wrapper
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager return f(self, *a, 
**k)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 209, 
in _get_resource_provider
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager resp = 
self.get("/resource_providers/%s" % uuid)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 174, 
in get
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager 
endpoint_filter=self.ks_filter, raise_exc=False)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 710, in get
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager return 
self.request(url, 'GET', **kwargs)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager return 
wrapped(*args, **kwargs)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 467, in 
request
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager auth_headers = 
self.get_auth_headers(auth)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 770, in 
get_auth_headers
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager return 
auth.get_headers(self, **kwargs)
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/plugin.py", line 90, in 
get_headers
2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager token = 

[Yahoo-eng-team] [Bug 1656076] [NEW] The keystone server auth plugin methods could mismatch user_id in auth_context

2017-01-12 Thread Morgan Fainberg
*** This bug is a security vulnerability ***

Public security bug reported:

The keystone server blindly overwrites the auth_context.user_id in each
auth method that is run. This means that the last auth_method that is
run for a given authentication request dictates the user_id.

While this is not exploitable externally without misconfiguration of the
external plugin methods and supporting services, this is a bad state
that could relatively easily result in someone ending up authenticated
with the wrong user_id.

The simplest fix will be to have the for loop in the authentication
controller (that iterates over the methods) verify that the user_id
does not change between the auth_methods executed.

https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/auth/controllers.py#L550-L557

This has been marked as public security for hardening purposes, likely a
"Class D"
https://security.openstack.org/vmt-process.html#incident-report-taxonomy
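A minimal sketch of the suggested guard, using hypothetical names (the real controller loop at the link above differs):

```python
class UserIdMismatch(Exception):
    """Raised when two auth methods disagree on the authenticated user."""

def authenticate(auth_context, methods):
    """Run each auth method, verifying that no method silently replaces
    the user_id established by an earlier method."""
    seen_user_id = None
    for name, method in methods.items():
        method(auth_context)  # each method populates auth_context
        user_id = auth_context.get('user_id')
        if seen_user_id is not None and user_id != seen_user_id:
            raise UserIdMismatch(
                "auth method %r set user_id %r, but an earlier method "
                "set %r" % (name, user_id, seen_user_id))
        seen_user_id = user_id
    return auth_context
```

Instead of letting the last method win, a disagreement between methods now fails the whole authentication request.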

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: authentication security

** Tags added: authentication security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656076

Title:
  The keystone server auth plugin methods could mismatch user_id in
  auth_context

Status in OpenStack Identity (keystone):
  New

Bug description:
  The keystone server blindly overwrites the auth_context.user_id in
  each auth method that is run. This means that the last auth_method
  that is run for a given authentication request dictates the user_id.

  While this is not exploitable externally without misconfiguration of
  the external plugin methods and supporting services, this is a bad
  state that could relatively easily result in someone ending up
  authenticated with the wrong user_id.

  The simplest fix will be to have the for loop in the authentication
  controller (that iterates over the methods) verify that the user_id
  does not change between the auth_methods executed.

  
https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/auth/controllers.py#L550-L557

  This has been marked as public security for hardening purposes, likely
  a "Class D"
  https://security.openstack.org/vmt-process.html#incident-report-taxonomy

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645644] Re: ntp not using expected servers

2017-01-12 Thread Scott Moser
** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** No longer affects: maas (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1645644

Title:
  ntp not using expected servers

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  In Progress

Bug description:
  cloud-init: 0.7.8-49-g9e904bb-0ubuntu1~16.04.1

  The expected NTP server address is written to /etc/ntp.conf by cloud-init
  through vendor-data. However, `ntpq -p` shows the default ntp pools, not my
  local NTP server written in /etc/ntp.conf.
  It looks like cloud-init needs to write /etc/ntp.conf before installing the
  ntp package, or restart ntp after writing /etc/ntp.conf.
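The reordering the reporter suggests can be sketched abstractly. The helpers below are hypothetical stand-ins for cloud-init internals, shown only to make the required sequence explicit:

```python
def apply_ntp_config(render_conf, write_file, install_packages, restart_service):
    """Write the rendered /etc/ntp.conf before the ntp package is
    installed, then restart the daemon so it re-reads the config
    instead of keeping the packaged default pools."""
    conf_text = render_conf()
    write_file('/etc/ntp.conf', conf_text)  # 1. config first
    install_packages(['ntp'])               # 2. then the package
    restart_service('ntp')                  # 3. restart to pick up the config
```

Either ordering (config before install, or restart after write) ensures the daemon ends up reading the vendor-data-supplied servers.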

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1645644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655785] Re: Use the session loader in keystoneauth1 for designate

2017-01-12 Thread Boden R
This doc impact bug affects the designate DNS driver configuration
reference.

** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655785

Title:
  Use the session loader in keystoneauth1 for designate

Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/416048
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit b38f1cb1f737dede90f3df74f3515d63c357fa30
  Author: Gyorgy Szombathelyi 
  Date:   Mon Dec 19 13:58:10 2016 +0100

  Use the session loader in keystoneauth1 for designate
  
  Using the session loader has the benefit of compatibility with
  settings in other sections (like keystone_authtoken), and the
  ability to use client certs and setting the timeout. This changes
  the designate.ca_cert setting to designate.cafile, but the former
  is added as a deprecated option, so existing config files will work.
  
  DocImpact
  ca_cert in [designate] is deprecated, use cafile instead.
  
  Change-Id: I9f2173b02af5c3929a96ef8c773d587e9b673d62

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1655785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656056] Re: NeutronAdminCredentialConfigurationInvalid all over n-cpu logs in successful gate runs during network teardown

2017-01-12 Thread Matt Riedemann
I think this is the token request in the keystone logs:

req-6a68c08b-0993-4e32-a9f6-e47fa90c5ffd

And looking at:

http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/apache/keystone.txt.gz#_2017-01-12_17_05_25_050

This is the auth context:

{
   'is_delegated_auth':False,
   'access_token_id':None,
   'user_id':u'808414769ae84ea6907e692a87380188',
   'roles':[
  u'service'
   ],
   'user_domain_id':u'default',
   'consumer_id':None,
   'trustee_id':None,
   'is_domain':False,
   'is_admin_project':True,
   'trustor_id':None,
   'token':,
   'project_id':u'078441a1151d46a2810e59c2d6c417a5',
   'trust_id':None,
   'project_domain_id':u'default'
}

And then there is this:

http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/apache/keystone.txt.gz#_2017-01-12_17_05_25_056

2017-01-12 17:05:25.056 2916 WARNING keystone.common.wsgi [req-
6a68c08b-0993-4e32-a9f6-e47fa90c5ffd 808414769ae84ea6907e692a87380188
078441a1151d46a2810e59c2d6c417a5 - default default] Could not find
project: 2140f03492484f8580c440e5a999ac6e

But that's not the same project id above in the auth context
(078441a1151d46a2810e59c2d6c417a5).

^ could all be a red herring too.

** No longer affects: python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656056

Title:
  NeutronAdminCredentialConfigurationInvalid all over n-cpu logs in
  successful gate runs during network teardown

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  I've been seeing this all over the n-cpu logs in gate runs on master
  even in successful runs:

  http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-01-12_17_05_25_061

  2017-01-12 17:05:25.061 1236 ERROR nova.network.neutronv2.api [req-
  dfe387da-8f68-40f7-a781-9fec1f7408b1 tempest-ImagesTestJSON-2082975254
  tempest-ImagesTestJSON-2082975254] Neutron client was not able to
  generate a valid admin token, please verify Neutron admin credential
  located in nova.conf

  -

  http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-01-12_17_05_25_623

  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager 
[req-dfe387da-8f68-40f7-a781-9fec1f7408b1 tempest-ImagesTestJSON-2082975254 
tempest-ImagesTestJSON-2082975254] [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] Setting instance vm_state to ERROR
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] Traceback (most recent call last):
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2425, in 
do_terminate_instance
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._delete_instance(context, 
instance, bdms, quotas)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/hooks.py", line 154, in inner
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] rv = f(*args, **kwargs)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2388, in _delete_instance
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] quotas.rollback()
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self.force_reraise()
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] six.reraise(self.type_, self.value, 
self.tb)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2352, in _delete_instance
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._shutdown_instance(context, 
instance, bdms)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager 

[Yahoo-eng-team] [Bug 1656056] Re: NeutronAdminCredentialConfigurationInvalid all over n-cpu logs in successful gate runs during network teardown

2017-01-12 Thread Matt Riedemann
It could somehow be related to this change in neutronclient:

https://review.openstack.org/#/q/b8a05333ddc4e248e18080750e3cfa9cbedbca53

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656056

Title:
  NeutronAdminCredentialConfigurationInvalid all over n-cpu logs in
  successful gate runs during network teardown

Status in OpenStack Compute (nova):
  Confirmed
Status in python-neutronclient:
  New

Bug description:
  I've been seeing this all over the n-cpu logs in gate runs on master
  even in successful runs:

  http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-01-12_17_05_25_061

  2017-01-12 17:05:25.061 1236 ERROR nova.network.neutronv2.api [req-
  dfe387da-8f68-40f7-a781-9fec1f7408b1 tempest-ImagesTestJSON-2082975254
  tempest-ImagesTestJSON-2082975254] Neutron client was not able to
  generate a valid admin token, please verify Neutron admin credential
  located in nova.conf

  -

  http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-01-12_17_05_25_623

  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager 
[req-dfe387da-8f68-40f7-a781-9fec1f7408b1 tempest-ImagesTestJSON-2082975254 
tempest-ImagesTestJSON-2082975254] [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] Setting instance vm_state to ERROR
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] Traceback (most recent call last):
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2425, in 
do_terminate_instance
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._delete_instance(context, 
instance, bdms, quotas)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/hooks.py", line 154, in inner
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] rv = f(*args, **kwargs)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2388, in _delete_instance
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] quotas.rollback()
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self.force_reraise()
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] six.reraise(self.type_, self.value, 
self.tb)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2352, in _delete_instance
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._shutdown_instance(context, 
instance, bdms)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2257, in _shutdown_instance
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._try_deallocate_network(context, 
instance, requested_networks)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2179, in 
_try_deallocate_network
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] 
self._set_instance_obj_error_state(context, instance)
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self.force_reraise()
  2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager 

[Yahoo-eng-team] [Bug 1656056] [NEW] NeutronAdminCredentialConfigurationInvalid all over n-cpu logs in successful gate runs during network teardown

2017-01-12 Thread Matt Riedemann
Public bug reported:

I've been seeing this all over the n-cpu logs in gate runs on master
even in successful runs:

http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-01-12_17_05_25_061

2017-01-12 17:05:25.061 1236 ERROR nova.network.neutronv2.api [req-
dfe387da-8f68-40f7-a781-9fec1f7408b1 tempest-ImagesTestJSON-2082975254
tempest-ImagesTestJSON-2082975254] Neutron client was not able to
generate a valid admin token, please verify Neutron admin credential
located in nova.conf

-

http://logs.openstack.org/08/418108/4/gate/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/dafcfca/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-01-12_17_05_25_623

2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager 
[req-dfe387da-8f68-40f7-a781-9fec1f7408b1 tempest-ImagesTestJSON-2082975254 
tempest-ImagesTestJSON-2082975254] [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] Setting instance vm_state to ERROR
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] Traceback (most recent call last):
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2425, in 
do_terminate_instance
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._delete_instance(context, 
instance, bdms, quotas)
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/hooks.py", line 154, in inner
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] rv = f(*args, **kwargs)
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2388, in _delete_instance
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] quotas.rollback()
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self.force_reraise()
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] six.reraise(self.type_, self.value, 
self.tb)
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2352, in _delete_instance
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._shutdown_instance(context, 
instance, bdms)
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2257, in _shutdown_instance
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._try_deallocate_network(context, 
instance, requested_networks)
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2179, in 
_try_deallocate_network
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] 
self._set_instance_obj_error_state(context, instance)
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self.force_reraise()
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] six.reraise(self.type_, self.value, 
self.tb)
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2173, in 
_try_deallocate_network
2017-01-12 17:05:25.157 1236 ERROR nova.compute.manager [instance: 
4c7f8d12-2142-457c-b725-c95cf931cfa7] self._deallocate_network(context, 
instance, 

[Yahoo-eng-team] [Bug 1656051] [NEW] Angular registry detailViews no longer display as tabs

2017-01-12 Thread Travis Tripp
Public bug reported:

The angular registry allows you to create tabs dynamically by
registering them [0][1].  This technique is used by the instances panel
in the searchlight UI [2].  Sometime in Ocata it got broken and now tabs
no longer show up in the instances panel in either default or material
theme.

https://imgur.com/a/JIO3Z

[0]
https://github.com/openstack/horizon/blob/2394397bfd258fe584cc565a924e4dd216f6b224/horizon/static/framework/conf
/resource-type-registry.service.js#L84

[1]
https://github.com/openstack/horizon/blob/2925562c1a3f0a9b3e2d55833691a7b0ad10eb2a/horizon/static/framework/widgets/details/details.directive.js#L28-L61

[2] https://github.com/openstack/searchlight-
ui/blob/master/searchlight_ui/static/resources/os-nova-
servers/details/details.module.js#L52-L72

** Affects: horizon
 Importance: High
 Status: New

** Affects: horizon/ocata
 Importance: High
 Status: New

** Affects: searchlight
 Importance: Undecided
 Status: New

** Also affects: searchlight
   Importance: Undecided
   Status: New

** Also affects: horizon/ocata
   Importance: High
   Status: New

** Description changed:

  The angular registry allows you to create tabs dynamically by
- registering them [1].  This technique is used by the instances panel in
- the searchlight UI [2].  Sometime in Ocata it got broken and now tabs no
- longer show up in the instances panel in either default or material
+ registering them [0][1].  This technique is used by the instances panel
+ in the searchlight UI [2].  Sometime in Ocata it got broken and now tabs
+ no longer show up in the instances panel in either default or material
  theme.
  
  https://imgur.com/a/JIO3Z
  
  [0]
  
https://github.com/openstack/horizon/blob/2394397bfd258fe584cc565a924e4dd216f6b224/horizon/static/framework/conf
  /resource-type-registry.service.js#L84
  
  [1]
  
https://github.com/openstack/horizon/blob/2925562c1a3f0a9b3e2d55833691a7b0ad10eb2a/horizon/static/framework/widgets/details/details.directive.js#L28-L61
  
  [2] https://github.com/openstack/searchlight-
  ui/blob/master/searchlight_ui/static/resources/os-nova-
  servers/details/details.module.js#L52-L72

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1656051

Title:
  Angular registry detailViews no longer display as tabs

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Dashboard (Horizon) ocata series:
  New
Status in OpenStack Search (Searchlight):
  New

Bug description:
  The angular registry allows you to create tabs dynamically by
  registering them [0][1].  This technique is used by the instances
  panel in the searchlight UI [2].  Sometime in Ocata it got broken and
  now tabs no longer show up in the instances panel in either default or
  material theme.

  https://imgur.com/a/JIO3Z

  [0]
  
https://github.com/openstack/horizon/blob/2394397bfd258fe584cc565a924e4dd216f6b224/horizon/static/framework/conf
  /resource-type-registry.service.js#L84

  [1]
  
https://github.com/openstack/horizon/blob/2925562c1a3f0a9b3e2d55833691a7b0ad10eb2a/horizon/static/framework/widgets/details/details.directive.js#L28-L61

  [2] https://github.com/openstack/searchlight-
  ui/blob/master/searchlight_ui/static/resources/os-nova-
  servers/details/details.module.js#L52-L72

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1656051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656045] [NEW] Dashboard panels intermittently disappear when they are in the 'default' panel group.

2017-01-12 Thread Travis Tripp
Public bug reported:

In a panel enabled.py file you can make a panel appear directly in the
dashboard nav menu by putting it in the default panel group:

# The slug of the panel group the PANEL is associated with.
# If you want the panel to show up without a panel group,
# use the panel group "default".

https://github.com/openstack/horizon/blob/master/horizon/base.py#L51-L52

At some point in Ocata, something changed that made panels
intermittently disappear from the nav menu if they are not in a panel
group and you are using the 'default' theme.  If you switch to the
material theme, this does not occur.

For example, with searchlight ui enabled [0]

The search panel will appear in the nav menu when you first open the
project dashboard. However, if you then expand the compute panel group
and click into Instances, Volumes, etc., when the page reloads and
displays one of those panels, the search panel (which is in the default
panel group [1]) will no longer appear in the left-hand nav menu.

http://imgur.com/a/WDd6y

[0] https://github.com/openstack/searchlight-ui
[1] 
https://github.com/openstack/searchlight-ui/blob/master/searchlight_ui/enabled/_1001_project_search_panel.py#L21
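For reference, an enabled file that places a panel in the default group looks
roughly like this (a hypothetical sketch following the pattern the bug
describes; the file and class names are illustrative, not searchlight-ui's
actual enabled file):

```python
# Hypothetical _1234_project_example_panel.py enabled file.
# Only PANEL_GROUP matters for this bug: 'default' makes the panel
# appear directly in the dashboard nav menu, outside any panel group.
PANEL = 'example'
PANEL_DASHBOARD = 'project'
# The slug of the panel group the PANEL is associated with.
# If you want the panel to show up without a panel group,
# use the panel group "default".
PANEL_GROUP = 'default'
ADD_PANEL = 'example_ui.content.example.panel.ExamplePanel'
```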

** Affects: horizon
 Importance: High
 Status: New

** Affects: horizon/ocata
 Importance: High
 Status: New

** Summary changed:

- Dashboard panels intermittently disappear when they aren't nested in a panel 
group
+ Dashboard panels intermittently disappear when they are in the 'default' 
panel group.

** Also affects: horizon/ocata
   Importance: High
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1656045

Title:
  Dashboard panels intermittently disappear when they are in the
  'default' panel group.

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Dashboard (Horizon) ocata series:
  New

Bug description:
  In a panel enabled.py file you can make a panel appear directly in the
  dashboard nav menu by putting it in the default panel group:

  # The slug of the panel group the PANEL is associated with.
  # If you want the panel to show up without a panel group,
  # use the panel group "default".

  https://github.com/openstack/horizon/blob/master/horizon/base.py#L51-L52

  At some point in Ocata, something changed that made panels
  intermittently disappear from the nav menu if they are not in a panel
  group and you are using the 'default' theme.  If you switch to the
  material theme, this does not occur.

  For example, with searchlight ui enabled [0]

  The search panel will appear in the nav menu when you first open the
  project dashboard. However, if you then expand the compute panel
  group and click into Instances, Volumes, etc., when the page reloads
  and displays one of those panels, the search panel (which is in the
  default panel group [1]) will no longer appear in the left-hand nav
  menu.

  http://imgur.com/a/WDd6y

  [0] https://github.com/openstack/searchlight-ui
  [1] 
https://github.com/openstack/searchlight-ui/blob/master/searchlight_ui/enabled/_1001_project_search_panel.py#L21

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1656045/+subscriptions



[Yahoo-eng-team] [Bug 1655914] Re: vpnaas test failure for router info

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/419396
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=f8c826263d2c5108dd6845ef99e712157628c1aa
Submitter: Jenkins
Branch:master

commit f8c826263d2c5108dd6845ef99e712157628c1aa
Author: YAMAMOTO Takashi 
Date:   Thu Jan 12 19:18:28 2017 +0900

tests: Add 'agent' argument for LegacyRouter

Following the recent Neutron change. [1]

[1] I61c6128ed1973deb8440c54234e77a66987d7e28

Closes-Bug: #1655914
Change-Id: I620df9c533b4d1543a16132e35e6e7dc901efdfe


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655914

Title:
  vpnaas test failure for router info

Status in neutron:
  Fix Released

Bug description:
  recent router info change broke vpnaas tests.

  eg. http://logs.openstack.org/76/415976/3/check/gate-neutron-vpnaas-
  dsvm-functional-sswan-ubuntu-xenial/a6a6b34/testr_results.html.gz

  ft1.1: 
neutron_vpnaas.tests.functional.strongswan.test_strongswan_driver.TestStrongSwanDeviceDriver.test_process_lifecycle_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"neutron_vpnaas/tests/functional/strongswan/test_strongswan_driver.py", line 
121, in setUp
  self.router = legacy_router.LegacyRouter(FAKE_ROUTER_ID, **ri_kwargs)
  TypeError: __init__() takes at least 6 arguments (5 given)
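The commit above fixes this by passing the new 'agent' argument at the test
call site. A minimal sketch of the failure mode and the fix, using a stand-in
class that imitates the changed signature (this is not neutron's actual
LegacyRouter):

```python
# Stand-in imitating the new LegacyRouter signature, which grew a
# required 'agent' parameter; the old call site then fails with the
# TypeError quoted in the bug. Not neutron's real code.
class LegacyRouter(object):
    def __init__(self, agent, router_id, router, agent_conf,
                 interface_driver):
        self.agent = agent
        self.router_id = router_id

FAKE_ROUTER_ID = 'fake-router-id'
ri_kwargs = {'router': {}, 'agent_conf': object(),
             'interface_driver': object()}

try:
    # Old call: no 'agent' -> too few arguments -> TypeError.
    LegacyRouter(FAKE_ROUTER_ID, **ri_kwargs)
except TypeError as exc:
    print('old call fails: %s' % exc)

# Fixed call passes the new required argument.
router = LegacyRouter(agent=object(), router_id=FAKE_ROUTER_ID,
                      **ri_kwargs)
```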

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655914/+subscriptions



[Yahoo-eng-team] [Bug 1656025] [NEW] os-vif 1.4.0 breaks nova unit tests

2017-01-12 Thread Matt Riedemann
Public bug reported:

Seen here in the patch that bumps upper-constraints to use os-vif 1.4.0:

http://logs.openstack.org/21/418421/4/check/gate-cross-nova-python27-db-
ubuntu-xenial/376a0f3/console.html#_2017-01-11_10_21_37_885392

'vif_name': u'nicdc065497-3c' is a new field in 1.4.0:

https://review.openstack.org/#/c/390225/

The nova unit tests are using a strict expected representation of the
vif object at version 1.0 so the new field breaks things.
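One way to make such tests resilient to library upgrades is to assert only on
the subset of fields they actually care about, rather than on the full
serialized representation. A minimal sketch (hypothetical helper, not nova's
actual test code):

```python
# Sketch: compare only the expected subset of a serialized VIF dict, so
# a new field added by a library upgrade (like 'vif_name' in os-vif
# 1.4.0) does not break the assertion. Illustrative only.
def assert_subset(expected, actual):
    for key, value in expected.items():
        assert key in actual, "missing field: %s" % key
        assert actual[key] == value, "mismatch on %s" % key

old_expected = {'id': 'dc065497-3c', 'active': False}
new_actual = {'id': 'dc065497-3c', 'active': False,
              'vif_name': 'nicdc065497-3c'}

assert_subset(old_expected, new_actual)  # passes despite the new field
```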

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: os-vif testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656025

Title:
  os-vif 1.4.0 breaks nova unit tests

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Seen here in the patch that bumps upper-constraints to use os-vif
  1.4.0:

  http://logs.openstack.org/21/418421/4/check/gate-cross-nova-python27
  -db-ubuntu-xenial/376a0f3/console.html#_2017-01-11_10_21_37_885392

  'vif_name': u'nicdc065497-3c' is a new field in 1.4.0:

  https://review.openstack.org/#/c/390225/

  The nova unit tests are using a strict expected representation of the
  vif object at version 1.0 so the new field breaks things.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656025/+subscriptions



[Yahoo-eng-team] [Bug 1656026] [NEW] Exception don't follow a punctuation convention

2017-01-12 Thread Lance Bragstad
Public bug reported:

If you happen to take a look through the keystone exception module [0],
you'll notice that some of the exceptions use proper punctuation, while
others do not. David Stanek mentioned this in a review [1], and we
thought it was appropriate to track it as a low-hanging-fruit bug.

We should decide what that convention should be for keystone, then apply
it to all of our exceptions consistently.

[0] 
https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/exception.py#L105-L118
[1] https://review.openstack.org/#/c/415895/8/keystone/exception.py
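Once a convention is chosen, it could be enforced mechanically, e.g. by a test
that walks the exception hierarchy and flags message strings missing terminal
punctuation. A hedged sketch (the sample classes are illustrative, not
keystone's real exceptions):

```python
# Sketch: scan exception message format strings and flag any that do
# not end with a period. Illustrative classes, not keystone's code.
class Error(Exception):
    message_format = "An unexpected error occurred."

class NotFound(Error):
    message_format = "Could not find resource"  # missing period

def find_unpunctuated(base):
    bad = []
    for cls in base.__subclasses__():
        if not cls.message_format.rstrip().endswith('.'):
            bad.append(cls.__name__)
        bad.extend(find_unpunctuated(cls))
    return bad

print(find_unpunctuated(Error))  # ['NotFound']
```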

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Description changed:

  If you happen to take a look through keystone exception module [0].
- You'll notice that some of the exception use proper punctuation, while
+ You'll notice that some of the exceptions use proper punctuation, while
  other do not. David Stanek mentioned this in a review [1], and we
  thought it was appropriate to track it as a low-hanging-fruit bug.
  
  We should decide what that convention should be for keystone, then apply
  it to all of our exceptions consistently.
  
  [0] 
https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/exception.py#L105-L118
  [1] https://review.openstack.org/#/c/415895/8/keystone/exception.py

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656026

Title:
  Exception don't follow a punctuation convention

Status in OpenStack Identity (keystone):
  New

Bug description:
  If you happen to take a look through the keystone exception module
  [0], you'll notice that some of the exceptions use proper punctuation,
  while others do not. David Stanek mentioned this in a review [1], and
  we thought it was appropriate to track it as a low-hanging-fruit bug.

  We should decide what that convention should be for keystone, then
  apply it to all of our exceptions consistently.

  [0] 
https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/exception.py#L105-L118
  [1] https://review.openstack.org/#/c/415895/8/keystone/exception.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656026/+subscriptions



[Yahoo-eng-team] [Bug 1616240] Re: Traceback in vif.py execv() arg 2 must contain only strings

2017-01-12 Thread Ryan Beisner
This bug was fixed in the package python-oslo.privsep - 1.13.0-0ubuntu1.1~cloud0
---

 python-oslo.privsep (1.13.0-0ubuntu1.1~cloud0) xenial-newton; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 python-oslo.privsep (1.13.0-0ubuntu1.1) yakkety; urgency=medium
 .
   * d/p/deal-with-conf-config-dir.patch: Cherry pick patch from upstream
 stable/newton branch to properly handle CONF.config_dir (LP: #1616240).


** Changed in: cloud-archive/newton
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1616240

Title:
  Traceback in vif.py execv() arg 2 must contain only strings

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.privsep:
  Fix Released
Status in python-oslo.privsep package in Ubuntu:
  Fix Released
Status in python-oslo.privsep source package in Yakkety:
  Fix Released
Status in python-oslo.privsep source package in Zesty:
  Fix Released

Bug description:
  While bringing up VM with the latest master (August 23,2016) I see
  this traceback and VM fails to launch.

  Complete log is here: http://paste.openstack.org/show/562688/
  nova.conf used is here: http://paste.openstack.org/show/562757/

  The issue is 100% reproducible in my testbed.

  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager 
[req-81060644-0dd7-453c-a68c-0d9cffe28fe7 3d1cd826f71a49cc81b33e85329f94b3 
f738285a670c4be08d8a5e300aa25504 - - -] [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] Instance failed to spawn
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] Traceback (most recent call last):
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
2075, in _build_resources
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] yield resources
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
1919, in _build_and_run_instance
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] block_device_info=block_device_info)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 2583, in spawn
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] post_xml_callback=gen_confdrive)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 4803, in _create_domain_and_network
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] self.plug_vifs(instance, network_info)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 684, in plug_vifs
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] self.vif_driver.plug(instance, vif)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", 
line 801, in plug
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] self._plug_os_vif(instance, vif_obj)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", 
line 783, in _plug_os_vif
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] raise exception.NovaException(msg)
  2016-08-23 17:17:28.941 8808 ERROR nova.compute.manager [instance: 
95f11702-9e64-445d-a3cd-2cde074a4219] NovaException: Failure running os_vif 
plugin plug method: Failed to plug VIF 
VIFBridge(active=False,address=fa:16:3e:c0:4a:fd,bridge_name='qbrb7b522a4-3f',has_traffic_filtering=True,id=b7b522a4-3faa-42ca-8e0f-d8c241432e1f,network=Network(f32fdde6-bb99-4981-926b-a7df30f0a612),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=True,vif_name='tapb7b522a4-3f').
 Got 

[Yahoo-eng-team] [Bug 1649341] Re: Undercloud upgrade fails with "Cell mappings are not created, but required for Ocata"

2017-01-12 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649341

Title:
  Undercloud upgrade fails with "Cell mappings are not created, but
  required for Ocata"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New
Status in puppet-nova:
  Fix Released
Status in tripleo:
  In Progress

Bug description:
  Trying to upgrade with recent trunk nova and puppet-nova gives this
  error:

  Notice: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: 
error: Cell mappings are not created, but required for Ocata. Please run 
nova-manage db simple_cell_setup before continuing.
  Error: /usr/bin/nova-manage  api_db sync returned 1 instead of one of [0]
  Error: /Stage[main]/Nova::Db::Sync_api/Exec[nova-db-sync-api]/returns: change 
from notrun to 0 failed: /usr/bin/nova-manage  api_db sync returned 1 instead 
of one of [0]

  
  Debugging manually gives:

  $ sudo /usr/bin/nova-manage  api_db sync
  error: Cell mappings are not created, but required for Ocata. Please run 
nova-manage db simple_cell_setup before continuing.

  
  but...

  $ sudo nova-manage db simple_cell_setup
  usage: nova-manage db [-h]


{archive_deleted_rows,null_instance_uuid_scan,online_data_migrations,sync,version}
...
  nova-manage db: error: argument action: invalid choice: 'simple_cell_setup' 
(choose from 'archive_deleted_rows', 'null_instance_uuid_scan', 
'online_data_migrations', 'sync', 'version')

  
  I tried adding openstack-nova* to the delorean-current whitelist, but with 
the latest nova packages there still appears to be this mismatch.

  [stack@instack /]$ rpm -qa | grep nova
  openstack-nova-conductor-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  python-nova-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-scheduler-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  puppet-nova-10.0.0-0.20161211003757.09b9f7b.el7.centos.noarch
  python2-novaclient-6.0.0-0.20161003181629.25117fa.el7.centos.noarch
  openstack-nova-api-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-cert-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-common-15.0.0-0.20161212155146.909410c.el7.centos.noarch
  openstack-nova-compute-15.0.0-0.20161212155146.909410c.el7.centos.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649341/+subscriptions



[Yahoo-eng-team] [Bug 1656017] [NEW] nova-manage cell_v2 map_cell0 always returns a non-0 exit code

2017-01-12 Thread Matt Riedemann
Public bug reported:

See the discussion in this review:

https://review.openstack.org/#/c/409890/1/nova/cmd/manage.py@1289

The map_cell0 CLI is really treated like a function and it's used by the
simple_cell_setup command. If map_cell0 is used as a standalone command
it always returns a non-0 exit code because it's returning a CellMapping
object (or failing with a duplicate entry error if the cell0 mapping
already exists).

We should split the main part of the map_cell0 function out into a
private method and then treat map_cell0 as a normal CLI with integer
exit codes (0 on success, >0 on failure) and print out whatever
information is needed when mapping cell0, like the uuid for example.
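The shape of the suggested refactor might look like this (a sketch: the
`_map_cell0` helper and the CLI wrapper follow the bug discussion, but this is
not nova's actual code):

```python
import sys
import uuid

def _map_cell0(database_connection=None):
    # Private helper: would create and return the cell0 CellMapping.
    # Here it just fabricates a mapping dict for illustration.
    return {'uuid': str(uuid.uuid4()), 'name': 'cell0'}

def map_cell0(database_connection=None):
    """CLI wrapper: return 0 on success, >0 on failure."""
    try:
        mapping = _map_cell0(database_connection)
    except Exception as exc:  # e.g. a duplicate-entry error
        print('map_cell0 failed: %s' % exc, file=sys.stderr)
        return 1
    # Print the useful details instead of returning the object itself.
    print('Cell0 mapping created: %s' % mapping['uuid'])
    return 0

rc = map_cell0()
```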

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: cells nova-manage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656017

Title:
  nova-manage cell_v2 map_cell0 always returns a non-0 exit code

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  See the discussion in this review:

  https://review.openstack.org/#/c/409890/1/nova/cmd/manage.py@1289

  The map_cell0 CLI is really treated like a function and it's used by
  the simple_cell_setup command. If map_cell0 is used as a standalone
  command it always returns a non-0 exit code because it's returning a
  CellMapping object (or failing with a duplicate entry error if the
  cell0 mapping already exists).

  We should split the main part of the map_cell0 function out into a
  private method and then treat map_cell0 as a normal CLI with integer
  exit codes (0 on success, >0 on failure) and print out whatever
  information is needed when mapping cell0, like the uuid for example.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656017/+subscriptions



[Yahoo-eng-team] [Bug 1649845] Re: Interface drivers don't update port MTU if the port already exists

2017-01-12 Thread Stephen Finucane
** Also affects: os-vif
   Importance: Undecided
   Status: New

** Changed in: os-vif
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

** Changed in: os-vif
   Importance: Undecided => Medium

** Changed in: os-vif
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649845

Title:
  Interface drivers don't update port MTU if the port already exists

Status in networking-midonet:
  New
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in os-vif:
  Fix Released

Bug description:
  This is needed because Neutron allows to change MTU values for
  networks (through configuration options modification and neutron-
  server restart). Without that, there is no way to apply new MTU for
  DHCP and router ports without migrating resources to other nodes.

  I suggest we apply the MTU on subsequent plug() attempts, even if the
  port already exists.
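In sketch form, the suggestion is a plug() that is idempotent for device
creation but always reapplies the MTU (stand-in classes for illustration, not
the real neutron or os-vif interface drivers):

```python
# Stand-ins showing a plug() that creates the device only once but
# applies the MTU unconditionally, so a changed network MTU takes
# effect on later plug() calls. Not the real interface driver code.
class FakeDevice(object):
    def __init__(self, name):
        self.name = name
        self.mtu = None

class InterfaceDriver(object):
    def __init__(self):
        self.devices = {}

    def plug(self, name, mtu):
        device = self.devices.get(name)
        if device is None:
            device = self.devices[name] = FakeDevice(name)
        # Apply the MTU even when the device pre-existed.
        device.mtu = mtu
        return device

driver = InterfaceDriver()
driver.plug('tap0', mtu=1500)
dev = driver.plug('tap0', mtu=9000)  # second plug: MTU updated in place
```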

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1649845/+subscriptions



[Yahoo-eng-team] [Bug 1656010] [NEW] Incorrect notification to nova about ironic baremetall port (for nodes in 'cleaning' state)

2017-01-12 Thread George Shuklin
Public bug reported:

version: newton (2:9.0.0-0ubuntu1~cloud0)

When neutron tries to bind a port for an Ironic baremetal node, it sends
the wrong notification to nova that the port is ready. Neutron sends it
with 'device_id' == ironic-node-id, and nova rejects it as 'not found'
(there is no nova instance with such an id).

Log:
neutron.db.provisioning_blocks[22265]: DEBUG Provisioning for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 completed by entity DHCP. 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:147
neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153
neutron.callbacks.manager[22265]: DEBUG Notify callbacks 
[('neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned--9223372036854150578',
 >)] for port, 
provisioning_complete [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] 
_notify_loop /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:142
neutron.plugins.ml2.plugin[22265]: DEBUG Port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 cannot update to ACTIVE because it is not 
bound. [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _port_provisioned 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py:224
oslo_messaging._drivers.amqpdriver[22265]: DEBUG sending reply msg_id: 
254703530cd3440584c980d72ed93011 reply queue: 
reply_8b6e70ad5191401a9512147c4e94ca71 time elapsed: 0.0452275519492s 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _send_reply 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:73
neutron.notifiers.nova[22263]: DEBUG Sending events: [{'name': 
'network-changed', 'server_uuid': u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] 
send_events /usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:257
novaclient.v2.client[22263]: DEBUG REQ: curl -g -i --insecure -X POST 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}592539c9fcd820d7e369ea58454ee17fe7084d5e" -d '{"events": [{"name": 
"network-changed", "server_uuid": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc"}]}' 
_http_log_request /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:337
novaclient.v2.client[22263]: DEBUG RESP: [404] Content-Type: application/json; 
charset=UTF-8 Content-Length: 78 X-Compute-Request-Id: 
req-a029af9e-e460-476f-9993-4551f3b210d6 Date: Thu, 12 Jan 2017 15:43:37 GMT 
Connection: keep-alive 
RESP BODY: {"itemNotFound": {"message": "No instances found for any event", 
"code": 404}}
 _http_log_response 
/usr/lib/python2.7/dist-packages/keystoneauth1/session.py:366
novaclient.v2.client[22263]: DEBUG POST call to compute for 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 used request id req-a029af9e-e460-476f-9993-4551f3b210d6 _log_request_id 
/usr/lib/python2.7/dist-packages/novaclient/client.py:85
neutron.notifiers.nova[22263]: DEBUG Nova returned NotFound for event: 
[{'name': 'network-changed', 'server_uuid': 
u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] send_events 
/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:263
oslo_messaging._drivers.amqpdriver[22265]: DEBUG received message msg_id: 
0bf04ac8fedd4234bd6cd6c04547beca reply to 
reply_8b6e70ad5191401a9512147c4e94ca71 __call__ 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:194
neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-47c505d7-4eb5-4c71-9656-9e0927408822 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153


Port info:
+-+---+
| Field   | Value   
  |
+-+---+
| admin_state_up  | True
  |
| binding:host_id | d02c7361-5e3a-4fdf-89b5-f29b3901f0fc
  |
| binding:profile | {"local_link_information": [{"switch_info": "c426s1", 
"port_id": "1/1/21",|
| | "switch_id": "60:96:9f:69:b4:b4"}]} 
  |
| binding:vif_details | {}  
  |
| binding:vif_type| binding_failed  

[Yahoo-eng-team] [Bug 1646526] Re: bgpvpn functionality lost on openvswitch restart

2017-01-12 Thread Ihar Hrachyshka
Indeed we don't call ext_manager.initialize() on OVS restart detection,
hence flows are not set. I believe this is to be fixed on the neutron
side, so I am changing the component accordingly.

A logical fix could be moving .initialize() call under rpc_loop, but
since .initialize() usually registers consumers, and this should happen
before we call to consume_in_threads() [and this happens on agent
__init__], it probably won't work.

An alternative to that could be making note of all flow operations made
by each extension on OVSCookieBridge and replaying those on OVS restart.
There is a problem with it though: it may be very wasteful (most entries
in the journal could be irrelevant), and even unsafe (we may expose the
node to a stale state from the journal's past).

To avoid that problem, we could try to keep the latest flow state in-
memory and dump just that. I am not sure if mere interception of
[add|mod|delete]_flow gives enough information to maintain that state
though. (That's another case where we suffer from the lack of a proper
flow manager in neutron.)

If nothing works, we can add an 'OVS restarted' event that extensions
could intercept (or an entry point to be called by the agent in such a
scenario). Of course, the extension would need to implement it, which
would require it to maintain an in-memory map of its flows.
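A minimal sketch of that last option (all class and method names here are hypothetical stand-ins, not the actual neutron agent extension API): the extension records its current desired flow state in memory and replays only that state when the agent signals an OVS restart, so stale journal entries can never be re-applied.

```python
class ReplayingExtension:
    """Hypothetical extension keeping an in-memory map of desired flows."""

    def __init__(self, bridge):
        self._bridge = bridge
        self._desired_flows = []  # latest desired state, not a journal

    def install_flow(self, **flow):
        # Record the flow before pushing it to the bridge.
        self._desired_flows.append(flow)
        self._bridge.add_flow(**flow)

    def on_ovs_restarted(self):
        # Replay only the current desired state after an OVS restart.
        for flow in self._desired_flows:
            self._bridge.add_flow(**flow)


class FakeBridge:
    """Stand-in for OVSCookieBridge, recording add_flow calls."""

    def __init__(self):
        self.flows = []

    def add_flow(self, **kwargs):
        self.flows.append(kwargs)
```

After a simulated restart (the fake bridge's flow table wiped), calling on_ovs_restarted() re-installs exactly the flows the extension considers current.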

** Project changed: bgpvpn => neutron

** Changed in: neutron
Milestone: 6.0.0 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1646526

Title:
  bgpvpn functionality lost on openvswitch restart

Status in neutron:
  Confirmed

Bug description:
  On an openvswitch restart (desired restart or after a crash), neutron
  openvswitch agent detects the restart, and sets up br-int and br-tun
  again.  However, the base setup for br-tun/br-mpls/br-int integration
  is done only at startup, via the initialize callback of the agent
  extension API.  As a result on openvswitch restart the br-tun flows to
  forward traffic to and from br-mpls are lost, and the BGPVPN
  connectivity is interrupted on the compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1646526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646526] [NEW] bgpvpn functionality lost on openvswitch restart

2017-01-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

On an openvswitch restart (desired restart or after a crash), neutron
openvswitch agent detects the restart, and sets up br-int and br-tun
again.  However, the base setup for br-tun/br-mpls/br-int integration is
done only at startup, via the initialize callback of the agent extension
API.  As a result on openvswitch restart the br-tun flows to forward
traffic to and from br-mpls are lost, and the BGPVPN connectivity is
interrupted on the compute node.

** Affects: neutron
 Importance: High
 Assignee: Thomas Morin (tmmorin-orange)
 Status: Confirmed


** Tags: bagpipe
-- 
bgpvpn functionality lost on openvswitch restart
https://bugs.launchpad.net/bugs/1646526
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1649138] Re: Initial LDAP bind occurs inconsistently depending on deployment configuration

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407561
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=f8ee249bf08cefd8468aa15c589dab48bd5c4cd8
Submitter: Jenkins
Branch:master

commit f8ee249bf08cefd8468aa15c589dab48bd5c4cd8
Author: Colleen Murphy 
Date:   Tue Dec 6 15:40:02 2016 +0100

Add anonymous bind to get_connection method

If no username and password is specified in the keystone ldap
configuration, it may still be possible to bind to an LDAP server
anonymously if the LDAP server is configured to allow it. Currently,
upon creating a connection object, keystone only attempts to bind to
the LDAP server if a username and password has been provided to it.
This would rarely be an issue because pyldap attempts a reconnect upon
executing any ldap command, if necessary, and hence the anonymous bind
just happens later. It is a problem now because logic was added[1] to
check if the server errored during that initial connection, and for it
to work correctly the initial connection needs to happen in a
predictable place. This patch adds an anonymous bind to the
get_connection method so that no matter the credential configuration
the initial connection is consistent.

This required adding mocks to many of the LDAP backend tests since
every LDAP interaction now attempts a simple_bind_s() regardless of
whether credentials are configured in keystone.

[1] https://review.openstack.org/#/c/390948

Closes-bug: #1649138

Change-Id: I193c9537c107092e48f7ea1d25ff9c17f872c15b


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649138

Title:
  Initial LDAP bind occurs inconsistently depending on deployment
  configuration

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Some operators configure their LDAP identity backends to allow
  anonymous binds for access to read-only information. This is a valid
  configuration within keystone, as keystone does not require LDAP
  credentials to be set in its config. Currently, if keystone is given
  LDAP credentials, it will attempt an initial authenticated bind at the
  same time that it creates a connection object[1]. If keystone does not
  have LDAP credentials, the first time it attempts to bind to the LDAP
  server will be upon the first time it executes a query, because pyldap
  will automatically attempt a "reconnect"[2] if necessary, so there's
  not normally any problem. The only reason this would be a problem
  would be if we were trying to do some connection validation, which
  arose in a recent review[3]. In order to validate the connection, the
  first connection needs to happen in a predictable place regardless of
  the method of binding.

  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/identity/backends/ldap/common.py?h=11.0.0.0b1#n1286
  [2] 
https://github.com/pyldap/pyldap/blob/pyldap-2.4.25.1/Lib/ldap/ldapobject.py#L1069
  [3] https://review.openstack.org/#/c/390948/
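The shape of the fix can be sketched as follows (get_connection and the connection object are simplified stand-ins for keystone's LDAP handler, not its real code; calling simple_bind_s with no arguments is the standard anonymous bind in python-ldap/pyldap):

```python
class FakeLdapConn:
    """Stand-in for an LDAP connection object; records bind attempts."""

    def __init__(self):
        self.binds = []

    def simple_bind_s(self, who=None, cred=None):
        self.binds.append((who, cred))


def get_connection(conn, user=None, password=None):
    # Bind immediately in every case, so a broken server errors here,
    # in one predictable place, instead of on the first later query.
    if user and password:
        conn.simple_bind_s(user, password)
    else:
        conn.simple_bind_s()  # anonymous bind
    return conn
```

With credentials the authenticated bind happens as before; without them the anonymous bind now happens at the same predictable point.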

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649138/+subscriptions



[Yahoo-eng-team] [Bug 1654502] Re: create a new flavor failed

2017-01-12 Thread Illia Polliul
** Project changed: fuel-plugin-contrail => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1654502

Title:
  create a new flavor failed

Status in OpenStack Compute (nova):
  New

Bug description:
  execute the command:
  [root]openstack flavor create m1.little --id 6 --ram 1024 --disk 5 --vcpu 1 
--public --rxtx-factor 1

  the command return:
  [root]Not all flavors have been migrated to the API database (HTTP 409) 
(Request-ID: req-37957ba2-1c1c-45ff-8dcf-4458d99f3729)

  Following this prompt, I found that the instance_types table exists in
  the nova database but not in the nova-api database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1654502/+subscriptions



[Yahoo-eng-team] [Bug 1654502] [NEW] create a new flavor failed

2017-01-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

execute the command:
[root]openstack flavor create m1.little --id 6 --ram 1024 --disk 5 --vcpu 1 
--public --rxtx-factor 1

the command return:
[root]Not all flavors have been migrated to the API database (HTTP 409) 
(Request-ID: req-37957ba2-1c1c-45ff-8dcf-4458d99f3729)

Following this prompt, I found that the instance_types table exists in
the nova database but not in the nova-api database.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
create a new flavor failed
https://bugs.launchpad.net/bugs/1654502
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1655979] [NEW] NUMATopologyFilter modifies the provided RequestSpec

2017-01-12 Thread Stephen Finucane
Public bug reported:

The 'NUMATopologyFilter' makes a call to 'numa_fit_instance_to_host' in
order to determine whether an instance with a sample topology could fit
on a given host. This function is provided with an InstanceNUMATopology
object, which was extracted from the RequestSpec provided to the filter.
However, the 'numa_fit_instance_to_host' call has the side effect of
modifying a couple of fields on this InstanceNUMATopology object,
notably the pinning information, which appears to be propagated to
subsequent calls of the filter. The reason for this propagation is
presumably Python's "call-by-object" model [1].

We should ensure the original RequestSpec is not modified, thus
preventing possible issues in the future.

[1] https://jeffknupp.com/blog/2012/11/13/is-python-callbyvalue-or-
callbyreference-neither/
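A minimal illustration of the direction a fix could take (the class and function names below are stubs for illustration, not nova's real objects): deep-copy the topology before handing it to the fitting code, so pinning written during one host check never leaks back into the RequestSpec seen by the next.

```python
import copy

class InstanceNUMATopologyStub:
    """Tiny stand-in for nova's InstanceNUMATopology object."""

    def __init__(self):
        self.cpu_pinning = None

def fit_instance_to_host(topology):
    # Mimics the side effect of numa_fit_instance_to_host: it writes
    # pinning information onto the object it is given.
    topology.cpu_pinning = {0: 2, 1: 3}
    return topology

def host_passes(request_spec_topology):
    # Work on a deep copy so the RequestSpec's own topology stays clean.
    candidate = copy.deepcopy(request_spec_topology)
    return fit_instance_to_host(candidate) is not None
```

Because of Python's call-by-object semantics, without the copy the mutation would be visible to every later caller holding the same object.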

** Affects: nova
 Importance: Undecided
 Assignee: Stephen Finucane (stephenfinucane)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1655979

Title:
  NUMATopologyFilter modifies the provided RequestSpec

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The 'NUMATopologyFilter' makes a call to 'numa_fit_instance_to_host'
  in order to determine whether an instance with a sample topology could
  fit on a given host. This function is provided with an
  InstanceNUMATopology object, which was extracted from the RequestSpec
  provided to the filter. However, the 'numa_fit_instance_to_host' call
  has the side effect of modifying a couple of fields on this
  InstanceNUMATopology object, notably the pinning information, which
  appears to be propagated to subsequent calls of the filter. The reason
  for this propagation is presumably Python's "call-by-object" model
  [1].

  We should ensure the original RequestSpec is not modified, thus
  preventing possible issues in the future.

  [1] https://jeffknupp.com/blog/2012/11/13/is-python-callbyvalue-or-
  callbyreference-neither/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1655979/+subscriptions



[Yahoo-eng-team] [Bug 1655972] [NEW] test_cleanup_network_namespaces_cleans_dhcp_and_l3_namespaces fails intermittently

2017-01-12 Thread Jakub Libosvar
*** This bug is a duplicate of bug 1654287 ***
https://bugs.launchpad.net/bugs/1654287

Public bug reported:

Example of failed test: http://logs.openstack.org/73/373973/13/check
/gate-neutron-dsvm-functional-ubuntu-xenial/20b6a30/logs/dsvm-
functional-
logs/neutron.tests.functional.cmd.test_netns_cleanup.NetnsCleanupTest.test_cleanup_network_namespaces_cleans_dhcp_and_l3_namespaces.txt.gz

From a quick look it seems that the 'find' command returned the result
of the previous 'netstat' command that was run via rootwrap.

** Affects: neutron
 Importance: Undecided
 Status: New

** This bug has been marked a duplicate of bug 1654287
   functional test netns_cleanup failing in gate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655972

Title:
  test_cleanup_network_namespaces_cleans_dhcp_and_l3_namespaces fails
  intermittently

Status in neutron:
  New

Bug description:
  Example of failed test: http://logs.openstack.org/73/373973/13/check
  /gate-neutron-dsvm-functional-ubuntu-xenial/20b6a30/logs/dsvm-
  functional-
  
logs/neutron.tests.functional.cmd.test_netns_cleanup.NetnsCleanupTest.test_cleanup_network_namespaces_cleans_dhcp_and_l3_namespaces.txt.gz

  From a quick look it seems that the 'find' command returned the result
  of the previous 'netstat' command that was run via rootwrap.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655972/+subscriptions



[Yahoo-eng-team] [Bug 1655974] [NEW] ml2 provides no information if there is no suitable mech_driver found during port binding

2017-01-12 Thread George Shuklin
Public bug reported:

If no suitable mech driver is found, ML2 just marks the port as
bind_failed and writes an uninformative message in the log:

2017-01-12 13:56:46.691 3889 ERROR neutron.plugins.ml2.managers [req-
d9d956d7-c9e9-4c1b-aa1b-59fb974dd980 5a08515f35d749068a6327e387ca04e2
7d450ecf00d64399aeb93bc122cb6dae - - -] Failed to bind port
f4e190cb-6678-43f6-9140-f662e9429e75 on host d02c7361-5e3a-4fdf-
89b5-f29b3901f0fc for vnic_type baremetal using segments
[{'segmentation_id': 21L, 'physical_network': u'provision', 'id':
u'6ed946b1-d7f6-4c8e-8459-10b6d65ce536', 'network_type': u'vlan'}]

I think it should report the reason to admins more clearly, saying
that no mechanism driver was found to bind the port.

In my case it was: INFO neutron.plugins.ml2.managers [-] Loaded
mechanism driver names: [], which was hard to debug due to lack of any
information from neutron-server (even in debug mode!).

version: 2:9.0.0-0ubuntu1~cloud0
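The kind of message the report asks for could look like this (a hypothetical sketch, not the actual ml2 manager code — try_bind_port and the driver interface here are illustrative):

```python
import logging

LOG = logging.getLogger("neutron.plugins.ml2.managers")

def try_bind_port(port_id, mechanism_drivers):
    # If no drivers are loaded at all, say so explicitly instead of
    # emitting only the generic "Failed to bind port ..." message.
    if not mechanism_drivers:
        LOG.error("Failed to bind port %s: no mechanism drivers are "
                  "loaded; check the [ml2] mechanism_drivers option.",
                  port_id)
        return "binding_failed"
    for driver in mechanism_drivers:
        if driver.bind_port(port_id):
            return "bound"
    LOG.error("Failed to bind port %s: none of the loaded mechanism "
              "drivers %s could bind it.", port_id,
              [d.name for d in mechanism_drivers])
    return "binding_failed"
```

The empty-driver-list case from the report ("Loaded mechanism driver names: []") would then be called out directly in the error.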

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- ml2 provides no information if there is no suitable mech_driver found
+ ml2 provides no information if there is no suitable mech_driver found during 
port binding

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655974

Title:
  ml2 provides no information if there is no suitable mech_driver found
  during port binding

Status in neutron:
  New

Bug description:
  If there is no suitable mech driver found, ML2 just make port
  bind_failed and write uninformative message in the log:

  2017-01-12 13:56:46.691 3889 ERROR neutron.plugins.ml2.managers [req-
  d9d956d7-c9e9-4c1b-aa1b-59fb974dd980 5a08515f35d749068a6327e387ca04e2
  7d450ecf00d64399aeb93bc122cb6dae - - -] Failed to bind port
  f4e190cb-6678-43f6-9140-f662e9429e75 on host d02c7361-5e3a-4fdf-
  89b5-f29b3901f0fc for vnic_type baremetal using segments
  [{'segmentation_id': 21L, 'physical_network': u'provision', 'id':
  u'6ed946b1-d7f6-4c8e-8459-10b6d65ce536', 'network_type': u'vlan'}]

  I think it should report reason for this to admins more clearly,
  saying that no mechanism driver found to bind port.

  In my case it was: INFO neutron.plugins.ml2.managers [-] Loaded
  mechanism driver names: [], which was hard to debug due to lack of any
  information from neutron-server (even in debug mode!).

  version: 2:9.0.0-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655974/+subscriptions



[Yahoo-eng-team] [Bug 1655964] [NEW] the error message when a invalid instance name is given on an instance creation have a generic error

2017-01-12 Thread Alphaxx
Public bug reported:

Hello guys,

Here is my problem: when you give your instance a name whose last
dot-separated part is purely numeric, it fails. Example:

cow.moo => ok
cow.moo3 => ok
cow => ok
cow. => ok
cow.333 => KO
cow.333. => KO

it's ok, because it's a conceptual choice :
https://bugs.launchpad.net/nova/+bug/1581977

  RESP BODY: {"NeutronError": {"message": "Invalid input for dns_name.
Reason: 'myinstance6.4' not a valid PQDN or FQDN. Reason: TLD '4' must
not be all numeric.", "type": "HTTPBadRequest", "detail": ""}}


but when a customer gives an invalid instance name, the instance
creation fails and the unrelated standard error message "no valid host
was found" appears in Horizon.

Would it not be better to get a more obvious error, in the same format
we get when we forget to choose an image? Something like "this instance
name is not valid".

I tried to modify "create_instance.py" and "update_instance.py" myself
by simply adding a regexp to

name = forms.CharField(label=_("Instance Name"), max_length=255)

but it doesn't seem satisfactory at all.


If you have a better idea, that would be great!
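The rule being violated (the last dot-separated label must not be all numeric) can be pre-checked client-side with a simple regular expression. The examples below mirror the reporter's table; note the exact pattern neutron applies to dns_name is an assumption here, not taken from its source.

```python
import re

# A final non-empty label that is all digits (optionally followed by a
# trailing dot) is rejected, mirroring "TLD '4' must not be all numeric".
_NUMERIC_TLD = re.compile(r"\.\d+\.?$")

def instance_name_ok(name):
    """Return True if the name's last label is not purely numeric."""
    return not _NUMERIC_TLD.search(name)
```

A check like this could give the form a specific "this instance name is not valid" error before the request ever reaches nova.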

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1655964

Title:
  the error message when a invalid instance name is given on an instance
  creation have a generic error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hello guys,

  Here is my problem: when you give your instance a name whose last
  dot-separated part is purely numeric, it fails. Example:

  cow.moo => ok
  cow.moo3 => ok
  cow => ok
  cow. => ok
  cow.333 => KO
  cow.333. => KO

  it's ok, because it's a conceptual choice :
  https://bugs.launchpad.net/nova/+bug/1581977

RESP BODY: {"NeutronError": {"message": "Invalid input for dns_name.
  Reason: 'myinstance6.4' not a valid PQDN or FQDN. Reason: TLD '4' must
  not be all numeric.", "type": "HTTPBadRequest", "detail": ""}}

  
  but when a customer gives an invalid instance name, the instance
  creation fails and the unrelated standard error message "no valid host
  was found" appears in Horizon.

  Would it not be better to get a more obvious error, in the same format
  we get when we forget to choose an image? Something like "this instance
  name is not valid".

  I tried to modify "create_instance.py" and "update_instance.py" myself
  by simply adding a regexp to

  name = forms.CharField(label=_("Instance Name"), max_length=255)

  but it doesn't seem satisfactory at all.


  If you have a better idea, that would be great!

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1655964/+subscriptions



[Yahoo-eng-team] [Bug 1636950] Re: Set network connection timeout on Keystone Identity's LDAP backend to prevent stall on bind

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/390948
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=2d239cfbc37573f245e6560b42117828b73d19b9
Submitter: Jenkins
Branch:master

commit 2d239cfbc37573f245e6560b42117828b73d19b9
Author: Kam Nasim 
Date:   Wed Jan 11 18:55:40 2017 +

Set connection timeout for LDAP configuration

Presently the Identity LDAP driver does not set a connection timeout
option, which has the disadvantage of causing the Identity LDAP backend
handler to stall indefinitely (or until TCP timeout) on LDAP bind if
a) the LDAP URL is incorrect, or b) there is a connection failure/link
loss.

This commit adds a new option to set the LDAP connection timeout,
which sets the OPT_NETWORK_TIMEOUT option on the LDAP object. This
will raise ldap.SERVER_DOWN exceptions on timeout.

Signed-off-by: Kam Nasim 

Closes-Bug: #1636950
Change-Id: I574e6368169ad60bef2cc990d2d410a638d1b770


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1636950

Title:
  Set network connection timeout on Keystone Identity's LDAP backend to
  prevent stall on bind

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In our Mitaka deployment when setting up the Identity driver to use an
  external LDAP backend, if the URL of the LDAP server is incorrect or
  there is a network connectivity issue, it is seen that the ldap driver
  would stall indefinitely (or until TCP timeout).

  This affects both LDAP connection pools and SimpleLDAP

  The LDAP configuration stanza (keystone.conf) provides a
  "pool_connection_timeout" option however this is not used anywhere
  within the LDAP driver.

  We have employed a fix downstream in our deployment which is to use
  this pool_connection_timeout value and set it as
  ldap.OPT_NETWORK_TIMEOUT so that the LDAP connection times out at the
  prescribed value without stalling indefinitely at the LDAP bind.
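In python-ldap/pyldap the fix boils down to a single set_option call. Below is a sketch with a fake connection object standing in for the result of ldap.initialize(); the numeric value 0x5005 is OpenLDAP's LDAP_OPT_NETWORK_TIMEOUT, exposed as ldap.OPT_NETWORK_TIMEOUT:

```python
OPT_NETWORK_TIMEOUT = 0x5005  # ldap.OPT_NETWORK_TIMEOUT in python-ldap

class FakeLdapConn:
    """Stand-in for the object returned by ldap.initialize()."""

    def __init__(self):
        self.options = {}

    def set_option(self, opt, value):
        self.options[opt] = value

def configure_timeout(conn, pool_connection_timeout):
    # With this option set, an unreachable server makes the bind raise
    # ldap.SERVER_DOWN after the timeout instead of hanging until the
    # TCP timeout expires.
    if pool_connection_timeout and pool_connection_timeout > 0:
        conn.set_option(OPT_NETWORK_TIMEOUT, pool_connection_timeout)
    return conn
```

Reusing the existing pool_connection_timeout config value, as the downstream fix does, means no timeout is set when the option is unset.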

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1636950/+subscriptions



[Yahoo-eng-team] [Bug 1655917] Re: Revision is not increased when updating "network router:external"

2017-01-12 Thread Gena
** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655917

Title:
  Revision is not increased when updating "network router:external"

Status in neutron:
  New

Bug description:
  When updating the network field router:external we expect the revision
  number to be increased, but it isn't.
  Run the "neutron net-update moshe --router:external=True" command for a
  new network.
  The field router:external is updated but the revision number is not
  increased.

  Tested on Newton version (10)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655917/+subscriptions



[Yahoo-eng-team] [Bug 1655919] Re: CI: Openvswitch agent fails because can't import tinyrpc.server module

2017-01-12 Thread Sagi (Sergey) Shnaidman
The new ryu package requires the new library tinyrpc; a new package is in
progress.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655919

Title:
  CI: Openvswitch agent fails because can't import tinyrpc.server module

Status in tripleo:
  Triaged

Bug description:
  TripleO Job logs:

  http://logs.openstack.org/18/419018/4/check/gate-tripleo-ci-centos-7
  -ovb-nonha-oooq-nv/7a08339

  neutron logs: http://logs.openstack.org/18/419018/4/check/gate-
  tripleo-ci-centos-7-ovb-nonha-oooq-
  nv/7a08339/logs/undercloud/var/log/neutron/

  messages log: http://logs.openstack.org/18/419018/4/check/gate-
  tripleo-ci-centos-7-ovb-nonha-oooq-
  nv/7a08339/logs/undercloud/var/log/messages

  Openvswitch service fails with traceback:

  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Option 
"notification_driver" from group "DEFAULT" is deprecated. Use option "driver" 
from group "oslo_messaging_notifications".
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Could not load 
neutron.openstack.common.notifier.rpc_notifier
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Traceback (most recent 
call last):
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/bin/neutron-openvswitch-agent", line 10, in 
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: sys.exit(main())
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py",
 line 20, in main
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: agent_main.main()
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/main.py",
 line 46, in main
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: mod = 
importutils.import_module(mod_name)
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 73, in 
import_module
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: __import__(import_str)
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py",
 line 18, in 
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: from ryu.base import 
app_manager
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/ryu/base/app_manager.py", line 35, in 
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: from ryu.app import wsgi
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/ryu/app/wsgi.py", line 23, in 
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: from tinyrpc.server 
import RPCServer
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: ImportError: No module 
named tinyrpc.server
  Jan 12 01:15:15 undercloud ironic-inspector: 2017-01-12 01:15:15.076 4421 
DEBUG futurist.periodics [-] Submitting periodic callback 
'ironic_inspector.main.periodic_update' _process_scheduled 
/usr/lib/python2.7/site-packages/futurist/periodics.py:623
  Jan 12 01:15:15 undercloud systemd: neutron-openvswitch-agent.service: main 
process exited, code=exited, status=1/FAILURE
  Jan 12 01:15:15 undercloud systemd: Unit neutron-openvswitch-agent.service 
entered failed state.
  Jan 12 01:15:15 undercloud systemd: neutron-openvswitch-agent.service failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1655919/+subscriptions



[Yahoo-eng-team] [Bug 1655939] [NEW] Automated setup using Vagrant + Virtualbox Failed

2017-01-12 Thread liuxin
Public bug reported:

Description
===
I followed the instructions at
http://docs.openstack.org/developer/dragonflow/installation.html#automated-setup-using-vagrant-virtualbox
and the deployment failed.

Steps to reproduce
==

# git clone https://git.openstack.org/openstack/dragonflow
# cd dragonflow
# vagrant plugin install vagrant-cachier
# vagrant plugin install vagrant-vbguest
# cd vagrant && vagrant up
# vagrant ssh devstack_controller
# cd devstack/
# ./stack.sh

Expected result
===
get HOST_IP and continue.

Actual result
=
++stackrc:source:762HOST_IP=
++stackrc:source:763'[' '' == '' ']'
++stackrc:source:764die 764 'Could not determine host 
ip address.  See local.conf for suggestions on setting HOST_IP.'
++functions-common:die:186  local exitcode=0
++functions-common:die:187  set +o xtrace
[Call Trace]
./stack.sh:191:source
/home/vagrant/devstack/stackrc:764:die
[ERROR] /home/vagrant/devstack/stackrc:764 Could not determine host ip address. 
See local.conf for suggestions on setting HOST_IP.

Environment
===
1. os version
# cat /proc/version
Linux version 4.4.0-57-generic (buildd@lgw01-54) (gcc version 5.4.0 20160609 
(Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #78-Ubuntu SMP Fri Dec 9 23:50:32 UTC 2016

2. vagrant version
# vagrant -v
Vagrant 1.9.1

3. virtualbox version
# virtualbox --help
Oracle VM VirtualBox Manager 5.1.12
(C) 2005-2016 Oracle Corporation
All rights reserved.


Logs & Configs
==

** Affects: dragonflow
 Importance: Undecided
 Assignee: liuxin (liuxin)
 Status: In Progress

** Project changed: nova => dragonflow

** Changed in: dragonflow
 Assignee: (unassigned) => 刘鑫 (liuxin)

** Changed in: dragonflow
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1655939

Title:
  Automated setup using Vagrant + Virtualbox Failed

Status in DragonFlow:
  In Progress

Bug description:
  Description
  ===
  I followed the instructions at
http://docs.openstack.org/developer/dragonflow/installation.html#automated-setup-using-vagrant-virtualbox
  and the deployment failed.

  Steps to reproduce
  ==

  # git clone https://git.openstack.org/openstack/dragonflow
  # cd dragonflow
  # vagrant plugin install vagrant-cachier
  # vagrant plugin install vagrant-vbguest
  # cd vagrant && vagrant up
  # vagrant ssh devstack_controller
  # cd devstack/
  # ./stack.sh

  Expected result
  ===
  get HOST_IP and continue.

  Actual result
  =
  ++stackrc:source:762HOST_IP=
  ++stackrc:source:763'[' '' == '' ']'
  ++stackrc:source:764die 764 'Could not determine host 
ip address.  See local.conf for suggestions on setting HOST_IP.'
  ++functions-common:die:186  local exitcode=0
  ++functions-common:die:187  set +o xtrace
  [Call Trace]
  ./stack.sh:191:source
  /home/vagrant/devstack/stackrc:764:die
  [ERROR] /home/vagrant/devstack/stackrc:764 Could not determine host ip 
address. See local.conf for suggestions on setting HOST_IP.

  Environment
  ===
  1. os version
  # cat /proc/version
  Linux version 4.4.0-57-generic (buildd@lgw01-54) (gcc version 5.4.0 20160609 
(Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #78-Ubuntu SMP Fri Dec 9 23:50:32 UTC 2016

  2. vagrant version
  # vagrant -v
  Vagrant 1.9.1

  3. virtualbox version
  # virtualbox --help
  Oracle VM VirtualBox Manager 5.1.12
  (C) 2005-2016 Oracle Corporation
  All rights reserved.

  
  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/dragonflow/+bug/1655939/+subscriptions



[Yahoo-eng-team] [Bug 1655917] Re: Revision is not increased when updating "network router:external"

2017-01-12 Thread Reedip
Hi Gena,
Thanks for the bug report, but we need more information to resolve it.

Can you please provide more information, as per the policy given in the
link below?

http://docs.openstack.org/developer/neutron/policies/bugs.html#bug-
screening-best-practices

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655917

Title:
  Revision is not increased when updating "network router:external"

Status in neutron:
  Invalid

Bug description:
  Try to update the network field router:external: we expect the revision
  number to be increased, but it is not.
  Run the "neutron net-update moshe --router:external=True" command on a new
  network.
  The field router-external is updated but the revision number is not increased.

  Tested on Newton version (10)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655919] [NEW] Openvswitch agent fails because can't import tinyrpc.server module

2017-01-12 Thread Sagi (Sergey) Shnaidman
Public bug reported:

TripleO Job logs:

http://logs.openstack.org/18/419018/4/check/gate-tripleo-ci-centos-7-ovb-nonha-oooq-nv/7a08339

neutron logs: http://logs.openstack.org/18/419018/4/check/gate-tripleo-ci-centos-7-ovb-nonha-oooq-nv/7a08339/logs/undercloud/var/log/neutron/

messages log: http://logs.openstack.org/18/419018/4/check/gate-tripleo-ci-centos-7-ovb-nonha-oooq-nv/7a08339/logs/undercloud/var/log/messages

Openvswitch service fails with traceback:

Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Option 
"notification_driver" from group "DEFAULT" is deprecated. Use option "driver" 
from group "oslo_messaging_notifications".
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Could not load 
neutron.openstack.common.notifier.rpc_notifier
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Traceback (most recent 
call last):
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File "/usr/bin/neutron-openvswitch-agent", line 10, in <module>
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: sys.exit(main())
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py",
 line 20, in main
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: agent_main.main()
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/main.py",
 line 46, in main
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: mod = 
importutils.import_module(mod_name)
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 73, in 
import_module
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: __import__(import_str)
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py", line 18, in <module>
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: from ryu.base import 
app_manager
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File "/usr/lib/python2.7/site-packages/ryu/base/app_manager.py", line 35, in <module>
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: from ryu.app import wsgi
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File "/usr/lib/python2.7/site-packages/ryu/app/wsgi.py", line 23, in <module>
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: from tinyrpc.server 
import RPCServer
Jan 12 01:15:14 undercloud neutron-openvswitch-agent: ImportError: No module 
named tinyrpc.server
Jan 12 01:15:15 undercloud ironic-inspector: 2017-01-12 01:15:15.076 4421 DEBUG 
futurist.periodics [-] Submitting periodic callback 
'ironic_inspector.main.periodic_update' _process_scheduled 
/usr/lib/python2.7/site-packages/futurist/periodics.py:623
Jan 12 01:15:15 undercloud systemd: neutron-openvswitch-agent.service: main 
process exited, code=exited, status=1/FAILURE
Jan 12 01:15:15 undercloud systemd: Unit neutron-openvswitch-agent.service 
entered failed state.
Jan 12 01:15:15 undercloud systemd: neutron-openvswitch-agent.service failed.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Undecided
 Status: New

** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655919

Title:
  Openvswitch agent fails because can't import tinyrpc.server module

Status in neutron:
  New
Status in tripleo:
  New

Bug description:
  TripleO Job logs:

  http://logs.openstack.org/18/419018/4/check/gate-tripleo-ci-centos-7-ovb-nonha-oooq-nv/7a08339

  neutron logs: http://logs.openstack.org/18/419018/4/check/gate-tripleo-ci-centos-7-ovb-nonha-oooq-nv/7a08339/logs/undercloud/var/log/neutron/

  messages log: http://logs.openstack.org/18/419018/4/check/gate-tripleo-ci-centos-7-ovb-nonha-oooq-nv/7a08339/logs/undercloud/var/log/messages

  Openvswitch service fails with traceback:

  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Option 
"notification_driver" from group "DEFAULT" is deprecated. Use option "driver" 
from group "oslo_messaging_notifications".
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Could not load 
neutron.openstack.common.notifier.rpc_notifier
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: Traceback (most recent 
call last):
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File "/usr/bin/neutron-openvswitch-agent", line 10, in <module>
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: sys.exit(main())
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 
"/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py",
 line 20, in main
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: agent_main.main()
  Jan 12 01:15:14 undercloud neutron-openvswitch-agent: File 

[Yahoo-eng-team] [Bug 1655610] Re: alembic_version table has no primary key

2017-01-12 Thread Ann Taraday
I put change for Neutron on review as well
https://review.openstack.org/#/c/419320/

** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655610

Title:
  alembic_version table has no primary key

Status in neutron:
  In Progress

Bug description:
  Currently alembic_version table in the neutron db has no primary key.

  This is a problem if you plan to use Galera as a database, since Galera
  requires primary keys in all tables.

  For example, during the "INFO  [alembic.runtime.migration] Running
  upgrade  -> kilo, kilo_initial" migration you will get the:

  oslo_db.exception.DBError: (pymysql.err.InternalError) (1105, u
  'Percona-XtraDB-Cluster prohibits use of DML command on a table
  (neutron.alembic_version) without an explicit primary key with
  pxc_strict_mode = ENFORCING or MASTER') [SQL: u"INSERT INTO
  alembic_version (version_num) VALUES ('kilo')"]
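The fix amounts to giving alembic_version an explicit primary key on version_num. A minimal sketch of the create-copy-swap approach, using SQLite only so the example is self-contained; on MySQL/Galera a single `ALTER TABLE alembic_version ADD PRIMARY KEY (version_num);` does the same job:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The problematic shape: a version table with no primary key at all.
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version (version_num) VALUES ('kilo')")

# Create the keyed replacement, copy the row, and swap the tables in.
conn.execute(
    "CREATE TABLE alembic_version_new ("
    "  version_num VARCHAR(32) NOT NULL,"
    "  CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num))"
)
conn.execute("INSERT INTO alembic_version_new SELECT version_num FROM alembic_version")
conn.execute("DROP TABLE alembic_version")
conn.execute("ALTER TABLE alembic_version_new RENAME TO alembic_version")

# PRAGMA table_info: the last column is non-zero for primary-key members.
pk_cols = [row[1] for row in conn.execute("PRAGMA table_info(alembic_version)") if row[5]]
print(pk_cols)  # ['version_num']
```

Newer alembic releases can also create the version table with a primary key from the start, so a migration like this only matters for databases stamped by older releases.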

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655610/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655914] [NEW] vpnaas test failure for router info

2017-01-12 Thread YAMAMOTO Takashi
Public bug reported:

recent router info change broke vpnaas tests.

eg. http://logs.openstack.org/76/415976/3/check/gate-neutron-vpnaas-dsvm-functional-sswan-ubuntu-xenial/a6a6b34/testr_results.html.gz

ft1.1: neutron_vpnaas.tests.functional.strongswan.test_strongswan_driver.TestStrongSwanDeviceDriver.test_process_lifecycle_StringException:
 Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutron_vpnaas/tests/functional/strongswan/test_strongswan_driver.py", line 121, in setUp
self.router = legacy_router.LegacyRouter(FAKE_ROUTER_ID, **ri_kwargs)
TypeError: __init__() takes at least 6 arguments (5 given)

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress


** Tags: gate-failure vpnaas

** Tags added: gate-failure vpnaas

** Description changed:

  recent router info change broke vpnaas tests.
+ 
+ eg. http://logs.openstack.org/76/415976/3/check/gate-neutron-vpnaas-dsvm-functional-sswan-ubuntu-xenial/a6a6b34/testr_results.html.gz
+ 
+ ft1.1: neutron_vpnaas.tests.functional.strongswan.test_strongswan_driver.TestStrongSwanDeviceDriver.test_process_lifecycle_StringException:
+  Empty attachments:
+   pythonlogging:''
+   stderr
+   stdout
+ 
+ Traceback (most recent call last):
+   File "neutron_vpnaas/tests/functional/strongswan/test_strongswan_driver.py", line 121, in setUp
+ self.router = legacy_router.LegacyRouter(FAKE_ROUTER_ID, **ri_kwargs)
+ TypeError: __init__() takes at least 6 arguments (5 given)

** Changed in: neutron
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655914

Title:
  vpnaas test failure for router info

Status in neutron:
  In Progress

Bug description:
  recent router info change broke vpnaas tests.

  eg. http://logs.openstack.org/76/415976/3/check/gate-neutron-vpnaas-dsvm-functional-sswan-ubuntu-xenial/a6a6b34/testr_results.html.gz

  ft1.1: neutron_vpnaas.tests.functional.strongswan.test_strongswan_driver.TestStrongSwanDeviceDriver.test_process_lifecycle_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
  File "neutron_vpnaas/tests/functional/strongswan/test_strongswan_driver.py", line 121, in setUp
  self.router = legacy_router.LegacyRouter(FAKE_ROUTER_ID, **ri_kwargs)
  TypeError: __init__() takes at least 6 arguments (5 given)
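The failure mode is generic: the router-info class grew an extra required positional argument, and out-of-tree callers built for the old signature now fail at construction time. A toy reproduction (the class, signature, and argument names are illustrative, not the actual neutron code):

```python
class LegacyRouter:
    # After the router-info change: "agent" is a new required argument.
    def __init__(self, router_id, agent, conf, interface_driver, use_ipv6):
        self.router_id = router_id
        self.agent = agent

old_style_kwargs = dict(conf=object(), interface_driver=object(), use_ipv6=False)

try:
    # Old vpnaas-style call site: does not pass the new argument.
    LegacyRouter("fake-router-id", **old_style_kwargs)
except TypeError as exc:
    print("old call site:", exc)

# Fixed call site passes the newly required argument as well.
router = LegacyRouter("fake-router-id", agent=object(), **old_style_kwargs)
print("fixed call site:", router.router_id)
```

The fix in the tests is correspondingly mechanical: construct the router with whatever extra arguments the new signature requires.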

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655917] [NEW] Revision is not increased when updating "network router:external"

2017-01-12 Thread Gena
Public bug reported:

Try to update the network field router:external: we expect the revision
number to be increased, but it is not.
Run the "neutron net-update moshe --router:external=True" command on a new
network.
The field router-external is updated but the revision number is not increased.

Tested on Newton version (10)
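The expected bookkeeping can be modelled in a few lines; this toy class (not neutron code) shows the behaviour the report asks for, where any effective field change bumps revision_number:

```python
class Network:
    """Toy stand-in for a neutron network with revision tracking."""

    def __init__(self):
        self.revision_number = 0
        self.fields = {"router:external": False}

    def update(self, **changes):
        changed = False
        for key, value in changes.items():
            if self.fields.get(key) != value:
                self.fields[key] = value
                changed = True
        if changed:
            # The reported bug is equivalent to this bump being skipped
            # for the router:external extension field.
            self.revision_number += 1
        return self.revision_number

net = Network()
print(net.update(**{"router:external": True}))  # 1: field changed, revision bumped
print(net.update(**{"router:external": True}))  # 1: no-op update, no bump
```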

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655917

Title:
  Revision is not increased when updating "network router:external"

Status in neutron:
  New

Bug description:
  Try to update the network field router:external: we expect the revision
  number to be increased, but it is not.
  Run the "neutron net-update moshe --router:external=True" command on a new
  network.
  The field router-external is updated but the revision number is not increased.

  Tested on Newton version (10)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649527] Re: nova creates an invalid ethernet/bridge interface definition in virsh xml

2017-01-12 Thread A.Ojea
** Also affects: centos
   Importance: Undecided
   Status: New

** No longer affects: centos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649527

Title:
  nova creates an invalid ethernet/bridge interface definition in virsh
  xml

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/designer.py#L61
  sets the script path of an ethernet interface to ""

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/config.py#L1228
  checks script for None. As it is not None but a string, it adds an empty
  script path to the ethernet interface definition in the virsh XML.

  Steps to reproduce
  ==

  nova generated virsh:

  [root@overcloud-novacompute-0 heat-admin]# cat 2.xml |grep tap -A5 -B3
  [interface XML stripped by the list archive; it showed an ethernet interface with an empty <script path=''/> element]

  XML validation:

  [root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 2.xml
  Relax-NG validity error : Extra element devices in interleave
  2.xml:59: element devices: Relax-NG validity error : Element domain failed to 
validate content
  2.xml fails to validate

  removing the <script/> element, the XML validation succeeds:

  [root@overcloud-novacompute-0 heat-admin]# cat 1.xml |grep tap -A5 -B2
  [interface XML stripped by the list archive; the same interface definition without the <script/> element]
  [root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 1.xml
  1.xml validates

  The point is that libvirt <2.0.0 is more tolerant; libvirt 2.0.0 throws a segfault:
   
  Dec  9 13:30:32 comp1 kernel: libvirtd[1048]: segfault at 8 ip 7fc9ff09e1c3 sp 7fc9edfef1d0 error 4 in libvirt.so.0.2000.0[7fc9fef4b000+352000]
  Dec  9 13:30:32 comp1 journal: End of file while reading data: Input/output error
  Dec  9 13:30:32 comp1 systemd: libvirtd.service: main process exited, code=killed, status=11/SEGV
  Dec  9 13:30:32 comp1 systemd: Unit libvirtd.service entered failed state.
  Dec  9 13:30:32 comp1 systemd: libvirtd.service failed.
  Dec  9 13:30:32 comp1 systemd: libvirtd.service holdoff time over, scheduling restart.
  Dec  9 13:30:32 comp1 systemd: Starting Virtualization daemon...
  Dec  9 13:30:32 comp1 systemd: Started Virtualization daemon.

  Expected result
  ===
  The VM can be started.
  Instead of checking for None, config.py should check for an empty string
  before adding the script path.
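A sketch of the proposed check (function and element names are illustrative, not the actual nova config.py code): treating the empty string like None means no script element is emitted at all, which is the form the validator accepts:

```python
from xml.etree import ElementTree as ET

def format_interface(script_path):
    iface = ET.Element("interface", type="ethernet")
    # The buggy variant tested "script_path is not None", so "" still
    # produced an empty <script path=''/> element. Truthiness testing
    # rejects both None and "".
    if script_path:
        ET.SubElement(iface, "script", path=script_path)
    return ET.tostring(iface).decode()

print(format_interface(""))           # <interface type="ethernet" />
print(format_interface("/bin/true"))
```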

  
  Actual result
  =
  VM doesn't start

  Environment
  ===
  OSP10/Newton, libvirt 2.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1651765] Re: Don't enable net.bridge.bridge-nf-call-arptables for iptables firewall

2017-01-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/413645
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=af0c53887c24b155842bd29ca73dc3800b4b1ec4
Submitter: Jenkins
Branch:master

commit af0c53887c24b155842bd29ca73dc3800b4b1ec4
Author: Ihar Hrachyshka 
Date:   Sat Dec 17 01:35:29 2016 +

iptables: don't enable arptables firewall

We don't use any arptables based firewall rules. This should somewhat
optimize kernel packet processing performance.

I think the setting came from:
http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf

but does not apply to the way we use iptables.

Depends-On: I41796c76172f5243e4f9c4902363abb1f19d0d12
Change-Id: I5de6cf0fac4d957ada816d3cd2ae1df9831f333d
Closes-Bug: #1651765


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1651765

Title:
  Don't enable net.bridge.bridge-nf-call-arptables for iptables firewall

Status in neutron:
  Fix Released

Bug description:
  This setting is of no use for neutron, because we don't use any
  arptables based firewall rules.

  More info at: https://bugzilla.redhat.com/show_bug.cgi?id=1357598

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1651765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655892] [NEW] Hyper-V: Adds vNUMA implementation

2017-01-12 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/282407
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 2195e4d68486ed70e55d0b5f038b13bd35e3271c
Author: Claudiu Belu 
Date:   Fri Feb 19 18:12:57 2016 +0200

Hyper-V: Adds vNUMA implementation

vNUMA can improve the performance of workloads running on virtual machines
that are configured with large amounts of memory. This feature is useful
for high-performance NUMA-aware applications, such as database or web
servers.

Returns Hyper-V host NUMA node information during get_available_resource
Adds validation for instances requiring NUMA topology (no asymmetric
topology and no CPU pinning supported).
Creates NUMA aware instances, if necessary.

The compute-cpu-topologies page in the admin-guide will have to be
updated to include Hyper-V NUMA topologies usage and configuration.

DocImpact

Change-Id: Iba2110e95e80b9511698cb7df2963fd218264c8e
Implements: blueprint hyper-v-vnuma-enable

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1655892

Title:
  Hyper-V: Adds vNUMA implementation

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/282407
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2195e4d68486ed70e55d0b5f038b13bd35e3271c
  Author: Claudiu Belu 
  Date:   Fri Feb 19 18:12:57 2016 +0200

  Hyper-V: Adds vNUMA implementation
  
  vNUMA can improve the performance of workloads running on virtual machines
  that are configured with large amounts of memory. This feature is useful
  for high-performance NUMA-aware applications, such as database or web
  servers.
  
  Returns Hyper-V host NUMA node information during get_available_resource
  Adds validation for instances requiring NUMA topology (no asymmetric
  topology and no CPU pinning supported).
  Creates NUMA aware instances, if necessary.
  
  The compute-cpu-topologies page in the admin-guide will have to be
  updated to include Hyper-V NUMA topologies usage and configuration.
  
  DocImpact
  
  Change-Id: Iba2110e95e80b9511698cb7df2963fd218264c8e
  Implements: blueprint hyper-v-vnuma-enable

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1655892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654504] Re: Resize instance which is booted from volume failed

2017-01-12 Thread Tao Li
In master this is working well, so there is no need to update.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1654504

Title:
  Resize instance which is booted from volume failed

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===

  When an instance is booted from volume, the driver will consider it booted
  from image in the resize scenario. This bug was introduced by the patch for
  bug #1587802.

  Steps to reproduce
  ==

  1. Boot an instance from volume with flavor m1.small.
  2. Resize the instance to m1.tiny.

  
  Expected result
  ===
  The instance is resized successfully.

  Actual result
  =
  Exception was raised as follows.

  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3964, in 
_finish_resize
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] block_device_info, power_on)
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7417, in 
finish_migration
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] 
fallback_from_host=migration.source_compute)
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3180, in 
_create_image
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] 
backend.create_snap(libvirt_utils.RESIZE_SNAPSHOT_NAME)
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 944, 
in create_snap
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] return 
self.driver.create_snap(self.rbd_name, name)
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 
381, in create_snap
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] with RBDVolumeProxy(self, 
str(volume), pool=pool) as vol:
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 
65, in __init__
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] driver._disconnect_from_rados(client, 
ioctx)
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] self.force_reraise()
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] six.reraise(self.type_, self.value, 
self.tb)
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 
61, in __init__
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] read_only=read_only)
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc]   File 
"/usr/lib/python2.7/site-packages/rbd.py", line 374, in __init__
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] raise make_ex(ret, 'error opening 
image %s at snapshot %s' % (name, snapshot))
  2017-01-06 15:30:44.200 7947 ERROR nova.compute.manager [instance: 
496d2d53-5a57-44bf-93c4-26baec9a9bbc] ImageNotFound: error opening image 
496d2d53-5a57-44bf-93c4-26baec9a9bbc_disk at snapshot None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1654504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe :