[Yahoo-eng-team] [Bug 1604110] Re: when ngimages is set as default panel, page loops infinitely

2016-09-16 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1604110

Title:
  when ngimages is set as default panel, page loops infinitely

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Steps to reproduce the issue:
  1. Comment out 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/enabled/_1020_project_overview_panel.py#L21

  # DEFAULT_PANEL = 'overview'

  This disables 'overview' as the default panel of the project
  dashboard.

  In
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/enabled/_1051_project_ng_images_panel.py

  2. Set DISABLED = False
  3. Add line: DEFAULT_PANEL = 'ngimages'

  4. Load the project dashboard.
  The ngimages panel will reload itself infinitely. I set a breakpoint in
images.module.js and saw that the JS modules were loaded again and again,
but the page itself never finished loading.
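For reference, the three steps above amount to enabled files along these lines (a hedged sketch; the variable names follow Horizon's enabled-file convention, but the exact file contents are illustrative):

```python
# Hypothetical sketch of the two "enabled" files after the edits above
# (contents are illustrative, not the exact upstream files).

# openstack_dashboard/enabled/_1020_project_overview_panel.py
PANEL = 'overview'
PANEL_DASHBOARD = 'project'
# DEFAULT_PANEL = 'overview'        # step 1: commented out

# openstack_dashboard/enabled/_1051_project_ng_images_panel.py
PANEL = 'ngimages'
PANEL_DASHBOARD = 'project'
DISABLED = False                    # step 2: enable the panel
DEFAULT_PANEL = 'ngimages'          # step 3: make it the default
```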

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1604110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624570] Re: qos rules are not applied to trunk subports

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371828
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ecdf75163527736728e0816ed3ab8e68e0f47b97
Submitter: Jenkins
Branch: master

commit ecdf75163527736728e0816ed3ab8e68e0f47b97
Author: Armando Migliaccio 
Date:   Fri Sep 16 15:38:40 2016 -0700

Change the prefix for trunk subports device_owner

The choice was poorly made as the 'network:' prefix is used extensively
in the codebase to identify ports created by the platform itself. Subports
instead are user created.

Closes-bug: 1624570

Change-Id: Ie792a154a6946d0acd5bed322363319e241b1ae7


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624570

Title:
  qos rules are not applied to trunk subports

Status in neutron:
  Fix Released

Bug description:
  Change [1] introduced a device_owner for trunk sub-ports. However, it
  was overlooked that device_owner is used by QoS to determine whether
  to process the port and apply QoS rules [2,3,4].

  Right now there is no active coverage ensuring that QoS and trunks
  are compatible, but as things stand they clearly cannot be, because
  the code forbids applying rules to anything whose device_owner
  starts with 'network:' or 'neutron:'.

  We need to figure out a way to solve the conundrum.

  [1] https://review.openstack.org/#/c/368289/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/qos/rule.py#L84
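For context, the check referenced in [2] boils down to a prefix test along these lines (an illustrative sketch, not neutron's actual code; the device_owner values are examples):

```python
# Hypothetical sketch of the device_owner prefix check that excludes
# ports from QoS processing. Trunk subports originally used a
# 'network:'-style prefix, so they were skipped even though they are
# user-created.
SKIPPED_PREFIXES = ('network:', 'neutron:')

def qos_applies(device_owner):
    # QoS rules are only applied to ports NOT owned by the platform.
    return not device_owner.startswith(SKIPPED_PREFIXES)

assert qos_applies('compute:nova')
assert not qos_applies('network:router_centralized_snat')
```

The fix referenced in the review above moves subports out of the skipped namespace by changing their device_owner prefix.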

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624570/+subscriptions



[Yahoo-eng-team] [Bug 1620764] Re: migration test fails on table addition

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371075
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=0818d42cb2afb5cd1ded72d9c2cc7c0ac4762a2c
Submitter: Jenkins
Branch: master

commit 0818d42cb2afb5cd1ded72d9c2cc7c0ac4762a2c
Author: David Stanek 
Date:   Thu Sep 15 19:28:22 2016 +

Ensure the sqla-migrate scripts cache is cleared

The cache was causing problems with our multiple repositories since
the modules all have the same names.
  e.g. expand_repo/versions/001_awesome.py and
   data_migration_repo/versions/001_awesome.py)

Change-Id: Ib9bbeebc2149b0aee5f3d6fec963e40e90f1b743
Closes-Bug: #1620764
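The collision described in the commit message can be sketched as follows (illustrative code, not keystone's or sqlalchemy-migrate's actual implementation): a cache keyed only on module name hands the second repository the first repository's module.

```python
# Minimal illustration of a name-keyed module cache returning a stale
# entry when two repositories use the same module name (hypothetical).
_cache = {}

def load_script(repo, name):
    # Bug: the cache key ignores the repository, so the second repo
    # gets the first repo's module back.
    if name not in _cache:
        _cache[name] = (repo, name)
    return _cache[name]

expand = load_script('expand_repo', '001_awesome.py')
data = load_script('data_migration_repo', '001_awesome.py')
# Both lookups return the expand_repo entry unless the cache is
# cleared (or keyed on the repo as well) between repositories.
```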


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1620764

Title:
  migration test fails on table addition

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  If an expand repo migration adds a table, the corresponding unit test
  fails when attempting to access the created table, with the error
  "Table does not exist" [0]

  [0] http://logs.openstack.org/88/208488/51/check/gate-keystone-
  python27-db-ubuntu-
  xenial/81311f3/console.html#_2016-09-06_14_27_49_936937

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1620764/+subscriptions



[Yahoo-eng-team] [Bug 1624383] Re: vif plugging fails with ovn when trying to set mtu on qvb device that does not exist

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371543
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=fc0e281743917dbda25fb1911500439abed192ca
Submitter: Jenkins
Branch: master

commit fc0e281743917dbda25fb1911500439abed192ca
Author: John Garbutt 
Date:   Fri Sep 16 14:51:10 2016 +0100

Stop ovn networking failing on mtu

The type check needed to be more specific. There are some subclasses
that don't create the devices we were trying to set the mtu on.

Change-Id: Icc628a2dbde137d320fb78ad45b2ee0f7b5775fa
Closes-Bug: #1624383
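The shape of the fix can be sketched as follows (hypothetical names throughout; nova's real code paths differ): only call the MTU-setting helper when the plug type actually creates the qvb/veth devices.

```python
# Hypothetical sketch: skip the MTU call for plug types that do not
# create a veth pair (e.g. OVN plugs the port directly into OVS).
class OvsHybridPlug:
    creates_veth_pair = True

class OvnPlug:
    creates_veth_pair = False

def maybe_set_mtu(plug, device, mtu, set_mtu):
    # A narrower capability check than "any OVS-ish plug": only touch
    # the device when this plug type actually created it.
    if getattr(plug, 'creates_veth_pair', False):
        set_mtu(device, mtu)
        return True
    return False

calls = []
maybe_set_mtu(OvsHybridPlug(), 'qvbXXX', 1450, lambda d, m: calls.append((d, m)))
maybe_set_mtu(OvnPlug(), 'qvbYYY', 1450, lambda d, m: calls.append((d, m)))
# Only the hybrid-plug case sets the MTU; the OVN case no longer
# touches a nonexistent qvb device.
```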


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624383

Title:
  vif plugging fails with ovn when trying to set mtu on qvb device that
  does not exist

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Richard Theis reported the regression here:

  https://review.openstack.org/#/c/370681/9/nova/virt/libvirt/vif.py

  Shown here:

  http://logs.openstack.org/75/371175/1/check/gate-tempest-dsvm-
  networking-
  ovn/7e52927/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-16_11_20_39_627

  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager 
[req-78de89a8-2053-42d9-899f-04aa2b8f25c7 
tempest-FloatingIPsTestJSON-1913520192 tempest-FloatingIPsTestJSON-1913520192] 
[instance: fa58495c-1f29-4149-9efb-2bbf7624d221] Instance failed to spawn
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] Traceback (most recent call last):
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2078, in _build_resources
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] yield resources
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1920, in 
_build_and_run_instance
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] block_device_info=block_device_info)
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2583, in spawn
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] post_xml_callback=gen_confdrive)
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4814, in 
_create_domain_and_network
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] self.plug_vifs(instance, network_info)
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 684, in plug_vifs
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] self.vif_driver.plug(instance, vif)
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 817, in plug
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] self._plug_os_vif(instance, vif_obj, 
vif)
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 799, in _plug_os_vif
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] linux_net._set_device_mtu(veth, mtu)
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/network/linux_net.py", line 1237, in _set_device_mtu
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] check_exit_code=[0, 2, 254])
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/utils.py", line 295, in execute
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] return 
RootwrapProcessHelper().execute(*cmd, **kwargs)
  2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/utils.py", line 178, in execute
  2016-09-16 11:21:29.140 15810 ERROR 

[Yahoo-eng-team] [Bug 1619466] Re: Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc command line

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/366690
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=7cd34433057897219b01ea2871ed7eba9a034d27
Submitter: Jenkins
Branch: master

commit 7cd34433057897219b01ea2871ed7eba9a034d27
Author: Nir Magnezi 
Date:   Wed Sep 7 07:12:27 2016 -0400

Adds neutron_lbaas.conf and services_lbaas.conf to q-svc command line

When q-lbaasv2 is enabled in local.conf, the neutron-lbaas plugin.sh
script creates new services_lbaas.conf and neutron_lbaas.conf files
with some config parameters.

Under several circumstances, some of the options in those files are
needed by other neutron daemons, such as the q-svc service.

This patch modifies the neutron-lbaas devstack plugin to include the
above mentioned config files in q-svc command line, by adding those
files to Q_PLUGIN_EXTRA_CONF_FILES.

Since both config files are shipped in neutron-lbaas, both should be
included. Starting from Ocata, the service provider option won't be
loaded into q-svc automatically, so that is another good incentive to
have it passed with --config-file.

Closes-Bug: #1619466
Related-Bug: #1565511

Change-Id: I652ab029b7427c8783e4b2f0443a89ee884bf064


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619466

Title:
  Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc
  command line

Status in neutron:
  Fix Released

Bug description:
  When q-lbaasv2 is enabled in your devstack local.conf, this implies
  that LBaaS v2 is going to be used, and neutron-lbaas's corresponding
  devstack plugin.sh script creates a new
  /etc/neutron/neutron_lbaas.conf file with some configuration
  parameters. However, under several circumstances, some of the options
  in this file are needed by other neutron daemons, such as the q-svc
  daemon.

  So, if q-lbaasv2 is enabled in devstack local.conf, then the command-
  line for the q-svc agent should also include '--config-file
  /etc/neutron/neutron_lbaas.conf' so that these configuration
  parameters are pulled in for that daemon.
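  In devstack terms, the fix amounts to adding the extra files to Q_PLUGIN_EXTRA_CONF_FILES so they become --config-file arguments; a rough shell sketch (paths and variable handling are illustrative, not the plugin's exact code):

```shell
# Hypothetical sketch of how extra conf files end up as --config-file
# arguments on the neutron-server (q-svc) command line.
Q_PLUGIN_EXTRA_CONF_FILES="neutron_lbaas.conf services_lbaas.conf"

CMD="neutron-server --config-file /etc/neutron/neutron.conf"
for f in $Q_PLUGIN_EXTRA_CONF_FILES; do
    CMD="$CMD --config-file /etc/neutron/$f"
done
echo "$CMD"
```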

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619466/+subscriptions



[Yahoo-eng-team] [Bug 1624570] [NEW] qos rules are not applied to trunk subports

2016-09-16 Thread Armando Migliaccio
Public bug reported:

Change [1] introduced a device_owner for trunk sub-ports. However, it
was overlooked that device_owner is used by QoS to determine whether to
process the port and apply QoS rules [2,3,4].

Right now there is no active coverage ensuring that QoS and trunks are
compatible, but as things stand they clearly cannot be, because the
code forbids applying rules to anything whose device_owner starts with
'network:' or 'neutron:'.

We need to figure out a way to solve the conundrum.

[1] https://review.openstack.org/#/c/368289/
[2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/qos/rule.py#L84

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: newton-rc-potential

** Tags added: newton-rc-potential

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624570

Title:
  qos rules are not applied to trunk subports

Status in neutron:
  New

Bug description:
  Change [1] introduced a device_owner for trunk sub-ports. However, it
  was overlooked that device_owner is used by QoS to determine whether
  to process the port and apply QoS rules [2,3,4].

  Right now there is no active coverage ensuring that QoS and trunks
  are compatible, but as things stand they clearly cannot be, because
  the code forbids applying rules to anything whose device_owner
  starts with 'network:' or 'neutron:'.

  We need to figure out a way to solve the conundrum.

  [1] https://review.openstack.org/#/c/368289/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/qos/rule.py#L84

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624570/+subscriptions



[Yahoo-eng-team] [Bug 1624562] [NEW] container details styles in swift ui are broken

2016-09-16 Thread Daniel Castellanos
Public bug reported:

The styling for the container details in swift UI is broken


- Go to Project > Object Storage > Containers
- Select a container (create a new one if you don't have any)

Result: the container details are slightly misaligned and rendered as a
bulleted list

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: swift ux

** Tags added: swift ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624562

Title:
  container details styles in swift ui are broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The styling for the container details in swift UI is broken

  
  - Go to Project > Object Storage > Containers
  - Select a container (create a new one if you don't have any)

  Result: the container details are slightly misaligned and rendered as
  a bulleted list

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624562/+subscriptions



[Yahoo-eng-team] [Bug 1624554] [NEW] database deadlock during tempest run

2016-09-16 Thread Alex Schultz
Public bug reported:

During our Puppet OpenStack CI testing with tempest, we received an
error indicating a DB deadlock in the nova API.

http://logs.openstack.org/78/370178/1/check/gate-puppet-openstack-
integration-3-scenario001-tempest-ubuntu-
xenial/8c9eb76/console.html#_2016-09-16_19_28_57_526498

Attached are the nova configurations and nova logs for this run.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-deadlock.tar"
   
https://bugs.launchpad.net/bugs/1624554/+attachment/4742312/+files/nova-deadlock.tar

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624554

Title:
  database deadlock during tempest run

Status in OpenStack Compute (nova):
  New

Bug description:
  During our Puppet OpenStack CI testing with tempest, we received an
  error indicating a DB deadlock in the nova API.

  http://logs.openstack.org/78/370178/1/check/gate-puppet-openstack-
  integration-3-scenario001-tempest-ubuntu-
  xenial/8c9eb76/console.html#_2016-09-16_19_28_57_526498

  Attached are the nova configurations and nova logs for this run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624554/+subscriptions



[Yahoo-eng-team] [Bug 1593001] Re: Horizon Workflows should be one experience

2016-09-16 Thread OpenStack Infra
** Changed in: horizon
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1593001

Title:
  Horizon Workflows should be one experience

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The experience of the legacy (Django) workflows and the modern
  (angular based) workflows are drastically different.  They should be
  aligned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1593001/+subscriptions



[Yahoo-eng-team] [Bug 1570482] Re: UX: Single Step Workflows should be handled differently

2016-09-16 Thread OpenStack Infra
** Changed in: horizon
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1570482

Title:
  UX: Single Step Workflows should be handled differently

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Single Step Workflows shouldn't even be treated as workflows ... it
  wastes space and looks strange.

  See here: https://i.imgur.com/RwKCHc4.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1570482/+subscriptions



[Yahoo-eng-team] [Bug 1624546] [NEW] swift ui creating infinite folders in special scenarios

2016-09-16 Thread Daniel Castellanos
Public bug reported:

The swift UI creates an infinite folder tree when specific input is
provided as the folder name

Steps to reproduce:

1. Go to Project > Object Storage > Containers
2. Create a new container, e.g. "Test"
3. Add a folder to the container with the following name: Folder1;/Folder2

Expected Result:

Folder1; is created with a subfolder Folder2

Actual Result:
An infinite tree of folders is created, with Folder1; as the root of
each node

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: swift

** Attachment added: "swift.tiff"
   https://bugs.launchpad.net/bugs/1624546/+attachment/4742245/+files/swift.tiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624546

Title:
  swift ui creating infinite folders in special scenarios

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The swift UI creates an infinite folder tree when specific input is
  provided as the folder name

  Steps to reproduce:

  1. Go to Project > Object Storage > Containers
  2. Create a new container, e.g. "Test"
  3. Add a folder to the container with the following name: Folder1;/Folder2

  Expected Result:

  Folder1; is created with a subfolder Folder2

  Actual Result:
  An infinite tree of folders is created, with Folder1; as the root of
  each node

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624546/+subscriptions



[Yahoo-eng-team] [Bug 1376316] Re: nova absolute-limits floating ip count is incorrect in a neutron based deployment

2016-09-16 Thread Sivasathurappan Radhakrishnan
We should not be tracking usage of network resources such as floating
IPs in Nova when using Neutron. The patch below filters network-related
limits out of the API response.

https://review.openstack.org/#/c/344947/7

You can check this by trying a curl request with OpenStack-API-Version:
compute 2.36:

curl -g -i -X GET http://192.168.0.31:8774/v2.1/limits -H
"OpenStack-API-Version: compute 2.36" -H "Accept: application/json" -H
"X-OpenStack-Nova-API-Version: 2.32" -H "X-Auth-Token: $OS_TOKEN"

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376316

Title:
  nova absolute-limits floating ip count is incorrect in a neutron based
  deployment

Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  In Progress

Bug description:
  1.
  $ lsb_release -rd
  Description:  Ubuntu 14.04 LTS
  Release:  14.04

  2.
  $ apt-cache policy python-novaclient 
  python-novaclient:
Installed: 1:2.17.0-0ubuntu1
Candidate: 1:2.17.0-0ubuntu1
Version table:
   *** 1:2.17.0-0ubuntu1 0
  500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. nova absolute-limits should report the correct value of allocated floating 
ips
  4. nova absolute-limits shows 0 floating ips when I have 5 allocated

  $ nova absolute-limits | grep Floating
  | totalFloatingIpsUsed| 0  |
  | maxTotalFloatingIps | 10 |

  $ nova floating-ip-list
  +---------------+-----------+------------+---------+
  | Ip            | Server Id | Fixed Ip   | Pool    |
  +---------------+-----------+------------+---------+
  | 10.98.191.146 |           | -          | ext_net |
  | 10.98.191.100 |           | 10.5.0.242 | ext_net |
  | 10.98.191.138 |           | 10.5.0.2   | ext_net |
  | 10.98.191.147 |           | -          | ext_net |
  | 10.98.191.102 |           | -          | ext_net |
  +---------------+-----------+------------+---------+

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: python-novaclient 1:2.17.0-0ubuntu1
  ProcVersionSignature: User Name 3.13.0-24.47-generic 3.13.9
  Uname: Linux 3.13.0-24-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.2
  Architecture: amd64
  Date: Wed Oct  1 15:19:08 2014
  Ec2AMI: ami-0001
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: python-novaclient
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376316/+subscriptions



[Yahoo-eng-team] [Bug 1456899] Re: nova absolute-limits Security groups count incorrect when using Neutron

2016-09-16 Thread Sivasathurappan Radhakrishnan
As John mentioned in his comment, we should not be tracking usage of
security groups in Nova. The patch below filters network-related limits
out of the API response.

https://review.openstack.org/#/c/344947/7

You can check this by trying a curl request with OpenStack-API-Version:
compute 2.36:

curl -g -i -X GET http://192.168.0.31:8774/v2.1/limits -H
"OpenStack-API-Version: compute 2.36" -H "Accept: application/json" -H
"X-OpenStack-Nova-API-Version: 2.32" -H "X-Auth-Token: $OS_TOKEN"


** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
 Assignee: Sivasathurappan Radhakrishnan (siva-radhakrishnan) => 
(unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456899

Title:
  nova absolute-limits Security groups count incorrect when using
  Neutron

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Used security groups always shows 1, even if I have 2 or 0 assigned to a VM
   nova absolute-limits
  ++--+---+
  | Name   | Used | Max   |
  ++--+---+
  | Cores  | 2| 20|
  | FloatingIps| 0| 10|
  | ImageMeta  | -| 128   |
  | Instances  | 1| 10|
  | Keypairs   | -| 100   |
  | Personality| -| 5 |
  | Personality Size   | -| 10240 |
  | RAM| 4096 | 51200 |
  | SecurityGroupRules | -| 20|
  | SecurityGroups | 1| 10|
  | Server Meta| -| 128   |
  | ServerGroupMembers | -| 10|
  | ServerGroups   | 0| 10|
  ++--+---+

  nova show 2e722ad7-d54b-4122-8b90-0debec882668
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| MANUAL   
|
  | OS-EXT-AZ:availability_zone  | nova 
|
  | OS-EXT-SRV-ATTR:host | puma09.scl.lab.tlv.redhat.com
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | puma09.scl.lab.tlv.redhat.com
|
  | OS-EXT-SRV-ATTR:instance_name| instance-0001
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_state  | active   
|
  | OS-SRV-USG:launched_at   | 2015-05-18T06:20:45.00   
|
  | OS-SRV-USG:terminated_at | -
|
  | accessIPv4   |  
|
  | accessIPv6   |  
|
  | config_drive |  
|
  | created  | 2015-05-18T06:18:33Z 
|
  | flavor   | m1.medium (3)
|
  | hostId   | 
3e2a5e99d50824f33c61f2408bab8e92fd70f1af4e4f23d569c04a4f |
  | id   | 2e722ad7-d54b-4122-8b90-0debec882668 
|
  | image| rhel 
(565e7dc4-67d1-46d7-8ef5-765c1455e530)  |
  | int_net network  | 192.168.1.3, 10.35.170.2 
|
  | key_name | -
|
  | metadata | {}   
|
  | name | VM-1 
|
  | os-extended-volumes:volumes_attached | []   
|
  | progress | 0
|
  | security_groups  | default, test
|
  | status   | ACTIVE   
|
  | tenant_id| 2c238e6d92af464889aca6a16d80f857 
|
  | updated  | 2015-05-18T06:20:45Z 
  

[Yahoo-eng-team] [Bug 1624515] [NEW] DVR: SNAT port not found in the list error in check jobs

2016-09-16 Thread Brian Haley
Public bug reported:

While looking through logs I came across this ERROR:

http://logs.openstack.org/30/370430/6/check/gate-tempest-dsvm-neutron-
dvr-ubuntu-
xenial/c853e94/logs/screen-q-l3.txt.gz#_2016-09-15_16_27_41_804

Pasting here since that log could go away (even if the paste is horribly
wrapped, sorry):

ERROR neutron.agent.l3.dvr_router_base [-] DVR: SNAT port not found in
the list [{u'allowed_address_pairs': [], u'extra_dhcp_opts': [],
u'updated_at': u'2016-09-15T16:26:02', u'device_owner':
u'network:router_centralized_snat', u'revision_number': 6,
u'port_security_enabled': False, u'binding:profile': {},
u'binding:vnic_type': u'normal', u'fixed_ips': [{u'subnet_id':
u'af427fca-9194-440f-87d6-e74e4d1c8a27', u'prefixlen': 28,
u'ip_address': u'10.1.0.4'}], u'id': u'54df3773-e6cf-4d9c-
b1b9-0e43f77ba21a', u'security_groups': [], u'binding:vif_details':
{u'port_filter': True, u'ovs_hybrid_plug': True}, u'address_scopes':
{u'4': None, u'6': None}, u'binding:vif_type': u'ovs', u'mac_address':
u'fa:16:3e:ed:f7:77', u'project_id': u'', u'status': u'ACTIVE',
u'subnets': [{u'dns_nameservers': [], u'ipv6_ra_mode': None,
u'gateway_ip': u'10.1.0.1', u'cidr': u'10.1.0.0/28', u'id':
u'af427fca-9194-440f-87d6-e74e4d1c8a27', u'subnetpool_id': None}],
u'binding:host_id': u'ubuntu-xenial-osic-cloud1-4314226',
u'description': u'', u'device_id': u'6e1fd121-cf73-4e72-b6fa-
431761591de6', u'name': u'', u'admin_state_up': True, u'network_id':
u'86a63ce4-50b6-48ee-b192-c77fe2481db9', u'tenant_id': u'',
u'created_at': u'2016-09-15T16:25:51', u'mtu': 1450, u'extra_subnets':
[]}, {u'allowed_address_pairs': [], u'extra_dhcp_opts': [],
u'updated_at': u'2016-09-15T16:26:06', u'device_owner':
u'network:router_centralized_snat', u'revision_number': 7,
u'port_security_enabled': False, u'binding:profile': {},
u'binding:vnic_type': u'normal', u'fixed_ips': [{u'subnet_id':
u'b433a3c3-eb7c-404b-aea3-5e454b4dd0bd', u'prefixlen': 64,
u'ip_address': u'2003:0:0:1::2'}, {u'subnet_id':
u'3dd250b4-eade-4239-9011-9860ccf31364', u'prefixlen': 64,
u'ip_address': u'2003::4'}], u'id': u'7454b08a-3431-4099-a3bc-
7e46c56f310b', u'security_groups': [], u'binding:vif_details':
{u'port_filter': True, u'ovs_hybrid_plug': True}, u'address_scopes':
{u'4': None, u'6': None}, u'binding:vif_type': u'ovs', u'mac_address':
u'fa:16:3e:ce:db:80', u'project_id': u'', u'status': u'ACTIVE',
u'subnets': [{u'dns_nameservers': [], u'ipv6_ra_mode':
u'dhcpv6-stateless', u'gateway_ip': u'2003:0:0:1::1', u'cidr':
u'2003:0:0:1::/64', u'id': u'b433a3c3-eb7c-404b-aea3-5e454b4dd0bd',
u'subnetpool_id': None}, {u'dns_nameservers': [], u'ipv6_ra_mode':
u'dhcpv6-stateless', u'gateway_ip': u'2003::1', u'cidr': u'2003::/64',
u'id': u'3dd250b4-eade-4239-9011-9860ccf31364', u'subnetpool_id':
None}], u'binding:host_id': u'ubuntu-xenial-osic-cloud1-4314226',
u'description': u'', u'device_id': u'6e1fd121-cf73-4e72-b6fa-
431761591de6', u'name': u'', u'admin_state_up': True, u'network_id':
u'7862083c-a928-4170-b99e-3cfd0fb3ae77', u'tenant_id': u'',
u'created_at': u'2016-09-15T16:25:55', u'mtu': 1450, u'extra_subnets':
[]}] for the given router  internal port {u'allowed_address_pairs': [],
u'extra_dhcp_opts': [], u'updated_at': u'2016-09-15T16:27:37',
u'device_owner': u'network:router_interface_distributed',
u'revision_number': 8, u'port_security_enabled': False,
u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips':
[{u'subnet_id': u'3dd250b4-eade-4239-9011-9860ccf31364', u'prefixlen':
64, u'ip_address': u'2003::1'}], u'id':
u'6092e3b0-f833-4611-82e4-7fe5d1d31021', u'security_groups': [],
u'binding:vif_details': {}, u'address_scopes': {u'4': None, u'6': None},
u'binding:vif_type': u'distributed', u'mac_address':
u'fa:16:3e:ea:5f:f0', u'project_id':
u'b557a0938e3748d6a34b0ca2efdee658', u'status': u'ACTIVE', u'subnets':
[{u'dns_nameservers': [], u'ipv6_ra_mode': u'dhcpv6-stateless',
u'gateway_ip': u'2003::1', u'cidr': u'2003::/64', u'id':
u'3dd250b4-eade-4239-9011-9860ccf31364', u'subnetpool_id': None}],
u'binding:host_id': u'', u'description': u'', u'device_id':
u'6e1fd121-cf73-4e72-b6fa-431761591de6', u'name': u'',
u'admin_state_up': True, u'network_id': u'7862083c-a928-4170-b99e-
3cfd0fb3ae77', u'tenant_id': u'b557a0938e3748d6a34b0ca2efdee658',
u'created_at': u'2016-09-15T16:25:54', u'mtu': 1450, u'extra_subnets':
[{u'dns_nameservers': [], u'ipv6_ra_mode': u'dhcpv6-stateless',
u'gateway_ip': u'2003:0:0:1::1', u'cidr': u'2003:0:0:1::/64', u'id':
u'b433a3c3-eb7c-404b-aea3-5e454b4dd0bd', u'subnetpool_id': None}]}

From looking at the trace, I do see the subnet in question in the list,
but since the code only checks the first port it doesn't find it. I'm
curious why it doesn't look in all of them, so I will propose a patch to
verify it works and get some feedback.
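A minimal sketch of the fix described above, scanning every internal port's `fixed_ips` for the subnet instead of only the first port's, might look like this (a hypothetical illustration, not Neutron's actual code; the port dicts mirror the trace above, heavily trimmed):

```python
# Hypothetical illustration (not Neutron's actual code): the bug is that
# only the first internal port's fixed_ips are checked for the subnet;
# the proposed fix is to scan every port.

def subnet_on_any_port(ports, subnet_id):
    """Return True if any port has a fixed IP on the given subnet."""
    for port in ports:
        for fixed_ip in port.get('fixed_ips', []):
            if fixed_ip.get('subnet_id') == subnet_id:
                return True
    return False

# Port dicts shaped like the trace above (heavily trimmed).
ports = [
    {'fixed_ips': [{'subnet_id': 'b433a3c3-eb7c-404b-aea3-5e454b4dd0bd'}]},
    {'fixed_ips': [{'subnet_id': '3dd250b4-eade-4239-9011-9860ccf31364'}]},
]

# Checking only ports[0] misses the second subnet; scanning all finds it.
assert subnet_on_any_port(ports, '3dd250b4-eade-4239-9011-9860ccf31364')
```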

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1624508] [NEW] Glance raises a HTTPBadRequest instead of HTTPRequestRangeNotSatisfiable (400 instead of 416)

2016-09-16 Thread Dharini Chandrasekar
Public bug reported:

Currently when range requests are given as a part of the HTTP header request 
for partial downloads of Glance images, a call to the method 
``get_content_range()`` in glance/common/wsgi.py is made.
If the request range is not satisfiable (For example a bad range like 2-4/3 or 
2-/3), the ``get_content_range()`` method raises a 400 instead of a 416.

Since this method deals only with parsing and returning the content
range header, it is more appropriate for it to raise an
HTTPRequestRangeNotSatisfiable with a status code of 416 instead of a
400.

File:
https://github.com/openstack/glance/blob/master/glance/common/wsgi.py#L975-L983
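The distinction can be sketched with a toy parser (an illustration only, not Glance's actual `get_content_range()`): a header that cannot be parsed at all warrants a 400, while one that parses but describes an impossible range, like 2-4/3 or 2-/3, warrants a 416. The `HTTPError` class and `check_range` function below are hypothetical names, not Glance APIs:

```python
# Illustration only (not Glance's wsgi.py): distinguish a malformed
# "start-end/total" value (400 Bad Request) from a well-formed but
# unsatisfiable one (416 Requested Range Not Satisfiable).

class HTTPError(Exception):
    def __init__(self, status):
        super().__init__(status)
        self.status = status

def check_range(value):
    """Parse 'start-end/total'; raise 400 if malformed, 416 if unsatisfiable."""
    try:
        span, total_s = value.split('/')
        start_s, end_s = span.split('-')
        start, total = int(start_s), int(total_s)
        end = int(end_s) if end_s else None
    except ValueError:
        raise HTTPError(400)  # not even parseable: Bad Request
    if end is None or start > end or end >= total:
        raise HTTPError(416)  # parseable but impossible to satisfy
    return start, end, total
```

With this split, the bad ranges from the report (2-4/3, 2-/3) hit the 416 branch rather than the 400 one, which is the behavior the bug asks for.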

** Affects: glance
 Importance: Undecided
 Assignee: Dharini Chandrasekar (dharini-chandrasekar)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Dharini Chandrasekar (dharini-chandrasekar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1624508

Title:
  Glance raises a HTTPBadRequest instead of
  HTTPRequestRangeNotSatisfiable (400 instead of 416)

Status in Glance:
  New

Bug description:
  Currently when range requests are given as a part of the HTTP header request 
for partial downloads of Glance images, a call to the method 
``get_content_range()`` in glance/common/wsgi.py is made.
  If the request range is not satisfiable (For example a bad range like 2-4/3 
or 2-/3), the ``get_content_range()`` method raises a 400 instead of a 416.

  Since this method deals only with parsing and returning the content
  range header, it is more appropriate for it to raise an
  HTTPRequestRangeNotSatisfiable with a status code of 416 instead of a
  400.

  File:
  
https://github.com/openstack/glance/blob/master/glance/common/wsgi.py#L975-L983

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1624508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624496] Re: _create_or_update_agent used by neutron-dynamic-routing

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371742
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=b4c840d97b828c4f81c813f41f3de5f1355f2896
Submitter: Jenkins
Branch:master

commit b4c840d97b828c4f81c813f41f3de5f1355f2896
Author: Armando Migliaccio 
Date:   Fri Sep 16 11:06:22 2016 -0700

Stop using _create_or_update_agent

There is a public method, use it instead!

Change-Id: Ie076a2860a54c5e2958c16593c3f39f86353cd34
Closes-bug: #1624496


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624496

Title:
  _create_or_update_agent used by neutron-dynamic-routing

Status in neutron:
  Fix Released

Bug description:
  With change [1] we accidentally removed a private method, which is
  used by neutron-dynamic-routing. Furthermore, we tweaked the signature
of the corresponding public method. This is really bad and should not
have happened.

  No point in crying over spilled milk, [3] shows us that damage is not
  as bad as it sounds, and does not justify a revert for [1], but this
  warrants an RC2 for dynamic-routing, whose unit tests are currently
  broken for RC1.

  [1] https://review.openstack.org/#/c/367182/
  [2] https://review.openstack.org/#/c/371680/
  [3] 
http://codesearch.openstack.org/?q=create_or_update_agent=nope==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624496/+subscriptions



[Yahoo-eng-team] [Bug 1624494] Re: [master] metadata is not working on multi-node setup

2016-09-16 Thread Prashant Shetty
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624494

Title:
  [master] metadata is not working on multi-node setup

Status in devstack:
  New
Status in neutron:
  New

Bug description:
  Setup:
  1. One controller
  2. KVM and ESX nova compute
  3. Two Network nodes running q-dhcp and q-meta

  Nodes: Ubuntu 14.04.3 amd64 Trusty

  On the above setup, running a few metadata queries returns a 500
  Internal Server Error.

  $ route -n
  Kernel IP routing table
  Destination Gateway Genmask Flags Metric RefUse Iface
  0.0.0.0 1.1.107.1   0.0.0.0 UG0  00 eth0
  1.1.107.0   0.0.0.0 255.255.255.0   U 0  00 eth0
  169.254.169.254 1.1.107.1   255.255.255.255 UGH   0  00 eth0
  $ curl http://169.254.169.254
  500 Internal Server Error

  Remote metadata server experienced an internal server error.

  $

  NOTE: Moved all services to single node(controller) and metadata
  queries seems to work fine.

  vmware@cntr11:~$ neutron agent-list
  
+--++---+---+---+++
  | id   | agent_type | host  | 
availability_zone | alive | admin_state_up | binary |
  
+--++---+---+---+++
  | 0168206e-6c13-40df-a0b8-7772220ab9cd | DHCP agent | network-1 | nova
  | :-)   | True   | neutron-dhcp-agent |
  | 08c8bb6e-0c7f-41e3-b134-81e2d5334aea | Metadata agent | network-1 | 
  | :-)   | True   | neutron-metadata-agent |
  | 25e9fa6d-6acc-4b8e-bb0b-f2d3ac20981d | Metadata agent | network-2 | 
  | :-)   | True   | neutron-metadata-agent |
  | 534d349c-8830-4648-814b-611a30f59287 | DHCP agent | network-2 | nova
  | :-)   | True   | neutron-dhcp-agent |
  
+--++---+---+---+++
  vmware@cntr11:~$ 
  vmware@cntr11:~$ nova service-list
  
++--+---+--+-+---++-+
  | Id | Binary   | Host  | Zone | Status  | State | 
Updated_at | Disabled Reason |
  
++--+---+--+-+---++-+
  | 7  | nova-conductor   | cntr11| internal | enabled | up| 
2016-09-16T10:00:22.00 | -   |
  | 9  | nova-compute | esx-ubuntu-01 | nova | enabled | up| 
2016-09-16T10:00:14.00 | -   |
  | 10 | nova-compute | kvm-3 | nova | enabled | up| 
2016-09-16T10:00:23.00 | -   |
  | 11 | nova-compute | kvm-2 | nova | enabled | up| 
2016-09-16T10:00:19.00 | -   |
  | 12 | nova-compute | kvm-1 | nova | enabled | up| 
2016-09-16T10:00:19.00 | -   |
  | 13 | nova-scheduler   | cntr11| internal | enabled | up| 
2016-09-16T10:00:15.00 | -   |
  | 14 | nova-consoleauth | cntr11| internal | enabled | up| 
2016-09-16T10:00:20.00 | -   |
  
++--+---+--+-+---++-+
  vmware@cntr11:~$ 

  Logs:
  2016-09-13 13:31:50.713 14309 DEBUG eventlet.wsgi.server [-] (14309) accepted 
'' server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:868
  2016-09-13 13:31:50.715 14309 DEBUG neutron.agent.metadata.agent [-] Request: 
GET / HTTP/1.0^M
  Accept-Encoding: gzip, deflate^M
  Content-Length: 0^M
  Content-Type: text/plain^M
  Host: 169.254.169.254^M
  User-Agent: Python-httplib2/0.9.2 (gzip)^M
  X-Forwarded-For: 1.1.107.3^M
  X-Neutron-Router-Id: bbe453a5-db77-4cd9-af02-31232a222f16 __call__ 
/opt/stack/neutron/neutron/agent/metadata/agent.py:86
  2016-09-13 13:31:50.716 14309 DEBUG oslo_messaging._drivers.amqpdriver [-] 
CALL msg_id: 50b283bdeab945cb93872300abd5b47c exchange 'neutron' topic 
'q-plugin' _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:448
  2016-09-13 13:31:50.807 14309 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received reply msg_id: 50b283bdeab945cb93872300abd5b47c __call__ 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:296
  2016-09-13 13:31:50.808 14309 DEBUG oslo_messaging._drivers.amqpdriver [-] 
CALL msg_id: 52646ead6313405fb688b80cbb4bfc73 exchange 'neutron' topic 
'q-plugin' _send 

[Yahoo-eng-team] [Bug 1624496] [NEW] _create_or_update_agent used by neutron-dynamic-routing

2016-09-16 Thread Armando Migliaccio
Public bug reported:

With change [1] we accidentally removed a private method, which is used
by neutron-dynamic-routing. Furthermore, we tweaked the signature of the
corresponding public method. This is really bad and should not have
happened.

No point in crying over spilled milk, [3] shows us that damage is not as
bad as it sounds, and does not justify a revert for [1], but this
warrants an RC2 for dynamic-routing, whose unit tests are currently
broken for RC1.

[1] https://review.openstack.org/#/c/367182/
[2] https://review.openstack.org/#/c/371680/
[3] 
http://codesearch.openstack.org/?q=create_or_update_agent=nope==

** Affects: neutron
 Importance: Critical
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: In Progress


** Tags: newton-rc-potential

** Description changed:

  With change [1] we accidentally removed a private method, which is used
  by neutron-dynamic-routing. Furthermore, we tweaked the signature of the
  corresponding public method. This is really bad and should have not
  happened.
  
  No point in crying over spilled milk, [3] shows us that damage is not as
- bad and does not justify a revert for [1].
+ bad and does not justify a revert for [1], but this warrants an RC2 for
+ dynamic-routing, whose unit tests are currently broken for RC1.
  
  [1] https://review.openstack.org/#/c/367182/
  [2] https://review.openstack.org/#/c/371680/
  [3] 
http://codesearch.openstack.org/?q=create_or_update_agent=nope==

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
Milestone: None => newton-rc2

** Tags added: newton-rc-potential

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624496

Title:
  _create_or_update_agent used by neutron-dynamic-routing

Status in neutron:
  In Progress

Bug description:
  With change [1] we accidentally removed a private method, which is
  used by neutron-dynamic-routing. Furthermore, we tweaked the signature
  of the corresponding public method. This is really bad and should not
  have happened.

  No point in crying over spilled milk, [3] shows us that damage is not
  as bad as it sounds, and does not justify a revert for [1], but this
  warrants an RC2 for dynamic-routing, whose unit tests are currently
  broken for RC1.

  [1] https://review.openstack.org/#/c/367182/
  [2] https://review.openstack.org/#/c/371680/
  [3] 
http://codesearch.openstack.org/?q=create_or_update_agent=nope==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624496/+subscriptions



[Yahoo-eng-team] [Bug 1561200] Re: created_at and updated_at times don't include timezone

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/368682
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=424a633fd985ad517a06dee1aeec805702d2e576
Submitter: Jenkins
Branch:master

commit 424a633fd985ad517a06dee1aeec805702d2e576
Author: Kevin Benton 
Date:   Fri Sep 9 06:20:40 2016 -0700

Include timezone in timestamp fields

The Neutron 'created_at'/'updated_at' fields on API resources
were inconsistent with other OpenStack projects because we did
not include timezone information. This patch addressed that
problem by adding the zulu time indicator onto the end of the
fields.

Because this could break clients expecting no timezone, this patch
also eliminates the 'timestamp_core' and 'timestamp_ext' extensions
and consolidates them into a new 'timestamp' extension. This makes
the change discoverable via the API.

This is assuming the current API development paradigm where
extensions can come and go depending on the deployment and the client
is expected to handle this by checking the loaded extensions.
Once we decide extensions are permanent, this type of change will
no longer be possible.

Even though this is being proposed late in the cycle, it is better
to get this change in before the release where we expose even more
resources with incorrectly formatted timestamps.

APIImpact
Closes-Bug: #1561200
Change-Id: I2ee2ed4c713d88345adc55b022feb95653eec663
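The behavior the commit describes can be sketched in a few lines (an assumption-level illustration, not Neutron's actual serialization helper): a naive UTC datetime gains a trailing 'Z' (zulu) indicator so clients can parse the timezone unambiguously.

```python
# Sketch only (not Neutron's actual code): append the zulu indicator to
# a naive datetime that is known to be UTC.
import datetime

def add_zulu(dt):
    """Render a naive UTC datetime as ISO 8601 with an explicit 'Z'."""
    return dt.isoformat() + 'Z'

created_at = datetime.datetime(2016, 9, 16, 10, 0, 22)
assert add_zulu(created_at) == '2016-09-16T10:00:22Z'
```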


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561200

Title:
  created_at and updated_at times don't include timezone

Status in neutron:
  Fix Released

Bug description:
  created_at and updated_at were recently added to the API calls and
  notifications for many neutron resources (networks, subnets, ports,
  possibly more), which is awesome! I've noticed that the times don't
  include a timezone (compare to nova servers and glance images, for
  instance).

  Even if there's an assumption a user can make, this can create
  problems with some display tools (I noticed this because a JavaScript
  date formatting filter does local timezone conversions when a timezone
  is included, which meant times for resources created seconds apart
  looked as though they were several hours adrift).

  Tested on neutron mitaka RC1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561200/+subscriptions



[Yahoo-eng-team] [Bug 1624441] Re: Heat: Resource CREATE failed: StackValidationFailed

2016-09-16 Thread Matt Riedemann
That trace doesn't show what the actual error was that caused networking
to fail.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624441

Title:
  Heat: Resource CREATE failed: StackValidationFailed

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi,

  I am getting following error during the Stack creation and launching.
  Could you please help me. Thanks.

  >> heat stack-show XX

  stack_status_reason   | Resource CREATE failed: StackValidationFailed:

  resources.pl_group.resources[3].resources.pl_vm: 
  | Property error: pl_vm.Properties.networks[1].port:  
  | Error validating value u'd2bc0630-2dfd-49c2-8994-02418486ad4c': Unable to 
find port with name'd2bc0630-2dfd-49c2-8994-02418486ad4c'

  
  >> neutron port-list | grep d2bc0630-2dfd-49c2-8994-02418486ad4c
  >> neutron net-list | grep d2bc0630-2dfd-49c2-8994-02418486ad4c

  
  >> compute log:

  2016-09-16 09:52:35.215 8210 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('compute0-15', 'compute0-15')
  2016-09-16 09:52:35.257 8210 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
  2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager Traceback (most 
recent call last):
  2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1643, in 
_allocate_network_async
  2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
  2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 294, in 
allocate_for_instance
  2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager raise 
exception.PortInUse(port_id=request.port_id)
  2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager PortInUse: Port 
1cc143cc-d943-49e3-b2bf-81b8910283be is still in use.
  2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager
  2016-09-16 09:52:35.407 8210 INFO nova.virt.libvirt.driver [-] [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] Creating image
  2016-09-16 09:52:40.357 8210 ERROR nova.compute.manager [-] [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] Instance failed to spawn
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] Traceback (most recent call last):
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2243, in 
_build_resources
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] yield resources
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in 
_build_and_run_instance
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] block_device_info=block_device_info)
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2615, in 
spawn
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] admin_pass=admin_password)
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3096, in 
_create_image
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] instance, network_info, admin_pass, 
files, suffix)
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2893, in 
_inject_data
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] network_info, 
libvirt_virt_type=CONF.libvirt.virt_type)
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/netutils.py", line 87, in 
get_injected_network_template
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] if not (network_info and template):
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/network/model.py", line 463, in __len__
  2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   

[Yahoo-eng-team] [Bug 1624065] Re: Multiple security groups with the same name are created

2016-09-16 Thread Matt Riedemann
This is how neutron works (assuming you're using neutron), security
group names and descriptions are not unique. If you were using nova-
network (which is now deprecated), security group names are unique per
project, but nova-network != neutron and as noted nova-network is
deprecated.

** Changed in: nova
   Status: New => Invalid

** Tags added: neutron security-groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624065

Title:
  Multiple security groups with the same name are created

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description:
  
  I am able to create multiple security groups with the same name and same 
description. This behaviour can result in confusion.

  Expected Behaviour:
  --
  Enforcing uniqueness in security group names

  AWS gives me an error when I try to create multiple security groups
  with the same name in the same vpc.

  '''
  An error occurred creating your security group.
  The security group 'launch-wizard-3' already exists for VPC 'vpc-03cfc166'
  '''

  Environment:
  ---
  OpenStack Mitaka on Ubuntu 14.04 server

  Reproduction Steps:
  ---

  Steps from horizon:
  1. Create multiple security groups with same name and same description

  Steps from cli:
  1. Run the command "nova secgroup-create test test" multiple times

  Result:
  --
  nova secgroup-list
  +--+-++
  | Id   | Name| Description|
  +--+-++
  | 7708f691-7107-43d3-87f4-1d3e672dbe8d | default | Default security group |
  | 60d730cc-476b-4d0b-8fbe-f06f09a0b9cd | test| test   |
  | 63481312-0f6c-4575-af37-3941e9864cfb | test| test   |
  | 827a8642-6b14-47b7-970d-38b8136f62a8 | test| test   |
  | 827c33b5-ee4b-43eb-867d-56b3c858664c | test| test   |
  | 95607bc1-43a4-4105-9aad-f072ac330499 | test| test   |
  +--+-++

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624065/+subscriptions



[Yahoo-eng-team] [Bug 1621200] Re: password created_at does not honor timezones

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/367025
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=32328de6e37e1f9f55d563f8a55087dc7d6f46e1
Submitter: Jenkins
Branch:master

commit 32328de6e37e1f9f55d563f8a55087dc7d6f46e1
Author: Ronald De Rose 
Date:   Wed Sep 7 23:51:09 2016 +

Fixes password created_at errors due to the server_default

Migration 002 sets the password created_at column to a TIMESTAMP type
with a server_default=sql.func.now(). There are a couple problems
that have been uncovered with this change:
* We cannot guarantee that func.now() will generate a UTC timestamp.
* For some older versions of MySQL, the default TIMESTAMP column will
automatically be updated when other columns are updated:
https://dev.mysql.com/doc/refman/5.5/en/timestamp-initialization.html

This patch fixes the problem by recreating the password created_at
column back to a DateTime type without a server_default:
1) Drop and recreate the created_at column
2) Update the created_at value
3) Set the created_at column as not nullable

Closes-Bug: #1621200
Change-Id: Id5c607a777afb6565d66a336028eba796e3846b2
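The principle behind the fix can be sketched as follows (an illustration of the idea only, not Keystone's migration code): compute `created_at` in the application as an explicit UTC value instead of relying on the database's `server_default` NOW(), whose result depends on the server's timezone setting.

```python
# Sketch only (not Keystone's code): application-side UTC timestamps are
# timezone-independent, unlike SQL func.now() on a server running at,
# say, UTC+3 as in the bug report.
import datetime

def utc_created_at():
    # utcnow() returns a naive datetime that is always UTC wall-clock
    # time, regardless of the host or database timezone.
    return datetime.datetime.utcnow()

ts = utc_created_at()
assert ts.tzinfo is None  # stored naive, interpreted as UTC by convention
```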


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1621200

Title:
  password created_at does not honor timezones

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  In Progress

Bug description:
  This was initially discovered when running the unit tests for
  migration 002 in a timezone that is UTC+3.

  Migration 002 sets the password created_at column to a TIMESTAMP type
  with a server_default=sql.func.now(). There are a couple problems
  that have been uncovered with this change:
  * We cannot guarantee that func.now() will generate a UTC timestamp.
  * For some older versions of MySQL, the TIMESTAMP column will
  automatically be updated when other columns are updated:
  https://dev.mysql.com/doc/refman/5.5/en/timestamp-initialization.html

  Steps to reproduce:
  1. dpkg-reconfigure tzdata and select there Europe/Moscow (UTC+3).
  2. Restart mysql
  3. Configure opportunistic tests with the following command in mysql:
  GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest' @'%' identified by 
'openstack_citest' WITH GRANT OPTION;
  4. Run 
keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password

  Expected result: test pass

  Actual result:
  Traceback (most recent call last):
    File "keystone/tests/unit/identity/backends/test_base.py", line 255, in 
test_change_password
  self.driver.authenticate(user['id'], new_password)
    File "keystone/identity/backends/sql.py", line 65, in authenticate
  raise AssertionError(_('Invalid user / password'))
  AssertionError: Invalid user / password

  Aside from the test issue, we should be saving all time related data
  in DateTime format instead of TIMESTAMP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1621200/+subscriptions



[Yahoo-eng-team] [Bug 1624270] Re: openstack error 'oslo_config.cfg.NoSuchOptError'

2016-09-16 Thread Matt Riedemann
*** This bug is a duplicate of bug 1574988 ***
https://bugs.launchpad.net/bugs/1574988

This is fixed in the latest stable/mitaka release and in master
(newton).

** This bug has been marked a duplicate of bug 1574988
   

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624270

Title:
  openstack error 'oslo_config.cfg.NoSuchOptError'

Status in OpenStack Compute (nova):
  New

Bug description:
  
  [root@controller1 ~]#
  [root@controller1 ~]# openstack server create --flavor m1.nano --image cirros 
--nic net-id=b002f05b-5342-4a67-9c93-8a72158cab60 --security-group default 
--key-name mykey provider-instance
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-b06491f6-df6d-4d75-82af-546fa23790a1)
  [root@controller1 ~]#
  [root@controller1 ~]#
  [root@controller1 ~]# uname -a
  Linux controller1.arundell.com 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 
11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  [root@controller1 ~]#

  [root@controller1 nova]# cat /etc/redhat-release
  CentOS Linux release 7.2.1511 (Core)
  [root@controller1 nova]#

  
  Log file : /var/log/nova/nova-api.log logs pasted below 
  ==

  2016-09-16 08:29:21.089 11479 INFO nova.api.openstack.wsgi 
[req-b92dbd45-5130-45d1-bc2c-003b2791fc38 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf4d77bb497d72d6b397 - - -] HTTP exception thrown: Image not found.
  2016-09-16 08:29:21.090 11479 INFO nova.osapi_compute.wsgi.server 
[req-b92dbd45-5130-45d1-bc2c-003b2791fc38 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images/cirros HTTP/1.1" status: 404 len: 
351 time: 0.3027530
  2016-09-16 08:29:21.159 11479 INFO nova.osapi_compute.wsgi.server 
[req-af640c20-e4cb-4bc8-ba18-a425db112776 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images HTTP/1.1" status: 200 len: 770 
time: 0.0645511
  2016-09-16 08:29:21.209 11479 INFO nova.osapi_compute.wsgi.server 
[req-9417d9b1-73ef-41dc-b67f-fabe60d2406f 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images/e5e3e4d7-c74a-4e97-80f3-06fdc014a079
 HTTP/1.1" status: 200 len: 95time: 0.0442920
  2016-09-16 08:29:21.240 11479 INFO nova.api.openstack.wsgi 
[req-dd435c55-c360-4dee-bcb3-8902576848b0 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf4d77bb497d72d6b397 - - -] HTTP exception thrown: Flavor m1.nano 
could not be found.
  2016-09-16 08:29:21.241 11479 INFO nova.osapi_compute.wsgi.server 
[req-dd435c55-c360-4dee-bcb3-8902576848b0 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors/m1.nano HTTP/1.1" status: 404 
len: 369 time: 0.0274239

  2016-09-16 08:29:21.278 11479 INFO nova.osapi_compute.wsgi.server 
[req-eb65e696-ee36-475a-9c24-a7539503e14d 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors HTTP/1.1" status: 200 len: 1740 
time: 0.0316298
  2016-09-16 08:29:21.324 11479 INFO nova.osapi_compute.wsgi.server 
[req-23b1d340-4aa7-482b-a22c-c2c7e7157990 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors/0 HTTP/1.1" status: 200 len: 689 
time: 0.0391679
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions 
[req-b06491f6-df6d-4d75-82af-546fa23790a1 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] Unexpected exception in API method
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
iwrapped
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
apper
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
apper
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
apper
  2016-09-16 08:29:21.581 11479 ERROR 

[Yahoo-eng-team] [Bug 1619602] Re: Hyper-V: vhd config drive images are not migrated

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/364829
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2dd231cff199bd75bcd3d1031a9ff0a1a82ec1cb
Submitter: Jenkins
Branch:master

commit 2dd231cff199bd75bcd3d1031a9ff0a1a82ec1cb
Author: Lucian Petrut 
Date:   Fri Sep 2 13:39:29 2016 +0300

HyperV: ensure config drives are copied as well during resizes

During cold migration, vhd config drive images are not copied
over, on the wrong assumption that the instance is already
configured and does not need the config drive.

For this reason, migrating instances using vhd config drives
will fail, as there is a check ensuring that the config drive
is present, if required.

This change addresses the issue, removing the check that was
preventing the config drive from being copied.

Change-Id: I8cd42bed4515f4f75c92e595c4d8b847b16058dd
Closes-Bug: #1619602


** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619602

Title:
  Hyper-V: vhd config drive images are not migrated

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  During cold migration, vhd config drive images are not copied over, on
  the wrong assumption that the instance is already configured and does
  not need the config drive.

  There is an explicit check at the following location:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L75-L76

  For this reason, migrating instances using vhd config drives will fail, as 
there is a check ensuring that the config drive is present, if required:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L153-L163

  The Hyper-V driver should not skip moving the config drive image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1619602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624441] [NEW] Heat: Resource CREATE failed: StackValidationFailed

2016-09-16 Thread Stack
Public bug reported:

Hi,

I am getting following error during the Stack creation and launching.
Could you please help me. Thanks.

>> heat stack-show XX

stack_status_reason   | Resource CREATE failed: StackValidationFailed:
  resources.pl_group.resources[3].resources.pl_vm:
  Property error: pl_vm.Properties.networks[1].port:
  Error validating value u'd2bc0630-2dfd-49c2-8994-02418486ad4c': Unable to find port with name 'd2bc0630-2dfd-49c2-8994-02418486ad4c'


>> neutron port-list | grep d2bc0630-2dfd-49c2-8994-02418486ad4c
>> neutron net-list | grep d2bc0630-2dfd-49c2-8994-02418486ad4c


>> compute log:

2016-09-16 09:52:35.215 8210 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('compute0-15', 'compute0-15')
2016-09-16 09:52:35.257 8210 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager Traceback (most recent 
call last):
2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1643, in 
_allocate_network_async
2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 294, in 
allocate_for_instance
2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager raise 
exception.PortInUse(port_id=request.port_id)
2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager PortInUse: Port 
1cc143cc-d943-49e3-b2bf-81b8910283be is still in use.
2016-09-16 09:52:35.257 8210 TRACE nova.compute.manager
2016-09-16 09:52:35.407 8210 INFO nova.virt.libvirt.driver [-] [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] Creating image
2016-09-16 09:52:40.357 8210 ERROR nova.compute.manager [-] [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] Instance failed to spawn
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] Traceback (most recent call last):
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2243, in 
_build_resources
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] yield resources
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in 
_build_and_run_instance
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] block_device_info=block_device_info)
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2615, in 
spawn
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] admin_pass=admin_password)
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3096, in 
_create_image
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] instance, network_info, admin_pass, 
files, suffix)
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2893, in 
_inject_data
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] network_info, 
libvirt_virt_type=CONF.libvirt.virt_type)
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/virt/netutils.py", line 87, in 
get_injected_network_template
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] if not (network_info and template):
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/network/model.py", line 463, in __len__
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] return self._sync_wrapper(fn, *args, 
**kwargs)
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 
"/usr/lib/python2.7/site-packages/nova/network/model.py", line 450, in 
_sync_wrapper
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842] self.wait()
2016-09-16 09:52:40.357 8210 TRACE nova.compute.manager [instance: 
3f8e8168-5a6b-4f4a-985c-216da890f842]   File 

[Yahoo-eng-team] [Bug 1624221] Re: RequiredOptError: value required for option lock_path in group [DEFAULT]

2016-09-16 Thread Hirofumi Ichihara
This is not a neutron bug. Descriptions for the option already appear in
the documentation, for example:
http://docs.openstack.org/liberty/config-reference/content/section_neutron.conf.html

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624221

Title:
  RequiredOptError: value required for option lock_path in group
  [DEFAULT]

Status in neutron:
  Invalid
Status in openstack-manuals:
  New

Bug description:
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
140, in _get_lock_path
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server raise 
cfg.RequiredOptError('lock_path')
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server 
RequiredOptError: value required for option lock_path in group [DEFAULT]

  
  After adding the option to neutron.conf, the problem is resolved, so the
install guide should be updated accordingly.
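  For reference, the fix described amounts to a one-line addition to
neutron.conf; the path below is a commonly used location, not a mandated
value (adjust for your deployment):

```ini
[DEFAULT]
# Required by oslo.concurrency lockutils for external file locks;
# without it, RequiredOptError is raised at runtime as shown above.
lock_path = /var/lib/neutron/tmp
```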

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624221/+subscriptions



[Yahoo-eng-team] [Bug 1624277] Re: nova-scheduler: UnicodeDecodeError in host aggregates handling

2016-09-16 Thread Matt Riedemann
Yeah, the failure is in logging the aggregates (with the unicode
metadata) here:

https://github.com/openstack/nova/blob/fe21d29fa8b02f3e6437f035b0af6c58f8f454bb/nova/scheduler/host_manager.py#L171

This is similar to bug 1514325.

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/mitaka
   Status: New => Confirmed

** Changed in: nova/mitaka
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624277

Title:
  nova-scheduler: UnicodeDecodeError in host aggregates handling

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  nova-scheduler doesn't seem to like when there are non-asci characters
  in the host aggregate objects.

  Steps to reproduce:

  1. Create a host aggregate with some non-ASCII characters in its
  properties, e.g.:

  $ openstack aggregate show test_aggr
  +---+--+
  | Field | Value|
  +---+--+
  | availability_zone | nova |
  | created_at| 2016-09-09T17:31:12.00   |
  | deleted   | False|
  | deleted_at| None |
  | hosts | [u'node-6.domain.tld', u'node-7.domain.tld'] |
  | id| 54   |
  | name  | test_aggr|
  | properties| test_meta='проверка мета'|
  | updated_at| None |
  +---+--+

  2. Start an instance

  Expected result: instance started.
  Actual result: instance creating failed, exception in the nova-scheduler.log 
attached.

  This is reproducible with Mitaka, didn't try master.
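  A minimal sketch (hypothetical, not nova code) of the failure mode and
the safe pattern: on Python 2, interpolating byte-string metadata such as
'проверка мета' into a unicode log message triggers the implicit ascii
codec and raises UnicodeDecodeError; coercing every value to text with an
explicit codec first avoids it. The function name is an assumption for
illustration.

```python
def describe_aggregate(name, metadata):
    """Format an aggregate for logging without implicit ascii decoding."""
    def to_text(value):
        if isinstance(value, bytes):
            return value.decode('utf-8')  # explicit codec, never ascii
        return str(value)

    pairs = ', '.join('%s=%s' % (to_text(k), to_text(v))
                      for k, v in sorted(metadata.items()))
    return '%s: %s' % (to_text(name), pairs)
```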

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624277/+subscriptions



[Yahoo-eng-team] [Bug 1624399] [NEW] cc_disk_setup.py lookup_force_flag lacks entry for swap

2016-09-16 Thread Hannes
Public bug reported:

Hi

I noticed that creating a swap disk fails because the method
lookup_force_flag doesn't support the type "swap". The correct value is
"-f". Would be nice to add it.

Thank you
Hannes

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1624399

Title:
  cc_disk_setup.py lookup_force_flag lacks entry for swap

Status in cloud-init:
  New

Bug description:
  Hi

  I noticed that creating a swap disk fails because the method
  lookup_force_flag doesn't support the type "swap". The correct value
  is "-f". Would be nice to add it.

  Thank you
  Hannes
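  The lookup the reporter describes is roughly of this shape (a sketch,
not a verbatim copy of cc_disk_setup.py): mkswap needs "-f" to force
creation on a device that already carries a signature, so the table needs
a "swap" entry alongside the filesystem types.

```python
def lookup_force_flag(fs_type):
    """Return the force flag for the mkfs/mkswap tool of a given type."""
    flags = {
        'ext': '-F',
        'btrfs': '-f',
        'xfs': '-f',
        'reiserfs': '-f',
        'swap': '-f',  # the missing entry this bug asks for
    }
    return flags.get(fs_type, '')
```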

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1624399/+subscriptions



[Yahoo-eng-team] [Bug 1624392] [NEW] Nova attempting to set MTU on device that does not exist

2016-09-16 Thread Richard Theis
*** This bug is a duplicate of bug 1624383 ***
https://bugs.launchpad.net/bugs/1624383

Public bug reported:

Bug https://bugs.launchpad.net/nova/+bug/1623876 resulted in
https://review.openstack.org/#/c/370681/ which broke networking-ovn (see
http://logs.openstack.org/75/371175/1/check/gate-tempest-dsvm-networking-ovn/7e52927/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-16_11_20_39_627)
because it is attempting to set MTU on a device that doesn't exist.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624392

Title:
  Nova attempting to set MTU on device that does not exist

Status in OpenStack Compute (nova):
  New

Bug description:
  Bug https://bugs.launchpad.net/nova/+bug/1623876 resulted in
  https://review.openstack.org/#/c/370681/ which broke networking-ovn
  (see http://logs.openstack.org/75/371175/1/check/gate-tempest-dsvm-
  networking-
  ovn/7e52927/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-16_11_20_39_627)
  because it is attempting to set MTU on a device that doesn't exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624392/+subscriptions



[Yahoo-eng-team] [Bug 1624383] [NEW] vif plugging fails with ovn when trying to set mtu on qvb device that does not exist

2016-09-16 Thread Matt Riedemann
Public bug reported:

Richard Theis reported the regression here:

https://review.openstack.org/#/c/370681/9/nova/virt/libvirt/vif.py

Shown here:

http://logs.openstack.org/75/371175/1/check/gate-tempest-dsvm-
networking-
ovn/7e52927/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-16_11_20_39_627

2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager 
[req-78de89a8-2053-42d9-899f-04aa2b8f25c7 
tempest-FloatingIPsTestJSON-1913520192 tempest-FloatingIPsTestJSON-1913520192] 
[instance: fa58495c-1f29-4149-9efb-2bbf7624d221] Instance failed to spawn
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] Traceback (most recent call last):
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2078, in _build_resources
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] yield resources
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1920, in 
_build_and_run_instance
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] block_device_info=block_device_info)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2583, in spawn
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] post_xml_callback=gen_confdrive)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4814, in 
_create_domain_and_network
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] self.plug_vifs(instance, network_info)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 684, in plug_vifs
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] self.vif_driver.plug(instance, vif)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 817, in plug
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] self._plug_os_vif(instance, vif_obj, 
vif)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 799, in _plug_os_vif
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] linux_net._set_device_mtu(veth, mtu)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/network/linux_net.py", line 1237, in _set_device_mtu
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] check_exit_code=[0, 2, 254])
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/utils.py", line 295, in execute
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] return 
RootwrapProcessHelper().execute(*cmd, **kwargs)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/opt/stack/new/nova/nova/utils.py", line 178, in execute
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] return processutils.execute(*cmd, 
**kwargs)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
389, in execute
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] cmd=sanitized_cmd)
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] ProcessExecutionError: Unexpected error 
while running command.
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf ip link set qvb00b6a38f-60 mtu 1442
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] Exit code: 1
2016-09-16 11:21:29.140 15810 ERROR nova.compute.manager [instance: 
fa58495c-1f29-4149-9efb-2bbf7624d221] 
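A hypothetical illustration of the defensive fix being discussed: only run
"ip link set <dev> mtu <n>" when the device actually exists, since the
hybrid-plug veth pair (qvb*/qvo*) is never created on an OVN deployment.
The function name and command-runner parameter are assumptions, not nova's
actual API.

```python
import os

def set_device_mtu_if_present(dev, mtu, run_cmd):
    """Set the MTU only if the network device exists; return whether we did."""
    if not os.path.exists('/sys/class/net/%s' % dev):
        return False  # device not plugged (or never will be); skip quietly
    run_cmd(['ip', 'link', 'set', dev, 'mtu', str(mtu)])
    return True
```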

[Yahoo-eng-team] [Bug 1624264] Re: Dashboard take 5-10 secs for OpenStack operations

2016-09-16 Thread Junaid Ali
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624264

Title:
  Dashboard take 5-10 secs for OpenStack operations

Status in OpenStack Dashboard (Horizon):
  New
Status in horizon package in Ubuntu:
  New

Bug description:
  The dashboard takes around 5-10 seconds for OpenStack operations (e.g.
opening the instances page) from the project panel in the admin tenant,
whereas pages opened from the admin panel take around 2-3 seconds.
Similarly, pages opened from any other tenant take 5-10 seconds.
  CLI commands seem to work fine, taking around 1-3.5 seconds.

  Please let me know which logs to share. The neutron, keystone, and nova
  logs don't show any errors suggesting slowness in API calls.

  OpenStack Release: Mitaka
  Horizon version: 9.1.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624264/+subscriptions



[Yahoo-eng-team] [Bug 1624097] Re: Neutron LBaaS CLI quota show includes l7policy and doesn't include member

2016-09-16 Thread Reedip
For neutron, the query has been shared on the ML:
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103832.html

Awaiting a response


** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624097

Title:
  Neutron LBaaS CLI quota show includes l7policy and doesn't include
  member

Status in neutron:
  New
Status in python-openstackclient:
  In Progress

Bug description:
  When running devstack and executing "neutron quota-show" it lists an
  l7 policy quota, but does not show a member quota.  However, the help
  message for "neutron quota-update" includes a member quota, but not an
  l7 policy quota.  The show command should not have the l7 policy
  quota, but should have the member quota.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624097/+subscriptions



[Yahoo-eng-team] [Bug 1624325] [NEW] Port binding fail in single node Mitaka setup

2016-09-16 Thread JABAR ALI
Public bug reported:

root@vm-V:/home/vm#
root@vm-V:/home/vm# neutron port-list
+--+--+---+-+
| id   | name | mac_address   | fixed_ips   
|
+--+--+---+-+
| 74e2dc56-08d1-42ec-9792-23b9457eb920 |  | fa:16:3e:44:3f:b9 | 
{"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", "ip_address": 
"192.168.0.13"} |
| fa45dd01-beee-476a-832c-f5c10f020f61 |  | fa:16:3e:37:8d:6c | 
{"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", "ip_address": 
"192.168.0.10"} |
+--+--+---+-+
root@vm-V:/home/vm# neutron port-show 74e2dc56-08d1-42ec-9792-23b9457eb920
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| binding:host_id   | vm-V  
  |
| binding:profile   | {}
  |
| binding:vif_details   | {}
  |
| binding:vif_type  | binding_failed
  |
| binding:vnic_type | direct
  |
| created_at| 2016-09-12T22:26:47   
  |
| description   |   
  |
| device_id | 8343862a-6c26-4f92-b80c-f77c6d83bf51  
  |
| device_owner  | compute:None  
  |
| extra_dhcp_opts   |   
  |
| fixed_ips | {"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", 
"ip_address": "192.168.0.13"} |
| id| 74e2dc56-08d1-42ec-9792-23b9457eb920  
  |
| mac_address   | fa:16:3e:44:3f:b9 
  |
| name  |   
  |
| network_id| 460b9c5c-119d-414e-a178-b0be69af41a1  
  |
| port_security_enabled | True  
  |
| security_groups   | 70982f3d-0dad-4f52-8315-e844be1f7824  
  |
| status| DOWN  
  |
| tenant_id | 83dc66e9e2554e109bbb13b164d807ce  
  |
| updated_at| 2016-09-15T18:08:20   
  |
+---+-+
root@vm-V:/home/vm# neutron agent-list
+--++--+---+---++---+
| id   | agent_type | host | 
availability_zone | alive | admin_state_up | binary|
+--++--+---+---++---+
| 987b5378-da3a-4508-9c6a-035dbff275bd | Linux bridge agent | vm-V |
   | :-)   | True   | neutron-linuxbridge-agent |
| 9b6e776b-1a61-48f5-a241-cdc61e119d22 | DHCP agent | vm-V | nova   
   | :-)   | True   | neutron-dhcp-agent|
| d6a44bd9-8bd3-41c1-a357-82ccaebdf360 | Metadata agent | vm-V |
   | :-)   | True   | neutron-metadata-agent|

[Yahoo-eng-team] [Bug 1624323] [NEW] Port binding fail in single node Mitaka setup

2016-09-16 Thread JABAR ALI
Public bug reported:

root@vm-V:/home/vm#
root@vm-V:/home/vm# neutron port-list
+--+--+---+-+
| id   | name | mac_address   | fixed_ips   
|
+--+--+---+-+
| 74e2dc56-08d1-42ec-9792-23b9457eb920 |  | fa:16:3e:44:3f:b9 | 
{"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", "ip_address": 
"192.168.0.13"} |
| fa45dd01-beee-476a-832c-f5c10f020f61 |  | fa:16:3e:37:8d:6c | 
{"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", "ip_address": 
"192.168.0.10"} |
+--+--+---+-+
root@vm-V:/home/vm# neutron port-show 74e2dc56-08d1-42ec-9792-23b9457eb920
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| binding:host_id   | vm-V  
  |
| binding:profile   | {}
  |
| binding:vif_details   | {}
  |
| binding:vif_type  | binding_failed
  |
| binding:vnic_type | direct
  |
| created_at| 2016-09-12T22:26:47   
  |
| description   |   
  |
| device_id | 8343862a-6c26-4f92-b80c-f77c6d83bf51  
  |
| device_owner  | compute:None  
  |
| extra_dhcp_opts   |   
  |
| fixed_ips | {"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", 
"ip_address": "192.168.0.13"} |
| id| 74e2dc56-08d1-42ec-9792-23b9457eb920  
  |
| mac_address   | fa:16:3e:44:3f:b9 
  |
| name  |   
  |
| network_id| 460b9c5c-119d-414e-a178-b0be69af41a1  
  |
| port_security_enabled | True  
  |
| security_groups   | 70982f3d-0dad-4f52-8315-e844be1f7824  
  |
| status| DOWN  
  |
| tenant_id | 83dc66e9e2554e109bbb13b164d807ce  
  |
| updated_at| 2016-09-15T18:08:20   
  |
+---+-+
root@vm-V:/home/vm# neutron agent-list
+--++--+---+---++---+
| id   | agent_type | host | 
availability_zone | alive | admin_state_up | binary|
+--++--+---+---++---+
| 987b5378-da3a-4508-9c6a-035dbff275bd | Linux bridge agent | vm-V |
   | :-)   | True   | neutron-linuxbridge-agent |
| 9b6e776b-1a61-48f5-a241-cdc61e119d22 | DHCP agent | vm-V | nova   
   | :-)   | True   | neutron-dhcp-agent|
| d6a44bd9-8bd3-41c1-a357-82ccaebdf360 | Metadata agent | vm-V |
   | :-)   | True   | neutron-metadata-agent|

[Yahoo-eng-team] [Bug 1624324] [NEW] Port binding fail in single node Mitaka setup

2016-09-16 Thread JABAR ALI
Public bug reported:

root@vm-V:/home/vm#
root@vm-V:/home/vm# neutron port-list
+--+--+---+-+
| id   | name | mac_address   | fixed_ips   
|
+--+--+---+-+
| 74e2dc56-08d1-42ec-9792-23b9457eb920 |  | fa:16:3e:44:3f:b9 | 
{"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", "ip_address": 
"192.168.0.13"} |
| fa45dd01-beee-476a-832c-f5c10f020f61 |  | fa:16:3e:37:8d:6c | 
{"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", "ip_address": 
"192.168.0.10"} |
+--+--+---+-+
root@vm-V:/home/vm# neutron port-show 74e2dc56-08d1-42ec-9792-23b9457eb920
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| binding:host_id   | vm-V  
  |
| binding:profile   | {}
  |
| binding:vif_details   | {}
  |
| binding:vif_type  | binding_failed
  |
| binding:vnic_type | direct
  |
| created_at| 2016-09-12T22:26:47   
  |
| description   |   
  |
| device_id | 8343862a-6c26-4f92-b80c-f77c6d83bf51  
  |
| device_owner  | compute:None  
  |
| extra_dhcp_opts   |   
  |
| fixed_ips | {"subnet_id": "9023d118-223b-4cea-aa0c-72e28ed7c266", 
"ip_address": "192.168.0.13"} |
| id| 74e2dc56-08d1-42ec-9792-23b9457eb920  
  |
| mac_address   | fa:16:3e:44:3f:b9 
  |
| name  |   
  |
| network_id| 460b9c5c-119d-414e-a178-b0be69af41a1  
  |
| port_security_enabled | True  
  |
| security_groups   | 70982f3d-0dad-4f52-8315-e844be1f7824  
  |
| status| DOWN  
  |
| tenant_id | 83dc66e9e2554e109bbb13b164d807ce  
  |
| updated_at| 2016-09-15T18:08:20   
  |
+---+-+
root@vm-V:/home/vm# neutron agent-list
+--++--+---+---++---+
| id   | agent_type | host | 
availability_zone | alive | admin_state_up | binary|
+--++--+---+---++---+
| 987b5378-da3a-4508-9c6a-035dbff275bd | Linux bridge agent | vm-V |
   | :-)   | True   | neutron-linuxbridge-agent |
| 9b6e776b-1a61-48f5-a241-cdc61e119d22 | DHCP agent | vm-V | nova   
   | :-)   | True   | neutron-dhcp-agent|
| d6a44bd9-8bd3-41c1-a357-82ccaebdf360 | Metadata agent | vm-V |
   | :-)   | True   | neutron-metadata-agent|

[Yahoo-eng-team] [Bug 1623732] Re: test_network_basic_ops to fail with SSHTimeout

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371085
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a60c2de881a806e041190732d4def209c583aac1
Submitter: Jenkins
Branch:master

commit a60c2de881a806e041190732d4def209c583aac1
Author: Armando Migliaccio 
Date:   Thu Sep 15 13:06:06 2016 -0700

Add metadata proxy router_update callback handler

This patch implements the callback handler for router update events;
This checks if the proxy process monitor is active, and if not, starts
the proxy.

This is particularly important if the metadata driver fails to receive
a create notification, which in turn generates an update
event because of a resync step.

Closes-bug: #1623732

Change-Id: I296a37daff1e5f018ae11eb8661c77ad346b8075


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623732

Title:
  test_network_basic_ops to fail with SSHTimeout

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/63/370363/3/gate/gate-grenade-dsvm-neutron-
  ubuntu-trusty/5581f78/

  http://logs.openstack.org/09/368709/3/gate/gate-grenade-dsvm-neutron-
  ubuntu-trusty/d1195ea/

  Allegedly errors in network startup:

  Initializing random number generator... done.
  Starting acpid: OK
  cirros-ds 'local' up at 5.68
  no results found for mode=local. up 5.99. searched: nocloud configdrive ec2
  Starting network...
  udhcpc (v1.20.1) started
  Sending discover...
  Sending select for 10.1.0.10...
  Lease of 10.1.0.10 obtained, lease time 86400
  route: SIOCADDRT: File exists
  WARN: failed: route add -net "0.0.0.0/0" gw "10.1.0.1"
  cirros-ds 'net' up at 7.01

  Leads to usual:

  tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.9 via
  SSH timed out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623732/+subscriptions



[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/338179
Committed: 
https://git.openstack.org/cgit/openstack/networking-calico/commit/?id=8599fe0af907bc5aeff29c994bafdf87ef558b71
Submitter: Jenkins
Branch:master

commit 8599fe0af907bc5aeff29c994bafdf87ef558b71
Author: SongmingYan 
Date:   Wed Jul 6 06:48:10 2016 -0400

Fix order of arguments in assertEqual

Some tests used incorrect order assertEqual(observed, expected).
The correct order expected by testtools is
assertEqual(expected, observed).

Closes bug: #1259292

Change-Id: I39aa28bea07a3cc6dcab472d526b7d3436d5270d


** Changed in: networking-calico
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Astara:
  Fix Released
Status in Bandit:
  In Progress
Status in Barbican:
  In Progress
Status in Blazar:
  New
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in Freezer:
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-infoblox:
  In Progress
Status in networking-l2gw:
  In Progress
Status in networking-sfc:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  Fix Released
Status in PBR:
  Fix Released
Status in pycadf:
  New
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in python-glanceclient:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Rally:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in sqlalchemy-migrate:
  New
Status in SWIFT:
  New
Status in tacker:
  In Progress
Status in tempest:
  Invalid
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
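
  The confusion is easy to demonstrate with plain unittest assertions;
  a minimal sketch of why the order matters in the failure report:

```python
import unittest

# Since Python 3, a bare TestCase can be instantiated just to borrow
# its assertion helpers.
tc = unittest.TestCase()

# Correct order per the testtools convention: assertEqual(expected, observed).
tc.assertEqual(4, 2 + 2)

# A failure renders as "<first> != <second>". Readers assume the first
# value is the expected one, so swapped arguments mislead whoever reads
# the failing test's output.
try:
    tc.assertEqual(5, 2 + 2)  # here 5 plays the role of the expected value
except AssertionError as exc:
    message = str(exc)

print(message)  # 5 != 4
```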

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381961] Re: Keystone API GET 5000/v3 returns wrong endpoint URL in response body

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/368969
Committed: 
https://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=09f569b6a60e6426102212e650b490ea0dd5084f
Submitter: Jenkins
Branch:master

commit 09f569b6a60e6426102212e650b490ea0dd5084f
Author: Adam Young 
Date:   Mon Sep 12 12:43:39 2016 -0400

Unset Keystone public_endpoint

The keystone public_endpoint value should be deduced from the calling
request and not hardcoded, or it makes network isolation impossible.

Change-Id: Ide6a65aa9393cb84591b0015ec5966cc01ffbcf8
Closes-Bug: 1381961


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1381961

Title:
  Keystone API GET 5000/v3 returns wrong endpoint URL in response body

Status in OpenStack Identity (keystone):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  When I invoked a GET request to the public endpoint of Keystone, I found
the admin endpoint URL in the response body; I assume it should be the public
endpoint URL:
  GET https://192.168.101.10:5000/v3

  {
"version": {
  "status": "stable",
  "updated": "2013-03-06T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v3+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v3+xml"
}
  ],
  "id": "v3.0",
  "links": [
{
  "href": "https://172.20.14.10:35357/v3/",
  "rel": "self"
}
  ]
}
  }

  ===
  Btw, I can get the right URL for public endpoint in the response body of the 
versionless API call:
  GET https://192.168.101.10:5000

  {
"versions": {
  "values": [
{
  "status": "stable",
  "updated": "2013-03-06T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v3+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v3+xml"
}
  ],
  "id": "v3.0",
  "links": [
{
  "href": "https://192.168.101.10:5000/v3/",
  "rel": "self"
}
  ]
},
{
  "status": "stable",
  "updated": "2014-04-17T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v2.0+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v2.0+xml"
}
  ],
  "id": "v2.0",
  "links": [
{
  "href": "https://192.168.101.10:5000/v2.0/",
  "rel": "self"
},
{
  "href": 
"http://docs.openstack.org/api/openstack-identity-service/2.0/content/",
  "type": "text/html",
  "rel": "describedby"
},
{
  "href": 
"http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf",
  "type": "application/pdf",
  "rel": "describedby"
}
  ]
}
  ]
}
  }
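
  The mismatch can be checked mechanically from the two responses shown
  above; a minimal sketch (the documents are trimmed to the self links):

```python
import json
from urllib.parse import urlsplit

# Trimmed copy of the versioned document shown above (only the self link).
versioned = json.loads(
    '{"version": {"links": [{"href": "https://172.20.14.10:35357/v3/",'
    ' "rel": "self"}]}}')

requested_host = "192.168.101.10:5000"  # host:port the client actually called

self_href = next(link["href"] for link in versioned["version"]["links"]
                 if link["rel"] == "self")
advertised_host = urlsplit(self_href).netloc

# The bug: the versioned document advertises the admin endpoint
# (172.20.14.10:35357) rather than the public host the client used.
print(advertised_host, advertised_host == requested_host)
```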

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1381961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624197] Re: i18n: Cannot control word order of message in Delete Dialog

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371268
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=cf5650f7f18ca3bb36a6c8c2c3802532b1bc882f
Submitter: Jenkins
Branch:master

commit cf5650f7f18ca3bb36a6c8c2c3802532b1bc882f
Author: Akihiro Motoki 
Date:   Fri Sep 16 04:26:49 2016 +

Allow translator to control word order in delete confirm dialog

Change-Id: I82c27d21cb000602e1cf76313c2cdec680c45394
Closes-Bug: #1624197


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624197

Title:
  i18n: Cannot control word order of message in Delete Dialog

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When deleting a resource, the confirm form is displayed. In the form,
  we have a message 'You have selected "net1".' but "You have selected"
  and a resource name is concatenated in the django template. In some
  language, an object is placed before a verb, but translators cannot
  control the word order.
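
  The fix amounts to interpolating the name into one translatable string
  instead of concatenating fragments; a minimal sketch, where `translate`
  stands in for Django's gettext and the Japanese catalog entry is purely
  illustrative:

```python
# A single translatable template with a named placeholder lets each
# locale reorder the sentence freely.
def confirm_message(name, translate=lambda s: s):
    template = translate('You have selected "%(name)s".')
    return template % {"name": name}

# Illustrative catalog entry: the placeholder moves before the verb,
# which the old concatenation made impossible.
catalog = {'You have selected "%(name)s".': '"%(name)s" を選択しました。'}

print(confirm_message("net1"))
print(confirm_message("net1", translate=catalog.get))
```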

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624277] [NEW] nova-scheduler: UnicodeDecodeError in host aggregates handling

2016-09-16 Thread Roman Bogorodskiy
Public bug reported:

nova-scheduler doesn't seem to like when there are non-ASCII characters
in the host aggregate objects.

Steps to reproduce:

1. Create a host aggregate with some non-ASCII chars in properties, e.g.:

$ openstack aggregate show test_aggr
+---+--+
| Field | Value|
+---+--+
| availability_zone | nova |
| created_at| 2016-09-09T17:31:12.00   |
| deleted   | False|
| deleted_at| None |
| hosts | [u'node-6.domain.tld', u'node-7.domain.tld'] |
| id| 54   |
| name  | test_aggr|
| properties| test_meta='проверка мета'|
| updated_at| None |
+---+--+

2. Start an instance

Expected result: instance started.
Actual result: instance creating failed, exception in the nova-scheduler.log 
attached.

This is reproducible with Mitaka, didn't try master.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova_scheduler_unicode_error.txt"
   
https://bugs.launchpad.net/bugs/1624277/+attachment/4741874/+files/nova_scheduler_unicode_error.txt

** Description changed:

  nova-scheduler doesn't seem to like when there are non-asci characters
  in the host aggregate objects.
  
  Steps to reproduce:
  
  1. Create a host aggregate with some non-asci chars in properties, e.g.:
  
  $ openstack aggregate show test_aggr
  +---+--+
  | Field | Value|
  +---+--+
  | availability_zone | nova |
  | created_at| 2016-09-09T17:31:12.00   |
  | deleted   | False|
  | deleted_at| None |
  | hosts | [u'node-6.domain.tld', u'node-7.domain.tld'] |
  | id| 54   |
  | name  | test_aggr|
  | properties| test_meta='проверка мета'|
  | updated_at| None |
  +---+--+
  
  2. Start an instance
  
  Expected result: instance started.
  Actual result: instance creating failed, exception in the nova-scheduler.log 
attached.
+ 
+ This is reproducible with Mitaka, didn't try master.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624277

Title:
  nova-scheduler: UnicodeDecodeError in host aggregates handling

Status in OpenStack Compute (nova):
  New

Bug description:
  nova-scheduler doesn't seem to like when there are non-ASCII characters
  in the host aggregate objects.

  Steps to reproduce:

  1. Create a host aggregate with some non-ASCII chars in properties,
  e.g.:

  $ openstack aggregate show test_aggr
  +---+--+
  | Field | Value|
  +---+--+
  | availability_zone | nova |
  | created_at| 2016-09-09T17:31:12.00   |
  | deleted   | False|
  | deleted_at| None |
  | hosts | [u'node-6.domain.tld', u'node-7.domain.tld'] |
  | id| 54   |
  | name  | test_aggr|
  | properties| test_meta='проверка мета'|
  | updated_at| None |
  +---+--+

  2. Start an instance

  Expected result: instance started.
  Actual result: instance creating failed, exception in the nova-scheduler.log 
attached.

  This is reproducible with Mitaka, didn't try master.
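
  The failure pattern is Python 2's implicit ASCII decode when byte
  strings meet unicode; a minimal Python 3 sketch of the safe handling
  (the property value is the one from the aggregate above):

```python
# Under Python 2, combining UTF-8 bytes with a unicode format string
# triggers an implicit ASCII decode -- the UnicodeDecodeError in the
# attached scheduler log. Decoding explicitly sidesteps it.
raw = "test_meta='проверка мета'".encode("utf-8")  # bytes as stored/transferred

decoded = raw.decode("utf-8")  # explicit decode instead of implicit ASCII
summary = "aggregate properties: {}".format(decoded)
print(summary)
```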

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624277/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1624276] [NEW] Metering - metering-label-rule doesn't need 'tenant_id'

2016-09-16 Thread Yushiro FURUKAWA
Public bug reported:

In the current metering-label-rule, 'tenant_id' is included in
RESOURCE_ATTRIBUTE_MAP[1].
However, even if we specify 'tenant_id' via the CLI or REST API, it isn't
stored in the DB[2] and we cannot see it in the result.  Therefore, I think
the following fixes are necessary:

1. 'tenant_id' definition should be removed at extensions/metering.py.
2. Remove 'tenant_id' option from CLI(meter-label-rule)

[CLI]
(neutron) meter-label-rule-create my-label 192.168.2.0/24 --tenant-id 
8ae2759224e94ed0a66011315a32d07c
Created a new metering_label_rule:
+---+--+
| Field | Value|
+---+--+
| direction | ingress  |
| excluded  | False|
| id| 4ef89142-b216-42c7-9455-abc3a8fd1d3d |
| metering_label_id | 543ce87e-2190-46d5-9d79-3fd113681372 |
| remote_ip_prefix  | 192.168.2.0/24   |
+---+--+

[REST-API]
$ source devstack/openrc admin admin
$ export TOKEN=`openstack token issue | grep  ' id ' | get_field 2`
$ curl -s -X POST -H "accept:application/json" -H "content-type: 
application/json" -d '{"metering_label_rule": {"remote_ip_prefix": 
"172.16.3.0/24", "metering_label_id": "d34c612f-3e02-433b-ba9f-f13b2ac6511d", 
"direction": "ingress", "excluded": false, "tenant_id": 
"8ae2759224e94ed0a66011315a32d07c"}}'  -H "x-auth-token:$TOKEN" 
localhost:9696/v2.0/metering/metering-label-rules | jq "."

{
  "metering_label_rule": {
"remote_ip_prefix": "172.16.3.0/24",
"direction": "ingress",
"metering_label_id": "d34c612f-3e02-433b-ba9f-f13b2ac6511d",
"id": "d300512f-db84-4680-92cd-dafb91bec33b",
"excluded": false
  }
}

[1] 
https://github.com/openstack/neutron/blob/master/neutron/extensions/metering.py#L79
[2] 
https://github.com/openstack/neutron/blob/master/neutron/db/metering/metering_db.py#L171
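
Proposed fix 1 amounts to dropping 'tenant_id' from the rule's attribute
map so the API stops advertising a field it never persists. An illustrative
sketch (not the real neutron RESOURCE_ATTRIBUTE_MAP):

```python
# Stand-in for the metering-label-rule attribute map; 'tenant_id' is
# currently accepted in POST bodies but silently discarded before the
# rule reaches the DB.
RULE_ATTRS = {
    "remote_ip_prefix": {"allow_post": True},
    "direction": {"allow_post": True},
    "excluded": {"allow_post": True},
    "metering_label_id": {"allow_post": True},
    "tenant_id": {"allow_post": True},  # accepted but never stored
}

# After the fix, the attribute map simply no longer lists the field.
cleaned = {k: v for k, v in RULE_ATTRS.items() if k != "tenant_id"}
print(sorted(cleaned))
```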

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: metering

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624276

Title:
  Metering - metering-label-rule doesn't need 'tenant_id'

Status in neutron:
  New

Bug description:
  In the current metering-label-rule, 'tenant_id' is included in
RESOURCE_ATTRIBUTE_MAP[1].
  However, even if we specify 'tenant_id' via the CLI or REST API, it isn't
  stored in the DB[2] and we cannot see it in the result.  Therefore, I think
  the following fixes are necessary:

  1. 'tenant_id' definition should be removed at extensions/metering.py.
  2. Remove 'tenant_id' option from CLI(meter-label-rule)

  [CLI]
  (neutron) meter-label-rule-create my-label 192.168.2.0/24 --tenant-id 
8ae2759224e94ed0a66011315a32d07c
  Created a new metering_label_rule:
  +---+--+
  | Field | Value|
  +---+--+
  | direction | ingress  |
  | excluded  | False|
  | id| 4ef89142-b216-42c7-9455-abc3a8fd1d3d |
  | metering_label_id | 543ce87e-2190-46d5-9d79-3fd113681372 |
  | remote_ip_prefix  | 192.168.2.0/24   |
  +---+--+

  [REST-API]
  $ source devstack/openrc admin admin
  $ export TOKEN=`openstack token issue | grep  ' id ' | get_field 2`
  $ curl -s -X POST -H "accept:application/json" -H "content-type: 
application/json" -d '{"metering_label_rule": {"remote_ip_prefix": 
"172.16.3.0/24", "metering_label_id": "d34c612f-3e02-433b-ba9f-f13b2ac6511d", 
"direction": "ingress", "excluded": false, "tenant_id": 
"8ae2759224e94ed0a66011315a32d07c"}}'  -H "x-auth-token:$TOKEN" 
localhost:9696/v2.0/metering/metering-label-rules | jq "."

  {
"metering_label_rule": {
  "remote_ip_prefix": "172.16.3.0/24",
  "direction": "ingress",
  "metering_label_id": "d34c612f-3e02-433b-ba9f-f13b2ac6511d",
  "id": "d300512f-db84-4680-92cd-dafb91bec33b",
  "excluded": false
}
  }

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/extensions/metering.py#L79
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/db/metering/metering_db.py#L171

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624269] [NEW] Metering - missing 'tenant_id' for metering-label as request body

2016-09-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In http://developer.openstack.org/api-ref/networking/v2/?expanded=show-
metering-label-rule-details-detail,create-metering-label-detail,

'tenant_id' is missing as a request parameter.

(neutron) meter-label-create my-label --tenant-id 
8ae2759224e94ed0a66011315a32d07c
Created a new metering_label:
+-+--+
| Field   | Value|
+-+--+
| description |  |
| id  | 543ce87e-2190-46d5-9d79-3fd113681372 |
| name| my-label |
| project_id  | 8ae2759224e94ed0a66011315a32d07c |
| shared  | False|
| tenant_id   | 8ae2759224e94ed0a66011315a32d07c |
+-+--+

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: metering
-- 
Metering - missing  'tenant_id' for metering-label as request body
https://bugs.launchpad.net/bugs/1624269
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623876] Re: nova is not setting the MTU provided by Neutron

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370667
Committed: 
https://git.openstack.org/cgit/openstack/os-vif/commit/?id=f3130fe8e23fb07532bb3d740fffaf07481c1e70
Submitter: Jenkins
Branch:master

commit f3130fe8e23fb07532bb3d740fffaf07481c1e70
Author: Kevin Benton 
Date:   Wed Sep 14 17:41:31 2016 -0700

Add MTU to Network model and use it in plugging

This adds an MTU field to the network model and
has the vif_plug_ovs and vif_plug_linux_bridge drivers
check for it before referencing the global config variable.

Closes-Bug: #1623876
Change-Id: I327c901a285bca23560f49a921a5d030f7f71cad


** Changed in: os-vif
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623876

Title:
  nova is not setting the MTU provided by Neutron

Status in OpenStack Compute (nova):
  Fix Released
Status in os-vif:
  Fix Released

Bug description:
  Spotted in gate grenade job. We can see neutron MTU is 1450 but the mtu set 
calls in privsep use 1500.
  
http://logs.openstack.org/56/369956/3/gate/gate-grenade-dsvm-neutron-ubuntu-trusty/83daad8/logs/new/screen-n-cpu.txt.gz#_2016-09-15_01_16_57_512
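
  The fix makes the plug drivers prefer the per-network MTU that Neutron
  reports (1450 here) over the global config default; a minimal sketch of
  that precedence (names are illustrative, not the actual os-vif API):

```python
# Stand-in for the global config option the drivers used to read
# unconditionally.
GLOBAL_NETWORK_DEVICE_MTU = 1500

def effective_mtu(network_mtu):
    # Per-network value wins; fall back to the global default only when
    # the Network model carries no MTU.
    return network_mtu if network_mtu else GLOBAL_NETWORK_DEVICE_MTU

print(effective_mtu(1450))  # 1450: Neutron's per-network value wins
print(effective_mtu(None))  # 1500: fall back to the global default
```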

  Relevant log snippet:

  2016-09-15 01:16:57.512 25573 DEBUG nova.network.os_vif_util 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Converting VIF {"profile": {}, 
"ovs_interfaceid": "8dfdfd9b-da9d-4215-abbd-4dffdc48494b", 
"preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": 
[{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], 
"address": "10.1.0.9"}], "version": 4, "meta": {}, "dns": [], "routes": [], 
"cidr": "10.1.0.0/28", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "10.1.0.1"}}], "meta": {"injected": false, "tenant_id": 
"563ca55619b1402ebf0c792ec604a774", "mtu": 1450}, "id": 
"6e1f0d14-a238-4da9-a2d5-659a0f28479c", "label": 
"tempest-AttachInterfacesTestJSON-32449395-network"}, "devname": 
"tap8dfdfd9b-da", "vnic_type": "normal", "qbh_params": null, "meta": {}, 
"details": {"port_filter": true, "ovs_hybrid_plug": true}, "address": 
"fa:16:3e:38:52:12", "active": false, "type": "ovs", "id": 
"8dfdfd9b-da9d-4215-abbd-4dffdc48494b", "qbg_params": 
null} nova_to_osvif_vif /opt/stack/new/nova/nova/network/os_vif_util.py:362
  2016-09-15 01:16:57.513 25573 DEBUG nova.network.os_vif_util 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Converted object 
VIFBridge(active=False,address=fa:16:3e:38:52:12,bridge_name='qbr8dfdfd9b-da',has_traffic_filtering=True,id=8dfdfd9b-da9d-4215-abbd-4dffdc48494b,network=Network(6e1f0d14-a238-4da9-a2d5-659a0f28479c),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap8dfdfd9b-da')
 nova_to_osvif_vif /opt/stack/new/nova/nova/network/os_vif_util.py:374
  2016-09-15 01:16:57.514 25573 DEBUG os_vif 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Plugging vif 
VIFBridge(active=False,address=fa:16:3e:38:52:12,bridge_name='qbr8dfdfd9b-da',has_traffic_filtering=True,id=8dfdfd9b-da9d-4215-abbd-4dffdc48494b,network=Network(6e1f0d14-a238-4da9-a2d5-659a0f28479c),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap8dfdfd9b-da')
 plug /usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:76
  2016-09-15 01:16:57.515 25573 DEBUG oslo.privsep.daemon [-] privsep: 
request[140021949493072]: (3, 'vif_plug_ovs.linux_net.ensure_bridge', 
(u'qbr8dfdfd9b-da',), {}) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2016-09-15 01:16:57.515 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): brctl addbr qbr8dfdfd9b-da out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2016-09-15 01:16:57.517 25573 DEBUG neutronclient.v2_0.client 
[req-6754757c-066c-488a-bc16-6bd451c28cdc 
tempest-ServerActionsTestJSON-1197364963 
tempest-ServerActionsTestJSON-1197364963] GET call to neutron for 
http://127.0.0.1:9696/v2.0/subnets.json?id=2b899e3c-17dc-478a-bd39-91132bb057ab 
used request id req-e82dd927-a0f8-48ba-bbbc-56724a10a29d _append_request_id 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:127
  2016-09-15 01:16:57.521 25573 DEBUG oslo.privsep.daemon [-] CMD "brctl addbr 
qbr8dfdfd9b-da" returned: 0 in 0.005s out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2016-09-15 01:16:57.521 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): brctl setfd qbr8dfdfd9b-da 0 out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2016-09-15 01:16:57.524 25573 DEBUG oslo.privsep.daemon [-] CMD "brctl 

[Yahoo-eng-team] [Bug 1624270] [NEW] openstack error 'oslo_config.cfg.NoSuchOptError'

2016-09-16 Thread arun
Public bug reported:


[root@controller1 ~]#
[root@controller1 ~]# openstack server create --flavor m1.nano --image cirros 
--nic net-id=b002f05b-5342-4a67-9c93-8a72158cab60 --security-group default 
--key-name mykey provider-instance
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
<class 'oslo_config.cfg.NoSuchOptError'> (HTTP 500) (Request-ID: 
req-b06491f6-df6d-4d75-82af-546fa23790a1)
[root@controller1 ~]#
[root@controller1 ~]#
[root@controller1 ~]# uname -a
Linux controller1.arundell.com 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 
11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@controller1 ~]#

[root@controller1 nova]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@controller1 nova]#


Log file : /var/log/nova/nova-api.log logs pasted below 
==

2016-09-16 08:29:21.089 11479 INFO nova.api.openstack.wsgi 
[req-b92dbd45-5130-45d1-bc2c-003b2791fc38 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf4d77bb497d72d6b397 - - -] HTTP exception thrown: Image not found.
2016-09-16 08:29:21.090 11479 INFO nova.osapi_compute.wsgi.server 
[req-b92dbd45-5130-45d1-bc2c-003b2791fc38 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images/cirros HTTP/1.1" status: 404 len: 
351 time: 0.3027530
2016-09-16 08:29:21.159 11479 INFO nova.osapi_compute.wsgi.server 
[req-af640c20-e4cb-4bc8-ba18-a425db112776 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images HTTP/1.1" status: 200 len: 770 
time: 0.0645511
2016-09-16 08:29:21.209 11479 INFO nova.osapi_compute.wsgi.server 
[req-9417d9b1-73ef-41dc-b67f-fabe60d2406f 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/images/e5e3e4d7-c74a-4e97-80f3-06fdc014a079
 HTTP/1.1" status: 200 len: 95 time: 0.0442920
2016-09-16 08:29:21.240 11479 INFO nova.api.openstack.wsgi 
[req-dd435c55-c360-4dee-bcb3-8902576848b0 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf4d77bb497d72d6b397 - - -] HTTP exception thrown: Flavor m1.nano 
could not be found.
2016-09-16 08:29:21.241 11479 INFO nova.osapi_compute.wsgi.server 
[req-dd435c55-c360-4dee-bcb3-8902576848b0 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors/m1.nano HTTP/1.1" status: 404 
len: 369 time: 0.0274239

2016-09-16 08:29:21.278 11479 INFO nova.osapi_compute.wsgi.server 
[req-eb65e696-ee36-475a-9c24-a7539503e14d 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors HTTP/1.1" status: 200 len: 1740 
time: 0.0316298
2016-09-16 08:29:21.324 11479 INFO nova.osapi_compute.wsgi.server 
[req-23b1d340-4aa7-482b-a22c-c2c7e7157990 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] 127.0.0.1 "GET 
/v2.1/536d39fefedf4d77bb48e97d72d6b397/flavors/0 HTTP/1.1" status: 200 len: 689 
time: 0.0391679
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions 
[req-b06491f6-df6d-4d75-82af-546fa23790a1 967fbd1d2e3441a687e84f0cca6b05c4 
536d39fefedf77bb48e97d72d6b397 - - -] Unexpected exception in API method
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
6, in create
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2016-09-16 08:29:21.581 11479 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)

[Yahoo-eng-team] [Bug 1622690] Re: Potential XSS in image create modal or angular table

2016-09-16 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1622690

Title:
  Potential XSS in image create modal or angular table

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  The Image Create modal allows you to create an image sending unencoded
  HTML and JavaScript. This could lead to a potential XSS attack

  Steps to reproduce:

  1. Go to project>images
  2. Click on "Create image"
  3. In the "Image Name" input enter some HTML code or script code (e.g. 
<b>This is bad</b>, <script>alert('This is bad');</script>)
  4. Fill in other required fields
  5. Click on 'Create Image'

  Expected Result:
  The image is created but the name is safely encoded and it's shown in the 
table as it was written

  Actual Result:
  The image name is not encoded and therefore is being rendered as HTML by the 
browser.
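
  The expected behaviour corresponds to HTML-escaping the name before it
  reaches the table; a minimal sketch with the stdlib (Horizon/Angular use
  their own sanitization, so this only illustrates the principle):

```python
import html

# Payload from the reproduction steps above.
name = "<script>alert('This is bad');</script>"

# Escaping before rendering turns markup into inert text, so the table
# shows exactly what the user typed instead of executing it.
safe = html.escape(name)
print(safe)
```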

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1622690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603307] Re: horizon plugin gives the KeyError

2016-09-16 Thread Rob Cresswell
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1603307

Title:
  horizon plugin gives the KeyError

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Murano:
  Invalid

Bug description:
  horizon master with murano-dashboard: opening the murano-dashboard gives
the error:
File 
"/opt/openstack/horizon/.tox/venv/lib/python2.7/site-packages/django/dispatch/dispatcher.py",
 line 189, in send
  response = receiver(signal=self, sender=sender, **named)
File "/opt/openstack/horizon/horizon/templatetags/angular.py", line 36, in 
update_angular_template_hash
  theme = context['THEME']  # current theme being compressed
  KeyError: 'THEME'
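
  The traceback indexes the template context unconditionally; a guarded
  lookup with a fallback theme avoids the hard KeyError (a sketch of the
  defensive pattern, not the actual Horizon patch):

```python
# Offline-compression context that, as in the traceback above, lacks
# the 'THEME' key when rendered through a plugin's pipeline.
context = {}

# dict.get never raises; 'default' here is an assumed fallback name.
theme = context.get("THEME", "default")
print(theme)
```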

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624269] [NEW] Metering - missing 'tenant_id' for metering-label as request body

2016-09-16 Thread Yushiro FURUKAWA
Public bug reported:

In http://developer.openstack.org/api-ref/networking/v2/?expanded=show-
metering-label-rule-details-detail,create-metering-label-detail,

'tenant_id' is missing as a request parameter.

(neutron) meter-label-create my-label --tenant-id 
8ae2759224e94ed0a66011315a32d07c
Created a new metering_label:
+-+--+
| Field   | Value|
+-+--+
| description |  |
| id  | 543ce87e-2190-46d5-9d79-3fd113681372 |
| name| my-label |
| project_id  | 8ae2759224e94ed0a66011315a32d07c |
| shared  | False|
| tenant_id   | 8ae2759224e94ed0a66011315a32d07c |
+-+--+

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: metering

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624269

Title:
  Metering - missing  'tenant_id' for metering-label as request body

Status in neutron:
  New

Bug description:
  In http://developer.openstack.org/api-ref/networking/v2/?expanded
  =show-metering-label-rule-details-detail,create-metering-label-detail,

  'tenant_id' is missing as a request parameter.

  (neutron) meter-label-create my-label --tenant-id 
8ae2759224e94ed0a66011315a32d07c
  Created a new metering_label:
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | 543ce87e-2190-46d5-9d79-3fd113681372 |
  | name| my-label |
  | project_id  | 8ae2759224e94ed0a66011315a32d07c |
  | shared  | False|
  | tenant_id   | 8ae2759224e94ed0a66011315a32d07c |
  +-+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613542] Re: tempest.conf doesn't contain $project in [service_available] section

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/357766
Committed: 
https://git.openstack.org/cgit/openstack/ec2-api/commit/?id=eff4f621f648c1bc84949ac52a1346844b9b6c7c
Submitter: Jenkins
Branch:master

commit eff4f621f648c1bc84949ac52a1346844b9b6c7c
Author: Thomas Bechtold 
Date:   Fri Aug 19 12:51:01 2016 +0200

Fix tempest.conf generation

[service_available] isn't being generated. This patch fixes it.
It also introduces a switch to disable the ec2api tempest tests via
the [service_available]ec2api parameter.

Closes-Bug: #1613542
Change-Id: I79e2bc26f86b3be6a45a2ee8ea33c50977d44838


** Changed in: ec2-api
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1613542

Title:
  tempest.conf doesn't contain $project in [service_available] section

Status in Aodh:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Gnocchi:
  Fix Committed
Status in Ironic:
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in Magnum:
  Fix Released
Status in neutron:
  New
Status in OpenStack Data Processing ("Sahara") sahara-tests:
  Fix Released
Status in senlin:
  In Progress
Status in Vitrage:
  In Progress
Status in vmware-nsx:
  Fix Released

Bug description:
  When generating the tempest conf, the tempest plugins need to register the 
config options.
  But for the [service_available] section, ceilometer (and the other mentioned 
projects) doesn't register any value so it's missing in the tempest sample 
config.

  Steps to reproduce:

  $ tox -egenconfig
  $ source .tox/genconfig/bin/activate
  $ oslo-config-generator --config-file 
.tox/genconfig/lib/python2.7/site-packages/tempest/cmd/config-generator.tempest.conf
 --output-file tempest.conf.sample

  Now check the [service_available] section from tempest.conf.sample
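
  With the fix applied, the generated sample should gain a toggle along
  these lines (the option name comes from the commit message's
  `[service_available]ec2api` parameter; the default shown is assumed):

```ini
[service_available]
# Whether the ec2-api tempest tests should run (switch added by the fix;
# the default value here is an assumption, not taken from the patch)
ec2api = true
```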

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1613542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622017] Re: OVS agent is not removing VLAN tags before tunnels when configured with native OF interface

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/368553
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4361f7543f984cf5f09c0c7070ac6b0f22f3b6b1
Submitter: Jenkins
Branch:master

commit 4361f7543f984cf5f09c0c7070ac6b0f22f3b6b1
Author: IWAMOTO Toshihiro 
Date:   Mon Sep 12 14:36:18 2016 +0900

of_interface: Use vlan_tci instead of vlan_vid

To pop VLAN tags in learn action generated flows, vlan_tci should
be used instead of vlan_vid.  Otherwise, VLAN tags with VID=0 are
left.

Change-Id: Ie38ab860424f6e2e2448abac82c428dae3a8a544
Closes-bug: #1622017
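The Open vSwitch detail behind the fix (a sketch, not the exact neutron learn
spec): loading 0 into the full vlan_tci field strips the 802.1q tag entirely,
while writing 0 into only the vlan_vid bits leaves a tag present with VID=0,
which is exactly the 4-byte "vlan 0" header described below.

```text
# Inside a learn() action's field specs (ryu-style field names):
vlan_vid = 0   ->  802.1q tag still present, VID=0 (the bug)
vlan_tci = 0   ->  whole TCI cleared, tag actually popped (the fix)
```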


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622017

Title:
  OVS agent is not removing VLAN tags before tunnels when configured
  with native OF interface

Status in neutron:
  Fix Released

Bug description:
  While investigating an MTU issue, an unaccounted-for overhead of 4 bytes
  was discovered. A spurious 802.1q header was observed using tcpdump
  when attempting to connect to a guest via floating IP. The tenant
  network type is VXLAN, and the VXLAN endpoints themselves are on a
  VLAN. This issue effectively breaks communication with guests via
  floating IP for some system configurations.

  The test system is configured with a default global_physnet_mtu of
  1500, and inspection of the router namespace confirms that the tenant
  network's router interface has been automatically configured with
  an MTU of 1450. Ping was used to test, e.g. ping -M do -s 1422
  192.0.2.58 (1422 is the maximum that should fit in the 1450 MTU
  without fragmentation).

  With the system configured as described, "ping -s 1420 "
  fails.

  tcpdump on the controller reveals:

  root@overcloud-controller-0 heat-admin]# tcpdump -vvv  -e -i any icmp
  tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 
65535 bytes
  18:32:49.163223   P 52:54:00:01:09:3c (oui Unknown) ethertype IPv4 (0x0800), 
length 1464: (tos 0x0, ttl 64, id 37535, offset 0, flags [DF], proto ICMP (1), 
length 1448)
  192.0.2.1 > 192.0.2.58: ICMP echo request, id 16083, seq 1, length 1428
  18:32:49.163340  In 00:00:00:00:00:00 (oui Ethernet) ethertype IPv4 (0x0800), 
length 592: (tos 0xc0, ttl 64, id 4395, offset 0, flags [none], proto ICMP (1), 
length 576)
  overcloud-controller-0.tenant.localdomain > 
overcloud-controller-0.tenant.localdomain: ICMP 
overcloud-novacompute-0.tenant.localdomain unreachable - need to frag (mtu 
1500), length 556
  (tos 0x0, ttl 64, id 22077, offset 0, flags [DF], proto UDP (17), 
length 1502)
  overcloud-controller-0.tenant.localdomain.51706 > 
overcloud-novacompute-0.tenant.localdomain.4789: [no cksum] VXLAN, flags [I] 
(0x08), vni 36

  
  Adjusting the ping size to allow for a 4 byte header (e.g. ping -s 1418 
) succeeds.
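  The arithmetic behind those ping sizes can be sketched as follows
(header sizes assumed: IPv4 20 bytes, ICMP 8 bytes, 802.1q tag 4 bytes):

```python
# Arithmetic for the ping sizes quoted in this report.
MTU = 1450                      # tenant network MTU (1500 minus VXLAN overhead)
IP_HDR, ICMP_HDR, DOT1Q = 20, 8, 4

max_icmp_payload = MTU - IP_HDR - ICMP_HDR      # what *should* fit: 1422
with_spurious_tag = max_icmp_payload - DOT1Q    # what actually fits: 1418

print(max_icmp_payload, with_spurious_tag)      # 1422 1418
```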

  Using an alternate tcpdump command to get information from the VXLAN traffic 
reveals an unusual extra 802.1q header with a VLAN ID of 0:
  [root@overcloud-controller-0 heat-admin]# tcpdump -vvv -n  -e -i any udp
  tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 
65535 bytes
  18:36:48.095985 Out 56:13:19:d8:af:27 ethertype IPv4 (0x0800), length 1516: 
(tos 0x0, ttl 64, id 22088, offset 0, flags [DF], proto UDP (17), length 1500)
  172.16.0.5.51706 > 172.16.0.10.4789: [no cksum] VXLAN, flags [I] (0x08), 
vni 36
  fa:16:3e:99:37:ce > fa:16:3e:06:65:6f, ethertype 802.1Q (0x8100), length 
1464: vlan 0, p 0, ethertype IPv4, (tos 0x0, ttl 63, id 37541, offset 0, flags 
[DF], proto ICMP (1), length 1446)
  192.0.2.1 > 192.168.2.101: ICMP echo request, id 16422, seq 1, length 1426
  18:36:48.097861   P ea:0c:37:f7:69:5e ethertype 802.1Q (0x8100), length 1520: 
vlan 50, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 22354, offset 0, flags [DF], 
proto UDP (17), length 1500)
  172.16.0.10.50337 > 172.16.0.5.4789: [no cksum] VXLAN, flags [I] (0x08), 
vni 36

  The flow table is similar to (this was taken from the compute node,
  not the controller but the br-tun flow tables follow the same form
  with only different values for local segment IDs)

  [root@overcloud-novacompute-0 ml2]# ovs-ofctl -O OpenFlow13 dump-flows br-tun
  OFPST_FLOW reply (OF1.3) (xid=0x2):
   cookie=0xb13175655506ca2e, duration=11.785s, table=0, n_packets=0, 
n_bytes=0, priority=1,in_port=1 actions=goto_table:2
   cookie=0xb13175655506ca2e, duration=10.955s, table=0, n_packets=0, 
n_bytes=0, priority=1,in_port=2 actions=goto_table:4
   cookie=0xb13175655506ca2e, duration=11.783s, table=0, n_packets=0, 
n_bytes=0, priority=0 actions=drop
   cookie=0xb13175655506ca2e, duration=11.781s, table=2, n_packets=0, 
n_bytes=0, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 
actions=goto_table:20
   cookie=0xb13175655506ca2e, duration=11.779s, table=2, n_packets=0, 

[Yahoo-eng-team] [Bug 1623402] Re: ipam leaks DBReference error on deleted subnet

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/369992
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a397792ff7ac03f4f021e8d7df62d04d6641aa14
Submitter: Jenkins
Branch:master

commit a397792ff7ac03f4f021e8d7df62d04d6641aa14
Author: Kevin Benton 
Date:   Tue Sep 13 20:23:42 2016 -0700

Catch DBReferenceError in IPAM and convert to SubnetNotFound

If a subnet is removed after lookup but before IP allocation
insert, we can end up getting a DBReferenceError in the IPAM
system. Since this can occur on any flush we force a flush
with a session context manager to get the DBReference error
in a desired spot (in multi-writer galera this is a deadlock
much later, which we retry). We then convert it to SubnetNotFound,
which is an expected exception.

Closes-Bug: #1623402
Change-Id: I65632b174f8692cd465c721e285526827842a740


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623402

Title:
  ipam leaks DBReference error on deleted subnet

Status in neutron:
  Fix Released

Bug description:
  spotted in rally run with lots of concurrent subnet operations:

  
  
http://logs.openstack.org/11/369511/3/check/gate-rally-dsvm-neutron-rally/b188655/logs/screen-q-svc.txt.gz#_2016-09-14_07_20_10_547


  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
[req-14b41117-5a1b-4b0d-9962-14e9adb7048a c_rally_b76aa7ea_hhhcxI8u -] delete 
failed: Exception deleting fixed_ip from port 
1098dc9c-dd9e-4a11-8e6e-b999218d55aa
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 555, in delete
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 88, in wrapped
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 84, in wrapped
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 124, in wrapped
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
traceback.format_exc())
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1623800] Re: Can't add exact count of fixed ips to port (regression)

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370801
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=748faa0579056c0157d038e539610f386dd64abe
Submitter: Jenkins
Branch:master

commit 748faa0579056c0157d038e539610f386dd64abe
Author: Kevin Benton 
Date:   Wed Sep 14 20:37:56 2016 -0700

Revert "Don't allocate IP on port update when existing subnet specified"

This reverts commit f07c07b16fb0858c45f6cef135a8d8c07a16c505.
This broke the ability to add fixed IPs to a port based on
subnet_id.

API test to prevent regression in child patch.

Change-Id: Ia13abea59431744ce7a0270f480f4bf61a7161e0
Closes-Bug: #1623800


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623800

Title:
  Can't add exact count of fixed ips to port (regression)

Status in neutron:
  Fix Released

Bug description:
  environment: latest devstack. services: nova, glance, keystone, cinder, 
neutron, neutron-vpnaas, ec2-api
  non-admin project.

  We have a scenario in which we create a port and then add two fixed IPs to it.
  Neutron now adds only one fixed_ip to this port, although this worked 
correctly as of this Monday.
  It looks like neutron now adds count-1 of the newly passed fixed_ips.

  
  logs:

  2016-09-15 09:13:47.568 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] REQ: curl -g -i --cacert 
"/opt/stack/data/ca-bundle.pem" -X GET 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json -H 
"User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}87af409cdd3b0396fa6954bdc181fddac54d823d" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:206
  2016-09-15 09:13:47.627 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] RESP: [200] Content-Type: 
application/json Content-Length: 735 X-Openstack-Request-Id: 
req-4fcc4b7c-09d3-40b6-9332-a3974e422630 Date: Thu, 15 Sep 2016 06:13:47 GMT 
Connection: keep-alive 
  RESP BODY: {"port": {"status": "DOWN", "created_at": "2016-09-15T06:13:46", 
"project_id": "c44a90bf24c14dcbac693c9bb8ac1923", "description": "", 
"allowed_address_pairs": [], "admin_state_up": true, "network_id": 
"93e7bdae-bb7b-4e3e-b33d-e80a561014ea", "tenant_id": 
"c44a90bf24c14dcbac693c9bb8ac1923", "extra_dhcp_opts": [], "updated_at": 
"2016-09-15T06:13:46", "name": "eni-30152657", "device_owner": "", 
"revision_number": 5, "mac_address": "fa:16:3e:12:34:dd", 
"port_security_enabled": true, "binding:vnic_type": "normal", "fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}], "id": "0be539d4-ed3c-4bba-8a25-9cb1641335ab", "security_groups": 
["2c51d398-1bd1-4084-8063-41bfe57788a4"], "device_id": ""}}
   _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:231
  2016-09-15 09:13:47.628 14578 DEBUG neutronclient.v2_0.client 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] GET call to neutron for 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json used 
request id req-4fcc4b7c-09d3-40b6-9332-a3974e422630 _append_request_id 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:127


  2016-09-15 09:13:47.628 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] REQ: curl -g -i --cacert 
"/opt/stack/data/ca-bundle.pem" -X PUT 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json -H 
"User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}87af409cdd3b0396fa6954bdc181fddac54d823d" -d '{"port": {"fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}, {"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e"}, 
{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e"}]}}' _http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:206
  2016-09-15 09:13:48.014 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] RESP: [200] Content-Type: 
application/json Content-Length: 816 X-Openstack-Request-Id: 
req-0c86f7d1-ce47-4c9e-b842-1aa37c2ca024 Date: Thu, 15 Sep 2016 06:13:48 GMT 
Connection: keep-alive 
  RESP BODY: {"port": {"status": "DOWN", "created_at": "2016-09-15T06:13:46", 
"project_id": "c44a90bf24c14dcbac693c9bb8ac1923", "description": "", 
"allowed_address_pairs": [], "admin_state_up": true, 

[Yahoo-eng-team] [Bug 1623849] Re: openvswitch native agent, ARP responder response has wrong Eth headers

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370639
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7d2721de1a06ed233c8a2906d14df02ed70c95d9
Submitter: Jenkins
Branch:master

commit 7d2721de1a06ed233c8a2906d14df02ed70c95d9
Author: Thomas Morin 
Date:   Thu Sep 15 11:25:47 2016 +0200

ovs agent, native ARP response: set Eth src/dst

This change adds action to install_arp_responder of native implementation
so that the source and destination MAC addresses of the Ethernet header
are properly set, and now consistent with the ovs-ofctl implementation.

Change-Id: I9a095add42ba5799bd81887f1cbe5507ab9ba48c
Closes-Bug: 1623849


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623849

Title:
  openvswitch native agent, ARP responder response has wrong Eth headers

Status in neutron:
  Fix Released

Bug description:
  The ovs-ofctl ARP responder implementation (install_arp_responder)
  sets the correct src/dst MAC addresses in the Ethernet header:

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_tun.py#L197

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/common/constants.py#L110

  --> 'move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:%(mac)s,'

  *However* the native Openflow/ryu install_arp_responder implementation
  does not set these src/dst fields of the Ethernet header:

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_tun.py#L223

  
  The result is that the forged ARP response is incorrect with 
arp_responder=True and of_interface=native:

  09:59:47.162196 fa:16:3e:ea:2e:9a > ff:ff:ff:ff:ff:ff, ethertype ARP 
(0x0806), length 42: Request who-has 192.168.10.1 tell 192.168.10.5, length 28
  09:59:47.162426 fa:16:3e:ea:2e:9a > ff:ff:ff:ff:ff:ff, ethertype ARP 
(0x0806), length 42: Reply 192.168.10.1 is-at fa:16:5e:47:33:64, length 28
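A minimal model of what the fix must do (pure Python for illustration; the
real fix adds OpenFlow set-field/move actions to the native flow): the forged
reply must swap the Ethernet addresses instead of inheriting the request's
broadcast destination.

```python
def forge_arp_reply(request, answered_mac):
    """Build the Ethernet header of a forged ARP reply."""
    return {
        "eth_src": answered_mac,        # mod_dl_src:<MAC being answered for>
        "eth_dst": request["eth_src"],  # move eth_src -> eth_dst (the asker)
    }

request = {"eth_src": "fa:16:3e:ea:2e:9a", "eth_dst": "ff:ff:ff:ff:ff:ff"}
reply = forge_arp_reply(request, "fa:16:5e:47:33:64")
print(reply["eth_dst"])  # fa:16:3e:ea:2e:9a, not ff:ff:ff:ff:ff:ff
```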

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622980] Re: Data Too Long for configurations column of neutron agents table

2016-09-16 Thread Esha Seth
Drew, yes, the issue was due to very long bridge_mappings kept in the
configurations column, which exceeded the column's size limit. It is not
otherwise related to deploy. I am cancelling this defect as
invalid.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622980

Title:
  Data Too Long for configurations column of neutron agents table

Status in neutron:
  Invalid

Bug description:
  When deploying with several networks the configurations column in
  table neutron_agents can run out of space causing the below error.

  09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters 
[req-817b01ad-1b7c-47bc-b3c9-81ad798a6eda - - - - -] DBAPIError exception 
wrapped from (_mysql_exceptions.DataError) (1406, "Data too long for column 
'configurations' at row 1") [SQL: u'UPDATE agents SET started_at=%s, 
heartbeat_timestamp=%s, configurations=%s WHERE agents.id = %s'] [parameters: 
(datetime.datetime(2016, 9, 6, 14, 3, 43, 663671), datetime.datetime(2016, 9, 
6, 14, 3, 43, 663671), '{"bridge_mappings": {"ETHERNET0-VLAN488": 
"9425b666-6e94-3783-ba02-0bdbd0a5175a", "ETHERNET0-VLAN489": 
"9425b666-6e94-3783-ba02-0bdbd0a5175a", ..., "ETHERNET0-VLAN533": 
"9425b666-6e94-3783-ba02-0bdbd0a5175a"}, "devices": 0}', 
'44d71991-b258-4b23-8cdb-f65e5273dbd9')]
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in 
_execute_context
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters context)
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in 
do_execute
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters DataError: 
(1406, "Data too long for column 'configurations' at row 1")
  2016-09-06 09:03:43.791 970 ERROR oslo_db.sqlalchemy.exc_filters

  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher 
ectxt.value = e.inner_exc
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 137, in wrapper
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron/db/agents_db.py", line 472, in 
report_state
  2016-09-06 09:03:43.935 970 ERROR oslo_messaging.rpc.dispatcher 
agent_status = 

[Yahoo-eng-team] [Bug 1624225] Re: quota update for LBaaS not working

2016-09-16 Thread Reedip
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624225

Title:
  quota update for LBaaS not working

Status in python-neutronclient:
  New

Bug description:
  On trying to update Members/Health Monitors via OpenStack Quota Set or 
neutron quota-update,
  I am getting the following error:

  Actual Output:
  RESP BODY: {"NeutronError": {"message": "Unrecognized attribute(s) 
'health_monitor'", "type": "HTTPBadRequest", "detail": ""}}

  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError":
  {"message": "Unrecognized attribute(s) 'health_monitor'", "type":
  "HTTPBadRequest", "detail": ""}}

  Expected Output:
  Quotas for LBaaS should have been updated


  
  Update:
  Pool and Listener work
  Healthmonitor and member don't

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1624225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624225] [NEW] quota update for LBaaS not working

2016-09-16 Thread Reedip
Public bug reported:

On trying to update Members/Health Monitors via OpenStack Quota Set or neutron 
quota-update,
I am getting the following error:

Actual Output:
RESP BODY: {"NeutronError": {"message": "Unrecognized attribute(s) 
'health_monitor'", "type": "HTTPBadRequest", "detail": ""}}

DEBUG: neutronclient.v2_0.client Error message: {"NeutronError":
{"message": "Unrecognized attribute(s) 'health_monitor'", "type":
"HTTPBadRequest", "detail": ""}}

Expected Output:
Quotas for LBaaS should have been updated


Update:
Pool and Listener work
Healthmonitor and member don't

** Affects: neutron
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Reedip (reedip-banerjee)

** Tags added: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624225

Title:
  quota update for LBaaS not working

Status in neutron:
  New

Bug description:
  On trying to update Members/Health Monitors via OpenStack Quota Set or 
neutron quota-update,
  I am getting the following error:

  Actual Output:
  RESP BODY: {"NeutronError": {"message": "Unrecognized attribute(s) 
'health_monitor'", "type": "HTTPBadRequest", "detail": ""}}

  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError":
  {"message": "Unrecognized attribute(s) 'health_monitor'", "type":
  "HTTPBadRequest", "detail": ""}}

  Expected Output:
  Quotas for LBaaS should have been updated


  
  Update:
  Pool and Listener work
  Healthmonitor and member don't

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619312] Re: dvr: can't migrate legacy router to DVR

2016-09-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370430
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=25e65df7974c238517ab51ea07079ab0482ad44a
Submitter: Jenkins
Branch:master

commit 25e65df7974c238517ab51ea07079ab0482ad44a
Author: Brian Haley 
Date:   Wed Sep 14 16:52:07 2016 -0400

Fix migration of legacy router to DVR

We have to ensure ML2's create_port method is not called
in a transaction.

This adds a temporary hack to set an attribute on the context
to skip this check to accommodate an L3 code-path that has a port 
creation entangled in a transaction.  This attribute will
ultimately be removed once this path is refactored.

Change-Id: I9c41a7848b22c437bedcdd7eb57f7c9da873b06d
Closes-bug: #1619312


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619312

Title:
  dvr: can't migrate legacy router to DVR

Status in neutron:
  Fix Released

Bug description:
  As the title say:

  2016-09-01 16:38:46.026 ERROR neutron.api.v2.resource 
[req-d738cdb2-01bb-41a7-a2a9-534bf8b06377 admin 
85a2b05da4be46b19bc5f7cf41055e45] update failed: No details.
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/api/v2/base.py", line 575, in update
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource self.force_reraise()
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/db/api.py", line 82, in wrapped
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource 
traceback.format_exc())
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource self.force_reraise()
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/db/api.py", line 77, in wrapped
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/api/v2/base.py", line 623, in _update
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/db/extraroute_db.py", line 76, in update_router
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource context, id, router)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/db/l3_db.py", line 1722, in update_router
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource id, router)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/db/l3_db.py", line 282, in update_router
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource router_db = 
self._update_router_db(context, id, r)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 
"/opt/openstack/neutron/neutron/db/l3_hamode_db.py", line 533, in 
_update_router_db
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource context, router_id, 
data)
  2016-09-01 16:38:46.026 TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1624221] [NEW] RequiredOptError: value required for option lock_path in group [DEFAULT]

2016-09-16 Thread venkatamahesh
Public bug reported:

2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
140, in _get_lock_path
2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server raise 
cfg.RequiredOptError('lock_path')
2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server RequiredOptError: 
value required for option lock_path in group [DEFAULT]


After adding the option to neutron.conf the problem is resolved, so the 
install guide should be updated accordingly.
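The option is read by oslo.concurrency; depending on the release it is
registered under [oslo_concurrency], with a legacy fallback in [DEFAULT]. A
minimal illustrative setting (the path is the one commonly used in install
guides; any directory writable by the neutron user works):

```ini
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```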

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: openstack-manuals
 Importance: Undecided
 Status: New

** Attachment added: "Bug.png"
   https://bugs.launchpad.net/bugs/1624221/+attachment/4741785/+files/Bug.png

** Description changed:

  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
140, in _get_lock_path
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server raise 
cfg.RequiredOptError('lock_path')
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server 
RequiredOptError: value required for option lock_path in group [DEFAULT]
+ 
+ 
+ After adding the option in neutron.conf the problem is rectified. So it 
should be updated in install-guide

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624221

Title:
  RequiredOptError: value required for option lock_path in group
  [DEFAULT]

Status in neutron:
  New
Status in openstack-manuals:
  New

Bug description:
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
140, in _get_lock_path
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server raise 
cfg.RequiredOptError('lock_path')
  2016-09-16 10:56:49.490 4505 ERROR oslo_messaging.rpc.server 
RequiredOptError: value required for option lock_path in group [DEFAULT]

  
  After adding the option to neutron.conf the problem is resolved, so the 
install guide should be updated accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp