[Yahoo-eng-team] [Bug 1505776] Re: Validate src_ip_adress, dest_ip_address and ip_version

2016-12-01 Thread Reedip
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505776

Title:
  Validate src_ip_adress, dest_ip_address and ip_version

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/215768
  commit 29c51879683087e8ea486aaa9866bbce1f47a15c
  Author: Sean M. Collins 
  Date:   Mon Aug 10 15:24:52 2015 -0400

  Validate src_ip_adress, dest_ip_address and ip_version
  
  The FwaaS API should not allow the creation of firewall rules where
  the ip_version is set to 4, but the source or destination IPs are IPv6
  addresses
  
  APIImpact
  DocImpact
  
  Closes-Bug: #1487599
  
  Change-Id: Iad680996a47adcf27f9dc7e0bc0fea924fff4f9f
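
  A minimal sketch of the kind of check the commit describes, written
  against the netaddr library; this is an illustration, not the actual
  neutron-fwaas code:

      import netaddr

      def validate_fwr_ip_version(ip_version, source_ip_address=None,
                                  destination_ip_address=None):
          # Reject rules whose addresses disagree with the declared
          # ip_version (e.g. ip_version=4 with an IPv6 source).
          for addr in (source_ip_address, destination_ip_address):
              if addr is None:
                  continue
              # IPNetwork accepts bare addresses and CIDR prefixes alike.
              if netaddr.IPNetwork(addr).version != ip_version:
                  raise ValueError("%s does not match ip_version %s"
                                   % (addr, ip_version))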

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621076] Re: Can't detach interface from VM (if VM has two interface with same mac addresses)

2016-12-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/372243
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a5c38cc861f052b89bc1ce182f68784c493c723e
Submitter: Jenkins
Branch: master

commit a5c38cc861f052b89bc1ce182f68784c493c723e
Author: Leehom Li (feli5) 
Date:   Mon Sep 19 13:43:46 2016 +0800

Support detach interface with same MAC from instance

When detaching an interface, nova uses interface XML generated
from scratch, which is missing the PCI device information that
libvirt would use to uniquely identify devices.

In case an instance has multiple interfaces with the same MAC
address, libvirt fails with the error message below:
libvirtError: operation failed: multiple devices matching
mac address fa:16:3e:60:46:1f found

This patch fixes the problem by providing a new function,
get_interface_by_cfg, which takes the cfg generated by
nova.virt.libvirt.vif.get_config as a parameter and
returns a LibvirtConfigGuestInterface object.

Also added a format_dom function to
LibvirtConfigGuestDeviceAddressPCI, which is used to
generate the PCI address for LibvirtConfigGuestInterface.

Change-Id: I8acae90c9d2111ed35f58f374f321d64f01ba563
Closes-Bug: #1621076
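
A rough sketch of the matching approach the commit describes; the
helper below is illustrative, not the exact nova code:

    def get_interface_by_cfg(guest_interfaces, cfg):
        # Compare the fully rendered interface config (which includes
        # the PCI address) instead of the MAC alone, so two interfaces
        # with the same MAC stay distinguishable.
        for interface in guest_interfaces:
            if interface.to_xml() == cfg.to_xml():
                return interface
        return None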


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621076

Title:
  Can't detach interface from VM (if VM has two interface with same mac
  addresses)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  How to reproduce:

  1. Run any VM.
  2. Create two networks.
  3. Create two ports for each network with same mac addresses
  4. Attach those ports to VM
  5. Try to detach any interface.

  Expected result:
  The interface should be detached from VM.

  Actual result:
  We don't get any errors (via the API) in the previous steps, but the
  interface is still attached to the VM.

  Environment:
  * fuel_release: 9.0
  * fuel_openstack_version: mitaka-9.0
  * libvirt0: 1.2.9.3-9~u14.04+mos10
  * hypervisor: Libvirt + KVM


  Example:

  (OpenStack-venv)agent@laptop ~/projects $ nova interface-list c5ae5a9a-54a2-47b8-800e-619b8bc286f5
  +------------+--------------------------------------+--------------------------------------+----------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses   | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+----------------+-------------------+
  | ACTIVE     | 0d9377b4-4f8f-467a-bae7-54d0d70e5262 | 3f5adcaa-d3e5-4caf-be8f-474751de5589 | 192.168.0.1    | fa:16:3e:60:46:1f |
  | ACTIVE     | 13cac036-7b6d-4188-879f-650c8d9e1f63 | 28939866-7379-4279-800c-b64c2776e1e0 | 192.168.111.82 | fa:16:3e:24:0d:a4 |
  | ACTIVE     | 310b9883-806d-4038-a095-1625abecbcb1 | 311c5a7e-5cb0-47e2-8aa0-20d74c4ee8c2 | 192.168.0.1    | fa:16:3e:60:46:1f |
  +------------+--------------------------------------+--------------------------------------+----------------+-------------------+
  (OpenStack-venv)agent@laptop ~/projects $ nova interface-detach c5ae5a9a-54a2-47b8-800e-619b8bc286f5 0d9377b4-4f8f-467a-bae7-54d0d70e5262
  (OpenStack-venv)agent@laptop ~/projects $ nova interface-list c5ae5a9a-54a2-47b8-800e-619b8bc286f5
  +------------+--------------------------------------+--------------------------------------+----------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses   | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+----------------+-------------------+
  | ACTIVE     | 0d9377b4-4f8f-467a-bae7-54d0d70e5262 | 3f5adcaa-d3e5-4caf-be8f-474751de5589 | 192.168.0.1    | fa:16:3e:60:46:1f |
  | ACTIVE     | 13cac036-7b6d-4188-879f-650c8d9e1f63 | 28939866-7379-4279-800c-b64c2776e1e0 | 192.168.111.82 | fa:16:3e:24:0d:a4 |
  | ACTIVE     | 310b9883-806d-4038-a095-1625abecbcb1 | 311c5a7e-5cb0-47e2-8aa0-20d74c4ee8c2 | 192.168.0.1    | fa:16:3e:60:46:1f |
  +------------+--------------------------------------+--------------------------------------+----------------+-------------------+

  logs from compute:

  libvirt:
  <11>Sep  7 12:18:00 node-9 libvirtd: 11320: error : virDomainNetFindIdx:11005 : operation failed: multiple devices matching mac address fa:16:3e:60:46:1f found

  nova compute:
  <183>Sep  6 17:58:04 node-7 nova-compute: 2016-09-06 17:58:04.348 7652 DEBUG nova.objects.instance [req-db381757-3fbf-4bb8-a4df-160e1422a005 2b96e098d62147d8b9a15f635e0dd51a 4b867602d6974059afb2489a71dfaabb - - -] Lazy-loading 'flavor' on Instance uuid d95ab3d6-6b8f-42f0-8bf8-a553e99ee41a obj_load_attr /usr/lib/python2.7/dist-packages/nova/objects/instance.py:895
  

[Yahoo-eng-team] [Bug 1645554] Re: [api-ref] incorrect title for role-assignment

2016-12-01 Thread Steve Martinelli
** Summary changed:

- un appropriate title for role-assignment identity v3 api-ref
+ [api-ref] incorrect title for role-assignment

** Changed in: openstack-api-site
   Status: New => Invalid

** Also affects: keystone
   Importance: Undecided
   Status: New

** Tags added: api-ref

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
   Importance: Undecided => Low

** Changed in: keystone
 Assignee: (unassigned) => Rodrigo Duarte (rodrigodsousa)

** Changed in: keystone
Milestone: None => ocata-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645554

Title:
  [api-ref] incorrect title for role-assignment

Status in OpenStack Identity (keystone):
  In Progress
Status in openstack-api-site:
  Invalid

Bug description:
  The title for the role-assignment identity v3 api-ref is not
  appropriate: this API lists all role assignments and, depending on the
  query parameter (effective), returns filtered role assignments.

  So the title should be "List role assignments" instead of "List
  effective role assignments".

  http://developer.openstack.org/api-ref/identity/v3/?expanded=list-
  effective-role-assignments-detail

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632054] Re: Heat engine doesn't detect lbaas listener failures

2016-12-01 Thread Zhixin Li
Shouldn't an affected project be added for neutron-lbaas? We need to
expose provisioning_status for all top-level objects of lbaas.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632054

Title:
  Heat engine doesn't detect lbaas listener failures

Status in heat:
  Triaged
Status in neutron:
  New
Status in octavia:
  Triaged

Bug description:
  Please refer to the mail-list for comments from other developers,
  https://openstack.nimeyo.com/97427/openstack-neutron-octavia-doesnt-
  detect-listener-failures

  I am trying to use heat to launch lb resources with Octavia as backend. The
  template I used is from
  
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml
  .

  Following are a few observations:

  1. Even though the Listener was created in ERROR status, heat still
  goes ahead and marks its creation complete. In the heat code, it only
  checks whether the root Loadbalancer status changes from
  PENDING_UPDATE to ACTIVE, and the Loadbalancer status is changed to
  ACTIVE regardless of the Listener's status.

  2. As the heat engine doesn't know about the Listener's creation
  failure, it continues to create the Pool\Member\Healthmonitor on top
  of a Listener which actually doesn't exist. This causes a few
  undefined behaviors. As a result, those LBaaS resources in ERROR state
  cannot be cleaned up with either the normal neutron or heat API.

  3. The bug is introduced here,
  https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/lbaas/listener.py#L188.
  It only checks the provisioning status of the root loadbalancer.
  However, the listener itself has its own provisioning status, which
  may go into ERROR (see the sketch below).

  4. The same scenario applies not only to the listener but also to the
  pool, member, healthmonitor, etc.: basically every lbaas resource
  except the loadbalancer.
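
  A hedged sketch of the extra check point 3 suggests, assuming the
  lbaas API exposed a per-listener provisioning_status (which is what
  this bug asks neutron-lbaas to do); names are illustrative, not the
  actual heat code:

      def check_create_complete(client, lb_id, listener_id):
          # Fail fast if the listener itself went into ERROR, instead
          # of only watching the root loadbalancer's status.
          listener = client.show_listener(listener_id)['listener']
          if listener.get('provisioning_status') == 'ERROR':
              raise RuntimeError('listener %s is in ERROR' % listener_id)
          lb = client.show_loadbalancer(lb_id)['loadbalancer']
          return lb['provisioning_status'] == 'ACTIVE'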

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1632054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646672] Re: It's recommended to use keystoneauth1

2016-12-01 Thread Xu Ao
** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1646672

Title:
  It's recommended to use keystoneauth1

Status in Glance:
  Invalid

Bug description:
  While using cinder as the glance-store backend, there is a warning in the log:

  2016-12-02 10:40:30.935 DEBUG
  glance.api.middleware.version_negotiation [-] new path
  /v2/images/59183025-2163-4022-93b6-928e9385359c from (pid=30487)
  process_request
  /opt/stack/glance/glance/api/middleware/version_negotiation.py:71

  /usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py:135: 
UserWarning: Using keystoneclient sessions has been deprecated. Please update 
your software to use keystoneauth1.
warnings.warn('Using keystoneclient sessions has been deprecated. '

  Should the auth_plugin be converted to keystoneauth1 here?
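
  For illustration, building a session the keystoneauth1 way, with
  placeholder credentials (this is a hedged sketch, not glance_store's
  actual wiring):

      from keystoneauth1 import loading
      from keystoneauth1 import session

      # Placeholder options; real values come from the service config.
      loader = loading.get_plugin_loader('password')
      auth = loader.load_from_options(
          auth_url='http://controller:5000/v3',
          username='glance',
          password='secret',
          project_name='service',
          user_domain_id='default',
          project_domain_id='default')
      sess = session.Session(auth=auth)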

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1646672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646672] [NEW] It's recommended to use keystoneauth1

2016-12-01 Thread Xu Ao
Public bug reported:

While using cinder as the glance-store backend, there is a warning in the log:

2016-12-02 10:40:30.935 DEBUG glance.api.middleware.version_negotiation
[-] new path /v2/images/59183025-2163-4022-93b6-928e9385359c from
(pid=30487) process_request
/opt/stack/glance/glance/api/middleware/version_negotiation.py:71

/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py:135: 
UserWarning: Using keystoneclient sessions has been deprecated. Please update 
your software to use keystoneauth1.
  warnings.warn('Using keystoneclient sessions has been deprecated. '

Should the auth_plugin be converted to keystoneauth1 here?

** Affects: glance
 Importance: Undecided
 Assignee: Xu Ao (xuao)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Xu Ao (xuao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1646672

Title:
  It's recommended to use keystoneauth1

Status in Glance:
  New

Bug description:
  While using cinder as the glance-store backend, there is a warning in the log:

  2016-12-02 10:40:30.935 DEBUG
  glance.api.middleware.version_negotiation [-] new path
  /v2/images/59183025-2163-4022-93b6-928e9385359c from (pid=30487)
  process_request
  /opt/stack/glance/glance/api/middleware/version_negotiation.py:71

  /usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py:135: 
UserWarning: Using keystoneclient sessions has been deprecated. Please update 
your software to use keystoneauth1.
warnings.warn('Using keystoneclient sessions has been deprecated. '

  Should the auth_plugin be converted to keystoneauth1 here?

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1646672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1642679] Re: The OpenStack network_config.json implementation fails on Hyper-V compute nodes

2016-12-01 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Confirmed => In Progress

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: (unassigned) => Scott Moser (smoser)

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642679

Title:
  The OpenStack network_config.json implementation fails on Hyper-V
  compute nodes

Status in cloud-init:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  In Progress
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  We have discovered an issue when booting Xenial instances on OpenStack
  environments (Liberty or newer) and Hyper-V compute nodes using config
  drive as metadata source.

  When applying the network_config.json, cloud-init fails with this error:
  http://paste.openstack.org/show/RvHZJqn48JBb0TO9QznL/

  The fix would be to add 'hyperv' as a link type here:
  /usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py, line 
587
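
  Roughly, the fix amounts to extending the set of accepted link types;
  the tuple below is a sketch, not the exact contents of
  helpers/openstack.py:

      # Accept 'hyperv' alongside the other physical link types so
      # network_config.json from Hyper-V compute nodes passes
      # validation.
      KNOWN_PHYSICAL_TYPES = (
          'ethernet',
          'phy',
          'bridge',
          'tap',
          'vif',
          'hyperv',  # the addition proposed in this bug
      )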

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1642679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639930] Re: initramfs network configuration ignored if only ip6= on kernel command line

2016-12-01 Thread Scott Moser
** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: (unassigned) => Scott Moser (smoser)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1639930

Title:
  initramfs network configuration ignored if only ip6= on kernel command
  line

Status in cloud-init:
  Confirmed
Status in MAAS:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  In Progress
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  In changes made under bug 1621615 (specifically a1cdebdea), we now
  expect that there may be an 'ip6=' argument on the kernel command
  line. The changes made did not test the case where there is 'ip6='
  and no 'ip='.

  The code currently will return with no network configuration found if
  there is only ip6=...
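
  Sketched intent (not the cloud-init source): treat initramfs network
  configuration as present when either 'ip=' or 'ip6=' appears on the
  kernel command line.

      def has_initramfs_net_cfg(cmdline):
          # True when either an 'ip=' or an 'ip6=' argument is present.
          return any(tok.startswith(('ip=', 'ip6='))
                     for tok in cmdline.split())

      assert has_initramfs_net_cfg('root=/dev/sda1 ip6=dhcp')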


  Related bugs:
   * bug 1621615: network not configured when ipv6 netbooted into cloud-init 
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses
   * bug 1635716: Can't bring up a machine on a dual network (ipv4 and ipv6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1639930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644043] Re: Read Kernel Config tests missing MAC Address

2016-12-01 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: (unassigned) => Scott Moser (smoser)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1644043

Title:
  Read Kernel Config tests missing MAC Address

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  In Progress
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  Unittests are broken [1], currently preventing cloud-init from
  building. The issue is that the kernel config tests are looking for a
  MAC address that is not there.

  [1] https://jenkins.ubuntu.com/server/job/cloud-init-integration-
  lts/35/console

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1644043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460715] Re: MBR disk setup fails because sfdisk no longer accepts M as a valid unit

2016-12-01 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Confirmed => In Progress

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: (unassigned) => Scott Moser (smoser)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1460715

Title:
  MBR disk setup fails because sfdisk no longer accepts M as a valid
  unit

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  In Progress
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  Specifically, we get the following output in
  cc_disk_setup.exec_mkpart_mbr:

  sfdisk: --Linux option is unnecessary and deprecated
  sfdisk: unsupported unit 'M'

  and the manpage says:

     -u, --unit S
    Deprecated option.  Only the sector unit is supported.

  So we'll need to shift to using sectors.
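
  The conversion itself is simple; a sketch, assuming the usual
  512-byte logical sector size:

      def mib_to_sectors(size_mib, sector_size=512):
          # sfdisk only accepts sizes in sectors now, so convert MiB.
          return (size_mib * 1024 * 1024) // sector_size

      print(mib_to_sectors(100))  # 204800 sectors for 100 MiB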

  Related bugs:
   * bug 1642383: Unable to configure swap space on ephemeral disk in Azure

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1460715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556530] Re: neutron-lbaas needs a scenario test covering status query when health monitor is admin-state-up=False

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556530

Title:
  neutron-lbaas needs a scenario test covering status query when health
  monitor is admin-state-up=False

Status in octavia:
  Confirmed

Bug description:
  neutron-lbaas is missing a scenario test covering the case when a
  health monitor is in admin-state-up=False and a user queries for the
  status tree.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1556530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295424] Re: lbaas security group

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295424

Title:
  lbaas security group

Status in octavia:
  Incomplete

Bug description:
  There seems to be no way of specifying which security group a lbaas
  VIP gets. It appears to default to 'default' in Havana. When you place
  a load balancer on a backend private neutron network, it gets the
  security group member rules from 'default', which are for the wrong
  subnet.

  Manually drilling down to find the VIP's neutron port id, and then
  fixing the security_group on the VIP port, does seem to work (see the
  sketch below).

  There needs to be a way to specify the security groups when you create
  the VIP.
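
  The manual workaround, sketched with python-neutronclient; the
  credentials and IDs are placeholders:

      from neutronclient.v2_0 import client

      neutron = client.Client(username='admin', password='secret',
                              tenant_name='admin',
                              auth_url='http://controller:5000/v2.0')
      vip_id = 'VIP_UUID'            # placeholder
      sg_id = 'SECURITY_GROUP_UUID'  # placeholder
      # Find the VIP's neutron port and replace its security groups.
      vip = neutron.show_vip(vip_id)['vip']
      neutron.update_port(vip['port_id'],
                          {'port': {'security_groups': [sg_id]}})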

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1295424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540565] Re: lbaas v2 dashboard uses unclear "Admin State Up" terminology

2016-12-01 Thread Michael Johnson
** Project changed: neutron => neutron-lbaas-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540565

Title:
  lbaas v2 dashboard uses unclear "Admin State Up" terminology

Status in Neutron LBaaS Dashboard:
  Confirmed

Bug description:
  The lbaas v2 Horizon plugin at https://github.com/openstack/neutron-
  lbaas-dashboard/ uses the phrase "Admin State Up" Yes/No. It seems
  clearer to change this terminology to "Admin State: Up (or Down)" as
  suggested in this code review:
  https://review.openstack.org/#/c/259142/6/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1540565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640882] Re: Lbaasv2 cookie sessions not working as haproxy( which is backend for lbaas) no longer supports appsession

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640882

Title:
  Lbaasv2 cookie sessions not working as haproxy( which is backend for
  lbaas) no longer supports appsession

Status in octavia:
  Triaged

Bug description:
  I have deployed lbaasv2 and launched a bunch of load balancers.

  For one of them, I need to configure cookie sessions. When I added it
  through the cli:

  neutron lbaas-pool-update poolid --session-persistence type=dict
  type=APP_COOKIE,cookie_name=sessionid

  I see errors in the logs stating "appsession" is no longer supported
  by haproxy.

  It seems appsession is deprecated in haproxy, but lbaas still uses it
  to configure the sessions in the backend haproxy configs.

  Tried editing it manually in the backend, but lbaas quickly overwrites
  it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1640882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643829] Re: [neutron-lbaas] Migration contains innodb specific code

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643829

Title:
  [neutron-lbaas] Migration contains innodb specific code

Status in octavia:
  In Progress

Bug description:
  The migration contained in
  
neutron_lbaas/db/migration/alembic_migrations/versions/mitaka/expand/6aee0434f911_independent_pools.py
  drops the foreign key constraints from the lbaas_listeners table. It
  contains code for both PostgreSQL and MySQL, however, the MySQL path
  is only compatible with the innodb engine. The ndbcluster engine
  assigns random names to foreign keys unless told otherwise, resulting
  in the code erroring out when it tries to reference
  "lbaas_listeners_ibfk_2". This can be fixed using sqlalchemy's
  reflection feature to look up the foreign key names before dropping.
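
  A hedged sketch of the reflection approach (illustrative, not the
  actual migration code):

      import sqlalchemy as sa
      from alembic import op

      def drop_listener_fks():
          # Look up the real foreign-key names instead of hard-coding
          # engine-specific ones like "lbaas_listeners_ibfk_2".
          inspector = sa.inspect(op.get_bind())
          for fk in inspector.get_foreign_keys('lbaas_listeners'):
              op.drop_constraint(fk['name'], 'lbaas_listeners',
                                 type_='foreignkey')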

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1643829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541802] Re: lbaasv2 namespace missing host routes from VIP subnet

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541802

Title:
  lbaasv2 namespace missing host routes from VIP subnet

Status in octavia:
  Confirmed

Bug description:
  When a lbaasv2 namespace is created it only receives the default
  gateway for that subnet, the additional host routes defined against
  the subnet are ignored which results in certain areas of a network
  being inaccessible.

  # ip netns exec qlbaas-ae4b71ef-e874-46a1-a489-c2a6e186ffe3 ip r s
  default via 192.168.31.254 dev tap9e9051cd-ff 
  192.168.31.0/24 dev tap9e9051cd-ff  proto kernel  scope link  src 
192.168.31.48

  Version Info:

  OpenStack: Liberty
  Distro: Ubuntu 14.04.3

  Not sure if any more information is required.
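
  A sketch of what the namespace driver could do when plugging the VIP
  port, using neutron's ip_lib helpers; the names and signatures here
  are assumptions, not the shipped driver code:

      from neutron.agent.linux import ip_lib

      def add_host_routes(namespace, device_name, subnet):
          # Program each host route defined on the VIP subnet, not
          # just the default gateway.
          device = ip_lib.IPDevice(device_name, namespace=namespace)
          for route in subnet.get('host_routes', []):
              device.route.add_route(route['destination'],
                                     via=route['nexthop'])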

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1541802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554601] Re: able to update health monitor attributes which is attached to pool in lbaas

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

** Tags removed: lbaas
** Tags added: lbaasv1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554601

Title:
  able to update health monitor attributes which is attached to pool in
  lbaas

Status in OpenStack Neutron LBaaS Integration:
  Confirmed
Status in octavia:
  Incomplete

Bug description:
  Reproduced a bug in Load Balancer:
  1. Created a pool.
  2. Attached members to pool1.
  3. Then associated a health monitor to the pool.
  4. Associated a VIP to the pool.
  5. When I edit the attributes of the health monitor, it shows me an
  error like "Error: Failed to update health monitor", but it is updated
  successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/f5openstackcommunitylbaas/+bug/1554601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554960] Re: able to attached one health monitor to two different pool in lbaas neutron

2016-12-01 Thread Michael Johnson
** Tags removed: lbaas
** Tags added: lbaasv1

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554960

Title:
  able to attached one health monitor to two different pool in lbaas
  neutron

Status in OpenStack Neutron LBaaS Integration:
  Confirmed
Status in octavia:
  Confirmed

Bug description:
  Reproduced bug:
  1. Created a pool in lbaas.
  2. Added a member to the pool.
  3. Associated a monitor and added a VIP to the pool.
  4. Now I am able to associate one health monitor with two different
  pools.
  5. A pool and a health monitor should have a one-to-one mapping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/f5openstackcommunitylbaas/+bug/1554960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544729] Re: No grenade coverage for neutron-lbaas/octavia

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544729

Title:
  No grenade coverage for neutron-lbaas/octavia

Status in octavia:
  Confirmed

Bug description:
  Stock neutron grenade no longer covers this, so we need a grenade
  plugin for neutron-lbaas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1544729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624165] Re: LBaaS-enabled devstack fails with deprecated error when fatal_deprecations is set to True

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624165

Title:
  LBaaS-enabled devstack fails with deprecated error when
  fatal_deprecations is set to True

Status in octavia:
  Confirmed

Bug description:
  If we set fatal_deprecations = True in neutron.conf and neutron-lbaas
  is enabled, the neutron server fails with the following error:

  http://paste.openstack.org/show/577816/

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516862] Re: LB_NFV KiloV1 :the default session limit is 2000 rather than unlimit

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1516862

Title:
  LB_NFV KiloV1 :the default session limit is 2000 rather than unlimit

Status in octavia:
  Incomplete

Bug description:
[Summary]
LB_NFV KiloV1: the default session limit is 2000 rather than unlimited

[Topo]
  RDO-Kilo
  CentOS 7.1

[Description and expected result]
  If we don't configure a session limit, we expect the sessions to be
  unlimited.

[Reproducible or not]
  It is easy to recreate.

[Recreate Steps]
  If we don't configure a session limit, the default session limit is
  2000, but the GUI shows the session limit as -1. By common
  understanding, a negative number means unlimited, so this issue
  should be fixed.

  # pxname                                svname    ...  scur  smax  slim
  fda4febc-8efd-436e-9227-435916d50e93    FRONTEND  ...  5     2000  2000   <<< the haproxy statistics show the session limit is 2000

  and

  ID
  fda4febc-8efd-436e-9227-435916d50e93
  Name
  VIP1
  Description
  -
  Project ID
  d95aa65136e6413fb0e29ab3550097a4
  Subnet
  LB_Scale1_VipSubnet_1 20.1.1.0/24
  Address
  20.1.1.100
  Protocol Port
  80
  Protocol
  HTTP
  Pool
  pool_1
  Port ID
  7f003e70-ea64-4f52-8793-356d703f9003
  Session Persistence
  None
  Connection Limit
  -1   <<< but the GUI shows -1, i.e. unlimited
  Admin State Up
  Yes
  Status
  ACTIVE 

  [Configration]

  [root@nitinserver2 ~(keystone_admin)]# more 
/var/lib/neutron/lbaas/61a7696f-ded6-493c-98bc-27c2a82cca15/conf 
  global
  daemon
  user nobody
  group haproxy
  log /dev/log local0
  log /dev/log local1 notice
  stats socket 
/var/lib/neutron/lbaas/61a7696f-ded6-493c-98bc-27c2a82cca15/sock mode 0666 
level user
  defaults
  log global
  retries 3
  option redispatch
  timeout connect 5000
  timeout client 5
  timeout server 5
  frontend 46d5cb86-ec6c-474e-82dd-9dc70baa3222
  option tcplog
  bind 0.0.0.0:80
  mode http
  default_backend 61a7696f-ded6-493c-98bc-27c2a82cca15
  option forwardfor
  backend 61a7696f-ded6-493c-98bc-27c2a82cca15
  mode http
  balance roundrobin
  option forwardfor
  server 09b5c49f-100f-4fc7-86dd-05512de21ec3 10.1.1.107:80 weight 1
  server 0a65e548-84d7-4aac-96e6-4ad5a170a9ee 10.1.1.130:80 weight 1
  server 0cc99f63-8883-4655-8433-545840699e53 10.1.1.126:80 weight 1
  server 18c767db-c622-4104-834d-9573fab5b979 10.1.1.127:80 weight 1
[logs]


[Root cause analysis or debug info]


[Attachment]
Upload the attachment and explain it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1516862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460164] Re: restart of openvswitch-switch causes instance network down when l2population enabled

2016-12-01 Thread Corey Bryant
Marking Juno as "Won't fix" for the Ubuntu Cloud Archive since it is
EOL.

** Changed in: cloud-archive/juno
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460164

Title:
  restart of openvswitch-switch causes instance network down when
  l2population enabled

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive icehouse series:
  Fix Released
Status in Ubuntu Cloud Archive juno series:
  Won't Fix
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  Fix Released
Status in neutron source package in Wily:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  Restarts of openvswitch (typically on upgrade) result in loss of tunnel connectivity when the l2population driver is in use. This results in loss of access to all instances on the affected compute hosts.

  [Test Case]
  Deploy cloud with ml2/ovs/l2population enabled
  boot instances
  restart ovs; instance connectivity will be lost until the 
neutron-openvswitch-agent is restarted on the compute hosts.

  [Regression Potential]
  Minimal - in multiple stable branches upstream.

  [Original Bug Report]
  On 2015-05-28, our Landscape auto-upgraded packages on two of our
  OpenStack clouds.  On both clouds, but only on some compute nodes, the
  upgrade of openvswitch-switch and corresponding downtime of
  ovs-vswitchd appears to have triggered some sort of race condition
  within neutron-plugin-openvswitch-agent leaving it in a broken state;
  any new instances come up with non-functional network but pre-existing
  instances appear unaffected.  Restarting n-p-ovs-agent on the affected
  compute nodes is sufficient to work around the problem.

  The packages Landscape upgraded (from /var/log/apt/history.log):

  Start-Date: 2015-05-28  14:23:07
  Upgrade: nova-compute-libvirt:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
libsystemd-login0:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
nova-compute-kvm:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
systemd-services:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
isc-dhcp-common:amd64 (4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), nova-common:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), python-nova:amd64 (2014.1.4-0ubuntu2, 
2014.1.4-0ubuntu2.1), libsystemd-daemon0:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), grub-common:amd64 (2.02~beta2-9ubuntu1.1, 
2.02~beta2-9ubuntu1.2), libpam-systemd:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), udev:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
grub2-common:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), 
openvswitch-switch:amd64 (2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2), 
libudev1:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), isc-dhcp-client:amd64 
(4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), python-eventlet:amd64 (0.13.0-1ubuntu2, 
0.13.0-1ubuntu
 2.1), python-novaclient:amd64 (2.17.0-0ubuntu1.1, 2.17.0-0ubuntu1.2), 
grub-pc-bin:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), grub-pc:amd64 
(2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), nova-compute:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), openvswitch-common:amd64 
(2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2)
  End-Date: 2015-05-28  14:24:47

  From /var/log/neutron/openvswitch-agent.log:

  2015-05-28 14:24:18.336 47866 ERROR neutron.agent.linux.ovsdb_monitor
  [-] Error received from ovsdb monitor: ovsdb-client:
  unix:/var/run/openvswitch/db.sock: receive failed (End of file)

  Looking at a stuck instance, all the right tunnels, bridges, and
  whatnot appear to be there:

  root@vector:~# ip l l | grep c-3b
  460002: qbr7ed8b59c-3b:  mtu 1500 qdisc 
noqueue state UP mode DEFAULT group default
  460003: qvo7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master ovs-system state UP mode DEFAULT group default qlen 1000
  460004: qvb7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master qbr7ed8b59c-3b state UP mode DEFAULT group default qlen 
1000
  460005: tap7ed8b59c-3b:  mtu 1500 qdisc 
pfifo_fast master qbr7ed8b59c-3b state UNKNOWN mode DEFAULT group default qlen 
500
  root@vector:~# ovs-vsctl list-ports br-int | grep c-3b
  qvo7ed8b59c-3b
  root@vector:~#

  But I can't ping the unit from within the qrouter-${id} namespace on
  the neutron gateway.  If I tcpdump the {q,t}*c-3b interfaces, I don't
  see any traffic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1460164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1638672] Re: No neat way to extend existing .po files for downstream customisations

2016-12-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/391506
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=36d1d1ac682c75167e5fe054f16eefe64988e3cf
Submitter: Jenkins
Branch: master

commit 36d1d1ac682c75167e5fe054f16eefe64988e3cf
Author: Rob Cresswell 
Date:   Thu Oct 6 14:27:22 2016 +0100

Refactor tox & update docs

- Updated tox envlist, so just running `tox` from the CLI will now run all
voting gate tests

- Reduce duplicated definitions and commands

- Remove any reliance on run_tests within tox

- Removes all doc references to run_tests.sh, and replaces them
with their tox equivalent. Where necessary, language around the tox
commands has been altered or extended so that it makes sense and is
consistent with other parts of the docs. Also adds a new "Test Environment"
list to the docs, so that newcomers do not have to piece together CLI
commands and their cryptic extensions from tox.ini

- Move the inline shell scripting to its own file. Also fixes a bug when
passing args, since the logic assumed you were attempting a subset test
run (try `tox -e py27 -- --pdb` on master to compare)

- Moved translation tooling from run_tests to manage.py, w/ help text
and arg restrictions. This is much more flexible so that plugins can use
it without having to copy commands, but still defaults to exactly the
same parameters/behaviour from run_tests. Docs updated appropriately.

- Removed npm/karma strange reliance on either .venv or tox/py27. Now
it only uses tox/npm.

Change-Id: I883f885bd424955d39ddcfde5ba396a88cfc041e
Implements: blueprint enhance-tox
Closes-Bug: 1638672


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1638672

Title:
  No neat way to extend existing .po files for downstream customisations

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Previously, downstream customisations could add translations by using
  `manage.py makemessages` and simply filling in the blanks in the
  generated .po files. Since the move to babel and .pot files,
  downstream now has to manually merge the .pot file into the .po files.

  We should add some helpers to cleanly update all the .po files with
  new strings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1638672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582278] Re: [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from one NUMA node and PCI device from another NUMA node.

2016-12-01 Thread Corey Bryant
Marking as fix released for xenial/mitaka since 13.1.2 is now in xenial-
updates and mitaka-updates.

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Invalid

** Changed in: nova (Ubuntu Xenial)
   Status: Triaged => Fix Released

** Changed in: cloud-archive/mitaka
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582278

Title:
  [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from
  one NUMA node and PCI device from another NUMA node.

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released
Status in nova source package in Yakkety:
  Fix Released

Bug description:
  Environment:
  Two NUMA nodes on compute host (node-0 and node-1).
  One SR-IOV PCI device associated with NUMA node-1.

  Steps to reproduce:

  Steps to reproduce:
   1) Deploy env with SR-IOV and CPU pinning enable
   2) Create new flavor with cpu pinning:
  nova flavor-show m1.small.performance
  
++---+
  | Property | Value |
  
++---+
  | OS-FLV-DISABLED:disabled | False |
  | OS-FLV-EXT-DATA:ephemeral | 0 |
  | disk | 20 |
  | extra_specs | {"hw:cpu_policy": "dedicated", "hw:numa_nodes": "1"} |
  | id | 7b0e5ee0-0bf7-4a46-9653-9279a947c650 |
  | name | m1.small.performance |
  | os-flavor-access:is_public | True |
  | ram | 2048 |
  | rxtx_factor | 1.0 |
  | swap | |
  | vcpus | 1 |
  
++
   3) download ubuntu image
   4) create sr-iov port and boot vm on this port with m1.small.performance 
flavor:
  NODE_1='node-4.test.domain.local'
  NODE_2='node-5.test.domain.local'
  NET_ID_1=$(neutron net-list | grep net_EW_2 | awk '{print$2}')
  neutron port-create $NET_ID_1 --binding:vnic-type direct --device_owner 
nova-compute --name sriov_23
  port_id=$(neutron port-list | grep 'sriov_23' | awk '{print$2}')
  nova boot vm23 --flavor m1.small.performance --image ubuntu_image 
--availability-zone nova:$NODE_1 --nic port-id=$port_id --key-name vm_key

  Expected results:
   VM is an ACTIVE state
  Actual result:
   In most cases the state is ERROR with following logs:

  2016-05-13 08:25:56.598 29097 ERROR nova.pci.stats 
[req-26138c0b-fa55-4ff8-8f3a-aad980e3c815 d864c4308b104454b7b46fb652f4f377 
9322dead0b5d440986b12596d9cbff5b - - -] Failed to allocate PCI devices for 
instance. Unassigning devices back to pools. This should not happen, since the 
scheduler should have accurate information, and allocation during claims is 
controlled via a hold on the compute node semaphore
  2016-05-13 08:25:57.502 29097 INFO nova.virt.libvirt.driver 
[req-26138c0b-fa55-4ff8-8f3a-aad980e3c815 d864c4308b104454b7b46fb652f4f377 
9322dead0b5d440986b12596d9cbff5b - - -] [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] Creating image
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
[req-26138c0b-fa55-4ff8-8f3a-aad980e3c815 d864c4308b104454b7b46fb652f4f377 
9322dead0b5d440986b12596d9cbff5b - - -] Instance failed network setup after 1 
attempt(s)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1570, in 
_allocate_network_async
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
bind_host_id=bind_host_id)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 666, in 
allocate_for_instance
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
self._delete_ports(neutron, instance, created_port_ids)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
self.force_reraise()
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
  2016-05-13 08:25:57.664 29097 ERROR nova.compute.manager   File 

[Yahoo-eng-team] [Bug 1608934] Re: ephemeral/swap disk creation fails for local storage with image type raw/lvm

2016-12-01 Thread Corey Bryant
Marking as fix released for xenial/mitaka since 13.1.2 is now in xenial-
updates and mitaka-updates.

** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

** Changed in: nova/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608934

Title:
  ephemeral/swap disk creation fails for local storage with image type
  raw/lvm

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Committed
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released

Bug description:
  Description
  ===
  I am currently trying to launch an instance in my mitaka cluster with a flavor with ephemeral and root storage. Whenever I try to start the instance, I run into a "DiskNotFound" error (see trace below). Starting instances without ephemeral storage works perfectly fine, and the root disk is created as expected in /var/lib/nova/instances/$INSTANCEID/disk.

  Steps to reproduce
  ==
  1. Create a flavor with ephemeral and root storage.
  2. Start an instance with that flavor.

  Expected result
  ===
  Instance starts and the ephemeral disk is created in /var/lib/nova/instances/$INSTANCEID/disk.eph0 or disk.local? (Not sure where the switch-case for the naming is.)

  Actual result
  =
  Instance does not start, ephemeral disk seems to be created at 
/var/lib/nova/instances/$INSTANCEID/disk.eph0, but nova checks 
/var/lib/nova/instances/_base/ephemeral_* for disk_size

  TRACE: http://pastebin.com/raw/TwtiNLY2

  Environment
  ===
  I am running OpenStack mitaka on Ubuntu 16.04 in the latest version with 
Libvirt + KVM as hypervisor (also latest stable in xenial).

  Config
  ==

  nova.conf:

  ...
  [libvirt]
  images_type = raw
  rbd_secret_uuid = XXX
  virt_type = kvm
  inject_key = true
  snapshot_image_format = raw
  disk_cachemodes = "network=writeback"
  rng_dev_path = /dev/random
  rbd_user = cinder
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1608934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632054] Re: Heat engine doesn't detect lbaas listener failures

2016-12-01 Thread Michael Johnson
The work in Octavia is complete for adding provisioning status to all of the objects. We just need to make sure that it is available via the APIs and clients.
Provisioning status work was done here: https://review.openstack.org/#/c/372791/

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632054

Title:
  Heat engine doesn't detect lbaas listener failures

Status in heat:
  Triaged
Status in octavia:
  Triaged

Bug description:
  Please refer to the mail-list for comments from other developers,
  https://openstack.nimeyo.com/97427/openstack-neutron-octavia-doesnt-
  detect-listener-failures

  I am trying to use heat to launch lb resources with Octavia as backend. The
  template I used is from
  
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml
  .

  Following are a few observations:

  1. Even though the Listener was created in ERROR status, heat still
  goes ahead and marks its creation complete. In the heat code, it only
  checks whether the root Loadbalancer status changes from
  PENDING_UPDATE to ACTIVE, and the Loadbalancer status is changed to
  ACTIVE regardless of the Listener's status.

  2. As the heat engine doesn't know about the Listener's creation
  failure, it continues to create the Pool\Member\Healthmonitor on top
  of a Listener which actually doesn't exist. This causes a few
  undefined behaviors. As a result, those LBaaS resources in ERROR state
  cannot be cleaned up with either the normal neutron or heat API.

  3. The bug is introduced here,
  https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/lbaas/listener.py#L188.
  It only checks the provisioning status of the root loadbalancer.
  However, the listener itself has its own provisioning status, which
  may go into ERROR.

  4. The same scenario applies not only to the listener but also to the
  pool, member, healthmonitor, etc.: basically every lbaas resource
  except the loadbalancer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1632054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571909] Re: Neutron-LBaaS v2: Deleting pool that has members changes state of other load balancers

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571909

Title:
  Neutron-LBaaS v2: Deleting pool that has members changes state of
  other load balancers

Status in octavia:
  Triaged

Bug description:
  As an admin user, perform the following for a given tenant:
  1.  Create Load_Balancer_1
  2.  Create a Pool_1 for Load_Balancer_1.
  3.  Add 1 member_1 to Pool_1.   (wait for Load_Balancer_1 to be Active)
  4.  Create Load_Balancer_2.
  5.  Create a Pool_2 for Load_Balancer_2.
  6.  Add 1 member_2 to Pool_2.  (wait for Load_Balancer_2 to be Active)
  7.  Delete Pool_2.
  8.  Do a list load balancers and observe state of Load_Balancer_1 and 
Load_Balancer_2.

  Actual Result:   Load_Balancer_1 status transitions to
  "PENDING_UPDATE".   Load_Balancer_2 status transitions to
  "PENDING_UPDATE".

  Expected: Load_Balancer_2 status should ONLY transition to
  "PENDING_UPDATE". Load_Balancer_1 should stay ACTIVE.

  note: The state seems to change to "PENDING_UPDATE" for all active
  load balancers for a given account when deleting a pool that has
  members.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1571909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592612] Re: LBaaS TLS is not working with non-admin tenant

2016-12-01 Thread Michael Johnson
To my knowledge we can grant ACL access to just the container the user is
requesting we use for the listener creation, so we would not be granting the
LBaaS service account access to all of the user's secrets, but just the ones
that user is requesting we use for the listener.
Is that a misunderstanding?
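
For context, a per-container ACL grant with python-barbicanclient looks
roughly like this (a hedged sketch; the refs and IDs are placeholders):

    # Grant the LBaaS service user read access to one container only,
    # rather than to all of the user's secrets.
    acl = barbican.acls.get(entity_ref=container_ref)
    acl.read.users.append(lbaas_service_user_id)
    acl.submit()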

** Changed in: octavia
   Status: New => Confirmed

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592612

Title:
  LBaaS TLS is not working with non-admin tenant

Status in Barbican:
  New
Status in octavia:
  Confirmed

Bug description:
  I went through https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-
  to-create-tls-loadbalancer with devstack. And all my branches were set
  to stable/mitaka.

  If I set my user and tenant as "admin admin", the workflow passed.
  But it failed if I set the user and tenant to "admin demo" and reran
  all the steps.

  Steps to reproduce:
  1. source ~/devstack/openrc admin demo
  2. barbican secret store --payload-content-type='text/plain' 
--name='certificate' --payload="$(cat server.crt)"
  3. barbican secret store --payload-content-type='text/plain' 
--name='private_key' --payload="$(cat server.key)"
  4. barbican secret container create --name='tls_container' 
--type='certificate' --secret="certificate=$(barbican secret list | awk '/ 
certificate / {print $2}')" --secret="private_key=$(barbican secret list | awk 
'/ private_key / {print $2}')"
  5. neutron lbaas-loadbalancer-create $(neutron subnet-list | awk '/ 
private-subnet / {print $2}') --name lb1
  6. neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 
--protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(barbican 
secret container list | awk '/ tls_container / {print $2}')

  
  The error message I got is:
  $ neutron lbaas-listener-create --loadbalancer 
738689bd-b54e-485e-b742-57bd6e812270 --protocol-port 443 --protocol 
TERMINATED_HTTPS --name listener2 --default-tls-container=$(barbican secret 
container list | awk '/ tls_container / {print $2}')
  WARNING:barbicanclient.barbican:This Barbican CLI interface has been 
deprecated and will be removed in the O release. Please use the openstack 
unified client instead.
  DEBUG:stevedore.extension:found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('json = 
cliff.formatters.json_format:JSONFormatter')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('csv = 
cliff.formatters.commaseparated:CSVLister')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('value = 
cliff.formatters.value:ValueFormatter')
  DEBUG:stevedore.extension:found extension EntryPoint.parse('yaml = 
cliff.formatters.yaml_format:YAMLFormatter')
  DEBUG:barbicanclient.client:Creating Client object
  DEBUG:barbicanclient.containers:Listing containers - offset 0 limit 10 name 
None type None
  DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://192.168.100.148:5000/v2.0/tokens
  INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection 
(1): 192.168.100.148
  Starting new HTTP connection (1): 192.168.100.148
  DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 
200 3924
  DEBUG:keystoneclient.session:REQ: curl -g -i -X GET 
http://192.168.100.148:9311 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection 
(1): 192.168.100.148
  Starting new HTTP connection (1): 192.168.100.148
  DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 300 353
  DEBUG:keystoneclient.session:RESP: [300] Content-Length: 353 Content-Type: 
application/json; charset=UTF-8 Connection: close
  RESP BODY: {"versions": {"values": [{"status": "stable", "updated": 
"2015-04-28T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.key-manager-v1+json"}], "id": "v1", "links": 
[{"href": "http://192.168.100.148:9311/v1/", "rel": "self"}, {"href": 
"http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
  DEBUG:keystoneclient.session:REQ: curl -g -i -X GET 
http://192.168.100.148:9311/v1/containers -H "User-Agent: 
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}203d7de65f6cfb1fb170437ae2da98fef35f0942"
  INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: 
192.168.100.148
  Resetting dropped connection: 192.168.100.148
  DEBUG:requests.packages.urllib3.connectionpool:"GET 
/v1/containers?limit=10&offset=0 HTTP/1.1" 200 585
  DEBUG:keystoneclient.session:RESP: [200] Connection: close Content-Type: 
application/json; charset=UTF-8 Content-Length: 585 x-openstack-request-id: 
req-aa4bb861-3d1d-42c6-be3d-5d3935622043
  RESP BODY: {"total": 1, "containers": 

[Yahoo-eng-team] [Bug 1439696] Re: Referencing a lb-healthmonitor ID for the first time from Heat would fail

2016-12-01 Thread Michael Johnson
Can we confirm this is an issue with LBaaSv2 and it is still occurring?
If so, what OpenStack release is being used?

** Project changed: neutron => octavia

** Changed in: octavia
Milestone: ocata-2 => None

** Changed in: octavia
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439696

Title:
  Referencing a lb-healthmonitor ID for the first time from Heat would
  fail

Status in octavia:
  Incomplete

Bug description:
  Creating a stack with heat that creates a lb-healthmonitor would result
  in a 404 for that ID. This happens only on the first attempt. Deleting
  the heat stack and recreating it from the same template succeeds, so it
  does not look like an issue originating from heat. Later operations by
  either neutron or Heat would succeed, and the only way to reproduce
  this specific issue is to unstack and re-stack.

  From heat's log (has neutron's answer):

  REQ: curl -i http://10.35.160.83:9696//v2.0/lb/health_monitors.json -X POST 
-H "User-Agent: python-neutronclient" -H "X-Auth-Token: 
40357276a5b34f1bb4980d566d36e9c4" -d '{"health_monitor": {"delay": 5, "max_retr
   from (pid=10195) http_log_req 
/usr/lib/python2.7/site-packages/neutronclient/common/utils.py:130
  2015-04-02 15:24:19.791 DEBUG neutronclient.client [-] RESP:404 {'date': 
'Thu, 02 Apr 2015 12:24:19 GMT', 'connection': 'keep-alive', 'content-type': 
'text/plain; cha

  The resource could not be found.

   from (pid=10195) http_log_resp 
/usr/lib/python2.7/site-packages/neutronclient/common/utils.py:139
  2015-04-02 15:24:19.791 DEBUG neutronclient.v2_0.client [-] Error message: 
404 Not Found

  The resource could not be found.

  from (pid=10195) _handle_fault_response 
/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:173
  2015-04-02 15:24:19.792 INFO heat.engine.resource [-] CREATE: HealthMonitor 
"monitor" Stack "test-001-load_balancer-ukmrf56u2dm4" 
[7aab3fa0-b71d-47b3-acc5-4767cb23b99
  2015-04-02 15:24:19.792 TRACE heat.engine.resource Traceback (most recent 
call last):
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 466, in _action_recorder
  2015-04-02 15:24:19.792 TRACE heat.engine.resource yield
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 536, in _do_action
  2015-04-02 15:24:19.792 TRACE heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/scheduler.py", line 295, in wrapper
  2015-04-02 15:24:19.792 TRACE heat.engine.resource step = next(subtask)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 507, in action_handler_task
  2015-04-02 15:24:19.792 TRACE heat.engine.resource handler_data = 
handler(*args)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resources/neutron/loadbalancer.py", line 146, in 
handle_create
  2015-04-02 15:24:19.792 TRACE heat.engine.resource {'health_monitor': 
properties})['health_monitor']
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 99, in 
with_params
  2015-04-02 15:24:19.792 TRACE heat.engine.resource ret = 
self.function(instance, *args, **kwargs)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1064, in 
create_health_monitor
  2015-04-02 15:24:19.792 TRACE heat.engine.resource return 
self.post(self.health_monitors_path, body=body)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 295, in 
post
  2015-04-02 15:24:19.792 TRACE heat.engine.resource headers=headers, 
params=params)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 208, in 
do_request
  2015-04-02 15:24:19.792 TRACE heat.engine.resource 
self._handle_fault_response(status_code, replybody)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 182, in 
_handle_fault_response
  2015-04-02 15:24:19.792 TRACE heat.engine.resource 
exception_handler_v20(status_code, des_error_body)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 80, in 
exception_handler_v20
  2015-04-02 15:24:19.792 TRACE heat.engine.resource message=message)
  2015-04-02 15:24:19.792 TRACE heat.engine.resource NeutronClientException: 
404 Not Found
  2015-04-02 15:24:19.792 TRACE heat.engine.resource
 

[Yahoo-eng-team] [Bug 1622914] Re: agent traces about bridge-nf-call sysctl values missing

2016-12-01 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622914

Title:
  agent traces about bridge-nf-call sysctl values missing

Status in devstack:
  Fix Released
Status in neutron:
  Fix Released
Status in tripleo:
  Triaged

Bug description:
  spotted in gate:

  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call 
last):
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 450, in daemon_loop
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent sync = 
self.process_network_devices(device_info)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in 
wrapper
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent return f(*args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 200, in process_network_devices
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent device_info.get('updated'))
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 265, in 
setup_port_filters
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.prepare_devices_filter(new_devices)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 130, in 
decorated_function
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent *args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 138, in 
prepare_devices_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._apply_port_filter(device_ids)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 163, in 
_apply_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.firewall.prepare_port_filter(device)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 170, in 
prepare_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._enable_netfilter_for_bridges()
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 114, in 
_enable_netfilter_for_bridges
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent run_as_root=True)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent raise RuntimeError(msg)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent RuntimeError: Exit code: 255; 
Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent
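
  The crash happens because the agent unconditionally runs sysctl against
  /proc/sys/net/bridge/* keys that only exist once the br_netfilter
  module is loaded. A minimal sketch of a tolerant variant (assumed
  helper names; not the actual fix that landed):

    import os

    BRIDGE_NF_KEYS = (
        'net.bridge.bridge-nf-call-arptables',
        'net.bridge.bridge-nf-call-iptables',
        'net.bridge.bridge-nf-call-ip6tables',
    )

    def enable_netfilter_for_bridges(execute):
        for key in BRIDGE_NF_KEYS:
            path = '/proc/sys/' + key.replace('.', '/')
            if not os.path.exists(path):
                # Missing /proc entry: br_netfilter is not loaded, so
                # warn instead of letting sysctl exit 255 and crash the
                # agent's daemon loop.
                print('warning: %s missing; is br_netfilter loaded?' % path)
                continue
            execute(['sysctl', '-w', '%s=1' % key])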

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1622914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646629] [NEW] tox -edocs fails when tox picks python 3.x

2016-12-01 Thread melanie witt
Public bug reported:

Currently, the docs target can't be run with python 3.x and fails with a
trace like this:

Traceback (most recent call last):
  File "setup.py", line 29, in <module>
    pbr=True)
  File "/usr/lib/python3.5/distutils/core.py", line 148, in setup
dist.run_commands()
  File "/usr/lib/python3.5/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
  File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/pbr/builddoc.py", line 
196, in run
self._sphinx_run()
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/pbr/builddoc.py", line 
150, in _sphinx_run
app.build(force_all=self.all_files)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/sphinx/application.py",
 line 261, in build
self.builder.build_all()
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/sphinx/builders/__init__.py",
 line 211, in build_all
self.build(None, summary='all source files', method='all')
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/sphinx/builders/__init__.py",
 line 265, in build
self.doctreedir, self.app))
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/sphinx/environment.py",
 line 547, in update
self._read_serial(docnames, app)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/sphinx/environment.py",
 line 567, in _read_serial
self.read_doc(docname, app)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/sphinx/environment.py",
 line 720, in read_doc
pub.publish()
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/core.py", 
line 217, in publish
self.settings)
  File "/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/sphinx/io.py", 
line 46, in read
self.parse()
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/readers/__init__.py",
 line 78, in parse
self.parser.parse(self.input, document)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/__init__.py",
 line 172, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 170, in run
input_source=document['source'])
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/statemachine.py",
 line 239, in run
context, state, transitions)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/statemachine.py",
 line 460, in check_line
return method(match, context, next_state)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 2961, in text
self.section(title.lstrip(), source, style, lineno + 1, messages)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 327, in section
self.new_subsection(title, lineno, messages)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 395, in new_subsection
node=section_node, match_titles=True)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 282, in nested_parse
node=node, match_titles=match_titles)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 195, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/statemachine.py",
 line 239, in run
context, state, transitions)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/statemachine.py",
 line 460, in check_line
return method(match, context, next_state)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 2726, in underline
self.section(title, source, style, lineno - 1, messages)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 327, in section
self.new_subsection(title, lineno, messages)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 395, in new_subsection
node=section_node, match_titles=True)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 282, in nested_parse
node=node, match_titles=match_titles)
  File 
"/home/ubuntu/nova/.tox/docs/lib/python3.5/site-packages/docutils/parsers/rst/states.py",
 line 195, in run
results = StateMachineWS.run(self, input_lines, input_offset)
  File 

[Yahoo-eng-team] [Bug 1646604] [NEW] event posting fails for check-cache and init-local

2016-12-01 Thread David Britton
Public bug reported:

maas  2.1.2+bzr-0ubuntu1~16.04.1
cloud-init 0.7.8-49-g9e904bb-0ubuntu1~16.04.1


When I boot a xenial node with cloud-init, I get these 4 failed event post-backs:

Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
start: init-local/check-cache: attempting to read from cache [trust]
Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
finish: init-local/check-cache: SUCCESS: no cache found
Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
finish: init-local: SUCCESS: searching for local datasources
Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
start: init-network/check-cache: attempting to read from cache [trust]


I'd expect them to work and run cleanly.

paste of cloud-init log file:

http://paste.ubuntu.com/23564700/

The node deploys successfully, so this is not a major issue for us, but
it seems like noise that should be fixed.

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: landscape

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1646604

Title:
  event posting fails for check-cache and init-local

Status in cloud-init:
  New

Bug description:
  maas  2.1.2+bzr-0ubuntu1~16.04.1
  cloud-init 0.7.8-49-g9e904bb-0ubuntu1~16.04.1

  
  When I boot a xenial node with cloud-init, I get these 4 failed event post-backs:

  Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
start: init-local/check-cache: attempting to read from cache [trust]
  Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
finish: init-local/check-cache: SUCCESS: no cache found
  Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
finish: init-local: SUCCESS: searching for local datasources
  Dec  1 07:16:53 bohr [CLOUDINIT] handlers.py[WARNING]: failed posting event: 
start: init-network/check-cache: attempting to read from cache [trust]

  
  I'd expect them to work and run cleanly.

  paste of cloud-init log file:

  http://paste.ubuntu.com/23564700/

  The node deploys successfully, so this is not a major issue for us,
  but it seems like noise that should be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1646604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646571] Re: apt failures non fatal, but cloud the log

2016-12-01 Thread Scott Moser
** Also affects: maas
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1646571

Title:
  apt failures non fatal, but cloud the log

Status in cloud-init:
  New
Status in MAAS:
  New

Bug description:
  Better formatting: http://paste.ubuntu.com/23564366/

  Dec 01 17:29:15 kepler cloud-init[1134]: 2016-12-01 17:29:15,900 - 
util.py[WARNING]: Running module apt-configure () failed
  Dec 01 17:29:15 kepler cloud-init[1134]: [CLOUDINIT] util.py[WARNING]: 
Running module apt-configure () failed
  Dec 01 17:29:15 kepler cloud-init[1134]: [CLOUDINIT] util.py[DEBUG]: Running 
module apt-configure () failed
   Traceback (most recent call last):
     File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 792, in _run_modules
   freq=freq)
     File 
"/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
   return self._runners.run(name, 
functor, args, freq, clear_on_fail)
     File 
"/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
   results = functor(*args)
     File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
284, in handle
   ocfg = 
convert_to_v3_apt_format(ocfg)
     File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
748, in convert_to_v3_apt_format
   cfg = 
convert_v2_to_v3_apt_format(cfg)
     File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
714, in convert_v2_to_v3_apt_format
   oldkey))
   ValueError: Old and New apt format 
defined with unequal values True vs False @ apt_preserve_sources_list
  D
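
  The ValueError comes from the converter finding the same setting
  expressed in both the old flat form and the new nested form with
  different values. A hedged reconstruction of the situation (the dict
  shapes are assumptions; the key names follow the log above):

    old_style = {'apt_preserve_sources_list': True}        # v1/v2 key
    new_style = {'apt': {'preserve_sources_list': False}}  # v3 key

    def check_conflict(cfg_old, cfg_new):
        old = cfg_old['apt_preserve_sources_list']
        new = cfg_new['apt']['preserve_sources_list']
        if old != new:
            # Reproduces the error in the traceback above.
            raise ValueError(
                'Old and New apt format defined with unequal values '
                '%s vs %s @ apt_preserve_sources_list' % (old, new))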

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1646571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470615] Re: retry decorator mask underlying exceptions

2016-12-01 Thread Isaku Yamahata
** Changed in: networking-odl
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470615

Title:
  retry decorator mask underlying exceptions

Status in networking-odl:
  Fix Released
Status in networking-ovn:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  message:"Unrecognized attribute(s) 'network:tenant_id'"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5yZWNvZ25pemVkIGF0dHJpYnV0ZShzKSAnbmV0d29yazp0ZW5hbnRfaWQnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzU3NzUxNjIyMDR9

  message:"EncodeError: Circular reference detected"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRW5jb2RlRXJyb3I6IENpcmN1bGFyIHJlZmVyZW5jZSBkZXRlY3RlZFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM1Nzc1MjE2MTk4fQ==

  Trace examples:

  http://logs.openstack.org/03/192003/9/check/check-tempest-dsvm-
  networking-odl/8f05ff0/logs/screen-q-svc.txt.gz?level=TRACE

  Culprit:

  https://review.openstack.org/#/c/191540

  Analysis:

  When we retry on a deadlock, we do so by passing a resource that's
  being mutated, which by then contains a bunch of data added by the
  policy engine and post-API processing... and that causes all sorts of
  problems.
To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1470615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646571] [NEW] apt failures non fatal, but cloud the log

2016-12-01 Thread David Britton
Public bug reported:

Better formatting: http://paste.ubuntu.com/23564366/

Dec 01 17:29:15 kepler cloud-init[1134]: 2016-12-01 17:29:15,900 - 
util.py[WARNING]: Running module apt-configure () failed
Dec 01 17:29:15 kepler cloud-init[1134]: [CLOUDINIT] util.py[WARNING]: Running 
module apt-configure () failed
Dec 01 17:29:15 kepler cloud-init[1134]: [CLOUDINIT] util.py[DEBUG]: Running 
module apt-configure () failed
 Traceback (most recent call last):
   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 792, in _run_modules
 freq=freq)
   File 
"/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
 return self._runners.run(name, 
functor, args, freq, clear_on_fail)
   File 
"/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
 results = functor(*args)
   File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
284, in handle
 ocfg = 
convert_to_v3_apt_format(ocfg)
   File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
748, in convert_to_v3_apt_format
 cfg = 
convert_v2_to_v3_apt_format(cfg)
   File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
714, in convert_v2_to_v3_apt_format
 oldkey))
 ValueError: Old and New apt format 
defined with unequal values True vs False @ apt_preserve_sources_list
D

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

+ Better formatting: http://paste.ubuntu.com/23564366/
+ 
  Dec 01 17:29:15 kepler cloud-init[1134]: 2016-12-01 17:29:15,900 - 
util.py[WARNING]: Running module apt-configure () failed
  Dec 01 17:29:15 kepler cloud-init[1134]: [CLOUDINIT] util.py[WARNING]: 
Running module apt-configure () failed
  Dec 01 17:29:15 kepler cloud-init[1134]: [CLOUDINIT] util.py[DEBUG]: Running 
module apt-configure () failed
-  Traceback (most recent call last):
-File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 792, in _run_modules
-  freq=freq)
-File 
"/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
-  return self._runners.run(name, 
functor, args, freq, clear_on_fail)
-File 
"/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
-  results = functor(*args)
-File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
284, in handle
-  ocfg = 
convert_to_v3_apt_format(ocfg)
-File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
748, in convert_to_v3_apt_format
-  cfg = 
convert_v2_to_v3_apt_format(cfg)
-File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
714, in convert_v2_to_v3_apt_format
-  oldkey))
-  ValueError: Old and New apt format 
defined with unequal values True vs False @ apt_preserve_sources_list
+  Traceback (most recent call last):
+    File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 792, in _run_modules
+  freq=freq)
+    File 
"/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
+  return self._runners.run(name, 
functor, args, freq, clear_on_fail)
+    File 
"/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
+  results = functor(*args)
+    File 
"/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 
284, in handle
+  ocfg = 
convert_to_v3_apt_format(ocfg)
+    File 

[Yahoo-eng-team] [Bug 1498476] Re: LBaas-LB performance just have 1 G low performance than LB bypass have 4G

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498476

Title:
  LBaas-LB performance  just have 1 G  low performance than LB bypass
  have 4G

Status in octavia:
  Confirmed

Bug description:
  With the LB in the path we only get 1G of throughput, while bypassing
  the LB gives 4G.

  Setup info:

  For the LB bypass case, the client sends traffic directly to the server
  without the LB, and we get 4G of throughput.
  For the LB case, the client sends traffic through the LB, and we only
  get 1G.
  So the LB is the bottleneck.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1498476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498359] Re: lbaas:after create 375 LB pool , the new lb -pool and vip get in error status

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498359

Title:
  lbaas:after create 375 LB pool , the new lb -pool and vip get in error
  status

Status in octavia:
  Incomplete

Bug description:

  1. Create a two-arm LB with 1 client and 1 backend server on a tenant.
  2. Repeat step 1 to create 375 tenants.
  3. After step 2, the LB network becomes unstable.

  [root@nsj13 ~(keystone_admin)]# neutron lb-vip-list |grep ERROR
  | 054bc376-ff50-40cb-b003-831569b41f0b | Scale1_vip_420 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 06fcb386-4563-48d1-b6de-003967a97a54 | Scale1_vip_418 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 160f9c87-5cb1-4a5e-9829-cc5de30114c6 | Scale1_vip_444 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 1f49c1ca-0dc9-4f6a-9efd-2c741fcc6ff3 | Scale1_vip_380 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 25040c46-446d-4e3f-841c-da9166bb4ad0 | Scale1_vip_384 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 2690bbdb-30e3-4c6c-b346-f96be83ea745 | Scale1_vip_419 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 27e9fbb4-a3c8-4084-af11-c3e71af1a762 | Scale1_vip_431 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 283aea00-ba23-403d-a4cd-0d275c36c53f | Scale1_vip_375 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 28dbb59d-28a9-4af2-b02a-9f953d4c9dd3 | Scale1_vip_428 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 30e99e31-2739-4d1b-a05e-f67d41f26c14 | Scale1_vip_430 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 37ee787f-d684-4968-95ea-15666b0cd0e7 | Scale1_vip_410 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 38fe0379-2c5a-4bcc-9aab-c0f75466b5bd | Scale1_vip_408 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 4190852c-0327-4b05-b1fd-20ef62cf06c0 | Scale1_vip_450 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 461a6da3-07e3-47d3-9d47-18ba2add0692 | Scale1_vip_445 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 49fb200a-9a73-4d93-a2ef-a5d033377ee2 | Scale1_vip_378 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 556a7e3a-d922-43c8-b584-416a8689421d | Scale1_vip_383 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 55a500b0-682d-4fc6-93e3-588dfc590b35 | Scale1_vip_421 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 5cb8347d-fbcc-497e-ba47-244d5b3f394b | Scale1_vip_429 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 606380c4-fd1e-43a6-b67d-01a01c7ad327 | Scale1_vip_412 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 60ff74a8-c464-49d0-9f8c-143c7629201e | Scale1_vip_407 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 62ce8e28-0c8f-41e1-831d-5ef7500dbb1f | Scale1_vip_453 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 691ab6a0-c06e-4e6a-9c7d-b70fc72146e0 | Scale1_vip_436 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 6f5c12d8-3640-45b5-a600-19f57d2c | Scale1_vip_427 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 746ab955-20da-41cc-9830-9c33e41abc9c | Scale1_vip_415 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 75342fd8-99bd-4527-92f5-7f3d91193adb | Scale1_vip_414 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 851311f3-d84d-43e4-9bc2-6fa08c5d4e7e | Scale1_vip_416 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 8624e33d-bc23-4b78-bde2-ceb2f12e6b65 | Scale1_vip_417 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 895e8ee6-1e78-44e5-a542-3713e07f49cb | Scale1_vip_377 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 94855fcf-03fc-431a-b6bb-58abc0f2a084 | Scale1_vip_437 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 9af109fa-f301-4942-a559-f6ad75459f3d | Scale1_vip_448 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | 9cf4bfa3-0073-4a46-a437-2f960ac41d34 | Scale1_vip_422 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | a138b439-c8b8-459f-85b6-3193d9df5e6d | Scale1_vip_433 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | a49b0dd1-55ce-4b52-8024-ce8f3219feab | Scale1_vip_435 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | ae57eadd-3a2d-457c-9149-014d43b2a91b | Scale1_vip_374 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | b70e7123-3e92-4ad1-bdf3-da58d5ed1a9f | Scale1_vip_426 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | b79eef81-8631-4012-97ed-b68cf7f5706b | Scale1_vip_434 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | bffecb85-1992-4c54-a411-8b8064c5b6ec | Scale1_vip_405 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | c51a6ed5-3134-412c-aad0-320ed7e7aaa1 | Scale1_vip_373 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | c6b8bb0f-eeca-4f2a-b9be-6162017daa5c | Scale1_vip_424 | 20.1.1.100  | HTTP  
   | True   | ERROR  |
  | c8153eaa-fc15-4af4-91ce-eb26c8ff3478 | Scale1_vip_432 | 

[Yahoo-eng-team] [Bug 1646555] Re: oslo.privsep: --config-dir is a list but is treated like a scalar

2016-12-01 Thread Nicolas Bock
I submitted https://review.openstack.org/#/c/405544/

** Changed in: fuel-plugin-contrail
 Assignee: (unassigned) => Nicolas Bock (nicolasbock)

** Project changed: fuel-plugin-contrail => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646555

Title:
  oslo.privsep: --config-dir is a list but is treated like a scalar

Status in OpenStack Compute (nova):
  New

Bug description:
  The --config-dir in oslo_config.ConfigOpts() is a list but is treated
  as a scalar value in oslo_privsep.PrivContext.helper_command(). This
  leads to subprocess blowing up with:

  TypeError: execv() arg 2 must contain only strings
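
  An illustration of the failure mode; this is a simplified sketch, not
  the actual oslo.privsep source:

    def helper_command(config_file=None, config_dir=None):
        cmd = ['sudo', 'privsep-helper']
        if config_file:
            cmd += ['--config-file', config_file]
        if config_dir:
            # BUG: config_dir is a list (the option can repeat), so this
            # embeds a list inside argv; execv() later raises
            # "TypeError: execv() arg 2 must contain only strings".
            cmd += ['--config-dir', config_dir]
            # Fix sketch: emit one --config-dir per entry instead:
            #   for d in config_dir:
            #       cmd += ['--config-dir', d]
        return cmd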

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1646555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646555] [NEW] oslo.privsep: --config-dir is a list but is treated like a scalar

2016-12-01 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The --config-dir in oslo_config.ConfigOpts() is a list but is treated as
a scalar value in oslo_privsep.PrivContext.helper_command(). This leads
to subprocess blowing up with:

TypeError: execv() arg 2 must contain only strings

** Affects: nova
 Importance: Undecided
 Assignee: Nicolas Bock (nicolasbock)
 Status: New

-- 
oslo.privsep: --config-dir is a list but is treated like a scalar
https://bugs.launchpad.net/bugs/1646555
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646563] [NEW] Allow user specify custom project ID on project create

2016-12-01 Thread Andrey Grebennikov
Public bug reported:

As a user, I need to be able to specify the project ID upon creation:
"openstack project create --id xxxyyyzzz project1"

The use case covered by this functionality:

Multi-site deployment with an independent Keystone in each site and
centralized authentication.
In this case, with the feature implemented, it is possible to re-use a
token from one environment in the others, allowing users to switch
between environments easily, and allowing automated tools to manage
workloads and collect statistics without additional acts of
authorization.

** Affects: keystone
 Importance: Undecided
 Assignee: Andrey Grebennikov (agrebennikov)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Andrey Grebennikov (agrebennikov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1646563

Title:
  Allow user specify custom project ID on project create

Status in OpenStack Identity (keystone):
  New

Bug description:
  As a user, I need to be able to specify the project ID upon creation:
  "openstack project create --id xxxyyyzzz project1"

  The use case covered by this functionality:

  Multi-site deployment with an independent Keystone in each site and
  centralized authentication.
  In this case, with the feature implemented, it is possible to re-use a
  token from one environment in the others, allowing users to switch
  between environments easily, and allowing automated tools to manage
  workloads and collect statistics without additional acts of
  authorization.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1646563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1635449] Re: Too many pools created from heat template when both listeners and pools depend on an item

2016-12-01 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635449

Title:
  Too many pools created from heat template when both listeners and
  pools depend on an item

Status in octavia:
  Triaged

Bug description:
  When you deploy a heat template that has both listeners and pools
  depending on an item, then due to the order of locking you may get
  additional pools created erroneously.

  Excerpt of a heat template showing the issue:

  # LOADBALANCERS #

test-loadbalancer:
  type: OS::Neutron::LBaaS::LoadBalancer
  properties:
name: test
description: test
vip_subnet: { get_param: subnet }

  # LISTENERS #

http-listener:
  type: OS::Neutron::LBaaS::Listener
  depends_on: test-loadbalancer
  properties:
name: listener1
description: listener1
protocol_port: 80
loadbalancer: { get_resource: test-loadbalancer } 
protocol: HTTP

https-listener:
  type: OS::Neutron::LBaaS::Listener
  depends_on: http-listener
  properties:
name: listener2
description: listener2
protocol_port: 443
loadbalancer: { get_resource: test-loadbalancer }
protocol: TERMINATED_HTTPS
default_tls_container_ref: ''

  # POOLS #

http-pool:
  type: OS::Neutron::LBaaS::Pool
  depends_on: http-listener
  properties:
name: pool1
description: pool1
lb_algorithm: 'ROUND_ROBIN'
listener: { get_resource: http-listener }
protocol: HTTP

https-pool:
  type: OS::Neutron::LBaaS::Pool
  depends_on: https-listener
  properties:
name: pool2
description: pool2
lb_algorithm: 'ROUND_ROBIN'
listener: { get_resource: https-listener }
protocol: HTTP

  After the http-listener is created, both a pool and another listener
  attempt to be created, but we end up with a number of extra pools (not
  always the same number).

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1635449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577630] Re: ERROR neutron_lbaas.agent.agent_manager - HaproxyNSDriver

2016-12-01 Thread Michael Johnson
Moving from Ubuntu neutron-lbaas package project to Octavia, the new
owner for neutron-lbaas.

** Package changed: neutron-lbaas (Ubuntu) => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577630

Title:
  ERROR neutron_lbaas.agent.agent_manager - HaproxyNSDriver

Status in neutron:
  Invalid
Status in octavia:
  Incomplete

Bug description:
  Hi,

  I use OpenStack Liberty with Ubuntu 14.04.

  I installed LBaaSv2;

  # aptitude install haproxy neutron-lbaasv2-agent python-neutron-lbaas

  /etc/neutron/neutron.conf
  [DEFAULT]
  service_plugins = 
router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
  [service_providers]
  service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  /etc/neutron/neutron_lbaas.conf
  [service_providers]
  
service_provider=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  /etc/neutron/lbaas_agent.ini
  [DEFAULT]
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  ovs_use_veth = False
  device_driver = 
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
  [haproxy]
  user_group = haproxy

  # tail -f /var/log/neutron/neutron-lbaasv2-agent.log
  2016-05-01 21:12:16.266 6459 INFO neutron.common.config [-] Logging enabled!
  2016-05-01 21:12:16.267 6459 INFO neutron.common.config [-] 
/usr/bin/neutron-lbaasv2-agent version 7.0.3
  2016-05-01 21:12:16.294 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on controller:5672
  2016-05-01 21:12:16.318 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on controller:5672
  2016-05-01 21:12:16.335 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672
  2016-05-01 21:12:16.337 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672
  2016-05-01 21:12:16.475 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on controller:5672
  2016-05-01 21:12:16.495 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672

  $ neutron lbaas-loadbalancer-create --name lb01 
f64e80e1-da2a-40b7-ba9b-903dc72242ba
  Created a new loadbalancer:
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | description |  |
  | id  | 599a3226-5239-4100-a118-5ad1fe6ea55d |
  | listeners   |  |
  | name| lb01 |
  | operating_status| OFFLINE  |
  | provider| haproxy  |
  | provisioning_status | PENDING_CREATE   |
  | tenant_id   | 60d7922e29e44dc1ac62df3fe7ba3841 |
  | vip_address | 192.168.100.15   |
  | vip_port_id | 5f998bc7-17bc-4bb8-9fb5-07a6b9de5860 |
  | vip_subnet_id   | f64e80e1-da2a-40b7-ba9b-903dc72242ba |
  +-+--+

  neutron lbaas-loadbalancer-list
  
+--+--++-+--+
  | id   | name | vip_address| 
provisioning_status | provider |
  
+--+--++-+--+
  | 599a3226-5239-4100-a118-5ad1fe6ea55d | lb01 | 192.168.100.15 | ERROR
   | haproxy  |
  
+--+--++-+--+

  $ tail -f /var/log/neutron/neutron-lbaasv2-agent.log 
  2016-05-01 21:12:16.337 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672
  2016-05-01 21:12:16.475 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on controller:5672
  2016-05-01 21:12:16.495 6459 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672
  2016-05-01 21:40:43.582 6459 ERROR neutron_lbaas.agent.agent_manager 
[req-214be154-00fb-4c8a-8fe3-7212ffc19cb2 f0c587f6748346d9a7dd1a6d911da3b4 
60d7922e29e44dc1ac62df3fe7ba3841 - - -] Create loadbalancer 
599a3226-5239-4100-a118-5ad1fe6ea55d failed on device driver haproxy_ns
  2016-05-01 21:40:43.582 6459 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2016-05-01 21:40:43.582 6459 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/agent/agent_manager.py", line 
270, in 

[Yahoo-eng-team] [Bug 1602057] Re: (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

2016-12-01 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Tags added: sts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1602057

Title:
  (libvirt) KeyError updating resources for some node, guest.uuid is not
  in BDM list

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  New
Status in OpenStack Compute (nova) newton series:
  Fix Committed

Bug description:
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
[req-d5d5d486-b488-4429-bbb5-24c9f19ff2c0 - - - - -] Error updating resources 
for node controller.
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6726, in 
update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 500, 
in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5728, in 
get_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
disk_over_committed = self._get_disk_over_committed_size_total()
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7397, in 
_get_disk_over_committed_size_total
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager KeyError: 
'0a5c5743-9555-4dfd-b26e-198449ebeee5'
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager
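
  For context, a defensive-lookup sketch of the failing spot; the names
  mirror the traceback above, but this is not the actual upstream patch:

    def disk_over_committed_total(guests, local_instances, bdms, per_guest):
        # 'per_guest' stands in for the real per-instance disk
        # calculation (assumed callable).
        total = 0
        for guest in guests:
            instance = local_instances.get(guest.uuid)
            bdm = bdms.get(guest.uuid)
            if instance is None or bdm is None:
                # The guest is unknown to nova (e.g. mid-deletion), so
                # skip it instead of raising KeyError as in the
                # traceback above.
                continue
            total += per_guest(instance, bdm)
        return total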

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1602057/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630920] Re: native/idl ovsdb driver loses some ovsdb transactions

2016-12-01 Thread Thomas Morin
** Changed in: bgpvpn
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630920

Title:
  native/idl ovsdb driver loses some ovsdb transactions

Status in networking-bgpvpn:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  It seems the 'native' and the 'vsctl' ovsdb drivers behave
  differently. The native/idl driver seems to lose some ovsdb
  transactions, at least the transactions setting the 'other_config' ovs
  port attribute.

  I have written about this in a comment of an earlier bug report
  (https://bugs.launchpad.net/neutron/+bug/1626010). But I opened this
  new bug report because the two problems seem to be independent and
  that other comment may have gone unnoticed.

  It is not completely clear to me what difference this causes in user-
  observable behavior. I think it at least leads to losing information
  about which conntrack zone to use in the openvswitch firewall driver.
  See here:

  
https://github.com/openstack/neutron/blob/3ade301/neutron/agent/linux/openvswitch_firewall/firewall.py#L257

  The details:

  If I use the vsctl ovsdb driver:

  ml2_conf.ini:
  [ovs]
  ovsdb_interface = vsctl

  then I see this:

  $ > /opt/stack/logs/q-agt.log
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server create --flavor cirros256 --image cirros-0.3.4-x86_64-uec 
--nic net-id=net0 --wait vm0
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  2
  $ openstack server delete vm0
  $ sleep 3
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ egrep -c 'Transaction caused no change' /opt/stack/logs/q-agt.log 
  0

  But if I use the (default) native driver:

  ml2_conf.ini:
  [ovs]
  ovsdb_interface = native

  Then this happens:

  $ > /opt/stack/logs/q-agt.log
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server create --flavor cirros256 --image cirros-0.3.4-x86_64-uec 
--nic net-id=net0 --wait vm0
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server delete vm0
  $ sleep 3
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ egrep -c 'Transaction caused no change' /opt/stack/logs/q-agt.log
  22

  A sample log message from q-agt.log:

  2016-10-06 09:23:05.447 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn 
command(idx=0): DbSetCommand(table=Port, col_values=(('other_config', {'tag': 
1}),), record=tap8e2a390d-63) from (pid=6068) do_commit 
/opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py:99
  2016-10-06 09:23:05.448 DEBUG neutron.agent.ovsdb.impl_idl [-] Transaction 
caused no change from (pid=6068) do_commit 
/opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py:126

  devstack version: 563d377
  neutron version: 3ade301
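
  For context, a sketch of why the dropped transaction matters: the OVS
  firewall driver (linked above) derives conntrack zone information from
  the port's 'other_config' column, so a silently lost write leaves the
  lookup empty. The function below is illustrative, assuming neutron's
  ovsdb api wrapper:

    def get_port_zone_info(ovsdb, port_name):
        other_config = ovsdb.db_get('Port', port_name,
                                    'other_config').execute() or {}
        # If the 'other_config' write was dropped by the native driver,
        # 'net_uuid' (and the zone derived from it) is simply missing.
        return other_config.get('net_uuid')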

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1630920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641821] Re: admin guide: Cleanup LDAP

2016-12-01 Thread Richard
** Changed in: keystone
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641821

Title:
  admin guide: Cleanup LDAP

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  There exist three different documents [1] related to LDAP in the
  admin-guide [2]. They should be collapsed into one. Further, they
  recommend deploying a single backend LDAP, which is not what the
  keystone team recommends.

  [1] 1) identity-integrate-with-ldap.rst
  2) identity-ldap-server.rst
  3) identity-secure-ldap-backend.rst   

  [2] https://github.com/openstack/openstack-manuals/tree/master/doc
  /admin-guide/source

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604397] Re: [SRU] python-swiftclient is missing in requirements.txt (for glare)

2016-12-01 Thread James Page
This bug was fixed in the package python-glance-store - 0.18.0-0ubuntu3~cloud0
---

 python-glance-store (0.18.0-0ubuntu3~cloud0) xenial-ocata; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 python-glance-store (0.18.0-0ubuntu3) zesty; urgency=medium
 .
   * d/gbp.conf: Update default config.
 .
 python-glance-store (0.18.0-0ubuntu2) zesty; urgency=medium
 .
   [ Corey Bryant ]
   * d/control: Add run-time dependency for python-swiftclient (LP: #1604397).
   * d/p/drop-enum34.patch: Fix python3 test failures.
 .
   [ Thomas Goirand ]
   * Fixed enum34 runtime depends.


** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1604397

Title:
  [SRU] python-swiftclient is missing in requirements.txt (for glare)

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in Glance:
  New
Status in python-glance-store package in Ubuntu:
  Fix Released
Status in python-glance-store source package in Yakkety:
  Fix Released
Status in python-glance-store source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Test Case]
  I'm using UCA glance packages (version "13.0.0~b1-0ubuntu1~cloud0").
  And I've got this error:
  <30>Jul 18 16:03:45 node-2 glance-glare[17738]: ERROR: Store swift could not 
be configured correctly. Reason: Missing dependency python_swiftclient.

  Installing "python-swiftclient" fix the problem.

  In master
  (https://github.com/openstack/glance/blob/master/requirements.txt)
  package "python-swiftclient" is not included in requirements.txt. So
  UCA packages don't have proper dependencies.

  I think requirements.txt should be updated (add python-swiftclient
  there). This change should affect UCA packages.

  [Regression Potential]
  Minimal as this just adds a new dependency.
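
  For context, the runtime error stems from the optional-import pattern
  used for store backends; a simplified sketch (not the actual
  glance_store source):

    try:
        import swiftclient
    except ImportError:
        swiftclient = None

    def configure_swift_store():
        if swiftclient is None:
            # Without the run-time dependency installed, configuration
            # fails with the "Missing dependency" message quoted above.
            raise RuntimeError('Store swift could not be configured '
                               'correctly. Reason: Missing dependency '
                               'python_swiftclient.')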

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1604397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643849] Re: The result of 'qos-minimum-bandwidth-rule-list' is wrong

2016-12-01 Thread Ihar Hrachyshka
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643849

Title:
  The result of 'qos-minimum-bandwidth-rule-list' is wrong

Status in python-neutronclient:
  In Progress

Bug description:
  In the master branch, I have a policy and some rules in it, as follows:

  [root@devstack162 devstack]# neutron qos-policy-show p1
  
+-++
  | Field   | Value 
 |
  
+-++
  | created_at  | 2016-11-22T07:11:02Z  
 |
  | description | a 
 |
  | id  | d296046b-0a87-496f-93dc-14e8bd5ed638  
 |
  | name| p1
 |
  | project_id  | 9a5b27e4da8b4aec99df42b222a8a696  
 |
  | revision_number | 5 
 |
  | rules   | 7ba69c66-6911-4176-895d-f5668cc5bbbe (type: 
bandwidth_limit)   |
  | | d9c021d5-5433-4d7c-8bfa-69cca486aac8 (type: dscp_marking) 
 |
  | | 8c8fc4de-25c5-4d16-85e6-543d0affe916 (type: 
minimum_bandwidth) |
  | shared  | False 
 |
  | tenant_id   | 9a5b27e4da8b4aec99df42b222a8a696  
 |
  | updated_at  | 2016-11-22T11:07:47Z  
 |
  
+-++
  and the command result of 'qos-bandwidth-limit-rule-list' is:

  [root@devstack162 devstack]# neutron  qos-bandwidth-limit-rule-list p1
  +--++--+
  | id   | max_burst_kbps | max_kbps |
  +--++--+
  | 7ba69c66-6911-4176-895d-f5668cc5bbbe | 50 |1 |

  
  but the command result of 'qos-minimum-bandwidth-rule-list' is:

  [root@devstack162 devstack]#  neutron qos-minimum-bandwidth-rule-list p1 
  'qos_minimum_bandwidth_rules'

  I think the result of 'qos-minimum-bandwidth-rule-list' is wrong.
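
  The bare quoted string in the output looks like the repr of a KeyError:
  the client appears to index the API response with a key the server does
  not return. A hedged illustration (the payload key is an assumption;
  only the KeyError shape matters):

    response = {'minimum_bandwidth_rules': [{'id': '8c8fc4de-25c5-...'}]}
    try:
        rows = response['qos_minimum_bandwidth_rules']
    except KeyError as exc:
        print(exc)  # prints: 'qos_minimum_bandwidth_rules'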

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1643849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646501] [NEW] Add a notify when creating network failed in neutron

2016-12-01 Thread Zhigang Li
Public bug reported:

When we create a network, the 'network.start' and 'network.end'
notifications are sent to ceilometer. Thus it is necessary to also send a
notification when network creation fails in neutron.

** Affects: neutron
 Importance: Undecided
 Assignee: Zhigang Li (li-zhigang3)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Zhigang Li (li-zhigang3)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1646501

Title:
  Add a notify when creating network failed in neutron

Status in neutron:
  New

Bug description:
  When we create a network, the 'network.start' and 'network.end'
  notifications are sent to ceilometer. Thus it is necessary to also send
  a notification when network creation fails in neutron.
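
  A minimal sketch of the proposed behavior using oslo.messaging; the
  event name 'network.create.error', the payload shape, and the
  plugin_create_network helper are assumptions for illustration, not
  actual neutron code:

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='network.controller')

    def plugin_create_network(context, data):
        # stand-in for the real plugin's create_network
        return {'id': 'fake-id', 'name': data.get('name')}

    def create_network(context, data):
        try:
            network = plugin_create_network(context, data)
        except Exception as exc:
            # the proposed addition: report failed creations too
            notifier.error(context, 'network.create.error',
                           {'error': str(exc)})
            raise
        notifier.info(context, 'network.create.end', {'network': network})
        return network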

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1646501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598171] Re: neutron router-gateway-set --enable-snat fails

2016-12-01 Thread Ihar Hrachyshka
** Tags removed: neutron router-gateway-set router-update snat

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598171

Title:
  neutron router-gateway-set --enable-snat fails

Status in python-neutronclient:
  Fix Released

Bug description:
  I have a Mitaka environment with 1 control node and 3 compute nodes,
  all physical machines running on openSUSE Leap 42.1.

  My version of neutron client:

  ---cut here---
  control1:~ # rpm -qi python-neutronclient-3.1.1-1.1.noarch
  Name: python-neutronclient
  Version : 3.1.1
  Release : 1.1
  Architecture: noarch
  Install Date: Mo 11 Apr 2016 12:13:44 CEST
  Group   : Development/Languages/Python
  Size: 2079132
  License : Apache-2.0
  Signature   : RSA/SHA1, Do 03 Mär 2016 16:46:07 CET, Key ID 893a90dad85f9316
  Source RPM  : python-neutronclient-3.1.1-1.1.src.rpm
  Build Date  : Do 03 Mär 2016 16:45:27 CET
  Build Host  : cloud118
  Relocations : (not relocatable)
  Vendor  : obs://build.opensuse.org/Cloud:OpenStack
  URL : http://launchpad.net/python-neutronclient
  Summary : Openstack Network (Quantum) API Client
  ---cut here---

  Changing the router property "enable_snat" works only in one direction.
  The resource description in Horizon for
  "resource_types/OS::Neutron::Router/" says:
  enable_snat: {description: 'Enables Source [...] update_allowed: true}
  So trying to update this property via the CLI (as project admin) seems
  to work:

  control1:~ # neutron router-gateway-set --enable-snat  
  Set gateway for router cbc39730-34cc-4d18-986a-5b6b9b1b4e96

  But actually no change has been made:
  control1:~ # neutron router-list
  +-------------+----------+---------------------------------------------------+
  | id          | name     | external_gateway_info                             |
  +-------------+----------+---------------------------------------------------+
  | cbc39730... | router01 | {"network_id": "ID", "enable_snat": false, [...]  |
  +-------------+----------+---------------------------------------------------+

  I know there's no such option in the help page for the router-gateway-set
  command, but if it isn't a valid option I'd expect an error message; if
  it is valid, it should actually change this property.

  Steps to reproduce:
  1. Create a router with snat disabled
  2. Try to enable snat via command line

  Expected output:
  Along with the success message ("Set gateway for router...") it should
  either actually enable SNAT or throw an error message ("unknown option"
  or something similar).

  Actual output:
  Success message saying the router gateway has been set, but the argument
  "--enable-snat" is ignored and no changes have been applied.

  I will be out of the office for the next three weeks, so I'll add my
  co-worker in cc on this bug. If there are any questions on this issue he
  will try to answer them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1598171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645509] Re: dnsmasq host file entries can return from the grave

2016-12-01 Thread Ihar Hrachyshka
We are landing a Mitaka fix. The bug is not present in Newton+, so I am
closing it as fixed.

** Tags added: l3-ipam-dhcp

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645509

Title:
  dnsmasq host file entries can return from the grave

Status in neutron:
  Fix Released

Bug description:
  With a Mitaka system, if one creates a Reasonably Large Number (tm) of
  instances (e.g. 448) via:

  neutron port-create
  neutron floatingip-create 
  nova boot --net portid=

  and then once they are all created, deletes them via:
  delete all the instances via the CLI (groups of 10 or 14 at a time,
  waiting for CLI completion)
  delete all the ports (groups of 10 or 14 at a time, waiting for CLI
  completion)
  delete the floating IPs (groups of 10 or 14 at a time, waiting for CLI
  completion)

  then as one watches the wc -l for the dnsmasq host file, one can see
  it go down to an expected value of three (the three IPs for the
  network/subnet for DHCP and routing), but then it starts to rebound,
  as entries appear to return from the grave.

  The entries are not specific to a given compute node - they are
  associated with instances across all the compute nodes hosting the
  instances.

  In this specific setup, it did not appear to happen for 224 instances,
  or 112, etc.  It seems to have happened at least twice out of three
  attempts at 448 instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646418] Re: gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial fails constantly

2016-12-01 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** No longer affects: nova/newton

** Changed in: nova
   Importance: High => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646418

Title:
  gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial fails
  constantly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The voting CI job used in the check and gate pipelines fails in the ceph
  setup phase:
  2016-12-01 08:10:52.641806 | 2016-12-01 08:10:52.641 | Setting up 
libnss3-tools (2:3.23-0ubuntu0.16.04.1) ...
  2016-12-01 08:10:52.645360 | 2016-12-01 08:10:52.645 | Processing triggers 
for libc-bin (2.23-0ubuntu4) ...
  2016-12-01 08:10:52.656568 | 2016-12-01 08:10:52.656 | Processing triggers 
for systemd (229-4ubuntu12) ...
  2016-12-01 08:10:52.919320 | 2016-12-01 08:10:52.918 | ceph daemons will run 
as ceph
  2016-12-01 08:10:52.927631 | 2016-12-01 08:10:52.927 | truncate: Invalid 
number: ‘var/lib/ceph/drives/images/ceph.img’
  2016-12-01 08:10:52.928182 | + 
/home/jenkins/workspace/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/devstack-gate/functions.sh:tsfilter:L96:
   return 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1646418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599936] Re: l2gw provider config prevents *aas provider config from being loaded

2016-12-01 Thread OpenStack Infra
** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599936

Title:
  l2gw provider config prevents *aas provider config from being loaded

Status in devstack:
  New
Status in networking-l2gw:
  New
Status in neutron:
  In Progress

Bug description:
  The networking-l2gw devstack plugin stores its service_providers config
  in /etc/l2gw_plugin.ini and adds a --config-file option for it.
  As a result, neutron-server is invoked like the following.

  /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/midonet/midonet.ini --config-file
  /etc/neutron/l2gw_plugin.ini

  This breaks *aas service providers: NeutronModule.service_providers finds
  the l2gw providers in cfg.CONF.service_providers.service_provider and
  thus doesn't look at the *aas service_providers config, which is in
  /etc/neutron/neutron_*aas.conf.
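
  A minimal sketch of the lookup pattern being described; the data
  structures are simplified stand-ins, not the actual neutron code:

    def get_service_providers(main_conf_providers, extra_conf_providers):
        # main_conf_providers: what NeutronModule finds via cfg.CONF (the
        # l2gw entries, since l2gw_plugin.ini is passed via --config-file)
        # extra_conf_providers: entries in /etc/neutron/neutron_*aas.conf
        if main_conf_providers:
            # the early return is the problem: once the l2gw entry is
            # seen, the *aas config files are never consulted
            return list(main_conf_providers)
        return list(extra_conf_providers)

    print(get_service_providers(
        ['L2GW:l2gw:networking_l2gw.driver'],            # hypothetical entry
        ['LOADBALANCER:Haproxy:neutron_lbaas.driver']))  # never returned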

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1599936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298472] Re: SSHTimeout in tempest trying to verify that computes are actually functioning

2016-12-01 Thread Andrea Frittoli
This is a very old bug. Anna set it back to new in Aug 2016 as it may be
related to https://bugs.launchpad.net/mos/+bug/1606218, which has been fixed
since. Hence I will set this to invalid. If someone hits an SSH bug in the
gate again, please file a new bug.

** Changed in: tempest
   Status: New => Invalid

** Changed in: tempest
   Importance: Critical => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298472

Title:
  SSHTimeout in tempest trying to verify that computes are actually
  functioning

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Invalid

Bug description:
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  failed at least once with the following traceback when trying to
  connect via SSH:

  Traceback (most recent call last):
    File "tempest/scenario/test_volume_boot_pattern.py", line 156, in test_volume_boot_pattern
      ssh_client = self._ssh_to_server(instance_from_snapshot, keypair)
    File "tempest/scenario/test_volume_boot_pattern.py", line 100, in _ssh_to_server
      private_key=keypair.private_key)
    File "tempest/scenario/manager.py", line 466, in get_remote_client
      return RemoteClient(ip, username, pkey=private_key)
    File "tempest/common/utils/linux/remote_client.py", line 47, in __init__
      if not self.ssh_client.test_connection_auth():
    File "tempest/common/ssh.py", line 149, in test_connection_auth
      connection = self._get_ssh_connection()
    File "tempest/common/ssh.py", line 65, in _get_ssh_connection
      timeout=self.channel_timeout, pkey=self.pkey)
    File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in connect
      retry_on_signal(lambda: sock.connect(addr))
    File "/usr/local/lib/python2.7/dist-packages/paramiko/util.py", line 279, in retry_on_signal
      return function()
    File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in 
      retry_on_signal(lambda: sock.connect(addr))
    File "/usr/lib/python2.7/socket.py", line 224, in meth
      return getattr(self._sock,name)(*args)
    File "/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
      raise TimeoutException()
  TimeoutException

  Logs can be found at: 
http://logs.openstack.org/86/82786/1/gate/gate-tempest-dsvm-neutron-pg/1eaadd0/
  The review that triggered the issue is: 
https://review.openstack.org/#/c/82786/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1298472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640754] Re: Project service should have a description

2016-12-01 Thread splurge
** Project changed: keystone => kolla

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1640754

Title:
  Project service should have a description

Status in kolla:
  New

Bug description:
  Hi,

  I assume service project is a fundamental part of any openstack
  environment?

  If so, shouldn't the special "service" project have a description like the 
admin project?

  openstack project list --long
  +----------------------------------+---------+-----------+-------------+---------+
  | ID                               | Name    | Domain ID | Description | Enabled |
  +----------------------------------+---------+-----------+-------------+---------+
  | 1df997cc98474cb49423a141a4efc5a2 | service | default   |             | True    |
  +----------------------------------+---------+-----------+-------------+---------+


  If not, I will close the bug

  Cheers

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla/+bug/1640754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646418] Re: gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial fails constantly

2016-12-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/405196
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e83a3572344f9be39930ea9ead83a1f9b94ac7fe
Submitter: Jenkins
Branch:master

commit e83a3572344f9be39930ea9ead83a1f9b94ac7fe
Author: Timofey Durakov 
Date:   Thu Dec 1 11:58:18 2016 +0300

Fix for live-migration job

Commit 9293ac0 to devstack-plugin-ceph altered the
CEPH_LOOPBACK_DISK_SIZE_DEFAULT variable initialization.
This fix adds a source for setting this variable in the correct way.

Closes-Bug: #1646418

Change-Id: I84c3b78c53cfa283e9bcb7cf4b70ec6c95044e9c


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646418

Title:
  gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial fails
  constantly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The voting CI job used in the check and gate pipelines fails in the ceph
  setup phase:
  2016-12-01 08:10:52.641806 | 2016-12-01 08:10:52.641 | Setting up 
libnss3-tools (2:3.23-0ubuntu0.16.04.1) ...
  2016-12-01 08:10:52.645360 | 2016-12-01 08:10:52.645 | Processing triggers 
for libc-bin (2.23-0ubuntu4) ...
  2016-12-01 08:10:52.656568 | 2016-12-01 08:10:52.656 | Processing triggers 
for systemd (229-4ubuntu12) ...
  2016-12-01 08:10:52.919320 | 2016-12-01 08:10:52.918 | ceph daemons will run 
as ceph
  2016-12-01 08:10:52.927631 | 2016-12-01 08:10:52.927 | truncate: Invalid 
number: ‘var/lib/ceph/drives/images/ceph.img’
  2016-12-01 08:10:52.928182 | + 
/home/jenkins/workspace/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/devstack-gate/functions.sh:tsfilter:L96:
   return 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1646418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646428] [NEW] [neutron-fwaas] Not validate protocol parameters when updating firewall rule

2016-12-01 Thread Ha Van Tu
Public bug reported:

When we create an ICMP firewall rule with port range parameters, the Neutron
server returns an error. However, when we create a TCP firewall rule with
port range parameters and then edit that rule into an ICMP one, the Neutron
server raises no error at all. We need to run the same validation before
updating a firewall rule.

** Affects: neutron
 Importance: Undecided
 Assignee: Ha Van Tu (tuhv)
 Status: New


** Tags: neutron-fwaas

** Changed in: neutron
 Assignee: (unassigned) => Ha Van Tu (tuhv)

** Tags added: neutron-fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1646428

Title:
  [neutron-fwaas] Not validate protocol parameters when updating
  firewall rule

Status in neutron:
  New

Bug description:
  When we create an ICMP firewall rule with port range parameters, the
  Neutron server returns an error. However, when we create a TCP firewall
  rule with port range parameters and then edit that rule into an ICMP one,
  the Neutron server raises no error at all. We need to run the same
  validation before updating a firewall rule.
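
  A minimal sketch of the missing check; the helper names are illustrative,
  modelled on the create-side validation rather than taken from the actual
  neutron-fwaas code:

    ICMP = 'icmp'

    def validate_rule(rule):
        # the constraint create already enforces: ICMP rules carry no ports
        if rule.get('protocol') == ICMP and (
                rule.get('source_port') or rule.get('destination_port')):
            raise ValueError('port parameters are not allowed for ICMP')

    def update_firewall_rule(existing_rule, updates):
        # validate the merged rule, not just the fields being updated
        merged = dict(existing_rule, **updates)
        validate_rule(merged)
        existing_rule.update(updates)
        return existing_rule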

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1646428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646418] [NEW] gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial fails constantly

2016-12-01 Thread Timofey Durakov
Public bug reported:

The voting CI job used in the check and gate pipelines fails in the ceph
setup phase:
2016-12-01 08:10:52.641806 | 2016-12-01 08:10:52.641 | Setting up libnss3-tools 
(2:3.23-0ubuntu0.16.04.1) ...
2016-12-01 08:10:52.645360 | 2016-12-01 08:10:52.645 | Processing triggers for 
libc-bin (2.23-0ubuntu4) ...
2016-12-01 08:10:52.656568 | 2016-12-01 08:10:52.656 | Processing triggers for 
systemd (229-4ubuntu12) ...
2016-12-01 08:10:52.919320 | 2016-12-01 08:10:52.918 | ceph daemons will run as 
ceph
2016-12-01 08:10:52.927631 | 2016-12-01 08:10:52.927 | truncate: Invalid 
number: ‘var/lib/ceph/drives/images/ceph.img’
2016-12-01 08:10:52.928182 | + 
/home/jenkins/workspace/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/devstack-gate/functions.sh:tsfilter:L96:
   return 1

** Affects: nova
 Importance: High
 Assignee: Timofey Durakov (tdurakov)
 Status: In Progress


** Tags: gate-failure

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646418

Title:
  gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial fails
  constantly

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The voting CI job used in the check and gate pipelines fails in the ceph
  setup phase:
  2016-12-01 08:10:52.641806 | 2016-12-01 08:10:52.641 | Setting up 
libnss3-tools (2:3.23-0ubuntu0.16.04.1) ...
  2016-12-01 08:10:52.645360 | 2016-12-01 08:10:52.645 | Processing triggers 
for libc-bin (2.23-0ubuntu4) ...
  2016-12-01 08:10:52.656568 | 2016-12-01 08:10:52.656 | Processing triggers 
for systemd (229-4ubuntu12) ...
  2016-12-01 08:10:52.919320 | 2016-12-01 08:10:52.918 | ceph daemons will run 
as ceph
  2016-12-01 08:10:52.927631 | 2016-12-01 08:10:52.927 | truncate: Invalid 
number: ‘var/lib/ceph/drives/images/ceph.img’
  2016-12-01 08:10:52.928182 | + 
/home/jenkins/workspace/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/devstack-gate/functions.sh:tsfilter:L96:
   return 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1646418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643793] Re: Display incorrect flavor name in instance property after edit flavor

2016-12-01 Thread Dinesh Bhor
Hi All,

I am not able to reproduce this issue in nova.

Steps:

Created one flavor:
$nova flavor-create new_flavor 32 2 1 1

Created an instance with this flavor:
$nova boot --image c98e9a7c-7720-420b-a5cb-4990a80ebeb4 --flavor 32 flavor_instance

I can see the flavor name in the instance information: new_flavor (32)

Deleted a flavor:   
$nova flavor-delete 32

Show the instance:
$nova show 82b2b0a2-d904-4762-87f2-6c079e113494

+---------------------------------------+----------------------------------------------------------------+
| Property                              | Value                                                          |
+---------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                     | MANUAL                                                         |
| OS-EXT-AZ:availability_zone           | nova                                                           |
| OS-EXT-SRV-ATTR:host                  | openstack-VirtualBox                                           |
| OS-EXT-SRV-ATTR:hostname              | flavor-instance                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname   | openstack-VirtualBox                                           |
| OS-EXT-SRV-ATTR:instance_name         | instance-000a                                                  |
| OS-EXT-SRV-ATTR:kernel_id             | a5b4ae19-6b59-4db8-9b8f-80a95274b0ef                           |
| OS-EXT-SRV-ATTR:launch_index          | 0                                                              |
| OS-EXT-SRV-ATTR:ramdisk_id            | 74bf2775-4b4f-4f98-b5a3-d5b95a2fb2b5                           |
| OS-EXT-SRV-ATTR:reservation_id        | r-ebvq7dqa                                                     |
| OS-EXT-SRV-ATTR:root_device_name      | /dev/vda                                                       |
| OS-EXT-SRV-ATTR:user_data             | -                                                              |
| OS-EXT-STS:power_state                | 1                                                              |
| OS-EXT-STS:task_state                 | -                                                              |
| OS-EXT-STS:vm_state                   | active                                                         |
| OS-SRV-USG:launched_at                | 2016-12-01T07:58:50.00                                         |
| OS-SRV-USG:terminated_at              | -                                                              |
| accessIPv4                            |                                                                |
| accessIPv6                            |                                                                |
| config_drive                          |                                                                |
| created                               | 2016-12-01T07:58:39Z                                           |
| description                           | -                                                              |
| flavor                                | Flavor not found (32)                                          |
| hostId                                | 29293b7a5610f0716c95bb5fd6bdbe157ec88120712bcb67c6c5c6a5       |
| host_status                           | UP                                                             |
| id                                    | 82b2b0a2-d904-4762-87f2-6c079e113494                           |
| image                                 | cirros-0.3.4-x86_64-uec (c98e9a7c-7720-420b-a5cb-4990a80ebeb4) |
| key_name                              | -                                                              |
| locked                                | False                                                          |
| metadata                              | {}                                                             |
| name                                  | flavor_instance                                                |
| os-extended-volumes:volumes_attached  | []                                                             |
| progress                              | 0                                                              |
| public network                        | 172.24.4.13, 2001:db8::7                                       |
| security_groups                       | default                                                        |
| status                                | ACTIVE                                                         |
| tags                                  | []                                                             |
| tenant_id                             | 514df258afd54361a0d9e8a8afc71a02                               |
| updated                               | 2016-12-01T07:58:51Z                                           |
| user_id                               | 

[Yahoo-eng-team] [Bug 1646394] [NEW] Slowness on access & security page due to calls to neutron/ports.json?tenant_id=XXXXX

2016-12-01 Thread Robert van Leeuwen
Public bug reported:

When opening the access & security page in Horizon we experience some
extreme slowness for tenants with a large amount of instances (200+).

Looking at the debug logs, the following calls take over 5 seconds:
https://neutron/v2.0/ports.json?tenant_id=X

This call is done twice for this page.

Since there are no interfaces shown on this page, I am wondering why this
call is being done at all. Doing it twice makes the problem worse.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1646394

Title:
  Slowness on access & security page due to calls to
  neutron/ports.json?tenant_id=X

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When opening the access & security page in Horizon we experience some
  extreme slowness for tenants with a large amount of instances (200+).

  Looking at the debug logs, the following calls take over 5 seconds:
  https://neutron/v2.0/ports.json?tenant_id=X

  This call is done twice for this page.

  Since there are no interfaces shown on this page, I am wondering why
  this call is being done at all. Doing it twice makes the problem worse.
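
  For illustration, one way to avoid the duplicate request is to fetch the
  port list once per request and let both consumers share it; a minimal
  sketch using horizon's memoization helper (the wrapper function itself
  is hypothetical):

    from horizon.utils.memoized import memoized
    from openstack_dashboard import api

    @memoized
    def ports_for_tenant(request, tenant_id):
        # both callers on the page reuse one result instead of hitting
        # /v2.0/ports.json twice
        return api.neutron.port_list(request, tenant_id=tenant_id)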

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1646394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645630] Re: Material Theme in Newton Appears to be Broken

2016-12-01 Thread Richard Jones
The upper-constraints pinning of the XStatic-roboto-fontface isn't
working for some reason in your environment. To be absolutely sure,
please ensure you include the upper-constraints.txt file for
stable/newton in your virtualenv install, for example:

  pip install -c http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/newton .

Note that run_tests.sh has been modified in stable/newton to pull that
file in during virtualenv installation, so "run_tests.sh -m compress"
should work for you.

The "tox -e manage" command *should also* be pulling that in but it
appears not to be, and I'm not sure why, so I've marked this as a valid
bug.

** Changed in: horizon
   Status: New => Triaged

** Changed in: horizon
   Importance: Undecided => Critical

** Changed in: horizon
   Importance: Critical => High

** Also affects: horizon/newton
   Importance: Undecided
   Status: New

** No longer affects: horizon/newton

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645630

Title:
  Material Theme in Newton Appears to be Broken

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  Pulled down project from git (stable/newton).

  Used OpenStack global requirements and installed in a virtualenv under
  Ubuntu 14.04 (python 2.7).

  The problem manifested during the compress operation, when setting up
  the web application for use.

  Code excerpt attached.

  To work around the problem, I explicitly disabled the Material theme.

  
https://gist.githubusercontent.com/jjahns/025e51e0b82009dd17a30651c2256262/raw/d02d96dcba8c2f68a0b68a3b175e6ed1c77190fc/gistfile1.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp