[Yahoo-eng-team] [Bug 1831009] [NEW] Improper database connection close leads to MySQL/MariaDB blocking connections

2019-05-29 Thread puthi
Public bug reported:

Version
===
Neutron-server: openstack-neutron-13.0.2-1.el7.noarch
Nova: openstack-nova-*-18.2.0-1.el7.noarch
Mariadb: mariadb-server-10.1.20-2.el7.x86_64

Openstack setup:

HAproxy => 3 Controllers (nova,neutron,keystone) => Mariadb

Error
=
As my OpenStack cluster grows and more and more people start using it, I am
seeing this error every day now:

2019-05-30 10:42:15.695 44938 ERROR oslo_db.sqlalchemy.exc_filters [req-b6fd59b9-8378-49df-bbf6-de9f9b741490 - - - - -] DBAPIError exception wrapped from (pymysql.err.InternalError) (1129, u"Host 'xx.xx.xx.xx' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'") (Background on this error at: http://sqlalche.me/e/2j85): InternalError: (1129, u"Host 'xx.xx.xx.xx' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'")

This does not happen only to Neutron but to all OpenStack components. When I
set log_warnings=4 in MariaDB, I can see the following in the MariaDB log:


2019-05-27 10:22:04 140078484511488 [Warning] Aborted connection 70834104 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:05 140084673243904 [Warning] Aborted connection 70834111 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:07 140078500485888 [Warning] Aborted connection 70834211 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:07 140078490655488 [Warning] Aborted connection 70834157 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:09 140078698322688 [Warning] Aborted connection 70834327 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:12 140078715833088 [Warning] Aborted connection 70894166 to db: 'unconnected' user: 'neutron' host: 'controller2' (CLOSE_CONNECTION)
2019-05-27 10:22:13 140078737951488 [Warning] Aborted connection 70834380 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:17 140078641797888 [Warning] Aborted connection 70834382 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:21 140078581893888 [Warning] Aborted connection 70834436 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:22 140078724434688 [Warning] Aborted connection 70834469 to db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
2019-05-27 10:22:28 140078715833088 [Warning] Aborted connection 70894174 to db: 'unconnected' user: 'unauthenticated' host: 'controller2' (CLOSE_CONNECTION)
2019-05-27 10:22:29 140078715833088 [Warning] Aborted connection 70894177 to db: 'neutron' user: 'neutron' host: 'controller2' (CLOSE_CONNECTION)
...
2019-05-30  7:35:28 140078596025088 [Warning] Aborted connection 72547571 to db: 'nova' user: 'nova' host: 'controller1' (Got an error reading communication packets)
2019-05-30  7:46:54 140078541036288 [Warning] Aborted connection 72552087 to db: 'nova_cell0' user: 'nova' host: 'controller1' (Got an error reading communication packets)
2019-05-30  7:46:57 140078799182592 [Warning] Aborted connection 72552086 to db: 'nova' user: 'nova' host: 'controller1' (Got an error reading communication packets)
2019-05-30  7:47:02 140078738565888 [Warning] Aborted connection 72534613 to db: 'nova_cell0' user: 'nova' host: 'controller1' (Got an error reading communication packets)
2019-05-30  8:31:11 140078638418688 [Warning] Aborted connection 72419897 to db: 'nova' user: 'nova' host: 'controller3' (Got timeout reading communication packets)
2019-05-30  8:36:22 140078791195392 [Warning] Aborted connection 72421900 to db: 'nova' user: 'nova' host: 'controller2' (Got timeout reading communication packets)
2019-05-30  8:46:23 140078624594688 [Warning] Aborted connection 72577413 to db: 'nova_cell0' user: 'nova' host: 'controller1' (Got an error reading communication packets)
2019-05-30  8:46:26 140078716447488 [Warning] Aborted connection 72577414 to db: 'nova' user: 'nova' host: 'controller1' (Got an error reading communication packets)
2019-05-30 10:45:23 140078661151488 [Warning] Aborted connection 72675103 to db: 'neutron' user: 'neutron' host: 'controller3' (Got an error reading communication packets)
2019-05-30 10:45:23 140078672517888 [Warning] Aborted connection 72675137 to db: 'neutron' user: 'neutron' host: 'controller3' (Got an error reading communication packets)
2019-05-30 10:45:23 140078768155392 [Warning] Aborted connection 72674638 to db: 'neutron' user: 'neutron' host: 'controller3' (Got an error reading communication packets)
2019-05-30 10:45:23 140078647327488 [Warning] Aborted connection 72674581 to db: 'neutron' user: 'neutron' host: 'controller3' (Got an error reading communication packets)


I also notice that every time I restart any of the services
(nova/neutron/keystone) I see the warning "Got an error reading communication
packets".
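
A minimal sketch of the usual client-side mitigation, assuming plain SQLAlchemy
with pymysql (in the OpenStack services the equivalent knobs live in the
oslo.db [database] options); the pool validates and recycles connections before
MariaDB aborts them, while 'mysqladmin flush-hosts' (as the error itself
suggests) unblocks an already-blocked host:

    import sqlalchemy

    # Illustrative DSN; user/password/host/database are placeholders.
    engine = sqlalchemy.create_engine(
        'mysql+pymysql://nova:secret@controller/nova',
        pool_pre_ping=True,   # probe a pooled connection before reusing it
        pool_recycle=3600,    # drop pooled connections older than one hour
        pool_size=10,
        max_overflow=20,
    )

    # Using the connection as a context manager returns it to the pool
    # cleanly; pre-ping/recycle keep stale connections from being reused,
    # a common source of "Aborted connection ... (Got an error reading
    # communication packets)" warnings on the MariaDB side.
    with engine.connect() as conn:
        print(conn.execute(sqlalchemy.text('SELECT 1')).scalar())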

[Yahoo-eng-team] [Bug 1829734] Re: [RFE] OVS DPDK port representors support

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/658784
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=47390226f5c755a083b0f181f7a81b480069c8a9
Submitter: Zuul
Branch:master

commit 47390226f5c755a083b0f181f7a81b480069c8a9
Author: Hamdy Khader 
Date:   Mon May 13 13:39:57 2019 +0300

OVS DPDK port representors support

Adds support for OVS DPDK port representors [1]: a direct port on
a netdev datapath is considered a DPDK representor port.

get_vif_type returns OVS VIF type in case of a direct port.

[1] http://docs.openvswitch.org/en/latest/topics/dpdk/phy/#representors

Closes-Bug: #1829734
Change-Id: I3956eeda19ebc93fdb0b13c1cfb3dc64abffee9f


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1829734

Title:
  [RFE] OVS DPDK port representors support

Status in neutron:
  Fix Released

Bug description:
  DPDK representors enable assigning a phy port to a guest (VM)
  machine.

  OVS resides in the hypervisor which has one or more physical
  interfaces also known as the physical functions (PFs). If a PF
  supports SR-IOV it can be used to enable communication with the VMs
  via Virtual Functions (VFs). The VFs are virtual PCIe devices created
  from the physical Ethernet controller.

  DPDK models a physical interface as an rte device on top of which an
  eth device is created. DPDK (version 18.xx) introduced representor
  eth devices. A representor device represents the VF eth device (VM
  side) on the hypervisor side and operates on top of a PF.
  Representors are multiple devices created on top of one PF.

  The goal is to enable attaching a DPDK port to an instance as a
  direct port when the hypervisor datapath type is netdev.

  
  Changes needed:

  * Neutron:
  Return the OVS VIF type in the case of a direct port on a netdev datapath
  (see the sketch after this list).

  * Nova:
  In the case of a port with the OVS VIF type and a direct VNIC type, the
  datapath info must be set in the VIF profile, as the VIF object is
  instantiated from the VIFHostDevice class with a profile of type
  VIFPortProfileOVSRepresentor.

  * os-vif:
  Do the plugging using the DPDK representors syntax [1] in the case of
  VIFHostDevice and a netdev datapath.


  [1] http://docs.openvswitch.org/en/latest/topics/dpdk/phy/
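
  A rough sketch of the Neutron-side change mentioned above (the constants
  and the helper name are illustrative assumptions, not the actual ML2/OVS
  mechanism driver code):

    VNIC_DIRECT = 'direct'
    DATAPATH_NETDEV = 'netdev'
    VIF_TYPE_OVS = 'ovs'
    VIF_TYPE_HOSTDEV = 'hostdev_physical'

    def get_vif_type(vnic_type, datapath_type):
        """Choose the VIF type for a port binding (illustrative only).

        A direct (SR-IOV) port on a netdev (DPDK) datapath is treated as
        an OVS representor port, so the OVS VIF type is returned.
        """
        if vnic_type == VNIC_DIRECT and datapath_type == DATAPATH_NETDEV:
            return VIF_TYPE_OVS
        if vnic_type == VNIC_DIRECT:
            return VIF_TYPE_HOSTDEV
        return VIF_TYPE_OVS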

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1829734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815771] Re: Credentials are not cached

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/636645
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=479a2a0afaeb505c371ee97a1f2fbc1b11e3cef1
Submitter: Zuul
Branch:master

commit 479a2a0afaeb505c371ee97a1f2fbc1b11e3cef1
Author: Jose Castro Leon 
Date:   Wed Feb 13 15:54:39 2019 +0100

Adds caching of credentials

Allows credentials to be cached; they are currently always fetched
directly from the database.

Change-Id: I9a706ac506b0f65402f2433e6fd56097e0830657
Closes-Bug: #1815771


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1815771

Title:
  Credentials are not cached

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  While trying to improve the performance of EC2 credential validation,
  we realized that the credentials are always fetched from the
  underlying database.

  If there is a flood of credential validations, this translates into an
  increased load on the database server that could impact the service.
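
  The shape of the fix, sketched with a simple in-process TTL cache
  (keystone's actual change uses its oslo.cache-based memoization; this is
  only an illustration of the pattern and all names here are made up):

    import time

    _CACHE = {}        # credential_id -> (expires_at, credential)
    _CACHE_TTL = 600   # seconds, illustrative value

    def get_credential(credential_id, fetch_from_db):
        """Return a credential, hitting the database only on a cache miss."""
        now = time.time()
        hit = _CACHE.get(credential_id)
        if hit and hit[0] > now:
            return hit[1]
        credential = fetch_from_db(credential_id)  # the expensive DB round trip
        _CACHE[credential_id] = (now + _CACHE_TTL, credential)
        return credential

    def invalidate_credential(credential_id):
        """Call on update/delete so stale entries are never served."""
        _CACHE.pop(credential_id, None)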

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1815771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724172] Re: Allocation of an evacuated instance is not cleaned on the source host if instance is not defined on the hypervisor

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/512623
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9cacaad14e8c18e99e85d9dc04308fee91303f8f
Submitter: Zuul
Branch:master

commit 9cacaad14e8c18e99e85d9dc04308fee91303f8f
Author: Balazs Gibizer 
Date:   Tue Oct 17 15:06:59 2017 +0200

cleanup evacuated instances not on hypervisor

When the compute host recovers from a failure the compute manager
destroys instances that were evacuated from the host while it was down.
However these code paths only consider evacuated instances that are
still reported by the hypervisor. This means that if the compute
host is recovered in a way that the hypervisor lost the definition
of the instances (for example the compute host was redeployed) then
the allocation of these instances will not be deleted.

This patch makes sure that the instance allocation is cleaned up
even if the driver doesn't report that instance as existing on the
hypervisor.

Note that the functional test coverage will be added on top as it needs
some refactoring that would make the bugfix non backportable.

Change-Id: I4bc81b482380c5778781659c4d167a712316dab4
Closes-Bug: #1724172


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724172

Title:
  Allocation of an evacuated instance is not cleaned on the source host
  if instance is not defined on the hypervisor

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova does not clean up the allocation of an evacuated instance from
  the recovered source compute host if the instance is no longer
  defined on the hypervisor.

  To reproduce:
  * Boot an instance
  * Kill the compute host the instance is booted on
  * Evacuate the instance
  * Recover the original compute host in a way that clears the instance 
definition from the hypervisor (e.g. redeploy the compute host).
  * Check the allocations of the instance in placement API. The allocation 
against the source compute host is not cleaned up.

  The compute manager is supposed to clean up evacuated instances during
  the compute manager init_host method by calling
  _destroy_evacuated_instances. However that function only iterates on
  instances known by the hypervisor [1].

  [1]
  
https://github.com/openstack/nova/blob/5e4c98a58f1afeaa903829f5e3f28cd6dc30bf4b/nova/compute/manager.py#L654
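
  A sketch of the shape of the fix (all helper names here are hypothetical;
  the real change is in nova.compute.manager._destroy_evacuated_instances):
  drive the cleanup from the evacuation migration records instead of from
  what the hypervisor still reports.

    def destroy_evacuated_instances(host, driver, evac_migrations, placement):
        """Clean up instances that were evacuated away from this host.

        Iterating over the recorded evacuation migrations (rather than over
        the instances the driver still knows about) means the placement
        allocation is removed even when the hypervisor lost the instance
        definition, e.g. after the host was redeployed.
        """
        known_to_hypervisor = set(driver.list_instance_uuids())
        for migration in evac_migrations:
            uuid = migration.instance_uuid
            if uuid in known_to_hypervisor:
                # Also remove the local guest definition and its disks.
                driver.destroy_instance(uuid)
            # Remove this host's allocation unconditionally.
            placement.remove_allocation_for_provider(uuid, host)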

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830800] Re: Compute API in nova - server group "policy" field is a string rather than an object

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/661869
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=961435453552569754cb7801c1e5c366dd9d16a8
Submitter: Zuul
Branch:master

commit 961435453552569754cb7801c1e5c366dd9d16a8
Author: Yikun Jiang 
Date:   Wed May 29 09:16:51 2019 +0800

Fix the server group "policy" field type in api-ref

The server group policy field added in v2.64 is a string but
the API reference says the parameter is an object.

This patch changes it from "object" to "string".

Change-Id: I1b4efe8afb302d94c810389e124c5370cbe72ddf
Closes-bug: #1830800


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830800

Title:
  Compute API in nova - server group "policy" field is a string rather
  than an object

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  - [x] This doc is inaccurate in this way:

  The server group policy field added in v2.64 is a string but the API
  reference says the parameter is an object.

  
https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
  /api-ref/source/parameters.yaml#L5368

  
https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/nova/api/openstack/compute/schemas/server_groups.py#L60

  ---
  Release: 19.1.0.dev441 on 2019-03-26 18:09:01
  SHA: 37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
  Source: https://opendev.org/openstack/nova/src/api-ref/source/index.rst
  URL: https://developer.openstack.org/api-ref/compute/
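
  For reference, the shape a 2.64+ server group takes (an illustrative body;
  only the types matter here): "policy" is a plain string, and the related
  "rules" key is the object.

    # Illustrative microversion >= 2.64 server group representation.
    server_group = {
        "id": "616fb98f-46ca-475e-917e-2563e5a8cd19",  # example UUID
        "name": "sg1",
        "policy": "anti-affinity",                     # a string, not an object
        "rules": {"max_server_per_host": 3},           # the object lives here
        "members": [],
    }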

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830926] Re: Links to reno are incorrect

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/661967
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8b9dad0b636e4f6bdd8e1eee0f1a8a00c1ffd2b7
Submitter: Zuul
Branch:master

commit 8b9dad0b636e4f6bdd8e1eee0f1a8a00c1ffd2b7
Author: Stephen Finucane 
Date:   Wed May 29 13:39:06 2019 +0100

docs: Don't version links to reno docs

reno doesn't have stable branches and doesn't version its documentation.
There's no point versioning our links to same.

Change-Id: Id782d3b11715bc3211e7952fb01b42a659d06e36
Closes-Bug: #1830926
Signed-off-by: Stephen Finucane 


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830926

Title:
  Links to reno are incorrect

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are multiple links to reno in the "release notes" section of the
  contributor guide:

  https://docs.openstack.org/nova/stein/contributor/releasenotes.html

  These are versioned links but reno is unversioned. This results in
  broken links on stable branches like the above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815676] Re: DVR: External process monitor for keepalived should be removed when external gateway is removed for DVR HA routers

2019-05-29 Thread Swaminathan Vasudevan
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815676

Title:
  DVR: External process monitor for keepalived should be removed when
  external gateway is removed for DVR HA routers

Status in neutron:
  Invalid

Bug description:
  The external process monitor for the keepalived state change should be
  removed when the external gateway is removed for DVR HA routers.
  We have seen, under certain conditions when the SNAT namespace is missing,
  that the external process monitor tries to respawn the keepalived state
  change monitor process within the namespace.
  However, the external process monitor does not check for the SNAT
  namespace; that is up to the process that calls it.

  The 'delete' HA router path takes care of cleaning up the external process
  monitor subscription for the keepalived state change, but the external
  gateway removal function does not call this cleanup.

  This is how I was able to reproduce the problem:
  * Create HA/DVR routers.
  * Delete the SNAT namespace of the routers.
  * Also delete the PID files for the ip_monitor under
    /opt/stack/data/neutron/external/pids/ (the ip_monitor pid).

  Once deleted, I was able to see the following log messages in the
  neutron-l3-agent service logs.

  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: ERROR neutron.agent.linux.external_process [-] ip_monitor for router with uuid 04fabe76-9316-4270-a99f-4f0ccffb8feb not found. The process should not have died
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: WARNING neutron.agent.linux.external_process [-] Respawning ip_monitor for uuid 04fabe76-9316-4270-a99f-4f0ccffb8feb
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: DEBUG neutron.agent.linux.utils [-] Unable to access /opt/stack/data/neutron/external/pids/04fabe76-9316-4270-a99f-4f0ccffb8feb.monitor.pid {{(pid=12153) get_value_from_file /opt/stack/neutron/neutron/agent/linux/utils.py:250}}
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: DEBUG neutron.agent.linux.utils [-] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'snat-04fabe76-9316-4270-a99f-4f0ccffb8feb', 'neutron-keepalived-state-change', '--router_id=04fabe76-9316-4270-a99f-4f0ccffb8feb', '--namespace=snat-04fabe76-9316-4270-a99f-4f0ccffb8feb', '--conf_dir=/opt/stack/data/neutron/ha_confs/04fabe76-9316-4270-a99f-4f0ccffb8feb', '--monitor_interface=ha-4af17105-bd', '--monitor_cidr=169.254.0.1/24', '--pid_file=/opt/stack/data/neutron/external/pids/04fabe76-9316-4270-a99f-4f0ccffb8feb.monitor.pid', '--state_path=/opt/stack/data/neutron', '--user=1000', '--group=1004'] {{(pid=12153) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:103}}
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: ERROR neutron.agent.linux.utils [-] Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace "snat-04fabe76-9316-4270-a99f-4f0ccffb8feb": No such file or directory
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]:
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process._check_child_processes" :: held 0.007s {{(pid=12153) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}}
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: Traceback (most recent call last):
  Oct 04 23:43:39 ubuntu-18-ctlr-rocky neutron-l3-agent[12153]: File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 460, in fire_timers

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830800] Re: Compute API in nova - server group "policy" field is a string rather than an object

2019-05-29 Thread Matt Riedemann
> The API reference is generated from master branch only.

True, they are published from master only, but...

> So stable branches do not have to be fixed.

If someone is building docs from the stable branch code for their
product, then the api-ref changes could be backported. This is also a
small enough fix that I think backports are OK.

** Changed in: nova/stein
   Status: Won't Fix => Confirmed

** Changed in: nova/rocky
   Status: Won't Fix => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830800

Title:
  Compute API in nova - server group "policy" field is a string rather
  than an object

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  - [x] This doc is inaccurate in this way:

  The server group policy field added in v2.64 is a string but the API
  reference says the parameter is an object.

  
https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
  /api-ref/source/parameters.yaml#L5368

  
https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/nova/api/openstack/compute/schemas/server_groups.py#L60

  ---
  Release: 19.1.0.dev441 on 2019-03-26 18:09:01
  SHA: 37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
  Source: https://opendev.org/openstack/nova/src/api-ref/source/index.rst
  URL: https://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1793029] Re: adding 0.0.0.0/0 address pair to a port bypasses all other vm security groups

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/661594
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=e02c4b214b6762823c5e1ae5719f08f5f51910e8
Submitter: Zuul
Branch:master

commit e02c4b214b6762823c5e1ae5719f08f5f51910e8
Author: Slawek Kaplonski 
Date:   Mon May 27 02:45:15 2019 +0200

[api-ref] Add short warning about ANY IP address in allowed address pair

Change-Id: Ie3ed5ea81e0df50a699174fef95eb32337ed5682
Closes-Bug: #1793029


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1793029

Title:
  adding 0.0.0.0/0 address pair to a port  bypasses all other vm
  security groups

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  On an openstack-ansible / newton setup with linuxbridge, a customer
  ran:

  neutron port-update $port-uuid --allowed-address-pairs type=dict
  list=true ip_address=0.0.0.0/0

  to bypass the IP source restriction (a pfSense router that had to route
  packets).

  The impact of running the above was that an allow-all rule was added to
  all ports in the network, bypassing all security groups.

  The iptables rule:

   905K   55M RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0   match-set NIPv44046d62c-59c8-4fd0-a547- src

  used on all ports, now matches because:

  0.0.0.0/1
  128.0.0.0/1

  were added to the ipset NIPv44046d62c-59c8-4fd0-a547 (verified by
  looking at the ipset on the nova hosts). Removing the two lines from
  the ipset restored all security groups.
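
  The two /1 entries are consistent with the /0 pair being split so it can
  be stored in the ipset (a quick illustration, not neutron's actual code):

    import ipaddress

    # Splitting 0.0.0.0/0 into the two halves an ipset can hold reproduces
    # exactly the entries observed on the nova hosts.
    any_net = ipaddress.ip_network('0.0.0.0/0')
    print([str(n) for n in any_net.subnets(prefixlen_diff=1)])
    # ['0.0.0.0/1', '128.0.0.0/1']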

  Expected result was to remove ip filtering on the single port.

  This sounds similar to:

  https://bugs.launchpad.net/neutron/+bug/1461054

  but is marked fixed long ago.

  I've marked this as a security bug as a change to a single port can
  bypass other ports security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1793029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830926] [NEW] Links to reno are incorrect

2019-05-29 Thread Stephen Finucane
Public bug reported:

There are multiple links to reno in the "release notes" section of the
contributor guide:

https://docs.openstack.org/nova/stein/contributor/releasenotes.html

These are versioned links but reno is unversioned. This results in broken
links on stable branches like the above.

** Affects: nova
 Importance: Undecided
 Assignee: Stephen Finucane (stephenfinucane)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830926

Title:
  Links to reno are incorrect

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  There are multiple links to reno in the "release notes" section of the
  contributor guide:

  https://docs.openstack.org/nova/stein/contributor/releasenotes.html

  These are versioned links but reno is unversioned. This results in
  broken links on stable branches like the above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826419] Re: dhcp agent configured with mismatching domain and host entries

2019-05-29 Thread James Page
** Also affects: neutron (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Bionic)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1826419

Title:
  dhcp agent configured with mismatching domain and host entries

Status in neutron:
  In Progress
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Bionic:
  Triaged
Status in neutron source package in Cosmic:
  New
Status in neutron source package in Disco:
  New
Status in neutron source package in Eoan:
  New

Bug description:
  Related bug 1774710 and bug 1580588

  The neutron-dhcp-agent in OpenStack >= Queens uses the dns_domain
  value set on a network to configure the '--domain' parameter of the
  dnsmasq instance that serves it; at the same time, neutron uses
  CONF.dns_domain when creating dns_assignments for ports. This results
  in a hosts file for the dnsmasq instance which uses CONF.dns_domain
  and a --domain parameter of network.dns_domain, and the two do not
  match.

  This results in a search path on instances booted on the network
  which is inconsistent with the internal DNS entries that dnsmasq
  responds with:

root@bionic-045546-2:~# host 192.168.21.222
222.21.168.192.in-addr.arpa domain name pointer 
bionic-045546-2.jamespage.internal.
root@bionic-045546-2:~# host bionic-045546-2
bionic-045546-2.designate.local has address 192.168.21.222

  In the above example:

CONF.dns_domain = jamespage.internal.
network.dns_domain = designate.local.

  Based on previous discussion in bug 1580588 I think that the
  dns_domain value for a network was intented for use for external DNS
  integration such as that provided by Designate.

  The change made under commit:

https://opendev.org/openstack/neutron/commit/137a6d61053

  appears to break this assumption, producing somewhat inconsistent
  behaviour in the dnsmasq instance for the network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1826419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830014] Re: [RFE] add API for neutron debug tool "probe"

2019-05-29 Thread LIU Yulong
** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1830014

Title:
  [RFE] add API for neutron debug tool "probe"

Status in neutron:
  Opinion

Bug description:
  Recently, due to this bug:
  https://bugs.launchpad.net/neutron/+bug/1821912
  we noticed that sometimes the guest OS is not fully up, but the test case
  is already trying to log in to it. A simple idea is to ping it first, then
  try to log in. So we hope to find a way for tempest to verify the neutron
  port link state. In all likelihood, the DB resource state is not reliable;
  we need an independent mechanism to check the VM network status. Because
  tempest is a "blackbox" test that can run on any host, we cannot use the
  resources of the existing mechanisms, such as the qdhcp or qrouter
  namespaces, to do such a check.

  Hence this RFE. We have the neutron-debug tool, which includes a "probe"
  resource on the agent side:
  https://docs.openstack.org/neutron/latest/cli/neutron-debug.html
  We could add an API to neutron and let the proper agent add such a "probe"
  for us.
  On the agent side it would be a general agent extension; you could enable
  it for the ovs-agent, L3-agent or DHCP-agent.
  Once you have such a "probe" resource on the agent side, you can run any
  command in it.
  This will be useful for the neutron CI to check the VM link state.

  So a basic workflow would be:
  1. neutron tempest creates a router connected to one subnet (network-1)
  2. neutron tempest creates one VM
  3. neutron tempest creates one floating IP and binds it to the VM-1 port
  4. create a "probe" for network-1 via the neutron API
  5. ping the VM port until it is reachable from the "probe" namespace (see
     the sketch after this list)
  6. SSH to the VM via the floating IP
  7. do the next step
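
  Step 5 could look roughly like the following on the tempest side (a
  minimal sketch, assuming it already runs wherever the "probe" namespace
  is reachable; nothing here is existing neutron or tempest API):

    import subprocess
    import time

    def wait_until_pingable(ip, timeout=120, interval=5):
        """Poll an address with ICMP until it answers or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            rc = subprocess.call(
                ['ping', '-c', '1', '-W', '2', ip],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            if rc == 0:
                return True
            time.sleep(interval)
        return False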

  One more thing: we have now marked the "neutron-debug" tool as deprecated:
  https://bugs.launchpad.net/neutron/+bug/1583700
  but we can retain the "probe" mechanism.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1830014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830739] Re: user-data in CloudSigma VM's metadata field "cloudinit-user-data" fails to configure eth1

2019-05-29 Thread Dan Watkins
Thanks for the reminder!

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1830739

Title:
  user-data in CloudSigma VM's metadata field "cloudinit-user-data"
  fails to configure eth1

Status in cloud-init:
  Invalid

Bug description:
  1. Tell us your cloud provider

     CloudSigma

  2. Any appropriate cloud-init configuration you can provide us

     network: {version: 1, config: {type: physical, name: eth1, subnets:
  {type: static, address: 10.1.1.101/24}}}

  https://cloudinit.readthedocs.io/en/latest/topics/datasources/cloudsigma.html
  says that "By default cloud-config format is expected there and the
  #cloud-config header could be omitted."

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1830739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825952] Re: When creating a volume group, the volume type is not selected, the creation is not successful, but no prompt message is reported.

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/654889
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=564fb5aa5babfc16b72a9b1aa95f65b80c663d05
Submitter: Zuul
Branch:master

commit 564fb5aa5babfc16b72a9b1aa95f65b80c663d05
Author: pengyuesheng 
Date:   Tue Apr 23 16:35:28 2019 +0800

Display the error message on create volume group form

On the create volume group form, if the volume type is not selected
the creation fails, but no prompt message is shown.
This patch displays an error message when the volume type is not selected.

Change-Id: Ib7d7531a3cdeb6166dd63381901bc4ba2db412e0
Closes-Bug: #1825952


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1825952

Title:
  When creating a volume group, the volume type is not selected, the
  creation is not successful, but no prompt message is reported.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When creating a volume group, if the volume type is not selected the
  creation fails, but no prompt message is reported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1825952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1804700] Re: keystone-manage bootstrap raises ValueError

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/660203
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=295b07cc76c8387adc2fd9a8f3efb769f260ff02
Submitter: Zuul
Branch:master

commit 295b07cc76c8387adc2fd9a8f3efb769f260ff02
Author: Gage Hugo 
Date:   Mon May 20 15:19:39 2019 -0500

Don't throw valueerror on bootstrap

When keystone-manage bootstrap is run without providing a value
to set as the admin password, keystone-manage will throw an
unhandled ValueError while displaying the proper warning
message.

This change removes the ValueError and simply has the CLI
exit out when this condition is met.

Closes-Bug: #1804700

Change-Id: I4e7d5eeb2e48ff354b44196bd11d62d51a73357b


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1804700

Title:
  keystone-manage bootstrap raises ValueError

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The 'keystone-manage bootstrap' command still raises a ValueError even
  though the 'keystone-manage fernet_setup' command was successful. The
  full output is:

  shuayb@shuayb-HP-EliteBook-Folio-9470m:~/keystone$ keystone-manage bootstrap
  Either --bootstrap-password argument or OS_BOOTSTRAP_PASSWORD must be set.
  2018-11-22 21:51:51.394 3884 CRITICAL keystone [-] Unhandled error: ValueError
  2018-11-22 21:51:51.394 3884 ERROR keystone Traceback (most recent call last):
  2018-11-22 21:51:51.394 3884 ERROR keystone   File "/home/shuayb/anaconda3/bin/keystone-manage", line 10, in 
  2018-11-22 21:51:51.394 3884 ERROR keystone     sys.exit(main())
  2018-11-22 21:51:51.394 3884 ERROR keystone   File "/home/shuayb/keystone/keystone/cmd/manage.py", line 41, in main
  2018-11-22 21:51:51.394 3884 ERROR keystone     cli.main(argv=sys.argv, developer_config_file=developer_config)
  2018-11-22 21:51:51.394 3884 ERROR keystone   File "/home/shuayb/keystone/keystone/cmd/cli.py", line 1349, in main
  2018-11-22 21:51:51.394 3884 ERROR keystone     CONF.command.cmd_class.main()
  2018-11-22 21:51:51.394 3884 ERROR keystone   File "/home/shuayb/keystone/keystone/cmd/cli.py", line 178, in main
  2018-11-22 21:51:51.394 3884 ERROR keystone     klass.do_bootstrap()
  2018-11-22 21:51:51.394 3884 ERROR keystone   File "/home/shuayb/keystone/keystone/cmd/cli.py", line 156, in do_bootstrap
  2018-11-22 21:51:51.394 3884 ERROR keystone     raise ValueError
  2018-11-22 21:51:51.394 3884 ERROR keystone ValueError
  2018-11-22 21:51:51.394 3884 ERROR keystone
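
  The shape of the fix, per the commit message above (a sketch, not
  keystone's actual code; the exit status is illustrative):

    import os
    import sys

    def do_bootstrap(bootstrap_password=None):
        password = bootstrap_password or os.environ.get('OS_BOOTSTRAP_PASSWORD')
        if not password:
            # Print the warning and exit cleanly instead of raising an
            # unhandled ValueError.
            print('Either --bootstrap-password argument or '
                  'OS_BOOTSTRAP_PASSWORD must be set.')
            sys.exit(1)
        # ... continue with the normal bootstrap work ...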

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1804700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825102] Re: In the process of creating the image, can still edit the name, description, Source Type and file.

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/653339
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f8e5c4ef8e1eb1c55aa380864b1a023f3c4b22ce
Submitter: Zuul
Branch:master

commit f8e5c4ef8e1eb1c55aa380864b1a023f3c4b22ce
Author: pengyuesheng 
Date:   Wed Apr 17 15:16:44 2019 +0800

Disable textbox on create image form when submitting

While the image is being created, the name, description,
source type and file can still be edited.
This patch disables those textboxes while the form is submitting.

Change-Id: I4607b3b6d90ce28ba1b63808fd2028755039dcde
Closes-Bug: #1825102


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1825102

Title:
  In the process of creating the image, can still edit the name,
  description,Source Type and file.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While the image is being created, the name, description, source type
  and file can still be edited.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1825102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825954] Re: When the volume group is created, if the name already exists, an error is reported, but when the volume group is updated, the name is changed to the existing one and

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/654896
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=7c77637f7643f5500234b809fce348e3f1d7a80b
Submitter: Zuul
Branch:master

commit 7c77637f7643f5500234b809fce348e3f1d7a80b
Author: pengyuesheng 
Date:   Tue Apr 23 16:55:12 2019 +0800

Do not check name duplication when creating a volume group

The cinder service allows the same name to be used for multiple groups.
There is no need to check whether the name is already used in
the "Create Volume Group" form.

Change-Id: If6f33f1a23ffaddbada614306f9cf9844e8be1e4
Closes-Bug: #1825954


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1825954

Title:
  When the volume group is created, if the name already exists, an error
  is reported, but when the volume group is updated, the name is changed
  to the existing one and it succeeds.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When a volume group is created with a name that already exists, an
  error is reported; but when a volume group is updated, changing its
  name to an existing one succeeds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1825954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826510] Re: Unable to accept volume transfer when number of volumes equal quota of volumes

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/655834
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=98c434c199f07831c84bc1cc04b5dfa74821329c
Submitter: Zuul
Branch:master

commit 98c434c199f07831c84bc1cc04b5dfa74821329c
Author: pengyuesheng 
Date:   Fri Apr 26 14:27:58 2019 +0800

Disabled accept transfer when number of volumes equal quota of volumes

Change-Id: I3d26b88617a6c4fb177cc9655bab89d23ce72914
Closes-Bug: #1826510


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1826510

Title:
   Unable to accept volume transfer when number of volumes equal quota
  of volumes

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
   Unable to accept a volume transfer when the number of volumes equals
  the volume quota, and the error message is unclear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1826510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830895] [NEW] Debug test_assert_br_int_patch_port_ofports_dont_change errors

2019-05-29 Thread Rodolfo Alonso
Public bug reported:

This functional test,
"test_assert_br_int_patch_port_ofports_dont_change", has been reported
as unstable.


Logs:
* 
http://logs.openstack.org/36/660936/1/check/neutron-functional/624aa7c/testr_results.html.gz


Error:
ft1.20: neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_br_int_patch_port_ofports_dont_change
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l2/base.py", line 214, in stop_agent
    rpc_loop_thread.wait()
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/greenthread.py", line 181, in wait
    return self._exit_event.wait()
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait
    result = hub.switch()
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 298, in switch
    return self.greenlet.switch()
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 350, in run
    self.wait(sleep_time)
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/poll.py", line 80, in wait
    presult = self.do_poll(seconds)
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/epolls.py", line 31, in do_poll
    return self.poll.poll(seconds)
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
    raise TimeoutException()
fixtures._fixtures.timeout.TimeoutException

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1830895

Title:
  Debug test_assert_br_int_patch_port_ofports_dont_change errors

Status in neutron:
  New

Bug description:
  This functional test,
  "test_assert_br_int_patch_port_ofports_dont_change", has been reported
  as unstable.

  
  Logs:
  * 
http://logs.openstack.org/36/660936/1/check/neutron-functional/624aa7c/testr_results.html.gz

  
  Error:
  ft1.20: neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_br_int_patch_port_ofports_dont_change
  testtools.testresult.real._StringException: Traceback (most recent call last):
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l2/base.py", line 214, in stop_agent
      rpc_loop_thread.wait()
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/greenthread.py", line 181, in wait
      return self._exit_event.wait()
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait
      result = hub.switch()
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 298, in switch
      return self.greenlet.switch()
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 350, in run
      self.wait(sleep_time)
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/poll.py", line 80, in wait
      presult = self.do_poll(seconds)
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/epolls.py", line 31, in do_poll
      return self.poll.poll(seconds)
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
      raise TimeoutException()
  fixtures._fixtures.timeout.TimeoutException

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1830895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523341] Re: Unable to add ipv6 static route

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/656866
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=43034cbb2330c476325edd1c6854dd93a30d8814
Submitter: Zuul
Branch:master

commit 43034cbb2330c476325edd1c6854dd93a30d8814
Author: Siebe Claes 
Date:   Thu May 2 22:44:06 2019 +0200

Fixes IPv6 static route addition

This change fixes the form validation error while adding
IPv6 static routes to a router through the OpenStack Dashboard.
Unit tests to cover this action have also been added.

Change-Id: Ied0fd27dbbc33a98617c049977539c5b3c71cdfe
Closes-Bug: #1523341


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523341

Title:
  Unable to add ipv6 static route

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  On Liberty release.

  When I add a ipv6 static route on Project->Network->Routers->Static
  Route->Add Static Route, Horizon returns "Invalid version for IP
  address".

  The same ipv6 static route can be added with neutron cli.

  # neutron -v router-update router0 -- --routes type=dict list=true
  destination=2002::0/64,nexthop=2001::100

  # neutron router-show router0
  
  +-----------------------+--------------------------------------------------------+
  | Field                 | Value                                                  |
  +-----------------------+--------------------------------------------------------+
  | admin_state_up        | True                                                   |
  | distributed           | False                                                  |
  | external_gateway_info | {"network_id": "63c39233-e44b-400b-b8c3-9a185568eedc", |
  |                       |  "enable_snat": true,                                  |
  |                       |  "external_fixed_ips": [{"subnet_id":                  |
  |                       |  "f8f96fa1-6233-4eae-92f0-fca1848bb275",               |
  |                       |  "ip_address": "172.16.207.5"}]}                       |
  | ha                    | False                                                  |
  | id                    | b02c7c45-f807-47d8-8335-fbffb3a2b6b6                   |
  | name                  | router0                                                |
  | routes                | {"destination": "2002::0/64", "nexthop": "2001::100"}  |
  | status                | ACTIVE                                                 |
  | tenant_id             | ace870e6790a4195b1b50fe69adbab69                       |
  +-----------------------+--------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606741] Re: [SRU] Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode

2019-05-29 Thread Corey Bryant
This bug was fixed in the package neutron - 2:12.0.5-0ubuntu5~cloud0
---

 neutron (2:12.0.5-0ubuntu5~cloud0) xenial-queens; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:12.0.5-0ubuntu5) bionic; urgency=medium
 .
   * Backport fix for dvr+l3ha metadata service not available
 - d/p/Spawn-metadata-proxy-on-dvr-ha-standby-routers.patch (LP: #1606741)


** Changed in: cloud-archive/queens
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606741

Title:
  [SRU] Metadata service for instances is unavailable when the l3-agent
  on the compute host  is dvr_snat mode

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  Fix Released
Status in neutron source package in Cosmic:
  Fix Released
Status in neutron source package in Disco:
  Fix Released
Status in neutron source package in Eoan:
  Fix Released

Bug description:
  [Impact] 
  Currently, if you deploy OpenStack with DVR and L3 HA enabled (and more
  than one compute host), only instances that are booted on the compute host
  that is running the VR master will have access to metadata. This patch
  ensures that both master and slave VRs have an associated haproxy
  ns-metadata process running locally on the compute host.

  [Test Case]
  * deploy OpenStack with DVR and L3 HA enabled with 2 compute hosts
  * create an ubuntu instance on each compute host
  * check that both are able to access the metadata API (i.e. cloud-init
    completes successfully); a check sketch follows this list
  * verify that there is an ns-metadata haproxy process running on each
    compute host
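
  A minimal in-guest check for the metadata step (an illustrative sketch;
  the URL is nova's standard metadata endpoint):

    import urllib.request

    # Run inside the instance: this only succeeds if the haproxy
    # ns-metadata proxy on the local compute host forwards the request.
    METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'

    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        print(resp.status, resp.read()[:200])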

  [Regression Potential] 
  None anticipated
   
  =

  In my Mitaka environment there are five nodes: controller, network1,
  network2, compute1 and compute2. I start the l3-agents in dvr_snat mode
  on all network and compute nodes and set enable_metadata_proxy to true
  in l3-agent.ini. It works well for most neutron services except the
  metadata proxy service. When I run the command "curl
  http://169.254.169.254" in an instance booted from cirros, it returns
  "curl: couldn't connect to host" and the instance cannot fetch metadata
  during its first boot.

  * Pre-conditions: start the l3-agent in dvr_snat mode on all compute
  and network nodes and set enable_metadata_proxy to true in
  l3-agent.ini.

  * Step-by-step reproduction steps:
  1. create a network and a subnet under this network
  2. create a router
  3. add the subnet to the router
  4. create an instance with cirros (or another image) on this subnet
  5. open the console for this instance and run the command
     'curl http://169.254.169.254' in bash, waiting for the result

  * Expected output: the command 'curl http://169.254.169.254' should
  return the metadata

  * Actual output:  the command actually returns "curl: couldn't connect
  to host"

  * Version:
    ** Mitaka
    ** All hosts are centos7

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1606741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828195] Re: Snapshot Name is optional parameter on create snapshot form

2019-05-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/657765
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=002c163d562f073ffebbdd23ea0682203da510d8
Submitter: Zuul
Branch:master

commit 002c163d562f073ffebbdd23ea0682203da510d8
Author: pengyuesheng 
Date:   Wed May 8 17:14:57 2019 +0800

Snapshot Name is optional parameter on create and update snapshot form

Change-Id: I32aed4f1e27ce53ab9303f470b50145c9715c4ce
Closes-Bug: #1828195


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1828195

Title:
  Snapshot Name is optional parameter on create snapshot form

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Snapshot Name is an optional parameter on the create snapshot form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1828195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830886] [NEW] Taiwanese locale not working

2019-05-29 Thread Vadym Markov
Public bug reported:

The language selector allows switching to zh-tw, but zh-cn is displayed
instead.

Django 1.11 finally removed support for the legacy Chinese locale names,
so any zh-* locale silently falls back to zh-Hans, which is equivalent
to zh-cn.

Related discussion:
https://bugs.launchpad.net/horizon/+bug/1818639
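
For reference, the modern Django language codes are "zh-hans" and "zh-hant";
a settings sketch of what the selector should offer (illustrative, not
Horizon's actual defaults):

    # settings.py (illustrative)
    LANGUAGES = (
        ('en', 'English'),
        ('zh-hans', 'Simplified Chinese'),
        ('zh-hant', 'Traditional Chinese'),  # zh-hant, not the legacy zh-tw
    )
    LANGUAGE_CODE = 'en'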

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1830886

Title:
  Taiwanese locale not working

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The language selector allows switching to zh-tw, but zh-cn is
  displayed instead.

  Django 1.11 finally removed support for the legacy Chinese locale
  names, so any zh-* locale silently falls back to zh-Hans, which is
  equivalent to zh-cn.

  Related discussion:
  https://bugs.launchpad.net/horizon/+bug/1818639

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1830886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp