[Yahoo-eng-team] [Bug 1962713] Re: Race between loadbalancer creation and FIP association with ovn-octavia provider

2022-03-02 Thread Luis Tomas Bolivar
Yes, this is what I did in this (partial) backport:
https://review.opendev.org/c/openstack/networking-ovn/+/831349/

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962713

Title:
  Race between loadbalancer creation and FIP association with ovn-
  octavia provider

Status in neutron:
  Fix Released

Bug description:
  With Kuryr, when a service of LoadBalancer type is created in kubernetes,
  the process is as follows:
  - Create a load balancer
  - Associate FIP to the load balancer VIP

  In busy environments with HA, there may be a race condition where the
  method that associates the FIP with the load balancer fails to find the
  recently created load balancer, and therefore does not write the FIP-to-VIP
  association into the OVN NB DB. This breaks connectivity to the
  LoadBalancer FIP (the k8s external IP associated with the load balancer)
  until the service is modified (for instance, by adding a new
  member/endpoint) and the FIP-to-VIP association is reconfigured.

  This problem only happens in stable/train, as the fix was released as part
  of this code reshape:
https://opendev.org/openstack/ovn-octavia-provider/commit/c6cee9207349a12e499cbc81fe0e5d4d5bfa015c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1962713/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1960346] Re: Volume detach failure in devstack-platform-centos-9-stream job

2022-03-02 Thread Ghanshyam Mann
Adding tempest for the
https://review.opendev.org/q/topic:wait_until_sshable_pingable fixes, and
we will see if that fixes things. If it does, we can remove nova from
this bug.

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Triaged

** Changed in: tempest
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1960346

Title:
  Volume detach failure in devstack-platform-centos-9-stream job

Status in OpenStack Compute (nova):
  Triaged
Status in tempest:
  Triaged

Bug description:
  The devstack-platform-centos-9-stream job is failing 100% of the time in
  the compute server rescue test, with a volume detach error:

  traceback-1: {{{
  Traceback (most recent call last):
    File "/opt/stack/tempest/tempest/common/waiters.py", line 316, in wait_for_volume_resource_status
      raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: volume 70cedb4b-e74d-4a86-a73d-ba8bce29bc99 failed to reach available status (current in-use) within the required time (196 s).
  }}}

  Traceback (most recent call last):
    File "/opt/stack/tempest/tempest/common/waiters.py", line 384, in wait_for_volume_attachment_remove_from_server
      raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Volume 70cedb4b-e74d-4a86-a73d-ba8bce29bc99 failed to detach from server cf57d12b-5e37-431e-8c71-4a7149e963ae within the required time (196 s) from the compute API perspective

  
  https://a886e0e70a23f464643f-7cd608bf14cafb686390b86bc06cde2a.ssl.cf1.rackcdn.com/827576/6/check/devstack-platform-centos-9-stream/53de74e/testr_results.html

  
  
https://zuul.openstack.org/builds?job_name=devstack-platform-centos-9-stream=0
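The tempest waiters that time out here are simple poll loops over a resource status. A minimal sketch of that shape, with `get_status` and the other names as stand-ins rather than tempest's actual API:

```python
import time

class TimeoutException(Exception):
    """Raised when the resource never reaches the target status."""

def wait_for_status(get_status, target, timeout=196, interval=1.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until it returns `target`, or raise after
    `timeout` seconds. clock/sleep are injectable so tests need not wait."""
    deadline = clock() + timeout
    while True:
        current = get_status()
        if current == target:
            return current
        if clock() >= deadline:
            raise TimeoutException(
                "failed to reach %s status (current %s) within the "
                "required time (%s s)." % (target, current, timeout))
        sleep(interval)

# Example: a volume that turns "available" on the third poll.
statuses = iter(["in-use", "in-use", "available"])
result = wait_for_status(lambda: next(statuses), "available",
                         timeout=10, sleep=lambda s: None)
```

The failures in this bug are the other branch: the status never changes, so the loop exhausts the 196-second budget and raises.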

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1960346/+subscriptions




[Yahoo-eng-team] [Bug 1962771] [NEW] Implement stable MORef for VMware

2022-03-02 Thread kiran pawar
Public bug reported:

Description
===
The "mo-ref" in VMware is not guaranteed to be stable. It can change after a
recovery, or simply after unregistering and re-registering a VM. So making it
stable is often necessary for vCenter to be able to manage VMs that were
"lost" somehow.
A possible scenario: VMs reside on an ESXi host which was down, and the ESXi
host comes back but has lost the VM. For stable VM refs, we need to restart
the agent to clear out the cache. More problematic is the volume ref, as
there is no fallback to the instance UUID, and we need to modify the DB to
reflect the new mo-ref.

Affected versions
===
vsphere (all), openstack (all).

Steps to reproduce:
===
For the VM:
1. Create an Instance
2. Stop the instance
3. Unregister the VM in vsphere
4. Register the VM in vsphere
5. Start the instance in openstack
6. An exception is raised

For the volume:
1. Create an instance with vmdk-volume attached
2. Unregister shadow-vm in vcenter
3. Register shadow-vm in vcenter
4. Detach the volume
5. An exception is raised
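What "stable MORef" amounts to is a lookup-by-UUID fallback whenever a cached managed-object reference turns out to be stale. A toy sketch of that pattern, with `lookup_by_uuid` standing in for a vCenter search such as FindAllByUuid; this is illustrative only, not nova's actual driver code:

```python
class StableMoRef:
    """Cache a VM's managed-object reference, re-resolving it by instance
    UUID when the cached ref has gone stale (e.g. the VM was unregistered
    and re-registered, which hands out a new mo-ref)."""

    def __init__(self, instance_uuid, lookup_by_uuid):
        self._uuid = instance_uuid
        self._lookup = lookup_by_uuid  # uuid -> current mo-ref, or None
        self._moref = None

    def get(self, is_valid):
        # is_valid() stands in for a vSphere call that fails with
        # ManagedObjectNotFound when the ref no longer resolves.
        if self._moref is None or not is_valid(self._moref):
            self._moref = self._lookup(self._uuid)
            if self._moref is None:
                raise LookupError("no VM found for uuid %s" % self._uuid)
        return self._moref

# Example: the VM is re-registered under a new mo-ref between calls.
refs = {"instance-uuid-1": "vm-101"}
cache = StableMoRef("instance-uuid-1", refs.get)
first = cache.get(lambda m: m in refs.values())
refs["instance-uuid-1"] = "vm-202"      # unregister + re-register
second = cache.get(lambda m: m in refs.values())
```

The volume case described above is harder precisely because there is no instance UUID to fall back on for the shadow VM, so the stale ref has to be corrected in the DB instead.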

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1962771

Title:
  Implement stable MORef for VMware

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The "mo-ref" in VMware is not guaranteed to be stable. It can change after
  a recovery, or simply after unregistering and re-registering a VM. So
  making it stable is often necessary for vCenter to be able to manage VMs
  that were "lost" somehow.
  A possible scenario: VMs reside on an ESXi host which was down, and the
  ESXi host comes back but has lost the VM. For stable VM refs, we need to
  restart the agent to clear out the cache. More problematic is the volume
  ref, as there is no fallback to the instance UUID, and we need to modify
  the DB to reflect the new mo-ref.

  Affected versions
  ===
  vsphere (all), openstack (all).

  Steps to reproduce:
  ===
  For the VM:
  1. Create an Instance
  2. Stop the instance
  3. Unregister the VM in vsphere
  4. Register the VM in vsphere
  5. Start the instance in openstack
  6. An exception is raised

  For the volume:
  1. Create an instance with vmdk-volume attached
  2. Unregister shadow-vm in vcenter
  3. Register shadow-vm in vcenter
  4. Detach the volume
  5. An exception is raised

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1962771/+subscriptions




[Yahoo-eng-team] [Bug 1962759] [NEW] jinja-template doesn't support 'do' extension.

2022-03-02 Thread paul bruno
Public bug reported:

Example user-data file with jinja:

## template: jinja
#!/bin/sh

{% set data_result = [] %}
{% set data_input = [1,2,3] %}
{% for i in data_input %}
  {% do data_result.append(i) %}
{% endfor %}
echo results: {{data_result}} >>results.out


The following exception is thrown when using the jinja2 'do' statement:

jinja2.exceptions.TemplateSyntaxError: Encountered unknown tag 'do'.
Jinja was looking for the following tags: 'endfor' or 'else'. The
innermost block that needs to be closed is 'for'.

I'm using cloud-init with a base64-encoded user-data file passed into the
terraform azure provider's custom_data.
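The behaviour above is reproducible with stock jinja2 alone, since the 'do' tag lives in an optional extension; there is also an extension-free workaround using a throwaway `set` variable. This only demonstrates the jinja2 side, not cloud-init's actual Environment setup:

```python
from jinja2 import Environment, TemplateSyntaxError

TEMPLATE = "{% for i in [1, 2, 3] %}{% do out.append(i) %}{% endfor %}{{ out }}"

# Without the extension, parsing fails exactly as in the report.
try:
    Environment().from_string(TEMPLATE)
    error = None
except TemplateSyntaxError as exc:
    error = str(exc)  # "Encountered unknown tag 'do'. ..."

# With jinja2.ext.do enabled, the same template renders fine.
env = Environment(extensions=["jinja2.ext.do"])
rendered = env.from_string(TEMPLATE).render(out=[])

# Workaround needing no extension: bind the side effect to a dummy variable.
WORKAROUND = ("{% set out = [] %}{% for i in [1, 2, 3] %}"
              "{% set _ = out.append(i) %}{% endfor %}{{ out }}")
rendered_workaround = Environment().from_string(WORKAROUND).render()
```

The `{% set _ = ... %}` form works in any jinja2 environment, so it is usable in user-data today regardless of how cloud-init configures its renderer.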

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init logs"
   
https://bugs.launchpad.net/bugs/1962759/+attachment/5565034/+files/cloud-init.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1962759

Title:
  jinja-template doesn't support 'do' extension.

Status in cloud-init:
  New

Bug description:
  Example user-data file with jinja:

  ## template: jinja
  #!/bin/sh

  {% set data_result = [] %}
  {% set data_input = [1,2,3] %}
  {% for i in data_input %}
{% do data_result.append(i) %}
  {% endfor %}
  echo results: {{data_result}} >>results.out

  
  The following exception is thrown when using the jinja2 'do' statement:

  jinja2.exceptions.TemplateSyntaxError: Encountered unknown tag 'do'.
  Jinja was looking for the following tags: 'endfor' or 'else'. The
  innermost block that needs to be closed is 'for'.

  I'm using cloud-init with a base64-encoded user-data file passed into the
  terraform azure provider's custom_data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1962759/+subscriptions




[Yahoo-eng-team] [Bug 1962488] Re: Show X-FORWARDED-FOR IP address if login failed

2022-03-02 Thread Vishal Manchanda
Looking at comment #4 of this bug, I am changing its status to Invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1962488

Title:
  Show X-FORWARDED-FOR IP address if login failed

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The failed-login syslog message shows the internal IP address of haproxy,
  not the real IP of the client.

  Tested via: horizon-21.2.6

  Message sent to syslog on a failed login:

  httpd[4218]: [wsgi:error] [pid 4218] [remote 192.168.17.12:59680]
  Login failed for user "aa" using domain "default", remote
  address 192.168.17.12.

  (192.168.17.12 is the internal ha-proxy address)

  I couldn't find a filed bug or anything related; the error message is
  generated in openstack_auth/forms.py in "def clean(self)".

  Please let me know if there is a workaround for it, or if I should see
  to get a patch together to get the real client IP address into the log
  file.

  thx
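For context, the usual workaround behind a reverse proxy is to prefer the left-most X-Forwarded-For entry over REMOTE_ADDR. A minimal sketch against WSGI-style request metadata; this is not horizon's actual code, and it is only safe when the header can only be set by your own proxy:

```python
def get_client_ip(meta):
    """Return the client IP from WSGI environ-style metadata.

    X-Forwarded-For may hold a comma-separated chain
    ("client, proxy1, proxy2"); the left-most entry is the original
    client. Only trust it when requests can only arrive via a proxy
    you control, since clients can otherwise forge the header.
    """
    forwarded = meta.get("HTTP_X_FORWARDED_FOR")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return meta.get("REMOTE_ADDR")

# Without the header, we fall back to the (proxy's) remote address.
direct = get_client_ip({"REMOTE_ADDR": "192.168.17.12"})

# With the header, we recover the real client behind haproxy.
proxied = get_client_ip({"HTTP_X_FORWARDED_FOR": "203.0.113.7, 192.168.17.12",
                         "REMOTE_ADDR": "192.168.17.12"})
```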

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1962488/+subscriptions




[Yahoo-eng-team] [Bug 1962714] Re: disable ipv6 breaks several unit tests

2022-03-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/831490
Committed: 
https://opendev.org/openstack/neutron/commit/10caa1e101a04525559f104d651ab5b2cd8108c2
Submitter: "Zuul (22348)"
Branch:master

commit 10caa1e101a04525559f104d651ab5b2cd8108c2
Author: uchenily 
Date:   Wed Mar 2 08:16:48 2022 +

Mock netutils.is_ipv6_enabled() method when testing

Mock netutils.is_ipv6_enabled() to prevent unittest results from being
affected by /proc/sys/net/ipv6/conf/default/disable_ipv6 values

Closes-Bug: #1962714
Change-Id: I3b6175eb0db6e4a791f8fa686b491a448ebf4ad9
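The pattern the fix uses, patching netutils.is_ipv6_enabled() so the host's sysctl cannot leak into test results, looks roughly like this. A stand-in function is used here instead of oslo_utils.netutils so the snippet runs anywhere:

```python
from unittest import mock

def is_ipv6_enabled():
    # Stand-in for oslo_utils.netutils.is_ipv6_enabled(), which on Linux
    # consults /proc/sys/net/ipv6/conf/default/disable_ipv6.
    try:
        with open("/proc/sys/net/ipv6/conf/default/disable_ipv6") as f:
            return f.read().strip() == "0"
    except OSError:
        return False

def reserved_dhcp_addresses():
    # Toy stand-in for the code under test: it includes the IPv6 metadata
    # address only when the host reports IPv6 enabled -- the host-dependent
    # behaviour that made these unit tests flaky.
    addrs = ["169.254.169.254/32"]
    if is_ipv6_enabled():
        addrs.append("fe80::a9fe:a9fe/64")
    return addrs

# In a unit test, pin the value so results no longer depend on the host:
with mock.patch(__name__ + ".is_ipv6_enabled", return_value=True):
    pinned = reserved_dhcp_addresses()
```

Patching the name where it is looked up (here via `__name__`) is the key detail; with the mock in place the expected `fe80::a9fe:a9fe/64` entry appears regardless of the CI host's disable_ipv6 setting.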


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962714

Title:
  disable ipv6 breaks several unit tests

Status in neutron:
  Fix Released

Bug description:
  Recently, our CI/CD environment changed, and ipv6 was disabled by
  default, which caused some unit tests to fail.

  
  $ cat /proc/sys/net/ipv6/conf/default/disable_ipv6
  0
  $ tox -e py3 neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager

  ==
  Totals
  ==
  Ran: 8 tests in 0.5640 sec.
   - Passed: 8
   - Skipped: 0
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 0
  Sum of execute time for each test: 1.0939 sec.



  
  $ echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
  $ tox -e py3 neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager

  ==
  Failed 3 tests - output below:
  ==

  
neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager.test_setup_reserved_and_enable_metadata
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):

    File "/root/work/neutron-community/neutron/tests/base.py", line 183, in func
      return f(self, *args, **kwargs)

    File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3212, in test_setup_reserved_and_enable_metadata
      self._test_setup_reserved(enable_isolated_metadata=True,

    File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3186, in _test_setup_reserved
      mgr.driver.init_l3.assert_called_with('ns-XXX',

    File "/usr/lib/python3.8/unittest/mock.py", line 913, in assert_called_with
      raise AssertionError(_error_message()) from cause

  AssertionError: expected call not found.
  Expected: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32', 'fe80::a9fe:a9fe/64'], namespace='qdhcp-ns')
  Actual: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32'], namespace='qdhcp-ns')

  
  
neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager.test_setup_reserved_with_force_metadata_enable
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):

    File "/root/work/neutron-community/neutron/tests/base.py", line 183, in func
      return f(self, *args, **kwargs)

    File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3206, in test_setup_reserved_with_force_metadata_enable
      self._test_setup_reserved(force_metadata=True)

    File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3186, in _test_setup_reserved
      mgr.driver.init_l3.assert_called_with('ns-XXX',

    File "/usr/lib/python3.8/unittest/mock.py", line 913, in assert_called_with
      raise AssertionError(_error_message()) from cause

  AssertionError: expected call not found.
  Expected: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32', 'fe80::a9fe:a9fe/64'], namespace='qdhcp-ns')
  Actual: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32'], namespace='qdhcp-ns')

  
  
neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager.test_setup_reserved_with_isolated_metadata_enable
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):

    File "/root/work/neutron-community/neutron/tests/base.py", line 183, in func
      return f(self, *args, **kwargs)

    File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3200, in test_setup_reserved_with_isolated_metadata_enable
      self._test_setup_reserved(enable_isolated_metadata=True)

    File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3186, in _test_setup_reserved
  

[Yahoo-eng-team] [Bug 1962726] [NEW] ssh-rsa key is no longer allowed by recent openssh

2022-03-02 Thread Takashi Kajinami
Public bug reported:

Description
===
Currently, the create-keypair API, when called without actual key content,
returns a key generated on the server side, formatted as ssh-rsa.

However, ssh-rsa is no longer enabled by default since OpenSSH 8.8:

https://www.openssh.com/txt/release-8.8

```
This release disables RSA signatures using the SHA-1 hash algorithm
by default. This change has been made as the SHA-1 hash algorithm is
cryptographically broken, and it is possible to create chosen-prefix
hash collisions for [...]
```

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1962726
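Until the keypair API grows support for other key types, the OpenSSH 8.8 release notes describe a client-side opt-in for hosts that still need SHA-1 RSA signatures. A hedged example ssh_config fragment (the host name is a placeholder; this deliberately weakens security and should only be a stopgap):

```
# ~/.ssh/config -- re-accept SHA-1 RSA signatures for one legacy host only.
Host legacy-host.example.com
    PubkeyAcceptedAlgorithms +ssh-rsa
    HostkeyAlgorithms +ssh-rsa
```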

Title:
  ssh-rsa key is no longer allowed by recent openssh

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Currently, the create-keypair API, when called without actual key content,
  returns a key generated on the server side, formatted as ssh-rsa.

  However, ssh-rsa is no longer enabled by default since OpenSSH 8.8:

  https://www.openssh.com/txt/release-8.8

  ```
  This release disables RSA signatures using the SHA-1 hash algorithm
  by default. This change has been made as the SHA-1 hash algorithm is
  cryptographically broken, and it is possible to create chosen-prefix
  hash collisions for [...]
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1962726/+subscriptions




[Yahoo-eng-team] [Bug 1962714] [NEW] disable ipv6 breaks several unit tests

2022-03-02 Thread uchenily
Public bug reported:

Recently, our CI/CD environment changed, and ipv6 was disabled by
default, which caused some unit tests to fail.


$ cat /proc/sys/net/ipv6/conf/default/disable_ipv6
0
$ tox -e py3 neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager

==
Totals
==
Ran: 8 tests in 0.5640 sec.
 - Passed: 8
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 1.0939 sec.



$ echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
$ tox -e py3 neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager

==
Failed 3 tests - output below:
==

neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager.test_setup_reserved_and_enable_metadata
--

Captured traceback:
~~~
Traceback (most recent call last):

  File "/root/work/neutron-community/neutron/tests/base.py", line 183, in func
    return f(self, *args, **kwargs)

  File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3212, in test_setup_reserved_and_enable_metadata
    self._test_setup_reserved(enable_isolated_metadata=True,

  File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3186, in _test_setup_reserved
    mgr.driver.init_l3.assert_called_with('ns-XXX',

  File "/usr/lib/python3.8/unittest/mock.py", line 913, in assert_called_with
    raise AssertionError(_error_message()) from cause

AssertionError: expected call not found.
Expected: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32', 'fe80::a9fe:a9fe/64'], namespace='qdhcp-ns')
Actual: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32'], namespace='qdhcp-ns')


neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager.test_setup_reserved_with_force_metadata_enable
-

Captured traceback:
~~~
Traceback (most recent call last):

  File "/root/work/neutron-community/neutron/tests/base.py", line 183, in func
    return f(self, *args, **kwargs)

  File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3206, in test_setup_reserved_with_force_metadata_enable
    self._test_setup_reserved(force_metadata=True)

  File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3186, in _test_setup_reserved
    mgr.driver.init_l3.assert_called_with('ns-XXX',

  File "/usr/lib/python3.8/unittest/mock.py", line 913, in assert_called_with
    raise AssertionError(_error_message()) from cause

AssertionError: expected call not found.
Expected: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32', 'fe80::a9fe:a9fe/64'], namespace='qdhcp-ns')
Actual: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32'], namespace='qdhcp-ns')


neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager.test_setup_reserved_with_isolated_metadata_enable


Captured traceback:
~~~
Traceback (most recent call last):

  File "/root/work/neutron-community/neutron/tests/base.py", line 183, in func
    return f(self, *args, **kwargs)

  File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3200, in test_setup_reserved_with_isolated_metadata_enable
    self._test_setup_reserved(enable_isolated_metadata=True)

  File "/root/work/neutron-community/neutron/tests/unit/agent/linux/test_dhcp.py", line 3186, in _test_setup_reserved
    mgr.driver.init_l3.assert_called_with('ns-XXX',

  File "/usr/lib/python3.8/unittest/mock.py", line 913, in assert_called_with
    raise AssertionError(_error_message()) from cause

AssertionError: expected call not found.
Expected: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32', 'fe80::a9fe:a9fe/64'], namespace='qdhcp-ns')
Actual: init_l3('ns-XXX', ['192.168.0.6/24', 'fdca:3ba5:a17a:4ba3::2/64', '169.254.169.254/32'], namespace='qdhcp-ns')


==
Totals
==
Ran: 8 tests in 0.4921 sec.
 - Passed: 5
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 3
Sum of execute time for each test: 0.8743 sec.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962714

Title:
  disable ipv6 breaks several unit tests

Status in neutron:
  New

Bug description:
  Recently, our CI/CD environment changed, and ipv6 was disabled 

[Yahoo-eng-team] [Bug 1962713] [NEW] Race between loadbalancer creation and FIP association with ovn-octavia provider

2022-03-02 Thread Luis Tomas Bolivar
Public bug reported:

With Kuryr, when a service of LoadBalancer type is created in kubernetes,
the process is as follows:
- Create a load balancer
- Associate FIP to the load balancer VIP

In busy environments with HA, there may be a race condition where the
method that associates the FIP with the load balancer fails to find the
recently created load balancer, and therefore does not write the FIP-to-VIP
association into the OVN NB DB. This breaks connectivity to the LoadBalancer
FIP (the k8s external IP associated with the load balancer) until the
service is modified (for instance, by adding a new member/endpoint) and the
FIP-to-VIP association is reconfigured.

This problem only happens in stable/train, as the fix was released as part
of this code reshape:
https://opendev.org/openstack/ovn-octavia-provider/commit/c6cee9207349a12e499cbc81fe0e5d4d5bfa015c
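One common way to close this kind of race is to retry the load-balancer lookup with a bounded back-off rather than giving up on the first miss. A generic sketch of that shape; the `lookup` callable and all names are illustrative, not the ovn-octavia provider's actual API:

```python
import time

def find_lb_for_vip(lookup, vip_port_id, retries=5, interval=1.0,
                    sleep=time.sleep):
    """Retry lookup(vip_port_id) to ride out the window between load
    balancer creation and the FIP-association handler seeing it."""
    for attempt in range(retries):
        lb = lookup(vip_port_id)
        if lb is not None:
            return lb
        if attempt < retries - 1:
            sleep(interval)
    raise LookupError("loadbalancer for VIP port %s not found after %d "
                      "attempts" % (vip_port_id, retries))

# Example: the LB only becomes visible on the third attempt.
results = iter([None, None, "lb-1"])
found = find_lb_for_vip(lambda p: next(results), "vip-port-1",
                        sleep=lambda s: None)
```

With a bounded retry the FIP-to-VIP association still fails loudly when the load balancer truly never appears, instead of silently skipping the OVN NB DB update.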

** Affects: neutron
 Importance: Undecided
 Assignee: Luis Tomas Bolivar (ltomasbo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Luis Tomas Bolivar (ltomasbo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962713

Title:
  Race between loadbalancer creation and FIP association with ovn-
  octavia provider

Status in neutron:
  New

Bug description:
  With Kuryr, when a service of LoadBalancer type is created in kubernetes,
  the process is as follows:
  - Create a load balancer
  - Associate FIP to the load balancer VIP

  In busy environments with HA, there may be a race condition where the
  method that associates the FIP with the load balancer fails to find the
  recently created load balancer, and therefore does not write the FIP-to-VIP
  association into the OVN NB DB. This breaks connectivity to the
  LoadBalancer FIP (the k8s external IP associated with the load balancer)
  until the service is modified (for instance, by adding a new
  member/endpoint) and the FIP-to-VIP association is reconfigured.

  This problem only happens in stable/train, as the fix was released as part
  of this code reshape:
https://opendev.org/openstack/ovn-octavia-provider/commit/c6cee9207349a12e499cbc81fe0e5d4d5bfa015c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1962713/+subscriptions

