Verification on Jammy
=====================
Before upgrading
----------------
Check the installed octavia version:
dpkg -l | grep python3-octavia
ii  python3-octavia  1:10.1.1-0ubuntu1.2  all  OpenStack Load Balancer as a Service - Python libraries
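Optionally, apt-cache policy shows the same version together with the pocket it was installed from:
apt-cache policy python3-octavia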
After configuring the environment as described in steps 1 through 5,
inspect the created resources:
openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 2048f21b-8b44-4e54-94cf-6d46ce63da05 | net1        | ec0f5016-e886-417a-9fbc-ed9e4ab658ca |
| 68fb4cba-94f9-4182-af9a-ed74e9e3a762 | ext_net     | f5d90c73-7bed-4d08-8aaf-b2d6659974c7 |
| 7e03fc3e-f872-4ccc-ada4-0961cb3cc650 | net2        | de9f6aed-5c5d-4ebf-b03a-a0800857ff4e |
| ba6e2618-dcac-4c2a-85b2-fc8826254fb9 | lb-mgmt-net | 3d74aca3-2544-4225-906b-8ebadde548a6 |
+--------------------------------------+-------------+--------------------------------------+
openstack subnet list
+--------------------------------------+------------------+--------------------------------------+--------------------------+
| ID                                   | Name             | Network                              | Subnet                   |
+--------------------------------------+------------------+--------------------------------------+--------------------------+
| 3d74aca3-2544-4225-906b-8ebadde548a6 | lb-mgmt-subnetv6 | ba6e2618-dcac-4c2a-85b2-fc8826254fb9 | fc00:fc88:2625:4fb9::/64 |
| de9f6aed-5c5d-4ebf-b03a-a0800857ff4e | subnet2          | 7e03fc3e-f872-4ccc-ada4-0961cb3cc650 | 172.16.0.0/24            |
| ec0f5016-e886-417a-9fbc-ed9e4ab658ca | subnet1          | 2048f21b-8b44-4e54-94cf-6d46ce63da05 | 192.168.21.0/24          |
| f5d90c73-7bed-4d08-8aaf-b2d6659974c7 | ext_net_subnet   | 68fb4cba-94f9-4182-af9a-ed74e9e3a762 | 10.149.93.128/25         |
+--------------------------------------+------------------+--------------------------------------+--------------------------+
openstack router show router1 -f json | jq '.interfaces_info'
[
{
"port_id": "5610db12-03e3-4b13-ad6d-e16b475aef59",
"ip_address": "192.168.21.1",
"subnet_id": "ec0f5016-e886-417a-9fbc-ed9e4ab658ca"
},
{
"port_id": "d8b79dd1-34b0-4fef-9bfb-8371554d7762",
"ip_address": "172.16.0.1",
"subnet_id": "de9f6aed-5c5d-4ebf-b03a-a0800857ff4e"
}
]
openstack server list
+--------------------------------------+---------+--------+---------------------+--------------+---------+
| ID                                   | Name    | Status | Networks            | Image        | Flavor  |
+--------------------------------------+---------+--------+---------------------+--------------+---------+
| 499ae2c7-fa29-44d8-a899-9b899fc1d592 | server2 | ACTIVE | net2=172.16.0.169   | cirros-0.4.0 | m1.tiny |
| 00ca7c47-9cfb-404b-abcc-19d980192da4 | server1 | ACTIVE | net1=192.168.21.101 | cirros-0.4.0 | m1.tiny |
+--------------------------------------+---------+--------+---------------------+--------------+---------+
openstack loadbalancer list
+--------------------------------------+------+----------------------------------+---------------+---------------------+------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | operating_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+------------------+----------+
| 2cbba470-deb3-45d5-a0cd-e9228bcbc74b | lb   | fad6ac89259e41bbbb07260a9716f0c9 | 192.168.21.91 | ACTIVE              | ONLINE           | amphora  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+------------------+----------+
openstack loadbalancer member list pool
+--------------------------------------+---------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| id                                   | name    | project_id                       | provisioning_status | address      | protocol_port | operating_status | weight |
+--------------------------------------+---------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| db014657-5022-4c1b-afc2-59a70e1b537d | server2 | fad6ac89259e41bbbb07260a9716f0c9 | ACTIVE              | 172.16.0.169 | 22            | NO_MONITOR       | 1      |
+--------------------------------------+---------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+---------------+
| id                                   | loadbalancer_id                      | status    | role   | lb_network_ip                           | ha_ip         |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+---------------+
| 94330239-3a8a-423e-9bb2-853231ec90d5 | 2cbba470-deb3-45d5-a0cd-e9228bcbc74b | ALLOCATED | MASTER | fc00:fc88:2625:4fb9:f816:3eff:fe95:71e6 | 192.168.21.91 |
| fc60989a-e771-4bee-8ff8-3533950af491 | 2cbba470-deb3-45d5-a0cd-e9228bcbc74b | ALLOCATED | BACKUP | fc00:fc88:2625:4fb9:f816:3eff:fe51:8766 | 192.168.21.91 |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+---------------+
The two networks are connected at L3 through router1, the loadbalancer
topology is ACTIVE_STANDBY (one MASTER and one BACKUP amphora), server2
is a member of the listening pool on subnet2, and the amphorae hold the
VIP on subnet1.
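As an optional cross-check of that wiring (the pool name 'pool' matches
the member listing above), the listener and pool can be inspected directly:
openstack loadbalancer listener list
openstack loadbalancer pool show pool -f value -c protocol -c members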
Access the juju unit that has the required network namespaces. Note that
the command substitutions are evaluated locally, so the resource values
are baked into the exports before the remote shell starts:
juju ssh nova-compute/0 "export NET1_UUID=$(openstack network show net1 -f json | jq --raw-output .id); export NET2_UUID=$(openstack network show net2 -f json | jq --raw-output .id); export SERVER1_IP=$(openstack server show server1 --format json --column addresses | jq --raw-output '.addresses.net1[]'); export SERVER2_IP=$(openstack server show server2 --format json --column addresses | jq --raw-output '.addresses.net2[]'); export VIP_IP=$(openstack loadbalancer list -f json | jq --raw-output '.[].vip_address'); bash -l"
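Optionally, verify the exported values landed in the spawned shell:
echo "$NET1_UUID $NET2_UUID $SERVER1_IP $SERVER2_IP $VIP_IP"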
Verify server1 can reach server2 through the L3 router:
ubuntu@juju-7cb855-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET1_UUID ssh -t cirros@$SERVER1_IP "VIP_IP=$VIP_IP; ping $SERVER2_IP"
[email protected]'s password:
PING 172.16.0.169 (172.16.0.169): 56 data bytes
64 bytes from 172.16.0.169: seq=0 ttl=63 time=1.839 ms
Verify that server1 can ssh to server2 through the VIP:
ubuntu@juju-7cb855-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET1_UUID ssh -t cirros@$SERVER1_IP "VIP_IP=$VIP_IP; ssh cirros@${VIP_IP}"
[email protected]'s password:
[email protected]'s password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:b8:ad:9d brd ff:ff:ff:ff:ff:ff
inet 172.16.0.169/24 brd 172.16.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feb8:ad9d/64 scope link
valid_lft forever preferred_lft forever
Verify server2 can reach server1 through the L3 router:
ubuntu@juju-7cb855-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET2_UUID ssh -t cirros@$SERVER2_IP "VIP_IP=$VIP_IP; ping $SERVER1_IP"
[email protected]'s password:
PING 192.168.21.101 (192.168.21.101): 56 data bytes
64 bytes from 192.168.21.101: seq=0 ttl=63 time=0.390 ms
Observe that server2 cannot reach its own subnet through the VIP /
loadbalancer: the connection hangs because the amphora's return traffic
to subnet2 takes an asymmetric route.
ubuntu@juju-7cb855-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET2_UUID ssh -t cirros@$SERVER2_IP "VIP_IP=$VIP_IP; ssh cirros@$VIP_IP"
[email protected]'s password:
## We never receive a prompt to log into the VIP
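Because the command blocks indefinitely, a timeout wrapper makes the
negative result easier to capture when scripting this step. A sketch
using coreutils timeout(1) on the unit; the 30-second budget is an
arbitrary choice, and timeout exits 124 when it fires:
sudo ip netns exec ovnmeta-$NET2_UUID timeout 30 ssh -t cirros@$SERVER2_IP "VIP_IP=$VIP_IP; ssh cirros@$VIP_IP"
[ $? -eq 124 ] && echo "ssh to the VIP from server2 timed out (expected before the fix)"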
After upgrading
---------------
Check the upgraded octavia version:
dpkg -l | grep python3-octavia
ii  python3-octavia  1:10.1.1-0ubuntu1.3  all  OpenStack Load Balancer as a Service - Python libraries
All the resources already exist, so we just need to fail over the
loadbalancer so that the upgraded octavia-worker service refreshes the
keepalived templates and populates the route:
openstack loadbalancer failover lb
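The failover runs asynchronously; one way to wait for it is to poll the
provisioning status until it returns to ACTIVE:
watch -n 5 "openstack loadbalancer show lb -f value -c provisioning_status"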
After the failover completes, verify that server1 can still reach server2
through the loadbalancer:
ubuntu@juju-7cb855-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET1_UUID ssh -t cirros@$SERVER1_IP "VIP_IP=$VIP_IP; ssh cirros@${VIP_IP}"
[email protected]'s password:
[email protected]'s password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:b8:ad:9d brd ff:ff:ff:ff:ff:ff
inet 172.16.0.169/24 brd 172.16.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feb8:ad9d/64 scope link
valid_lft forever preferred_lft forever
Observe that server2 can now reach itself through the loadbalancer:
ubuntu@juju-7cb855-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET2_UUID ssh -t cirros@$SERVER2_IP "VIP_IP=$VIP_IP; ssh cirros@$VIP_IP"
[email protected]'s password:
[email protected]'s password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:b8:ad:9d brd ff:ff:ff:ff:ff:ff
inet 172.16.0.169/24 brd 172.16.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feb8:ad9d/64 scope link
valid_lft forever preferred_lft forever
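For a deeper, optional check that the fix populated the expected route,
the routing tables inside the new MASTER amphora can be inspected. This
is only a sketch: it assumes SSH access to the amphorae over lb-mgmt-net
with the key configured at deployment time, and that the image uses the
standard amphora-haproxy network namespace.
## <master-lb-network-ip> is a placeholder; take the lb_network_ip of the
## current MASTER from a fresh 'openstack loadbalancer amphora list'
ssh ubuntu@<master-lb-network-ip> -- sudo ip netns exec amphora-haproxy ip route show table all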
** Tags removed: verification-needed verification-needed-jammy
** Tags added: verification-done verification-done-jammy
https://bugs.launchpad.net/bugs/2117280
Title:
  [SRU] Asymmetric routing issue on amphorae in ACTIVE_STANDBY topology