Verification on Focal
=====================
Before upgrading
----------------
Check the installed octavia version:
dpkg -l | grep python3-octavia
ii  python3-octavia  1:10.1.1-0ubuntu1.2~cloud0  all  OpenStack Load Balancer as a Service - Python libraries
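To confirm the package version on every octavia unit at once, something
like this should work (a sketch assuming the application is deployed under
the name octavia and a Juju 2.9-style client, where juju run executes a
command on units):
juju run --application octavia "dpkg -l | grep python3-octavia"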
After configuring the environment as described in steps 1 through 5,
inspect the created resources:
openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 0bb6b398-f82f-4459-8f93-a6119580a4f6 | lb-mgmt-net | 71aa3e00-efda-46c1-956d-3d869da2e91d |
| 581a85c6-d51f-4328-9e5c-c50983b2e45f | ext_net     | 26fc7f10-51bd-40cb-8c2f-e98c9015eb0e |
| c9949cbb-a737-4c1e-979f-16a38cbd5c74 | net1        | 2a1eec62-6ea5-4fad-ad3a-abe19bbe52a6 |
| e7373b24-f089-4b01-aec1-3d88a6fd08f6 | net2        | 770cbe2e-ac01-4d60-b1c7-26d4cf294afc |
+--------------------------------------+-------------+--------------------------------------+
openstack subnet list
+--------------------------------------+------------------+--------------------------------------+--------------------------+
| ID                                   | Name             | Network                              | Subnet                   |
+--------------------------------------+------------------+--------------------------------------+--------------------------+
| 26fc7f10-51bd-40cb-8c2f-e98c9015eb0e | ext_net_subnet   | 581a85c6-d51f-4328-9e5c-c50983b2e45f | 10.149.93.128/25         |
| 2a1eec62-6ea5-4fad-ad3a-abe19bbe52a6 | subnet1          | c9949cbb-a737-4c1e-979f-16a38cbd5c74 | 192.168.21.0/24          |
| 71aa3e00-efda-46c1-956d-3d869da2e91d | lb-mgmt-subnetv6 | 0bb6b398-f82f-4459-8f93-a6119580a4f6 | fc00:a611:9580:a4f6::/64 |
| 770cbe2e-ac01-4d60-b1c7-26d4cf294afc | subnet2          | e7373b24-f089-4b01-aec1-3d88a6fd08f6 | 172.16.0.0/24            |
+--------------------------------------+------------------+--------------------------------------+--------------------------+
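If only the name-to-CIDR mapping is of interest, it can be extracted from
the JSON output (a sketch assuming the JSON keys mirror the table column
names above, as they do in recent OpenStack clients):
openstack subnet list -f json | jq -r '.[] | [.Name, .Subnet] | @tsv'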
openstack router show router1 -f json | jq ."interfaces_info"
[
{
"port_id": "accaf83a-f32a-4da0-b4fe-d90ab09109e5",
"ip_address": "172.16.0.1",
"subnet_id": "770cbe2e-ac01-4d60-b1c7-26d4cf294afc"
},
{
"port_id": "dee7c476-06fa-4aad-a2ab-98a058462ee9",
"ip_address": "192.168.21.1",
"subnet_id": "2a1eec62-6ea5-4fad-ad3a-abe19bbe52a6"
}
]
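The router's external gateway can be confirmed the same way;
external_gateway_info is a standard field of the router show output:
openstack router show router1 -f json | jq ."external_gateway_info"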
openstack server list
+--------------------------------------+---------+--------+---------------------+--------------+---------+
| ID                                   | Name    | Status | Networks            | Image        | Flavor  |
+--------------------------------------+---------+--------+---------------------+--------------+---------+
| 47e76971-dd3c-41e2-b39e-c12003a237f0 | server2 | ACTIVE | net2=172.16.0.134   | cirros-0.4.0 | m1.tiny |
| 6318c194-b2e6-49e1-8fd7-678fc3bd2a0e | server1 | ACTIVE | net1=192.168.21.205 | cirros-0.4.0 | m1.tiny |
+--------------------------------------+---------+--------+---------------------+--------------+---------+
openstack loadbalancer list
+--------------------------------------+------+----------------------------------+--------------+---------------------+------------------+----------+
| id                                   | name | project_id                       | vip_address  | provisioning_status | operating_status | provider |
+--------------------------------------+------+----------------------------------+--------------+---------------------+------------------+----------+
| eb49c915-af95-4c20-8b23-5400e135e4bf | lb   | 17630c14e7b14acfbfb65c9f02b29742 | 192.168.21.5 | ACTIVE              | ONLINE           | amphora  |
+--------------------------------------+------+----------------------------------+--------------+---------------------+------------------+----------+
openstack loadbalancer member list pool
+--------------------------------------+---------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| id                                   | name    | project_id                       | provisioning_status | address      | protocol_port | operating_status | weight |
+--------------------------------------+---------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| 4de6ef38-828c-476d-aec6-b467c1c272a0 | server2 | 17630c14e7b14acfbfb65c9f02b29742 | ACTIVE              | 172.16.0.134 | 22            | NO_MONITOR       | 1      |
+--------------------------------------+---------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+--------------+
| id                                   | loadbalancer_id                      | status    | role   | lb_network_ip                           | ha_ip        |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+--------------+
| 52e4da19-96cc-4057-9b8b-854494a08dd0 | eb49c915-af95-4c20-8b23-5400e135e4bf | ALLOCATED | MASTER | fc00:a611:9580:a4f6:f816:3eff:feef:eb4  | 192.168.21.5 |
| 907ca365-da50-4e05-aab8-ef865839811c | eb49c915-af95-4c20-8b23-5400e135e4bf | ALLOCATED | BACKUP | fc00:a611:9580:a4f6:f816:3eff:febb:72bd | 192.168.21.5 |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+--------------+
The two networks are connected at layer 3, the load balancer topology is
ACTIVE_STANDBY, server2 is a member of the listening pool on subnet2, and
the amphorae share a VIP on subnet1.
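For completeness, the listener and pool backing the VIP can be inspected
with the same CLI:
openstack loadbalancer listener list
openstack loadbalancer pool list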
Access the juju unit that has the required network namespaces:
juju ssh nova-compute/0 "export NET1_UUID=$(openstack network show net1 -f json | jq --raw-output .id); \
  export NET2_UUID=$(openstack network show net2 -f json | jq --raw-output .id); \
  export SERVER1_IP=$(openstack server show server1 --format json --column addresses | jq --raw-output '.addresses.net1[]'); \
  export SERVER2_IP=$(openstack server show server2 --format json --column addresses | jq --raw-output '.addresses.net2[]'); \
  export VIP_IP=$(openstack loadbalancer list -f json | jq --raw-output '.[].vip_address'); \
  bash -l"
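Inside the resulting login shell, a quick echo confirms the variables were
populated before proceeding:
echo "NET1=$NET1_UUID NET2=$NET2_UUID S1=$SERVER1_IP S2=$SERVER2_IP VIP=$VIP_IP"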
Verify that server1 can reach server2 through the L3 router:
ubuntu@juju-b321fc-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET1_UUID \
    ssh -t cirros@$SERVER1_IP "VIP_IP=$VIP_IP; ping $SERVER2_IP"
[email protected]'s password:
PING 172.16.0.134 (172.16.0.134): 56 data bytes
64 bytes from 172.16.0.134: seq=0 ttl=63 time=0.334 ms
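The ping above runs until interrupted; for a scripted pass/fail check a
bounded variant works as well (the busybox ping in CirrOS supports -c):
sudo ip netns exec ovnmeta-$NET1_UUID \
    ssh -t cirros@$SERVER1_IP "ping -c 3 $SERVER2_IP"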
Verify that server1 can SSH to server2 through the VIP:
ubuntu@juju-b321fc-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET1_UUID \
    ssh -t cirros@$SERVER1_IP "VIP_IP=$VIP_IP; ssh cirros@${VIP_IP}"
[email protected]'s password:
[email protected]'s password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:97:a2:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.134/24 brd 172.16.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe97:a229/64 scope link
valid_lft forever preferred_lft forever
The eth0 address (172.16.0.134) confirms the session through the VIP landed
on server2, the pool member.
Verify that server2 can reach server1 through the L3 router:
ubuntu@juju-b321fc-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET2_UUID \
    ssh -t cirros@$SERVER2_IP "VIP_IP=$VIP_IP; ping $SERVER1_IP"
[email protected]'s password:
PING 192.168.21.205 (192.168.21.205): 56 data bytes
64 bytes from 192.168.21.205: seq=0 ttl=63 time=1.352 ms
Observe that server2 cannot reach its own subnet through the VIP / load
balancer; the connection hangs:
ubuntu@juju-b321fc-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET2_UUID \
    ssh -t cirros@$SERVER2_IP "VIP_IP=$VIP_IP; ssh cirros@$VIP_IP"
[email protected]'s password:
## We never receive a password prompt to log into the VIP
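To bound the hang in a scripted run, the TCP connection to the VIP can be
probed with a timeout instead; this is a sketch assuming the busybox nc in
the CirrOS image, which supports -w for a connect timeout:
sudo ip netns exec ovnmeta-$NET2_UUID \
    ssh -t cirros@$SERVER2_IP "nc -w 5 $VIP_IP 22 </dev/null && echo VIP reachable || echo VIP unreachable"
Before the fix this should print "VIP unreachable" after five seconds
instead of hanging indefinitely.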
After upgrading
---------------
ubuntu@juju-b321fc-octavia-routing-repro-9:~$ dpkg -l | grep python3-octavia
ii  python3-octavia  1:10.1.1-0ubuntu1.3~cloud0  all  OpenStack Load Balancer as a Service - Python libraries
All the resources already exist, so we only need to fail over the load
balancer so that the upgraded octavia-worker service refreshes the
keepalived templates and populates the route:
openstack loadbalancer failover lb
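Failover is asynchronous; a small wait loop using the client's standard
-f value / -c column options confirms it has finished before retesting:
until [ "$(openstack loadbalancer show lb -f value -c provisioning_status)" = "ACTIVE" ]; do
    sleep 10
done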
After the failover completes, verify that server1 can still reach server2
through the load balancer:
ubuntu@juju-b321fc-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET1_UUID \
    ssh -t cirros@$SERVER1_IP "VIP_IP=$VIP_IP; ssh cirros@${VIP_IP}"
[email protected]'s password:
[email protected]'s password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:97:a2:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.134/24 brd 172.16.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe97:a229/64 scope link
valid_lft forever preferred_lft forever
Observe that server2 can now reach itself through the load balancer:
ubuntu@juju-b321fc-octavia-routing-repro-8:~$ sudo ip netns exec ovnmeta-$NET2_UUID \
    ssh -t cirros@$SERVER2_IP "VIP_IP=$VIP_IP; ssh cirros@$VIP_IP"
[email protected]'s password:
[email protected]'s password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:97:a2:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.134/24 brd 172.16.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe97:a229/64 scope link
valid_lft forever preferred_lft forever
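As a final cross-check beyond the steps above, relisting the amphorae
(recent python-octaviaclient releases accept a --loadbalancer filter)
should show two freshly built MASTER/BACKUP instances with new IDs, since
a failover replaces both amphorae:
openstack loadbalancer amphora list --loadbalancer lb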
** Tags removed: verification-yoga-needed
** Tags added: verification-yoga-done
https://bugs.launchpad.net/bugs/2117280
Title:
[SRU] Asymmetric routing issue on amphorae in ACTIVE_STANDBY topology