On Wed, May 15, 2019 at 11:36 AM <[email protected]> wrote:
>
> From: Numan Siddique <[email protected]>
>
> This new type is added for the following reasons:
>
> - When a load balancer is created in an OpenStack deployment with the
>   Octavia service, it creates a logical port 'VIP' for the virtual IP.
>
> - This logical port is not bound to any VIF.
>
> - The Octavia service creates a service VM (with another logical port 'P'
>   which belongs to the same logical switch).
>
> - The virtual IP 'VIP' is configured on this service VM.
>
> - This service VM provides the load balancing for the VIP with the
>   configured backend IPs.
>
> - The Octavia service can be configured to create a few service VMs in
>   active-standby mode, with the active VM configured with the VIP. The VIP
>   can move between these service nodes.
>
> Presently there are a few problems:
>
> - When a floating IP (an externally reachable IP) is associated with the
>   VIP and the compute nodes have external connectivity, external traffic
>   cannot reach the VIP using the floating IP, because the VIP logical port
>   would be down. The dnat_and_snat entry in the NAT table for this VIP
>   will have 'external_mac' and 'logical_port' configured.
>
> - The only way to make it work is to clear the 'external_mac' entry so
>   that the gateway chassis does the DNAT for the VIP.
>
> To solve these problems, this patch proposes a new logical port type -
> virtual. The CMS, when creating the logical port for the VIP, should:
>
> - set the type to 'virtual'
>
> - configure the VIP in the newly added column Logical_Switch_Port.virtual_ip
>
> - set the virtual parents in the newly added column
>   Logical_Switch_Port.virtual_parents. These virtual parents are the ports
>   which can be configured with the VIP.
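[For illustration, the CMS-side setup described in the steps above might look roughly like this with ovn-nbctl. This is a sketch assuming the column names proposed in this patch (virtual_ip, virtual_parents); the switch name sw0 and the port names are illustrative, taken from the example below.]

```shell
# Sketch only: assumes the virtual_ip / virtual_parents columns
# proposed by this patch; sw0, sw0-vip, sw0-p1, sw0-p2 are
# illustrative names.
ovn-nbctl lsp-add sw0 sw0-vip
ovn-nbctl lsp-set-type sw0-vip virtual
ovn-nbctl set Logical_Switch_Port sw0-vip virtual_ip=10.0.0.10
ovn-nbctl set Logical_Switch_Port sw0-vip virtual_parents=sw0-p1,sw0-p2
```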
>
> Suppose the virtual_ip is configured to 10.0.0.10 on a virtual logical
> port 'sw0-vip' and the virtual_parents are set to [sw0-p1, sw0-p2]. Then
> the logical flows below are added in the ls_in_arp_rsp logical switch
> pipeline:
>
> - table=11(ls_in_arp_rsp), priority=100,
>   match=(inport == "sw0-p1" && ((arp.op == 1 && arp.spa == 10.0.0.10 && arp.tpa == 10.0.0.10) ||
>          (arp.op == 2 && arp.spa == 10.0.0.10))),
>   action=(bind_vport("sw0-vip", inport); next;)
>
> - table=11(ls_in_arp_rsp), priority=100,
>   match=(inport == "sw0-p2" && ((arp.op == 1 && arp.spa == 10.0.0.10 && arp.tpa == 10.0.0.10) ||
>          (arp.op == 2 && arp.spa == 10.0.0.10))),
>   action=(bind_vport("sw0-vip", inport); next;)
>
> The action bind_vport will claim the logical port sw0-vip on the chassis
> where this action is executed. Since the port sw0-vip is claimed by a
> chassis, the dnat_and_snat rule for the VIP will be handled by that
> compute node.
>
> Signed-off-by: Numan Siddique <[email protected]>
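[The match in the flows above can be read as: an ARP packet arriving from one of the virtual parents claims the vport if it is either a gratuitous ARP request for the VIP (arp.op == 1, spa == tpa == VIP) or an ARP reply advertising the VIP (arp.op == 2, spa == VIP). A small plain-shell sketch of that condition, purely illustrative (the VIP and port names come from the example above; this is not OVN code):]

```shell
#!/bin/sh
# Simulates the ls_in_arp_rsp match from the example above: returns 0
# (true) when an ARP packet from 'inport' would trigger
# bind_vport("sw0-vip", inport). Illustration only, not OVN code.
VIP="10.0.0.10"
VIRTUAL_PARENTS=" sw0-p1 sw0-p2 "

arp_claims_vport() {
    inport=$1 op=$2 spa=$3 tpa=$4
    # inport must be one of the virtual parents.
    case "$VIRTUAL_PARENTS" in
        *" $inport "*) ;;
        *) return 1 ;;
    esac
    # arp.op == 1 && arp.spa == VIP && arp.tpa == VIP  (GARP request)
    [ "$op" = 1 ] && [ "$spa" = "$VIP" ] && [ "$tpa" = "$VIP" ] && return 0
    # arp.op == 2 && arp.spa == VIP                    (ARP reply)
    [ "$op" = 2 ] && [ "$spa" = "$VIP" ] && return 0
    return 1
}

# A GARP request from the active service VM claims the vport:
arp_claims_vport sw0-p1 1 10.0.0.10 10.0.0.10 && echo "sw0-p1 claims sw0-vip"
```

[This also shows why the VIP can move between the service nodes: whichever parent port last sends a matching ARP gets the vport bound to its chassis.]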
Hi Numan, this looks interesting. I haven't reviewed the code yet, but here
are some questions to better understand the feature.

Firstly, can Octavia be implemented using the distributed LB feature of
OVN, instead of using a dedicated node? What's the major gap in using the
OVN LB?

Secondly, how is associating the floating IP with the VIP configured
currently?

Thirdly, can a static route be used to route the VIP to the VM, instead of
creating a lport for the VIP? I.e., create a route in the logical router:
destination - VIP, next hop - service VM IP.

Thanks,
Han

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
