On Thu, Jul 4, 2024 at 9:44 AM Frode Nordahl <fnord...@ubuntu.com> wrote:
>
> On Mon, Jul 1, 2024 at 1:58 AM Gurpreet Singh <gurps...@redhat.com> wrote:
> > > On Jun 28, 2024, at 11:56 AM, Ilya Maximets <i.maxim...@ovn.org> wrote:
> > >
> > > On 6/28/24 17:38, Dumitru Ceara wrote:
> > >> On 6/28/24 15:05, Ilya Maximets wrote:
> > >>> On 6/28/24 11:03, Ales Musil wrote:
> > >>>> Hi Frode,
> > >>>>
> > >>>> looking forward to the RFC. AFAIU it means that the routes would
> > >>>> be exposed on the LR, more specifically the GW router. Would it
> > >>>> make sense to allow this behavior for provider networks (LS with
> > >>>> localnet port)? In that case we could advertise chassis-local
> > >>>> information from logical routers attached to BGP-enabled
> > >>>> switches, e.g. FIPs and LBs. It would cover the use case for
> > >>>> distributed routers. To achieve that we should have BGP peers for
> > >>>> each chassis that the LS is local on.
> > >>>
> > >>> I haven't read the whole thing yet, but can we, please, stop
> > >>> adding routing features to switches? :)  If someone wants routing,
> > >>> they should use a router, IMO.
> > >>>
> > >>
> > >> I'm fairly certain that there are precedents in "classic" network
> > >> appliances: switches that can do a certain amount of routing (even
> > >> run BGP).
> > >>
> > >> In this case we could add a logical router, but I'm not sure that
> > >> simplifies things.
> > >>
> > >
> > > "classic" network appliances are a subject for restrictions of a physical
> > > material world.  It's just way easier and cheaper to acquire and install
> > > a single physical box instead of N.  This is not a problem for a virtual
> > > network.  AP+router+switch+modem combo boxes are also "classic" last mile
> > > network appliances that we just call a "router".  It doesn't mean we 
> > > should
> > > implement one.
> > >
> > > The distinction between OVN logical routers and switches is there for a
> > > reason.  That is so you can look at the logical topology and understand
> > > how it works more or less without diving into configuration of every 
> > > single
> > > part of it.  If switches do routing and routers do switching, what's the
> > > point of differentiation?  It only brings more confusion.
>
> I tend to agree with Ilya here; clarity for the operator about what
> the system is actually doing becomes even more important when we are
> integrating with external systems (the ToRs). The operator would
> expect to be able to map configuration and status observed on one
> system to configuration and status observed on another.
>
> Another issue is that I see no way to magically map a single
> localnet port into the multiple chassis-resident LRPs that would be
> required for configurations with multiple NICs that do not use bonds.
>
> Presumably the goal with your proposal is to find a low-touch way to
> make existing CMSs with overlay-based OVN configuration, such as
> OpenStack, work in the new topology.
>
> We're also interested in minimizing the development effort on the CMS
> side, so the tardy response to this thread is due to us spending a few
> days exploring options.
>
>
> Below I'll describe one approach that, from observation, mostly works:
>
> Starting from a classic OpenStack OVN configuration:
> * Single provider LS
> * Per project LSs with distributed LRs that have gateway chassis set
> and NAT rules configured
> * Instances scattered across three nodes
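>
> For concreteness, the gateway chassis and NAT setup on the project
> LRs is the usual one, along these lines (a sketch only; all names and
> addresses below are made up for illustration):
>
>   # Pin the project LR's external port to a gateway chassis and add
>   # a FIP (dnat_and_snat) for an instance:
>   ovn-nbctl lrp-set-gateway-chassis lrp-project-ext chassis-1 10
>   ovn-nbctl lr-nat-add lr-project dnat_and_snat 172.16.0.100 10.0.0.5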
>
> We did the following steps to morph it into a configuration with per
> node gateway routers and local entry/exit of traffic to/from
> instances:
> * Apply Numan's overlay provider network patch [0] and set
> other_config:overlay_provider_network=true on provider LS
> * Remove the provider network localnet port
> * Create per chassis/NIC LS with localnet ports
> * Create per chassis GWR and attach it to NIC LS as well as provider network
>
> We have this handy script to do most of it [1].
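>
> For reference, the per chassis part of it boils down to ovn-nbctl
> calls along these lines (a sketch only; chassis names, bridge
> mappings and addresses are made up for illustration):
>
>   # Mark the provider LS as overlay (option from Numan's patch [0]):
>   ovn-nbctl set Logical_Switch ls-provider \
>       other_config:overlay_provider_network=true
>
>   # Per chassis LS with a localnet port towards the local NIC bridge:
>   ovn-nbctl ls-add ls-chassis-1
>   ovn-nbctl lsp-add ls-chassis-1 ln-chassis-1
>   ovn-nbctl lsp-set-type ln-chassis-1 localnet
>   ovn-nbctl lsp-set-addresses ln-chassis-1 unknown
>   ovn-nbctl lsp-set-options ln-chassis-1 network_name=physnet1
>
>   # Per chassis gateway router, pinned to its chassis:
>   ovn-nbctl lr-add gwr-chassis-1
>   ovn-nbctl set Logical_Router gwr-chassis-1 options:chassis=chassis-1
>
>   # Attach the GWR to the per chassis LS (and similarly to the
>   # provider LS):
>   ovn-nbctl lrp-add gwr-chassis-1 lrp-gwr-chassis-1-ext \
>       00:00:02:01:00:01 192.0.2.11/24
>   ovn-nbctl lsp-add ls-chassis-1 lsp-chassis-1-gwr
>   ovn-nbctl lsp-set-type lsp-chassis-1-gwr router
>   ovn-nbctl lsp-set-addresses lsp-chassis-1-gwr router
>   ovn-nbctl lsp-set-options lsp-chassis-1-gwr \
>       router-port=lrp-gwr-chassis-1-ext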

One thing I forgot to mention is that, for simplicity, the script uses
a lot of IPv4 addresses. In a final solution I would propose that we
instead use IPv6 LLAs for routing between the internal OVN L3
constructs, to avoid this.
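
A sketch of what I mean (addresses made up; LRP names follow the
earlier example): AFAIK OVN already derives an LLA for each LRP from
its MAC, and a static route can point at an LLA next hop as long as
the output port is given explicitly:

  # Default route on the per chassis GWR via the peer LRP's LLA:
  ovn-nbctl lr-route-add gwr-chassis-1 ::/0 fe80::200:2ff:fe01:1 \
      lrp-gwr-chassis-1-ext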

--
Frode Nordahl

> With that, an outside observer can send traffic via the external GWR
> IP, destined to an instance FIP local to that chassis, and the
> traffic will enter/exit locally.
>
> The part that does not work in this setup is correct selection of
> the return path: the project LR has a single global default route,
> so the setup only works for a single chassis at a time.
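>
> In ovn-nbctl terms the limitation looks like this (sketch, names and
> addresses made up): the project LR can carry only one global default
> route, e.g.:
>
>   ovn-nbctl lr-route-add lr-project 0.0.0.0/0 192.0.2.11
>
> which pins all return traffic to the GWR on chassis-1, regardless of
> which chassis the instance actually resides on.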
>
>
> Perhaps we could solve this with a per chassis routing table and/or
> an option for automatic addition of a default route to the peer GWR,
> or something similar?
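>
> OVN's existing route tables might be a building block here; a sketch
> of how that could look (hypothetical per chassis LRP names, made up
> addresses):
>
>   # Bind a project LR port to a per chassis route table and add a
>   # default route in that table:
>   ovn-nbctl set Logical_Router_Port lrp-project-chassis-1 \
>       options:route_table=chassis-1
>   ovn-nbctl --route-table=chassis-1 lr-route-add lr-project \
>       0.0.0.0/0 192.0.2.11
>
> The open question is how to select the table based on the chassis
> the traffic originates from.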
>
>
> 0: 
> https://patchwork.ozlabs.org/project/ovn/patch/20240606214432.168750-1-num...@ovn.org/
> 1: https://pastebin.ubuntu.com/p/RFGpsDggpp/
>
> > If there is no or minimal overhead in adding an LR, or in
> > restricting the routing to an LR, then perhaps keeping that logical
> > separation makes sense. I think any design we evaluate has to focus
> > very tightly on performance, as any non-trivial forwarding
> > performance impact will affect the adoption of the solution.
>
> Definitely. I think the performance impact of having actual LRs and
> LRPs visible in the configuration where the user expects them to be,
> as opposed to having the system magically generate them internally,
> would be negligible.
>
> --
> Frode Nordahl
>
> > > Best regards, Ilya Maximets.
> > >
> >