On Sat, May 9, 2020 at 6:44 PM Tim Rozet <[email protected]> wrote:

> So we can get rid of the join logical switch. This might be a dumb
> question, but why do we need an external switch? In the local gateway mode:
>
> pod------logical switch----DR---join switch (to remove) --- GR
> 169.x.x.2---external switch---169.x.x.1 Linux host
>
> There's no reason in the above to have an external switch that I can see.
>

If there is no external switch, i.e. a logical switch with a localnet
port, then how will the packet go out from br-int? The packet is
supposed to go out from br-int via the patch port connecting to the
provider bridge.

Thanks
Numan
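
For context, here is a minimal sketch of the external switch being described
above. All names ("ext", "physnet", "br-ex") are illustrative, not taken from
ovn-kubernetes:

```shell
# Hedged sketch only; "ext", "physnet" and "br-ex" are example names.
# The localnet port is what causes ovn-controller to create the
# br-int <-> provider-bridge patch port the packet leaves through.
ovn-nbctl ls-add ext
ovn-nbctl lsp-add ext ext-localnet
ovn-nbctl lsp-set-type ext-localnet localnet
ovn-nbctl lsp-set-addresses ext-localnet unknown
ovn-nbctl lsp-set-options ext-localnet network_name=physnet

# On each chassis, map the physical network name to the provider bridge:
ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet:br-ex
```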


>
> Perhaps in the shared gateway mode it is necessary if all of the nodes
> externally attach to the same L2 network.
>
> Tim Rozet
> Red Hat CTO Networking Team
>
>
> On Fri, May 8, 2020 at 4:13 PM Lorenzo Bianconi <
> [email protected]> wrote:
>
>> > On Wed, May 6, 2020 at 11:41 PM Han Zhou <[email protected]> wrote:
>> >
>> > >
>> > >
>> > > On Wed, May 6, 2020 at 12:49 AM Numan Siddique <[email protected]>
>> wrote:
>> > > >
>>
>> [...]
>>
>> > > > I forgot to mention, Lorenzo has similar ideas for moving the
>> > > > arp-resolve lflows for NAT entries to mac_binding rows.
>> > > >
>> > >
>> > > I am hesitant about the approach of moving to mac_binding as a
>> > > solution to this particular problem, because:
>> > > 1. Although the cost of each mac_binding entry may be much lower
>> > > than that of a logical flow entry, it would still be O(n^2), since
>> > > the LRP is part of the key in the table.
>> > >
>> >
>> > Agreed. I realize that now.
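
To make the O(n^2) point concrete, here is a rough back-of-the-envelope
sketch (my own illustration, not code from OVN or ovn-kubernetes): with one
shared join switch, each gateway-router port must resolve every other port,
while per-node join switches keep the count linear.

```python
# Illustrative arithmetic only, not OVN code. With a shared join
# switch, every gateway-router LRP needs a static arp-resolve entry
# (lflow or mac_binding row, keyed on <LRP, IP>) for each of the
# other n-1 router ports, hence O(n^2) total.

def shared_join_entries(n_nodes: int) -> int:
    """Entries with one join switch shared by n gateway routers."""
    return n_nodes * (n_nodes - 1)

def split_join_entries(n_nodes: int) -> int:
    """Entries with one small join switch per node: each of the n
    switches connects one GR to the distributed router, so each
    side resolves exactly one peer."""
    return 2 * n_nodes

for n in (10, 100, 1000):
    print(n, shared_join_entries(n), split_join_entries(n))
# At n=1000: 999000 shared-switch entries vs 2000 split-switch entries.
```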
>>
>> Hi Han and Numan,
>>
>> What about moving to the mac_binding table just the entries related to
>> NAT where the external MAC address is configured, since this info is
>> known in advance? I can share a PoC I developed a few weeks ago.
>>
>> Regards,
>> Lorenzo
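
For reference, the case Lorenzo describes is the dnat_and_snat variant where
the external MAC is supplied explicitly. The names below are made up for
illustration:

```shell
# Example only; router/port names are invented. Because the external
# MAC is configured up front, ovn-northd already knows the (IP, MAC)
# pair and could pre-populate a MAC_Binding row for it instead of
# emitting a per-NAT arp-resolve logical flow.
ovn-nbctl lr-nat-add lr0 dnat_and_snat 172.16.1.10 10.0.0.3 \
    lp-vm1 00:00:00:00:01:03
```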
>>
>> >
>> > Thanks
>> > Numan
>> >
>> >
>> > > 2. It is better to clearly separate the static and dynamic parts.
>> > > Moving to mac_binding would lose this clarity in the data, as well
>> > > as the ownership of the data (today mac_binding entries are added
>> > > only by ovn-controllers). Although I am not in favor of solving the
>> > > problem with this approach (because of 1)), maybe it makes sense as
>> > > a general improvement to reduce the number of logical flows by
>> > > moving all neighbour information to mac_binding for scalability. If
>> > > we do so, I would suggest figuring out a way to keep the clarity
>> > > between the static and dynamic parts of the data.
>> > >
>> > > For this particular problem, we just don't want the static part
>> > > populated, because most of the entries are not needed except one
>> > > per LRP. However, even before considering optionally disabling the
>> > > static part, I first wanted to understand why separating the join
>> > > LS would not solve the problem.
>> > >
>> > > >>
>> > > >>
>> > > >> Thanks
>> > > >> Numan
>> > > >>
>> > > >>>
>> > > >>> > 2. In most places in ovn-kubernetes, our MAC addresses are
>> > > >>> > programmatically related to the corresponding IP addresses, and
>> in
>> > > >>> > places where that's not currently true, we could try to make it
>> true,
>> > > >>> > and then perhaps the thousands of rules could just be replaced
>> by a
>> > > >>> > single rule?
>> > > >>> >
>> > > >>> This may be a good idea, but I am not sure how to implement it
>> > > >>> in OVN generically, since most OVN users can't make such an
>> > > >>> assumption.
>> > > >>>
>> > > >>> On the other hand, why wouldn't splitting the join logical
>> > > >>> switch into 1000 LSes solve the problem? I understand that
>> > > >>> there will be 1000 more datapaths and 1000 more LRPs, but these
>> > > >>> are all O(n), which is much more efficient than the O(n^2)
>> > > >>> explosion. What are the other scale issues created by this?
>> > > >>>
>> > > >>> In addition, Girish, for the external LS, I am not sure why it
>> > > >>> can't be shared, if all the nodes are connected to a single L2
>> > > >>> network. (If they are connected to separate L2 networks,
>> > > >>> different external LSes should be created, at least according
>> > > >>> to the current OVN model.)
>> > > >>>
>> > > >>> Thanks,
>> > > >>> Han
>> > > >>> _______________________________________________
>> > > >>> discuss mailing list
>> > > >>> [email protected]
>> > > >>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "ovn-kubernetes" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/ovn-kubernetes/20200508201301.GD47205%40localhost.localdomain
>> .
>>