Re: [ovs-discuss] BGP EVPN support

2021-03-25 Thread Daniel Alvarez Sanchez
Thanks Krzysztof, all

Let me see if I understand the 'native' proposal. Please amend as necessary
:)

On Tue, Mar 16, 2021 at 9:28 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:

>
>
> On Tue, Mar 16, 2021, at 19:15, Mark Gray wrote:
> > On 16/03/2021 15:41, Krzysztof Klimonda wrote:
> > > Yes, that seems to be prerequisite (or one of prerequisites) for
> keeping current DPDK / offload capabilities, as far as I understand. By
> Proxy ARP/NDP I think you mean responding to ARP and NDP on behalf of the
> system where FRR is running?
> > >
> > > As for whether to go ovn-kubernetes way and try to implement it with
> existing primitives, or add BGP support directly into OVN, I feel like this
> should be a core feature of OVN itself and not something that could be
> built on top of it by a careful placement of logical switches, routers and
> ports. This would also help with management (you would configure new BGP
> connection by modifying northbound DB) and simplify troubleshooting in case
> something is not working as expected.
> > >
> >
> > There would be quite a lot of effort to implement BGP support directly
> > into OVN as per all the relevant BGP RFCs .. and the effort to maintain.
> > Another option might be to make FRR Openflow-aware and enabling it to
> > program Openflow flows directly into an OVN bridge much like it does
> > into the kernel today. FRR does provide some flexibility to extend like
> > that through the use of something like FPM
> > (http://docs.frrouting.org/projects/dev-guide/en/latest/fpm.html)
>
> Indeed, when I wrote "adding BGP support directly to OVN" I didn't really
> mean implementing BGP protocol directly in OVN, but rather implementing
> integration with FRR directly in OVN, and not by reusing existing
> resources. Making ovn-controller into fully fledged BGP peer seems.. like a
> nice expansion of the initial idea, assuming that the protocol could be
> offloaded to some library, but it's probably not a hard requirement for the
> initial implementation, as long as OVS can be programmed to deliver BGP
> traffic to FRR.
>
> When you write that FRR would program flows on OVS bridge, do you have
> something specific in mind? I thought the discussion so far was mostly one
> way BGP announcement with FRR "simply" announcing specific prefixes from
> the chassis nodes. Do you have something more in mind, like programming
> routes received from BGP router into OVN?
>

That's also what I had in mind, i.e. "announcing specific prefixes from
the chassis nodes". I'd leave the route-importing part for a later stage
if that's really a requirement.

For the announcing part, let's say we try to take the kernel out of the
picture as much as we can, based on the discussion in this thread. Then
we're left with:

- Route announcement:
  - Configure some loopback IP address in OVN per hypervisor which is going
to be the next hop of all routes announced from that chassis
  - Configure OVN to tell it which prefixes to announce - a CMS responsibility
plus some knob added to the OVN NB, as Mark suggests
  - Have OVN talk to FRR somehow (gRPC?) to advertise the loopback IP as
directly connected and the rest of the IPs in the chassis via that loopback IP
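
A rough sketch of what that last step could hand to FRR, assuming a
vtysh-style command stream rather than gRPC (nothing is settled in the
thread); the ASN, addresses and exact FRR statements below are all made up
for illustration:

```python
def frr_announce_cmds(loopback_ip, prefixes, asn=64999):
    """Build vtysh-style commands that advertise `loopback_ip` as a
    directly connected /32 and every chassis-local prefix behind it.
    Purely illustrative: the real integration might use FRR's gRPC
    northbound instead, and `network` statements assume the prefixes
    are already in the RIB."""
    cmds = [
        "configure terminal",
        # Put the loopback on the lo interface so it shows up as connected.
        "interface lo",
        f" ip address {loopback_ip}/32",
        "exit",
        f"router bgp {asn}",
        " address-family ipv4 unicast",
        "  redistribute connected",
    ]
    for prefix in prefixes:
        cmds.append(f"  network {prefix}")
    cmds.append(" exit-address-family")
    return cmds

cmds = frr_announce_cmds("192.0.2.10", ["10.0.0.0/24", "10.0.1.5/32"])
```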

- Extra resources/flows:

Similar to [0], we'd need:
  - 1 localnet LS per node that will receive the traffic directed to the
OVN loopback address
  - 1 gateway LR per node responsible for routing the traffic within the
node
  - 1 transit LS per node that connects the previous gateway to the
infrastructure router
  - 1 infrastructure LR to which all the LSs that need to expose routes will
attach (consuming one IP address from each exposed subnet)
WARNING: scale!!
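
For a sense of scale, the per-node pieces above map onto plain ovn-nbctl
calls; this sketch only illustrates the shape and number of objects per
node (all names are made up, and ports wiring the transit LS to the infra
LR are omitted):

```python
def per_node_topology_cmds(node):
    """ovn-nbctl calls that would create the per-node resources listed
    above: a localnet LS, a gateway LR pinned to the chassis, and a
    transit LS toward the shared infra LR. Hypothetical naming scheme,
    not a finished design."""
    ls_ln = f"ls-{node}-localnet"
    lr_gw = f"lr-{node}-gw"
    ls_tr = f"ls-{node}-transit"
    return [
        # localnet LS that receives traffic for the chassis loopback
        f"ovn-nbctl ls-add {ls_ln}",
        f"ovn-nbctl lsp-add {ls_ln} ln-{node}",
        f"ovn-nbctl lsp-set-type ln-{node} localnet",
        # gateway LR bound to this chassis
        f"ovn-nbctl lr-add {lr_gw}",
        f"ovn-nbctl set logical_router {lr_gw} options:chassis={node}",
        # transit LS between the gateway LR and the shared infra LR
        f"ovn-nbctl ls-add {ls_tr}",
    ]

cmds = per_node_topology_cmds("node1")
```

Multiplying six-plus objects by the number of hypervisors is exactly where
the scale warning above comes from.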

My question earlier in this thread was who's responsible for creating all
these resources. If we don't want to put the burden of this on the CMS, are
you proposing that OVN do it 'under the hood'? What about the IP addresses
that we'll be consuming from the tenants? Maybe if it's done under the hood,
that's not required and we can do it in OpenFlow some other way. Is this
what you mean?

IMO, it is very complicated, but maybe it brings a lot of value to OVN. The
benefit of the PoC approach is that we can use BGP (almost) today without
any changes to OVN or Neutron, but I am certainly open to discussing the
'native' way more in depth!

[0]
https://raw.githubusercontent.com/ovn-org/ovn-kubernetes/master/docs/design/current_ovn_topology.svg

- Physical interfaces:
The PoC was under the assumption that all the compute nodes will have a
default ECMP route in the form of: 0.0.0.0 via nic1 via nic2
If we want to match this, we probably need to add OpenFlow rules to the
provider bridge and add both NICs to it.
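
One way to approximate that ECMP behaviour in OpenFlow would be a `select`
group hashing flows across the uplinks; a sketch only, assuming OpenFlow
1.3 and made-up bridge/port names, and ignoring return traffic and failover:

```python
def ecmp_group_cmds(bridge, group_id, ports):
    """ovs-ofctl calls approximating the kernel's ECMP default route on
    the provider bridge: a `select` group picks one uplink per flow.
    Sketch only; a real design also needs return-traffic and link-failure
    handling that is not shown here."""
    buckets = ",".join(f"bucket=output:{p}" for p in ports)
    return [
        f"ovs-ofctl -O OpenFlow13 add-group {bridge} "
        f"group_id={group_id},type=select,{buckets}",
        # Send routed traffic to the group instead of a single uplink.
        f"ovs-ofctl -O OpenFlow13 add-flow {bridge} "
        f"priority=10,ip,actions=group:{group_id}",
    ]

cmds = ecmp_group_cmds("br-provider", 1, ["nic1", "nic2"])
```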
If the NICs are also used for the control plane, management, storage or
whatever, we are creating a dependency on OVS for all of that. I believe
it is fine to lose the data plane if OVS goes down, but not everything else.
Somebody may have a suggestion here though.
If not, we still need the kernel to do some steering 

Re: [ovs-discuss] BGP EVPN support

2021-03-18 Thread Mark Gray
On 16/03/2021 20:28, Krzysztof Klimonda wrote:
> Do you have something more in mind, like programming routes received from BGP 
> router into OVN?

Yes, this is what I was thinking of. We are looking at how to do this
for ovn-kubernetes. For shared gateway mode, we may not be able to use
the kernel routes with FRR so we may need FRR to update an ovs bridge
directly.

On the announcement side, that is another problem we are looking at, and
we are considering using the FRR gRPC interface rather than `vtysh` or
updating the config file and reloading. Maybe we could add a field in the
OVN NB (announce=true/false) to indicate that a prefix/IP should be
announced. It might also ease integration with a third-party BGP stack?
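
If such a knob existed, the CMS-facing side could be as small as tagging
the NB row; the `external_ids:announce` key below is hypothetical (the
thread only floats the idea), as is putting it on a Logical_Switch_Port:

```python
def announce_cmd(port, announce=True):
    """Build the ovn-nbctl call that would mark a logical switch port
    for BGP announcement. The external_ids:announce key is a made-up
    stand-in for whatever NB field were actually added."""
    flag = "true" if announce else "false"
    return (f"ovn-nbctl set Logical_Switch_Port {port} "
            f"external_ids:announce={flag}")
```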

Ideally, we would like something generic that could be reused by other
projects but I'm not sure if that will be possible. I'll probably have a
better idea of what we will do in a couple of weeks as we look at it in
more detail and can update you if you are interested.

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Krzysztof Klimonda



On Tue, Mar 16, 2021, at 19:15, Mark Gray wrote:
> On 16/03/2021 15:41, Krzysztof Klimonda wrote:
> > Yes, that seems to be prerequisite (or one of prerequisites) for keeping 
> > current DPDK / offload capabilities, as far as I understand. By Proxy 
> > ARP/NDP I think you mean responding to ARP and NDP on behalf of the system 
> > where FRR is running?  
> > 
> > As for whether to go ovn-kubernetes way and try to implement it with 
> > existing primitives, or add BGP support directly into OVN, I feel like this 
> > should be a core feature of OVN itself and not something that could be 
> > built on top of it by a careful placement of logical switches, routers and 
> > ports. This would also help with management (you would configure new BGP 
> > connection by modifying northbound DB) and simplify troubleshooting in case 
> > something is not working as expected.
> > 
> 
> There would be quite a lot of effort to implement BGP support directly
> into OVN as per all the relevant BGP RFCs .. and the effort to maintain.
> Another option might be to make FRR Openflow-aware and enabling it to
> program Openflow flows directly into an OVN bridge much like it does
> into the kernel today. FRR does provide some flexibility to extend like
> that through the use of something like FPM
> (http://docs.frrouting.org/projects/dev-guide/en/latest/fpm.html)

Indeed, when I wrote "adding BGP support directly to OVN" I didn't really mean 
implementing the BGP protocol directly in OVN, but rather implementing the 
integration with FRR directly in OVN, and not by reusing existing resources. 
Making ovn-controller into a fully fledged BGP peer seems like a nice expansion 
of the initial idea, assuming that the protocol could be offloaded to some 
library, but it's probably not a hard requirement for the initial 
implementation, as long as OVS can be programmed to deliver BGP traffic to FRR.

When you write that FRR would program flows on the OVS bridge, do you have 
something specific in mind? I thought the discussion so far was mostly about 
one-way BGP announcement, with FRR "simply" announcing specific prefixes from 
the chassis nodes. Do you have something more in mind, like programming routes 
received from the BGP router into OVN?

-- 
  Krzysztof Klimonda
  kklimo...@syntaxhighlighted.com


Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Mark Gray
On 16/03/2021 15:41, Krzysztof Klimonda wrote:
> Yes, that seems to be prerequisite (or one of prerequisites) for keeping 
> current DPDK / offload capabilities, as far as I understand. By Proxy ARP/NDP 
> I think you mean responding to ARP and NDP on behalf of the system where FRR 
> is running?  
> 
> As for whether to go ovn-kubernetes way and try to implement it with existing 
> primitives, or add BGP support directly into OVN, I feel like this should be 
> a core feature of OVN itself and not something that could be built on top of 
> it by a careful placement of logical switches, routers and ports. This would 
> also help with management (you would configure new BGP connection by 
> modifying northbound DB) and simplify troubleshooting in case something is 
> not working as expected.
> 

There would be quite a lot of effort to implement BGP support directly
into OVN as per all the relevant BGP RFCs .. and then the effort to maintain
it. Another option might be to make FRR OpenFlow-aware, enabling it to
program OpenFlow flows directly into an OVN bridge much like it does
into the kernel today. FRR does provide some flexibility to extend it like
that through the use of something like FPM
(http://docs.frrouting.org/projects/dev-guide/en/latest/fpm.html)
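
For readers unfamiliar with FPM: zebra streams route updates over TCP (by
default to 127.0.0.1:2620) as framed messages, and an OpenFlow-programming
consumer would start by parsing the 4-byte header of each frame. A sketch
based on my reading of the FPM docs; the decoding of the netlink payload
itself is not shown:

```python
import struct

# FPM header: version (1 byte), msg type (1 byte), total length (2 bytes,
# network byte order, including the 4-byte header itself).
FPM_HEADER = struct.Struct("!BBH")

def parse_fpm_header(buf):
    """Parse the FPM framing header. msg_type 1 = netlink-encoded route,
    2 = protobuf. A consumer turning these into OpenFlow mods would then
    read `length - 4` more bytes and decode the payload."""
    version, msg_type, length = FPM_HEADER.unpack(buf[:FPM_HEADER.size])
    return version, msg_type, length

# A version-1 netlink frame claiming 20 bytes total:
hdr = struct.pack("!BBH", 1, 1, 20)
```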



Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Krzysztof Klimonda
Hi Daniel,

On Tue, Mar 16, 2021, at 15:19, Daniel Alvarez Sanchez wrote:
> 
> 
> On Tue, Mar 16, 2021 at 2:45 PM Luis Tomas Bolivar  
> wrote:
> > Of course we are fully open to redesign it if there is a better approach! 
> > And that was indeed the intention when linking to the current efforts, 
> > figure out if that was a "valid" way of doing it, and how it can be 
> > improved/redesigned. The main idea behind the current design was not to 
> > need modifications to core OVN as well as to minimize the complexity, i.e., 
> > not having to implement another kind of controller for managing the extra 
> > OF flows.
> > 
> > Regarding the metadata/localport, I have a couple of questions, mainly due 
> > to me not knowing enough about ovn/localport:
> > 1) Isn't the metadata managed through a namespace? And the end of the day 
> > that is also visible from the hypervisor, as well as the OVS bridges
> > 2) Another difference is that we are using BGP ECMP and therefore not 
> > associating any nic/bond to br-ex, and that is why we require some 
> > rules/routes to redirect the traffic to br-ex.
> > 
> > Thanks for your input! Really appreciated!
> > 
> > Cheers,
> > Luis
> > 
> > On Tue, Mar 16, 2021 at 2:22 PM Krzysztof Klimonda 
> >  wrote:
> >> __
> >> Would it make more sense to reverse this part of the design? I was 
> >> thinking of having each chassis its own IPv4/IPv6 address used for 
> >> next-hop in announcements and OF flows installed to direct BGP control 
> >> packets over to the host system, in a similar way how localport is used 
> >> today for neutron's metadata service (although I'll admit that I haven't 
> >> looked into how this integrates with dpdk and offload).
> 
> Hi Krzysztof, not sure I follow your suggestion but let me see if I do. 
> With this PoC, the kernel will do:
> 
> 1) Routing to/from physical interface to OVN
> 2) Proxy ARP
> 3) Proxy NDP
> 
> Also FRR will advertise directly connected routes based on the IPs 
> configured on dummy interfaces.
> All this comes with the benefit that no changes are required in the CMS 
> or OVN itself.
> 
> If I understand your proposal well, you would like to do 1), 2) and 3) 
> in OpenFlow so an agent running on all compute nodes is going to be 
> responsible for this? Or you propose adding extra OVN resources in a 
> similar way to what ovn-kubernetes does today [0] and in this case:

Yes, that seems to be a prerequisite (or one of the prerequisites) for keeping 
the current DPDK / offload capabilities, as far as I understand. By Proxy 
ARP/NDP I think you mean responding to ARP and NDP on behalf of the system 
where FRR is running?
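
Answering ARP entirely in OpenFlow (so the kernel's proxy ARP stays out of
the path) is a known trick; below is a hedged sketch of the kind of flow
that could do it, modeled on the ARP responder Neutron's OVS agent installs.
Table and priority are arbitrary, and the NDP equivalent is more involved
and not shown:

```python
import ipaddress

def arp_responder_flow(ip, mac):
    """ovs-ofctl add-flow rule answering ARP requests for `ip` with
    `mac`: swap src/dst MACs, turn the request into a reply (arp_op=2),
    rewrite the ARP fields, and bounce the packet out the ingress port."""
    mac_hex = "0x" + mac.replace(":", "")
    ip_hex = hex(int(ipaddress.ip_address(ip)))
    return (
        f"priority=100,arp,arp_op=1,arp_tpa={ip} "
        "actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],"
        f"mod_dl_src:{mac},"
        "load:0x2->NXM_OF_ARP_OP[],"
        "move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],"
        f"load:{mac_hex}->NXM_NX_ARP_SHA[],"
        "move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],"
        f"load:{ip_hex}->NXM_OF_ARP_SPA[],"
        "in_port"
    )

flow = arp_responder_flow("192.0.2.10", "0a:00:00:00:00:01")
```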

As for whether to go the ovn-kubernetes way and try to implement it with 
existing primitives, or add BGP support directly into OVN, I feel like this 
should be a core feature of OVN itself and not something that could be built 
on top of it by a careful placement of logical switches, routers and ports. 
This would also help with management (you would configure a new BGP connection 
by modifying the northbound DB) and simplify troubleshooting in case something 
is not working as expected.

> 
> - Create an OVN Gateway router and connect it to the provider Logical 
> Switch
> - Advertise host routes through the Gateway Router IP address for each 
> node. This would consume one IP address per provider network per node

That seems excessive - why would we need one IP address per provider network 
per node? Shouldn't a single IP per node be enough, even if we go with your 
proposal of reusing existing OVN resources? If we do that, separate "service 
subnets" could be used per "external network" to provide connectivity between 
the BGP router and the OVN chassis (so that the next hop can be configured 
correctly). Burning IP addresses from all provider networks seems excessive, 
given that some of them are going to be public and those are getting pretty 
expensive at the moment.

> - Some external entity to configure ECMP routing to the ToRs

(we're still talking about implementing it via the neutron CMS, right?)
This is probably out of scope for OVN or neutron anyway? I'd assume the ToRs 
are configured before the compute node is deployed.

> - Who creates/manages the infra resources? Onboarding new hypervisors 
> requires IPAM and more

Right, that seems to be another reason to do that "natively" in OVN.

> - OpenStack provides flexibility to its users to customize their own 
> networking (more than ovn-kubernetes I believe). Mixing user created 
> network resources with infra resources in the same OVN cluster is non 
> trivial (eg. maintenance tasks, migration to OVN, ...)

I'm not sure I follow, but if you mean that in the second scenario (where BGP 
support is implemented using existing OVN resources by strategic placement of 
the LSs etc.) too much of the "infra" topology becomes visible to neutron (and 
possibly neutron users), then I wholeheartedly agree - I think this is not the 
way to implement that, and implementation should be done 

Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Daniel Alvarez Sanchez
On Tue, Mar 16, 2021 at 3:20 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:

>
> On Tue, Mar 16, 2021, at 14:45, Luis Tomas Bolivar wrote:
>
> Of course we are fully open to redesign it if there is a better approach!
> And that was indeed the intention when linking to the current efforts,
> figure out if that was a "valid" way of doing it, and how it can be
> improved/redesigned. The main idea behind the current design was not to
> need modifications to core OVN as well as to minimize the complexity, i.e.,
> not having to implement another kind of controller for managing the extra
> OF flows.
>
> Regarding the metadata/localport, I have a couple of questions, mainly due
> to me not knowing enough about ovn/localport:
> 1) Isn't the metadata managed through a namespace? And the end of the day
> that is also visible from the hypervisor, as well as the OVS bridges
>
>
> Indeed, that's true - you can reach tenant's network from ovnmeta-
> namespace (where metadata proxy lives), however from what I remember while
> testing you can only establish connection to VMs running on the same
> hypervisor. Granted, this is less about "hardening" per se - any potential
> takeover of the hypervisor is probably giving the attacker enough tools to
> own entire overlay network anyway. Perhaps it's just giving me a bad
> feeling, where what should be an isolated public facing network can be
> reached from hypervisor without going through expected network path.
>
> 2) Another difference is that we are using BGP ECMP and therefore not
> associating any nic/bond to br-ex, and that is why we require some
> rules/routes to redirect the traffic to br-ex.
>
>
> That's an interesting problem  - I wonder if that can even be done in OVS
> today (for example with multipath action) and how would ovs handle incoming
> traffic (what flows are needed to handle that properly). I guess someone
> with OVS internals knowledge would have to chime in on this one.
>

OVN supports ECMP since 20.03 [0], and some enhancements for rerouting
policies have been added recently [1], so yeah, it can be done in OVS as well
AFAIU.

[0] https://github.com/ovn-org/ovn/blob/master/NEWS#L113
[1] https://github.com/ovn-org/ovn/blob/master/NEWS#L12
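
Concretely, ECMP static routes in OVN are added one next hop at a time with
the `--ecmp` flag to `lr-route-add`; the router name and next-hop addresses
below are illustrative:

```python
def ecmp_route_cmds(router, prefix, nexthops):
    """ovn-nbctl calls adding one static route per next hop; --ecmp lets
    multiple routes share the same prefix so OVN load-balances across
    them (supported since OVN 20.03 per [0])."""
    return [
        f"ovn-nbctl --ecmp lr-route-add {router} {prefix} {nh}"
        for nh in nexthops
    ]

cmds = ecmp_route_cmds("lr0", "0.0.0.0/0", ["172.16.0.1", "172.16.0.2"])
```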

>
> Thanks for your input! Really appreciated!
>
> Cheers,
> Luis
>
> On Tue, Mar 16, 2021 at 2:22 PM Krzysztof Klimonda <
> kklimo...@syntaxhighlighted.com> wrote:
>
>
> Would it make more sense to reverse this part of the design? I was
> thinking of having each chassis its own IPv4/IPv6 address used for next-hop
> in announcements and OF flows installed to direct BGP control packets over
> to the host system, in a similar way how localport is used today for
> neutron's metadata service (although I'll admit that I haven't looked into
> how this integrates with dpdk and offload).
>
>
> This way we can also simplify host's networking configuration as extra
> routing rules and arp entries are no longer needed (I think it would be
> preferable, from security perspective, for hypervisor to not have a direct
> access to overlay networks which seems to be the case when you use rules
> like that).
>
> --
>   Krzysztof Klimonda
>   kklimo...@syntaxhighlighted.com
>
>
>
> On Tue, Mar 16, 2021, at 13:56, Luis Tomas Bolivar wrote:
>
> Hi Krzysztof,
>
> On Tue, Mar 16, 2021 at 12:54 PM Krzysztof Klimonda <
> kklimo...@syntaxhighlighted.com> wrote:
>
>
> Hi Luis,
>
> I haven't yet had time to give it a try in our lab, but from reading your
> blog posts I have a quick question. How does it work when either DPDK or
> NIC offload is used for OVN traffic? It seems you are (de-)encapsulating
> traffic on chassis nodes by routing them through kernel - is this current
> design or just an artifact of PoC code?
>
>
> You are correct, that is a limitation as we are using kernel routing for
> N/S traffic, so DPDK/NIC offloading could not be used. That said, the E/W
> traffic still uses the OVN overlay and Geneve tunnels.
>
>
>
>
> --
>   Krzysztof Klimonda
>   kklimo...@syntaxhighlighted.com
>
>
>
> On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
>
> Hi Sergey, all,
>
> In fact we are working on a solution based on FRR where a (python) agent
> reads from OVN SB DB (port binding events) and triggers FRR so that the
> needed routes gets advertised. It leverages kernel networking to redirect
> the traffic to the OVN overlay, and therefore does not require any
> modifications to ovn itself (at least for now). The PoC code can be found
> here: https://github.com/luis5tb/bgp-agent
>
> And there is a series of blog posts related to how to use it on OpenStack
> and how it works:
> - OVN-BGP agent introduction:
> https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
> - How to set it up on a DevStack Environment:
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
> - In-depth traffic flow inspection:
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/

Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Daniel Alvarez Sanchez
On Tue, Mar 16, 2021 at 2:45 PM Luis Tomas Bolivar 
wrote:

> Of course we are fully open to redesign it if there is a better approach!
> And that was indeed the intention when linking to the current efforts,
> figure out if that was a "valid" way of doing it, and how it can be
> improved/redesigned. The main idea behind the current design was not to
> need modifications to core OVN as well as to minimize the complexity, i.e.,
> not having to implement another kind of controller for managing the extra
> OF flows.
>
> Regarding the metadata/localport, I have a couple of questions, mainly due
> to me not knowing enough about ovn/localport:
> 1) Isn't the metadata managed through a namespace? And the end of the day
> that is also visible from the hypervisor, as well as the OVS bridges
> 2) Another difference is that we are using BGP ECMP and therefore not
> associating any nic/bond to br-ex, and that is why we require some
> rules/routes to redirect the traffic to br-ex.
>
> Thanks for your input! Really appreciated!
>
> Cheers,
> Luis
>
> On Tue, Mar 16, 2021 at 2:22 PM Krzysztof Klimonda <
> kklimo...@syntaxhighlighted.com> wrote:
>
>> Would it make more sense to reverse this part of the design? I was
>> thinking of having each chassis its own IPv4/IPv6 address used for next-hop
>> in announcements and OF flows installed to direct BGP control packets over
>> to the host system, in a similar way how localport is used today for
>> neutron's metadata service (although I'll admit that I haven't looked into
>> how this integrates with dpdk and offload).
>>
>
Hi Krzysztof, not sure I follow your suggestion but let me see if I do.
With this PoC, the kernel will do:

1) Routing to/from physical interface to OVN
2) Proxy ARP
3) Proxy NDP

Also FRR will advertise directly connected routes based on the IPs
configured on dummy interfaces.
All this comes with the benefit that no changes are required in the CMS or
OVN itself.
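
As I understand the PoC, the "dummy interface" step for each IP to expose
looks roughly like the following (interface name assumed; treat as a
sketch reconstructed from the blog posts, not the agent's exact commands):

```python
def expose_ip_cmds(ip, dev="bgp-nic"):
    """Shell steps putting an IP on a dummy interface so it appears as
    'directly connected', letting FRR's `redistribute connected` pick it
    up and announce it over BGP."""
    return [
        f"ip link add {dev} type dummy",   # fails harmlessly if it exists
        f"ip link set {dev} up",
        f"ip addr add {ip}/32 dev {dev}",  # FRR now sees a connected /32
    ]

cmds = expose_ip_cmds("10.0.0.5")
```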

If I understand your proposal well, you would like to do 1), 2) and 3) in
OpenFlow so an agent running on all compute nodes is going to be
responsible for this? Or you propose adding extra OVN resources in a
similar way to what ovn-kubernetes does today [0] and in this case:

- Create an OVN Gateway router and connect it to the provider Logical Switch
- Advertise host routes through the Gateway Router IP address for each
node. This would consume one IP address per provider network per node
- Some external entity to configure ECMP routing to the ToRs
- Who creates/manages the infra resources? Onboarding new hypervisors
requires IPAM and more
- OpenStack provides flexibility to its users to customize their own
networking (more than ovn-kubernetes I believe). Mixing user created
network resources with infra resources in the same OVN cluster is non
trivial (eg. maintenance tasks, migration to OVN, ...)
- Scaling issues due to the larger number of resources/flows?

[0]
https://raw.githubusercontent.com/ovn-org/ovn-kubernetes/master/docs/design/current_ovn_topology.svg

This way we can also simplify host's networking configuration as extra
>> routing rules and arp entries are no longer needed (I think it would be
>> preferable, from security perspective, for hypervisor to not have a direct
>> access to overlay networks which seems to be the case when you use rules
>> like that).
>>
>
I agree that it'd simplify the host networking but it would overcomplicate
the rest (unless I'm missing something, which is more than possible :)

Thanks a lot for the discussion,
Daniel


>
>> --
>>   Krzysztof Klimonda
>>   kklimo...@syntaxhighlighted.com
>>
>>
>>
>> On Tue, Mar 16, 2021, at 13:56, Luis Tomas Bolivar wrote:
>>
>> Hi Krzysztof,
>>
>> On Tue, Mar 16, 2021 at 12:54 PM Krzysztof Klimonda <
>> kklimo...@syntaxhighlighted.com> wrote:
>>
>>
>> Hi Luis,
>>
>> I haven't yet had time to give it a try in our lab, but from reading your
>> blog posts I have a quick question. How does it work when either DPDK or
>> NIC offload is used for OVN traffic? It seems you are (de-)encapsulating
>> traffic on chassis nodes by routing them through kernel - is this current
>> design or just an artifact of PoC code?
>>
>>
>> You are correct, that is a limitation as we are using kernel routing for
>> N/S traffic, so DPDK/NIC offloading could not be used. That said, the E/W
>> traffic still uses the OVN overlay and Geneve tunnels.
>>
>>
>>
>>
>> --
>>   Krzysztof Klimonda
>>   kklimo...@syntaxhighlighted.com
>>
>>
>>
>> On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
>>
>> Hi Sergey, all,
>>
>> In fact we are working on a solution based on FRR where a (python) agent
>> reads from OVN SB DB (port binding events) and triggers FRR so that the
>> needed routes gets advertised. It leverages kernel networking to redirect
>> the traffic to the OVN overlay, and therefore does not require any
>> modifications to ovn itself (at least for now). The PoC code can be found
>> here: https://github.com/luis5tb/bgp-agent
>>
>> And 

Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Krzysztof Klimonda

On Tue, Mar 16, 2021, at 14:45, Luis Tomas Bolivar wrote:
> Of course we are fully open to redesign it if there is a better approach! And 
> that was indeed the intention when linking to the current efforts, figure out 
> if that was a "valid" way of doing it, and how it can be improved/redesigned. 
> The main idea behind the current design was not to need modifications to core 
> OVN as well as to minimize the complexity, i.e., not having to implement 
> another kind of controller for managing the extra OF flows.
> 
> Regarding the metadata/localport, I have a couple of questions, mainly due to 
> me not knowing enough about ovn/localport:
> 1) Isn't the metadata managed through a namespace? And the end of the day 
> that is also visible from the hypervisor, as well as the OVS bridges

Indeed, that's true - you can reach the tenant's network from the ovnmeta- 
namespace (where the metadata proxy lives); however, from what I remember 
while testing, you can only establish a connection to VMs running on the same 
hypervisor. Granted, this is less about "hardening" per se - any potential 
takeover of the hypervisor probably gives the attacker enough tools to own the 
entire overlay network anyway. Perhaps it just gives me a bad feeling that 
what should be an isolated, public-facing network can be reached from the 
hypervisor without going through the expected network path.

> 2) Another difference is that we are using BGP ECMP and therefore not 
> associating any nic/bond to br-ex, and that is why we require some 
> rules/routes to redirect the traffic to br-ex.

That's an interesting problem - I wonder if that can even be done in OVS today 
(for example with the multipath action) and how OVS would handle incoming 
traffic (what flows are needed to handle that properly). I guess someone with 
OVS internals knowledge would have to chime in on this one.

> Thanks for your input! Really appreciated!
> 
> Cheers,
> Luis
> 
> On Tue, Mar 16, 2021 at 2:22 PM Krzysztof Klimonda 
>  wrote:
>> __
>> Would it make more sense to reverse this part of the design? I was thinking 
>> of having each chassis its own IPv4/IPv6 address used for next-hop in 
>> announcements and OF flows installed to direct BGP control packets over to 
>> the host system, in a similar way how localport is used today for neutron's 
>> metadata service (although I'll admit that I haven't looked into how this 
>> integrates with dpdk and offload).
>> 
>> This way we can also simplify host's networking configuration as extra 
>> routing rules and arp entries are no longer needed (I think it would be 
>> preferable, from security perspective, for hypervisor to not have a direct 
>> access to overlay networks which seems to be the case when you use rules 
>> like that).
>> 
>> --
>>   Krzysztof Klimonda
>>   kklimo...@syntaxhighlighted.com
>> 
>> 
>> 
>> On Tue, Mar 16, 2021, at 13:56, Luis Tomas Bolivar wrote:
>>> Hi Krzysztof,
>>> 
>>> On Tue, Mar 16, 2021 at 12:54 PM Krzysztof Klimonda 
>>>  wrote:
 __
 Hi Luis,
 
 I haven't yet had time to give it a try in our lab, but from reading your 
 blog posts I have a quick question. How does it work when either DPDK or 
 NIC offload is used for OVN traffic? It seems you are (de-)encapsulating 
 traffic on chassis nodes by routing them through kernel - is this current 
 design or just an artifact of PoC code?
>>> 
>>> You are correct, that is a limitation as we are using kernel routing for 
>>> N/S traffic, so DPDK/NIC offloading could not be used. That said, the E/W 
>>> traffic still uses the OVN overlay and Geneve tunnels.
>>> 
>>> 
 
 
 --
   Krzysztof Klimonda
   kklimo...@syntaxhighlighted.com
 
 
 
 On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
> Hi Sergey, all,
> 
> In fact we are working on a solution based on FRR where a (python) agent 
> reads from OVN SB DB (port binding events) and triggers FRR so that the 
> needed routes gets advertised. It leverages kernel networking to redirect 
> the traffic to the OVN overlay, and therefore does not require any 
> modifications to ovn itself (at least for now). The PoC code can be found 
> here: https://github.com/luis5tb/bgp-agent
> 
> And there is a series of blog posts related to how to use it on OpenStack 
> and how it works:
> - OVN-BGP agent introduction: 
> https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
> - How to set it up on a DevStack Environment: 
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
> - In-depth traffic flow inspection: 
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/
> 
> We are thinking that possible next steps if community is interested could 
> be related to adding multitenancy support (e.g., through EVPN), as well 
> as defining what could be the best API to decide what to expose through 
> 

Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Luis Tomas Bolivar
Of course we are fully open to redesigning it if there is a better approach!
And that was indeed the intention when linking to the current efforts: to
figure out whether this was a "valid" way of doing it, and how it can be
improved/redesigned. The main idea behind the current design was not to
need modifications to core OVN, as well as to minimize the complexity, i.e.,
not having to implement another kind of controller for managing the extra
OF flows.

Regarding the metadata/localport, I have a couple of questions, mainly due
to me not knowing enough about ovn/localport:
1) Isn't the metadata managed through a namespace? At the end of the day
that is also visible from the hypervisor, as well as the OVS bridges
2) Another difference is that we are using BGP ECMP and therefore not
associating any nic/bond to br-ex, and that is why we require some
rules/routes to redirect the traffic to br-ex.
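
For readers who haven't seen the PoC, the rules/routes in question are
policy-routing entries along these lines (reconstructed from the blog
posts, so treat the table name and exact commands as an approximation):

```python
def redirect_cmds(prefix, table="br-ex", dev="br-ex"):
    """Policy routing steering ingress traffic for an exposed prefix
    into OVS via br-ex, since no NIC/bond is attached to the bridge:
    an ip rule selects a dedicated table, whose route points at br-ex."""
    return [
        f"ip rule add to {prefix} table {table}",
        f"ip route add {prefix} dev {dev} table {table}",
    ]

cmds = redirect_cmds("10.0.0.0/24")
```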

Thanks for your input! Really appreciated!

Cheers,
Luis

On Tue, Mar 16, 2021 at 2:22 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:

> Would it make more sense to reverse this part of the design? I was
> thinking of having each chassis use its own IPv4/IPv6 address for the
> next hop in announcements, with OF flows installed to direct BGP control
> packets over to the host system, similar to how a localport is used today
> for Neutron's metadata service (although I'll admit that I haven't looked
> into how this integrates with DPDK and offload).
>
> This way we could also simplify the host's networking configuration, as
> extra routing rules and ARP entries would no longer be needed (I think it
> would be preferable, from a security perspective, for the hypervisor not
> to have direct access to overlay networks, which seems to be the case
> when you use rules like that).
>
> --
>   Krzysztof Klimonda
>   kklimo...@syntaxhighlighted.com
>
>
>
> On Tue, Mar 16, 2021, at 13:56, Luis Tomas Bolivar wrote:
>
> Hi Krzysztof,
>
> On Tue, Mar 16, 2021 at 12:54 PM Krzysztof Klimonda <
> kklimo...@syntaxhighlighted.com> wrote:
>
>
> Hi Luis,
>
> I haven't yet had time to give it a try in our lab, but from reading your
> blog posts I have a quick question. How does it work when either DPDK or
> NIC offload is used for OVN traffic? It seems you are (de-)encapsulating
> traffic on chassis nodes by routing it through the kernel - is this the
> current design or just an artifact of the PoC code?
>
>
> You are correct, that is a limitation as we are using kernel routing for
> N/S traffic, so DPDK/NIC offloading could not be used. That said, the E/W
> traffic still uses the OVN overlay and Geneve tunnels.
>
>
>
>
> --
>   Krzysztof Klimonda
>   kklimo...@syntaxhighlighted.com
>
>
>
> On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
>
> Hi Sergey, all,
>
> In fact we are working on a solution based on FRR where a (Python) agent
> reads from the OVN SB DB (port binding events) and triggers FRR so that the
> needed routes get advertised. It leverages kernel networking to redirect
> the traffic to the OVN overlay, and therefore does not require any
> modifications to OVN itself (at least for now). The PoC code can be found
> here: https://github.com/luis5tb/bgp-agent
>
> And there is a series of blog posts related to how to use it on OpenStack
> and how it works:
> - OVN-BGP agent introduction:
> https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
> - How to set it up in a DevStack environment:
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
> - In-depth traffic flow inspection:
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/
>
> We are thinking that possible next steps if community is interested could
> be related to adding multitenancy support (e.g., through EVPN), as well as
> defining what could be the best API to decide what to expose through BGP.
> It would be great to get some feedback on it!
>
> Cheers,
> Luis
>
> On Fri, Mar 12, 2021 at 8:09 PM Dan Sneddon  wrote:
>
>
>
> On 3/10/21 2:09 PM, Sergey Chekanov wrote:
> > I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to
> > communicate with the OVN Northbound Database right now, but I'm not sure
> > yet. FRR, I think, will be too heavy for it...
> >
> > On 10.03.2021 05:05, Raymond Burkholder wrote:
> >> You could look at it from a Free Range Routing perspective.  I've used
> >> it in combination with OVS for layer 2 and layer 3 handling.
> >>
> >> On 3/8/21 3:40 AM, Sergey Chekanov wrote:
> >>> Hello!
> >>>
> >>> Are there any plans to support BGP EVPN for extending virtual
> >>> networks to ToR hardware switches?
> >>> Or why is it a bad idea?
> >>>
> >>> ___
> >>> discuss mailing list
> >>> disc...@openvswitch.org
> >>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >>
> >
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >

Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Krzysztof Klimonda
Would it make more sense to reverse this part of the design? I was thinking of 
having each chassis use its own IPv4/IPv6 address for the next hop in 
announcements, with OF flows installed to direct BGP control packets over to the 
host system, similar to how a localport is used today for Neutron's metadata 
service (although I'll admit that I haven't looked into how this integrates 
with DPDK and offload).
This way we could also simplify the host's networking configuration, as extra 
routing rules and ARP entries would no longer be needed (I think it would be 
preferable, from a security perspective, for the hypervisor not to have direct 
access to overlay networks, which seems to be the case when you use rules like 
that).
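A rough sketch of what this redirect could look like in OpenFlow terms. Everything here is hypothetical (the port names, priorities, and the idea of a dedicated host-facing port are invented for illustration); it shows the localport-style steering of BGP control traffic, not OVN's actual implementation:

```python
# Hypothetical sketch: generate OpenFlow rules (ovs-ofctl add-flow syntax)
# that steer BGP control traffic (TCP/179) addressed to the chassis
# next-hop IP over to a port attached to the host, the way a Neutron
# localport is wired up for the metadata service. All names are made up.

BGP_PORT = 179

def bgp_redirect_flows(uplink_port: str, host_port: str, chassis_ip: str):
    """Return flow rules redirecting BGP sessions for chassis_ip to the host."""
    return [
        # BGP session traffic towards the chassis next-hop IP -> host port.
        f"priority=100,tcp,in_port={uplink_port},"
        f"nw_dst={chassis_ip},tp_dst={BGP_PORT},actions=output:{host_port}",
        # Replies from the local FRR towards remote peers -> uplink.
        f"priority=100,tcp,in_port={host_port},tp_src={BGP_PORT},"
        f"actions=output:{uplink_port}",
        # ARP for the chassis next-hop IP is answered by the host stack.
        f"priority=90,arp,in_port={uplink_port},arp_tpa={chassis_ip},"
        f"actions=output:{host_port}",
    ]

for flow in bgp_redirect_flows("patch-provnet", "bgp-host", "172.16.0.10"):
    print(flow)
```

Each string would be handed to `ovs-ofctl add-flow` (or programmed by ovn-controller itself, in the variant discussed here).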

--
  Krzysztof Klimonda
  kklimo...@syntaxhighlighted.com



On Tue, Mar 16, 2021, at 13:56, Luis Tomas Bolivar wrote:
> Hi Krzysztof,
> 
> On Tue, Mar 16, 2021 at 12:54 PM Krzysztof Klimonda 
>  wrote:
>> __
>> Hi Luis,
>> 
>> I haven't yet had time to give it a try in our lab, but from reading your 
>> blog posts I have a quick question. How does it work when either DPDK or NIC 
>> offload is used for OVN traffic? It seems you are (de-)encapsulating traffic 
>> on chassis nodes by routing it through the kernel - is this the current 
>> design or just an artifact of the PoC code?
> 
> You are correct, that is a limitation as we are using kernel routing for N/S 
> traffic, so DPDK/NIC offloading could not be used. That said, the E/W traffic 
> still uses the OVN overlay and Geneve tunnels.
> 
> 
>> 
>> 
>> --
>>   Krzysztof Klimonda
>>   kklimo...@syntaxhighlighted.com
>> 
>> 
>> 
>> On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
>>> Hi Sergey, all,
>>> 
>>> In fact we are working on a solution based on FRR where a (Python) agent 
>>> reads from the OVN SB DB (port binding events) and triggers FRR so that the 
>>> needed routes get advertised. It leverages kernel networking to redirect 
>>> the traffic to the OVN overlay, and therefore does not require any 
>>> modifications to OVN itself (at least for now). The PoC code can be found 
>>> here: https://github.com/luis5tb/bgp-agent
>>> 
>>> And there is a series of blog posts related to how to use it on OpenStack 
>>> and how it works:
>>> - OVN-BGP agent introduction: 
>>> https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
>>> - How to set it up in a DevStack environment: 
>>> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
>>> - In-depth traffic flow inspection: 
>>> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/
>>> 
>>> We are thinking that possible next steps if community is interested could 
>>> be related to adding multitenancy support (e.g., through EVPN), as well as 
>>> defining what could be the best API to decide what to expose through BGP. 
>>> It would be great to get some feedback on it!
>>> 
>>> Cheers,
>>> Luis
>>> 
>>> On Fri, Mar 12, 2021 at 8:09 PM Dan Sneddon  wrote:
 
 
 On 3/10/21 2:09 PM, Sergey Chekanov wrote:
 > I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to 
 > communicate with the OVN Northbound Database right now, but I'm not sure yet.
 > FRR, I think, will be too heavy for it...
 > 
 > On 10.03.2021 05:05, Raymond Burkholder wrote:
 >> You could look at it from a Free Range Routing perspective.  I've used 
 >> it in combination with OVS for layer 2 and layer 3 handling.
 >>
 >> On 3/8/21 3:40 AM, Sergey Chekanov wrote:
 >>> Hello!
 >>>
 >>> Are there any plans to support BGP EVPN for extending virtual 
 >>> networks to ToR hardware switches?
 >>> Or why is it a bad idea?
 >>>
 >>> ___
 >>> discuss mailing list
 >>> disc...@openvswitch.org
 >>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
 >>
 > 
 > ___
 > discuss mailing list
 > disc...@openvswitch.org
 > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
 > 
 
 FRR is delivered as a set of daemons which perform specific functions. 
 If you only need BGP functionality, you can just run bgpd. The zebra 
 daemon adds routing exchange between BGP and the kernel. The vtysh 
 daemon provides a command-line interface to interact with the FRR 
 processes. There is also a bi-directional forwarding detection (BFD) 
 daemon that can be run to detect unidirectional forwarding failures. 
 Other daemons provide other services and protocols. For this reason, I 
 felt that it was lightweight enough to just run a few daemons in a 
 container.
 
 A secondary concern for my use case was support on Red Hat Enterprise 
 Linux, which will be adding FRR to the supported packages shortly.
 
 I'm curious to hear any input that anyone has on FRR compared with GoBGP 
 and other daemons. Please feel free to respond on-list if it involves 
 OVS, or off-list if not. Thanks.

Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Luis Tomas Bolivar
Hi Krzysztof,

On Tue, Mar 16, 2021 at 12:54 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:

> Hi Luis,
>
> I haven't yet had time to give it a try in our lab, but from reading your
> blog posts I have a quick question. How does it work when either DPDK or
> NIC offload is used for OVN traffic? It seems you are (de-)encapsulating
> traffic on chassis nodes by routing it through the kernel - is this the
> current design or just an artifact of the PoC code?
>

You are correct, that is a limitation as we are using kernel routing for
N/S traffic, so DPDK/NIC offloading could not be used. That said, the E/W
traffic still uses the OVN overlay and Geneve tunnels.



> --
>   Krzysztof Klimonda
>   kklimo...@syntaxhighlighted.com
>
>
>
> On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
>
> Hi Sergey, all,
>
> In fact we are working on a solution based on FRR where a (Python) agent
> reads from the OVN SB DB (port binding events) and triggers FRR so that the
> needed routes get advertised. It leverages kernel networking to redirect
> the traffic to the OVN overlay, and therefore does not require any
> modifications to OVN itself (at least for now). The PoC code can be found
> here: https://github.com/luis5tb/bgp-agent
>
> And there is a series of blog posts related to how to use it on OpenStack
> and how it works:
> - OVN-BGP agent introduction:
> https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
> - How to set it up in a DevStack environment:
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
> - In-depth traffic flow inspection:
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/
>
> We are thinking that possible next steps if community is interested could
> be related to adding multitenancy support (e.g., through EVPN), as well as
> defining what could be the best API to decide what to expose through BGP.
> It would be great to get some feedback on it!
>
> Cheers,
> Luis
>
> On Fri, Mar 12, 2021 at 8:09 PM Dan Sneddon  wrote:
>
>
>
> On 3/10/21 2:09 PM, Sergey Chekanov wrote:
> > I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to
> > communicate with the OVN Northbound Database right now, but I'm not sure
> > yet. FRR, I think, will be too heavy for it...
> >
> > On 10.03.2021 05:05, Raymond Burkholder wrote:
> >> You could look at it from a Free Range Routing perspective.  I've used
> >> it in combination with OVS for layer 2 and layer 3 handling.
> >>
> >> On 3/8/21 3:40 AM, Sergey Chekanov wrote:
> >>> Hello!
> >>>
> >>> Are there any plans to support BGP EVPN for extending virtual
> >>> networks to ToR hardware switches?
> >>> Or why is it a bad idea?
> >>>
> >>> ___
> >>> discuss mailing list
> >>> disc...@openvswitch.org
> >>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >>
> >
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >
>
> FRR is delivered as a set of daemons which perform specific functions.
> If you only need BGP functionality, you can just run bgpd. The zebra
> daemon adds routing exchange between BGP and the kernel. The vtysh
> daemon provides a command-line interface to interact with the FRR
> processes. There is also a bi-directional forwarding detection (BFD)
> daemon that can be run to detect unidirectional forwarding failures.
> Other daemons provide other services and protocols. For this reason, I
> felt that it was lightweight enough to just run a few daemons in a
> container.
>
> A secondary concern for my use case was support on Red Hat Enterprise
> Linux, which will be adding FRR to the supported packages shortly.
>
> I'm curious to hear any input that anyone has on FRR compared with GoBGP
> and other daemons. Please feel free to respond on-list if it involves
> OVS, or off-list if not. Thanks.
>
> --
> Dan Sneddon |  Senior Principal Software Engineer
> dsned...@redhat.com |  redhat.com/cloud
> dsneddon:irc|  @dxs:twitter
>
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
>
> --
> LUIS TOMÁS BOLÍVAR
> Principal Software Engineer
> Red Hat
> Madrid, Spain
> ltoma...@redhat.com
>
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>


-- 
LUIS TOMÁS BOLÍVAR
Principal Software Engineer
Red Hat
Madrid, Spain
ltoma...@redhat.com
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-16 Thread Krzysztof Klimonda
Hi Luis,

I haven't yet had time to give it a try in our lab, but from reading your blog 
posts I have a quick question. How does it work when either DPDK or NIC offload 
is used for OVN traffic? It seems you are (de-)encapsulating traffic on chassis 
nodes by routing it through the kernel - is this the current design or just an 
artifact of the PoC code?

--
  Krzysztof Klimonda
  kklimo...@syntaxhighlighted.com



On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
> Hi Sergey, all,
> 
> In fact we are working on a solution based on FRR where a (Python) agent 
> reads from the OVN SB DB (port binding events) and triggers FRR so that the 
> needed routes get advertised. It leverages kernel networking to redirect the 
> traffic to the OVN overlay, and therefore does not require any modifications 
> to OVN itself (at least for now). The PoC code can be found here: 
> https://github.com/luis5tb/bgp-agent
> 
> And there is a series of blog posts related to how to use it on OpenStack and 
> how it works:
> - OVN-BGP agent introduction: 
> https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
> - How to set it up in a DevStack environment: 
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
> - In-depth traffic flow inspection: 
> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/
> 
> We are thinking that possible next steps if community is interested could be 
> related to adding multitenancy support (e.g., through EVPN), as well as 
> defining what could be the best API to decide what to expose through BGP. It 
> would be great to get some feedback on it!
> 
> Cheers,
> Luis
> 
> On Fri, Mar 12, 2021 at 8:09 PM Dan Sneddon  wrote:
>> 
>> 
>> On 3/10/21 2:09 PM, Sergey Chekanov wrote:
>> > I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to 
>> > communicate with the OVN Northbound Database right now, but I'm not sure
>> > yet. FRR, I think, will be too heavy for it...
>> > 
>> > On 10.03.2021 05:05, Raymond Burkholder wrote:
>> >> You could look at it from a Free Range Routing perspective.  I've used 
>> >> it in combination with OVS for layer 2 and layer 3 handling.
>> >>
>> >> On 3/8/21 3:40 AM, Sergey Chekanov wrote:
>> >>> Hello!
>> >>>
>> >>> Are there any plans to support BGP EVPN for extending virtual 
>> >>> networks to ToR hardware switches?
>> >>> Or why is it a bad idea?
>> >>>
>> >>> ___
>> >>> discuss mailing list
>> >>> disc...@openvswitch.org
>> >>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>> >>
>> > 
>> > ___
>> > discuss mailing list
>> > disc...@openvswitch.org
>> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>> > 
>> 
>> FRR is delivered as a set of daemons which perform specific functions. 
>> If you only need BGP functionality, you can just run bgpd. The zebra 
>> daemon adds routing exchange between BGP and the kernel. The vtysh 
>> daemon provides a command-line interface to interact with the FRR 
>> processes. There is also a bi-directional forwarding detection (BFD) 
>> daemon that can be run to detect unidirectional forwarding failures. 
>> Other daemons provide other services and protocols. For this reason, I 
>> felt that it was lightweight enough to just run a few daemons in a 
>> container.
>> 
>> A secondary concern for my use case was support on Red Hat Enterprise 
>> Linux, which will be adding FRR to the supported packages shortly.
>> 
>> I'm curious to hear any input that anyone has on FRR compared with GoBGP 
>> and other daemons. Please feel free to respond on-list if it involves 
>> OVS, or off-list if not. Thanks.
>> 
>> -- 
>> Dan Sneddon |  Senior Principal Software Engineer
>> dsned...@redhat.com |  redhat.com/cloud
>> dsneddon:irc|  @dxs:twitter
>> 
>> ___
>> discuss mailing list
>> disc...@openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> 
> 
> -- 
> LUIS TOMÁS BOLÍVAR
> Principal Software Engineer
> Red Hat
> Madrid, Spain
> ltoma...@redhat.com   
>  
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> 
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-15 Thread Luis Tomas Bolivar
Hi Sergey, all,

In fact we are working on a solution based on FRR where a (Python) agent
reads from the OVN SB DB (port binding events) and triggers FRR so that the
needed routes get advertised. It leverages kernel networking to redirect
the traffic to the OVN overlay, and therefore does not require any
modifications to OVN itself (at least for now). The PoC code can be found
here: https://github.com/luis5tb/bgp-agent

And there is a series of blog posts related to how to use it on OpenStack
and how it works:
- OVN-BGP agent introduction:
https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
- How to set it up in a DevStack environment:
https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
- In-depth traffic flow inspection:
https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/

We are thinking that possible next steps if community is interested could
be related to adding multitenancy support (e.g., through EVPN), as well as
defining what could be the best API to decide what to expose through BGP.
It would be great to get some feedback on it!
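As a very rough illustration of the agent's reaction to a port-binding event (the real code is in the linked repository), a sketch could look like the following. The event dict, the `bgp-nic` dummy device, and the exact commands are assumptions made for the sketch, not necessarily the PoC's actual behavior:

```python
# Simplified sketch of the ovn-bgp-agent idea: on an OVN SB Port_Binding
# event, produce the commands that (a) let FRR/zebra advertise the VM
# address as a /32 and (b) steer traffic into the OVN overlay via kernel
# routing. The real agent uses the OVSDB IDL; here the event is a plain
# dict and the commands are returned as strings, purely for illustration.

def on_port_binding_created(event: dict, nic: str = "br-ex") -> list[str]:
    ip = event["ip"]            # VM/FIP address from the Port_Binding row
    if event["chassis"] != event["local_chassis"]:
        return []               # only advertise ports bound to this chassis
    return [
        # Adding the IP to a local dummy device is enough for zebra to pick
        # it up and for bgpd to announce it (assuming a "redistribute
        # connected" style frr.conf).
        f"ip addr add {ip}/32 dev bgp-nic",
        # Kernel route pushing the traffic towards OVS.
        f"ip route add {ip}/32 dev {nic}",
    ]

for cmd in on_port_binding_created(
        {"ip": "10.0.0.5", "chassis": "ch1", "local_chassis": "ch1"}):
    print(cmd)
```

A symmetric handler for port deletions would undo the same commands.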

Cheers,
Luis

On Fri, Mar 12, 2021 at 8:09 PM Dan Sneddon  wrote:

>
>
> On 3/10/21 2:09 PM, Sergey Chekanov wrote:
> > I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to
> > communicate with the OVN Northbound Database right now, but I'm not sure
> > yet. FRR, I think, will be too heavy for it...
> >
> > On 10.03.2021 05:05, Raymond Burkholder wrote:
> >> You could look at it from a Free Range Routing perspective.  I've used
> >> it in combination with OVS for layer 2 and layer 3 handling.
> >>
> >> On 3/8/21 3:40 AM, Sergey Chekanov wrote:
> >>> Hello!
> >>>
> >>> Are there any plans to support BGP EVPN for extending virtual
> >>> networks to ToR hardware switches?
> >>> Or why is it a bad idea?
> >>>
> >>> ___
> >>> discuss mailing list
> >>> disc...@openvswitch.org
> >>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >>
> >
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >
>
> FRR is delivered as a set of daemons which perform specific functions.
> If you only need BGP functionality, you can just run bgpd. The zebra
> daemon adds routing exchange between BGP and the kernel. The vtysh
> daemon provides a command-line interface to interact with the FRR
> processes. There is also a bi-directional forwarding detection (BFD)
> daemon that can be run to detect unidirectional forwarding failures.
> Other daemons provide other services and protocols. For this reason, I
> felt that it was lightweight enough to just run a few daemons in a
> container.
>
> A secondary concern for my use case was support on Red Hat Enterprise
> Linux, which will be adding FRR to the supported packages shortly.
>
> I'm curious to hear any input that anyone has on FRR compared with GoBGP
> and other daemons. Please feel free to respond on-list if it involves
> OVS, or off-list if not. Thanks.
>
> --
> Dan Sneddon |  Senior Principal Software Engineer
> dsned...@redhat.com |  redhat.com/cloud
> dsneddon:irc|  @dxs:twitter
>
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>


-- 
LUIS TOMÁS BOLÍVAR
Principal Software Engineer
Red Hat
Madrid, Spain
ltoma...@redhat.com
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-12 Thread Dan Sneddon



On 3/10/21 2:09 PM, Sergey Chekanov wrote:
I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to 
communicate with the OVN Northbound Database right now, but I'm not sure yet.

FRR, I think, will be too heavy for it...

On 10.03.2021 05:05, Raymond Burkholder wrote:
You could look at it from a Free Range Routing perspective.  I've used 
it in combination with OVS for layer 2 and layer 3 handling.


On 3/8/21 3:40 AM, Sergey Chekanov wrote:

Hello!

Are there any plans to support BGP EVPN for extending virtual 
networks to ToR hardware switches?

Or why is it a bad idea?

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss




___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss



FRR is delivered as a set of daemons which perform specific functions. 
If you only need BGP functionality, you can just run bgpd. The zebra 
daemon handles route exchange between BGP and the kernel. The vtysh 
daemon provides a command-line interface to interact with the FRR 
processes. There is also a Bidirectional Forwarding Detection (BFD) 
daemon that can be run to detect unidirectional forwarding failures. 
Other daemons provide other services and protocols. For this reason, I 
felt that it was lightweight enough to just run a few daemons in a 
container.
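To illustrate this modularity, a minimal "BGP only" deployment might look like the fragment below. The ASN, peer address, and the use of `redistribute connected` are illustrative assumptions, not a recommendation from this thread:

```text
# /etc/frr/daemons -- start only the daemons you need
zebra=yes    # kernel route exchange
bgpd=yes     # BGP
bfdd=yes     # optional: fast failure detection for BGP sessions
ospfd=no
ospf6d=no

! /etc/frr/frr.conf -- minimal BGP config (illustrative ASN/peer)
router bgp 64999
 neighbor 192.0.2.1 remote-as 64998
 neighbor 192.0.2.1 bfd
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
```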


A secondary concern for my use case was support on Red Hat Enterprise 
Linux, which will be adding FRR to the supported packages shortly.


I'm curious to hear any input that anyone has on FRR compared with GoBGP 
and other daemons. Please feel free to respond on-list if it involves 
OVS, or off-list if not. Thanks.


--
Dan Sneddon |  Senior Principal Software Engineer
dsned...@redhat.com |  redhat.com/cloud
dsneddon:irc|  @dxs:twitter

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-10 Thread Sergey Chekanov
I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to 
communicate with the OVN Northbound Database right now, but I'm not sure yet.

FRR, I think, will be too heavy for it...

On 10.03.2021 05:05, Raymond Burkholder wrote:
You could look at it from a Free Range Routing perspective.  I've used 
it in combination with OVS for layer 2 and layer 3 handling.


On 3/8/21 3:40 AM, Sergey Chekanov wrote:

Hello!

Are there any plans to support BGP EVPN for extending virtual 
networks to ToR hardware switches?

Or why is it a bad idea?

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-10 Thread Piotr Misiak

We are also looking towards some kind of BGP functionality in
Neutron/OVN, because provider/external networks do not scale well,
especially without network segmentation support.

Currently we are working on a PoC implementation of a BGP speaker plugged
into Neutron which will announce /32 prefixes to an external router.

AFAIK the OVN VTEP L2 gateway implementation is very limited and does
not have HA, but I may be wrong.

IMHO BGP-EVPN is a better way to extend virtual networks, especially
when you already have EVPN in your physical network.


On 09.03.2021 23:04, Sergey Chekanov wrote:
> Looks like what I want to do.
> But it seems they do not want to use the current VTEP gateway functionality
> as a base for it; it will be interesting to find out why...
>
> On 10.03.2021 00:47, Daniel Alvarez wrote:
>> +Ankur
>>
>>> On 9 Mar 2021, at 22:34, Ben Pfaff  wrote:
>>>
>>> I'm not arguing against it, I just don't know of anyone working on it.
>>
>> Nutanix folks presented this in the OVS/OVN conf last fall:
>>
>> http://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf
>> 
>>
>>
>> I am not sure about the progress though.
>>
>>
>>>
>>> On Wed, Mar 10, 2021 at 12:23:31AM +0300, Sergey Chekanov wrote:
 I mean, why not use EVPN to extend an OVN logical switch (with the current
 VTEP gateway functionality)?
 Is it a good idea to implement it as a BGP daemon that takes records
 from the vtep database?
 The reason is: not all switches support the vtep schema, but almost all
 support EVPN.

 Maybe it is a bad idea... We just started using OVN, so I am not sure; I
 will be happy to hear your opinions.

 On 10.03.2021 00:11, Ben Pfaff wrote:
> On Mon, Mar 08, 2021 at 01:40:58PM +0300, Sergey Chekanov wrote:
>> Are there any plans to support BGP EVPN for extending virtual
>> networks to ToR hardware switches?
>> Or why is it a bad idea?
> I haven't heard anyone mention such plans.
>>> ___
>>> discuss mailing list
>>> disc...@openvswitch.org
>>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>>>
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-09 Thread Sergey Chekanov

Looks like what I want to do.
But it seems they do not want to use the current VTEP gateway functionality as 
a base for it; it will be interesting to find out why...


On 10.03.2021 00:47, Daniel Alvarez wrote:

+Ankur


On 9 Mar 2021, at 22:34, Ben Pfaff  wrote:

I'm not arguing against it, I just don't know of anyone working on it.


Nutanix folks presented this in the OVS/OVN conf last fall:

http://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf 



I am not sure about the progress though.




On Wed, Mar 10, 2021 at 12:23:31AM +0300, Sergey Chekanov wrote:

I mean, why not use EVPN to extend an OVN logical switch (with the current VTEP
gateway functionality)?
Is it a good idea to implement it as a BGP daemon that takes records
from the vtep database?
The reason is: not all switches support the vtep schema, but almost all
support EVPN.

Maybe it is a bad idea... We just started using OVN, so I am not sure; I will
be happy to hear your opinions.

On 10.03.2021 00:11, Ben Pfaff wrote:

On Mon, Mar 08, 2021 at 01:40:58PM +0300, Sergey Chekanov wrote:
Are there any plans to support BGP EVPN for extending virtual networks
to ToR hardware switches?
Or why is it a bad idea?

I haven't heard anyone mention such plans.

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-09 Thread Daniel Alvarez
+Ankur

> On 9 Mar 2021, at 22:34, Ben Pfaff  wrote:
> 
> I'm not arguing against it, I just don't know of anyone working on it.

Nutanix folks presented this in the OVS/OVN conf last fall:

http://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf

I am not sure about the progress though.


> 
>> On Wed, Mar 10, 2021 at 12:23:31AM +0300, Sergey Chekanov wrote:
>> I mean, why not use EVPN to extend an OVN logical switch (with the current
>> VTEP gateway functionality)?
>> Is it a good idea to implement it as a BGP daemon that takes records
>> from the vtep database?
>> The reason is: not all switches support the vtep schema, but almost all
>> support EVPN.
>> 
>> Maybe it is a bad idea... We just started using OVN, so I am not sure; I
>> will be happy to hear your opinions.
>> 
>>> On 10.03.2021 00:11, Ben Pfaff wrote:
>>> On Mon, Mar 08, 2021 at 01:40:58PM +0300, Sergey Chekanov wrote:
 Are there any plans to support BGP EVPN for extending virtual networks
 to ToR hardware switches?
 Or why is it a bad idea?
>>> I haven't heard anyone mention such plans.
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> 
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-09 Thread Ben Pfaff
I'm not arguing against it, I just don't know of anyone working on it.

On Wed, Mar 10, 2021 at 12:23:31AM +0300, Sergey Chekanov wrote:
> I mean, why not use EVPN to extend an OVN logical switch (with the current
> VTEP gateway functionality)?
> Is it a good idea to implement it as a BGP daemon that takes records
> from the vtep database?
> The reason is: not all switches support the vtep schema, but almost all
> support EVPN.
> 
> Maybe it is a bad idea... We just started using OVN, so I am not sure; I
> will be happy to hear your opinions.
> 
> On 10.03.2021 00:11, Ben Pfaff wrote:
> > On Mon, Mar 08, 2021 at 01:40:58PM +0300, Sergey Chekanov wrote:
> > > Are there any plans to support BGP EVPN for extending virtual networks
> > > to ToR hardware switches?
> > > Or why is it a bad idea?
> > I haven't heard anyone mention such plans.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-09 Thread Sergey Chekanov
I mean, why not use EVPN to extend an OVN logical switch (with the current 
VTEP gateway functionality)?
Is it a good idea to implement it as a BGP daemon that takes records 
from the vtep database?
The reason is: not all switches support the vtep schema, but almost all 
support EVPN.

Maybe it is a bad idea... We just started using OVN, so I am not sure; I will 
be happy to hear your opinions.
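As a rough sketch of what such a daemon could do, the mapping from vtep database entries to EVPN type-2 route parameters might look like the following. The rows are mocked as plain dicts, and the field names are assumptions loosely based on the hardware_vtep schema (`Ucast_Macs_Remote`, the `Logical_Switch` `tunnel_key`); a real daemon would speak OVSDB and use a BGP implementation (GoBGP, FRR, ...) to actually originate the routes:

```python
# Hypothetical sketch: walk vtep MAC entries and turn each one into the
# parameters of an EVPN type-2 (MAC/IP advertisement) route, using the
# Logical_Switch tunnel_key as the VNI. Purely illustrative data model.

def vtep_rows_to_evpn(rows, vni_by_ls):
    """Map vtep Ucast_Macs_Remote-style rows to EVPN type-2 parameters."""
    routes = []
    for row in rows:
        routes.append({
            "type": 2,                        # EVPN MAC/IP advertisement
            "mac": row["MAC"],
            "ip": row.get("ipaddr") or None,  # MAC-only routes are valid
            "vni": vni_by_ls[row["logical_switch"]],
            "nexthop": row["locator"],        # VXLAN tunnel endpoint
        })
    return routes

routes = vtep_rows_to_evpn(
    [{"MAC": "aa:bb:cc:dd:ee:01", "ipaddr": "10.0.0.5",
      "logical_switch": "ls0", "locator": "192.0.2.10"}],
    {"ls0": 1001},
)
print(routes)
```

The reverse direction (installing remote EVPN routes into the vtep tables) would follow the same mapping.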


On 10.03.2021 00:11, Ben Pfaff wrote:

On Mon, Mar 08, 2021 at 01:40:58PM +0300, Sergey Chekanov wrote:

Are there any plans to support BGP EVPN for extending virtual networks
to ToR hardware switches?
Or why is it a bad idea?

I haven't heard anyone mention such plans.

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-09 Thread Ben Pfaff
On Mon, Mar 08, 2021 at 01:40:58PM +0300, Sergey Chekanov wrote:
> Are there any plans to support BGP EVPN for extending virtual networks
> to ToR hardware switches?
> Or why is it a bad idea?

I haven't heard anyone mention such plans.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss