Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Moshe Levi



> -Original Message-
> From: Dumitru Ceara 
> Sent: Friday, March 26, 2021 12:58 AM
> To: Moshe Levi ; ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] [ovn] help with creating logical topology with l3
> gateway
> 
> External email: Use caution opening links or attachments
> 
> 
> On 3/25/21 10:50 PM, Moshe Levi wrote:
> >
> >
> >> -Original Message-
> >> From: Dumitru Ceara 
> >> Sent: Thursday, March 25, 2021 1:44 PM
> >> To: Moshe Levi ; ovs-discuss@openvswitch.org
> >> Subject: Re: [ovs-discuss] [ovn] help with creating logical topology
> >> with l3 gateway
> >>
> >> External email: Use caution opening links or attachments
> >>
> >>
> >> On 3/25/21 12:40 PM, Dumitru Ceara wrote:
>  Also, to see exactly where the packet is dropped, please share the
>  output of:
> 
>  inport=$(ovs-vsctl --bare --columns ofport list interface vm1)
> 
> >> flow=40440000000340440000000108004500005417cd40004001b3980a000102644000020800e1d35d0a0001c1635c6000000000d789050000000000101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f3031323334353637
> >>> To "demystify" this a bit, I got the packet contents by running the
> >>> following command while ping was running from vm1:
> >>>
> >>> ip netns exec vm1 tcpdump -vvvnne -i vm1 -c1 -XX | ovs-tcpundump
> >>>
>  ovs-appctl ofproto/trace br-int in_port=$in_port $flow | ovn-detrace
> >>
> >> And here's a typo, sorry, should be:
> >>
> >> ovs-appctl ofproto/trace br-int in_port=$inport $flow | ovn-detrace
> > Here is the output [1] https://pastebin.ubuntu.com/p/jDYz9Dfy2t/
> >
> > OVN 21.03 -> commit da028c72bdc7742b3065d1df95a3789fbc16b27a
> > OVS 2.15 -> commit d5dc16670ec95702058fccad253ed6d24ebd5329
> >
> 
> I think I know what the problem is (aside from the missing route).
> Is it possible that the node's chassis-id (OVS system-id) isn't 9a790be7-a876-
> 48a9-b7c5-1c45c6946dd4?
> 
> In the commands you shared you had:
> ovn-nbctl create Logical_Router name=gw-worker1
> options:chassis=9a790be7-a876-48a9-b7c5-1c45c6946dd4
> 
> The chassis-id should correspond to the system-id OVS was started with on
> the node.  You can retrieve this with:
> 
> ovs-vsctl get open_vswitch . external_ids:system-id
Thanks, with your command to get the chassis-id it works. I had used the UUID
from "ovn-sbctl list chassis". Shouldn't that work as well?
> 
> E.g., on my test setup:
> 
> $ ovs-vsctl get open_vswitch . external_ids:system-id
> local
> $ ovn-sbctl --columns name list chassis local
> name: local
> 
> To bind the GW router to this chassis:
> ovn-nbctl set Logical_Router gw-worker1 options:chassis=local
> 
> Regards,
> Dumitru
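To answer the question above about the Southbound UUID: the chassis-id OVN matches is the Chassis "name" column (the OVS system-id), not the row's _uuid. A minimal sketch of the check-and-bind flow, using the names from this thread:

```shell
# The chassis-id OVN matches against is the OVS system-id, i.e. the
# Southbound Chassis table's "name" column, not the row's _uuid.
sysid=$(ovs-vsctl get open_vswitch . external_ids:system-id | tr -d '"')

# Confirm a Chassis row with that name exists in the Southbound DB.
ovn-sbctl --bare --columns name find chassis name="$sysid"

# Bind the gateway router using the name, not the row UUID.
ovn-nbctl set Logical_Router gw-worker1 options:chassis="$sysid"
```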

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Dumitru Ceara
On 3/25/21 10:50 PM, Moshe Levi wrote:
> 
> 
>> -Original Message-
>> From: Dumitru Ceara 
>> Sent: Thursday, March 25, 2021 1:44 PM
>> To: Moshe Levi ; ovs-discuss@openvswitch.org
>> Subject: Re: [ovs-discuss] [ovn] help with creating logical topology with l3
>> gateway
>>
>> External email: Use caution opening links or attachments
>>
>>
>> On 3/25/21 12:40 PM, Dumitru Ceara wrote:
 Also, to see exactly where the packet is dropped, please share the
 output of:

 inport=$(ovs-vsctl --bare --columns ofport list interface vm1)

 flow=40440000000340440000000108004500005417cd40004001b3980a000102644000020800e1d35d0a0001c1635c6000000000d789050000000000101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f3031323334353637
>>> To "demystify" this a bit, I got the packet contents by running the
>>> following command while ping was running from vm1:
>>>
>>> ip netns exec vm1 tcpdump -vvvnne -i vm1 -c1 -XX | ovs-tcpundump
>>>
 ovs-appctl ofproto/trace br-int in_port=$in_port $flow | ovn-detrace
>>
>> And here's a typo, sorry, should be:
>>
>> ovs-appctl ofproto/trace br-int in_port=$inport $flow | ovn-detrace
> Here is the output [1] https://pastebin.ubuntu.com/p/jDYz9Dfy2t/
> 
> OVN 21.03 -> commit da028c72bdc7742b3065d1df95a3789fbc16b27a
> OVS 2.15 -> commit d5dc16670ec95702058fccad253ed6d24ebd5329
> 

I think I know what the problem is (aside from the missing route).
Is it possible that the node's chassis-id (OVS system-id) isn't
9a790be7-a876-48a9-b7c5-1c45c6946dd4?

In the commands you shared you had:
ovn-nbctl create Logical_Router name=gw-worker1 
options:chassis=9a790be7-a876-48a9-b7c5-1c45c6946dd4

The chassis-id should correspond to the system-id OVS was started with
on the node.  You can retrieve this with:

ovs-vsctl get open_vswitch . external_ids:system-id

E.g., on my test setup:

$ ovs-vsctl get open_vswitch . external_ids:system-id
local

$ ovn-sbctl --columns name list chassis local
name: local

To bind the GW router to this chassis:
ovn-nbctl set Logical_Router gw-worker1 options:chassis=local

Regards,
Dumitru

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Moshe Levi



> -Original Message-
> From: Dumitru Ceara 
> Sent: Thursday, March 25, 2021 1:44 PM
> To: Moshe Levi ; ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] [ovn] help with creating logical topology with l3
> gateway
> 
> External email: Use caution opening links or attachments
> 
> 
> On 3/25/21 12:40 PM, Dumitru Ceara wrote:
> >> Also, to see exactly where the packet is dropped, please share the
> >> output of:
> >>
> >> inport=$(ovs-vsctl --bare --columns ofport list interface vm1)
> >>
> >> flow=40440000000340440000000108004500005417cd40004001b3980a000102644000020800e1d35d0a0001c1635c6000000000d789050000000000101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f3031323334353637
> > To "demystify" this a bit, I got the packet contents by running the
> > following command while ping was running from vm1:
> >
> > ip netns exec vm1 tcpdump -vvvnne -i vm1 -c1 -XX | ovs-tcpundump
> >
> >> ovs-appctl ofproto/trace br-int in_port=$in_port $flow | ovn-detrace
> 
> And here's a typo, sorry, should be:
> 
> ovs-appctl ofproto/trace br-int in_port=$inport $flow | ovn-detrace
Here is the output [1] https://pastebin.ubuntu.com/p/jDYz9Dfy2t/

OVN 21.03 -> commit da028c72bdc7742b3065d1df95a3789fbc16b27a
OVS 2.15 -> commit d5dc16670ec95702058fccad253ed6d24ebd5329

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] Help with network loop between two trunked OVS instances

2021-03-25 Thread Bryan T. Richardson
Hello-

I'm trying to mirror traffic using a GRE tunnel between two OVS instances
running on separate servers that are trunked via a managed physical switch, but
doing so results in a network loop. My goal is to mirror traffic to/from VMs on
one server to a target VM on the other server.

The two servers each have two NICs, eth0 and eth1. eth0 is connected to a
separate switch that allows for cluster management, and has an IP address. eth1
is connected to a managed switch port configured for trunking, doesn't have an
IP address assigned, and is added to the OVS bridge so OVS VLANs are extended
across both servers via the trunk.

I have two VMs, X and Y, running on server A, and one VM, Z, running on server
B. I want to mirror packets between VMs X and Y to VM Z. The way I'm attempting
to do this now is as follows:

VMs X and Y have ports tapX and tapY on the OVS switch on Server A tagged with
VLAN 101, and can ping each other.

VM Z has port tapZ on the OVS switch on Server B tagged with VLAN 201.

Server A and Server B have an addressed internal port on their OVS switch tagged
with VLAN 301 so each host can talk to each other over the trunk.

Server B has a GRE port and OpenFlow rule configured as follows:

ovs-vsctl add-port br0 gre0 \
  -- set interface gre0 type=gre options:remote_ip=flow options:key=1234567890

ovs-ofctl add-flow br0 "in_port=gre0 actions=tapZ"

Server A has a GRE port and mirror configured as follows:

ovs-vsctl add-port br0 gre0 \
  -- set interface gre0 type=gre options:remote_ip= options:key=1234567890

ovs-vsctl \
  -- --id=@p0 get port tapX \
  -- --id=@p1 get port tapY \
  -- --id=@g0 get port gre0 \
  -- --id=@m create mirror name=m0 select-dst-port=@p0,@p1 output-port=@g0 \
  -- set bridge br0 mirrors=@m

The GRE tunnels can be up with no loop apparently present, because VMs X and Y
can continue to ping each other. As soon as I create the mirror on Server A, I
can see the pings via tcpdump on VM Z, so the mirror and OpenFlow configs are
working, but the pings between VMs X and Y begin to degrade and eventually stop.
As soon as I clear the mirror on Server A the pings start up again.

My rationale behind using the OpenFlow rule on Server B was to try and avoid the
mirrored packets coming in over the GRE tunnel from being flooded to all the
ports on the bridge, especially the trunked eth1 port.

My rationale for being selective about what source ports are mirrored on Server
A was similar, in that I was trying to avoid any mirrored packets showing up on
the trunked eth1 port from being sent into the GRE tunnel again.

Any ideas why I'm still getting a network loop? I'm sure it's something obvious
and I'm just being an idiot, but I'm currently at a loss.

Please advise. Thanks in advance!

-V/R, Bryan
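A few diagnostics that may help narrow down the loop described above, sketched under the assumption of the bridge and port names from this message:

```shell
# Watch MAC learning while the mirror is active; in a loop the VM MACs
# typically flap between the trunk port (eth1) and the GRE port.
ovs-appctl fdb/show br0

# Resolve the GRE port's OpenFlow number and trace a broadcast frame
# arriving on it, to see where the bridge would forward it.
gre_port=$(ovs-vsctl --bare --columns ofport list interface gre0)
ovs-appctl ofproto/trace br0 "in_port=$gre_port,dl_dst=ff:ff:ff:ff:ff:ff"

# One possible mitigation: stop the bridge from flooding unknown or
# broadcast traffic out of the tunnel port, so mirrored frames cannot
# re-enter the trunk.
ovs-ofctl mod-port br0 gre0 no-flood
```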
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Dumitru Ceara
On 3/25/21 12:40 PM, Dumitru Ceara wrote:
>> Also, to see exactly where the packet is dropped, please share the
>> output of:
>>
>> inport=$(ovs-vsctl --bare --columns ofport list interface vm1)  
>> flow=40440000000340440000000108004500005417cd40004001b3980a000102644000020800e1d35d0a0001c1635c6000000000d789050000000000101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f3031323334353637
> To "demystify" this a bit, I got the packet contents by running the
> following command while ping was running from vm1:
> 
> ip netns exec vm1 tcpdump -vvvnne -i vm1 -c1 -XX | ovs-tcpundump
> 
>> ovs-appctl ofproto/trace br-int in_port=$in_port $flow | ovn-detrace

And here's a typo, sorry, should be:

ovs-appctl ofproto/trace br-int in_port=$inport $flow | ovn-detrace
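Putting the corrected pieces of this thread together, the capture-and-trace workflow looks roughly like this (a sketch; it assumes the vm1 namespace and br-int setup from earlier in the thread):

```shell
# OpenFlow port number of the VM interface on the integration bridge.
inport=$(ovs-vsctl --bare --columns ofport list interface vm1)

# Capture one packet inside the namespace while ping is running and
# convert it to the hex string ofproto/trace expects.
flow=$(ip netns exec vm1 tcpdump -vvvnne -i vm1 -c1 -XX | ovs-tcpundump)

# Trace it through the pipeline; ovn-detrace annotates each OpenFlow
# stage with the logical flow it was generated from.
ovs-appctl ofproto/trace br-int in_port=$inport $flow | ovn-detrace
```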

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Dumitru Ceara
On 3/25/21 12:36 PM, Dumitru Ceara wrote:
> On 3/25/21 12:16 PM, Moshe Levi wrote:
>>
>>
>>> -Original Message-
>>> From: Dumitru Ceara 
>>> Sent: Thursday, March 25, 2021 12:49 PM
>>> To: Moshe Levi ; ovs-discuss@openvswitch.org
>>> Subject: Re: [ovs-discuss] [ovn] help with creating logical topology with l3
>>> gateway
>>>
>>> External email: Use caution opening links or attachments
>>>
>>>
>>> On 3/24/21 11:31 PM, Moshe Levi wrote:
 Hi all,
>>>
>>> Hi Moshe,
>>>
 I am trying to create a logical topology with an L3 gateway.
 I have created the following logical topology:
 I am able to ping from the ns to 100.64.0.1, but pinging 100.64.0.2 (the
 port on gw-worker1) fails.
 Below I pasted the commands I am using. Can you help me understand what
 is missing or what I am doing wrong?
>>>
>>> The problem is gw-worker1 has no route to reach 10.0.0.0/16.
>>>

  |
 |  router | gw-worker1
  -  port 'gw-worker1-join':100.64.0.2/16
  |
 |  switch | join  100.64.0.0/16
  -
  |
 |  router | join-router port 'join-router-ls-join':  100.64.0.1/16
  -  port 'join-router-worker1-net': 10.0.1.1/24
  |
  |
 |  switch | join-router 10.0.1.0/24
  -
  /
  ___/_
 |  ns|
  -


 ## worker 1 - worker1-net
 ovn-nbctl ls-add worker1-net
 ovn-nbctl lsp-add worker1-net vm1
 ovn-nbctl lsp-set-addresses vm1 "40:44:00:00:00:01 10.0.1.2"



 ## create join router
 ovn-nbctl lr-add join-router

 ## create router port to connect
 ovn-nbctl lrp-add join-router join-router-worker1-net
 40:44:00:00:00:03 10.0.1.1/24 ovn-nbctl lrp-add join-router
 join-router-worker2-net 40:44:00:00:00:04 10.0.2.1/24


 ## create the 'worker1-net' switch port for connection to 'join-router'
 ovn-nbctl lsp-add worker1-net worker1-net-join-router ovn-nbctl
 lsp-set-type worker1-net-join-router  router ovn-nbctl
 lsp-set-addresses worker1-net-join-router  router ovn-nbctl
 lsp-set-options worker1-net-join-router
 router-port=join-router-worker1-net




 #worker 1
 ovs-vsctl add-port br-int vm1 -- set Interface vm1 type=internal --
 set Interface vm1 external_ids:iface-id=vm1 ip netns add vm1 ip link
 set vm1 netns vm1 ip netns exec vm1 ip link set vm1 address
 40:44:00:00:00:01 ip netns exec vm1 ip addr add 10.0.1.2/24 dev vm1 ip
 netns exec vm1 ip link set vm1 up ip netns exec vm1 ip route add
 default via 10.0.1.1




 # create gw-worker1
 ovn-nbctl create Logical_Router name=gw-worker1
 options:chassis=9a790be7-a876-48a9-b7c5-1c45c6946dd4
>>>
>>> This should fix it:
>>>
>>> ovn-nbctl lr-route-add gw-worker1 10.0.0.0/16 100.64.0.1
>> Dumitru, thanks for the response. I added the above route, but it still
>> doesn't work. Anything else that I am missing?
> 
> That's weird because it did fix it when I configured the topology using
> the commands you shared; can you please also get the output of:
> 
> ovn-nbctl lr-route-list gw-worker1
> 
> Also, to see exactly where the packet is dropped, please share the
> output of:
> 
> inport=$(ovs-vsctl --bare --columns ofport list interface vm1)  
> flow=40440000000340440000000108004500005417cd40004001b3980a000102644000020800e1d35d0a0001c1635c6000000000d789050000000000101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f3031323334353637

To "demystify" this a bit, I got the packet contents by running the
following command while ping was running from vm1:

ip netns exec vm1 tcpdump -vvvnne -i vm1 -c1 -XX | ovs-tcpundump

> ovs-appctl ofproto/trace br-int in_port=$in_port $flow | ovn-detrace
> 
> 

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Dumitru Ceara
On 3/25/21 12:16 PM, Moshe Levi wrote:
> 
> 
>> -Original Message-
>> From: Dumitru Ceara 
>> Sent: Thursday, March 25, 2021 12:49 PM
>> To: Moshe Levi ; ovs-discuss@openvswitch.org
>> Subject: Re: [ovs-discuss] [ovn] help with creating logical topology with l3
>> gateway
>>
>> External email: Use caution opening links or attachments
>>
>>
>> On 3/24/21 11:31 PM, Moshe Levi wrote:
>>> Hi all,
>>
>> Hi Moshe,
>>
>>> I am trying to create a logical topology with an L3 gateway.
>>> I have created the following logical topology:
>>> I am able to ping from the ns to 100.64.0.1, but pinging 100.64.0.2 (the
>>> port on gw-worker1) fails.
>>> Below I pasted the commands I am using. Can you help me understand what
>>> is missing or what I am doing wrong?
>>
>> The problem is gw-worker1 has no route to reach 10.0.0.0/16.
>>
>>>
>>>  |
>>> |  router | gw-worker1
>>>  -  port 'gw-worker1-join':100.64.0.2/16
>>>  |
>>> |  switch | join  100.64.0.0/16
>>>  -
>>>  |
>>> |  router | join-router port 'join-router-ls-join':  100.64.0.1/16
>>>  -  port 'join-router-worker1-net': 10.0.1.1/24
>>>  |
>>>  |
>>> |  switch | join-router 10.0.1.0/24
>>>  -
>>>  /
>>>  ___/_
>>> |  ns|
>>>  -
>>>
>>>
>>> ## worker 1 - worker1-net
>>> ovn-nbctl ls-add worker1-net
>>> ovn-nbctl lsp-add worker1-net vm1
>>> ovn-nbctl lsp-set-addresses vm1 "40:44:00:00:00:01 10.0.1.2"
>>>
>>>
>>>
>>> ## create join router
>>> ovn-nbctl lr-add join-router
>>>
>>> ## create router port to connect
>>> ovn-nbctl lrp-add join-router join-router-worker1-net
>>> 40:44:00:00:00:03 10.0.1.1/24 ovn-nbctl lrp-add join-router
>>> join-router-worker2-net 40:44:00:00:00:04 10.0.2.1/24
>>>
>>>
>>> ## create the 'worker1-net' switch port for connection to 'join-router'
>>> ovn-nbctl lsp-add worker1-net worker1-net-join-router ovn-nbctl
>>> lsp-set-type worker1-net-join-router  router ovn-nbctl
>>> lsp-set-addresses worker1-net-join-router  router ovn-nbctl
>>> lsp-set-options worker1-net-join-router
>>> router-port=join-router-worker1-net
>>>
>>>
>>>
>>>
>>> #worker 1
>>> ovs-vsctl add-port br-int vm1 -- set Interface vm1 type=internal --
>>> set Interface vm1 external_ids:iface-id=vm1 ip netns add vm1 ip link
>>> set vm1 netns vm1 ip netns exec vm1 ip link set vm1 address
>>> 40:44:00:00:00:01 ip netns exec vm1 ip addr add 10.0.1.2/24 dev vm1 ip
>>> netns exec vm1 ip link set vm1 up ip netns exec vm1 ip route add
>>> default via 10.0.1.1
>>>
>>>
>>>
>>>
>>> # create gw-worker1
>>> ovn-nbctl create Logical_Router name=gw-worker1
>>> options:chassis=9a790be7-a876-48a9-b7c5-1c45c6946dd4
>>
>> This should fix it:
>>
>> ovn-nbctl lr-route-add gw-worker1 10.0.0.0/16 100.64.0.1
> Dumitru, thanks for the response. I added the above route, but it still
> doesn't work. Anything else that I am missing?

That's weird because it did fix it when I configured the topology using
the commands you shared; can you please also get the output of:

ovn-nbctl lr-route-list gw-worker1

Also, to see exactly where the packet is dropped, please share the
output of:

inport=$(ovs-vsctl --bare --columns ofport list interface vm1)  
flow=40440000000340440000000108004500005417cd40004001b3980a000102644000020800e1d35d0a0001c1635c6000000000d789050000000000101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f3031323334353637
ovs-appctl ofproto/trace br-int in_port=$in_port $flow | ovn-detrace


>>
>> Regards,
>> Dumitru
>>
>>>
>>>
>>> # create a new logical switch for connecting the 'gw-worker1' and
>>> 'join-router' routers ovn-nbctl ls-add join
>>>
>>> # connect 'gw-worker1' to the 'join' switch ovn-nbctl lrp-add
>>> gw-worker1 gw-worker1-join 40:44:00:00:00:07 100.64.0.2/16 ovn-nbctl
>>> lsp-add join join-gw-worker1 ovn-nbctl lsp-set-type join-gw-worker1
>>> router ovn-nbctl lsp-set-addresses join-gw-worker1 router ovn-nbctl
>>> lsp-set-options join-gw-worker1 router-port=gw-worker1-join
>>>
>>>
>>> # connect 'join-router' to the 'join' switch ovn-nbctl lrp-add
>>> join-router join-router-ls-join 40:44:00:00:00:06 100.64.0.1/16
>>> ovn-nbctl lsp-add join ls-join-router-join ovn-nbctl lsp-set-type
>>> ls-join-router-join router ovn-nbctl lsp-set-addresses
>>> ls-join-router-join router ovn-nbctl lsp-set-options
>>> ls-join-router-join router-port=join-router-ls-join
>>>
>>>
>>>
>>> ___
>>> discuss mailing list
>>> disc...@openvswitch.org
>>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Moshe Levi



> -Original Message-
> From: Dumitru Ceara 
> Sent: Thursday, March 25, 2021 12:49 PM
> To: Moshe Levi ; ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] [ovn] help with creating logical topology with l3
> gateway
> 
> External email: Use caution opening links or attachments
> 
> 
> On 3/24/21 11:31 PM, Moshe Levi wrote:
> > Hi all,
> 
> Hi Moshe,
> 
> > I am trying to create a logical topology with an L3 gateway.
> > I have created the following logical topology:
> > I am able to ping from the ns to 100.64.0.1, but pinging 100.64.0.2 (the
> > port on gw-worker1) fails.
> > Below I pasted the commands I am using. Can you help me understand what
> > is missing or what I am doing wrong?
> 
> The problem is gw-worker1 has no route to reach 10.0.0.0/16.
> 
> >
> >  |
> > |  router | gw-worker1
> >  -  port 'gw-worker1-join':100.64.0.2/16
> >  |
> > |  switch | join  100.64.0.0/16
> >  -
> >  |
> > |  router | join-router port 'join-router-ls-join':  100.64.0.1/16
> >  -  port 'join-router-worker1-net': 10.0.1.1/24
> >  |
> >  |
> > |  switch | join-router 10.0.1.0/24
> >  -
> >  /
> >  ___/_
> > |  ns|
> >  -
> >
> >
> > ## worker 1 - worker1-net
> > ovn-nbctl ls-add worker1-net
> > ovn-nbctl lsp-add worker1-net vm1
> > ovn-nbctl lsp-set-addresses vm1 "40:44:00:00:00:01 10.0.1.2"
> >
> >
> >
> > ## create join router
> > ovn-nbctl lr-add join-router
> >
> > ## create router port to connect
> > ovn-nbctl lrp-add join-router join-router-worker1-net
> > 40:44:00:00:00:03 10.0.1.1/24 ovn-nbctl lrp-add join-router
> > join-router-worker2-net 40:44:00:00:00:04 10.0.2.1/24
> >
> >
> > ## create the 'worker1-net' switch port for connection to 'join-router'
> > ovn-nbctl lsp-add worker1-net worker1-net-join-router ovn-nbctl
> > lsp-set-type worker1-net-join-router  router ovn-nbctl
> > lsp-set-addresses worker1-net-join-router  router ovn-nbctl
> > lsp-set-options worker1-net-join-router
> > router-port=join-router-worker1-net
> >
> >
> >
> >
> > #worker 1
> > ovs-vsctl add-port br-int vm1 -- set Interface vm1 type=internal --
> > set Interface vm1 external_ids:iface-id=vm1 ip netns add vm1 ip link
> > set vm1 netns vm1 ip netns exec vm1 ip link set vm1 address
> > 40:44:00:00:00:01 ip netns exec vm1 ip addr add 10.0.1.2/24 dev vm1 ip
> > netns exec vm1 ip link set vm1 up ip netns exec vm1 ip route add
> > default via 10.0.1.1
> >
> >
> >
> >
> > # create gw-worker1
> > ovn-nbctl create Logical_Router name=gw-worker1
> > options:chassis=9a790be7-a876-48a9-b7c5-1c45c6946dd4
> 
> This should fix it:
> 
> ovn-nbctl lr-route-add gw-worker1 10.0.0.0/16 100.64.0.1
Dumitru, thanks for the response. I added the above route, but it still
doesn't work. Anything else that I am missing?
> 
> Regards,
> Dumitru
> 
> >
> >
> > # create a new logical switch for connecting the 'gw-worker1' and
> > 'join-router' routers ovn-nbctl ls-add join
> >
> > # connect 'gw-worker1' to the 'join' switch ovn-nbctl lrp-add
> > gw-worker1 gw-worker1-join 40:44:00:00:00:07 100.64.0.2/16 ovn-nbctl
> > lsp-add join join-gw-worker1 ovn-nbctl lsp-set-type join-gw-worker1
> > router ovn-nbctl lsp-set-addresses join-gw-worker1 router ovn-nbctl
> > lsp-set-options join-gw-worker1 router-port=gw-worker1-join
> >
> >
> > # connect 'join-router' to the 'join' switch ovn-nbctl lrp-add
> > join-router join-router-ls-join 40:44:00:00:00:06 100.64.0.1/16
> > ovn-nbctl lsp-add join ls-join-router-join ovn-nbctl lsp-set-type
> > ls-join-router-join router ovn-nbctl lsp-set-addresses
> > ls-join-router-join router ovn-nbctl lsp-set-options
> > ls-join-router-join router-port=join-router-ls-join
> >
> >
> >
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> >
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> >

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] [ovn] help with creating logical topology with l3 gateway

2021-03-25 Thread Dumitru Ceara
On 3/24/21 11:31 PM, Moshe Levi wrote:
> Hi all,

Hi Moshe,

> I am trying to create a logical topology with an L3 gateway.
> I have created the following logical topology:
> I am able to ping from the ns to 100.64.0.1, but pinging 100.64.0.2 (the
> port on gw-worker1) fails.
> Below I pasted the commands I am using. Can you help me understand what
> is missing or what I am doing wrong?

The problem is gw-worker1 has no route to reach 10.0.0.0/16.

> 
>  |
> |  router | gw-worker1
>  -  port 'gw-worker1-join':100.64.0.2/16
>  |
> |  switch | join  100.64.0.0/16
>  -
>  |
> |  router | join-router port 'join-router-ls-join':  100.64.0.1/16
>  -  port 'join-router-worker1-net': 10.0.1.1/24
>  |
>  |
> |  switch | join-router 10.0.1.0/24
>  -
>  /
>  ___/_
> |  ns|
>  -
> 
> 
> ## worker 1 - worker1-net
> ovn-nbctl ls-add worker1-net
> ovn-nbctl lsp-add worker1-net vm1
> ovn-nbctl lsp-set-addresses vm1 "40:44:00:00:00:01 10.0.1.2"
> 
> 
> 
> ## create join router
> ovn-nbctl lr-add join-router
> 
> ## create router port to connect
> ovn-nbctl lrp-add join-router join-router-worker1-net 40:44:00:00:00:03 
> 10.0.1.1/24
> ovn-nbctl lrp-add join-router join-router-worker2-net 40:44:00:00:00:04 
> 10.0.2.1/24
> 
> 
> ## create the 'worker1-net' switch port for connection to 'join-router'
> ovn-nbctl lsp-add worker1-net worker1-net-join-router
> ovn-nbctl lsp-set-type worker1-net-join-router  router
> ovn-nbctl lsp-set-addresses worker1-net-join-router  router
> ovn-nbctl lsp-set-options worker1-net-join-router  
> router-port=join-router-worker1-net
> 
> 
> 
> 
> #worker 1
> ovs-vsctl add-port br-int vm1 -- set Interface vm1 type=internal -- set 
> Interface vm1 external_ids:iface-id=vm1
> ip netns add vm1
> ip link set vm1 netns vm1
> ip netns exec vm1 ip link set vm1 address 40:44:00:00:00:01
> ip netns exec vm1 ip addr add 10.0.1.2/24 dev vm1
> ip netns exec vm1 ip link set vm1 up
> ip netns exec vm1 ip route add default via 10.0.1.1
> 
> 
> 
> 
> # create gw-worker1
> ovn-nbctl create Logical_Router name=gw-worker1 
> options:chassis=9a790be7-a876-48a9-b7c5-1c45c6946dd4

This should fix it:

ovn-nbctl lr-route-add gw-worker1 10.0.0.0/16 100.64.0.1

Regards,
Dumitru

> 
> 
> # create a new logical switch for connecting the 'gw-worker1' and 
> 'join-router' routers
> ovn-nbctl ls-add join
> 
> # connect 'gw-worker1' to the 'join' switch
> ovn-nbctl lrp-add gw-worker1 gw-worker1-join 40:44:00:00:00:07 100.64.0.2/16
> ovn-nbctl lsp-add join join-gw-worker1
> ovn-nbctl lsp-set-type join-gw-worker1 router
> ovn-nbctl lsp-set-addresses join-gw-worker1 router
> ovn-nbctl lsp-set-options join-gw-worker1 router-port=gw-worker1-join
> 
> 
> # connect 'join-router' to the 'join' switch
> ovn-nbctl lrp-add join-router join-router-ls-join 40:44:00:00:00:06 
> 100.64.0.1/16
> ovn-nbctl lsp-add join ls-join-router-join
> ovn-nbctl lsp-set-type ls-join-router-join router
> ovn-nbctl lsp-set-addresses ls-join-router-join router
> ovn-nbctl lsp-set-options ls-join-router-join router-port=join-router-ls-join
> 
> 
> 
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> 
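The route fix suggested in this thread can be applied and verified along these lines (a sketch; the exact lr-route-list output varies by OVN version):

```shell
# Add a static route on the gateway router covering the overlay subnets,
# with the join-router port (100.64.0.1) as next hop.
ovn-nbctl lr-route-add gw-worker1 10.0.0.0/16 100.64.0.1

# Verify it is present; expect a route like "10.0.0.0/16  100.64.0.1 dst-ip".
ovn-nbctl lr-route-list gw-worker1
```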

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] BGP EVPN support

2021-03-25 Thread Daniel Alvarez Sanchez
Thanks Krzysztof, all

Let me see if I understand the 'native' proposal. Please amend as necessary
:)

On Tue, Mar 16, 2021 at 9:28 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:

>
>
> On Tue, Mar 16, 2021, at 19:15, Mark Gray wrote:
> > On 16/03/2021 15:41, Krzysztof Klimonda wrote:
> > > Yes, that seems to be prerequisite (or one of prerequisites) for
> keeping current DPDK / offload capabilities, as far as I understand. By
> Proxy ARP/NDP I think you mean responding to ARP and NDP on behalf of the
> system where FRR is running?
> > >
> > > As for whether to go ovn-kubernetes way and try to implement it with
> existing primitives, or add BGP support directly into OVN, I feel like this
> should be a core feature of OVN itself and not something that could be
> built on top of it by a careful placement of logical switches, routers and
> ports. This would also help with management (you would configure new BGP
> connection by modifying northbound DB) and simplify troubleshooting in case
> something is not working as expected.
> > >
> >
> > There would be quite a lot of effort to implement BGP support directly
> > into OVN as per all the relevant BGP RPCs .. and the effort to maintain.
> > Another option might be to make FRR Openflow-aware and enabling it to
> > program Openflow flows directly into an OVN bridge much like it does
> > into the kernel today. FRR does provide some flexibility to extend like
> > that through the use of something like FPM
> > (http://docs.frrouting.org/projects/dev-guide/en/latest/fpm.html)
>
> Indeed, when I wrote "adding BGP support directly to OVN" I didn't really
> mean implementing BGP protocol directly in OVN, but rather implementing
> integration with FRR directly in OVN, and not by reusing existing
> resources. Making ovn-controller into fully fledged BGP peer seems.. like a
> nice expansion of the initial idea, assuming that the protocol could be
> offloaded to some library, but it's probably not a hard requirement for the
> initial implementation, as long as OVS can be programmed to deliver BGP
> traffic to FRR.
>
> When you write that FRR would program flows on OVS bridge, do you have
> something specific in mind? I thought the discussion so far was mostly one
> way BGP announcement with FRR "simply" announcing specific prefixes from
> the chassis nodes. Do you have something more in mind, like programming
> routes received from BGP router into OVN?
>

That's also what I had in mind, i.e. "announcing specific prefixes from
the chassis nodes". I'd leave the route-importing part for a later stage,
if that's really a requirement.

For the announcing part, let's say we try to remove the kernel as much as
we can based on the discussion on this thread, then we're left with:

- Route announcement:
  - Configure some loopback IP address in OVN per hypervisor which is going
to be the nexthop of all routes announced from that chassis
  - Configuring OVN to tell which prefixes to announce - CMS responsibility
and some knob added into OVN NB as Mark suggests
  - OVN to talk to FRR somehow (gRPC?) to advertise the loopback IP as
directly connected and the rest of IPs in the chassis via its loopback IP
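As a very rough illustration of the "OVN talks to FRR" step above, prefixes could be pushed into FRR over vtysh. The AS number and addresses below are made up, and a real integration would more likely use the gRPC-style interface mentioned above rather than shelling out:

```shell
# Hypothetical: announce the per-hypervisor loopback and a workload
# prefix from this chassis via FRR's vtysh.
vtysh \
  -c 'configure terminal' \
  -c 'router bgp 64512' \
  -c 'address-family ipv4 unicast' \
  -c 'network 198.51.100.1/32' \
  -c 'network 10.0.1.0/24' \
  -c 'exit-address-family'

# Note: FRR's "network" statement only advertises prefixes that exist in
# the local RIB, so matching static/connected routes are assumed.
```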

- Extra resources/flows:

Similar to [0]  we'd need:
  - 1 localnet LS per node that will receive the traffic directed to the
OVN loopback address
  - 1 gateway LR per node responsible for routing the traffic within the
node
  - 1 transit LS per node that connects the previous gateway to the
infrastructure router
  - 1 infrastructure LR to which all LS that require to expose routes will
attach to (consuming one IP address from each subnet exposed)
WARNING: scale!!

My question earlier in this thread was who's responsible for creating all
these resources. If we don't want to put the burden of this to the CMS, are
you proposing OVN to do it 'under the hood'? What about the IP addresses
that we'll be consuming from the tenants? Maybe if doing it under the hood,
that's not required and we can do it in OpenFlow some other way. Is this
what you mean?

IMO, it is very complicated but maybe it brings a lot of value to OVN. The
benefit of the PoC approach is that we can use BGP (almost) today without
any changes to OVN or Neutron but I am for sure open to discuss the
'native' way more in depth!

[0]
https://raw.githubusercontent.com/ovn-org/ovn-kubernetes/master/docs/design/current_ovn_topology.svg

- Physical interfaces:
The PoC was under the assumption that all the compute nodes will have a
default ECMP route in the form of: 0.0.0.0 via nic1 via nic2
If we want to match this, we probably need to add OpenFlow rules to the
provider bridge and add both NICs to it.
If the NICs are used as well for control plane, management, storage or
whatever, we are creating a dependency on OVS for all that. I believe that
it is fine to lose the data plane if OVS goes down but not everything else.
Somebody may have a suggestion here though.
If not, we still need the kernel to do some steering