Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Han Zhou
On Tue, Jun 19, 2018 at 2:53 PM, Daniel Alvarez Sanchez 
wrote:
>
>
>
> On Tue, Jun 19, 2018 at 10:37 PM, Daniel Alvarez Sanchez <
dalva...@redhat.com> wrote:
>>
>> Sorry, the problem seems to be that this ACL is not added in the Port
Groups case for some reason (I had been checking the wrong lflows log):
>
> s/ACL/Logical Flow
>>
>>
>> _uuid   : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
>> actions : "reg0[0] = 1; next;"
>> external_ids: {source="ovn-northd.c:2931",
stage-name=ls_in_pre_acl}
>> logical_datapath: 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
>> match   : ip
>> pipeline: ingress
>> priority: 100
>>
>>
>> Apparently, this code is not getting triggered for the Port Group case:
>>
https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2930
>>
>>
>>
> The problem is that the build_pre_acls() function [0] checks whether the
> Logical Switch has stateful ACLs, but since we're now applying ACLs on Port
> Groups, that check always returns false and the pre-ACL conntrack flows are
> never added.
>
> [0]
https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2852

Yes, thanks Daniel for finding the problem! I am checking why the test case
didn't catch this. I will work on the fix ASAP.

Thanks,
Han


Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Daniel Alvarez Sanchez
On Tue, Jun 19, 2018 at 10:37 PM, Daniel Alvarez Sanchez <
dalva...@redhat.com> wrote:

> Sorry, the problem seems to be that this ACL is not added in the Port
> Groups case for some reason (I had been checking the wrong lflows log):
>
s/ACL/Logical Flow

>
> _uuid   : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
> actions : "reg0[0] = 1; next;"
> external_ids: {source="ovn-northd.c:2931",
> stage-name=ls_in_pre_acl}
> logical_datapath: 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
> match   : ip
> pipeline: ingress
> priority: 100
>
>
> Apparently, this code is not getting triggered for the Port Group case:
> https://github.com/openvswitch/ovs/blob/master/ovn/northd/
> ovn-northd.c#L2930
>
>
>
The problem is that the build_pre_acls() function [0] checks whether the
Logical Switch has stateful ACLs, but since we're now applying ACLs on Port
Groups, that check always returns false and the pre-ACL conntrack flows are
never added.

[0]
https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2852
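
For anyone trying to reproduce this, a rough shell sketch (not from the
thread; the switch, port, and group names are made up, and it assumes the
pg-add / --type=port-group acl-add commands from the Port Group series):

$ ovn-nbctl ls-add sw0
$ ovn-nbctl lsp-add sw0 p1
$ ovn-nbctl pg-add pg1 p1
# A stateful ACL applied to the port group instead of to the switch:
$ ovn-nbctl --type=port-group acl-add pg1 to-lport 1002 'ip4 && tcp' allow-related
# With the bug described above, no ls_in_pre_acl flow setting reg0[0] = 1
# is generated for the datapath:
$ ovn-sbctl lflow-list sw0 | grep ls_in_pre_acl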


>
> On Tue, Jun 19, 2018 at 10:09 PM, Daniel Alvarez Sanchez <
> dalva...@redhat.com> wrote:
>
>> Hi folks,
>>
>> Sorry for not being clear enough. In the tcpdump we can see the SYN
>> packets being sent by port1 and then retransmitted, as it looks like the
>> response to that SYN never reaches its destination. This is confirmed by the DP
>> flows:
>>
>> $ sudo ovs-dpctl dump-flows
>>
>> recirc_id(0),in_port(3),eth(src=fa:16:3e:78:a2:cf,dst=fa:16:
>> 3e:bf:6f:51),eth_type(0x0800),ipv4(src=10.0.0.6,dst=168.0.0.
>> 0/252.0.0.0,proto=6,frag=no), packets:4, bytes:296, used:0.514s,
>> flags:S, actions:4
>>
>> recirc_id(0),in_port(4),eth(src=fa:16:3e:bf:6f:51,dst=fa:16:
>> 3e:78:a2:cf),eth_type(0x0800),ipv4(src=128.0.0.0/128.0.0.0,d
>> st=10.0.0.0/255.255.255.192,proto=6,frag=no),tcp(dst=32768/0x8000),
>> packets:7, bytes:518, used:0.514s, flags:S., actions:drop
>>
>>
>> $ sudo ovs-appctl ofproto/trace br-int in_port=20,tcp,dl_src=fa:16:3e
>> :78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80
>> | ovn-detrace
>>
>> Flow: tcp,in_port=20,vlan_tci=0x,dl_src=fa:16:3e:78:a2:cf,dl_d
>> st=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,
>> nw_tos=0,nw_ecn=0,nw_ttl=0,tp_sr
>> c=0,tp_dst=80,tcp_flags=0
>>
>> bridge("br-int")
>> 
>> 0. in_port=20, priority 100
>> set_field:0x8->reg13
>> set_field:0x5->reg11
>> set_field:0x1->reg12
>> set_field:0x1->metadata
>> set_field:0x4->reg14
>> resubmit(,8)
>> 8. reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf, priority 50, cookie
>> 0xe299b701
>> resubmit(,9)
>> 9. ip,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6,
>> priority 90, cookie 0x6581e351
>> resubmit(,10)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
>> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>> * Logical flow: table=1 (ls_in_port_sec_ip), priority=90,
>> match=(inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" && eth.src ==
>> fa:16:3e:78:a2:cf && i
>> p4.src == {10.0.0.6}), actions=(next;)
>> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
>> resubmit(,11)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
>> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>> * Logical flow: table=2 (ls_in_port_sec_nd), priority=0,
>> match=(1), actions=(next;)
>>
>> ...
>>
>> 47. metadata=0x1, priority 0, cookie 0xf35c5784
>> resubmit(,48)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
>> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>> * Logical flow: table=7 (ls_out_stateful), priority=0, match=(1),
>> actions=(next;)
>> 48. metadata=0x1, priority 0, cookie 0x9546c56e
>> resubmit(,49)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
>> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>> * Logical flow: table=8 (ls_out_port_sec_ip), priority=0,
>> match=(1), actions=(next;)
>> 49. reg15=0x1,metadata=0x1, priority 50, cookie 0x58af7841
>> resubmit(,64)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
>> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>> * Logical flow: table=9 (ls_out_port_sec_l2), priority=50,
>> match=(outport == "74db766c-2600-40f1-9ffa-255dc147d8a5),
>> actions=(output;)
>> 64. priority 0
>> resubmit(,65)
>> 65. reg15=0x1,metadata=0x1, priority 100
>> output:21
>>
>> Final flow: tcp,reg11=0x5,reg12=0x1,reg13=0x9,reg14=0x4,reg15=0x1,metada
>> ta=0x1,in_port=20,vlan_tci=0x,dl_src=fa:16:3e:78:a2:cf,
>> dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.
>> 169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>> Megaflow: recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x/0x1000,dl_src
>> =fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=
>> 168.0.0.0/6,nw_frag=no
>> Datapath actions: 4
>>
>>
>>
>> At this point I would've expected the connection to be in conntrack (but
>> 

Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Daniel Alvarez Sanchez
Sorry, the problem seems to be that this ACL is not added in the Port
Groups case for some reason (I had been checking the wrong lflows log):

_uuid   : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
actions : "reg0[0] = 1; next;"
external_ids: {source="ovn-northd.c:2931", stage-name=ls_in_pre_acl}
logical_datapath: 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
match   : ip
pipeline: ingress
priority: 100


Apparently, this code is not getting triggered for the Port Group case:
https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2930




On Tue, Jun 19, 2018 at 10:09 PM, Daniel Alvarez Sanchez <
dalva...@redhat.com> wrote:

> Hi folks,
>
> Sorry for not being clear enough. In the tcpdump we can see the SYN
> packets being sent by port1 and then retransmitted, as it looks like the
> response to that SYN never reaches its destination. This is confirmed by the DP
> flows:
>
> $ sudo ovs-dpctl dump-flows
>
> recirc_id(0),in_port(3),eth(src=fa:16:3e:78:a2:cf,dst=fa:
> 16:3e:bf:6f:51),eth_type(0x0800),ipv4(src=10.0.0.6,dst=
> 168.0.0.0/252.0.0.0,proto=6,frag=no), packets:4, bytes:296, used:0.514s,
> flags:S, actions:4
>
> recirc_id(0),in_port(4),eth(src=fa:16:3e:bf:6f:51,dst=fa:
> 16:3e:78:a2:cf),eth_type(0x0800),ipv4(src=128.0.0.0/
> 128.0.0.0,dst=10.0.0.0/255.255.255.192,proto=6,frag=no),tcp(dst=32768/0x8000),
> packets:7, bytes:518, used:0.514s, flags:S., actions:drop
>
>
> $ sudo ovs-appctl ofproto/trace br-int in_port=20,tcp,dl_src=fa:16:
> 3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80
> | ovn-detrace
>
> Flow: tcp,in_port=20,vlan_tci=0x,dl_src=fa:16:3e:78:a2:
> cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.
> 254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_sr
> c=0,tp_dst=80,tcp_flags=0
>
> bridge("br-int")
> 
> 0. in_port=20, priority 100
> set_field:0x8->reg13
> set_field:0x5->reg11
> set_field:0x1->reg12
> set_field:0x1->metadata
> set_field:0x4->reg14
> resubmit(,8)
> 8. reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf, priority 50, cookie
> 0xe299b701
> resubmit(,9)
> 9. ip,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6,
> priority 90, cookie 0x6581e351
> resubmit(,10)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
> * Logical flow: table=1 (ls_in_port_sec_ip), priority=90,
> match=(inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" && eth.src ==
> fa:16:3e:78:a2:cf && i
> p4.src == {10.0.0.6}), actions=(next;)
> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
> resubmit(,11)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
> * Logical flow: table=2 (ls_in_port_sec_nd), priority=0,
> match=(1), actions=(next;)
>
> ...
>
> 47. metadata=0x1, priority 0, cookie 0xf35c5784
> resubmit(,48)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
> * Logical flow: table=7 (ls_out_stateful), priority=0, match=(1),
> actions=(next;)
> 48. metadata=0x1, priority 0, cookie 0x9546c56e
> resubmit(,49)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
> * Logical flow: table=8 (ls_out_port_sec_ip), priority=0,
> match=(1), actions=(next;)
> 49. reg15=0x1,metadata=0x1, priority 50, cookie 0x58af7841
> resubmit(,64)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
> * Logical flow: table=9 (ls_out_port_sec_l2), priority=50,
> match=(outport == "74db766c-2600-40f1-9ffa-255dc147d8a5),
> actions=(output;)
> 64. priority 0
> resubmit(,65)
> 65. reg15=0x1,metadata=0x1, priority 100
> output:21
>
> Final flow: tcp,reg11=0x5,reg12=0x1,reg13=0x9,reg14=0x4,reg15=0x1,
> metadata=0x1,in_port=20,vlan_tci=0x,dl_src=fa:16:3e:78:
> a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.
> 254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
> Megaflow: recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x/0x1000,dl_
> src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=
> 168.0.0.0/6,nw_frag=no
> Datapath actions: 4
>
>
>
> At this point I would've expected the connection to be in conntrack (but,
> if I'm not mistaken, this is not supported in ovn-trace :?) so the return
> packet would be dropped:
>
> $ sudo ovs-appctl ofproto/trace br-int in_port=21,tcp,dl_dst=fa:16:
> 3e:78:a2:cf,dl_src=fa:16:3e:bf:6f:51,nw_dst=10.0.0.6,nw_src=169.254.169.254,tcp_dst=80
> | ovn-detrace
> Flow: tcp,in_port=21,vlan_tci=0x,dl_src=fa:16:3e:bf:6f:
> 51,dl_dst=fa:16:3e:78:a2:cf,nw_src=169.254.169.254,nw_dst=
> 10.0.0.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>
> bridge("br-int")
> 
> 0. in_port=21, priority 100
> 

Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Daniel Alvarez Sanchez
Hi folks,

Sorry for not being clear enough. In the tcpdump we can see the SYN packets
being sent by port1 and then retransmitted, as it looks like the response to
that SYN never reaches its destination. This is confirmed by the DP flows:

$ sudo ovs-dpctl dump-flows

recirc_id(0),in_port(3),eth(src=fa:16:3e:78:a2:cf,dst=fa:16:3e:bf:6f:51),eth_type(0x0800),ipv4(src=10.0.0.6,dst=
168.0.0.0/252.0.0.0,proto=6,frag=no), packets:4, bytes:296, used:0.514s,
flags:S, actions:4

recirc_id(0),in_port(4),eth(src=fa:16:3e:bf:6f:51,dst=fa:16:3e:78:a2:cf),eth_type(0x0800),ipv4(src=
128.0.0.0/128.0.0.0,dst=10.0.0.0/255.255.255.192,proto=6,frag=no),tcp(dst=32768/0x8000),
packets:7, bytes:518, used:0.514s, flags:S., actions:drop
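
(One additional check, not shown above, would be to look at the datapath
conntrack table and confirm that the original direction was never committed;
the address in the grep is just the VM's IP from this example:)

$ sudo ovs-appctl dpctl/dump-conntrack | grep 10.0.0.6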


$ sudo ovs-appctl ofproto/trace br-int
in_port=20,tcp,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80
| ovn-detrace

Flow:
tcp,in_port=20,vlan_tci=0x,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_sr
c=0,tp_dst=80,tcp_flags=0

bridge("br-int")

0. in_port=20, priority 100
set_field:0x8->reg13
set_field:0x5->reg11
set_field:0x1->reg12
set_field:0x1->metadata
set_field:0x4->reg14
resubmit(,8)
8. reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf, priority 50, cookie
0xe299b701
resubmit(,9)
9. ip,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6,
priority 90, cookie 0x6581e351
resubmit(,10)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=1 (ls_in_port_sec_ip), priority=90,
match=(inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" && eth.src ==
fa:16:3e:78:a2:cf && i
p4.src == {10.0.0.6}), actions=(next;)
10. metadata=0x1, priority 0, cookie 0x1c3ddeef
resubmit(,11)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=2 (ls_in_port_sec_nd), priority=0, match=(1),
actions=(next;)

...

47. metadata=0x1, priority 0, cookie 0xf35c5784
resubmit(,48)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
* Logical flow: table=7 (ls_out_stateful), priority=0, match=(1),
actions=(next;)
48. metadata=0x1, priority 0, cookie 0x9546c56e
resubmit(,49)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
* Logical flow: table=8 (ls_out_port_sec_ip), priority=0,
match=(1), actions=(next;)
49. reg15=0x1,metadata=0x1, priority 50, cookie 0x58af7841
resubmit(,64)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
* Logical flow: table=9 (ls_out_port_sec_l2), priority=50,
match=(outport == "74db766c-2600-40f1-9ffa-255dc147d8a5), actions=(output;)
64. priority 0
resubmit(,65)
65. reg15=0x1,metadata=0x1, priority 100
output:21

Final flow:
tcp,reg11=0x5,reg12=0x1,reg13=0x9,reg14=0x4,reg15=0x1,metadata=0x1,in_port=20,vlan_tci=0x,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
Megaflow:
recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x/0x1000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=
168.0.0.0/6,nw_frag=no
Datapath actions: 4



At this point I would've expected the connection to be in conntrack (but, if
I'm not mistaken, this is not supported in ovn-trace :?) so the return
packet would be dropped:

$ sudo ovs-appctl ofproto/trace br-int
in_port=21,tcp,dl_dst=fa:16:3e:78:a2:cf,dl_src=fa:16:3e:bf:6f:51,nw_dst=10.0.0.6,nw_src=169.254.169.254,tcp_dst=80
| ovn-detrace
Flow:
tcp,in_port=21,vlan_tci=0x,dl_src=fa:16:3e:bf:6f:51,dl_dst=fa:16:3e:78:a2:cf,nw_src=169.254.169.254,nw_dst=10.0.0.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0

bridge("br-int")

0. in_port=21, priority 100
set_field:0x9->reg13
set_field:0x5->reg11
set_field:0x1->reg12
set_field:0x1->metadata
set_field:0x1->reg14
resubmit(,8)
8. reg14=0x1,metadata=0x1, priority 50, cookie 0x4017bca3
resubmit(,9)
9. metadata=0x1, priority 0, cookie 0x5f2a07c6
resubmit(,10)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=1 (ls_in_port_sec_ip), priority=0, match=(1),
actions=(next;)
10. metadata=0x1, priority 0, cookie 0x1c3ddeef
resubmit(,11)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=2 (ls_in_port_sec_nd), priority=0, match=(1),
actions=(next;)
...
44. ip,reg15=0x4,metadata=0x1, priority 2001, cookie 0x3a87f6e9
drop
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"

Re: [ovs-discuss] OVN IPAM

2018-06-19 Thread Paul Greenberg
Thank you, Guru! Will do.

Best Regards,
Paul Greenberg

From: Guru Shetty 
Sent: Tuesday, June 19, 2018 2:51:18 PM
To: Paul Greenberg
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] OVN IPAM

In tests/ovn.at, search for "ipam".

On 17 June 2018 at 14:27, Paul Greenberg <green...@outlook.com> wrote:
All,

I want to get an IP address from OVN without using DHCP. I did not find a 
command line option to do so.

How could one do it through OVSDB queries? Did anyone attempt it?

Best Regards,
Paul Greenberg



Re: [ovs-discuss] OVN IPAM

2018-06-19 Thread Guru Shetty
In tests/ovn.at, search for "ipam".
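
For the archive, a rough sketch (not from the thread; the switch and port
names and the subnet are made up) of the ovn-nbctl sequence those "ipam"
tests exercise:

$ ovn-nbctl ls-add sw0
$ ovn-nbctl set Logical_Switch sw0 other_config:subnet=192.168.1.0/24
$ ovn-nbctl lsp-add sw0 p1
$ ovn-nbctl lsp-set-addresses p1 dynamic
# OVN's IPAM assigns a MAC and an IP; read them back from the NB database:
$ ovn-nbctl get Logical_Switch_Port p1 dynamic_addresses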

On 17 June 2018 at 14:27, Paul Greenberg  wrote:

> All,
>
> I want to get an IP address from OVN without using DHCP. I did not find a
> command line option to do so.
>
> How could one do it through OVSDB queries? Did anyone attempt it?
>
> Best Regards,
> Paul Greenberg
>


Re: [ovs-discuss] Processing FlowMods

2018-06-19 Thread Ashish Varma
In the function "miniflow_extract", a miniflow structure gets populated
based on the fields received in the packet.
("struct miniflow" is a sparse representation of the "struct flow"
structure. See ovs/lib/flow.h)

The code inside "miniflow_extract" puts the SCTP ports (if present) into the
miniflow structure:

} else if (OVS_LIKELY(nw_proto == IPPROTO_SCTP)) {
if (OVS_LIKELY(size >= SCTP_HEADER_LEN)) {
const struct sctp_header *sctp = data;

miniflow_push_be16(mf, tp_src, sctp->sctp_src);
miniflow_push_be16(mf, tp_dst, sctp->sctp_dst);
miniflow_push_be16(mf, ct_tp_src, ct_tp_src);
miniflow_push_be16(mf, ct_tp_dst, ct_tp_dst);
}
}

"miniflow_push_be16" stores the value of the sctp ports in the miniflow
structure.
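
As a side note, here is a small sketch (not from the thread) of how code
elsewhere in the tree can get those ports back out of the sparse
representation; it assumes it is compiled inside the OVS source tree, where
lib/flow.h lives:

#include <arpa/inet.h>    /* ntohs() */
#include <netinet/in.h>   /* IPPROTO_SCTP */
#include <stdio.h>

#include "flow.h"         /* struct flow, struct miniflow, miniflow_expand() */

/* Expand the sparse miniflow into a full struct flow and read back the L4
 * ports that miniflow_extract() pushed from sctp->sctp_src/sctp_dst above. */
static void
print_sctp_ports(const struct miniflow *mf)
{
    struct flow flow;

    miniflow_expand(mf, &flow);
    if (flow.nw_proto == IPPROTO_SCTP) {
        printf("sctp src=%u dst=%u\n",
               ntohs(flow.tp_src), ntohs(flow.tp_dst));
    }
}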






On Sat, May 26, 2018 at 10:57 AM, Pedro Henrique 
wrote:

> Worked fine! Thanks.
> Another question. In the miniflow_extract function (lib/flow.c), OVS uses
> different packets.h structs (lib/packets.h) depending on the traffic. For
> instance, if the traffic is SCTP, the sctp_header struct will be used
> (around line 917 of flow.c). Now, here is my problem: I'm not finding where
> the sctp_header struct fields (such as sctp_src and sctp_dst) are being
> written. I thought these fields were being filled in by the
> packet_set_sctp_port (packets.c) or flow_compose_l4 (flow.c) functions.
> However, after debugging the code, the packet_set_sctp_port function is not
> being called at all and the sctp_header struct in flow_compose_l4 is not
> being accessed. Where is the function responsible for filling in the
> sctp_header fields?
>
> PS: I turned off the kernel datapath and am using only the userspace path.
> Moreover, the SCTP traffic is being sent between the end hosts perfectly,
> no problem at all.
>
> Thanks in advance,
>
>
>
>
>
> 2018-05-21 14:07 GMT-04:00 Ashish Varma :
>
>> Try debugging from functions:
>>
>> ofputil_pull_ofp11_match -->
>>     oxm_pull_match
>>
>> "ofputil_match_from_ofp11_match" is called for "OFPMT_STANDARD" case
>> which is deprecated in the openflow 1.4 standard.
>>
>> Thanks,
>> Ashish
>>
>> On Mon, May 21, 2018 at 10:32 AM, Pedro Henrique <
>> phamorimreze...@gmail.com> wrote:
>>
>>> Dear members,
>>>
>>> I'm looking for the OVS function(s) responsible for processing the
>>> match fields from the OpenFlow FlowMod message, which is sent by the
>>> SDN controller. I thought it was this function from lib/ofp-util.c:
>>> "ofputil_match_to_ofp11_match(const struct match *match, struct
>>> ofp11_match *ofmatch)". However, this function is not being called,
>>> according to some debug code I inserted into it.
>>>
>>> P.S.: I'm using OVS 2.9.0 and OpenFlow 1.4.
>>>
>>> Thank you,
>>>
>>> --
>>> Pedro Henrique Amorim Rezende
>>>
>>>
>>>
>>
>
>
> --
> Pedro Henrique Amorim Rezende
>


[ovs-discuss] [OVN] MTU issues with OVN

2018-06-19 Thread axel
Hello everyone,

I have a project where I use Vagrant to spawn VMs connected to OVS and managed 
by OVN. Think of it as "A minimalist OpenStack-like tool that is based on 
Vagrant and OVN" (yes I know, it sounds horrible said like that!)
The workflow is: users can request VMs through the tool, the tool creates the
Vagrantfile, launches the VM using Vagrant on a hypervisor with free resources
(based on qemu+kvm or Hyper-V), and attaches the VM to OVS, where OVN grants it
network connectivity to other VMs launched by the same user (+ internet
through a gateway). VMs from the same user can be on different hypervisors.
VM images are public pre-packaged images chosen by the user from Vagrant Cloud
(https://app.vagrantup.com/boxes/search).

I'm encountering an issue where TCP connections between VMs on the same LAN but
on different hypervisors sometimes hang.
The issue is with the MTU of the network interface of the VM: it defaults to 
1500, but packets have to be sent over a Geneve tunnel so they should not 
exceed 1442 bytes (with DF=1). So packets larger than this are dropped.
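
(For reference, and assuming the usual OVN Geneve-over-IPv4 framing, the 1442
figure is 1500 minus the 58 bytes of encapsulation overhead: 20 outer IPv4 +
8 UDP + 8 Geneve header + 8 OVN Geneve option + 14 inner Ethernet = 58, and
1500 - 58 = 1442.)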

From
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
the option OpenStack chose is to advertise the lower MTU to the VM using DHCP
("mtu"="1442" in `ovn-nbctl create DHCP_Options` options).

The issue is that not all DHCP clients apply this option: specifically, from my
tests, dhcpcd and dhclient apply it, but systemd-networkd doesn't (see
https://github.com/systemd/systemd/pull/6950/files).
Making it worse, a lot of Vagrant boxes (all of the systemd-based ones I have
encountered so far) use systemd-networkd as their DHCP client.

So I am in a situation where I cannot send the MTU over DHCP because it won't
be accepted by systemd-networkd on the Vagrant boxes; I cannot connect to the
Vagrant boxes to provision them, as they are attached to OVN and the hypervisor
no longer has access to them (there is no Vagrant management interface on the
VM in my setup, so I cannot provision them post-creation); and of course I
cannot modify all the images on Vagrant Cloud to add "UseMTU=true" to the
systemd-networkd config.
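
(For reference, the guest-side change that is out of reach here would be
something like the following in the box's systemd-networkd .network file;
the interface name is just an example:)

[Match]
Name=eth0

[Network]
DHCP=ipv4

[DHCP]
UseMTU=true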

PMTUD or MSS Clamping cannot be done as the packet is not going through a 
router (it's going between VMs in the same LAN).

I talked about that with lucasagomes on IRC (thanks a lot for his time!) and he 
recommended I ask here since there was no easy answer to my problem.

An option is to use jumbo frames on the underlay network so its MTU would be
>1558 bytes and the overlay network could keep its MTU at 1500. But then I
can't have hypervisors across the Internet (or any other MTU-limiting network).

Another option would be to fragment the Geneve-encapsulated packets (after
encapsulation) before they go through the tunnel, and reassemble them on the
other side of the tunnel. It would hurt performance a lot
(fragmentation/reassembly of packets, sending a big packet and a very small
one each time), but it would solve the issue at least until the MTU is lowered
on the VM by its user.
I also saw some hints that OVN could do this ("Although GENEVE and OVN supports 
IP fragmentation [...]" on 
https://ovirt.org/develop/release-management/features/network/managed_mtu_for_vm_networks/),
 but I did not find a way to do it. Is there a way?

In any case, the issue with lowering the MTU is that communication between VMs
on the same hypervisor could be much more efficient if they could use a high
MTU, and only use a lower MTU when they communicate across hypervisors.
With that in mind, I wonder if OVS could send "fake" ICMP Fragmentation Needed
packets to the sender VM when the packet has to go through a tunnel, has DF=1,
and its size is over the (MTU - tunnel header size). It probably would not work
because the packet is not going through a router, but I have not tried. What
are your thoughts on this?

Have I exhausted all the options to work around the MTU issue without having to
modify the MTU in the VM itself? Or are there more things that can be done that
I have not thought of?

Thanks,
Axel


[ovs-discuss] openvswitch-ovn in fedora 28

2018-06-19 Thread Vasiliy Tolstov
Hi! I'm trying to run OVN on Fedora 28.
I have 3 hosts:
gateway=controller 172.16.1.254
compute1 172.16.1.1
compute2 172.16.1.2

What services do I need to enable on the compute nodes and on the controller?
Also, as far as I can tell, I need to make ovn-northd listen on 172.16.1.254
and have the other nodes connect to it on ports 6641 and 6642. How can I do
that on Fedora?
Also, if my controller node has a VM that I want to connect to the internal
OVN network, what additional services do I need to run on the controller
node, and is that possible?

Thanks, and sorry that I haven't found much info about OVN on Fedora.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
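
For reference, a rough sketch of the kind of setup the question describes,
using the standard OVN tools; the Fedora service names and the choice of
Geneve encapsulation are assumptions here, not a verified Fedora 28 recipe:

# On the controller (172.16.1.254): start the central services and make the
# NB/SB databases listen on TCP.
$ sudo systemctl enable --now ovn-northd
$ sudo ovn-nbctl set-connection ptcp:6641:172.16.1.254
$ sudo ovn-sbctl set-connection ptcp:6642:172.16.1.254

# On each compute node (and on the controller as well, if it also hosts VMs
# that should join the OVN network):
$ sudo systemctl enable --now openvswitch ovn-controller
$ sudo ovs-vsctl set open_vswitch . \
      external_ids:ovn-remote=tcp:172.16.1.254:6642 \
      external_ids:ovn-encap-type=geneve \
      external_ids:ovn-encap-ip=172.16.1.1   # this node's own tunnel IP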