Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Han Zhou
On Tue, Jun 19, 2018 at 2:53 PM, Daniel Alvarez Sanchez wrote:
>
>
>
On Tue, Jun 19, 2018 at 10:37 PM, Daniel Alvarez Sanchez <dalva...@redhat.com> wrote:
>>
>> Sorry, the problem seems to be that this ACL is not added in the Port
>> Groups case for some reason (I was looking at the wrong lflows log):
>
> s/ACL/Logical Flow
>>
>>
>> _uuid   : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
>> actions : "reg0[0] = 1; next;"
>> external_ids: {source="ovn-northd.c:2931", stage-name=ls_in_pre_acl}
>> logical_datapath: 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
>> match   : ip
>> pipeline: ingress
>> priority: 100
>>
>>
>> Apparently, this code is not getting triggered for the Port Group case:
>>
>> https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2930
>>
>>
>>
> The problem is that the build_pre_acls() [0] function checks whether the
> Logical Switch has stateful ACLs, but since we're now applying the ACLs on
> Port Groups, that check is always false and the pre-ACL flows for conntrack
> never get added.
>
> [0] https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2852

Yes, thanks Daniel for finding the problem! I am checking why the test case
didn't catch it. I will work on the fix ASAP.

Thanks,
Han


Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Daniel Alvarez Sanchez
On Tue, Jun 19, 2018 at 10:37 PM, Daniel Alvarez Sanchez <dalva...@redhat.com> wrote:

> Sorry, the problem seems to be that this ACL is not added in the Port
> Groups case for some reason (I was looking at the wrong lflows log):
>
s/ACL/Logical Flow

>
> _uuid   : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
> actions : "reg0[0] = 1; next;"
> external_ids: {source="ovn-northd.c:2931", stage-name=ls_in_pre_acl}
> logical_datapath: 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
> match   : ip
> pipeline: ingress
> priority: 100
>
>
> Apparently, this code is not getting triggered for the Port Group case:
> https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2930
>
>
>
The problem is that the build_pre_acls() [0] function checks whether the
Logical Switch has stateful ACLs, but since we're now applying the ACLs on
Port Groups, that check is always false and the pre-ACL flows for conntrack
never get added.

[0] https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2852
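
For what it's worth, the mismatch is visible directly in the NB database:
with port groups the ACLs hang off the Port_Group table, while the
Logical_Switch's acls column (the only thing build_pre_acls() inspects)
stays empty. A quick check, assuming a local NB DB:

$ ovn-nbctl --columns=name,acls list Logical_Switch   # acls is empty here
$ ovn-nbctl --columns=name,acls list Port_Group       # the ACLs live here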


>
> On Tue, Jun 19, 2018 at 10:09 PM, Daniel Alvarez Sanchez <dalva...@redhat.com> wrote:
>
>> Hi folks,
>>
>> Sorry for not being clear enough. In the tcpdump we can see the SYN
>> packets being sent by port1 and then retransmitted, as the response to
>> that SYN apparently never reaches its destination. This is confirmed by
>> the DP flows:
>>
>> $ sudo ovs-dpctl dump-flows
>>
>> recirc_id(0),in_port(3),eth(src=fa:16:3e:78:a2:cf,dst=fa:16:3e:bf:6f:51),eth_type(0x0800),ipv4(src=10.0.0.6,dst=168.0.0.0/252.0.0.0,proto=6,frag=no), packets:4, bytes:296, used:0.514s, flags:S, actions:4
>>
>> recirc_id(0),in_port(4),eth(src=fa:16:3e:bf:6f:51,dst=fa:16:3e:78:a2:cf),eth_type(0x0800),ipv4(src=128.0.0.0/128.0.0.0,dst=10.0.0.0/255.255.255.192,proto=6,frag=no),tcp(dst=32768/0x8000), packets:7, bytes:518, used:0.514s, flags:S., actions:drop
>>
>>
>> $ sudo ovs-appctl ofproto/trace br-int in_port=20,tcp,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80 | ovn-detrace
>>
>> Flow: tcp,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>>
>> bridge("br-int")
>> ----------------
>> 0. in_port=20, priority 100
>> set_field:0x8->reg13
>> set_field:0x5->reg11
>> set_field:0x1->reg12
>> set_field:0x1->metadata
>> set_field:0x4->reg14
>> resubmit(,8)
>> 8. reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf, priority 50, cookie 0xe299b701
>> resubmit(,9)
>> 9. ip,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6, priority 90, cookie 0x6581e351
>> resubmit(,10)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>> * Logical flow: table=1 (ls_in_port_sec_ip), priority=90, match=(inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" && eth.src == fa:16:3e:78:a2:cf && ip4.src == {10.0.0.6}), actions=(next;)
>> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
>> resubmit(,11)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>> * Logical flow: table=2 (ls_in_port_sec_nd), priority=0, match=(1), actions=(next;)
>>
>> ...
>>
>> 47. metadata=0x1, priority 0, cookie 0xf35c5784
>> resubmit(,48)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>> * Logical flow: table=7 (ls_out_stateful), priority=0, match=(1), actions=(next;)
>> 48. metadata=0x1, priority 0, cookie 0x9546c56e
>> resubmit(,49)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>> * Logical flow: table=8 (ls_out_port_sec_ip), priority=0, match=(1), actions=(next;)
>> 49. reg15=0x1,metadata=0x1, priority 50, cookie 0x58af7841
>> resubmit(,64)
>> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>> * Logical flow: table=9 (ls_out_port_sec_l2), priority=50, match=(outport == "74db766c-2600-40f1-9ffa-255dc147d8a5"), actions=(output;)
>> 64. priority 0
>> resubmit(,65)
>> 65. reg15=0x1,metadata=0x1, priority 100
>> output:21
>>
>> Final flow: tcp,reg11=0x5,reg12=0x1,reg13=0x9,reg14=0x4,reg15=0x1,metadata=0x1,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>> Megaflow: recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x0000/0x1000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=168.0.0.0/6,nw_frag=no
>> Datapath actions: 4
>>
>>
>>
>> At this point I would've expected the connection to be in conntrack (but
>> 

Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Daniel Alvarez Sanchez
Sorry, the problem seems to be that this ACL is not added in the Port
Groups case for some reason (I was looking at the wrong lflows log):

_uuid   : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
actions : "reg0[0] = 1; next;"
external_ids: {source="ovn-northd.c:2931", stage-name=ls_in_pre_acl}
logical_datapath: 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
match   : ip
pipeline: ingress
priority: 100


Apparently, this code is not getting triggered for the Port Group case:
https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2930
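
A quick way to confirm from the SB side that the pre-ACL stage is empty for
the affected datapath (a sketch, assuming a local SB DB; the stage-name key
is the one shown in the external_ids above):

$ ovn-sbctl find Logical_Flow \
    logical_datapath=0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf \
    'external_ids:stage-name=ls_in_pre_acl'
# Returns nothing in the broken case; with ACLs applied directly to the
# Logical Switch it returns the "reg0[0] = 1; next;" flow above.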




On Tue, Jun 19, 2018 at 10:09 PM, Daniel Alvarez Sanchez <dalva...@redhat.com> wrote:

> Hi folks,
>
> Sorry for not being clear enough. In the tcpdump we can see the SYN
> packets being sent by port1 and then retransmitted, as the response to
> that SYN apparently never reaches its destination. This is confirmed by
> the DP flows:
>
> $ sudo ovs-dpctl dump-flows
>
> recirc_id(0),in_port(3),eth(src=fa:16:3e:78:a2:cf,dst=fa:16:3e:bf:6f:51),eth_type(0x0800),ipv4(src=10.0.0.6,dst=168.0.0.0/252.0.0.0,proto=6,frag=no), packets:4, bytes:296, used:0.514s, flags:S, actions:4
>
> recirc_id(0),in_port(4),eth(src=fa:16:3e:bf:6f:51,dst=fa:16:3e:78:a2:cf),eth_type(0x0800),ipv4(src=128.0.0.0/128.0.0.0,dst=10.0.0.0/255.255.255.192,proto=6,frag=no),tcp(dst=32768/0x8000), packets:7, bytes:518, used:0.514s, flags:S., actions:drop
>
>
> $ sudo ovs-appctl ofproto/trace br-int in_port=20,tcp,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80 | ovn-detrace
>
> Flow: tcp,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>
> bridge("br-int")
> ----------------
> 0. in_port=20, priority 100
> set_field:0x8->reg13
> set_field:0x5->reg11
> set_field:0x1->reg12
> set_field:0x1->metadata
> set_field:0x4->reg14
> resubmit(,8)
> 8. reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf, priority 50, cookie 0xe299b701
> resubmit(,9)
> 9. ip,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6, priority 90, cookie 0x6581e351
> resubmit(,10)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
> * Logical flow: table=1 (ls_in_port_sec_ip), priority=90, match=(inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" && eth.src == fa:16:3e:78:a2:cf && ip4.src == {10.0.0.6}), actions=(next;)
> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
> resubmit(,11)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
> * Logical flow: table=2 (ls_in_port_sec_nd), priority=0, match=(1), actions=(next;)
>
> ...
>
> 47. metadata=0x1, priority 0, cookie 0xf35c5784
> resubmit(,48)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
> * Logical flow: table=7 (ls_out_stateful), priority=0, match=(1), actions=(next;)
> 48. metadata=0x1, priority 0, cookie 0x9546c56e
> resubmit(,49)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
> * Logical flow: table=8 (ls_out_port_sec_ip), priority=0, match=(1), actions=(next;)
> 49. reg15=0x1,metadata=0x1, priority 50, cookie 0x58af7841
> resubmit(,64)
> * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
> * Logical flow: table=9 (ls_out_port_sec_l2), priority=50, match=(outport == "74db766c-2600-40f1-9ffa-255dc147d8a5"), actions=(output;)
> 64. priority 0
> resubmit(,65)
> 65. reg15=0x1,metadata=0x1, priority 100
> output:21
>
> Final flow: tcp,reg11=0x5,reg12=0x1,reg13=0x9,reg14=0x4,reg15=0x1,metadata=0x1,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
> Megaflow: recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x0000/0x1000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=168.0.0.0/6,nw_frag=no
> Datapath actions: 4
>
>
>
> At this point I would've expected the connection to be in conntrack (but
> if I'm not mistaken this is not supported in ovn-trace :?) so the return
> packet would be dropped:
>
> $ sudo ovs-appctl ofproto/trace br-int in_port=21,tcp,dl_dst=fa:16:3e:78:a2:cf,dl_src=fa:16:3e:bf:6f:51,nw_dst=10.0.0.6,nw_src=169.254.169.254,tcp_dst=80 | ovn-detrace
> Flow: tcp,in_port=21,vlan_tci=0x0000,dl_src=fa:16:3e:bf:6f:51,dl_dst=fa:16:3e:78:a2:cf,nw_src=169.254.169.254,nw_dst=10.0.0.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>
> bridge("br-int")
> ----------------
> 0. in_port=21, priority 100
> 

Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-19 Thread Daniel Alvarez Sanchez
Hi folks,

Sorry for not being clear enough. In the tcpdump we can see the SYN packets
being sent by port1 and then retransmitted, as the response to that SYN
apparently never reaches its destination. This is confirmed by the DP flows:

$ sudo ovs-dpctl dump-flows

recirc_id(0),in_port(3),eth(src=fa:16:3e:78:a2:cf,dst=fa:16:3e:bf:6f:51),eth_type(0x0800),ipv4(src=10.0.0.6,dst=168.0.0.0/252.0.0.0,proto=6,frag=no), packets:4, bytes:296, used:0.514s, flags:S, actions:4

recirc_id(0),in_port(4),eth(src=fa:16:3e:bf:6f:51,dst=fa:16:3e:78:a2:cf),eth_type(0x0800),ipv4(src=128.0.0.0/128.0.0.0,dst=10.0.0.0/255.255.255.192,proto=6,frag=no),tcp(dst=32768/0x8000), packets:7, bytes:518, used:0.514s, flags:S., actions:drop
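
(As an extra data point, the datapath conntrack table can be dumped
directly; a sketch, using the VM's address from above. Without the pre-ACL
flows the SYN is never sent through conntrack, so one would expect either
no entry at all or one stuck in SYN_SENT:)

$ sudo ovs-appctl dpctl/dump-conntrack | grep 10.0.0.6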


$ sudo ovs-appctl ofproto/trace br-int in_port=20,tcp,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80 | ovn-detrace

Flow: tcp,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0

bridge("br-int")
----------------
0. in_port=20, priority 100
set_field:0x8->reg13
set_field:0x5->reg11
set_field:0x1->reg12
set_field:0x1->metadata
set_field:0x4->reg14
resubmit(,8)
8. reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf, priority 50, cookie 0xe299b701
resubmit(,9)
9. ip,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6, priority 90, cookie 0x6581e351
resubmit(,10)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=1 (ls_in_port_sec_ip), priority=90, match=(inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" && eth.src == fa:16:3e:78:a2:cf && ip4.src == {10.0.0.6}), actions=(next;)
10. metadata=0x1, priority 0, cookie 0x1c3ddeef
resubmit(,11)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=2 (ls_in_port_sec_nd), priority=0, match=(1), actions=(next;)

...

47. metadata=0x1, priority 0, cookie 0xf35c5784
resubmit(,48)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
* Logical flow: table=7 (ls_out_stateful), priority=0, match=(1), actions=(next;)
48. metadata=0x1, priority 0, cookie 0x9546c56e
resubmit(,49)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
* Logical flow: table=8 (ls_out_port_sec_ip), priority=0, match=(1), actions=(next;)
49. reg15=0x1,metadata=0x1, priority 50, cookie 0x58af7841
resubmit(,64)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64" (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
* Logical flow: table=9 (ls_out_port_sec_l2), priority=50, match=(outport == "74db766c-2600-40f1-9ffa-255dc147d8a5"), actions=(output;)
64. priority 0
resubmit(,65)
65. reg15=0x1,metadata=0x1, priority 100
output:21

Final flow: tcp,reg11=0x5,reg12=0x1,reg13=0x9,reg14=0x4,reg15=0x1,metadata=0x1,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
Megaflow: recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x0000/0x1000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=168.0.0.0/6,nw_frag=no
Datapath actions: 4



At this point I would've expected the connection to be in conntrack (but if
I'm not mistaken this is not supported in ovn-trace :?) so the return
packet would be dropped:

$ sudo ovs-appctl ofproto/trace br-int in_port=21,tcp,dl_dst=fa:16:3e:78:a2:cf,dl_src=fa:16:3e:bf:6f:51,nw_dst=10.0.0.6,nw_src=169.254.169.254,tcp_dst=80 | ovn-detrace
Flow: tcp,in_port=21,vlan_tci=0x0000,dl_src=fa:16:3e:bf:6f:51,dl_dst=fa:16:3e:78:a2:cf,nw_src=169.254.169.254,nw_dst=10.0.0.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0

bridge("br-int")
----------------
0. in_port=21, priority 100
set_field:0x9->reg13
set_field:0x5->reg11
set_field:0x1->reg12
set_field:0x1->metadata
set_field:0x1->reg14
resubmit(,8)
8. reg14=0x1,metadata=0x1, priority 50, cookie 0x4017bca3
resubmit(,9)
9. metadata=0x1, priority 0, cookie 0x5f2a07c6
resubmit(,10)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=1 (ls_in_port_sec_ip), priority=0, match=(1),
actions=(next;)
10. metadata=0x1, priority 0, cookie 0x1c3ddeef
resubmit(,11)
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
(0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
* Logical flow: table=2 (ls_in_port_sec_nd), priority=0, match=(1),
actions=(next;)
...
44. ip,reg15=0x4,metadata=0x1, priority 2001, cookie 0x3a87f6e9
drop
* Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
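
(A side note on reading the trace: the cookie that ofproto/trace prints is
the first 32 bits of the logical flow's UUID, so the drop in table 44 can
also be mapped back to its OpenFlow rules directly, e.g.:)

$ sudo ovs-ofctl dump-flows br-int 'table=44, cookie=0x3a87f6e9/-1'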

Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-18 Thread Han Zhou
On Mon, Jun 18, 2018 at 1:43 PM, Daniel Alvarez Sanchez wrote:
>
> Hi all,
>
> I'm writing the code to implement the port groups in networking-ovn (the
> OpenStack integration project with OVN). I found out that when I boot a VM,
> it looks like the egress traffic (from the VM) is not working properly. The
> VM port belongs to 3 Port Groups:
>
> 1. Default drop port group with the following ACLs:
>
> _uuid   : 0b092bb2-e97b-463b-a678-8a28085e3d68
> action  : drop
> direction   : from-lport
> external_ids: {}
> log : false
> match   : "inport == @neutron_pg_drop && ip"
> name: []
> priority: 1001
> severity: []
>
> _uuid   : 849ee2e0-f86e-4715-a949-cb5d93437847
> action  : drop
> direction   : to-lport
> external_ids: {}
> log : false
> match   : "outport == @neutron_pg_drop && ip"
> name: []
> priority: 1001
> severity: []
>
>
> 2. Subnet port group to allow DHCP traffic on that subnet:
>
> _uuid   : 8360a415-b7e1-412b-95ff-15cc95059ef0
> action  : allow
> direction   : from-lport
> external_ids: {}
> log : false
> match   : "inport == @pg_b1a572c6_2331_4cfb_a892_3d9d7b0af70c && ip4 && ip4.dst == {255.255.255.255, 10.0.0.0/26} && udp && udp.src == 68 && udp.dst == 67"
> name: []
> priority: 1002
> severity: []
>
>
> 3. Security group port group with the following rules:
>
> 3.1 Allow ICMP traffic:
>
> _uuid   : d12a749f-0f75-4634-aa20-6116e1d5d26d
> action  : allow-related
> direction   : to-lport
> external_ids: {"neutron:security_group_rule_id"="9675d6df-56a1-4640-9a0f-1f88e49ed2b5"}
> log : false
> match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4 && ip4.src == 0.0.0.0/0 && icmp4"
> name: []
> priority: 1002
> severity: []
>
> 3.2 Allow SSH traffic:
>
> _uuid   : 05100729-816f-4a09-b15c-4759128019d4
> action  : allow-related
> direction   : to-lport
> external_ids: {"neutron:security_group_rule_id"="2a48979f-8209-4fb7-b24b-fff8d82a2ae9"}
> log : false
> match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 22"
> name: []
> priority: 1002
> severity: []
>
>
> 3.3 Allow IPv4/IPv6 traffic from this same port group
>
>
> _uuid   : b56ce66e-da6b-48be-a66e-77c8cfd6ab92
> action  : allow-related
> direction   : to-lport
> external_ids: {"neutron:security_group_rule_id"="5b0a47ee-8114-4b13-8d5b-b16d31586b3b"}
> log : false
> match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip6 && ip6.src == $pg_d237185f_733f_4a09_8832_bcee773722ef_ip6"
> name: []
> priority: 1002
> severity: []
>
>
> _uuid   : 7b68f430-41b5-414d-a2ed-6c548be53dce
> action  : allow-related
> direction   : to-lport
> external_ids: {"neutron:security_group_rule_id"="299bd9ca-89fb-4767-8ae9-a738e98603fb"}
> log : false
> match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4 && ip4.src == $pg_d237185f_733f_4a09_8832_bcee773722ef_ip4"
> name: []
> priority: 1002
> severity: []
>
>
> 3.4 Allow all egress (VM point of view) IPv4 traffic
>
> _uuid   : c5fbf0b7-6461-4f27-802e-b0d743be59e5
> action  : allow-related
> direction   : from-lport
> external_ids: {"neutron:security_group_rule_id"="a4ffe40a-f773-41d6-bc04-40500d158f51"}
> log : false
> match   : "inport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4"
> name: []
> priority: 1002
> severity: []
>
>
>
> So, I boot a VM using this port and I can verify that ICMP and SSH
> traffic work fine while the egress traffic doesn't. From the VM I curl
> to an IP living in a network namespace and this is what I see with
> tcpdump there:
>
> On the VM:
> $ ip r get 169.254.254.169
> 169.254.254.169 via 10.0.0.1 dev eth0  src 10.0.0.6
> $ curl 169.254.169.254
>
> On the hypervisor (haproxy listening on 169.254.169.254:80):
>
> $ sudo ip net e ovnmeta-0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf tcpdump -i any port 80 -vvn
> tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
> 21:59:47.106883 IP (tos 0x0, ttl 64, id 61543, offset 0, flags [DF], proto TCP (6), length 60)
> 10.0.0.6.34553 > 169.254.169.254.http: Flags [S], cksum 0x851c (correct), seq 2571046510, win 14020, options [mss 1402,sackOK,TS val 22740490 ecr 

Re: [ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-18 Thread Ben Pfaff
On Mon, Jun 18, 2018 at 10:43:22PM +0200, Daniel Alvarez Sanchez wrote:
> I'm writing the code to implement the port groups in networking-ovn (the
> OpenStack integration project with OVN). I found out that when I boot a VM,
> it looks like the egress traffic (from the VM) is not working properly. The
> VM port belongs to 3 Port Groups:

There's a lot of information here but I don't see any output from
ovn-trace.  Have you tried that?  Usually it's the first thing I reach
for.
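
For example, something along these lines, reusing the datapath and port
names from the traces earlier in the thread (the match is illustrative):

$ ovn-trace --detailed neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64 \
    'inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" &&
     eth.src == fa:16:3e:78:a2:cf && eth.dst == fa:16:3e:bf:6f:51 &&
     ip4.src == 10.0.0.6 && ip4.dst == 169.254.169.254 &&
     tcp && tcp.dst == 80'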


[ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

2018-06-18 Thread Daniel Alvarez Sanchez
Hi all,

I'm writing the code to implement the port groups in networking-ovn (the
OpenStack integration project with OVN). I found out that when I boot a VM,
it looks like the egress traffic (from the VM) is not working properly. The
VM port belongs to 3 Port Groups:

1. Default drop port group with the following ACLs:

_uuid   : 0b092bb2-e97b-463b-a678-8a28085e3d68
action  : drop
direction   : from-lport
external_ids: {}
log : false
match   : "inport == @neutron_pg_drop && ip"
name: []
priority: 1001
severity: []

_uuid   : 849ee2e0-f86e-4715-a949-cb5d93437847
action  : drop
direction   : to-lport
external_ids: {}
log : false
match   : "outport == @neutron_pg_drop && ip"
name: []
priority: 1001
severity: []


2. Subnet port group to allow DHCP traffic on that subnet:

_uuid   : 8360a415-b7e1-412b-95ff-15cc95059ef0
action  : allow
direction   : from-lport
external_ids: {}
log : false
match   : "inport == @pg_b1a572c6_2331_4cfb_a892_3d9d7b0af70c
&& ip4 && ip4.dst == {255.255.255.255, 10.0.0.0/26} && udp && udp.src == 68
&& udp.dst == 67"
name: []
priority: 1002
severity: []


3. Security group port group with the following rules:

3.1 Allow ICMP traffic:

_uuid   : d12a749f-0f75-4634-aa20-6116e1d5d26d
action  : allow-related
direction   : to-lport
external_ids:
{"neutron:security_group_rule_id"="9675d6df-56a1-4640-9a0f-1f88e49ed2b5"}
log : false
match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef
&& ip4 && ip4.src == 0.0.0.0/0 && icmp4"
name: []
priority: 1002
severity: []

3.2 Allow SSH traffic:

_uuid   : 05100729-816f-4a09-b15c-4759128019d4
action  : allow-related
direction   : to-lport
external_ids:
{"neutron:security_group_rule_id"="2a48979f-8209-4fb7-b24b-fff8d82a2ae9"}
log : false
match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef
&& ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 22"
name: []
priority: 1002
severity: []


3.3 Allow IPv4/IPv6 traffic from this same port group


_uuid   : b56ce66e-da6b-48be-a66e-77c8cfd6ab92
action  : allow-related
direction   : to-lport
external_ids:
{"neutron:security_group_rule_id"="5b0a47ee-8114-4b13-8d5b-b16d31586b3b"}
log : false
match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef
&& ip6 && ip6.src == $pg_d237185f_733f_4a09_8832_bcee773722ef_ip6"
name: []
priority: 1002
severity: []


_uuid   : 7b68f430-41b5-414d-a2ed-6c548be53dce
action  : allow-related
direction   : to-lport
external_ids:
{"neutron:security_group_rule_id"="299bd9ca-89fb-4767-8ae9-a738e98603fb"}
log : false
match   : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef
&& ip4 && ip4.src == $pg_d237185f_733f_4a09_8832_bcee773722ef_ip4"
name: []
priority: 1002
severity: []


3.4 Allow all egress (VM point of view) IPv4 traffic

_uuid   : c5fbf0b7-6461-4f27-802e-b0d743be59e5
action  : allow-related
direction   : from-lport
external_ids:
{"neutron:security_group_rule_id"="a4ffe40a-f773-41d6-bc04-40500d158f51"}
log : false
match   : "inport == @pg_d237185f_733f_4a09_8832_bcee773722ef
&& ip4"
name: []
priority: 1002
severity: []
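
(For anyone trying to reproduce this without a full OpenStack deployment,
the port group and an ACL like 3.4 can be created by hand roughly as
follows; syntax per the recent port-groups series, names as above:)

$ ovn-nbctl pg-add pg_d237185f_733f_4a09_8832_bcee773722ef \
    8ea9d963-7e55-49a6-8be7-cc294278180a
$ ovn-nbctl --type=port-group acl-add \
    pg_d237185f_733f_4a09_8832_bcee773722ef from-lport 1002 \
    'inport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4' allow-related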



So, I boot a VM using this port and I can verify that ICMP and SSH traffic
work fine while the egress traffic doesn't. From the VM I curl to an
IP living in a network namespace and this is what I see with tcpdump there:

On the VM:
$ ip r get 169.254.254.169
169.254.254.169 via 10.0.0.1 dev eth0  src 10.0.0.6
$ curl 169.254.169.254

On the hypervisor (haproxy listening on 169.254.169.254:80):

$ sudo ip net e ovnmeta-0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf tcpdump -i any port 80 -vvn
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size
262144 bytes
21:59:47.106883 IP (tos 0x0, ttl 64, id 61543, offset 0, flags [DF], proto
TCP (6), length 60)
10.0.0.6.34553 > 169.254.169.254.http: Flags [S], cksum 0x851c
(correct), seq 2571046510, win 14020, options [mss 1402,sackOK,TS val
22740490 ecr 0,nop,wscale 2], length 0
21:59:47.106935 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP
(6), length 60)
169.254.169.254.http > 10.0.0.6.34553: Flags [S.], cksum 0x5e31
(incorrect -> 0x34c0), seq 3215869181, ack 2571046511, win 28960, options
[mss 1460,sackOK,TS val