Hello!

I have been experimenting with conntrack support in OvS, trying to run the firewall example from the ovs-actions man page:

       table=0,priority=1,action=drop
       table=0,priority=10,arp,action=normal
       table=0,priority=100,ip,ct_state=-trk,action=ct(table=1)
       table=1,in_port=1,ip,ct_state=+trk+new,action=ct(commit),2
       table=1,in_port=1,ip,ct_state=+trk+est,action=2
       table=1,in_port=2,ip,ct_state=+trk+new,action=drop
       table=1,in_port=2,ip,ct_state=+trk+est,action=1
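For completeness, this is roughly how I load the table (the bridge name "br0" and file name "flows.txt" below are just placeholders for illustration):

```shell
# Install the whole table from a file in one go
# ("br0" and "flows.txt" are placeholder names).
ovs-ofctl add-flows br0 flows.txt

# Verify what was actually installed.
ovs-ofctl dump-flows br0
```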

It works fine in a plain bridging setup, where two non-LOCAL ports send traffic to each other. However, once I tried using OvS's LOCAL port as one of the endpoints, everything broke: no connection could be established in either direction.

After looking at debug logs, I found that packets going to and from the LOCAL port arrive with conntrack-related metadata already initialized; for example, ct_state is set to 0x21, i.e. +trk+new. Since table 0 has no rule matching ct_state=+trk+new, such packets hit the priority-1 drop rule and the connection is never established. Adding the following rules helps, but feels like a workaround at best:

       table=0,priority=100,ip,ct_state=+trk+new,actions=ct(table=1)
       table=0,priority=100,ip,ct_state=+trk+est,actions=ct(table=1)
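For anyone trying to reproduce this, the pre-initialized metadata can also be seen without digging through debug logs, e.g. by dumping the datapath flows (a sketch; exact output depends on the datapath):

```shell
# Datapath flows show the conntrack metadata attached to packets;
# look for a non-zero ct_state(...) in the match of flows whose
# in_port corresponds to the bridge's LOCAL port.
ovs-appctl dpctl/dump-flows
```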

Is this behavior expected? I have observed it on Linux with both the sfc and mlx5 kernel drivers. With DPDK as the datapath, everything works as I expect: the ct-related fields are not pre-set and the firewall works as intended. Perhaps this is a bug in the netlink datapath interface, or simply an artifact of how Linux processes locally originated traffic?
Please advise.

Thanks,
Viacheslav
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev