Thanks, John

On Wed, Jan 4, 2017 at 2:12 AM, John Fastabend <[email protected]>
wrote:

> On 16-12-16 10:04 AM, Kevin Traynor wrote:
> > Thanks for the meeting notes Robin, I've edited a bit.
> >
>
> Hi,
>
> Delayed significantly, but I can provide additional details on _my_
> opinions around connection tracking, and I would be interested in
> feedback. (Warning: this might be a bit off-topic for a traditional
> dev mailing list, but it seems more in the spirit of things to respond
> on-list in the open rather than in a private round of emails.) Also,
> I'm not an expert on the exact bits used in the different variants of
> conntrack that OVS may call into on DPDK/Linux/whatever HyperV uses or
> plans to use.
>
> +CC (experts) Thomas, Simon, and Justin
>
> > 15 DEC 2016
> > ATTENDEES
> > Kevin Traynor, Robin Giller, Rashid Khan, Mark Gray, Michael Lilja,
> > Bhanuprakash Bodireddy, Rony Efraim, Sugesh Chandran, Aaron Conole,
> > Thomas Monjalon, Daniele Di Proietto, Vinod Chegu
> >
>
> [...]
>
> >
> > * Optimise SW Conntrack perf (Mark Gray)
> > --Bhanu and Antonio will start looking at this at the start of 2017
> > HW acceleration:
> >
> > * Share use cases where conntrack is not needed (John Fastabend)
>
> First, let's get some basics out of the way. I tend to break
> connection tracking into at least two broad categories:
>
>   (a) related-flow identification
>   (b) the grab bag of protocol verification done in Linux conntrack,
>       e.g. TCP window enforcement
>
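> To make (a) concrete, here is a minimal Python sketch of the idea: a
> protocol helper (think of FTP's PASV reply) registers an expected
> flow, and a later connection matching it is classified as "related".
> Names like ExpectationTable are mine for illustration, not OVS code,
> and source-port wildcarding is omitted for brevity:
>
>     from collections import namedtuple
>
>     # 5-tuple identifying a flow
>     FlowKey = namedtuple("FlowKey", "proto src dst sport dport")
>
>     class ExpectationTable:
>         """Expected 'related' flows registered by protocol helpers."""
>         def __init__(self):
>             self._expected = set()
>
>         def expect(self, key):
>             # e.g. an FTP helper that parsed a "227 Entering Passive
>             # Mode" reply registers the upcoming data connection here
>             self._expected.add(key)
>
>         def is_related(self, key):
>             return key in self._expected
>
>     table = ExpectationTable()
>     data = FlowKey("tcp", "10.0.0.1", "10.0.0.2", 54001, 20001)
>     table.expect(data)             # set up from the control channel
>     assert table.is_related(data)  # first data packet is "related"
>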
> The challenge on the hardware side is that both models require state
> that is kept in software by the conntrack logic.
>
> To identify "related" flows, though, we can kick packets that miss in
> hardware up to the software logic, which can instantiate related-flow
> rules in the software dpif, the hardware dpif, or both. Once the
> related flow is established, all subsequent packets will match in
> hardware and be forwarded correctly. I believe this breaks the current
> model, where every packet in software is sent to the connection
> tracking engine. But if we disregard (b) for a moment, I do not see
> the need for every packet to be handled by this logic even in the
> software case. Established "related" flows _should_ be able to bypass
> the stateful logic; a rough sketch of what I mean follows below. Could
> this be an optimization Mark et al. look at, assuming I'm right that
> every packet currently hits the conntrack logic?
>
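> To illustrate the bypass, a minimal Python sketch (fastpath,
> conntrack_slowpath, etc. are illustrative names, not dpif code): the
> first packet of a flow misses and is kicked up to the stateful logic,
> which installs a match, and every later packet hits the fastpath
> without touching conntrack:
>
>     # established flows: flow key -> forwarding action
>     fastpath = {}
>
>     def conntrack_slowpath(key):
>         # stand-in for the stateful engine; once it deems the flow
>         # established it instantiates a rule in the software dpif,
>         # the hardware dpif, or both
>         action = "output:2"
>         fastpath[key] = action
>         return action
>
>     def receive(key):
>         action = fastpath.get(key)
>         if action is not None:
>             return action               # fastpath hit, no conntrack
>         return conntrack_slowpath(key)  # miss: kick up to software
>
>     key = ("tcp", "10.0.0.1", 54000, "10.0.0.2", 80)
>     receive(key)                       # first packet: slow path
>     assert receive(key) == "output:2"  # later packets bypass it
>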
> Now for (b), and possibly more controversial: how valuable is the
> protocol validation provided here? I assume protocol violations should
> be handled by the terminating protocol stack, e.g. the VM, container,
> etc. OVS has been happily deployed without this, so do we know of
> security issues here that are not easily fixed by patching the
> protocol stack? I googled around for some concrete examples or
> potential use cases, but all I found was some RFC conformance. Is the
> point to protect against a malicious VM sending subtle, non-conformant
> TCP traffic? Other than reading the code, I found it hard to decipher
> exactly what protocol validation is being done in the Linux conntrack
> implementation. Is there some known documentation?
>
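> For what it's worth, the heart of the TCP validation appears to be
> window tracking. Here is a grossly simplified Python sketch of the
> kind of check involved; the real logic (tcp_in_window() in the
> kernel's nf_conntrack_proto_tcp.c) also tracks per-direction state,
> window scaling, SACK, sequence wraparound, etc.:
>
>     def seq_in_window(seq, seg_len, win_start, win_end):
>         """Accept a segment only if it overlaps the valid window."""
>         return seq <= win_end and seq + seg_len >= win_start
>
>     # an in-window segment passes; one far outside would be marked
>     # INVALID by conntrack
>     assert seq_in_window(1000, 100, 900, 2000)
>     assert not seq_in_window(50000, 100, 900, 2000)
>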
> > --Would like to get list of use cases not requiring conntrack
> > --E.g. firewall in VM (conntrack is done in the VM), GiLAN, mobile edge compute
> >
>
> The exact use cases I was considering are those where we "trust" the
> TCP traffic, so (b) is not needed: either because it is generated by
> local stacks, or because it has been established via some TCP proxy,
> in which case the proxy should provide any required validation. I've
> made the assumption that (a) can be handled by setup logic.
>
> Alternatively, the function can be provided via some form of service
> chaining, where a dedicated function fills the role, as in the
> "firewall in VM" example above.
>
> Thanks!
> John
> ([email protected])
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
