Re: [ovs-dev] Invitation: OVS-DPDK bi-weekly meeting @ Every 2 weeks from 5pm to 6pm on Thursday from Thu Dec 15 to Thu Jun 29, 2017 (GMT) (d...@openvswitch.org)

2017-01-04 Thread Rashid Khan
Thanks John


On Wed, Jan 4, 2017 at 2:12 AM, John Fastabend <john.fastab...@gmail.com>
wrote:

> On 16-12-16 10:04 AM, Kevin Traynor wrote:
> > Thanks for the meeting notes Robin, I've edited a bit.
> >
>
> Hi,
>
> Delayed significantly, but I can provide additional details on _my_
> opinions around connection tracking and would be interested in
> feedback. (Warning: it might be a bit off-topic for a traditional dev
> mailing list, but it seems more in the spirit of open development to
> respond on-list rather than in a private round of emails.) Also, I'm
> not an expert on the exact bits being used in the different variants
> of conntrack that OVS may call into (DPDK, Linux, or whatever Hyper-V
> uses or plans to use).
>
> +CC (experts) Thomas, Simon, and Justin
>
> > 15 DEC 2016
> > ATTENDEES
> > Kevin Traynor, Robin Giller, Rashid Khan, Mark Gray, Michael Lilja,
> > Bhanuprakash Bodireddy, Rony Efraim, Sugesh Chandran, Aaron Conole,
> > Thomas Monjalon, Daniele Di Proietto, Vinod Chegu
> >
>
> [...]
>
> >
> > * Optimise SW Conntrack perf (Mark Gray)
> > --Bhanu and Antonio will start looking at this at the start of 2017
> > HW acceleration:
> >
> > * Share use cases where conntrack is not needed (John Fastabend)
>
> First, to get some basics out of the way: I tend to break connection
> tracking into at least two broad categories.
>
>   (a) related flow identification
>   (b) the bucket of protocol verification done in Linux conntrack,
>       e.g. TCP window enforcement
>
> The challenge on the hardware side is that both categories require
> some state that is kept in software, inside the conntrack logic.
>
> To identify "related" flows, though, we can kick packets that miss in
> hardware up to the software logic, which can instantiate related-flow
> rules in the software dpif, the hardware dpif, or both. Once the
> related flow is established, all other packets will match in hardware
> and be forwarded correctly. I believe this breaks the current model,
> where every packet in software is sent to the connection tracking
> engine. But if we disregard (b) for a moment, I do not see the need
> for every packet to be handled by this logic even in the software
> case. Established "related" flows _should_ be able to bypass the
> stateful logic. Could this be an optimization Mark et al. look at,
> assuming my assumption that every packet hits the conntrack logic is
> correct?
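>
> To make the idea concrete, here is a minimal, hypothetical C sketch of
> that bypass. This is not OVS code; the toy table and every name in it
> are made up purely for illustration:
>
>     /* Hypothetical sketch: established flows skip the stateful
>      * conntrack engine; only misses pay the full cost. */
>     #include <stdbool.h>
>     #include <stdint.h>
>
>     struct five_tuple {
>         uint32_t src_ip, dst_ip;
>         uint16_t src_port, dst_port;
>         uint8_t  proto;
>     };
>
>     #define EST_MAX 1024
>     static struct five_tuple est_table[EST_MAX]; /* toy "established" table */
>     static int est_count;
>
>     static bool tuple_eq(const struct five_tuple *a,
>                          const struct five_tuple *b)
>     {
>         return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip
>             && a->src_port == b->src_port && a->dst_port == b->dst_port
>             && a->proto == b->proto;
>     }
>
>     static bool est_lookup(const struct five_tuple *k)
>     {
>         for (int i = 0; i < est_count; i++) {
>             if (tuple_eq(&est_table[i], k)) {
>                 return true;
>             }
>         }
>         return false;
>     }
>
>     /* Stand-in for the full stateful engine (window checks,
>      * related-flow identification, ...); here it accepts everything. */
>     static bool conntrack_full(const struct five_tuple *k)
>     {
>         (void) k;
>         return true;
>     }
>
>     /* Fast path: established flows bypass conntrack_full() entirely;
>      * only the first packet of a flow takes the slow path. */
>     bool handle_packet(const struct five_tuple *k)
>     {
>         if (est_lookup(k)) {
>             return true;                   /* bypass stateful logic */
>         }
>         if (conntrack_full(k) && est_count < EST_MAX) {
>             est_table[est_count++] = *k;   /* promote to fast path */
>             return true;
>         }
>         return false;                      /* drop */
>     }
>
> The same shape applies to hardware: a miss falls back to software,
> which installs the established/related rule so that subsequent packets
> match directly in the NIC.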
>
> Now for (b), and possibly more controversially: how valuable is the
> protocol validation provided here? I assume protocol violations should
> be handled by the terminating protocol stack, e.g. the VM, container,
> etc. OVS has been happily deployed without this, so do we know of
> security issues here that are not easily fixed by patching the protocol
> stack? I googled around for some concrete examples or potential use
> cases, but all I found was some RFC conformance. Is this to protect
> against a malicious VM sending subtle and non-conformant TCP traffic?
> Other than reading the code, I found it hard to decipher exactly what
> protocol validation is done in the Linux conntrack implementation. Is
> there some known documentation?
>
> > --Would like to get list of use cases not requiring conntrack
> > --E.g. firewall in VM (conntrack is done in the VM), GiLAN, mobile edge compute
> >
>
> The exact use cases I was considering are ones where we "trust" the TCP
> traffic, so (b) is not needed: either because it is generated by local
> stacks, or because it has been established via some TCP proxy, in which
> case the proxy should provide any required validation. I've made the
> assumption that (a) can be handled by setup logic.
>
> Alternatively, the function can be provided via some form of service
> chaining, where a dedicated function takes on that role, per the
> "firewall in VM" example above.
>
> Thanks!
> John
> (john.r.fastab...@intel.com)


Re: [ovs-dev] Hardware acceleration enablement in OVS-DPDK

2018-03-01 Thread Rashid Khan
Sorry folks for the spam

I thought this was a different thread between Intel and RH.
Sorry my bad. Please ignore me.



On Thu, Mar 1, 2018 at 4:55 PM, Rashid Khan <rk...@redhat.com> wrote:

> Added Don, Kevin and Bill from the RH side to this thread.
> Sugesh, please keep them in the loop.
>
> Don is our engineering partner manager (aka coordinator-in-chief) for
> Intel, Kevin is the manager of the EPM team, and I have asked Bill T
> from my team to be the front person for this.
>
>
> Looking forward to a collaborative effort around this.
>
>
> Thanks
> Rashid
>
> On Thu, Mar 1, 2018 at 4:46 PM, Chandran, Sugesh <
> sugesh.chand...@intel.com> wrote:
>
>> Hello All,
>>
>> I am setting up the next call in this series to discuss approaches to
>> enabling full hardware acceleration in OVS-DPDK.
>>
>> The agenda for this meeting is to discuss the two proposals (one from
>> Intel and another from Napatech) for enabling full hardware
>> acceleration in OVS-DPDK.
>>
>> The previous meeting minutes can be found in the following Google doc:
>>
>> https://docs.google.com/document/d/1KeQB5NIUph721uuk1f1wMy4QXwSWxlzDaumd_bWX6YI/edit?usp=sharing
>>
>> I would encourage everyone to have a look at the following two patch
>> series before the call, so that we can have a productive discussion.
>>
>>
>> The Napatech hardware acceleration approach in OVS-DPDK can be found
>> below:
>>
>> >>>>>>
>> Hi All,
>>
>> As agreed in the last meeting, I have created a branch of our OVS fork
>> (from late January, v2.9) and added the partial hw offload proposal,
>> followed by our full offload extension as 3 additional commits.
>> It should compile against DPDK 17.11; to make that possible, the
>> existing RTE_FLOW_ACTION_TYPE_VF action is (mis-)used to send the
>> port-id to the PMD. Furthermore, the tcp_flags update from the NIC is
>> commented out due to the lack of that field in the rte_flow query
>> structure.
>> Please see this as a PoC. It is not yet ready to be an actual proposal,
>> though it is fully functioning in our lab.
>>
>> https://github.com/napatech/ovs/tree/hw-full-offload-v1
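>>
>> Roughly, the trick is to smuggle the destination port id to the PMD in
>> the VF action's id field. The hypothetical sketch below shows just that
>> workaround (illustration only, not the code in the branch, assuming
>> DPDK 17.11 headers):
>>
>>     /* Illustration: (mis-)use RTE_FLOW_ACTION_TYPE_VF to carry a
>>      * destination port id to the PMD. Error handling is minimal. */
>>     #include <rte_flow.h>
>>
>>     static struct rte_flow *
>>     offload_flow(uint16_t dpdk_port, uint32_t dst_port_id)
>>     {
>>         struct rte_flow_attr attr = { .ingress = 1 };
>>
>>         /* Match all Ethernet traffic for the sake of the example; real
>>          * code would translate the OVS match into rte_flow items. */
>>         struct rte_flow_item pattern[] = {
>>             { .type = RTE_FLOW_ITEM_TYPE_ETH },
>>             { .type = RTE_FLOW_ITEM_TYPE_END },
>>         };
>>
>>         /* The VF action's "id" field is abused to hold the destination
>>          * port, since 17.11 has no dedicated action for that. */
>>         struct rte_flow_action_vf vf = { .id = dst_port_id };
>>         struct rte_flow_action actions[] = {
>>             { .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf },
>>             { .type = RTE_FLOW_ACTION_TYPE_END },
>>         };
>>
>>         struct rte_flow_error err;
>>         if (rte_flow_validate(dpdk_port, &attr, pattern, actions, &err)) {
>>             return NULL;    /* the NIC/PMD rejected the rule */
>>         }
>>         return rte_flow_create(dpdk_port, &attr, pattern, actions, &err);
>>     }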
>>
>> Further notes:
>> As mentioned at the last meeting, this proposal is based on vPorts on
>> the NIC (whether VF, virtio or another vPort type), handled completely
>> outside OVS. The vPorts are then connected and configured in OVS as
>> "normal" type=dpdk ports. I know this is not in line with Intel's
>> proposal; however, we think it might be a good idea. It keeps OVS
>> simpler, since we only need either capability reporting or trial &
>> error to do transparent full hw-offload.
>> Anyway, this is our current proposal for the next discussion meeting.
>>
>> Thanks,
>> Finn
>>
>> >>>>>>
>>
>>
>> The Intel hardware acceleration proposal can be found at:
>>
>>
>> >>>>>>>>>>>>>>>>
>>
>> Hello All,
>>
>> As discussed in the last meeting, I have created an OVS 2.7 fork with
>> our hardware acceleration implementation, as below.
>>
>>
>> https://github.com/sugchand/ovs.git (branch dpdk-hw-accel-intel,
>> https://github.com/sugchand/ovs/tree/dpdk-hw-accel-intel)
>>
>> A few points on the implementation:
>> 1)  This implementation is just a reference to show the proposal.
>> 2)  The code is still 2.7 based. We will merge to the latest branch
>> once we have finalized the approach.
>> 3)  Some of the hardware acceleration functionality is still missing
>> in this implementation, such as the flow offload thread, flow stats
>> and tcp-flag handling. We are working on adding that support.
>> 4)  This implementation uses some hardware-specific APIs that are not
>> yet available in the DPDK main tree, so the code may not build properly.
>>
>> Please review the implementation (in the last 12 commits). I will set
>> up a follow-up call to discuss this further.
>>
>> Thank you!
>>
>>
>> Regards
>> _Sugesh
>> >>>>>>>>>>>>>>>>>
>>

Re: [ovs-dev] Hardware acceleration enablement in OVS-DPDK

2018-03-01 Thread Rashid Khan
Added Don, Kevin and Bill from the RH side to this thread.
Sugesh, please keep them in the loop.

Don is our engineering partner manager (aka coordinator-in-chief) for
Intel, Kevin is the manager of the EPM team, and I have asked Bill T
from my team to be the front person for this.


Looking forward to a collaborative effort around this.


Thanks
Rashid

On Thu, Mar 1, 2018 at 4:46 PM, Chandran, Sugesh <sugesh.chand...@intel.com>
wrote:

> Hello All,
>
> I am setting up the next call in this series to discuss approaches to
> enabling full hardware acceleration in OVS-DPDK.
>
> The agenda for this meeting is to discuss the two proposals (one from
> Intel and another from Napatech) for enabling full hardware
> acceleration in OVS-DPDK.
>
> The previous meeting minutes can be found in the following Google doc:
>
> https://docs.google.com/document/d/1KeQB5NIUph721uuk1f1wMy4QXwSWxlzDaumd_bWX6YI/edit?usp=sharing
>
> I would encourage everyone to have a look at the following two patch
> series before the call, so that we can have a productive discussion.
>
>
> The Napatech hardware acceleration approach in OVS-DPDK can be found below:
>
> >>
> Hi All,
>
> As agreed in the last meeting, I have created a branch of our OVS fork
> (from late January, v2.9) and added the partial hw offload proposal,
> followed by our full offload extension as 3 additional commits.
> It should compile against DPDK 17.11; to make that possible, the
> existing RTE_FLOW_ACTION_TYPE_VF action is (mis-)used to send the
> port-id to the PMD. Furthermore, the tcp_flags update from the NIC is
> commented out due to the lack of that field in the rte_flow query
> structure.
> Please see this as a PoC. It is not yet ready to be an actual proposal,
> though it is fully functioning in our lab.
>
> https://github.com/napatech/ovs/tree/hw-full-offload-v1
>
> Further notes:
> As mentioned at the last meeting, this proposal is based on vPorts on
> the NIC (whether VF, virtio or another vPort type), handled completely
> outside OVS. The vPorts are then connected and configured in OVS as
> "normal" type=dpdk ports. I know this is not in line with Intel's
> proposal; however, we think it might be a good idea. It keeps OVS
> simpler, since we only need either capability reporting or trial &
> error to do transparent full hw-offload.
> Anyway, this is our current proposal for the next discussion meeting.
>
> Thanks,
> Finn
>
> >>
>
>
> The Intel hardware acceleration proposal can be found at:
>
>
> 
>
> Hello All,
>
> As discussed in the last meeting, I have created an OVS 2.7 fork with
> our hardware acceleration implementation, as below.
>
>
> https://github.com/sugchand/ovs.git (branch dpdk-hw-accel-intel,
> https://github.com/sugchand/ovs/tree/dpdk-hw-accel-intel)
>
> A few points on the implementation:
> 1)  This implementation is just a reference to show the proposal.
> 2)  The code is still 2.7 based. We will merge to the latest branch
> once we have finalized the approach.
> 3)  Some of the hardware acceleration functionality is still missing
> in this implementation, such as the flow offload thread, flow stats
> and tcp-flag handling. We are working on adding that support.
> 4)  This implementation uses some hardware-specific APIs that are not
> yet available in the DPDK main tree, so the code may not build properly.
>
> Please review the implementation (in the last 12 commits). I will set
> up a follow-up call to discuss this further.
>
> Thank you!
>
>
> Regards
> _Sugesh
> >
>