Hi,
Just added the Napatech NIC to the list of validated devices.

Finn

---
diff --git a/Documentation/howto/dpdk.rst b/Documentation/howto/dpdk.rst
index 40f9d9649..442848870 100644
--- a/Documentation/howto/dpdk.rst
+++ b/Documentation/howto/dpdk.rst
@@ -727,3 +727,20 @@ devices to bridge ``br0``. Once complete, follow the below steps:
    Check traffic on multiple queues::
 
        $ cat /proc/interrupts | grep virtio
+
+.. _dpdk-flow-hardware-offload:
+
+Flow Hardware Offload (Experimental)
+------------------------------------
+
+The flow hardware offload is disabled by default and can be enabled by::
+
+    $ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
+
+So far only partial flow offload is implemented. Moreover, it only
+works with PMD drivers that support the rte_flow "MARK + RSS" actions.
+
+The validated NICs are:
+
+- Mellanox (ConnectX-4, ConnectX-4 Lx, ConnectX-5)
+- Napatech (NT200B01)
diff --git a/NEWS b/NEWS
index d7c83c21e..b9f9bd415 100644
--- a/NEWS
+++ b/NEWS
@@ -44,6 +44,7 @@ v2.9.0 - xx xxx xxxx
    - DPDK:
      * Add support for DPDK v17.11
      * Add support for vHost IOMMU
+     * Add experimental flow hardware offload support
      * New debug appctl command 'netdev-dpdk/get-mempool-info'.
      * All the netdev-dpdk appctl commands described in ovs-vswitchd man page.
      * Custom statistics:
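
As a side note for anyone less familiar with rte_flow: the "MARK + RSS"
requirement mentioned in the documentation above boils down to an action
list built roughly as sketched here. This is only an illustration against
the DPDK 17.11 rte_flow API, not code from the patch set; offload_example,
mark_id and n_rxq are made-up names, and error handling is trimmed.

    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_flow.h>

    /* Illustrative only: install a catch-all rule whose actions are
     * MARK (tag matching packets with mark_id) + RSS (keep spreading
     * them across all n_rxq receive queues). */
    static struct rte_flow *
    offload_example(uint16_t port_id, uint32_t mark_id, uint16_t n_rxq)
    {
        struct rte_flow_attr attr = { .ingress = 1 };

        /* The real patches translate the OVS megaflow match into
         * specific items; this sketch just matches any ethernet frame. */
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        struct rte_flow_action_mark mark = { .id = mark_id };

        /* In DPDK 17.11 the RSS queue list is a flexible array member. */
        struct rte_flow_action_rss *rss;
        rss = malloc(sizeof *rss + n_rxq * sizeof rss->queue[0]);
        if (!rss) {
            return NULL;
        }
        rss->rss_conf = NULL;           /* keep the device's RSS config */
        rss->num = n_rxq;
        for (uint16_t i = 0; i < n_rxq; i++) {
            rss->queue[i] = i;
        }

        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_RSS,  .conf = rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        struct rte_flow_error error;
        struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
                                                actions, &error);
        free(rss);
        return flow;
    }

The RSS action simply re-creates the default spread across the receive
queues, so marking a flow does not change which queue its packets arrive on.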

--------------------


>-----Original Message-----
>From: Yuanhan Liu [mailto:y...@fridaylinux.org]
>Sent: 17 January 2018 11:02
>To: Stokes, Ian <ian.sto...@intel.com>
>Cc: d...@openvswitch.org; Finn Christensen <f...@napatech.com>; Darrell Ball
><db...@vmware.com>; Chandran, Sugesh <sugesh.chand...@intel.com>;
>Simon Horman <simon.hor...@netronome.com>
>Subject: Re: [PATCH v5 0/5] OVS-DPDK flow offload with rte_flow
>
>On Mon, Jan 15, 2018 at 05:28:18PM +0000, Stokes, Ian wrote:
>> > Hi,
>> >
>> > This is joint work from Mellanox and Napatech to enable flow
>> > hw offload with the DPDK generic flow interface (rte_flow).
>> >
>> > The basic idea is to associate the flow with a mark id (a uint32_t
>> > number). Later, we can then get the flow directly from the mark id,
>> > which bypasses some heavy CPU operations, including but not limited
>> > to miniflow extract, EMC lookup, dpcls lookup, etc.
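
To make the bypass concrete: the mark programmed into the NIC comes back in
the mbuf on the receive side, so a single lookup on that id can stand in for
the whole miniflow extract / EMC / dpcls chain. An illustrative sketch (not
from the patch set; mark_to_flow_find() is a hypothetical helper along the
lines of the cmap sketch a bit further down):

    #include <stdint.h>
    #include <rte_mbuf.h>

    struct dp_netdev_flow;                         /* OVS datapath flow */
    struct dp_netdev_flow *mark_to_flow_find(uint32_t mark);

    /* Return the flow for a received packet if the NIC marked it, or NULL
     * so the caller falls back to the normal software lookup path. */
    static struct dp_netdev_flow *
    flow_from_mark(const struct rte_mbuf *mbuf)
    {
        /* PMDs that honour the MARK action report the id in hash.fdir.hi
         * and set PKT_RX_FDIR_ID in ol_flags. */
        if (mbuf->ol_flags & PKT_RX_FDIR_ID) {
            return mark_to_flow_find(mbuf->hash.fdir.hi);
        }
        return NULL;
    }
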
>> >
>> > The association is done with CMAP in patch 1. The CPU workload
>> > bypassing is done in patch 2. The flow offload is done in patch 3,
>> > which mainly does two things:
>> >
>> > - translate the OVS match to DPDK rte_flow patterns
>> > - bind those patterns with an RSS + MARK action.
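
For the CMAP association, here is a simplified illustrative sketch using
OVS's concurrent hash map; the node layout and names are made up for this
example, and only the mark-to-flow direction is shown:

    #include <stdint.h>
    #include "cmap.h"
    #include "hash.h"
    #include "util.h"

    struct dp_netdev_flow;              /* OVS datapath flow (opaque here) */

    /* Hypothetical entry: one node per offloaded flow, keyed by mark id. */
    struct mark_to_flow_node {
        struct cmap_node node;          /* within mark_to_flow_cmap */
        uint32_t mark;                  /* mark id programmed via rte_flow */
        struct dp_netdev_flow *flow;
    };

    static struct cmap mark_to_flow_cmap = CMAP_INITIALIZER;

    static void
    mark_to_flow_associate(uint32_t mark, struct dp_netdev_flow *flow)
    {
        struct mark_to_flow_node *data = xzalloc(sizeof *data);

        data->mark = mark;
        data->flow = flow;
        cmap_insert(&mark_to_flow_cmap, &data->node, hash_int(mark, 0));
    }

    static struct dp_netdev_flow *
    mark_to_flow_find(uint32_t mark)
    {
        struct mark_to_flow_node *data;

        CMAP_FOR_EACH_WITH_HASH (data, node, hash_int(mark, 0),
                                 &mark_to_flow_cmap) {
            if (data->mark == mark) {
                return data->flow;
            }
        }
        return NULL;
    }
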
>> >
>> > Patch 5 makes the offload work happen in another thread, to keep
>> > the datapath as light as possible.
>> >
>> > A PHY-PHY forwarding test with 1000 megaflows (udp,tp_src=1000-1999) and
>> > 1 million streams (tp_src=1000-1999, tp_dst=2000-2999) shows a
>> > performance boost of more than 260%.
>> >
>> > Note that it's disabled by default and can be enabled by:
>> >
>> >     $ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
>>
>> Hi Yuanhan,
>>
>> Thanks for working on this, I'll be looking at this over the coming week so
>> don't consider this a full review.
>>
>> Just a general comment: at first glance there doesn't seem to be any
>> documentation in the patchset?
>
>Right, my bad.
>
>> I would expect a patch for the DPDK section of the OVS docs at minimum
>> detailing:
>>
>> (i) HWOL requirements (Any specific SW/drivers required or DPDK
>> libs/PMDs etc. that have to be enabled for HWOL).
>> (ii) HWOL Usage (Enablement and disablement as shown above).
>> (iii) List of Validated HW devices (As HWOL functionality will vary between
>> devices, it would be good to provide some known 'verified' cards to use with it.
>> At this stage we don't have to list every card that it will work with, but I'd
>> like to see a list of NIC models that this has been validated on to date).
>> (iv) Any Known limitations.
>>
>> You'll also have to add an entry to the NEWS document to flag that HWOL
>> has been introduced to OVS with DPDK.
>>
>> As discussed previously on the community call, at this point the feature
>> should be marked experimental in both NEWS and the documentation; this is
>> just to signify that it is subject to change as more capabilities, such as full
>> offload, are added over time.
>
>To avoid flooding the mailing list, here are the changes I made.
>Please help review them. Also, please let me know whether I should send out a
>new version (with the doc change) soon or wait for your full review.
>
>Regarding the validated NICs, perhaps Finn could add the NICs from Napatech.
>
>---
>diff --git a/Documentation/howto/dpdk.rst b/Documentation/howto/dpdk.rst
>index d123819..20a4190 100644
>--- a/Documentation/howto/dpdk.rst
>+++ b/Documentation/howto/dpdk.rst
>@@ -709,3 +709,19 @@ devices to bridge ``br0``. Once complete, follow the below steps:
>    Check traffic on multiple queues::
>
>        $ cat /proc/interrupts | grep virtio
>+
>+.. _dpdk-flow-hardware-offload:
>+
>+Flow Hardware Offload (Experimental)
>+------------------------------------
>+
>+The flow hardware offload is disabled by default and can be enabled by::
>+
>+    $ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
>+
>+So far only partial flow offload is implemented. Moreover, it only
>+works with PMD drivers that support the rte_flow "MARK + RSS" actions.
>+
>+The validated NICs are:
>+
>+- Mellanox (ConnectX-4, ConnectX-4 Lx, ConnectX-5)
>diff --git a/NEWS b/NEWS
>index 89bb7a7..02ce97d 100644
>--- a/NEWS
>+++ b/NEWS
>@@ -25,6 +25,7 @@ Post-v2.8.0
>    - DPDK:
>      * Add support for DPDK v17.11
>      * Add support for vHost IOMMU
>+     * Add experimental flow hardware offload support
>
> v2.8.0 - 31 Aug 2017
> --------------------
