Hi Simon,

You might have missed my general comments email before you committed the patchset to master.

I just sent my full review as well, and it looks like there is one nasty memory-trashing bug in 3/3 that needs fixing: the x2nrealloc() call always allocates just one entry, yet we write to other offsets.
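
For reference, a minimal sketch of the usual grow-as-you-go pattern with
x2nrealloc() ('struct entry' and make_entry() are made-up stand-ins, not
names from the patch):

    struct entry *entries = NULL;
    size_t capacity = 0;    /* x2nrealloc() tracks the capacity here. */

    for (size_t i = 0; i < n_items; i++) {
        if (i >= capacity) {
            /* Doubles 'capacity' (1 on the first call) and reallocates,
             * so it must run every time 'i' reaches the capacity, not
             * just once up front. */
            entries = x2nrealloc(entries, &capacity, sizeof *entries);
        }
        entries[i] = make_entry(i);     /* Safe: i < capacity here. */
    }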

//Eelco


On 19 Oct 2018, at 11:28, Simon Horman wrote:

On Thu, Oct 18, 2018 at 09:43:11PM +0530, Sriharsha Basavapatna via dev wrote:
With the current OVS offload design, when an offload device fails to add a flow rule and returns an error, OVS adds the rule to the kernel datapath instead, and the flow is then processed by the kernel datapath for its entire life. This is fine when the device returns an error due to lack of support for certain keys or actions.

But when an error is returned due to temporary conditions, such as lack of resources to add a flow rule, the flow continues to be processed by the kernel even when resources become available later. That is, those flows never get offloaded again. This problem becomes more pronounced when an initially offloaded flow has a smaller packet rate than a later flow that could not be offloaded due to lack of resources. This leads to inefficient use of HW resources and waste of host CPU cycles.

This patch-set addresses this issue by providing a way to detect temporary offload resource constraints (Out-Of-Resource or OOR condition) and to selectively and dynamically offload flows with a higher packets-per-second (pps) rate. This dynamic rebalancing is done periodically on netdevs that are in an OOR state until resources become available to offload all pending flows.
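
As a rough sketch of the detection side (the helpers below are illustrative,
not the patch's actual API; it assumes ENOSPC distinguishes "out of
resources" from EOPNOTSUPP, i.e. "unsupported match/action"):

    /* Illustrative only: flag the netdev OOR when the hardware flow-put
     * fails for resource reasons, so the rebalancer revisits it. */
    error = hw_flow_put(netdev, match, actions, ufid, stats);
    if (error == ENOSPC) {
        netdev_mark_oor(netdev, true);  /* Revisited by the rebalancer. */
    }
    if (error) {
        kernel_flow_put(match, actions, ufid, stats);  /* Fall back. */
    }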

The patch-set involves the following changes at a high level:

1. Detection of an Out-Of-Resources (OOR) condition on an offload-capable
   netdev.
2. Gathering flow offload selection criteria for all flows on an OOR netdev,
   i.e., the packets-per-second (pps) rate of both offloaded and
   non-offloaded (pending) flows.
3. Dynamically replacing offloaded flows that have a lower pps-rate with
   non-offloaded flows that have a higher pps-rate, on an OOR netdev. A new
   Open vSwitch configuration option, "offload-rebalance", enables this
   policy (see the sketch after this list).
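
To make item 3 concrete, here is a minimal sketch of one rebalancing pass,
assuming offloaded flows sorted by ascending pps and pending flows by
descending pps (all names below are hypothetical, not the patch's actual
data structures):

    /* Illustrative only: one rebalancing pass over an OOR netdev. */
    qsort(offloaded, n_offloaded, sizeof *offloaded, compare_pps_asc);
    qsort(pending, n_pending, sizeof *pending, compare_pps_desc);

    for (size_t i = 0; i < n_pending && i < n_offloaded; i++) {
        if (pending[i].pps <= offloaded[i].pps) {
            break;                      /* No more profitable swaps. */
        }
        unoffload_flow(&offloaded[i]);  /* Evict the low-rate flow... */
        offload_flow(&pending[i]);      /* ...and install the hot one. */
    }

Assuming the new option lives under other_config like most datapath knobs,
enabling it would look something like:

    ovs-vsctl set Open_vSwitch . other_config:offload-rebalance=true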

Cost/benefit data points:

1. Rough cost of the new rebalancing, in terms of CPU time:

   Ran a test that replaced 256 low pps-rate flows (pings) with 256 high
   pps-rate flows (iperf) on a system with 4 CPUs (Intel Xeon E5 @ 2.40GHz;
   2 cores with HW threads enabled, rest disabled). The data showed that CPU
   utilization increased by about 20%, during the specific second in which
   rebalancing is done. Subsequently (from the next second), CPU utilization
   decreases significantly because the higher pps-rate flows are offloaded.
   So effectively there is a bump in CPU utilization at the time of
   rebalancing that is more than compensated by reduced CPU utilization once
   the right flows get offloaded.

2. Rough benefits to the user in terms of offload performance:

   The benefit to the user is reduced CPU utilization on the host, since
   higher pps-rate flows get offloaded in place of lower pps-rate flows.
   Replacing a single offloaded flood-ping flow with an iperf flow (multiple
   connections) shows that the CPU usage that was originally 100% on a
   single CPU (rebalancing disabled) goes down to 35% (rebalancing enabled).
   That is, CPU utilization decreased by 65% after rebalancing.

3. Circumstances under which the benefits would show up:

   The rebalancing benefits would show up once offload resources are
   exhausted and new flows with a higher pps-rate are initiated; these would
   otherwise be handled by the kernel datapath, costing host CPU cycles.

   This can be observed using the 'ovs-appctl dpctl/dump-flows' command.
   Prior to rebalancing, any high pps-rate flows that couldn't be offloaded
   due to a resource crunch would show up in the output of
   'dump-flows type=ovs', and after rebalancing such flows would appear in
   the output of 'dump-flows type=offloaded'.
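
   For example (the same commands as named above):

       ovs-appctl dpctl/dump-flows type=ovs        # kernel datapath flows
       ovs-appctl dpctl/dump-flows type=offloaded  # hardware-offloaded flows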

Thanks, applied to master.
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
