Hi all,

This is a respin of the northd parallelisation series.

Since quite a few factors affect OVN end-to-end performance, I measured
the lflow build time directly by adding debug output.

I used this simple patch for the measurement (not included in the series):

@@ -11220,10 +11220,16 @@ build_lflows(struct northd_context *ctx, struct hmap *datapaths,
              struct hmap *lbs)
 {
     struct hmap lflows = HMAP_INITIALIZER(&lflows);
+    long long finish, start = time_usec();
 
     build_lswitch_and_lrouter_flows(datapaths, ports,
                                     port_groups, &lflows, mcgroups,
                                     igmp_groups, meter_groups, lbs);
+    finish = time_usec();
+
+    if (hmap_count(&lflows)) {
+        VLOG_INFO("Time to compute lflows %"PRIuSIZE", %lld, %f", hmap_count(&lflows), finish - start, 1.0 * (finish - start)/hmap_count(&lflows));
+    }
 
     /* Push changes to the Logical_Flow table to database. */
     const struct sbrec_logical_flow *sbflow, *next_sbflow;
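
For reference, here is a minimal standalone sketch of the same
measurement pattern that builds outside the OVS/OVN tree. It is only
illustrative: clock_gettime(CLOCK_MONOTONIC) stands in for OVS's
time_usec(), a dummy loop stands in for
build_lswitch_and_lrouter_flows(), and the lflow count is made up.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

/* Current monotonic time in microseconds (stand-in for time_usec()). */
static long long
now_usec(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

int
main(void)
{
    /* Hypothetical lflow count; in the patch above this is
     * hmap_count(&lflows). */
    const size_t n_lflows = 100000;
    volatile unsigned long long sink = 0;
    long long start, finish;
    size_t i;

    start = now_usec();
    /* Dummy work loop standing in for build_lswitch_and_lrouter_flows(). */
    for (i = 0; i < n_lflows; i++) {
        sink += i * i;
    }
    finish = now_usec();

    if (n_lflows) {
        printf("Time to compute lflows %zu, %lld, %f\n",
               n_lflows, finish - start,
               1.0 * (finish - start) / n_lflows);
    }
    return 0;
}

It should build with any C99 toolchain (older glibc may need -lrt for
clock_gettime); the last printed number is the per-lflow average that
the figures below refer to.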

The results are as follows.

Single-threaded, the flow build time averages ~0.9 usec per lflow on a
Ryzen 5 3600. Multi-threaded, using 8 vCPUs on the same machine, it
averages ~0.4 usec per lflow.

So there is a clear ~2x gain (0.9 / 0.4 = 2.25).

I have also tracked the total CPU usage of ovn-northd. Its cumulative
CPU usage (e.g. over the course of a whole test run) increases only
slightly as a result.

While usage spikes for short periods to N x 100%, where N is the number
of CPUs processing lflows, the long-term increase is not substantial:
a few percent, mostly sys time, on a VM. I would expect some of that
overhead to be even lower on real hardware.

Best regards,

A.

