Hi Daniele,

Thanks for the review. I will send v5 asap.
A few answers inline below.

BR, Jan

From: Daniele Di Proietto [mailto:[email protected]]
Sent: Thursday, 11 August, 2016 02:59
To: Jan Scheurich
Cc: [email protected]; [email protected]
Subject: Re: [ovs-dev] [PATCH v4] dpif-netdev: dpcls per in_port with sorted 
subtables

Hi Jan,
thanks for the patch!
I tried benchmarking with flow tables from utilities/ovs-pipegen.py and the 
results look very good.
[Jan] Do you have any numbers? Setting up pipelines is one thing, but is there 
also a standard test setup for measuring DPDK datapath performance for these 
pipelines? It seems non-trivial; everyone has their own setup, and the figures 
are difficult to compare.
I had to manually edit the patch to apply: some whitespace is trimmed, some 
lines are truncated, and the form feeds ('^L') arrive as two separate 
characters. Would you mind using git send-email next time?

[Jan] I will try to use git send-email. Never done it before. Please bear with 
me.

I have a few minor comments inline
Thanks,

Daniele

 static void
 dp_netdev_pmd_remove_flow(struct dp_netdev_pmd_thread *pmd,
                           struct dp_netdev_flow *flow)
     OVS_REQUIRES(pmd->flow_mutex)
 {
     struct cmap_node *node = CONST_CAST(struct cmap_node *, &flow->node);
+    struct dpcls *cls;
+    odp_port_t in_port = flow->flow.in_port.odp_port;

-    dpcls_remove(&pmd->cls, &flow->cr);
+    cls = dp_netdev_pmd_lookup_dpcls(pmd, in_port);
+    ovs_assert(cls != NULL);
+    dpcls_remove(cls, &flow->cr);

Do you think we should remove the dpcls if it becomes empty?

[Jan] Jarno initially suggested the same, and I considered it but dropped the 
idea for the following reasons:
a) The memory overhead of an empty dpcls is not significant
b) Freed DP port numbers (and hence empty dpcls instances) are re-used. So even 
if e.g. dpdkvhostuser ports come and go as VMs are started and stopped, the set 
of dpcls instances will not grow over time.

+static inline void
+dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd)
+{
+    struct dpcls *cls;
+    long long int now = time_msec();
+
+    if (now > pmd->next_optimization) {
+        /* Try to obtain the flow lock to block out revalidator threads.
+         * If not possible, just try next time. */
+        if (ovs_mutex_trylock(&pmd->flow_mutex)) {

ovs_mutex_trylock() returns non-zero if it fails, zero if it succeeds, we need 
an extra !
This was reported by a clang warning, I think it's extremely useful to deal 
with concurrency.

[Jan] Thanks for spotting this. It effectively disables the sorting, which I 
didn't notice as an unexpectedly high average number of subtable lookups 
because in my minimal performance test setup there is just one subtable per 
in_port. I will also try to have a go with clang.

_______________________________________________
dev mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/dev