These results look clearer: a 2-3% improvement for the no-EMC cases.
Maybe others have comments?


On 9/12/17, 9:34 AM, "[email protected]" 
<[email protected]> wrote:

    When EMC is disabled, the reading of the RSS hash is skipped.
    Also, for packets that are not recirculated, the hash value is
    retrieved without considering the recirc id.
    
    CC: Darrell Ball <[email protected]>
    CC: Billy O Mahony <[email protected]>
    Signed-off-by: Antonio Fischetti <[email protected]>
    ---
     V5
      - Removed OVS_LIKELY when checking cur_min.
    
      - I see a performance improvement for P2P and PVP testcases
        when EMC is disabled; measurements are below.
    
      - I also tried different solutions, e.g. passing md_is_valid
        to dpif_netdev_packet_get_rss_hash - or even the recirc_id
        computed inside dp_execute_cb - as suggested by Billy, but
        I didn't see any performance benefit.
    
      - Rebased on Commit id
           b9fedfa61f000f49500973d2a51e99a80d9cf9b8
    
      - Each measurement was repeated 5 times and an average was
        computed.
    
    
        P2P testcase
        ------------
    Flow setup:
    table=0, in_port=dpdk0 actions=output:dpdk1
    table=0, in_port=dpdk1 actions=output:dpdk0
    
    Mono-directional, 64B UDP packets. Traffic sent at line-rate.
    PMD threads: 2
    Built with "-O2 -march=native -g"
    
    Measurements and average are in Mpps.
    
       Orig   |           5 Measurements                 |  Avg
     ---------+------------------------------------------+---------
     With EMC | 11.39   11.37   11.46   11.35   11.39    | 11.39
     no EMC   |  8.23    8.22    8.26    8.20    8.22    |  8.23
    
      + patch |           5 Measurements                 |  Avg
     ---------+------------------------------------------+---------
     With EMC |  11.46   11.39   11.40   11.45   11.38   | 11.42
     no EMC   |   8.42    8.41    8.37    8.43    8.37   |  8.40
    
    
        PVP testcase
        ------------
    Flow setup:
    table=0, in_port=dpdk0 actions=output:dpdkvhostuser0
    table=0, in_port=dpdkvhostuser0 actions=output:dpdk0
    table=0, in_port=dpdkvhostuser1 actions=output:dpdk1
    table=0, in_port=dpdk1 actions=output:dpdkvhostuser1
    
    Bi-directional, 64B UDP packets. Traffic sent at line-rate.
    PMD threads:  2
    Built with "-O2 -march=native -g"
    
    Measurements and average are in Mpps.
    
       Orig   |           5 Measurements                 |  Avg
     ---------+------------------------------------------+---------
     With EMC |   4.59   4.60   4.46   4.59   4.59       |  4.57
     no EMC   |   3.72   3.72   3.64   3.72   3.72       |  3.70
    
      + patch |           5 Measurements                 |  Avg
     ---------+------------------------------------------+---------
     With EMC |  4.50   4.62   4.60   4.60   4.58        |  4.58
     no EMC   |  3.78   3.86   3.84   3.84   3.83        |  3.83
    
    
        Recirculation testcase
        ----------------------
    In a test setup with a firewall I didn't see any performance
    difference between the original and the patch.
    
    
     V4
      - reworked to remove dependencies from other patches in
        patchset "Skip EMC for recirc pkts and other optimizations."
        
        https://mail.openvswitch.org/pipermail/ovs-dev/2017-August/337320.html
    
      - measurements were repeated with the latest head of master.
    ---
     lib/dpif-netdev.c | 32 ++++++++++++++++++++++++++++----
     1 file changed, 28 insertions(+), 4 deletions(-)
    
    diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
    index 0ceef9d..baf65e8 100644
    --- a/lib/dpif-netdev.c
    +++ b/lib/dpif-netdev.c
    @@ -4765,6 +4765,22 @@ dp_netdev_upcall(struct dp_netdev_pmd_thread *pmd, struct dp_packet *packet_,
     }
     
     static inline uint32_t
    +dpif_netdev_packet_get_rss_hash_orig_pkt(struct dp_packet *packet,
    +                                const struct miniflow *mf)
    +{
    +    uint32_t hash;
    +
    +    if (OVS_LIKELY(dp_packet_rss_valid(packet))) {
    +        hash = dp_packet_get_rss_hash(packet);
    +    } else {
    +        hash = miniflow_hash_5tuple(mf, 0);
    +        dp_packet_set_rss_hash(packet, hash);
    +    }
    +
    +    return hash;
    +}
    +
    +static inline uint32_t
     dpif_netdev_packet_get_rss_hash(struct dp_packet *packet,
                                     const struct miniflow *mf)
     {
    @@ -4899,10 +4915,18 @@ emc_processing(struct dp_netdev_pmd_thread *pmd,
             }
             miniflow_extract(packet, &key->mf);
             key->len = 0; /* Not computed yet. */
    -        key->hash = dpif_netdev_packet_get_rss_hash(packet, &key->mf);
    -
    -        /* If EMC is disabled skip emc_lookup */
    -        flow = (cur_min == 0) ? NULL: emc_lookup(flow_cache, key);
    +        /* If EMC is disabled skip hash computation and emc_lookup */
    +        if (cur_min) {
    +            if (!md_is_valid) {
    +                key->hash = dpif_netdev_packet_get_rss_hash_orig_pkt(packet,
    +                        &key->mf);
    +            } else {
    +                key->hash = dpif_netdev_packet_get_rss_hash(packet, &key->mf);
    +            }
    +            flow = emc_lookup(flow_cache, key);
    +        } else {
    +            flow = NULL;
    +        }
             if (OVS_LIKELY(flow)) {
                 dp_netdev_queue_batches(packet, flow, &key->mf, batches,
                                         n_batches);
    -- 
    2.4.11
    
    

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
