Hi Bogdan,

I ran a quick test with the patch and the overhead seems to be surprisingly
small, at least on a Xeon CPU (E5-2697v3). However, I would still suggest making
some changes to the code. More below.

-Matias



>
> @@ -605,9 +663,11 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>        int nb_pkts = 0;
>        int alloc_len, num;
>        odp_pool_t pool = pktio_entry->s.pkt_dpdk.pool;
> +     odp_pktin_config_opt_t *pktin_cfg;
>
>        /* Allocate maximum sized packets */
>        alloc_len = pktio_entry->s.pkt_dpdk.data_room;
> +     pktin_cfg = &pktio_entry->s.config.pktin;
>
>        num = packet_alloc_multi(pool, alloc_len, pkt_table, mbuf_num);
>        if (num != mbuf_num) {
> @@ -658,6 +718,34 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>                if (mbuf->ol_flags & PKT_RX_RSS_HASH)
>                        odp_packet_flow_hash_set(pkt, mbuf->hash.rss);
>
> +             if ((mbuf->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4 &&
> +                     pktin_cfg->bit.ipv4_chksum &&
> +                     mbuf->ol_flags & PKT_RX_IP_CKSUM_BAD) {
> +                     if (pktin_cfg->bit.drop_ipv4_err) {
> +                             odp_packet_free(pkt);
> +                             continue;
> +                     } else
> +                             pkt_hdr->p.error_flags.ip_err = 1;
> +             }
> +
> +             if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP &&
> +                     pktin_cfg->bit.udp_chksum &&
> +                     mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD) {
> +                     if (pktin_cfg->bit.drop_udp_err) {
> +                             odp_packet_free(pkt);
> +                             continue;
> +                     } else
> +                             pkt_hdr->p.error_flags.udp_err = 1;
> +             } else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP &&
> +                     pktin_cfg->bit.tcp_chksum &&
> +                     mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD) {
> +                     if (pktin_cfg->bit.drop_tcp_err) {
> +                             odp_packet_free(pkt);
> +                             continue;
> +                     } else
> +                             pkt_hdr->p.error_flags.tcp_err = 1;
> +             }
> +

Instead of doing packet parsing and checksum validation separately, I would do
both in one function. The api-next pktio code (should be merged to master) has
a new configuration option 'odp_pktio_config_t.parser.layer', which selects the
parsing level. The packet_parse_layer() function is then used to parse the
received packet up to the selected level.
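
For reference, selecting the level from the application side would look roughly
like this (an illustration of the option above, not code from the tree; the L4
level value is assumed to exist alongside ODP_PKTIO_PARSER_LAYER_L2):

odp_pktio_config_t config;

odp_pktio_config_init(&config);

/* Parse received packets up to L4 and validate IPv4/UDP checksums */
config.parser.layer = ODP_PKTIO_PARSER_LAYER_L4;
config.pktin.bit.ipv4_chksum = 1;
config.pktin.bit.udp_chksum = 1;

if (odp_pktio_config(pktio, &config))
        printf("pktio config failed\n");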

So, instead of calling the generic packet_parse_layer() in dpdk pktio, I would
add a new dpdk-specific implementation of that function. This way we can exploit
all of the dpdk packet parsing features in addition to the checksum validation.
Also, by doing this you can remove most of the if() blocks above. Enabling
checksum validation for a protocol layer above the selected parsing level would
then be a user error (e.g. ODP_PKTIO_PARSER_LAYER_L2 with TCP checksum enabled).
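
Roughly what I have in mind, as a sketch only (the function name, the parser
layer type and the internal flag layout are illustrative, not the final
internal API):

static inline void dpdk_parse_mbuf(odp_packet_hdr_t *pkt_hdr,
                                   struct rte_mbuf *mbuf,
                                   const odp_pktin_config_opt_t *pktin_cfg,
                                   odp_pktio_parser_layer_t layer)
{
        uint32_t ptype = mbuf->packet_type;

        /* Assumes the parser layer enum is ordered NONE < L2 < L3 < L4 */
        if (layer < ODP_PKTIO_PARSER_LAYER_L3)
                return;

        if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) {
                pkt_hdr->p.input_flags.ipv4 = 1;

                if (pktin_cfg->bit.ipv4_chksum &&
                    (mbuf->ol_flags & PKT_RX_IP_CKSUM_BAD))
                        pkt_hdr->p.error_flags.ip_err = 1;
        }

        if (layer < ODP_PKTIO_PARSER_LAYER_L4)
                return;

        if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP) {
                pkt_hdr->p.input_flags.udp = 1;

                if (pktin_cfg->bit.udp_chksum &&
                    (mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD))
                        pkt_hdr->p.error_flags.udp_err = 1;
        } else if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
                pkt_hdr->p.input_flags.tcp = 1;

                if (pktin_cfg->bit.tcp_chksum &&
                    (mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD))
                        pkt_hdr->p.error_flags.tcp_err = 1;
        }
}

mbuf_to_pkt() would then call this once per packet and only compare the error
flags against the drop_*_err bits, instead of repeating the mask/flag logic for
every protocol.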


Attachment: rfc-patch-bech.xlsx