> -----Original Message-----
> From: EXT Zoltan Kiss [mailto:[email protected]]
> Sent: Thursday, January 28, 2016 3:21 PM
> To: Elo, Matias (Nokia - FI/Espoo) <[email protected]>; lng-
> [email protected]
> Subject: Re: [lng-odp] [API-NEXT PATCH 00/11] DPDK pktio implementation
> 
> Hi,
> 
> On 28/01/16 07:03, Matias Elo wrote:
> > The current unoptimized DPDK pktio implementation achieves forwarding
> > rates (odp_l2fwd) that are comparable to netmap pktio and scale better
> > with larger thread counts. Some initial benchmark results below
> > (odp_l2fwd, 4 x 10 Gbps, 64B packets, Intel Xeon E5-2697v3).
> >
> >                       Threads
> >            1     2      4     6     8     10    12
> > DPDK     6.7    12   25.3  37.2  47.6  47.3  46.8   MPPS
> > Netmap   6.1  12.6   25.8  32.4  38.9  38.6  38.4   MPPS
> 
> My performance results for ODP-DPDK are unidirectional between two
> ports, where one thread does the actual work (the other is idling); in
> that case it can achieve 14 Mpps. Is your 6.7 Mpps number comparable
> with this?

These numbers are combined throughputs from all 4 ports. No "maintenance"
thread is needed. With two ports and unidirectional traffic a single
thread is able to handle about 7 MPPS.

> Your main source of optimization seems to be doing zero-copy on the RX
> side, but it needs changes in linux-generic buffer management:
> - allow allocating zero-length buffers, so you can append the data from
> the mbufs there
> - release the mbufs during odp_packet_free(); that needs some
> DPDK-specific code: a destructor which calls rte_pktmbuf_free() on the
> stored pointers.
> 
> But even with that there will be a cost of wrapping the mbufs into
> linux-generic buffers, and you can't avoid a copy on the TX side.

Yep, this is on my to-do list.

-Matias

> 
> Regards,
> 
> Zoltan
_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp
