Dear DPDK experts,

I really appreciate your precious answers and advice, and thank you for 
your great contributions.

I have some questions about the load balancer sample application in DPDK 
(~/dpdk.2.0.0/examples/load_balancer).

Please let me know if I am wrong or have missed something.

Question #1)
The I/O RX process and I/O TX process for a port seem to be executed on the 
same core rather than on separate cores, which makes the --tx option look 
meaningless to me.
If that is not the case, can I specify different cores for the TX and RX 
processes of a port (when specifying --rx "(port,queue,lcore)" and 
--tx "(port,lcore)")?
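
For concreteness, I mean a command line along the following lines (adapted 
from the usage shown in the guide; the port and lcore numbers are only an 
illustration):

    ./build/load_balancer -c 0x37 -n 4 -- \
        --rx "(0,0,1)" --tx "(0,2)" --w "4,5" \
        --lpm "1.0.0.0/24=>0;" --pos-lb 29

Does this actually run the RX role for port 0 on lcore 1 and the TX role on 
lcore 2, or are both folded onto a single I/O lcore?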

Question #2)
Figure 18.5 (load balancer example) in the "Sample Applications User Guide, 
Release 2.0.0" shows a very inefficient core configuration for a Xeon server, 
since {NIC 0, NIC 1} and {NIC 2, NIC 3} are usually installed on PCIe slots 
attached to different NUMA nodes.
In that configuration, an RX process would have to receive packets over the 
PCIe of the other NUMA node.

In our Xeon server with 2 NUMA nodes, where each node has 10 cores, lcores 
0-9 belong to NUMA node 0 and lcores 10-19 belong to NUMA node 1.
For this reason, each RX process should be assigned to the NUMA node that 
hosts its own PCIe NIC card: on this Xeon server, RX process #0 should be 
assigned to one of lcores 0-9 and RX process #1 to one of lcores 10-19.
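
As a sanity check, a small helper along these lines (my own sketch, not code 
from the example) could warn at initialization time whenever an RX lcore sits 
on a different NUMA socket than the port it polls:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_log.h>

    /* Illustrative helper (not part of load_balancer): warn when an RX
     * lcore is on a different NUMA socket than the port it polls. */
    static void
    check_rx_numa_affinity(uint8_t port_id, unsigned lcore_id)
    {
            int port_socket = rte_eth_dev_socket_id(port_id);
            unsigned lcore_socket = rte_lcore_to_socket_id(lcore_id);

            if (port_socket >= 0 && (unsigned)port_socket != lcore_socket)
                    RTE_LOG(WARNING, USER1,
                            "port %u (socket %d) polled by lcore %u "
                            "(socket %u): cross-NUMA RX\n",
                            port_id, port_socket, lcore_id, lcore_socket);
    }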

In contrast, our i7 machine (a single NUMA node with 4 physical cores and 
8 logical cores) has the following core mapping, which does not seem to 
cause any problem in this respect:
   {lcore 0, lcore 4} -> physical core 0.
   {lcore 1, lcore 5} -> physical core 1.
   {lcore 2, lcore 6} -> physical core 2.
   {lcore 3, lcore 7} -> physical core 3.

Question #3)
According to the source code, the number of queues (rings) between the 
workers and the I/O TX processes seems to be (#workers * #ports), not 
(#workers * #TX processes) as shown in Figure 18.5.
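
My reading of the TX-ring setup (paraphrased from 
examples/load_balancer/init.c; the names and array bounds here are 
simplified by me) is a loop like this, which yields one ring per 
(worker, port) pair:

    #include <stdio.h>
    #include <rte_ring.h>
    #include <rte_lcore.h>

    #define MAX_WORKERS 16   /* illustrative bounds, not the app's */
    #define MAX_PORTS    4

    static struct rte_ring *rings_tx[MAX_WORKERS][MAX_PORTS];

    /* One ring per (worker, NIC TX port) pair, so the total count is
     * n_workers * n_ports, independent of the number of I/O TX lcores. */
    static void
    init_rings_tx(unsigned n_workers, unsigned n_ports, unsigned ring_size)
    {
            unsigned worker, port;
            char name[64];

            for (worker = 0; worker < n_workers; worker++) {
                    for (port = 0; port < n_ports; port++) {
                            snprintf(name, sizeof(name),
                                     "ring_tx_w%u_p%u", worker, port);
                            rings_tx[worker][port] = rte_ring_create(name,
                                    ring_size, rte_socket_id(),
                                    RING_F_SP_ENQ | RING_F_SC_DEQ);
                    }
            }
    }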

I would appreciate any answers, advice, or comments.

Thank you very much.

Sincerely yours,

Ick-Sung Choi.
