Sebastien Roy wrote:
Rajagopal Kunhappan wrote:
The tuning that you have done is rather weird. By default, ip_squeues_per_cpu has a value of 1, but that does not mean only one squeue will exist for a CPU: more squeues, up to a maximum of 32 per CPU, are allocated on demand. So setting ip_squeues_per_cpu to 32 is not very interesting.
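
If you want to confirm what that knob is currently set to on a live system, one way (just a sketch on my part, not something from the docs; the module-scoped symbol name is an assumption and may differ between builds) is to read the kernel variable with mdb as root:

    # Print the current value of the ip_squeues_per_cpu tunable
    # (assumes the variable lives in the ip module under that name).
    echo 'ip`ip_squeues_per_cpu/D' | mdb -k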

What is interesting is that you have set ip_soft_rings_cnt to 2 times ip_squeues_per_cpu, which would be 64 in your case. That does not seem right.

If you have a 64-CPU system and want to use all of them to process incoming packets for a NIC, then that may be OK, but having more soft rings than the number of CPUs gives you no performance advantage.

My suggestion is to not change ip_squeues_per_cpu at all and to tune only ip_soft_rings_cnt. Set it to 2 or 3 for a 1 Gbps NIC and anywhere from 16 to 32 for a 10 Gbps NIC. Again, don't increase the soft ring count beyond the number of CPUs in the system. Note that CPU speed also matters in this calculation: a simple rule of thumb is that a 1 GHz CPU should be able to handle a 1 Gbps NIC.
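
In case a concrete example helps, here is a minimal sketch of that tuning in /etc/system for a 10 Gbps NIC, assuming the tunable is module-scoped to ip as on Solaris 10; please double-check the name against the Tunable Parameters Reference Manual for your release, count your CPUs first (for example with psrinfo | wc -l), and remember that /etc/system changes need a reboot to take effect:

    * /etc/system sketch: leave ip_squeues_per_cpu at its default and
    * raise only the soft ring count for a 10 Gbps NIC. Keep the value
    * at or below the number of CPUs in the box.
    set ip:ip_soft_rings_cnt=16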

Could all of this manual tuning be made obsolete by some simple heuristics in the code which could automatically set these things? The number of CPUs on the system is no secret to the kernel. Will any of the ongoing projects (such as Crossbow or PEF) make this situation better?

We (as in Crossbow) have discussed this. Some NICs like nxge (Neptune) can run in both 1 Gbps and 10 Gbps mode. We are putting in code to make the right choice as far as the number of soft rings goes, depending upon the NIC speed.

Another approach that we have discussed is to enhance intrd to increase/decrease the number of soft rings dynamically depending upon the load. However, this enhancement to intrd is not going to be there for the first phase of Crossbow, I think.

-krgopi

-Seb
_______________________________________________
networking-discuss mailing list
[email protected]