Sebastien Roy writes:
 > Rajagopal Kunhappan wrote:
 > > The tuning that you have done is rather weird. By default, 
 > > ip_squeues_per_cpu has a value of 1. But that does not mean that only 
 > > one squeue will exist for a CPU. More squeues, up to a maximum of 32, 
 > > can get allocated for a CPU, and they get allocated on demand. So setting 
 > > ip_squeues_per_cpu to 32 is not very interesting.
 > > 
 > > What is interesting is that you have set ip_soft_rings_cnt to 2 times 
 > > ip_squeues_per_cpu, which would be 64 in your case. That does not seem 
 > > right.
 > > 
 > > If you have a 64-CPU system and want to use all of them to process 
 > > incoming packets for a NIC, then that may be OK, but having more soft 
 > > rings than the number of CPUs does not give you any performance advantage.
 > > 
 > > My suggestion is to not change ip_squeues_per_cpu at all and to tune 
 > > ip_soft_rings_cnt only. Set it to 2 or 3 for a 1 Gbps NIC and anywhere 
 > > from 16 to 32 for a 10 Gbps NIC. Again, don't increase the soft ring 
 > > count to more than the number of CPUs in the system. Note that CPU speed 
 > > also matters in this calculation. A simple rule of thumb is that a 1 GHz 
 > > CPU should be able to handle a 1 Gbps NIC.
 > 
 > Could all of this manual tuning be made obsolete by some simple 
 > heuristics in the code which could automatically set these things?  The 
 > number of CPUs on the system is no secret to the kernel.  Will any of 
 > the ongoing projects (such as Crossbow or PEF) make this situation better?
 > 

For now, I don't understand why fanout is not happening. The
connections are not established all in one swoop, so the flows
should spread across multiple squeues (ip_squeues_fanout=1). Why
is this not happening?
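For reference, these are kernel tunables set in /etc/system. A minimal
sketch of the settings discussed above (the soft ring value is
illustrative, assuming a 10 Gbps NIC; adjust per the guidance earlier
in the thread):

```shell
# /etc/system fragment -- takes effect after a reboot.
#
# Leave ip_squeues_per_cpu at its default of 1; additional squeues
# (up to 32 per CPU) are allocated on demand anyway.
#
# Enable fanout so incoming connections spread across squeues.
set ip:ip_squeues_fanout=1
#
# Soft ring count: 2-3 for a 1 Gbps NIC, 16-32 for a 10 Gbps NIC,
# and never more than the number of CPUs in the system.
set ip:ip_soft_rings_cnt=16
```

The current value on a live system can be inspected with the kernel
debugger, e.g. `echo ip_squeues_fanout/D | mdb -k` (requires root).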


-r


 > -Seb
 > _______________________________________________
 > networking-discuss mailing list
 > [email protected]
