Hello,

not really specific to Ceph, but since one of the default questions by the
Ceph team when people are facing performance problems seems to be 
"Have you tried turning it off and on again?" ^o^ err, 
"Are all your interrupts on one CPU?" 
I'm going to wax on about this for a bit and hope for some feedback from
others with different experiences and architectures than mine.

Now firstly, the question of whether all your IRQ handling is happening on
the same CPU is a valid one, as depending on a bewildering range of factors,
from kernel parameters to the actual hardware, one often does indeed
wind up in that scenario, usually with everything on CPU0.
That certainly is the case with all my recent hardware and Debian
kernels.
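
For anyone wanting to check this on their own nodes, a quick look at
/proc/interrupts shows which CPU is servicing what (the driver names in the
grep below are just examples, adjust to your hardware):

    # one counter column per CPU; this shows where each IRQ actually fires
    cat /proc/interrupts
    # or just the devices of interest, e.g. the AHCI and NIC/HCA drivers
    egrep 'CPU|ahci|mlx4|eth' /proc/interrupts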

I'm using almost exclusively AMD CPUs (Opteron 42xx, 43xx and 63xx) and
thus feedback from Intel users is very much sought after, as I'm
considering Intel-based storage nodes in the future.
It's vaguely amusing that Ceph storage nodes seem to have higher CPU
requirements (individual core performance, not necessarily # of cores) and
similar RAM requirements compared to my VM hosts. ^o^

So the common wisdom is that all IRQs on one CPU is a bad thing, lest it
get overloaded and, for example, drop network packets as a result.
And while that is true, I'm hard pressed to generate any load on my
clusters where the IRQ ratio on CPU0 goes much beyond 50%.
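
To put a number on that, mpstat from the sysstat package (or top with the
per-CPU view) shows the hard and soft interrupt time per core, for example:

    # 5-second samples for all CPUs; watch the %irq and %soft columns of CPU0
    mpstat -P ALL 5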

Thus it should come as no surprise that spreading out IRQs with irqbalance,
or more precisely by manually setting the /proc/irq/xx/smp_affinity masks,
doesn't give me any discernible difference in benchmark results.
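
For reference, the smp_affinity files take a hexadecimal CPU bitmask, so
pinning an IRQ by hand is just an echo (the IRQ number 44 below is made up,
look up the real ones in /proc/interrupts):

    # show the current mask
    cat /proc/irq/44/smp_affinity
    # bit 0 = CPU0, bit 1 = CPU1, so 1 pins to CPU0, 2 to CPU1, 3 to both
    echo 1 > /proc/irq/44/smp_affinity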

Since irqbalance spreads things out willy-nilly, without any regard for or
knowledge of the hardware and which IRQ does what, it's definitely something
I won't be using out of the box. That goes especially for systems with
multiple NUMA regions and no proper policy scripts for irqbalance.
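
So on my nodes irqbalance simply gets switched off before setting the masks
by hand, roughly like this on Debian (sysvinit style, adjust to taste):

    # stop the daemon and keep it from coming back at boot
    /etc/init.d/irqbalance stop
    update-rc.d irqbalance disable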

So for my current hardware I'm going to keep IRQs on CPU0 and CPU1, which
are in the same Bulldozer module and thus share L2 and L3 cache.
In particular, the AHCI (journal SSDs) and HBA or RAID controller IRQs go
on CPU0 and the network (Infiniband) IRQs on CPU1.
That should give me sufficient reserves in processing power and keep
cross-module and NUMA (additional physical CPUs) traffic to a minimum.
This will also (within a certain load range) allow these 2 CPUs (one module)
to be ramped up to full speed while the other cores can remain at a lower
frequency.
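
In concrete terms that boils down to something like the following; the IRQ
numbers are of course just examples, the real ones come from
/proc/interrupts on the node in question:

    # AHCI (journal SSDs) and HBA/RAID controller to CPU0 (mask 0x1)
    echo 1 > /proc/irq/16/smp_affinity   # ahci
    echo 1 > /proc/irq/24/smp_affinity   # HBA / RAID controller
    # Infiniband HCA to CPU1 (mask 0x2), same Bulldozer module as CPU0
    echo 2 > /proc/irq/33/smp_affinity   # IB HCA (e.g. mlx4)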

Now with Intel, some PCIe lanes are handled by a specific CPU (that's why
you often see the need to add a 2nd CPU to use all slots), and in that
case pinning the IRQ handling for those slots to that specific CPU might
actually make a lot of sense. Especially if not all the traffic generated
by that card has to be transferred to the other CPU anyway.
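
In that case it's worth checking which CPU (NUMA node) a given card actually
hangs off before pinning anything, e.g. (the PCI address is just an example,
see lspci for yours):

    # -1 means the kernel doesn't associate the device with a node,
    # otherwise it's the NUMA node owning that slot
    cat /sys/bus/pci/devices/0000:03:00.0/numa_node
    # lstopo from the hwloc package draws the whole picture, PCIe included
    lstopo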


Christian
-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/