On Thu, 2008-12-11 at 07:14 -0800, Joseph Mack NA3T wrote:
> (someone correct me if I'm wrong..., I don't really know too
> much about this)

OK... not a correction so much as clarification!

> There'll only be one instance of the kernel/ip_vs() running.
> Whether it's parked on one cpu or moves around is the
> business of the kernel not you.

My understanding of the SMP world is as follows: an SMP kernel will
schedule jobs according to the availability of "free" CPU time. In a
two-CPU system, if a process is running on CPU0 then the kernel
scheduler will put the next runnable job onto CPU1. If both CPUs have
something running (and I mean actually *running*, as in "doing
computation", not waiting for input or some other idle task) then the
scheduler switches into a time-sharing mode and gives each waiting
process a little bit of CPU when it can.

It is, however, still very possible to write an application which will
completely wedge a processor and render it unavailable to other
processes - this is commonly done on HPC clusters, where you want the
CPUs to run at as high a utilisation rate as possible.

In the case of ip_vs() and multiple cores/processors, I'd hazard a
guess that when the kernel needs to do some thinking about where a
particular connection is going (for example), it'll just put that job
in the queue and let the scheduler handle it, so it will be picked up
by the next available free CPU slot. At GHz frequencies these come
around rather more quickly than they used to :)

> I didn't realise you could have affinity with a NIC till
> Horms pointed it out. I thought the packets were sent to the
> routing table, which chooses the NIC. I didn't think ip_vs
> could have any effect on which NIC would be handling the
> packets.

It depends (on the driver in use, which kernel you're using, what the
phase of the moon is, and probably right now on the USD/Sterling
exchange rate). Intel have a reasonable paper explaining the benefits
(but not an easy "howto") here:

http://software.intel.com/en-us/articles/improved-linux-smp-scaling-user-directed-processor-affinity

There's also a good guide here:

http://www.cs.uwaterloo.ca/~brecht/servers/apic/SMP-affinity.txt

This /proc/interrupts output is from a dual Xeon with HyperThreading
enabled (although it probably shouldn't be):

           CPU0       CPU1       CPU2       CPU3
  0:  219335885  219316163  219544944  219540837    IO-APIC-edge  timer
  1:          4          3          2          0    IO-APIC-edge  i8042
  8:          0          0          1          0    IO-APIC-edge  rtc
  9:          0          0          0          0   IO-APIC-level  acpi
 12:         15         17         17          9    IO-APIC-edge  i8042
 14:      28981    3917209    3924066      23223    IO-APIC-edge  ide0
177:    7519145         47         49         44   IO-APIC-level  eth0
193:    1121614      38744      23477    1146912   IO-APIC-level  cciss0
NMI:          0          0          0          0
LOC:  877795114  877795154  877800491  877800490
ERR:          0
MIS:          0

You can see there that CPU0 is handling almost all interrupts for eth0.
If we were using several cards, we could assign each one to a different
core. It's all hiding away in /proc/irq.
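
As a rough sketch of what that looks like in practice (the IRQ number
177 is just the eth0 line from the listing above; the bitmask values
and the PID are purely illustrative), you can poke the masks straight
from a shell:

  # see which CPU is fielding interrupts for each device
  cat /proc/interrupts

  # bind IRQ 177 (eth0 in the listing above) to CPU2 only; the value is
  # a hexadecimal CPU bitmask, so 01 = CPU0, 02 = CPU1, 04 = CPU2, f = CPU0-3
  echo 04 > /proc/irq/177/smp_affinity

  # read the mask back to confirm the kernel accepted it
  cat /proc/irq/177/smp_affinity

  # the same idea for an ordinary process: pin PID 1234 to CPU0 and CPU1
  taskset -p 0x3 1234

You'll need to be root, and if irqbalance is running it may well
rewrite the mask underneath you, so stop it first if you want the
setting to stick.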

Happy exploring (and if you break anything, don't blame me)!

Graeme
