Thanks, that helped a lot. I did a bit of research online and found this article, which answered my question:

"There are two types of CPU affinity. The first, soft affinity, also called natural affinity, is the tendency of a scheduler to try to keep processes on the same CPU as long as possible. It is merely an attempt; if it is ever infeasible, the processes certainly will migrate to another processor. The new O(1) scheduler in 2.5 exhibits excellent natural affinity. On the opposite end, however, is the 2.4 scheduler, which has poor CPU affinity. This behavior results in the ping-pong effect. The scheduler bounces processes between multiple processors each time they are scheduled and rescheduled. Table 1 is an example of poor natural affinity; Table 2 shows what good natural affinity looks like.

Hard affinity, on the other hand, is what a CPU affinity system call provides. It is a requirement, and processes must adhere to a specified hard affinity. If a process is bound to CPU zero, for example, then it can run only on CPU zero."
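To see the hard-affinity behaviour the article describes, here is a minimal sketch using Python's os.sched_getaffinity/os.sched_setaffinity wrappers (Linux-only, Python 3.3+) instead of the raw C syscall; the choice of which CPU to pin to is just for illustration:

```python
import os

# Ask the kernel which CPUs this process may currently run on
# (pid 0 means "the calling process").
allowed = os.sched_getaffinity(0)
print("allowed before:", allowed)

# Hard-bind the process to a single CPU; unlike soft affinity,
# the scheduler must honour this mask from now on.
target = min(allowed)
os.sched_setaffinity(0, {target})
print("allowed after:", os.sched_getaffinity(0))

# Restore the original mask so the rest of the program is unaffected.
os.sched_setaffinity(0, allowed)
```

The C-level interface described in the sched_setaffinity(2) man page works the same way, but takes a cpu_set_t mask built with the CPU_ZERO/CPU_SET macros.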

Carlo


Ian Wienand wrote:
> On Mon, Jun 26, 2006 at 01:55:16PM +1000, Carlo Sogono wrote:
>> I would like to find out how Linux distributes processes in an SMP-enabled box with n CPUs. Will the kernel "move" a process from one CPU to another if another CPU is idle?
>
> It may do.  Keeping processes close to where they last ran is called
> CPU affinity, and is obviously better for the cache.  See the man
> pages for sched_[set|get]affinity for the Linux interface to bind
> processes to specific CPUs.
>
> On a larger machine you also need to control node locality; for that
> you can use libnuma and numactl, which should come with your
> distribution.
>
> -i


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html