On Saturday 23 June 2007 00:47:27 Duncan wrote:
> Peter Humphrey <[EMAIL PROTECTED]> posted [EMAIL PROTECTED],
> excerpted below, on Fri, 22 Jun 2007 19:10:44 +0100:
> >> What I'm wondering, of course, is whether you have NUMA turned on when
> >> you shouldn't, or don't have core scheduling turned on when you
> >> should, thus artificially increasing the resistance to switching
> >> cores/cpus and causing the stickiness.
> >
> > I don't think so.
>
> Yeah, now that you've clarified that it's sockets and confirmed settings,
> you seem to have it right.

Here's an example of silly output from top. In this case I did this:
# schedtool -a 0x1 5280
to pin 5280 onto CPU0; then, when the loadings were no better, I restored 
the affinity to its original value:
# schedtool -a 0x3 5280
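
For what it's worth, schedtool's -a argument is a hex bitmask with one bit 
per logical CPU: 0x1 confines a process to CPU0, 0x3 allows CPU0 and CPU1. 
Here's a quick sketch of reading back the affinity the kernel has actually 
recorded (using the shell's own PID, since 5280 was specific to my session):

```shell
# schedtool -a 0x1 5280   # bit 0 only: confine PID 5280 to CPU0
# schedtool -a 0x3 5280   # bits 0 and 1: allow CPU0 or CPU1
# Read back the affinity mask the kernel holds for a process; $$ is
# this shell's own PID, standing in here for the boinc worker's PID.
mask=$(awk '/^Cpus_allowed:/ {print $2}' /proc/$$/status)
echo "affinity mask: 0x$mask"
```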

Here's what top showed then. Look at the %ni (nice) values on lines 3 and 4, 
and compare those with the %CPU and Processor fields of processes 5279 and 
5280. This has me deeply puzzled:

top - 09:04:59 up 23 min,  5 users,  load average: 3.60, 4.79, 3.91
Tasks: 124 total,   2 running, 122 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.3%us,  0.3%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.3%sy, 99.7%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4088968k total,  1822644k used,  2266324k free,   218296k buffers
Swap:  4176848k total,        0k used,  4176848k free,   735708k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
 5279 prh       34  19 60256  38m 3600 S   50  1.0   6:53.97 1 setiathome-5.12
 5280 prh       34  19 60252  38m 3612 S   50  1.0   6:54.08 0 setiathome-5.12
 3692 root      15   0  144m  63m 7564 S    0  1.6   0:36.92 0 X
 5272 prh       15   0  4464 2636 1692 S    0  0.1   0:00.70 1 boinc
 5286 prh       15   0 93016  21m  14m S    0  0.5   0:00.66 0 konsole
 5322 prh       15   0  145m  13m  10m S    0  0.3   0:03.01 0 gkrellm2
10357 root      15   0 10732 1340  964 R    0  0.0   0:00.01 1 top
[snip system processes]

I don't think this is a scheduling problem; it goes deeper than that: the 
kernel doesn't seem to have a consistent picture of which processor is which.
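
One way to probe that is to dump the kernel's own view of the CPU topology 
and see how the logical CPU numbers map onto physical sockets and cores; a 
rough sketch (on an SMP box, /proc/cpuinfo carries "physical id" and 
"core id" fields for each logical CPU):

```shell
# List each logical CPU with the physical package (socket) and core
# the kernel believes it belongs to.
awk -F': *' '
  /^processor/   { cpu = $2 }
  /^physical id/ { phys = $2 }
  /^core id/     { print "cpu " cpu " -> socket " phys ", core " $2 }
' /proc/cpuinfo
```

If the mapping printed there doesn't square with what top's Processor column 
suggests, that would back up the idea that the kernel's numbering is confused.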

-- 
Rgds
Peter Humphrey
Linux Counter 5290, Aug 93
-- 
[EMAIL PROTECTED] mailing list
