Bug#364030: top shows 98% for all CPUs on SMP system all the time

2006-04-24 Thread jagga
Hmm, I can't reproduce the bug anymore...
The machine had to be rebooted over the weekend (it is a server at the
company where I'm working) and now top's output is all right again:

Tasks:  86 total,   1 running,  85 sleeping,   0 stopped,   0 zombie
 Cpu0 :  0.1% us,  0.2% sy,  0.0% ni, 99.5% id,  0.2% wa,  0.0% hi,  0.0% si
 Cpu1 :  0.1% us,  0.1% sy,  0.0% ni, 99.8% id,  0.0% wa,  0.0% hi,  0.0% si
 Cpu2 :  0.1% us,  0.1% sy,  0.0% ni, 99.8% id,  0.0% wa,  0.0% hi,  0.0% si
 Cpu3 :  1.2% us,  0.0% sy,  0.0% ni, 98.7% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   2075860k total,   141680k used,  1934180k free,    29008k buffers
Swap:  2634652k total,        0k used,  2634652k free,    52388k cached

So I suppose you can close the bug. If top's weird behaviour comes up
again, I'll reopen it.

Thank you
Adalbert




Bug#364030: top shows 98% for all CPUs on SMP system all the time

2006-04-21 Thread Craig Small
On Fri, Apr 21, 2006 at 07:45:00AM +0200, Adalbert Dawid wrote:
 I installed Sarge on a Quad-Xeon (4 CPUs), 2.6.8-2-686-smp Debian kernel.
 top is showing a load of 98% for all four CPUs, although there are no
 relevant processes running on the system.
The top row comes from the /proc/stat file, if you see top doing this
again can you send me that file?

Early 2.6 kernels on Xeons have given some strange results, so let's get
that part sorted out first.
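For anyone following along: top derives those per-CPU percentages by sampling the "cpuN" counter lines in /proc/stat twice and dividing each field's delta by the total delta. A minimal sketch of that calculation follows; the two snapshot lines and the helper name are invented for illustration, not taken from the affected machine or from procps source.

```python
# Sketch of how top turns /proc/stat "cpuN" lines into percentages:
# sample the counters twice, divide each field's delta by the total
# delta. Field order in 2.6-era kernels: user, nice, system, idle,
# iowait, irq, softirq. Snapshot lines below are invented examples.

def cpu_percentages(before: str, after: str) -> dict:
    """Turn two samples of one /proc/stat cpu line into percentages."""
    b = [int(v) for v in before.split()[1:]]
    a = [int(v) for v in after.split()[1:]]
    deltas = [x - y for x, y in zip(a, b)]
    total = sum(deltas) or 1  # guard against two identical samples
    labels = ("us", "ni", "sy", "id", "wa", "hi", "si")
    return {lab: 100.0 * d / total for lab, d in zip(labels, deltas)}

# Two invented snapshots, 100 jiffies apart:
snap1 = "cpu0 1000 0 500 8000 100 0 0"
snap2 = "cpu0 1010 0 502 8086 102 0 0"
print(cpu_percentages(snap1, snap2))
# → {'us': 10.0, 'ni': 0.0, 'sy': 2.0, 'id': 86.0, 'wa': 2.0, 'hi': 0.0, 'si': 0.0}
```

Note that if a kernel stopped advancing the idle counter, the idle delta would be near zero and the remaining fields would absorb the whole total, which would look much like the 98-99% us readings reported here. That is speculation, though, which is why the actual /proc/stat contents are needed.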

 - Craig
-- 
Craig Small  GnuPG:1C1B D893 1418 2AF4 45EE  95CB C76C E5AC 12CA DFA5
Eye-Net Consulting http://www.enc.com.au/   MIEE Debian developer
csmall at : enc.com.au  ieee.org   debian.org


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Bug#364030: top shows 98% for all CPUs on SMP system all the time

2006-04-20 Thread Adalbert Dawid
Package: procps
Version: 1:3.2.1-2

I installed Sarge on a Quad-Xeon (4 CPUs), 2.6.8-2-686-smp Debian kernel.
top is showing a load of 98% for all four CPUs, although there are no
relevant processes running on the system.
Below, you can see the output of top sorted decreasingly by %CPU:

top - 07:31:11 up 2 days, 22:02,  4 users,  load average: 4.02, 4.00, 4.00
Tasks:  87 total,   2 running,  85 sleeping,   0 stopped,   0 zombie
 Cpu0 : 99.7% us,  0.3% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
 Cpu1 : 99.3% us,  0.7% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
 Cpu2 : 99.0% us,  1.0% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
 Cpu3 : 98.3% us,  1.3% sy,  0.0% ni,  0.3% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   2075860k total,  1500320k used,   575540k free,   104716k buffers
Swap:  2634652k total,        0k used,  2634652k free,   301584k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 5488 dawid     17   0  2072 1124 1852 S  0.3  0.1   0:04.51 top
    1 root      16   0  1504  512 1352 S  0.0  0.0   0:00.87 init
    2 root      RT   0     0    0    0 S  0.0  0.0   0:00.14 migration/0
    3 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
    4 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/1
    5 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/1
    6 root      RT   0     0    0    0 S  0.0  0.0   0:00.04 migration/2
    7 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/2
    8 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/3
    9 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/3
   10 root       5 -10     0    0    0 S  0.0  0.0   0:01.04 events/0
   11 root       5 -10     0    0    0 S  0.0  0.0   0:01.32 events/1
   12 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/2
   13 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 events/3
   14 root       9 -10     0    0    0 S  0.0  0.0   0:00.01 khelper
   15 root       7 -10     0    0    0 S  0.0  0.0   0:00.00 kacpid
   69 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/0
   70 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/1
   71 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/2
   72 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/3
   82 root      20   0     0    0    0 S  0.0  0.0   0:00.00 pdflush
   83 root      15   0     0    0    0 S  0.0  0.0   0:00.20 pdflush
   84 root      16   0     0    0    0 S  0.0  0.0   0:00.00 kswapd0
   85 root       8 -10     0    0    0 S  0.0  0.0   0:00.00 aio/0
   86 root       7 -10     0    0    0 S  0.0  0.0   0:00.00 aio/1
   87 root       8 -10     0    0    0 S  0.0  0.0   0:00.00 aio/2
   88 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 aio/3
  224 root      20   0     0    0    0 S  0.0  0.0   0:00.02 kseriod
  244 root      25   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_0
  245 root      15   0     0    0    0 S  0.0  0.0   0:00.00 ahc_dv_0
  251 root      19   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_1
  252 root      15   0     0    0    0 S  0.0  0.0   0:00.00 ahc_dv_1
  258 root      16   0     0    0    0 S  0.0  0.0   0:00.00 khubd
  330 root      21   0     0    0    0 S  0.0  0.0   0:00.00 pciehpd_event
  332 root      20   0     0    0    0 S  0.0  0.0   0:00.00 shpchpd_event
  365 root      15   0     0    0    0 S  0.0  0.0   0:00.53 kjournald
  430 root      12  -4  1500  468 1336 S  0.0  0.0   0:00.07 udevd
 2341 daemon    16   0  1620  560 1440 S  0.0  0.0   0:00.55 portmap
 2830 root      16   0  1560  628 1392 S  0.0  0.0   0:00.16 syslogd
 2833 root      16   0  2388 1496 1344 S  0.0  0.1   0:00.09 klogd
 2848 root      16   0 18308  832 1728 S  0.0  0.0   0:00.00 ypbind
 3033 root      16   0  1732  816 1552 S  0.0  0.0   0:00.42 automount
 3092 root      16   0  1728  804 1552 S  0.0  0.0   0:00.01 automount
 3159 root      16   0  1732  820 1552 S  0.0  0.0   0:00.26 automount
 3234 root      16   0  1732  808 1552 S  0.0  0.0   0:00.82 automount
 3306 root      16   0  1732  812 1552 S  0.0  0.0   0:00.15 automount
 3312 messageb  16   0  2100 1108 1928 S  0.0  0.1   0:00.01 dbus-daemon-1
 3317 hal       16   0  7160 5732 3020 S  0.0  0.3   0:44.84 hald
 3356 Debian-e  16   0  4448 1636 4068 S  0.0  0.1   0:00.00 exim4
 3362 root      20   0  1540  536 1384 S  0.0  0.0   0:00.00 inetd
 3378 lp        18   0  1768  696 1572 S  0.0  0.0   0:00.00 lpd
 3423 root      16   0  3472 1508 3092 S  0.0  0.1   0:00.01 sshd
 3453 root      16   0  4912 3440 2152 S  0.0  0.2   0:00.07 xfs


