The only time we saw idle SLES8 instances hog a system was when we brought up
our z/VM 4.3 system and its guests under a 4.2 first-level system at our DR
site. Twelve Linux instances pegged the CPU at 100%. We replaced the
first-level CP nucleus with 4.3, and then everything was normal. We didn't
investigate what the incompatibility was, but it shows that some seemingly
innocuous combinations of software can cause strange behavior.
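
More generally, an idle SLES8 guest can burn CPU just by taking the 100 Hz
timer interrupt while it sits idle, which keeps it active in CP's eyes. If
your kernel level includes the on-demand ("no HZ") timer support, a rough
sketch of checking and enabling it is (the sysctl path and its availability
depend on the service pack level, so treat this as a starting point, not a
guaranteed fix):

   # 1 = regular 100 Hz tick; 0 = stop the tick while the guest is idle
   cat /proc/sys/kernel/hz_timer
   echo 0 > /proc/sys/kernel/hz_timer

With the tick stopped, a truly idle guest should drop out of the dispatch
list instead of waking up 100 times a second.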

Ray Mrohs
Energy Information Administration
U.S. Department of Energy


-----Original Message-----
From: John Kaba [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 16, 2004 3:58 PM
To: [EMAIL PROTECTED]
Subject: Linux Performance Issue


We have just recently installed SuSE SLES8 under our VM3.1.0 system and are
experiencing some performance issues.  Our realtime monitor shows that Linux
is using approximately 65% of our CPU cycles, but we really have nothing
running.  I was told by our Linux guy to give it lots of memory, so I defined
it with 1G.  I'm wondering if that is my problem, or if I might have something
else set up wrong.  We have Tivoli Directory Server 5.2 fixpack 1, DB2 V8.2,
and WebSphere Express 5.1, but none of it is configured yet.  We have just
installed these products and they are not being used at all, yet we are still
seeing results like this:

---------------------------------------------------------------------------
Here are our Realtime Monitor results:

<>z/VM   CPU9672 SERIAL 069692    2G DATE 11/16/04 START 13:37:27 END 14:25:02<>

USERID-> LOGGED  %ACT  %PGW  %IOW  %SUS  %RUN  %ELG  :DSK  :XST  :SPL  :CPU  :PAG  :I/O  :STR
*TOTALS*   5544  14.7    .0    .0  75.6  24.3    .0   100   100   100   100   100   100   100
LINUX       192  71.3    .0    .0  43.7  56.2    .0    .0   100    .0  65.8    .0  30.9  55.7
VSE2         96   100    .0    .0  90.6   9.3    .0    .0    .0   1.7  14.5    .0  12.1   2.9
TCPIP        96   100    .0    .0   100    .0    .0    .0    .0    .0   7.8    .0  30.7    .9
VSEIPO       96   100    .0    .0  98.9   1.0    .0    .0    .0  11.0   5.5    .0  23.7   1.1

---------------------------------------------------------------------------
Here is a "TOP" listing from our Linux guest:

70 processes: 68 sleeping, 2 running, 0 zombie, 0 stopped
CPU0 states:  1.4% user,  4.4% system,  0.0% nice, 93.0% idle
CPU1 states: 14.2% user,  3.4% system,  0.0% nice, 81.2% idle
Mem:  1008504K av,  953116K used,   55388K free,       0K shrd,  138316K buff
Swap:  719896K av,       4K used,  719892K free                  668240K cached

  PID USER       PRI  NI  SIZE   RSS SHARE STAT %CPU %MEM  TIME COMMAND
10871 dmelende    15   0 11372   11M 10632 S     2.9  1.1  7:50 kdeinit
10905 dmelende    15   0 16804   16M 14572 S     2.5  1.6  3:26 kdeinit
23343 jkaba       15   0  1036  1032   828 R     1.5  0.1  0:00 top
10914 dmelende    15   0 12652   12M 11724 S     0.5  1.2  0:28 kdeinit
10890 dmelende    15   0 12980   12M 11848 R     0.3  1.2  1:20 kdeinit
23327 jkaba       15   0  2376  2376  2208 S     0.3  0.2  0:00 sshd
15843 root        15   0  2084  2080  1520 S     0.1  0.2  0:06 db2fmcd
    1 root        15   0   208   204   160 S     0.0  0.0  0:04 init
    2 root        0K   0     0     0     0 SW    0.0  0.0  0:00 migration_CPU0
    3 root        0K   0     0     0     0 SW    0.0  0.0  0:00 migration_CPU1
    4 root        25   0     0     0     0 SW    0.0  0.0  0:00 kmcheck
    5 root        15   0     0     0     0 SW    0.0  0.0  0:00 keventd
    6 root        34  19     0     0     0 SWN   0.0  0.0  8:50 ksoftirqd_CPU0
    7 root        34  19     0     0     0 SWN   0.0  0.0  8:36 ksoftirqd_CPU1
    8 root        15   0     0     0     0 SW    0.0  0.0  0:07 kswapd
    9 root        25   0     0     0     0 SW    0.0  0.0  0:00 bdflush
[EMAIL PROTECTED]:~>
---------------------------------------------------------------------------
Here is the directory entry for the Linux Guest:

USER LINUX xxxxxxx 1000M 2000M G
*-----------------------------------------------------
   ACCOUNT 442061 LINUX
   CPU 01 CPUID 111111
   CPU 02 CPUID 111222
   IPL CMS PARM AUTOCR
   IUCV ANY
   IUCV ALLOW
   MACHINE ESA 10
   OPTION MAINTCCW RMCHINFO
   SHARE REL 2000
   XSTORE 32M
   CONSOLE 01C0 3270 A
   SPECIAL 0808 CTCA
   SPECIAL 0809 CTCA
   SPOOL 000C 2540 READER *
   SPOOL 000D 2540 PUNCH A
   SPOOL 000E 3203 A
   LINK MAINT 0190 0190 RR
   LINK MAINT 019D 019D RR
   LINK MAINT 019E 019E RR
   LINK TCPMAINT 0592 0592 RR

---------------------------------------------------------------------------
Our Real Storage looks like this:
q stor
STORAGE = 2G
Ready; T=0.01/0.01 14:48:39
q xstore
XSTORE= 2048M online= 2048M
XSTORE= 2016M userid= SYSTEM usage= 3% retained= 0M pending= 0M
XSTORE MDC min=10M, max=64M, usage=3%
XSTORE= 32M userid= LINUX
XSTORE= 2016M userid=  (none)  max. attach= 2016M
Ready; T=0.01/0.01 14:48:43
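
For what it's worth, the CP-side view of what the guest is actually being
charged can also be pulled with the standard INDICATE commands (privilege
class permitting), e.g.:

   ind user linux
   ind load

which should show whether the 65% is really being charged to the LINUX
virtual machine or is mostly CP overhead.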

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
