With 1 GB of real memory and 512 MB per guest, you're probably measuring
the VM paging subsystem or some other overhead phenomenon, which is
probably tunable, not the Linux guest.  With ten 512 MB guests on 1 GB of
real memory, you may be overcommitted by more than 5 to 1, because VM
chews up a good bit of that memory too! The miracle would probably be to
do something like add more memory.
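The arithmetic behind that overcommit estimate can be sketched in a few lines of Perl (the guest count and sizes are taken from Matt's setup further down; the 5:1 figure is only the nominal ratio and ignores CP's own footprint, which pushes the real ratio higher):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $guests   = 10;     # concurrent Linux guests (from Matt's setup)
my $guest_mb = 512;    # virtual storage per guest, in MB
my $real_mb  = 1024;   # real memory on the 9672, in MB

# Nominal overcommit: total virtual storage vs. real memory
my $ratio = ($guests * $guest_mb) / $real_mb;
printf "nominal overcommit: %.1f : 1\n", $ratio;
# prints: nominal overcommit: 5.0 : 1
```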

I tried your perl_bench on an LPAR on a 9672 (G6) and on a guest on a 2064
(z116), and the results were very consistent (though I did change the 2nd
loop to 10000).

      SLES8 with SP2 applied
      IFL
      2 GB memory
      9672 bogomips 634.06 (G6)
      time ./perl_bench
            real  11.7-11.8s
            user  11.7-11.8s
            sys   0.010s

      EC guest
      non-IFL
      SLES8 with SP2 applied
      512 MB guest memory / 24 GB VM memory
      2064 bogomips 776.6 (z116)
      time ./perl_bench
            real  9.6s
            user  9.6s
            sys   0.0s

In both cases, the results were repeatable to within about 1%.


Regards, Jim
Linux S/390-zSeries Support, SEEL, IBM Silicon Valley Labs
t/l 543-4021, 408-463-4021, [EMAIL PROTECTED]
*** Grace Happens ***




Matt Lashley/SCO <[EMAIL PROTECTED]te.id.us>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]IST.EDU>
06/19/2003 08:32 AM
Please respond to Linux on 390 Port
To: [EMAIL PROTECTED]
cc:
Subject: Re: offloading CPU intensive loads from zLinux to cheaper pastures


Just in case anyone is interested --

Though the initial focus of this comparison seemed geared toward S/390
Linux running in an LPAR versus Linux on x86, since I no longer have an
S/390 Linux LPAR I ran the test under a few different VM guest machines.
The code I used is at the bottom of this note.  (Copied from John's
example.)

What I found strange was that the fastest times (posted under Results1)
came from the initial run.  Subsequent runs were always slower.  And when
I ran the code on all three machines at the same time (posted under
Results2), the times increased fairly dramatically.

Dear IBM - please come up with a way to level the trade-off between
processing power of an IFL and its cache.  I fear MTBF, throughput and
white space won't be enough in the near future.  The gap in TCO won't be as
wide forever.  You guys have worked miracles before...


The system:

9672 G6 machine with a single IFL
1 GB of real memory (for the whole VM system)
VM 4.3
Ten concurrently running Linux guest machines

Guest machine one:

OS: SLES8
Mem: 512M
CPU Share: 2000 (relative)
Heaviest app on the server: Domino 6.5 (not heavily used)

Results1:
tuxd1:~ # time ./perl_bench.perl

real    0m14.014s
user    0m14.010s
sys     0m0.010s

Results2:
tuxd1:~ # time ./perl_bench.perl

real    0m42.427s
user    0m42.350s
sys     0m0.080s

Guest machine two:

OS: SLES8
Mem: 700M
CPU Share: 3500 (relative)
Heaviest app(s) on the server: DB2, Information Builders WebFocus software,
Apache and Tomcat (not heavily used)

Results1:
tuxib:~ # time ./perl_bench.perl

real    0m14.470s
user    0m14.420s
sys     0m0.010s

Results2:
tuxib:~ # time ./perl_bench.perl

real    0m31.623s
user    0m31.320s
sys     0m0.080s

Guest machine three:

OS: SLES7
Mem: 128M
CPU Share: 1100 (relative)
Heaviest app(s) on the server: Apache, Mysql (moderately used)

Results1:
xapdvlp:~ # time ./perl_bench.perl
real    0m26.164s
user    0m26.160s
sys     0m0.000s

Results2:
xapdvlp:~ # time ./perl_bench.perl

real    0m58.682s
user    0m20.310s
sys     0m21.370s


Code:

#!/usr/bin/perl
#use integer;
$i = 0;
while ($i < 1000) {
    $j = 0;
    while ($j < 10000) {
        ++$j;
    }
    ++$i;
}
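Given the run-to-run variance noted above (fast first run, slower repeats), one way to smooth that out is to time several consecutive runs with Perl's core Benchmark module. A minimal sketch, wrapping the same busy loop in a sub so it can be timed repeatedly:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(timethis);

# The same busy loop as perl_bench, wrapped in a sub
sub busy_loop {
    my $i = 0;
    while ($i < 1000) {
        my $j = 0;
        while ($j < 10000) {
            ++$j;
        }
        ++$i;
    }
}

# Run the loop 3 times and report wallclock and CPU time;
# averaging across runs damps first-run cache effects
timethis(3, \&busy_loop);
```

Benchmark reports both wallclock and user/system CPU, so a large gap between them under load would point at CPU steal from the other guests rather than at Perl itself.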


Matt Lashley
Idaho State Controller's Office
