Ok, you convinced me that your DASD performance problem
was really a CPU problem.  If your processor hit 80% busy,
then the absolute best case would be 25% more work for
the run that hit 80%: at perfect scaling, 100/80 = 1.25
times the throughput.  You are very much limited by the
processor speed.
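
To put numbers on that ceiling, here is a quick back-of-the-envelope
sketch (the throughput figure is just your 21-client dbench number
reused as an example):

    # CPU headroom estimate: at 80% busy, perfect scaling
    # gives at most 100/80 = 1.25x the observed throughput.
    cpu_busy_pct = 80.0      # observed CPU utilization
    observed_mbps = 91.9     # example: the 21-client dbench result below
    headroom = 100.0 / cpu_busy_pct            # 1.25x at best
    best_case_mbps = observed_mbps * headroom  # ~114.9 MB/s ceiling
    print(f"best case: {best_case_mbps:.1f} MB/s "
          f"({(headroom - 1) * 100:.0f}% more work)")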

Your processor could only reach 100% if the overlap between
the I/O operations and the processing time was sufficient.
There is likely quite a bit of time where the guest was in
I/O wait and couldn't use more CPU.

This is easy to prove: run two Linux guests.  Then you will
have overlap, assuming they are driving different disks.  Your
CPU will go to 100% and you will get the maximum I/O throughput.
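
A rough sketch of that experiment in Python (the guest hostnames and
client count are my assumptions; it also assumes passwordless ssh and
dbench installed on each guest):

    # Run dbench on two guests at once so their I/O waits overlap.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    GUESTS = ["linux1", "linux2"]   # hypothetical guest hostnames
    CLIENTS = 12                    # dbench client count per guest

    def run(guest):
        # dbench prints its throughput summary on stdout
        return subprocess.run(["ssh", guest, "dbench", str(CLIENTS)],
                              capture_output=True, text=True).stdout

    with ThreadPoolExecutor(len(GUESTS)) as pool:
        for guest, out in zip(GUESTS, pool.map(run, GUESTS)):
            print(guest, out.splitlines()[-1] if out else "(no output)")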


If you really want to go down the I/O route, there are
other possibilities.  Is this a cache-friendly exercise?
If the data is all writes, was the write cache full?
If the data is reads, was it all in the cache?  And given
FICON speeds (having just seen an excellent presentation
on measuring FICON): how busy were the FICON channels?
The monitor data contains this information.
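
As a sanity check before digging into the monitor data, you can bound
channel utilization from the observed throughput; a minimal sketch,
assuming roughly 100 MB/s of payload per 1 Gbit FICON link (use 200
for FICON Express):

    # Rough FICON utilization estimate.  Real numbers come from the
    # monitor data; this just sanity-checks against the link rate.
    link_mbs = 100.0   # assumed payload MB/s per FICON link
    channels = 2       # two FICON channel paths to the ESS800
    observed = 92.0    # MB/s, e.g. the 18-client dbench run
    util = observed / (link_mbs * channels) * 100
    print(f"~{util:.0f}% aggregate channel utilization")  # ~46%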


>From: Jason Herne <[EMAIL PROTECTED]>
>
>Interesting thought, but the CPU is never 100% utilized while the
>testing is taking place.  At the higher end of the testing (21 to 24
>dbench clients) it gets up around 80%, but never 100%.
>
>- Jason Herne
>
>On Wed, 2003-12-10 at 21:07, Barton Robinson wrote:
>> I would say your two questions are very much related.
>> I'm betting that the processor on your z/800 is maxed out.
>> And since the Intel processor is approximately three times faster
>> than your z/800, guess what, it performs that much more work.
>> More cycles allow for more work.
>>
>> >I think we may have a performance problem.
>> >
>> >Our z/800 model 0LF is connected to an ESS800 with two Ficon channel
>> >paths.  We have been running dbench in Linux for quite a few weeks now
>> >and we are seeing numbers much lower than we expected.  Can someone
>> >comment on this?  Are these numbers about what we should be seeing, or
>> >is something wrong with our setup?
>> >
>> >We're running RHEL with kernel 2.4.21-1.1931.2.399.ent #1 SMP.  Linux is
>> >using a single 3390 DASD with ext3.
>> >
>> >Here are the numbers we're getting:
>> >All tests were run on a single guest with no other Linux guests and just
>> >a few z/VM service guests running.  No disk-intensive or CPU-intensive
>> >workload was running during the testing.
>> >
>> >dbench clients  avg throughput (MB/s)
>> >1               123.111
>> >3               116.729
>> >6               99.0626
>> >9               95.3577
>> >12              95.2825
>> >15              91.6009
>> >18              92.7745
>> >21              91.8808
>> >24              73.8885
>> >
>> >
>> >Here is what we get with our $1000 Dell Pentium 2.4 GHz server with a
>> >SCSI disk.
>> >
>> >dbench clients  avg throughput (MB/s)
>> >1               395.545
>> >3               281.957
>> >6               275.292
>> >9               285.756
>> >12              262.4333
>> >15              248.314
>> >18              237.879
>> >21              221.74355
>> >24              200.873
>> >
>> >
>> >As you can see, the $1k Dell is hammering our $250k mainframe.  We are
>> >currently trying to figure out why this is and hopefully fix the problem
>> >if there is indeed a problem...  Any comments or help that anyone could
>> >give would be appreciated.
>> >
>> >- Jason Herne ([EMAIL PROTECTED])
>> >  Clarkson University Open Source Institute
>> >  z/Server Administrator


"If you can't measure it, I'm Just NOT interested!"(tm)

/************************************************************/
Barton Robinson - CBW     Internet: [EMAIL PROTECTED]
Velocity Software, Inc    Mailing Address:
 196-D Castro Street       P.O. Box 390640
 Mountain View, CA 94041   Mountain View, CA 94039-0640

VM Performance Hotline:   650-964-8867
Fax: 650-964-9012         Web Page:  WWW.VELOCITY-SOFTWARE.COM
/************************************************************/
