Tom,

I believe the original question asked the following:

"What is the maximum CPU time difference for the same job,  between
repeated runs,  under different system load ?"

And you responded with your experience, which is a perfectly reasonable
thing to do. I work for a software vendor, so what we see in our shop
and what we see the same software do in a very large shop with fifty
processors and many LPARs may be quite different. 

Your experience suggests that your shop runs a fairly consistent mix of
jobs and workload, so the effects tend to cancel each other out. My
experience may simply be broader, reflecting the timings I have
observed over many years in many shops with a variety of hardware and
software.

As the engineers have pushed the limits of the technology farther and
farther, behaviors we have known in the past may no longer hold true.
But the engineers have made the right choice: they have minimized cost
and attempted to maximize performance, and for most work this has
vastly improved the performance of the overall workload and maintained
the viability of the platform. It has come at the expense of
repeatability, but that is not such a bad thing. Yes, an individual job
may get different timings from run to run, but the aggregate workload
should run well overall. You should, however, be aware of the penalties
of, for example, non-dedicated physical processors on an LPAR.
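That run-to-run variability is easy to see for yourself. Below is a
minimal, platform-neutral sketch (not any specific z/OS tooling) that
times an identical, deterministic workload several times using process
CPU time and reports the spread; on a busy machine the spread can be
surprisingly large even though the work is byte-for-byte the same:

```python
import statistics
import time

def workload(n=200_000):
    """A fixed, deterministic piece of work: sum of squares."""
    return sum(i * i for i in range(n))

def measure(runs=10):
    """Time the identical workload repeatedly with process CPU time."""
    samples = []
    for _ in range(runs):
        start = time.process_time()
        workload()
        samples.append(time.process_time() - start)
    return samples

samples = measure()
spread = (max(samples) - min(samples)) / min(samples) * 100
print(f"min={min(samples):.4f}s  max={max(samples):.4f}s  "
      f"mean={statistics.mean(samples):.4f}s  spread={spread:.1f}%")
```

Run it a few times under light and heavy system load and compare the
spread figures; the workload never changes, only its neighbors do.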
  
Tom Harper
IMS Utilities Development Team
Neon Enterprise Software
Sugar Land, TX

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Kelman, Tom
Sent: Monday, February 04, 2008 3:54 PM
To: [email protected]
Subject: Re: CPU time differences for the same job

Tom,

What you and others have said here has really got me thinking.  It's
been about 20 years since I was really an MVS system programmer getting
into the bowels of the operating system.  I've been a performance tuner
and now a capacity planner for many years.  However, what has been said
here about the variability of CPU time seems to throw all my ideas
about capacity planning and chargeback algorithms out the window.  If
CPU usage is that variable, how can one relate the usage of CPU within
a given application back to business units, which I'm constantly trying
to do?  Also, how does one charge equitably for CPU utilization if it
can vary so much just because there might be a cache miss caused by an
interrupt that is not necessarily the fault of the executing program?
The shop I'm in now is fairly small, with a z9 BC of 1004 MIPS and just
3 LPARs.  We aren't sysplexed (no CF) and there isn't even a shared JES
spool, but there is shared DASD.  Maybe it's the relative simplicity of
the shop, but I've found CPU times to be pretty consistent.
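One common answer to the equitable-chargeback worry is to bill on CPU
aggregated over a whole accounting period rather than per run, so that
one noisy execution cannot dominate the number. Here is a hedged
sketch of that idea; the business-unit names and sample figures are
entirely hypothetical (in practice the inputs would come from your
accounting records, e.g. SMF data):

```python
# Hypothetical per-run CPU seconds for jobs belonging to each business
# unit, collected over one billing period. Note the one outlier run.
cpu_samples = {
    "loans":    [41.2, 44.9, 40.8, 58.3],   # one run hit heavy interference
    "deposits": [12.1, 11.8, 12.4, 12.0],
}

def allocate(cost_pool, samples):
    """Split a fixed cost pool proportionally to each unit's total CPU
    over the period; aggregating many runs smooths per-run noise."""
    totals = {unit: sum(runs) for unit, runs in samples.items()}
    grand_total = sum(totals.values())
    return {unit: cost_pool * t / grand_total for unit, t in totals.items()}

bill = allocate(10_000.0, cpu_samples)
for unit, amount in sorted(bill.items()):
    print(f"{unit}: ${amount:,.2f}")
```

The allocation still reflects real relative consumption, but a single
cache-unlucky run moves the final bill far less than per-run pricing
would.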

Tom Kelman
Commerce Bank of Kansas City
(816) 760-7632

> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Tom Harper
> Sent: Monday, February 04, 2008 2:34 PM
> To: [email protected]
> Subject: Re: CPU time differences for the same job
> 
> Tom,
> 
> Yes, you are missing something here. Others have mentioned it, but I
> will mention it again: modern processors make extensive use of
> multi-level cache. When not interrupted, the cache does a great job of
> improving the performance of the processor by pre-fetching
> instructions and data and post-storing results. The key words here are
> "not interrupted". Interruptions can occur for normal interrupts in
> this z/OS system, such as I/O completing, etc., and can occur for
> other LPARs when the hypervisor interrupts the processor to dispatch
> another LPAR. Dedicated LPARs can remove those types of interrupts.
> But this is exactly the point: the job mix on this LPAR and other
> processors can and does affect CPU times significantly.
> 
> Another fact not well understood is the magnitude of the performance
> degradation when a cache miss occurs. Bob Rogers at the last SHARE
> mentioned that while cache misses in the first- and second-level
> caches were not too bad, cache misses that resulted in actual
> references to memory could be as much as 600 times slower. His
> comment: it's almost like doing an I/O to reference main storage,
> compared to getting a hit in first-level cache.
> 
> You may want to read up on this in the IBM Journal of Research and
> Development:
> 
> http://www.research.ibm.com/journal/rd51-12.html
> 
> Tom Harper
> IMS Utilities Development Team
> Neon Enterprise Software
> Sugar Land, TX
> 
> Tom Kelman wrote:
> 
> Am I missing something here?  Miklos is asking about the difference in
> CPU time between two runs of the same job step.  I would think that if
> the same program was processing the same data in the same way, the CPU
> time should be close to consistent.  Maybe not exactly the same, but
> there shouldn't be "several times another".  Now, the elapsed time
> could vary widely depending on the contentions that Tom has mentioned
> above.
> 
> 
> Tom Kelman
> Commerce Bank of Kansas City
> (816) 760-7632
> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html