> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Ted MacNEIL
> Sent: Tuesday, February 05, 2008 5:30 PM
> To: [email protected]
> Subject: Re: CPU time differences for the same job
> 
> >Please correct me if I am wrong, but much of this discussion leads me to
> believe that we are saying that there is essentially no reasonable way to
> predict or model performance for a given application process.
> 
> I have been a performance/capacity analyst for 27 years, and worrying
> about CPU has become moot.
> Yes, there are still examples of sub-optimal algorithms, but after 40+
> years of hardware advances, CPU reduction rarely improves performance
> any more (IMO).

Not in the context of overnight batch windows increasingly filled with
literally tens of thousands of jobs.  In that context, every CPU
microsecond counts.

> I/O and resource contention are the biggest/heaviest hitters.
> Optimise those and you'll get a bigger bang for your effort.
> 
> I'm constantly amazed at the number of files out there with 'bad'
> blocksizes, no buffering, etc. These will be a better place to spend
> your time.
> 
> Resource contention is caused either by a mismatch of shared versus
> exclusive intent, or by scheduling.
> Again, you can do better if you spend your effort here (again, IMO).

Granted.  That is always the first place I look, and you are right,
there is much more bang for the buck there.  But algorithm improvement
is still a real factor.
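The "bad blocksizes" point above is easy to quantify. A rough sketch in Python (purely illustrative; it assumes the commonly cited 3390 half-track block capacity of 27,998 bytes, fixed 80-byte records, and a made-up file size, and it ignores chained scheduling and buffering):

```python
import math

# Assumption: half-track blocking on a 3390 allows blocks of up to
# 27,998 bytes (a commonly cited figure).
HALF_TRACK = 27998
LRECL = 80              # fixed-length 80-byte records
NUM_RECORDS = 1_000_000  # hypothetical file size

# Largest blocksize that is a multiple of LRECL and fits in a half track.
recs_per_block = HALF_TRACK // LRECL      # 349 records per block
blksize = recs_per_block * LRECL          # BLKSIZE=27920

# Physical blocks (hence roughly the I/Os) needed to read the file,
# blocked versus unblocked.
blocked_ios = math.ceil(NUM_RECORDS / recs_per_block)
unblocked_ios = NUM_RECORDS

print(blksize, blocked_ios, unblocked_ios)  # 27920 2866 1000000
```

Three orders of magnitude fewer I/Os from a one-line JCL change is why this is usually the first place to look.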

> If you cannot show a CPU savings outside the bandwidth of the variance
> of measurement, then I posit that you are likely wasting time and
> effort.

Perhaps that is so.  But IMHO the measurements provided by the system
ought to be able to account for *any* system-caused variance without any
guesswork or approximation on our part.
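One way to make "outside the bandwidth of the variance" concrete is a simple significance check on repeated measurements. A hedged sketch (the run times are invented numbers, and the two-standard-error rule is just one reasonable criterion, not a formal test):

```python
import statistics

def saving_is_significant(before, after, k=2.0):
    """Return True if the mean CPU saving exceeds k standard errors
    of the difference -- a rough rule of thumb, not a formal t-test."""
    mean_saving = statistics.mean(before) - statistics.mean(after)
    std_err = (statistics.variance(before) / len(before)
               + statistics.variance(after) / len(after)) ** 0.5
    return mean_saving > k * std_err

# Hypothetical CPU seconds from repeated runs of the same job.
before = [41.2, 43.8, 40.9, 44.1, 42.5]
after  = [40.8, 43.5, 41.0, 43.9, 42.1]  # small "improvement"

# The 0.24-second mean saving is well inside the run-to-run noise.
print(saving_is_significant(before, after))  # False
```

This is exactly the trap described below: a real improvement (such as better buffering) can be swamped by run-to-run variance, so the measurement alone cannot confirm it.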

And when the variance hides even buffering improvements (a recent
experience I have had), then what?  We know the buffering improvements
*should* produce better results, but how can we prove it?

In my recent experience I had to point to Strobe-reported reductions in
I/O wait times to "prove" that the buffering changes were an
improvement.  I'm only now beginning to understand why that happened.

Peter

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html