Your response presupposes the change has already been promoted to
production.  That promotion doesn't happen unless a performance
improvement has been demonstrated in the test environment.  The question
at hand is how to measure the improvement in a repeatable way in a
non-optimal test environment.

If variability in the measurement tool hides the improvement, is it
worth doing?  Maybe it would be in the production environment, but the
variability in the test environment makes it impossible to prove in
advance.

Tom's suggestion of 5 interleaved runs, throwing out the high and low
values and averaging the remaining 3, seems like it would at least
help smooth out the variability a bit.
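For anyone who wants to script that, here is a minimal sketch of the
trimmed-mean calculation Tom described; the CPU-second figures below are
purely hypothetical, stand-ins for whatever your measurement tool (SMF
records, job log CPU times, etc.) reports for each run:

```python
def trimmed_mean(samples):
    """Drop the highest and lowest sample, average the rest.

    With Tom's suggested 5 interleaved runs, this averages the
    middle 3 values and damps outliers from a noisy test LPAR.
    """
    if len(samples) < 3:
        raise ValueError("need at least 3 samples to trim high and low")
    kept = sorted(samples)[1:-1]  # discard the min and the max
    return sum(kept) / len(kept)

# Hypothetical CPU seconds from 5 interleaved runs of the same job,
# before and after the buffering change.
before = [12.4, 11.9, 13.1, 12.2, 12.6]
after = [11.1, 11.4, 10.9, 12.8, 11.2]

print(trimmed_mean(before))  # averages 12.2, 12.4, 12.6
print(trimmed_mean(after))   # averages 11.1, 11.2, 11.4
```

If even the trimmed means overlap run to run, that is itself useful
evidence that the improvement can't be proven in that environment.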

Peter

> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Ted MacNEIL
> Sent: Tuesday, February 05, 2008 5:59 PM
> To: [email protected]
> Subject: Re: CPU time differences for the same job
> 
> >We know the buffering improvements *should* produce better results,
> >but how can we prove it?
> 
> The final proof has always been:
> 
> "Is response better"?
> "Is throughput improved"?
> 
> If the answer is yes, you have won.
> If not ...

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
