I did not see the original posting. If you are looking for a decrease in elapsed (clock) time, then that is the only metric of interest. However, as others have suggested, clock time can be expected to vary with the transaction load and the overall load on the environment. You might want to build a reference base and pick a "fuzzy" average, i.e. a value range x to y: any measurement below x or above y would be a significant change; any measurement between x and y would be equivalent.
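That reference-base idea can be sketched in a few lines. This is only an illustration, not anything from the thread: the function names are mine, and I have arbitrarily defined the fuzzy band as the mean of prior runs plus or minus k standard deviations; any other definition of x and y would work the same way.

```python
import statistics

def baseline_range(samples, k=2.0):
    """Build a 'fuzzy' reference band (x, y) from prior elapsed times.

    Here the band is mean +/- k sample standard deviations.
    """
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return (mean - k * sd, mean + k * sd)

def classify(elapsed, band):
    """Inside the band the run is 'equivalent' to the baseline;
    outside it the change is significant (better or worse)."""
    x, y = band
    if x <= elapsed <= y:
        return "equivalent"
    return "significant change"

# Prior runs of the job (elapsed seconds) form the reference base.
history = [312, 298, 305, 320, 301, 309, 315, 296]
band = baseline_range(history)
```

A new measurement well below x would then count as a real improvement rather than normal run-to-run noise.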
Typical commercial batch applications are almost impossible to multithread. If that is an option, then the most common practice is to split the work into two (or more) jobs that run concurrently. There are too many unknowns and too many variables to predict what will happen.

For example, consider a job step with two phases, both of which are pure CPU, where there is minimal contention for the CPU and the CPU has more than one engine. Also suppose the CPU time and clock time were the same. After setting up a multitasking environment and running the two phases concurrently, the CPU time would stay roughly the same or increase slightly, but the clock time would be a little more than half the baseline. If, on the other hand, the job step were I/O bound, then I would not be surprised to see an overall increase in elapsed time as well as CPU time, due to the added overhead and internal resource contention. Compare that to tuning files, which in typical situations can yield a dramatic decrease in clock time (80%) as well as a measurable decrease in CPU time (5%).

Finally, from a bottom-line perspective, it might cost more to develop, test, and maintain multitasking programs than to simply upgrade the CPU.

HTH and good luck

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of John Giltner
Sent: Monday, November 10, 2008 4:54 PM
To: [email protected]
Subject: Re: Measuring performance with job elapsed time

Roded Bahat wrote:
> Hello,
> We're rewriting a batch application to work with more than one task,
> rather than the single task it uses now. The basic idea is to divide
> the work into two or more tasks to allow parallel processing and
> decrease the job's elapsed running time.
>
> The improvement we're expecting to see is a decrease in the elapsed
> time. The CPU time will most likely not change much, as the amount of
> work being done remains the same (except maybe for the task-management
> overhead).
>
> Does anyone have any idea of how I could accurately and empirically
> measure the performance gain in a situation such as this?
> Unfortunately, we don't have a sterile environment to produce the
> before and after elapsed times and conclude the gained performance
> percentage from that.
>
> Thanks a lot,
> Roded
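The elapsed-versus-CPU distinction running through the reply above can be sketched in a small, hedged example. None of this maps directly to z/OS job accounting; it just uses Python's two clocks (perf_counter for elapsed/wall time, process_time for CPU time), and the simulated "phases" are my own stand-ins for a job step's work.

```python
import time

def cpu_phase(n=200_000):
    """A pure-CPU phase: elapsed time tracks CPU time closely."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_phase(seconds=0.2):
    """An I/O-bound phase: the process waits, consuming elapsed
    time but almost no CPU time."""
    time.sleep(seconds)

def measure(fn, *args):
    """Return (elapsed_seconds, cpu_seconds) for one call."""
    t0, c0 = time.perf_counter(), time.process_time()
    fn(*args)
    return time.perf_counter() - t0, time.process_time() - c0

elapsed, cpu = measure(io_phase)
# For the I/O phase, elapsed far exceeds cpu: that gap is exactly
# what running work concurrently (or tuning the files) can recover,
# while the CPU time itself stays roughly the same.
```

For the pure-CPU phase the two numbers come out nearly equal, which is why parallelizing it can roughly halve the clock time without reducing the CPU time.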

