Charles et al.,

Using the TCB time reported in IEF032I to measure and analyse the net CPU cost of program execution is a bit like a detective investigating a crime without leaving the office.
As others have said, there is more than one bucket used to measure the CPU time of a job or step. If you use MXG, then you know that Barry totals all of those buckets into the CPUTM variable. Watch a job sit in privileged dispatch status while going through space recovery at allocation for an idea of why the TCB time in IEF032I is inadequate.

If you don't have MXG (why not?) or equivalent software, then I suggest you enhance IEFACTRT to provide the information, or dig around the CBT tape for an up-to-date SMF type 30 report program.

RON HAWKINS
Director, Ipsicsopt Pty Ltd (ACN: 627 705 971)
m: +61 400029610 | t: +1 4085625415 | f: +1 4087912585

-----Original Message-----
From: IBM Mainframe Discussion List <[email protected]> On Behalf Of Charles Mills
Sent: Wednesday, 7 August 2019 02:25
To: [email protected]
Subject: [IBM-MAIN] CPU time cost of dynamic allocation

I have a batch program that does several SVC 99 allocations. They are fairly vanilla temporary dataset allocations, or at least that is how I think of them. I am seeing a CPU time of about .0025 CPU seconds per allocation on a z196. Is this what others would expect, or does it seem high?

OTOH, I have an IEFBR14 batch job on the same machine that allocates 15 temporary datasets in JCL. The entire job, lock, stock and barrel, uses (according to IEF032I) .00 CPU seconds.

Can anyone explain why JCL allocation is apparently much more CPU efficient than SVC 99 allocation?

Charles

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
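[Editor's note] The "sum all the buckets" idea above (what MXG does for its CPUTM variable) can be sketched for readers rolling their own SMF type 30 report. This is a minimal illustration, not MXG's actual derivation: the exact field list, which buckets belong in a given total, and the units (hundredths of a second here) are assumptions to verify against the SMF manual for your z/OS level.

```python
# Sketch: total CPU time across several SMF type 30 buckets instead of
# relying on step TCB time alone (the IEF032I view).
# Field names follow common SMF30 naming (SMF30CPT = step TCB, SMF30CPS =
# step SRB, SMF30ICU/SMF30ISB = initiator TCB/SRB, SMF30IIP = I/O interrupt);
# treat this list and the hundredths-of-a-second scaling as assumptions.

CPU_BUCKETS = ("SMF30CPT", "SMF30CPS", "SMF30ICU", "SMF30ISB", "SMF30IIP")

def total_cpu_seconds(smf30_fields: dict) -> float:
    """Sum whichever CPU buckets are present in the record, in seconds."""
    hundredths = sum(smf30_fields.get(name, 0) for name in CPU_BUCKETS)
    return hundredths / 100.0

# Hypothetical record: allocation work charged to the initiator's buckets
# (SMF30ICU/SMF30ISB) never shows up in a TCB-only number.
record = {"SMF30CPT": 12, "SMF30CPS": 3, "SMF30ICU": 40, "SMF30ISB": 5}
print(total_cpu_seconds(record))   # all buckets summed
print(record["SMF30CPT"] / 100.0)  # TCB-only view understates the cost
```

The point of the example: JCL allocation done by the initiator lands in buckets that a TCB-time-only report simply never adds up, which is one reason the IEFBR14 job can appear to cost .00 seconds.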
