I brought this up on IBM-MAIN many years ago after noticing several system traces, from large customers, with less than one screen's worth of good trace data. At that time, the maximum trace table size was 999K and I recommended setting that size (if you could afford the real memory). 1M is now the default...

Just last night I was asked to take a look at a dump -- from a large customer running on the newest, biggest "iron" -- that had only 0.045 seconds of good trace data:

******** Trace data is not available from all processors before this time.
0008 0269 008C3448 I/O 02013 00000000_0D91ABCA 10104007 6F932740 0C000100 00000080 00000000 0009 0269 08:22:55.743019140 40 07544000 80000000 024FE070 00100000 00000000
.
. (two measly pages of trace data)
.
0012 0226 084B8B00 SSRV 118 A6476E64 01E6A110 8007611A 08245B88 Suspend 08:22:55.788338762 6E
                                                  00000000
******** Trace data is not available from all processors after this time.

Don't forget to revisit your MVS trace table allocations whenever you upgrade your hardware. Luckily, major increases in processor speed and in the number of logical processors have been accompanied by similar increases in LPAR real memory allocations, making larger trace tables practical.

The TRACE ST command now supports up to 9G per processor! LOL! I'm not in any way suggesting that value be used, I'm just pointing out that it's possible.
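For anyone who hasn't touched this in a while, the general shape of the commands is below. The 64M value is purely illustrative (not a recommendation); check the z/OS MVS System Commands book for the exact syntax and limits on your release:

```
D TRACE                  display current system trace status and table size
TRACE ST,64M             set the per-processor system trace table to 64 MB
```

To make the setting survive an IPL, the same TRACE ST command can be issued automatically at IPL time (e.g., from your automation or startup command member).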

IMHO, trace tables should be sized to hold _at least_ one full clock second of data on your zIIPs or whatever your fastest processors might be. Others might have different ROTs.
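A quick way to apply that ROT: scale your current table size by the ratio of the coverage you want to the coverage you actually observed in a dump. The 1M default and the 0.045-second span come from the example above; the linear-scaling assumption (entries arriving at the same rate as in the dump) is mine, so treat the result as a starting point, not gospel:

```python
def recommended_trace_table_size(current_bytes, observed_seconds, target_seconds=1.0):
    """Estimate the per-processor trace table size needed to hold
    target_seconds of trace data, assuming entries arrive at the
    same rate observed in the dump."""
    return int(current_bytes * target_seconds / observed_seconds)

# Example: the 1 MB default table that held only 0.045 seconds of data
size = recommended_trace_table_size(1 * 1024 * 1024, 0.045)
print(f"{size / (1024 * 1024):.0f} MB")  # roughly 22 MB for one full second
```

So even a seemingly generous bump to 8M or 16M would still fall short of one clock second on that box.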

Just do it! Your debugging teams will thank you. :)

--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
http://www.phoenixsoftware.com/

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
