Thanks again to all who made suggestions. We think the problem turns out to be that one section of code makes a wide-ranging call (that is, it calls a function that itself does a Whole Lot Of Stuff) on every iteration when it shouldn't. That would explain the relative lack of hotspots: the extra CPU is spread across dozens of other routines. Why the LE memory usage % dropped so much while the overall CPU use didn't is still a bit of a mystery, and it will likely stay that way until we get this other issue ironed out. It may be that (a) SHA via CPACF still uses some CPU, (b) the capture ratio is low, and (c) there was some real improvement; all of these may have combined to make the measurement look worse than it is.
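As a rough illustration (not the actual code involved; all names here are invented), the pattern described above is a loop-invariant, expensive call made once per iteration, and the fix is to hoist it out of the loop:

```python
def expensive_setup():
    # Stands in for the wide-ranging call that "does a Whole Lot Of Stuff";
    # in profiling, its cost shows up smeared across many callees rather
    # than as a single hotspot.
    return sum(range(1000))

def per_iteration(n):
    # Before: the wide-ranging call happens on every iteration.
    return sum(expensive_setup() for _ in range(n))

def hoisted(n):
    # After: call once, reuse the (loop-invariant) result.
    v = expensive_setup()
    return sum(v for _ in range(n))
```

Both versions compute the same result; only the second avoids paying the setup cost n times.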
...phsiii

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
