Vadimir,

   You asked about CPU usage, so responses about elapsed time didn't
directly address your question.

    If your application can, in a test environment, reproduce very similar
CPU usage (and perhaps elapsed/response time too) under identical or similar
scenarios, then the CPU impact of malloc/new/free/delete/etc. can be quantified.
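
    A minimal, generic C++ sketch of the kind of measurement I mean (the
iteration count and 64-byte request size are invented for illustration);
clock() reports processor time consumed, not wall-clock time, on most C
runtimes:

    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    int main()
    {
        const long iterations = 1000000L;  /* invented figure; size it to your workload */

        std::clock_t start = std::clock();

        /* stand-in for the suspect allocation-heavy logic path */
        for (long i = 0; i < iterations; ++i) {
            char *p = static_cast<char *>(std::malloc(64));
            if (p != 0) {
                p[0] = 'x';                /* touch the storage */
                std::free(p);
            }
        }

        double cpuSeconds = double(std::clock() - start) / CLOCKS_PER_SEC;
        std::printf("CPU seconds in allocation loop: %.3f\n", cpuSeconds);
        return 0;
    }

    Run the same shape of loop against a build that pools or reuses storage
and the difference in CPU seconds is your before/after delta.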

    From there you can determine whether there is enough potential payoff to
warrant making code-level or architectural-level changes.

    Working for various ISVs, it has been my experience that huge CPU
savings (which often bring nice response/elapsed-time improvements with them)
are frequently there for the taking with C/C++, etc. on mainframes. If you have
CPU cycles to spare, don't worry. But if you are like many/most mainframe
shops, CPU overhead is a bottleneck. If a shop is already at or near 100%
CPU, then CPU hogs needn't apply.

    The above occurs more frequently in cases where perfectly performing
Unix/Windows applications are ported to the mainframe, only to shock
mainframers, who are traditionally anal about CPU cycles.

    Memory management is often a good target for revision. So are C
strings and C++ string classes.
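
    As a purely illustrative example of the kind of revision I mean (the
field list and the 256-byte reserve are invented), growing a std::string by
repeated appends can trigger a fresh allocation and copy every few appends,
whereas reserving capacity once keeps it to a single allocation:

    #include <string>
    #include <vector>

    /* Illustrative only: nothing here comes from your code base */
    std::string buildRecord(const std::vector<std::string> &fields)
    {
        std::string record;
        record.reserve(256);          /* one allocation up front instead of many */
        for (std::vector<std::string>::size_type i = 0; i < fields.size(); ++i) {
            record += fields[i];      /* appends now grow within reserved storage */
            record += ',';
        }
        return record;
    }

    The same thinking applies to C strings: a strcat() inside a loop re-scans
the destination string on every call, and a malloc() per concatenation piles
up allocator overhead.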

    If you do not have access to Strobe (a great tool) or another profiler,
there are still ways to measure CPU impact. Take a suspect logic path and
make a code change that invokes it twice as often (assuming such a change
would not be destructive). If overall CPU usage grows by 20%, you have a
reasonable guesstimate that 20% of the original CPU usage can be attributed
to that logic path.
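
    A sketch of the idea, with placeholder names (suspectPath and
processTransaction are not from any real code base):

    /* Placeholder names only; purely illustrative */
    void suspectPath(void)
    {
        /* the logic path whose CPU cost you want to estimate */
    }

    void processTransaction(void)
    {
        suspectPath();
        suspectPath();   /* temporary second call, for measurement only;
                            back it out as soon as the test run is done */

        /* ... rest of normal processing ... */
    }

    For example, if the baseline run burns 500 CPU seconds and the doubled
run burns 600, the extra 100 seconds is roughly the cost of one pass through
that path, i.e. about 20% of the original total.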

    I have some tips on Search390.com ("Code Rage" and "CPU Performance
Anxiety", etc.) as well as a recent cover article in the March 2003 NASPA Tech
Support magazine titled "Slay the CPU Cyclops! Schedule your Application for
MIPS Reduction Surgery".

Dasvidanya! - Jim
