There's very interesting stuff in the SMF 113 records that can help explain why 
things work the way they do.  I'm not sure you can use them to actually optimize 
anything, but understanding the data in there is important for understanding 
the behavior you're seeing.  And of course you should use them to help inform 
your capacity planning for processor upgrades.

A few interesting tidbits I've seen recently:

You can see the effect of HiperDispatch sending work mostly to the higher 
polarity processors in an LPAR.  The "overhead" associated with the lower 
polarity processors' cache misses may call into question whether you really 
want more than one low polarity processor online.  (I think probably not.  
Maybe not even one, depending on the importance of the work that the processor 
is being brought online to handle.)

Deviations from the norm for metrics like CPI and percent problem state can 
also highlight periods where workloads changed.  E.g., a looping process may 
very well drive the CPI down and the percent problem state up.
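The kind of deviation check described above can be sketched as follows. This is a minimal illustration, not an SMF parser: the interval tuples are hypothetical stand-ins for counts extracted from SMF 113 records (total cycles, total instructions, problem-state cycles), and the outlier threshold is arbitrary.

```python
# Sketch: flag intervals whose CPI or percent-problem-state deviates from
# the norm.  The values below are made up; in practice they would come from
# the basic counter set carried in SMF 113 records.
from statistics import mean, stdev

# (cycles, instructions, problem_state_cycles) per SMF interval -- hypothetical
intervals = [
    (1_000_000,   500_000,   200_000),
    (1_050_000,   520_000,   210_000),
    (  980_000,   490_000,   195_000),
    (2_400_000, 2_000_000, 1_600_000),  # e.g. a looping problem-state task
]

cpi = [c / i for c, i, _ in intervals]               # cycles per instruction
pct_prob = [100.0 * p / c for c, _, p in intervals]  # % of cycles in problem state

def outliers(series, z=1.0):
    """Indexes of points more than z standard deviations from the mean."""
    m, s = mean(series), stdev(series)
    return [ix for ix, v in enumerate(series) if s and abs(v - m) > z * s]

print("CPI per interval: ", [round(v, 2) for v in cpi])
print("% problem state:  ", [round(v, 1) for v in pct_prob])
print("CPI outliers:     ", outliers(cpi))
print("Problem-state outliers:", outliers(pct_prob))
```

With the made-up numbers above, the last interval stands out in both series: its CPI drops (a tight loop tends to stay in cache) while its problem-state percentage jumps, which is exactly the looping-workload signature described.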

Examining the data before and after a processor change can help explain the 
CPU time variations you're seeing.  Hopefully those variations match what you 
expected from zPCR.

The metrics also help explain why your CPU time for the same workload might 
vary across time periods.  The inter-LPAR effects on the cache can drive up the 
CPI on lightly used LPARs when the larger LPARs are busy.  And of course you 
can look at the details of where the memory accesses are being satisfied to 
help understand why the CPI is changing.
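Breaking down where accesses are satisfied can be sketched like this. The level names and counts here are hypothetical stand-ins for the extended (cache) counters in SMF 113; the actual counter numbers and their meanings vary by machine generation, so treat this only as an illustration of the arithmetic.

```python
# Sketch: show what fraction of L1 cache misses were sourced from each
# level of the memory hierarchy.  The counts are hypothetical; real data
# would come from the SMF 113 extended counter set for your machine.
sourced = {
    "L2 (on-chip)":  800_000,
    "L3 (on-node)":  150_000,
    "L4 (off-node)":  40_000,
    "memory":         10_000,
}

total = sum(sourced.values())
for level, count in sourced.items():
    print(f"{level:12s} {100.0 * count / total:5.1f}%")
```

The further down the hierarchy the misses are being satisfied, the more cycles each one costs, so a shift in this distribution (say, when a busy neighboring LPAR is pushing your lines out of shared cache) shows up directly as a higher CPI.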

All good fascinating stuff.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
