On Tue, 4 Mar 2008 09:48:22 -0600, Anthony Fletcher <[EMAIL PROTECTED]> wrote:

>We have a GRS environment using XCF (i.e. not using a CF) with three LPARs
>connected. We had changed the RESMIL value to OFF to try and improve
>responsiveness. That worked, but the XCFAS address space CPU consumption
>went up. We decided to change the RESMIL value to 0 since that does leave
>the tuning mechanism working. The manual indicates that RESMIL can be
>changed on any LPAR independent of the others. One LPAR was changed, leaving
>the other two set to OFF. It looks as if changing the RESMIL value to 0 in
>one LPAR has reduced the CPU consumption in the XCFAS address space in all
>LPARs.
>
>Question is: Is that to be expected, or do we need to look for something else?

That behavior makes perfect sense. CPU percentages are calculated (loosely)
as usage per unit time. When all of the systems had RESMIL=OFF, GRS
basically played 'hot potato' with the RSA, sending it off to the next
system as fast as it could. Therefore, the number of RSA sends per unit
time was maximized (certainly faster than one per millisecond), which also
maximized the CPU consumed by XCFAS.

Changing one system to RESMIL=0 turned on the tuning _for that one system_.
Each time an empty RSA was received, GRS tuned the effective RESMIL up by a
millisecond (up to 4, IIRC). This effectively slows down the RSA for the
entire complex, since it gets 'parked' on that system for a (comparatively)
long time. That reduces the number of RSA sends/receives around the ENTIRE
complex, thus reducing the load on XCFAS everywhere.

Scott Fagen
Enterprise Systems Management

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

