You've had some great responses. I have seen this exact situation in the past.

As others have said, I'd start by looking at the relative percentage of its 
share the production LPAR is normally consuming. For example, if the prod LPAR 
has a share equal to 60% of the box but normally consumes 80%, it's normally 
using 133% of its share. If the other LPARs demand their 40%, prod is 
restricted to its 60%, which in this example is a 25% reduction from what it 
normally gets. That's certainly enough to be noticeable, and would explain the 
reported symptoms. 

If soft caps are in place, the situation gets more complicated and you have 
to look at the R4H utilization over time. My guess from the described symptoms 
is that's probably not the situation here, though.
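For anyone following along, the R4H number is just a rolling four-hour average of MSU consumption; soft capping kicks in when it exceeds the defined capacity. A minimal sketch of the rolling window, using made-up 5-minute interval data rather than a real SMF feed:

```python
from collections import deque

def r4h(msu_samples, intervals_per_4h=48):
    """Rolling 4-hour average over 5-minute MSU samples (48 per 4 hours)."""
    window = deque(maxlen=intervals_per_4h)
    averages = []
    for msu in msu_samples:
        window.append(msu)
        averages.append(sum(window) / len(window))
    return averages

# A dev LPAR burst can hold the group's R4H above its defined capacity
# long after the burst ends, because the window remembers it for 4 hours.
samples = [100] * 10 + [400] * 6 + [100] * 10   # burst of bad SQL
print(r4h(samples)[-1])
```

The point of the sketch: even after the burst stops, the trailing average stays elevated, which is why capping can linger past the misbehaving SQL itself.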

Some remediation possibilities depending on the situation and the business 
requirements for the workloads in question...

-- Fix the weights so that the production LPAR's weight assignment meets its 
normal requirement.

-- If the problem is occurring when soft capping, and the dev LPAR's R4H is 
driving up the group's R4H, causing the group (including prod) to be capped, 
consider adding a defined cap limit for the dev LPAR to limit the amount of the 
group capacity it can consume. An LPAR can have a defined cap limit as well as 
belong to a capacity group.

-- Since these seem to happen regularly, truly are "bad" SQLs, and occur in 
dev, consider using DB2's governor in the dev region to cancel threads that 
have consumed some significant amount of resources.

-- Consider putting the DEV DDF work into a WLM resource group to limit the 
amount of CPU that work can consume.
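The governor idea above boils down to "cancel any thread that blows past a CPU threshold." In DB2 this is done declaratively via the resource limit facility (an ASUTIME limit), not in application code; purely as an illustration of the logic, with made-up thread data:

```python
# Hypothetical thread snapshot; in DB2 the resource limit facility enforces
# this via an ASUTIME limit, so this is just the concept in miniature.
threads = [
    {"id": "T1", "cpu_su": 1_200},
    {"id": "T2", "cpu_su": 9_500_000},   # runaway tablespace scan
    {"id": "T3", "cpu_su": 800},
]

ASUTIME_LIMIT = 1_000_000  # service units allowed before cancellation

def to_cancel(threads, limit=ASUTIME_LIMIT):
    """Return the IDs of threads that have exceeded the CPU limit."""
    return [t["id"] for t in threads if t["cpu_su"] > limit]

print(to_cancel(threads))
```

The appeal is that this replaces the manual find-and-cancel cycle described later in the thread with an automatic limit in the dev region only.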

Note that I always recommend having multiple periods for DDF, with the last 
period at a very low importance (likely discretionary for non-prod work), so 
that those bad SQLs that will eventually show up on your system and consume 
excessive amounts of resources will have very limited impact on other workloads 
on the system. While that's a good practice, it won't stop dev DDF work from 
consuming essentially the entire capacity assigned to the dev LPAR. That's 
where the resource group may be useful. 

Scott Chapman

On Wed, 17 Dec 2014 23:40:18 -0500, Linda Hagedorn <[email protected]> 
wrote:

>The bad SQL is usually tablespace scans and/or Cartesian products.  They are 
>relatively easy to identify and cancel.  
>
>MVS reports the stress in prod and the high CPU use on the dev LPAR, and I 
>find the misbehaving thread and cancel it.  The MVS reports then return to 
>normal.  
>
>The perplexing part is that the bad SQL running on LPARA is affecting its own 
>LPAR and the major LPAR on the CEC.  Its own LPAR I can understand, but the 
>other one too? 
>
>The prefetches (dynamic, list, and sequential) are zIIP eligible in DB2 V10, 
>so the comment about the bad SQL taking the zIIPs from prod is possible.  I'm 
>adding that to my list as something to check.  
>
>The I/O comment is interesting.  I'll add it to my list to watch for also.  
>
>I'm hitting the books tonight.  Thanks for all the ideas and references. 
>
>Sent from my iPad
>
>>> On Dec 17, 2014, at 9:48 PM, Clark Morris <[email protected]> wrote:
>>> 
>>> On 17 Dec 2014 14:13:46 -0800, in bit.listserv.ibm-main you wrote:
>>> 
>>> I'm pretty good with DB2, and Craig is wonderful.  
>>> 
>>> It's the intricacies of MVS performance I need to bring in focus.  I have a 
>>> lot of reading and research to do so I can collect appropriate doc the next 
>>> time one of these hits.  
>> 
>> After reading most of this thread, two things hit this retired systems
>> programmer.  The first is that with all DASD shared, runaway bad SQL
>> may be doing a number on your disk performance due to contention, and I
>> would look at I/O on both production and test.  DB2 and other experts
>> who are more familiar with current DASD technology and contention can
>> give more help.  The other is the role played on both LPARs by the use
>> of zAAP and zIIP processors, which run at full speed and reduced cost
>> for selected workloads.  The bad SQL may be eligible to run on those
>> processors, taking away their availability from production.  This is
>> just a guess based on a dangerous (inadequate) amount of knowledge.
>> 
>> Clark Morris
>>> 
>>> Linda 
>>> Sent from my iPhone
>>> 
>>>> On Dec 17, 2014, at 2:34 PM, Ed Finnell 
>>>> <[email protected]> wrote:
>>>> 
>>>> Craig Mullins's DB2 books are really educational in scope and insight (and 
>>>> heavy). A fundamental understanding of the interoperation is key to 
>>>> identifying and tuning problems. He was with Platinum when he first began 
>>>> the series and moved on after the acquisition by CA. (He and other vendors 
>>>> were presenting at our ADUG conference on 9/11/01. Haven't seen him since 
>>>> but still get the updates.)
>>>> 
>>>> The CA tools are really good at pinpointing problems. Detector and Log 
>>>> Analyzer are key. For SQL there's the SQL Analyzer (priced) component. 
>>>> Sometimes, if it's third-party software, there may be known issues with 
>>>> certain transactions or areas of interaction.
>>>> 
>>>> For continual knowledge, www.share.org is a good source. Developers and 
>>>> installations give current and timely advice on fitting metrics to 
>>>> machines. The proceedings for the last three Events are kept online 
>>>> without signup. The new RMF stuff for the EC12 and Flash drives is pretty 
>>>> awesome. 
>>>> 
>>>> 
>>>> In a message dated 12/17/2014 11:00:20 A.M. Central Standard Time, 
>>>> [email protected] writes:
>>>> 
>>>> I lurk on IBM-Main, and I'm always awed by the knowledge here.  
>>>> 
>>>> 
>>>> 
>>>> ----------------------------------------------------------------------
>>>> For IBM-MAIN subscribe / signoff / archive access instructions,
>>>> send email to [email protected] with the message: INFO IBM-MAIN
>>> 
>> 
>

