Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-19 Thread Martin Packer
Well there IS code before we hop on the enclave. Authorisation code and 
the like. Not being that good a DB2 faker  :-) I don't know if there are 
user exits involved.

But, yes, it's possible the code in DIST other than Enclave is 
significant. You can always tell for Type 30: TCB and SRB vs Enclave SRB 
(part of TCB as it's Preemptible-Class).

I'm just aware that this would in my experience (FWIW) be a first: 
Non-enclave in DIST being a problem. But life is full of firsts. :-)
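A rough Python sketch of that Type 30 arithmetic. Everything here is illustrative: the SMF 30 field names (SMF30CPT, SMF30CPS, SMF30ENC) and the sample numbers are my assumptions, not anything stated in the thread - check them against the SMF manual before relying on this. The only point is that non-enclave DIST CPU falls out as a residue.

```python
# Back-of-envelope version of the Type 30 split described above: enclave SRB
# time is preemptible-class and so is assumed included in the TCB figure.
# Field names are an assumption of the usual SMF 30 mapping.

def dist_cpu_split(smf30cpt: float, smf30cps: float, smf30enc: float):
    """Return (enclave, non_enclave) CPU seconds for an address space.

    smf30cpt -- TCB time, assumed to include enclave time
    smf30cps -- non-preemptible SRB time
    smf30enc -- enclave CPU time
    """
    non_enclave = (smf30cpt - smf30enc) + smf30cps
    return smf30enc, non_enclave

# Hypothetical DIST interval: 120s TCB, 5s SRB, of which 110s was enclave work.
enclave, other = dist_cpu_split(smf30cpt=120.0, smf30cps=5.0, smf30enc=110.0)
print(enclave, other)  # 110.0 15.0 -> non-enclave DIST time is the small residue
```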

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   Shane Ginnane ibm-m...@tpg.com.au
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   19/12/2014 03:43
Subject:Re: MIPS, CEC, LPARs, PRISM, WLM
Sent by:IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU



On Wed, 17 Dec 2014 16:12:31 +, Martin Packer wrote:

Sounds like you also need to familiarise yourself with how DIST works -
meaning enclaves that run the actual DDF SQL. As I say, unlikely DIST
itself but rather more likely the DDF is in play.

Hmmm - that might be the common lore.
DIST may indeed be the culprit rather than DDF enclaves.
Where I have seen this there are a *LOT* of enclaves being created - I 
presumed (without a lot of hard evidence) that the cost of setup/teardown 
was appearing in DIST. And, of course, DIST is an important address space, 
and runaway CPU consumption there hurts (almost) everyone else.

The OP indicated cancelling a single thread relieves the situation - 
should be easy enough to track down the likely suspect.

Shane ..

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-18 Thread Scott Chapman
You've had some great responses. I have seen this exact situation in the past.

As others have said, I'd start by looking at the relative percentage of its 
share the production LPAR is normally consuming. For example, if the prod LPAR 
has a share equal to 60% of the box but normally consumes 80%, it's normally 
using 133% of its share. If the other LPARs demand their 40%, it's going to be 
restricted to its 60%, which in this example is a 25% reduction from what it 
normally gets. Certainly enough to be noticeable, and it would explain the 
reported symptoms. 
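That share arithmetic can be checked in a few lines - a minimal sketch using only the example's numbers (60% share, 80% typical use), not measurements from anywhere:

```python
# Quick check of the share arithmetic above.

def entitled_share(weight: float, total_weight: float) -> float:
    """Fraction of the CEC an LPAR's weight entitles it to under contention."""
    return weight / total_weight

def squeeze(normal_use: float, share: float) -> float:
    """Relative reduction when usage is pushed back down to the entitled share."""
    return (normal_use - share) / normal_use

share = entitled_share(60, 60 + 40)      # prod weight 60 of 100 -> 60% of the box
usage = 0.80                             # but prod normally consumes 80% of the box
print(round(usage / share, 2))           # 1.33 -> running at 133% of its share
print(round(squeeze(usage, share), 2))   # 0.25 -> a 25% cut when squeezed back
```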

If soft caps are in place then the situation gets more complicated and you have 
to look at the R4H (rolling four-hour average) utilization over time. My guess 
from the described symptoms is that this probably isn't the situation here, though.
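A toy illustration of why the R4H matters: soft capping (defined capacity) acts on a rolling four-hour average of MSU consumption, so a burst of runaway work raises the cap trigger only gradually. The 5-minute sample interval and the MSU numbers below are assumptions for the sketch, not how the hardware actually samples.

```python
from collections import deque

WINDOW = 48  # assume 5-minute samples: 48 x 5 minutes = 4 hours

def r4h_stream(msu_samples):
    """Yield the rolling 4-hour MSU average after each sample arrives."""
    window = deque(maxlen=WINDOW)
    for msu in msu_samples:
        window.append(msu)
        yield sum(window) / len(window)

# Four quiet hours at 100 MSU, then one hour of runaway dev SQL at 400 MSU:
# the R4H climbs only gradually, so capping impact lags the bad SQL rather
# than tracking it minute by minute.
averages = list(r4h_stream([100] * 48 + [400] * 12))
print(round(averages[47], 1), round(averages[-1], 1))  # 100.0 175.0
```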

Some remediation possibilities depending on the situation and the business 
requirements for the workloads in question...

-- Fix the weights so  that the production LPAR's weight assignment meets its 
normal requirement.

-- If the problem is occurring when soft capping, and the dev LPAR's R4H is 
driving up the group's R4H, causing the group (including prod) to be capped, 
consider adding a defined cap limit for the dev LPAR to limit the amount of the 
group capacity it can consume. An LPAR can have a defined cap limit as well as 
belong to a capacity group.

-- Since these seem to happen regularly and truly are bad SQLs, and in dev, 
consider using DB2's governor in the dev region to cancel threads that have 
consumed some significant amount of resources.

-- Consider putting the DEV DDF work into a WLM resource group to limit the 
amount of CPU that work can consume.

Note I always recommend having multiple periods for DDF, with the last period 
being a very low importance (likely discretionary for non-prod work), so that 
those bad SQLs that will eventually show up on your system and consume 
excessive amounts of resources will have very limited impact on other workloads 
on the system. While that's a good practice, it won't stop dev DDF work from 
consuming (essentially) the dev LPAR's entire assigned capacity. That's where 
the resource group may be useful. 

Scott Chapman

On Wed, 17 Dec 2014 23:40:18 -0500, Linda Hagedorn linda.haged...@gmail.com 
wrote:

The bad SQL is usually tablespace scans and/or Cartesian products.  They are 
relatively easy to identify and cancel.  

MVS reports the stress in prod and the high CPU use on the dev LPAR; I find 
the misbehaving thread and cancel it.  MVS then reports that things return to 
normal.  

The perplexing part is that the bad SQL running on LPARA is affecting both its 
own LPAR and the major LPAR on the CEC.  Its own LPAR I can understand, but the 
other one too? 

The prefetches (dynamic, list, and sequential) are zIIP eligible in DB2 V10, 
so the comment about the bad SQL taking the zIIPs from prod is possible.  I'm 
adding that to my list as something to check.  

The I/O comment is interesting. I'll add it to my list to watch for also.  

I'm hitting the books tonight.  Thanks for all the ideas and references. 

Sent from my iPad

 On Dec 17, 2014, at 9:48 PM, Clark Morris cfmpub...@ns.sympatico.ca wrote:
 
 On 17 Dec 2014 14:13:46 -0800, in bit.listserv.ibm-main you wrote:
 
 I'm pretty good with DB2, and Craig is wonderful.  
 
 It's the intricacies of MVS performance I need to bring into focus.  I have a 
 lot of reading and research to do so I can collect appropriate doc the next 
 time one of these hits.  
 
 After reading most of this thread, two things hit this retired systems
 programmer.  The first is that with all DASD shared, runaway bad SQL
 may be doing a number on your disk performance due to contention, and I
 would look at I/O on both production and test.  DB2 and other experts
 who are more familiar with current DASD technology and contention can
 give more help.  The other is the role played on both LPARs by the use
 of zAAP and zIIP processors, which run at full speed and reduced cost
 for selected workloads.  The bad SQL may be eligible to run on those
 processors, taking away their availability from production.  This is
 just a guess based on a dangerous (inadequate) amount of knowledge.
 
 Clark Morris
 
 Linda 
 Sent from my iPhone
 
 On Dec 17, 2014, at 2:34 PM, Ed Finnell 
 000248cce9f3-dmarc-requ...@listserv.ua.edu wrote:
 
 Craig Mullins' DB2 books are really educational in scope and insight (and 
 heavy). A fundamental understanding of the interoperation is key to 
 identifying and tuning problems. He was with Platinum when he first began the 
 series and moved on after the acquisition by CA. (He and other vendors were
 presenting at our ADUG conference on 9/11/01. Haven't seen him since but 
 still get the updates.)
 
 The CA tools are really good at pinpointing problems. Detector and Log 
 Analyzer are key. For SQL there's the SQL Analyzer (priced) component. Sometimes 
 if it's third-party software there may be known issues with certain  
 

Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-18 Thread Shane Ginnane
On Wed, 17 Dec 2014 16:12:31 +, Martin Packer wrote:

Sounds like you also need to familiarise yourself with how DIST works -
meaning enclaves that run the actual DDF SQL. As I say, unlikely DIST
itself but rather more likely the DDF is in play.

Hmmm - that might be the common lore.
DIST may indeed be the culprit rather than DDF enclaves.
Where I have seen this there are a *LOT* of enclaves being created - I presumed 
(without a lot of hard evidence) that the cost of setup/teardown was appearing 
in DIST. And, of course, DIST is an important address space, and runaway CPU 
consumption there hurts (almost) everyone else.

The OP indicated cancelling a single thread relieves the situation - should be 
easy enough to track down the likely suspect.

Shane ..



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread R.S.

On 2014-12-17 at 13:44, L Hagedorn wrote:

Hi IBM-MAIN,

We have a situation with multiple LPARS on a CEC, running DB2 asids prod, test, 
dev.

It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and 
stealing MIPS from the PROD LPAR and affecting production.

Others claim this is not possible due to Prism.

Will someone provide an overview of how Prism influences or controls MIPS usage 
(CPU) across LPARs sharing the same CEC, what are the limiting or controlling 
factors (if any), and how can the behavior be measured or reported upon so I 
can explain this with supporting doc?   Does WLM play a part in sharing CPU 
across LPARs?

Any information or referrals to doc is appreciated.

(some simplifications done)
First, it's easier to talk about system A in LPAR A and system B in 
LPAR B. Never mind what applications are running under a given system.


You decide what percentage of total CPU power is assigned to the 
LPARs, e.g. 70% for LPAR A, 20% for LPAR B, and 10% for LPAR C.
However, when LPAR A is not busy but LPAR B is busy, it can happen that LPAR B 
will consume more than you assigned. That's OK, because the "stealing" LPAR 
consumes only spare CPU cycles. You can prevent it (why would you?) by using capping.
In fact, the weights are in use when all LPARs want to consume more than 
assigned.



HTH

--
Radoslaw Skorupka
Lodz, Poland











Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Elardus Engelbrecht
Linda Hagedorn wrote:

We have a situation with multiple LPARS on a CEC, running DB2 asids prod, 
test, dev.  

What z/OS levels? What levels of DB2 on all those LPARS?

It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and 
stealing MIPS from the PROD LPAR and affecting production.  

Based on what were those claims made? I would like to see for example, RMF or 
SMF data/reports or some other tools used to prove that claims.

Are you having response time problems?

Others claim this is not possible due to Prism.  
Will someone provide an overview of how Prism influences or controls MIPS 
usage (CPU) across LPARs sharing the same CEC, what are the limiting or 
controlling factors (if any), and how can the behavior be measured or reported 
upon so I can explain this with supporting doc?   Does WLM play a part in 
sharing CPU across LPARs?

Are your LPARS hard or soft capped? What are the CPU and MSU allocations to 
your LPARS?

Any information or referrals to doc is appreciated.  

I'll leave that to the real performance gurus on IBM-MAIN. 

Groete / Greetings
Elardus Engelbrecht



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Staller, Allan
The answer is, it depends.

First, there is no priority across LPARS. All LPARS are dispatched equally 
according to the LPAR weights.  

For example, if LPARA is weighted at 80 and LPARB is weighted at 20, the 
following occurs:

If LPARA wants 85% and LPARB wants 10% (total 95%) everybody is happy and goes 
on their merry way.

If LPARA wants 85% and LPARB wants 20% (total 105%) LPARB will get 20% and LPARA 
will be squeezed to 80%.

If LPARA wants 50% and LPARB wants 40% (total 90%) everybody is happy and goes 
on their merry way.

The LPARA weight represents a guaranteed minimum proportion (note: LPAR 
weights need not total to 100. The proportion is relative.)

All of the above occurs when capping (either hard or soft) is not present.
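The examples above can be modelled in a few lines - a simplified sketch of the outcome, not of how PR/SM actually dispatches logical CPs: under contention each LPAR is held to its weight-proportional share, and spare cycles go to whoever still wants them.

```python
# Toy model of the weight arithmetic: share 100% of a CEC among LPARs by
# demand, bounded by weight-proportional shares, redistributing spare cycles.

def allocate(demands: dict, weights: dict) -> dict:
    """Return each LPAR's allocated fraction of the CEC (demands as fractions)."""
    alloc = {name: 0.0 for name in demands}
    unsat = set(demands)          # LPARs whose demand is not yet satisfied
    spare = 1.0                   # fraction of the CEC still unassigned
    while unsat and spare > 1e-9:
        total_w = sum(weights[n] for n in unsat)
        # Each unsatisfied LPAR may take up to its weight share of the spare.
        grants = {n: min(demands[n] - alloc[n], spare * weights[n] / total_w)
                  for n in unsat}
        for n, g in grants.items():
            alloc[n] += g
        spare -= sum(grants.values())
        unsat = {n for n in unsat if demands[n] - alloc[n] > 1e-9}
    return alloc

# Contended case: A wants 85%, B wants 20% -> A is squeezed back to its 80%.
print(allocate({'A': 0.85, 'B': 0.20}, {'A': 80, 'B': 20}))
# Uncontended case: A wants 85%, B wants 10% -> both get what they asked for.
print(allocate({'A': 0.85, 'B': 0.10}, {'A': 80, 'B': 20}))
```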

Software capping can occur with resource groups.
Hardware capping can occur with group capacity limits.

This is a complex subject and much more than can be covered in a short e-mail. 

If you have not already done so, I suggest you obtain a copy of and read the 
PR/SM Planning Guide. The most recent version I can find is SB10-7155-01 and is 
located here:
https://www-304.ibm.com/support/docview.wss?uid=isg202e537c11be0929c8525776100663570aid=1
 (watch the wrap).

RMF Monitor I (batch) has an excellent CPU report. This will also include the 
PARTITION DATA REPORT. I will refer you to the fine manuals for details.

WLM *may* reach across LPAR boundaries. In fact, it is designed to do this. 
However, if the DVLP LPAR is not in the same sysplex, WLM cannot be a factor.

As others have pointed out, what evidence is there that the runaway task is 
affecting production  (factual, not conjecture!)?

HTH,

snip
We have a situation with multiple LPARS on a CEC, running DB2 asids prod, test, 
dev.  

It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and 
stealing MIPS from the PROD LPAR and affecting production.  

Others claim this is not possible due to Prism.  

Will someone provide an overview of how Prism influences or controls MIPS usage 
(CPU) across LPARs sharing the same CEC, what are the limiting or controlling 
factors (if any), and how can the behavior be measured or reported upon so I 
can explain this with supporting doc?   Does WLM play a part in sharing CPU 
across LPARs?
/snip



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread L Hagedorn
MVS Sysprogs reported the situation saying DB2DIST on LPAR A was affecting LPAR 
B.  I didn't ask them for doc, but will next time.  

This is a runtime issue, unrelated to reported changes.  That being said, I'm 
aware WLM is changed frequently and without broadcast notification.

This appears from time to time.  I am new here (a couple of months) so cannot 
determine how long it's been happening - could be months, could be years.  

I need to understand the inner workings of PR/SM, WLM, and the LPARs, and, from 
the postings today, more about soft and hard capping.  

All input is appreciated.  I have a learning curve in this area, so telling me 
to look for something in RMF is welcome, along with any display commands where 
I can see the caps.  I have Mainview. 

Sent from my iPhone

On Dec 17, 2014, at 8:35 AM, Lizette Koehler stars...@mindspring.com wrote:

 I would ask first, how do  you know it is affecting the Prod LPAR.
 
 What evidence, RMF Reports, Performance monitors, etc. ?
 
 There should be, I think, some data that could explicitly show that is what is
 happening.
 
 Is it all the time, some of the time?  Is it a specific time when the prod
 LPAR is affected?  How long has it been going on?  Is there a point in time
 when this started?  If so, what changes occurred at that point?
 
 Lizette
 
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of L Hagedorn
 Sent: Wednesday, December 17, 2014 5:44 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: MIPS, CEC, LPARs, PRISM, WLM
 
 Hi IBM-MAIN,
 
 We have a situation with multiple LPARS on a CEC, running DB2 asids prod,
 test,
 dev.
 
 It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and
 stealing MIPS from the PROD LPAR and affecting production.
 
 Others claim this is not possible due to Prism.
 
 Will someone provide an overview of how Prism influences or controls MIPS
 usage
 (CPU) across LPARs sharing the same CEC, what are the limiting or
 controlling
 factors (if any), and how can the behavior be measured or reported upon so
 I can
 explain this with supporting doc?   Does WLM play a part in sharing CPU
 across
 LPARs?
 
 Any information or referrals to doc is appreciated.
 
 Thanks, Linda
 



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread L Hagedorn
Thank you for the extensive information and examples.  

I will be hitting the books.  

Can you expand on this example:  

 If LPARA wants 85% and LPARB wants 20% (total 105%), LPARB will get 20% and 
LPARA will be squeezed to 80%.

It seems counterintuitive to me and I'd like to understand.  Let's say LPARA is 
prod - they should get most of the resources.   Why would LPARA be squeezed 
instead of LPARB? 

Sent from my iPhone

On Dec 17, 2014, at 9:07 AM, Staller, Allan allan.stal...@kbmg.com wrote:

 The answer is, it depends.
 
 First, there is no priority across LPARS. All LPARS are dispatched 
 equally according to the LPAR weights.  
 
 For example, if LPARA is weighted at 80 and LPARB is weighted at 20, the 
 following occurs:
 
 If LPARA wants 85% and LPARB wants 10% (total 95%) everybody is happy and 
 goes on their merry way.
 
 If LPARA wants 85% and LPARB wants 20% (total 105%) LPARB will get 20% and 
 LPARA will be squeezed to 80%.
 
 If LPARA wants 50% and LPARB wants 40% (total 90%) everybody is happy and 
 goes on their merry way.
 
 The LPARA weight represents a guaranteed minimum proportion (note: LPAR 
 weights need not total to 100. The proportion is relative.)
 
 All of the above occurs when capping (either hard or soft) is not present.
 
 Software capping can occur with resource groups.
 Hardware capping can occur with group capacity limits.
 
 This is a complex subject and much more than can be covered in a short 
 e-mail. 
 
 If you have not already done so, I suggest you obtain a copy of and read the 
 PR/SM Planning Guide. The most recent version I can find is SB10-7155-01 and 
 is located here:
 https://www-304.ibm.com/support/docview.wss?uid=isg202e537c11be0929c8525776100663570aid=1
  (watch the wrap).
 
 RMF Monitor I (batch) has an excellent CPU report. This will also include the 
 PARTITION DATA REPORT. I will refer you to the fine manuals for details.
 
 WLM *may* reach across LPAR boundaries. In fact, it is designed to do this. 
 However, if the DVLP LPAR is not in the same sysplex, WLM cannot be a factor.
 
 As others have pointed out, what evidence is there that the runaway task is 
 affecting production  (factual, not conjecture!)?
 
 HTH,
 
 snip
 We have a situation with multiple LPARS on a CEC, running DB2 asids prod, 
 test, dev.  
 
 It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and 
 stealing MIPS from the PROD LPAR and affecting production.  
 
 Others claim this is not possible due to Prism.  
 
 Will someone provide an overview of how Prism influences or controls MIPS 
 usage (CPU) across LPARs sharing the same CEC, what are the limiting or 
 controlling factors (if any), and how can the behavior be measured or 
 reported upon so I can explain this with supporting doc?   Does WLM play a 
 part in sharing CPU across LPARs?
 /snip
 



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Martin Packer
I don't see the DIST ADDRESS SPACE going runaway. What is feasible is DDF 
work - separately WLM classified as it should be - taking a big chunk of 
CPU. (And that work runs IN the DIST address space but hopefully not at 
its dispatching priority.) DIST address space itself going rogue would 
sound like a bug.

Lots of things determine whether this would lead to a separate Production 
LPAR being impacted, most notably whether Prod LPAR is above its share (as 
determined by weights). With DDF work this could affect either the GCP 
pool or the zIIP pool.

There ARE interactions between WLM and PR/SM, but generally they don't apply. 
Such interactions include IRD Weight Management, if it is active.

And I see you already have several other answers - so it looks like you 
came to the right place. :-)

Cheers, Martin




From:   L Hagedorn linda.haged...@gmail.com
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   17/12/2014 12:54
Subject:MIPS, CEC, LPARs, PRISM, WLM
Sent by:IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU



Hi IBM-MAIN,

We have a situation with multiple LPARS on a CEC, running DB2 asids prod, 
test, dev. 

It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and 
stealing MIPS from the PROD LPAR and affecting production. 

Others claim this is not possible due to Prism. 

Will someone provide an overview of how Prism influences or controls MIPS 
usage (CPU) across LPARs sharing the same CEC, what are the limiting or 
controlling factors (if any), and how can the behavior be measured or 
reported upon so I can explain this with supporting doc?   Does WLM play a 
part in sharing CPU across LPARs?

Any information or referrals to doc is appreciated. 

Thanks, Linda 






Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Mark Pace
You may ask IBM or your Business Partner to do a CP3000 study.  This can
uncover issues with WLM.

On Wed, Dec 17, 2014 at 11:36 AM, Ted MacNEIL eamacn...@yahoo.ca wrote:

 PR/SM (LPAR) doesn't know PROD from TEST.
 It only knows weight.
 If you have set it up for LPARA to have 80% and LPARB to have 20%, that's
 what they get in times of contention. No more, no less.
 20% is LPARB's allotment, so it's okay. 85% is above LPARA's allotment, so
 it's scaled back to 80%.
 This is how it's always worked.


 -
 -teD
 -
   Original Message
 From: L Hagedorn
 Sent: Wednesday, December 17, 2014 11:15
 To: IBM-MAIN@LISTSERV.UA.EDU
 Reply To: IBM Mainframe Discussion List
 Subject: Re: MIPS, CEC, LPARs, PRISM, WLM

 Thank you for the extensive information and examples.

 I will be hitting the books.

 Can you expand on this example:

 If LPARA wants 85% and LPARB wants 20% (total 105%), LPARB will get 20% and
 LPARA will be squeezed to 80%.

 It seems counterintuitive to me and I'd like to understand. Let's say
 LPARA is prod - they should get most of the resources. Why would LPARA be
 squeezed instead of LPARB?

 Sent from my iPhone

 On Dec 17, 2014, at 9:07 AM, Staller, Allan allan.stal...@kbmg.com
 wrote:

  The answer is, it depends.
 
  First, there is no priority across LPARS. All LPARS are dispatched
 equally according to the LPAR weights.
 
  For example, if LPARA is weighted at 80 and LPARB is weighted at 20,
 the following occurs:
 
  If LPARA wants 85% and LPARB wants 10% (total 95%) everybody is happy
 and goes on their merry way.
 
  If LPARA wants 85% and LPARB wants 20% (total 105%) LPARB will get 20%
 and LPARA will be squeezed to 80%.
 
  If LPARA wants 50% and LPARB wants 40% (total 90%) everybody is happy
 and goes on their merry way.
 
  The LPARA weight represents a guaranteed minimum proportion (note:
 LPAR weights need not total to 100. The proportion is relative.)
 
  All of the above occurs when capping (either hard or soft) is not
 present.
 
  Software capping can occur with resource groups.
  Hardware capping can occur with group capacity limits.
 
  This is a complex subject and much more than can be covered in a short
 e-mail.
 
  If you have not already done so, I suggest you obtain a copy of and read
 the PR/SM Planning Guide. The most recent version I can find is
 SB10-7155-01 and is located here:
 
 https://www-304.ibm.com/support/docview.wss?uid=isg202e537c11be0929c8525776100663570aid=1
 (watch the wrap).
 
  RMF Monitor I (batch) has an excellent CPU report. This will also
 include the PARTITION DATA REPORT. I will refer you to the fine manuals
 for details.
 
  WLM *may* reach across LPAR boundaries. In fact, it is designed to do
 this. However, if the DVLP LPAR is not in the same sysplex, WLM cannot be a
 factor.
 
  As others have pointed out, what evidence is there that the runaway
 task is affecting production (factual, not conjecture!)?
 
  HTH,
 
  snip
  We have a situation with multiple LPARS on a CEC, running DB2 asids
 prod, test, dev.
 
  It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU
 and stealing MIPS from the PROD LPAR and affecting production.
 
  Others claim this is not possible due to Prism.
 
  Will someone provide an overview of how Prism influences or controls
 MIPS usage (CPU) across LPARs sharing the same CEC, what are the limiting
 or controlling factors (if any), and how can the behavior be measured or
 reported upon so I can explain this with supporting doc? Does WLM play a
 part in sharing CPU across LPARs?
  /snip
 





-- 
The postings on this site are my own and don’t necessarily represent
Mainline’s positions or opinions

Mark D Pace
Senior Systems Engineer
Mainline Information Systems



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Martin Packer
Sounds like you also need to familiarise yourself with how DIST works - 
meaning enclaves that run the actual DDF SQL. As I say, unlikely DIST 
itself but rather more likely the DDF is in play.

Cheers, Martin




From:   L Hagedorn linda.haged...@gmail.com
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   17/12/2014 16:08
Subject:Re: MIPS, CEC, LPARs, PRISM, WLM
Sent by:IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU



MVS Sysprogs reported the situation saying DB2DIST on LPAR A was affecting 
LPAR B.  I didn't ask them for doc, but will next time. 

This is a runtime issue, unrelated to reported changes.  That being said, 
I'm aware WLM is changed frequently and without broadcast notification.

This appears from time to time.  I am new here (a couple of months) so 
cannot determine how long it's been happening - could be months, could be 
years. 

I need to understand the inner workings of PR/SM, WLM, and the LPARs, and 
from the postings today, more about soft and hard capping. 

All input is appreciated.  I have a learning curve in this area, so 
telling me to look for something in RMF is welcome, along with any display 
commands where I can see the caps.  I have Mainview. 

Sent from my iPhone

On Dec 17, 2014, at 8:35 AM, Lizette Koehler stars...@mindspring.com 
wrote:

 I would ask first, how do  you know it is affecting the Prod LPAR.
 
 What evidence, RMF Reports, Performance monitors, etc. ?
 
 There should be, I think, some data that could explicitly show that is what is
 happening.
 
 Is it all the time, some of the time?  Is it a specific time when the 
prod
 LPAR is affected?  How long has it been going on?  Is there a point in 
time
 when this started?  If so, what changes occurred at that point?
 
 Lizette
 
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] 
On
 Behalf Of L Hagedorn
 Sent: Wednesday, December 17, 2014 5:44 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: MIPS, CEC, LPARs, PRISM, WLM
 
 Hi IBM-MAIN,
 
 We have a situation with multiple LPARS on a CEC, running DB2 asids 
prod,
 test,
 dev.
 
 It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU 
and
 stealing MIPS from the PROD LPAR and affecting production.
 
 Others claim this is not possible due to Prism.
 
 Will someone provide an overview of how Prism influences or controls 
MIPS
 usage
 (CPU) across LPARs sharing the same CEC, what are the limiting or
 controlling
 factors (if any), and how can the behavior be measured or reported upon 
so
 I can
 explain this with supporting doc?   Does WLM play a part in sharing CPU
 across
 LPARs?
 
 Any information or referrals to doc is appreciated.
 
 Thanks, Linda
 







Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread L Hagedorn
Sent from my iPhone

On Dec 17, 2014, at 8:26 AM, Elardus Engelbrecht 
elardus.engelbre...@sita.co.za wrote:

 Linda Hagedorn wrote:
 
 We have a situation with multiple LPARS on a CEC, running DB2 asids prod, 
 test, dev.  
 
 What z/OS levels? What levels of DB2 on all those LPARS?

z/OS is 2.1.  The DB2s are V10, PUT 1308.

 
 It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and 
 stealing MIPS from the PROD LPAR and affecting production.  
 
 Based on what were those claims made? I would like to see, for example, RMF or 
 SMF data/reports or some other tools used to prove those claims.
 
 Are you having response time problems?

Operations is reporting the situation.  I can confirm the presence of bad SQL 
in the dev system, and operations says things return to normal after it's 
cancelled.  Again, I don't have doc to indicate normal, so I'll ask for that also.  

My questions are to understand how LPARA is causing a problem in LPARB.  
Operations management says this is true, but the sysprog says it can't happen.  

Doc will be requested next time.  Any particular advice on what should be asked 
is appreciated.  I'll ask what they see, screen shots, source, etc.  

Thank you.  Linda 


 Others claim this is not possible due to Prism.  
 Will someone provide an overview of how Prism influences or controls MIPS 
 usage (CPU) across LPARs sharing the same CEC, what are the limiting or 
 controlling factors (if any), and how can the behavior be measured or 
 reported upon so I can explain this with supporting doc?   Does WLM play a 
 part in sharing CPU across LPARs?
 
 Are your LPARS hard or soft capped? What are the CPU and MSU allocations to 
 your LPARS?

I don't know.  Is there a display command that would show the status and 
limits? 


 Any information or referrals to doc is appreciated.  
 
 I'll leave that to the real performance gurus on IBM-MAIN. 
 
 Groete / Greetings
 Elardus Engelbrecht
 


Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread L Hagedorn
I lurk on IBM-Main, and I'm always awed by the knowledge here.  

You are treasure.  

Sent from my iPhone

On Dec 17, 2014, at 10:02 AM, Martin Packer martin_pac...@uk.ibm.com wrote:

 I don't see the DIST ADDRESS SPACE going runaway. What is feasible is DDF 
 work - separately WLM classified as it should be - taking a big chunk of 
 CPU. (And that work runs IN the DIST address space but hopefully not at 
 its dispatching priority.) DIST address space itself going rogue would 
 sound like a bug.
 
 Lots of things determine whether this would lead to a separate Production 
 LPAR being impacted, most notably whether Prod LPAR is above its share (as 
 determined by weights). With DDF work this could affect either the GCP 
 pool or the zIIP pool.
 
 There ARE interactions between WLM and PR/SM, but generally they are not in 
 play. Such interactions include IRD Weight Management, if it is active.
 
 And I see you already have several other answers - so it looks like you 
 came to the right place. :-)
 
 Cheers, Martin
 
 Martin Packer,
 zChampion, Principal Systems Investigator,
 Worldwide Banking Center of Excellence, IBM
 
 +44-7802-245-584
 
 email: martin_pac...@uk.ibm.com
 
 Twitter / Facebook IDs: MartinPacker
 Blog: 
 https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker
 
 
 


Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread L Hagedorn
Thank you. 

Sent from my iPhone

On Dec 17, 2014, at 11:36 AM, Ted MacNEIL eamacn...@yahoo.ca wrote:

 PR/SM (LPAR) doesn't know PROD from TEST.
 It only knows weight.
 If you have set it up for LPARA to have 80% and LPARB to have 20%, that's what 
 they get in times of contention. No more, no less.
 20% is LPARB's allotment, so it's okay. 85% is above LPARA's allotment, so it's 
 scaled back to 80%.
 This is how it's always worked.
 
 
 -
 -teD
 -
   Original Message  
 From: L Hagedorn
 Sent: Wednesday, December 17, 2014 11:15
 To: IBM-MAIN@LISTSERV.UA.EDU
 Reply To: IBM Mainframe Discussion List
 Subject: Re: MIPS, CEC, LPARs, PRISM, WLM
 
 Thank you for the extensive information and examples. 
 
 I will be hitting the books. 
 
 Can you expand on this example: 
 
 If LPARA wants 85% and LPARB wants 20% (total 105%) LPARB will get 20% and 
 LPARA will be squeezed to 80%.
 
 It seems counterintuitive to me and I'd like to understand. Let's say LPARA 
 is prod - it should get most of the resources. Why would LPARA be squeezed 
 instead of LPARB? 
 
 Sent from my iPhone
 
 On Dec 17, 2014, at 9:07 AM, Staller, Allan allan.stal...@kbmg.com wrote:
 
 The answer is, it depends.
 
 First, there is no priority across LPARS. All LPARS are dispatched 
 equally according to the LPAR weights. 
 
 For example, if LPARA is weighted at 80 and LPARB is weighted at 20, the 
 following occurs:
 
 If LPARA wants 85% and LPARB wants 10% (total 95%) everybody is happy and 
 goes on their merry way.
 
 If LPARA wants 85% and LPARB wants 20% (total 105%) LPARB will get 20% and 
 LPARA will be squeezed to 80%.
 
 If LPARA wants 50% and LPARB wants 40% (total 90%) everybody is happy and 
 goes on their merry way.
 
 The LPAR weight represents a guaranteed minimum proportion (note: LPAR 
 weights need not total 100; the proportion is relative).
 
 All of the above occurs when capping (either hard or soft) is not present.
 
 Soft capping can occur with WLM resource groups or with defined/group 
 capacity limits. Hard capping can occur with PR/SM initial capping.
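
The weight arithmetic described above can be sketched in a few lines. This is a simplification assuming only two LPARs and no capping; the 80/20 weights and the demand figures are the hypothetical ones from the example, and the redistribution rule is far coarser than what PR/SM actually does at dispatch time:

```python
def distribute(weights, demands):
    """Give each LPAR min(demand, weighted share) of the CEC; then hand any
    unused capacity to LPARs that still want more, in proportion to weight."""
    total_w = sum(weights.values())
    share = {n: 100.0 * w / total_w for n, w in weights.items()}  # guaranteed %
    alloc = {n: min(demands[n], share[n]) for n in weights}
    spare = 100.0 - sum(alloc.values())
    while spare > 1e-9:
        hungry = {n: weights[n] for n in weights if demands[n] - alloc[n] > 1e-9}
        if not hungry:
            break  # all demand satisfied; spare capacity stays idle
        hw = sum(hungry.values())
        given = 0.0
        for n, w in hungry.items():
            extra = min(spare * w / hw, demands[n] - alloc[n])
            alloc[n] += extra
            given += extra
        spare -= given
    return alloc

# LPARA wants 85%, LPARB wants 20% (total 105%): B keeps its 20%, A is squeezed to 80%.
print(distribute({"LPARA": 80, "LPARB": 20}, {"LPARA": 85, "LPARB": 20}))
# LPARA wants 85%, LPARB wants 10% (total 95%): both get what they ask for.
print(distribute({"LPARA": 80, "LPARB": 20}, {"LPARA": 85, "LPARB": 10}))
```

The key point the sketch shows: the weight is a floor in times of contention, not a ceiling, so LPARB is never squeezed below its 20% no matter how much LPARA asks for.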
 
 This is a complex subject and much more than can be covered in a short 
 e-mail. 
 
 If you have not already done so, I suggest you obtain a copy of and read the 
 PR/SM Planning Guide. The most recent version I can find is SB10-7155-01 and 
 is located here:
 https://www-304.ibm.com/support/docview.wss?uid=isg202e537c11be0929c8525776100663570&aid=1
  (watch the wrap).
 
 RMF Monitor I (batch) has an excellent CPU report. This will also include 
 the PARTITION DATA REPORT. I will refer you to the fine manuals for 
 details.
 
 WLM *may* reach across LPAR boundaries. In fact, it is designed to do this. 
 However, if the DVLP LPAR is not in the same sysplex, WLM cannot be a factor.
 
 As others have pointed out, what evidence is there that the runaway task 
 is affecting production (factual, not conjecture!)?
 
 HTH,
 
 snip
 We have a situation with multiple LPARS on a CEC, running DB2 asids prod, 
 test, dev. 
 
 It is claimed a runaway DB2 DIST asid on the DVLP LPAR is burning CPU and 
 stealing MIPS from the PROD LPAR and affecting production. 
 
 Others claim this is not possible due to Prism. 
 
 Will someone provide an overview of how Prism influences or controls MIPS 
 usage (CPU) across LPARs sharing the same CEC, what are the limiting or 
 controlling factors (if any), and how can the behavior be measured or 
 reported upon so I can explain this with supporting doc? Does WLM play a 
 part in sharing CPU across LPARs?
 /snip
 


Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Staller, Allan
Any LPAR can exceed its proportion if there is available resource. 
If there is insufficient resource, LPARs will be forced to their proportions, as 
defined by the LPAR weights (unless another LPAR is not consuming its share).

Remember, PR/SM does not know LPARA is production and should be favored. It 
just knows LPARA is assigned weight x.

For example, there is a CEC with LPARs (weights): LPARA (60%), LPARB (20%), and 
LPARC (20%).
If all LPARS demand their weight, the CPU will be distributed according to the 
weights defined (barring soft/hard capping).

Now, assume LPARC only wants 5%.

LPARA and LPARB can now consume the 15% that LPARC does not want.

In the example from my previous post, the total demand is 105%. 
LPARB has a defined weight of 20% and is guaranteed this value.
LPARA is guaranteed its weight of 80%. (total of 100).
The additional 5% exceeds the available capacity and will be presented as 
latent demand.

If LPARB were to drop to only requiring 10%, LPARA would then be allowed to 
exceed the defined proportion because the total demand is now 90%.
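
A quick numeric sketch of this redistribution (the 60/20/20 weights and LPARC's 5% are from the example above; the demand figures for LPARA and LPARB are hypothetical):

```python
# Weights: LPARA 60, LPARB 20, LPARC 20.  LPARC only wants 5% of the CEC.
weights = {"LPARA": 60, "LPARB": 20, "LPARC": 20}
demands = {"LPARA": 80, "LPARB": 40, "LPARC": 5}   # hypothetical demand, % of CEC

total_w = sum(weights.values())
guaranteed = {n: 100 * w / total_w for n, w in weights.items()}

# Each LPAR keeps min(demand, guaranteed share); what LPARC leaves is spare.
alloc = {n: min(demands[n], guaranteed[n]) for n in weights}
spare = 100 - sum(alloc.values())                  # the 15% LPARC doesn't want

# The busy LPARs split the spare by weight, capped at their actual demand.
busy = {n: weights[n] for n in weights if demands[n] > guaranteed[n]}
for n in busy:
    alloc[n] += min(spare * busy[n] / sum(busy.values()), demands[n] - alloc[n])

# Demand beyond the whole box (here 125% - 100%) shows up as latent demand.
latent = max(0, sum(demands.values()) - 100)
print(alloc)
print("latent demand:", latent)
```

With these numbers LPARA ends up at 71.25% and LPARB at 23.75%: both above their weights, because LPARC handed back 15%, while the remaining 25% of demand is latent.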

There are many tricks of the trade to cap the non-production LPARs. The 
only question is whether the granularity is appropriate.

Another good source would be the CMG archives (www.cmg.org) and Cheryl Watson's 
Tuning Letter.

Hopefully, Cheryl will spot this thread and chime in.

HTH,


snip
Can you expand on this example:  

 If LPARA wants 85% and LPARB wants 20% (total 105%) LPARB will get 20% and 
LPARA will be squeezed to 80%.

It seems counter intuitive to me and I'd like to understand.  Lets say LPARA is 
prod - they should get most of the resources.   Why would LPARA be squeezed 
instead of LPARB? 
/snip



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Lizette Koehler
How is the DASD handled? Is it separate between PRD and the other LPARs, or do
you share your DASD?

In other words, can any LPAR see any other LPAR's DASD, or is PRD DASD only on
PRD and not accessible by the other LPARs?

How does Operations identify the issue?  Are they looking at an alert, or a
monitor (Tivoli OMEGAMON, other)? 

How is the network during this time?  Is connectivity optimal or slowed
down?



Lizette


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of L Hagedorn
 Sent: Wednesday, December 17, 2014 9:09 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: MIPS, CEC, LPARs, PRISM, WLM
 
 MVS Sysprogs reported the situation, saying DB2DIST on LPAR A was affecting
 LPAR B.  I didn't ask them for doc, but will next time.
 
 This is a runtime issue, unrelated to reported changes.  That being said, I'm
 aware WLM is changed frequently and without broadcast notification.
 
 This appears from time to time.  I am new here (couple of months) so I cannot
 determine how long it's been happening - could be months, could be years.
 
 I need to understand the interworkings of PR/SM, WLM, and the LPARs, and,
 from the postings today, more about soft and hard capping.
 
 All input is appreciated.  I have a learning curve in this area, so telling
 me to look for something in RMF is welcome, along with any display commands
 where I can see the caps.  I have Mainview.
 
 Sent from my iPhone
 
 On Dec 17, 2014, at 8:35 AM, Lizette Koehler stars...@mindspring.com
 wrote:
 
  I would ask first, how do  you know it is affecting the Prod LPAR.
 
  What evidence, RMF Reports, Performance monitors, etc. ?
 
  There should be, I think, so data that could explicitly show that is
  what is happening.
 
  Is it all the time, some of the time?  Is it a specific time when the
  prod LPAR is affected?  How long has it been going on?  Is there a
  point in time when this started?  If so, what changes occurred at that
point?
 
  Lizette
 
 


Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Ed Finnell
Craig Mullins' DB2 books are really educational in scope and insight (and 
heavy). Fundamental understanding of the interoperation is key to identifying 
and tuning problems. He was with Platinum when he first began the series and 
moved on after the acquisition by CA. (He and other vendors were presenting at 
our ADUG conference on 9/11/01. Haven't seen him since, but still get the 
updates.)
 
The CA tools are really good at pinpointing problems. Detector and Log 
Analyzer are key. For SQL there's the SQL analyzer (priced) component. 
Sometimes, if it's third-party software, there may be known issues with 
certain transactions or areas of interaction.
 
For continual knowledge, www.share.org is a good source. Developers and 
installations give current and timely advice on fitting metrics to machines. 
The proceedings for the last three Events are kept online without signup. 
The new RMF stuff for EC12 and Flash drives is pretty awesome. 
 
 
In a message dated 12/17/2014 11:00:20 A.M. Central Standard Time,  
linda.haged...@gmail.com writes:

I lurk  on IBM-Main, and I'm always awed by the knowledge here.   





Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread L Hagedorn
I'm pretty good with DB2, and Craig is wonderful.  

It's the intricacies of MVS performance I need to bring into focus.  I have a lot 
of reading and research to do so I can collect appropriate doc the next time 
one of these hits.  

Linda 
Sent from my iPhone



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread L Hagedorn
Sent from my iPhone

On Dec 17, 2014, at 12:38 PM, Lizette Koehler stars...@mindspring.com wrote:

 How is the dasd handled? Is it separate between Prd and other LPARs?  Or do
 you share your dasd.

DASD is shared.  

 
 In other words any LPAR can see any dasd for any other LPAR or is it PRD
 dasd is only on prd and not accessible by the other LPARS?

It's all shared.  


 How does Operation identify the issue?  Are they looking at an alert? Or  a
 Monitor (Tivoli Omegamon, other?) 

That's one of the outstanding questions.

 How is the network during this time?  Is connectivity optimal or slowed
 down?

Operations is adept at monitoring.  They do not identify any network issues.  
I'll ask them specifically next time.  


 
 Lizette
 
 


Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Clark Morris
On 17 Dec 2014 14:13:46 -0800, in bit.listserv.ibm-main you wrote:

I'm pretty good with DB2, and Craig is wonderful.  

It's the intricacies of MVS performance I need to bring in focus.  I have a 
lot of reading and research to do so I can collect appropriate doc the next 
time one of these hits.  

After reading most of this thread, two things hit this retired systems
programmer.  The first is that, with all DASD shared, runaway bad SQL may be
doing a number on your disk performance due to contention, and I would look
at I/O on both production and test.  DB2 and other experts who are more
familiar with current DASD technology and contention can give more help.  The
other is the role played on both LPARs by the zAAP and zIIP processors, which
run at full speed and reduced cost for selected workloads.  The bad SQL may
be eligible to run on those processors, taking availability away from
production.  This is just a guess based on a dangerous (inadequate) amount
of knowledge.

Clark Morris



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-17 Thread Linda Hagedorn
The bad SQL is usually tablespace scans and/or Cartesian products.  They are 
relatively easy to identify and cancel.  

MVS reports the stress in prod and the high CPU use on the dev LPAR, and I find 
the misbehaving thread and cancel it.  MVS then reports that things return to 
normal.  

The perplexing part is that the bad SQL running on LPARA is affecting its own 
LPAR and the major LPAR on the CEC.  Its own LPAR I can understand, but the 
other one too? 

The prefetches (dynamic, list, and sequential) are zIIP-eligible in DB2 V10, so 
the comment about the bad SQL taking the zIIPs from prod is possible.  I'm 
adding that to my list as something to check.  

The I/O comment is interesting.  I'll add it to my list to watch for also.  

I'm hitting the books tonight.  Thanks for all the ideas and references. 

Sent from my iPad
