Re: XCF/GRS question

2020-06-06 Thread Jesse 1 Robinson
When we were transforming our environment from separate CPUs/LPARs to sysplex, 
we did so by subdividing existing systems into sysplex members rather than 
combining systems into sysplexes. The resulting sysplexes were based on 
traditional workloads. We ended up with one sysplex that had only one member. 
No other system had the same workload, and no one could justify subdividing it 
just on principle. No problem.

There was one scheduled housekeeping job that did heavy ICF catalog reading. On 
all sysplexes it ran with x resource utilization except for this one sysplex, 
where the same job used 2x or 3x resources. I finally asked the question: how 
is this sysplex different from all other sysplexes? It was also the only 
parallel sysplex still running traditional GRS ring, only because with a single 
system it didn't seem worth the additional CF structure overhead. IBM at the 
time said that for up to four members, GRS ring was adequate. I'm not much into 
measuring and micro-analyzing, so on a hunch I converted this single-member 
sysplex to GRS star. The change was dramatic. Suddenly, with no other changes, 
the catalog housekeeping job dropped back to x resource utilization. 

This was quite a few years ago. Things may have changed, but I still recommend 
GRS star for any parallel sysplex regardless of the number of members.  
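
For anyone facing the same conversion: once an ISGLOCK lock structure is 
defined in the active CFRM policy, the switch can be made with an operator 
command, no IPL required. This is a sketch from memory, so verify against 
current documentation before trying it (the migration is one-way; returning 
to ring mode requires a sysplex-wide re-IPL):

  D GRS                 confirm the current mode (RING or STAR)
  SETGRS MODE=STAR      migrate from ring to star (NOPROMPT suppresses
                        the confirmation prompt)
  D GRS,ALL             verify star mode and the ISGLOCK connection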

J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office <=== NEW
robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Edgington, Jerry
Sent: Tuesday, June 2, 2020 10:39 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):XCF/GRS question





Re: XCF/GRS question

2020-06-02 Thread Edgington, Jerry
See below and thank you very much for all the information and suggestions.


Jerry



From: IBM Mainframe Discussion List  on behalf of 
Peter Bishop 
Sent: Tuesday, June 2, 2020 8:06 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: XCF/GRS question



Hi Jerry,

Questions, and a suggestion.  These are more at the hardware layer than the GRS 
one, which I saw Paul Feller addressing quite well.  It may be that you cannot 
change the LPAR setup, but if you can, here are some ideas.

1.   Must the CFs share the GPs with the z/OS systems, or are there ICF engines 
they can use?
- Yes, they must; we have no ICF engines, only 2 GPs, and we're maxed out at 
16 MSUs.

For small workloads it may be acceptable to have z/OS and CF workloads in the 
same processor pool, but CF workloads are different from z/OS ones, and where 
possible I have seen much benefit from having an ICF pool for CFs, and a CP 
pool for z/OS (and if you have VM or Linux, an IFL pool, which may be out of 
scope here).
   - I am trying to get an ICF engine, but I can't get it approved in the 
budget yet. z/VM, Linux, and the IFLs run on a different CEC.

2.   Must the non-production and production workloads share the same Sysplex?
   -  I started with only one LPAR, running production and Test workloads.  Our 
maintenance window is only once a month for 4 hours, so we needed a way to 
"fit" the upgrades into that window.

I'd be inclined to separate them were I in charge.  Two monoplexes may be less 
hassle than a "forced sharing" Sysplex.  But you may have reasons for joining 
non-production into the production Sysplex.
- This is a small system, and splitting it into separate SYSplexes, or even 
two monoplexes, would be a very big change given the way batch and development 
are set up.

3.   Do you have DYNDISP=THIN set on the CF LPARs?
 - Yes, DYNDISP=THIN on the ICF LPARs

For non-production CFs this is best, but in your case with a single plex it 
may be inapplicable.  Consider how you might benefit from it.  In my experience 
it is a much-improved algorithm over its predecessors.  Considering you are 
sharing the pool, it may be a "quick fix" if you can live with it.  Try a test.
- We have two ICF LPARs, so a single SYSplex was really the only way to 
accomplish all the goals with minimal impact to the business and developers.
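
For what it's worth, on machines I've seen, thin interrupts are enabled from 
the CF image's own operating-system console on the HMC; a sketch only, since 
the panels and command syntax can vary by machine generation:

  DYNDISP THIN          CFCC console command, issued on each CF image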

4.   If you split the plexes, and have separate CFs, it will be better if you 
weight the CF LPARs as you do the z/OS ones, e.g. if z/OS has an 80:20 CP pool 
weight, then the CF LPARs should have the same weights for the ICF pool.
- It would be a big deal, with both political and business impact, to split 
the environments into more SYSplexes with more ICFs.  I don't think it would 
be a good idea with this small setup.
- Thanks, I will take a look at the CPU weights and consider increasing the 
ICF LPAR weights in the CP pool.

kind regards,
Peter


Re: XCF/GRS question

2020-06-02 Thread Peter Bishop
Hi Jerry,

Questions, and a suggestion.  These are more at the hardware layer than the GRS 
one, which I saw Paul Feller addressing quite well.  It may be that you cannot 
change the LPAR setup, but if you can, here are some ideas.

1.   Must the CFs share the GPs with the z/OS systems, or are there ICF engines 
they can use?  For small workloads it may be acceptable to have z/OS and CF 
workloads in the same processor pool, but CF workloads are different from z/OS 
ones, and where possible I have seen much benefit from having an ICF pool for 
CFs, and a CP pool for z/OS (and if you have VM or Linux, an IFL pool, which 
may be out of scope here).
2.   Must the non-production and production workloads share the same Sysplex?  
I'd be inclined to separate them were I in charge.  Two monoplexes may be less 
hassle than a "forced sharing" Sysplex.  But you may have reasons for joining 
non-production into the production Sysplex.
3.   Do you have DYNDISP=THIN set on the CF LPARs?  For non-production CFs 
this is best, but in your case with a single plex it may be inapplicable.  
Consider how you might benefit from it.  In my experience it is a much-improved 
algorithm over its predecessors.  Considering you are sharing the pool, it may 
be a "quick fix" if you can live with it.  Try a test.
4.   If you split the plexes, and have separate CFs, it will be better if you 
weight the CF LPARs as you do the z/OS ones, e.g. if z/OS has an 80:20 CP pool 
weight, then the CF LPARs should have the same weights for the ICF pool.
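
As a rough illustration of why the weights matter, using the 20/20/20/80 
figures from the original note and assuming all four LPARs compete in the one 
GP pool (total weight 140), the shares work out to roughly:

  Prod:  80/140 = ~57% of the pool
  Test:  20/140 = ~14%
  ICF1:  20/140 = ~14%
  ICF2:  20/140 = ~14%

On two GPs, each CF's entitlement is only about 0.29 of an engine when the 
pool is busy, which could contribute to the GRS and XCF delays reported.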

kind regards,
Peter




Re: XCF/GRS question [EXTERNAL]

2020-06-02 Thread Feller, Paul
Jerry, I don't have the type of setup you have but I'll list a few things I 
might look at.

Move the CONTENTION NOTIFYING SYSTEM from your TEST lpar to your MVSZ lpar, 
assuming MVSZ is your production lpar.  The CNS work can add overhead to GRS, 
and on a small lpar it can be noticeable.  The role moves around based on 
which lpar gets IPLed, and GRS will not move it back.  So we issue the SETGRS 
CNS=PR03,NP command on our PR03 lpar anytime it is IPLed.

Sample of what it looks like in the SYSLOG:
SETGRS CNS=PR03,NP  
ISG364I CONTENTION NOTIFYING SYSTEM MOVED FROM SYSTEM TS03 TO SYSTEM
PR03. OPERATOR COMMAND INITIATED.

I'm going to guess that the ICF1 and ICF2 lpars may not be getting dispatched 
as well as you might hope.  You might see this in the response times of the CF 
links, or in requests getting switched from synchronous to asynchronous, and 
that could affect how well they respond.
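
If it helps, those symptoms are visible without much tooling; commands from 
memory, so check them against your z/OS level:

  D CF                     processor, storage, and channel-path status of
                           each coupling facility
  D XCF,CF,CFNAME=ALL      structures allocated in each CF
  RMF CF Activity report   sync/async service times and sync-to-async
                           conversion counts per structure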

Another thing to think about is the size of your IGWLOCK00 structure.  The size 
will be different based on your environment as compared to mine.  I was not the 
one to size our IGWLOCK00 structure, so I'm not much help there.

You could try to move some of the XCF traffic off of the CFs and on to CTCs to 
see if that helps any.  Just a guess.
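
If CTCs are already defined to both systems, paths can be started dynamically 
to test the idea; the device numbers below are placeholders, and the 
equivalent PATHOUT/PATHIN statements would go in COUPLExx to make the change 
permanent:

  SETXCF START,PATHOUT,DEVICE=(4500)
  SETXCF START,PATHIN,DEVICE=(4501)
  D XCF,PI,DEVICE=ALL      verify the inbound paths are working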

Also, if the TEST lpar is not getting dispatched when needed, that can cause 
issues for things like GRS and XCF and their ability to respond to requests.

I hope that some of this is helpful.  Good luck.

Thanks..

Paul Feller
GTS Mainframe Technical Support


-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Edgington, Jerry
Sent: Tuesday, June 02, 2020 12:39 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: XCF/GRS question [EXTERNAL]






XCF/GRS question

2020-06-02 Thread Edgington, Jerry

We are running a single SYSPlex with two z/OS LPARs (Prod and Test) and 2 ICFs, 
all running on the GPs.  We are experiencing slowdowns due to PROC-GRS on 
Test and PROC-XCFAS on Prod.  Weights are 20/20/20/80 for ICF1/ICF2/Test/Prod.  
We have set up XCF structures and FCTC for GRS star.

Higher weight:
PROC-GRS       3.4 users
PROC-GRS       2.4 users
ENQ -ACF2ACB 100.0 % delay LOGONIDS
PROC-GRS      99.0 % delay
PROC-GRS      13.0 % delay

Lower weight:
PROC-XCFAS 14.1 users
PROC-XCFAS 13.1 users
PROC-XCFAS 99.0 % delay
PROC-XCFAS 45.0 % delay
PROC-XCFAS 16.0 % delay
PROC-XCFAS 11.0 % delay
PROC-XCFAS 33.0 % delay
PROC-XCFAS 77.0 % delay
PROC-XCFAS 45.0 % delay

GRSCNFxx:
GRSDEF MATCHSYS(*)
   SYNCHRES(YES)
   GRSQ(CONTENTION)
   ENQMAXA(25)
   ENQMAXU(16384)
   AUTHQLVL(2)
   RESMIL(5)
   TOLINT(180)

IEASYSxx:
GRS=STAR, JOIN GRS STAR
GRSCNF=00,GRS INITIALIZATION MEMBER
GRSRNL=00,GRS RESOURCE LIST

D GRS:
RESPONSE=TEST
 ISG343I 13.38.49 GRS STATUS 604
 SYSTEM    STATE        SYSTEM    STATE
 MVSZ      CONNECTED    TEST      CONNECTED
 GRS STAR MODE INFORMATION
   LOCK STRUCTURE (ISGLOCK) CONTAINS 1048576 LOCKS.
   THE CONTENTION NOTIFYING SYSTEM IS TEST
   SYNCHRES:         YES
   ENQMAXU:        16384
   ENQMAXA:           25
   GRSQ:      CONTENTION
   AUTHQLVL:           1
   MONITOR:           NO

Any advice or help on what I can do about these delays would be great.

Thanks,
Jerry

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN