I'm also a bit puzzled by the situation Ed describes. We've run parallel
sysplexes since the mid-90s. We have had two widely spaced CEC failures that
took down all data LPARs plus ICFs on the failing CEC. (In both cases we had a
second CEC that housed backup ICF LPARs.) In neither case was there any damage
to the pair of CFRM couple data sets, which, as Bill Neiman pointed out, contain
'compiled' CFRM policies as well as a record of the last policy used. For me,
the head scratcher is what happened to Ed's CPL data sets? The recovering data
LPARs should have been able to locate the correct CFRM policy and load it into
the recovered ICF(s). If CFRMPOL differs from the CPL data set indication, you
are prompted at IPL to choose which one to use.
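For anyone following along who hasn't used the keyword, here's a minimal
COUPLExx sketch. All of the sysplex, data set, and policy names are invented
for illustration; verify the exact statement layout against your own member and
the Init and Tuning Reference before relying on it:

```
/* COUPLExx parmlib member -- all names below are examples only  */
COUPLE SYSPLEX(PLEX1)               /* sysplex name              */
       PCOUPLE(SYS1.XCF.CDS01)     /* primary sysplex CDS       */
       ACOUPLE(SYS1.XCF.CDS02)     /* alternate sysplex CDS     */
       CFRMPOL(CFRMPOL1)           /* CFRM policy to activate;  */
                                   /* per this thread, used when*/
                                   /* the CFRM CDS has no prior */
                                   /* active policy (cold start)*/
DATA   TYPE(CFRM)
       PCOUPLE(SYS1.CFRM.CDS01)
       ACOUPLE(SYS1.CFRM.CDS02)
```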
A sysplex-wide 'cold start' occurs when both ICF(s) and couple data sets are
empty. We actually experience that situation regularly when we first IPL a
sysplex at our DR site. ICFs have been newly initialized, and the CPL data sets
have been freshly formatted. That's where COUPLExx CFRMPOL comes into play.
That keyword, by the way, is younger than sysplex itself. In our early experience
with DR (global mirror for z/OS, or XRC), we simply IPLed with no active CFRM
policy. We would logon and run a job to create a policy from mirrored source,
then SETXCF switch to that policy, then start up remaining apps. That procedure
was kiboshed by the advent of GRS star, which *requires* a supporting structure
at IPL. No policy, no structure, no IPL. So CFRMPOL was introduced to allow
policy specification for a cold start. Around the same time, the formatting utility
IXCL1DSU was enhanced to allow pointing to a couple data set other than the
currently active one. Before we IPL a DR sysplex, we run IXCL1DSU from the
driving system to stuff a policy into the DR system's couple data set, which is
named by CFRMPOL.
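For the archives, the policy switch Ed and I both refer to is the operator
command below; the policy name is an example:

```
SETXCF START,POLICY,TYPE=CFRM,POLNAME=CFRMPOL2
```

The lesson of this thread: whenever you issue that command with a new policy
name, update CFRMPOL in COUPLExx to match, or a cold start can quietly take you
back to the old policy.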
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office <=== NEW
[email protected]
-----Original Message-----
From: IBM Mainframe Discussion List <[email protected]> On Behalf Of Ed
Jaffe
Sent: Monday, August 17, 2020 9:13 AM
To: [email protected]
Subject: (External):Re: Reminder: Don't "Forget" Your CFRMPOL Keywords
On 8/17/2020 4:59 AM, Bill Neiman wrote:
> I'm a bit confused by this. CFRMPOL is only relevant when you IPL with a
> CFRM couple data set (CDS) that has never been used before, so there was no
> previously-activated policy. It's normally not necessary to update the
> CFRMPOL statement when you update your CFRM policy, unless you're changing
> *which* policy you're using. Even then, it only matters if you come up with
> a new CFRM CDS. The CFRM CDS records which policy was last used, and that's
> the policy that will be used if you are forced to perform a sysplex-wide IPL.
> There must be more to this story.
Take a look at software case TS004055963 and hardware PMH 59261,227,000 (and
several other related PMHs automatically opened since).
We came up with new CFRM data sets using the CFRM policy specified in CFRMPOL.
While up, we switched CFRM policies but did not update CFRMPOL in COUPLExx as
we should have (and as my reminder admonishes others to do). We then
experienced a CPC failure in the early AM on 8/15 that took down all LPARs and
internal coupling facilities. When coming up after the "crash," all of our
structures were at the old sizes. It turned out we were running the policy
specified in CFRMPOL and not the policy we had switched to prior to the crash.
That's when I started this "reminder" thread.
I took my own advice and corrected CFRMPOL to reflect our current policy. Good
thing I did, because we had a second crash of the same type at 9 PM that same
day. Coming back up after that crash, all structure sizes were as expected.
--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN