On Wednesday, September 5, 2012 2:53:45 AM UTC+10, Joe D'Alessandro wrote:
> We have done this to two 3-system parallel sysplexes.  One system was removed 
> from each 3-system sysplex and the removed systems were each defined as a 
> monoplex and rejoined to their former partners in a GRSPLEX.  So we now have 
> two GRSPLEXes:  each with a 2-system parallel sysplex GRS'd with a monoplex.  
> 
> 
> 
> The complications were few but could be significant depending on your site.  
> The clients only had one issue, and that was due to PDSE sharing.  
> 
> 
> 
> () We left the system names and the SMFID names the same but gave the 
> monoplexes new sysplex names, which affected one system component and a few 
> STC PROCs.
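[A hedged sketch of where the new sysplex name lands for the carved-out monoplex; the sysplex name RESCUE1 is an invented placeholder, not a detail from the post, and GRS=TRYJOIN assumes a ring-mode complex over CTCs:]

```
IEASYSxx (fragment):
  PLEXCFG=XCFLOCAL,
  GRS=TRYJOIN,

COUPLExx (fragment):
  COUPLE SYSPLEX(RESCUE1)
```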
> 
> 
> 
> () PDSE sharing had to be downgraded to NORMAL for all three systems in each 
> GRSPLEX.  That is very problematic if the systems really share PDSEs for 
> UPDATE, because NORMAL sharing is very restrictive (read the PDSE Usage Guide 
> closely), so many PDSEs had to be cloned so that there is now one PDSE per 
> system (that is, for many PDSEs we went from one PDSE for 3 systems with 
> EXTENDED sharing to three PDSEs, each for one system with NORMAL sharing).  
> Fault Analyzer PDSEs and lnklst PDSEs were the most affected.  SMSPDSE1 is no 
> longer active with NORMAL sharing.
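[For reference, a sketch of the IGDSMSxx keyword that drives this, with the rest of the member omitted; the values are NORMAL and EXTENDED, and switching from EXTENDED down to NORMAL is, as I understand it, an IPL-level change:]

```
IGDSMSxx (fragment):
  PDSESHARING(NORMAL)
```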
> 
> 
> 
> () The GRS CTCs had to be defined as BCTC (basic CTC), so an HCD gen was 
> required.  
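[Those CTCs are then named to GRS in GRSCNFxx; a hedged sketch, where the device numbers are invented placeholders and MATCHSYS(*) simply applies the definition to every system:]

```
GRSCNFxx (fragment):
  GRSDEF MATCHSYS(*)
         CTC(0A00)
         CTC(0A01)
```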
> 
> 
> 
> () Enhanced Catalog Sharing (ECS) must be turned off, so you may see a 
> performance impact.  
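[A sketch of the operator commands usually involved, as I recall them — verify the syntax against your catalog documentation:]

```
F CATALOG,ECSHR(STATUS)
F CATALOG,ECSHR(DISCONNECT)
```

STATUS shows which catalogs are ECS-active; DISCONNECT drops the system back to VVDS-mode catalog sharing.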
> 
> 
> 
> () Automatic tape sharing cannot cross the sysplex boundary, so the tape 
> drives had to be divided, but the systems remaining in the sysplex could 
> still share their drives among themselves.
> 
> 
> 
> () Obviously, the monoplex cannot still participate in the sysplex's spool, 
> so you may need to set up a separate spool and NJE.  Depending on how you 
> submit jobs, this may or may not become another issue (for example, for a 
> scheduler or restart subsystem).
> 
> 
> 
> () Some components that use XCF must be reconfigured to use TCP/IP or SNA to 
> communicate, like CCI and MainView.
> 
> 
> 
> There are other issues, like BRODCAST and DAE, but they are minor.  The 
> biggest loss after PDSE sharing was in recovery.  A crash or even a normal 
> shutdown must be responded to carefully, or GRS may simply shut down across 
> the remaining GRS members.  GRS can be restarted, but it is not always 
> obvious that this has happened as messages fly past.  If you have automation, 
> you may want to ensure it grabs all the relevant messages to alert the 
> operator and instruct the operator how to restart GRS on the remaining 
> members.
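[The restart itself is a short operator action; a sketch of the commands automation might issue or prompt for — syntax as I recall it, so verify for your release:]

```
D GRS                   (display GRS ring status)
VARY GRS(ALL),RESTART   (rebuild the ring with all quiesced systems)
```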
> 
> 
> 
> regards, Joe D'Alessandro
> 
> 
> 
> ----------------------------------------------------------------------
> 
> For IBM-MAIN subscribe / signoff / archive access instructions,
> 
> send email to [email protected] with the message: INFO IBM-MAIN

Thanks all for your useful replies.

The system to be taken out of the sysplex is going to disappear soon anyway (it 
has no 'real' users), I just wanted to turn it into a 'rescue' system that was 
still part of the GRS complex. All of the GRS CTCs are defined OK, it's 
non-ECS, and although PDSESHARING is on, I don't believe any are really shared 
for update. The JES spool is already separate.

One other specific question relates to the CLOCK - there is a sysplex timer in 
use by the plexed systems (even though the images are running on the same CEC), 
so could the newly un-plexed system (in XCFLOCAL mode) run with CLOCKxx 
specifying ETRMODE NO?

What I'm getting at is whether the GRS complex would be OK with two systems 
using a timer and one not? 
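For concreteness, a sketch of the CLOCKxx I have in mind for the un-plexed system (the time zone value is illustrative only):

```
CLOCKxx (sketch):
  OPERATOR NOPROMPT
  TIMEZONE W.05.00.00
  ETRMODE  NO
  ETRZONE  NO
```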
