Sorry Mike. That's what management seems to believe. It's what we've done for 
TWENTY years. It allows us to fail over to the 'DR site' at will, including 
for-real failover where DR becomes the day-to-day home from then on. That's not 
the hard part. The hard part is that after failover, production now lives at 
the DR site, yet it's officially temporary. All new REAL customer data now 
lives over the hill and through the woods, not at home.

-- Thousands of volumes/UCBs worth of data that are suddenly obsolete 'at home' 
and therefore useless.

-- A GDPS mapping of all prod volumes to DR secondary volumes.

-- A GDPS mapping of all secondary volumes to tertiary volumes, which we 
actually IPL from. If we IPL'ed and ran with the secondary volumes themselves, 
mirroring would be broken until we resynced.

-- We're under the gun to move back 'home' with some alacrity because two CECs 
are required for CF redundancy, and DR has only one CEC for economy.

All of this bodes ill for willy-nilly switching back and forth between data 
centers unless there's some secret trick(s) I don't know about.     
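For anyone trying to picture the mapping chain above, here's a toy sketch in 
plain Python. This is strictly illustrative bookkeeping, not GDPS, and every 
volume serial is hypothetical: each prod volume mirrors to a DR secondary, each 
secondary has a tertiary copy, and it's the tertiary we IPL from so the 
prod-to-secondary mirror stays intact.

```python
# Toy illustration (NOT GDPS) of the three-tier volume mapping described
# above. Hypothetical volume serials standing in for thousands of UCBs.
prod_to_secondary = {"PRD001": "SEC001", "PRD002": "SEC002"}
secondary_to_tertiary = {"SEC001": "TER001", "SEC002": "TER002"}

def ipl_targets(prod_vols):
    """Resolve each prod volume through its DR secondary to the tertiary
    copy we actually IPL from, leaving the mirror pair untouched."""
    return [secondary_to_tertiary[prod_to_secondary[v]] for v in prod_vols]

print(ipl_targets(["PRD001", "PRD002"]))  # ['TER001', 'TER002']
```

IPLing from the secondaries directly would be the equivalent of dropping the 
`secondary_to_tertiary` hop, which is exactly the case that breaks mirroring 
until a resync.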

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office <=== NEW
[email protected]


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf 
Of Mike Schwab
Sent: Monday, February 11, 2019 5:03 PM
To: [email protected]
Subject: (External):Re: Wells Fargo? Well f*&%#d at the moment: Data center up 
in smoke, bank website, app down . The Register

Mirroring the DASD to the backup site.

On Mon, Feb 11, 2019 at 4:50 PM Jesse 1 Robinson <[email protected]> 
wrote:
>
> I have nothing but admiration for a shop that can slosh workload back and 
> forth between two data centers. There are those that can and those (like us) 
> that cannot. How does one get from the first group into the second?
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> 626-543-6132 Office <=== NEW
> [email protected]
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[email protected]] 
> On Behalf Of Vernooij, Kees (ITOP NM) - KLM
> Sent: Monday, February 11, 2019 5:58 AM
> To: [email protected]
> Subject: (External):Re: Wells Fargo? Well f*&%#d at the moment: Data 
> center up in smoke, bank website, app down . The Register
>
> We run in 2 sites continuously. Mainly Prod in 1 site and Dev/Acc in the 
> other site, a CF in both sites and Dasd mirrored between the sites.
> Our DRP consists of moving workload from one site's LPARs to the corresponding 
> LPARs in the other site and adding capacity by means of CBUs (Capacity 
> BackUp). No manipulation of LPARs.
> We do this from time to time, sometimes as part of a DRP test, recently also 
> because of a potentially disastrous power maintenance in a site.
> A piece of cake, compared to the comments I read above.
>
> Kees.
>
>
> > -----Original Message-----
> > From: IBM Mainframe Discussion List 
> > [mailto:[email protected]] On Behalf Of Allan Staller
> > Sent: 11 February, 2019 14:30
> > To: [email protected]
> > Subject: Re: Wells Fargo? Well f*&%#d at the moment: Data center up 
> > in smoke, bank website, app down . The Register
> >
> > I have heard of a company in the Far East that periodically (every 6 
> > months, IIRC) flips from site A to site B and back.
> > Aus(?) to the Philippines(?) and back.
> >
> > -----Original Message-----
> > From: IBM Mainframe Discussion List <[email protected]> On 
> > Behalf Of Savor, Thomas (Alpharetta)
> > Sent: Friday, February 8, 2019 7:08 PM
> > To: [email protected]
> > Subject: Re: Wells Fargo? Well f*&%#d at the moment: Data center up 
> > in smoke, bank website, app down . The Register
> >
> > >We've been doing DR mirroring for 20 years. It gets tested often. 
> > >We've moved production twice to another data center using our 
> > >procedures. What we've never done is run production in another 
> > >location temporarily. 'Temporary' means move it, run it until at 
> > >least one transaction is committed, then move it *all* back. That 
> > >is hugely complex and costly.
> >
> > >A lot of management fantasizes about a big A-B switch that we throw 
> > >one way or the other. So wrong.
> >
> > About 5-6 years ago, I was working as a vendor (daily support) for 
> > credit card software for Halifax/Bank of Scotland. I believe the 
> > first time I was given a "heads up" was when we were supporting 12 
> > million cardholders. The "heads up" was that on Friday evening at 
> > 6pm the main site was going to shut down, the whole weekend's 
> > PRODUCTION was going to run at the DR site, and then at 6am Monday 
> > morning the main site was to come back up and continue as if nothing 
> > had happened!!! I literally peed in my pants!!!
> > Probably everyone in Atlanta could hear me...NOOOOOOOOO!!! 
> > Thinking of all the network signons with Visa and Mastercard...all 
> > the credit card authorizations...there was absolutely zero chance of 
> > this working without issues.
> >
> > Well, I came in Monday morning after receiving no calls over the 
> > weekend...everything was fine. We ran the batch Monday night, which 
> > pulled in the weekend's transactions from the DR site.
> > NO ISSUES!!! Everything was fine.
> >
> > Whoever did this, from a systems perspective...I tip my hat. I've 
> > never seen anyone do this with production, but it worked fine...so 
> > what do I know. I've never seen anyone else do this in my 42 years 
> > of mainframing either.
> >
> > Unfortunately, Halifax/Bank of Scotland is no longer with us.  They 
> > were absorbed by Lloyds Banking Group.
> >
> >
> > Thanks,
> > Tom Savor


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN