Re: SYSPLEX distance

2018-01-16 Thread Glenn Miller
Hi Skip,
One possible option for the survivability of a 24x7 z/OS environment would be 
to place the Primary DASD Control Unit(s) at a different site (a 3rd site) 
from the Mainframe CECs. Then, if one or the other Mainframe CEC "glass house" 
is unusable, the other (in theory) continues to operate. Also, if you happen 
to be really lucky, the site for the Primary DASD Control Unit(s) would be at 
or near "half way" between the 2 Mainframe CEC sites. Take that another step 
further and place standalone Coupling Facilities at that 3rd site as well.  

Of course, the Primary DASD Control Unit(s) are still a single point of 
failure. So, extend this "design" to a 4th site and place duplicate DASD 
Control Unit(s) at that 4th site. This is where GDPS/Hyperswap would provide 
near immediate "switchover" from one DASD site to the other.
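
To make the single-point-of-failure argument concrete, here is a small, purely 
illustrative Python walk over the two layouts sketched above. The component names 
and layout are invented for illustration, and it deliberately ignores CF placement, 
sysplex recovery and everything GDPS/HyperSwap actually has to manage; it only 
checks which single-site losses still leave at least one CEC and one usable copy 
of the DASD.

# Which single-site losses leave a CEC plus a usable DASD copy?
three_site = {"SITE_A": {"CEC1"}, "SITE_B": {"CEC2"}, "SITE_C": {"DASD_PRIMARY"}}
four_site  = dict(three_site, SITE_D={"DASD_SECONDARY"})

def survives(layout, lost_site):
    # Union of everything that is not at the failed site.
    surviving = set().union(*(parts for site, parts in layout.items() if site != lost_site))
    has_cec  = bool({"CEC1", "CEC2"} & surviving)
    has_dasd = bool({"DASD_PRIMARY", "DASD_SECONDARY"} & surviving)  # HyperSwap would handle the switch
    return has_cec and has_dasd

for name, layout in (("3-site", three_site), ("4-site", four_site)):
    for site in layout:
        status = "keeps running" if survives(layout, site) else "outage"
        print(f"{name}: lose {site} -> {status}")

In the 3-site layout, loss of the DASD site is the one scenario that still takes 
everything down; adding the 4th site with the duplicate, HyperSwap-managed DASD 
removes that last single point of failure, which is the point of the design above.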

This is just one possible option.

Glenn

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-16 Thread Jesse 1 Robinson
My recollection is that the term 'GDPS' was coined at a time when IBM had the 
*ambition* to run a single sysplex with members at a considerable distance 
apart. That ambition was too optimistic for the technology of the day, so 
'GDPS' was redefined. A remnant of that shift is the difficulty of finding an 
actual spelling out of the acronym in GDPS doc. 

GDPS as presented to my shop around Y2K had morphed into a service offering 
(not a 'product') for managing a sysplex and simplifying recovery of it 
elsewhere. That's how we use it. Whatever the supporting technology, DASD 
mirroring is key to GDPS. We actually implemented mirroring (XRC) before we 
obtained GDPS, which greatly simplified our previously RYO procedures. 

I've asked this question earlier in this thread. If you have a truly 
'dispersed' sysplex with XCF functioning properly over a great distance, how do 
you survive the total loss of one glass house? At any given time, all members 
of the sysplex must be using one copy of DASD or another. As long as all 
remains sweetness and light, it doesn't much matter where the active copy 
resides and where the mirrored copy. But loss of one side implies that only one 
copy of the DASD remains accessible. How do the surviving sysplex members 
continue running seamlessly when the DASD farm suddenly changes? Of course you 
can re-IPL the surviving members and carry on. But that's basically what we do 
with 'cold' members. What is the advantage of running hot sysplex members in 
the remote site?

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Peter Hunkeler
Sent: Monday, January 15, 2018 11:49 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):AW: Re: SYSPLEX distance

> I'm still not clear on how a 'geographically dispersed sysplex' (original 
> definition, not 'GDPS') would work.

You say "original definition". I seem to remember, but might be wrong, that the 
term GDPS was coined when sysplexes were al contained within a single building 
or in buildings near by. GDPS was taking sysplexes with members in data centers 
up to a few kilometers apart. Apart from the longer distance between members, 
they were sysplexes as usual. No XRC involved.

--Peter Hunkeler


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


AW: Re: SYSPLEX distance

2018-01-15 Thread Peter Hunkeler
> I'm still not clear on how a 'geographically dispersed sysplex' (original 
> definition, not 'GDPS') would work.

You say "original definition". I seem to remember, but might be wrong, that the 
term GDPS was coined when sysplexes were al contained within a single building 
or in buildings near by. GDPS was taking sysplexes with members in data centers 
up to a few kilometers apart. Apart from the longer distance between members, 
they were sysplexes as usual. No XRC involved.

--Peter Hunkeler



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-14 Thread Parwez Hamid
Abstract: Asynchronous CF Lock Duplexing is a new enhancement to IBM’s parallel 
sysplex technology that was made generally available in October 2016. It is an 
alternative to the synchronous system managed duplexing of coupling facility 
(CF) lock structures that has been available for many years. The new 
Asynchronous CF Lock Duplexing feature was designed to be a viable alternative 
to synchronous system managed duplexing. The goal was to provide the benefits 
of lock duplexing without the high performance penalty. It eliminates the 
synchronous mirroring between the CFs to keep the primary and secondary 
structures in sync. 

https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102720
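
For intuition only, here is a small Python sketch of why taking the CF-to-CF 
exchange out of the request's response path matters more and more as the CFs get 
further apart. The base service time and the number of inter-CF exchanges per 
duplexed update are invented, simplified assumptions, not figures from the white 
paper; the only firm number is the roughly 10 microseconds of round-trip fibre 
delay per kilometre.

# Illustrative-only model of a duplexed CF lock request over distance.
# Synchronous system-managed duplexing keeps primary and secondary structures in
# step by exchanging signals between the CFs while the request is in flight, so
# the requester pays the CF-to-CF propagation delay on every update.  Asynchronous
# CF Lock Duplexing completes the request at the primary and hardens the secondary
# off the response path.

US_PER_KM_ROUND_TRIP = 10.0     # ~5 us/km each way in fibre
BASE_REQUEST_US = 10.0          # assumed simplex lock-request service time (illustrative)
CF_TO_CF_EXCHANGES = 2          # assumed exchanges per duplexed update (simplified)

def sync_duplexed_us(cf_distance_km):
    # Requester waits for the inter-CF exchanges before the request completes.
    return BASE_REQUEST_US + CF_TO_CF_EXCHANGES * cf_distance_km * US_PER_KM_ROUND_TRIP

def async_duplexed_us(cf_distance_km):
    # Secondary is updated asynchronously, so distance drops off the response path.
    return BASE_REQUEST_US

for km in (0, 5, 20, 50):
    print(f"{km:>3} km between CFs: sync-duplexed ~{sync_duplexed_us(km):6.1f} us, "
          f"async-duplexed ~{async_duplexed_us(km):6.1f} us")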

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-13 Thread Alan(GMAIL)Watthey
Kees,

It all helps and it's always nice to know others are doing it successfully.
Your comments on SMCFSD are particularly interesting (you say you don't use it at
all), as I'm sure some cleaning up is possible there.

I read somewhere that IBM expects nearly all requests to become asynchronous
eventually.  Processors are becoming quicker whereas the speed of light isn't,
so the heuristic algorithm used will deem spinning too costly at shorter and
shorter distances.
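
To put rough numbers on that, here is a back-of-the-envelope Python sketch. The
spin budget (in cycles) and the base CF service time are invented purely for
illustration, and the real z/OS heuristic is more sophisticated; the only hard
figure is the roughly 10 microseconds of round-trip propagation per kilometre of
fibre.

# Back-of-the-envelope: why faster CPUs push remote sync CF requests to async.
# Light in fibre covers roughly 200 km per millisecond, i.e. about 5 us per km
# one way, so every km of CF separation adds about 10 us to a synchronous round trip.

US_PER_KM_ROUND_TRIP = 10.0      # ~5 us/km each way in fibre
BASE_SERVICE_US = 5.0            # assumed local sync CF service time (illustrative)
SPIN_BUDGET_CYCLES = 50_000      # assumed cycles z/OS will burn spinning (illustrative)

def sync_service_us(distance_km):
    """Estimated synchronous CF service time at a given CF distance."""
    return BASE_SERVICE_US + distance_km * US_PER_KM_ROUND_TRIP

def spin_budget_us(cpu_ghz):
    """Spin budget in microseconds; it shrinks as the CP gets faster."""
    return SPIN_BUDGET_CYCLES / (cpu_ghz * 1000.0)   # 1 GHz = 1000 cycles per microsecond

for cpu_ghz in (1.2, 5.2):                 # an older CP versus a current fast CP
    budget = spin_budget_us(cpu_ghz)
    for km in (0.3, 1, 5, 20):
        verdict = "stay sync" if sync_service_us(km) <= budget else "convert to async"
        print(f"{cpu_ghz} GHz CP, {km:>4} km: {sync_service_us(km):6.1f} us "
              f"vs budget {budget:5.1f} us -> {verdict}")

With these invented numbers the faster processor starts converting at around 1 km
while the slower one holds out to a few kilometres, which is exactly the "shorter
and shorter distances" effect.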

Groetjes,
Alan


-Original Message-
From: Vernooij, Kees (ITOPT1) - KLM [mailto:kees.verno...@klm.com] 
Sent: 11 January 2018 11:03 am
Subject: Re: SYSPLEX distance

If this helps: 
We run a parallel sysplex with sites at 16 - 18 km (2 separate routes with
some difference in distance) with active systems and CFs at both sites,
without problems.
Most Sync CF Requests to the Remote CFs are converted to Async.
To minimize the Async/Remote CF delays, we configure structures over the CFs
in such a way that the most busy or most important structures are in the
busiest or the most important site.
We do not use System Managed Coupling Facility Structure Duplexing. All our
applications are able to recover their structures well.
SMCFSD's inter-CF communication would add a number of elongated delays to
each CF update request. The advantage of SMCFSD is that each site has a copy
of the structure and intelligence can choose the nearest (=fastest) CF for
read requests.

Kees.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-11 Thread Jesse 1 Robinson
To clarify. We have *no* XCF connection between primary and backup data 
centers. All DASD is mirrored continuously via XRC, but the DR LPARs are 
'cold'. They get IPLed only on demand: for (frequent) testing and for 
(godforbid) actual failover.

When we got into serious DR in the 90s, channel technology was ESCON, and network 
technology was ISV CNT. Parallel sysplex synchronization was governed by 
external timers (9037). When we started with parallel sysplex, loss of timer 
connection would kill the member that experienced it first. Then IBM introduced 
a change whereby the entire sysplex would go down on timer loss. This 
technology did not bode well for running a single sysplex over 100+ KM. Network 
connectivity was far too flaky to bet the farm on. Now we have FICON over DWDM. 
Way more reliable, but sysplex timing would still be an issue AFAIK. 

In our actual sysplexes (prod and DR), boxes are literally feet apart connected 
by physical cables du jour. I cannot recall a complete loss of XCF connectivity 
ever in this configuration. I'm still not clear on how a 'geographically 
dispersed sysplex' (original definition, not 'GDPS') would work. Critical data 
sets must be shared by all members. One of each set of mirrored pairs must be 
chosen as 'The Guy' that everyone uses. If The Guy suddenly loses connection to 
the other site--i.e. site disaster--how will the surviving member(s) at the 
other site continue running without interruption? If there is an interruption 
that requires some reconfig and IPL(s), then what's the point of running this 
way in the first place? 

We commit to a four-hour recovery (including user validation) with data 
currency within seconds of the disaster. 

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of retired mainframer
Sent: Thursday, January 11, 2018 9:20 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: SYSPLEX distance

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] 
> On Behalf Of Rob Schramm
> Sent: Thursday, January 11, 2018 9:01 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: SYSPLEX distance
> 
> SFM, and planning for what your surviving system should do, should always
> be done.  And yes, early on there was a failure of one of the two dark fiber
> connections, and the sysplex timers were not connected properly to allow
> for continued service.
> 
> Planning planning planning.

To which you should add testing testing testing.

And once the developers of the plan have succeeded in making it work, it should 
be tested again with many of the least experienced people in the organization.  
Murphy will guarantee that they will be the only ones available when it really 
hits the fan.  (It is amazing how differently a pro and a rookie read the same 
set of instructions.)


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-11 Thread retired mainframer
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Rob Schramm
> Sent: Thursday, January 11, 2018 9:01 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: SYSPLEX distance
> 
> SFM, and planning for what your surviving system should do, should always be
> done.  And yes, early on there was a failure of one of the two dark fiber
> connections, and the sysplex timers were not connected properly to allow for
> continued service.
> 
> Planning planning planning.

To which you should add testing testing testing.

And once the developers of the plan have succeeded in making it work, it should 
be tested again with many of the least experienced people in the organization.  
Murphy will guarantee that they will be the only ones available when it really 
hits the fan.  (It is amazing how differently a pro and a rookie read the same 
set of instructions.)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-11 Thread Rob Schramm
SFM, and planning for what your surviving system should do, should always be
done.  And yes, early on there was a failure of one of the two dark fiber
connections, and the sysplex timers were not connected properly to allow for
continued service.

Planning planning planning.

On Thu, Jan 11, 2018 at 10:22 AM Mike Schwab 
wrote:

> One company had data centers in Miami and New Orleans.  Miami shut
> down for a hurricane, and wasn't back up before Katrina hit New
> Orleans.
>
> On Thu, Jan 11, 2018 at 6:44 AM, Timothy Sipples 
> wrote:
> > J.O.Skip Robinson wrote:
> >>Losing XCF connection to a sysplex member would be a whole
> >>nother level of impact that I've never been willing to sign
> >>up for even though our network today is far more reliable
> >>than it was 20 years ago.
> >
> > Isn't losing XCF connectivity something worth planning for? It's rare,
> but
> > I suppose it could happen no matter what the distance.
> >
> > Isn't it always best to weigh various risks, sometimes competing ones,
> and
> > try to get as much overall risk reduction as you can? You're in southern
> > California, and there are earthquakes and fires there, I've noticed.
> (Maybe
> > plagues of locusts next? :-)) One would think there's some extra
> California
> > value in awarding an extra point or two to distance there. Japan's 2011
> > Tōhoku earthquake and tsunami triggered some business continuity
> rethinking
> > there, and it has altered some decisions about data center locations,
> > distances, and deployment patterns. The risk profile can change. And, as
> > you mentioned, networks have improved a lot in 20 years while the risks
> > California faces seem to be somewhat different. It's always worth
> > revisiting past risk calculations when there's some material change in
> the
> > parameters -- "marking to market."
> >
> > If losing XCF connectivity would be that devastating, why have XCF links
> > (and a Parallel Sysplex) at all? It is technically possible to eliminate
> > those links. You just might not like the alternative. :-)
> >
> > You're also allowed to do "some of both." You can stretch a Parallel
> > Sysplex and run certain workloads across the stretch, while at the same
> > time you can have a non-stretched Parallel Sysplex and run other
> workloads
> > non-stretched. That sort of deployment configuration is technically
> > possible, and conceptually it's not a huge leap from the classic remote
> > tape library deployments.
> >
> >
> 
> > Timothy Sipples
> > IT Architect Executive, Industry Solutions, IBM Z and LinuxONE,
> AP/GCG/MEA
> > E-Mail: sipp...@sg.ibm.com
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
>
> --
> Mike A Schwab, Springfield IL USA
> Where do Forest Rangers go to get away from it all?
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>


-- 

Rob Schramm

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-11 Thread Mike Schwab
One company had data centers in Miami and New Orleans.  Miami shut
down for a hurricane, and wasn't back up before Katrina hit New
Orleans.

On Thu, Jan 11, 2018 at 6:44 AM, Timothy Sipples  wrote:
> J.O.Skip Robinson wrote:
>>Losing XCF connection to a sysplex member would be a whole
>>nother level of impact that I've never been willing to sign
>>up for even though our network today is far more reliable
>>than it was 20 years ago.
>
> Isn't losing XCF connectivity something worth planning for? It's rare, but
> I suppose it could happen no matter what the distance.
>
> Isn't it always best to weigh various risks, sometimes competing ones, and
> try to get as much overall risk reduction as you can? You're in southern
> California, and there are earthquakes and fires there, I've noticed. (Maybe
> plagues of locusts next? :-)) One would think there's some extra California
> value in awarding an extra point or two to distance there. Japan's 2011
> Tōhoku earthquake and tsunami triggered some business continuity rethinking
> there, and it has altered some decisions about data center locations,
> distances, and deployment patterns. The risk profile can change. And, as
> you mentioned, networks have improved a lot in 20 years while the risks
> California faces seem to be somewhat different. It's always worth
> revisiting past risk calculations when there's some material change in the
> parameters -- "marking to market."
>
> If losing XCF connectivity would be that devastating, why have XCF links
> (and a Parallel Sysplex) at all? It is technically possible to eliminate
> those links. You just might not like the alternative. :-)
>
> You're also allowed to do "some of both." You can stretch a Parallel
> Sysplex and run certain workloads across the stretch, while at the same
> time you can have a non-stretched Parallel Sysplex and run other workloads
> non-stretched. That sort of deployment configuration is technically
> possible, and conceptually it's not a huge leap from the classic remote
> tape library deployments.
>
> 
> Timothy Sipples
> IT Architect Executive, Industry Solutions, IBM Z and LinuxONE, AP/GCG/MEA
> E-Mail: sipp...@sg.ibm.com
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-11 Thread Timothy Sipples
J.O.Skip Robinson wrote:
>Losing XCF connection to a sysplex member would be a whole
>nother level of impact that I've never been willing to sign
>up for even though our network today is far more reliable
>than it was 20 years ago.

Isn't losing XCF connectivity something worth planning for? It's rare, but
I suppose it could happen no matter what the distance.

Isn't it always best to weigh various risks, sometimes competing ones, and
try to get as much overall risk reduction as you can? You're in southern
California, and there are earthquakes and fires there, I've noticed. (Maybe
plagues of locusts next? :-)) One would think there's some extra California
value in awarding an extra point or two to distance there. Japan's 2011
Tōhoku earthquake and tsunami triggered some business continuity rethinking
there, and it has altered some decisions about data center locations,
distances, and deployment patterns. The risk profile can change. And, as
you mentioned, networks have improved a lot in 20 years while the risks
California faces seem to be somewhat different. It's always worth
revisiting past risk calculations when there's some material change in the
parameters -- "marking to market."

If losing XCF connectivity would be that devastating, why have XCF links
(and a Parallel Sysplex) at all? It is technically possible to eliminate
those links. You just might not like the alternative. :-)

You're also allowed to do "some of both." You can stretch a Parallel
Sysplex and run certain workloads across the stretch, while at the same
time you can have a non-stretched Parallel Sysplex and run other workloads
non-stretched. That sort of deployment configuration is technically
possible, and conceptually it's not a huge leap from the classic remote
tape library deployments.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z and LinuxONE, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-11 Thread Vernooij, Kees (ITOPT1) - KLM
If this helps: 
We run a parallel sysplex with sites at 16 - 18 km (2 separate routes with some 
difference in distance) with active systems and CFs at both sites, without 
problems.
Most Sync CF Requests to the Remote CFs are converted to Async.
To minimize the Async/Remote CF delays, we configure structures over the CFs in 
such a way that the most busy or most important structures are in the busiest 
or the most important site.
We do not use System Managed Coupling Facility Structure Duplexing. All our 
applications are able to recover their structures well.
SMCFSD's inter-CF communication would add a number of elongated delays to each 
CF update request. The advantage of SMCFSD is that each site has a copy of the 
structure and intelligence can choose the nearest (=fastest) CF for read 
requests.
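
As a purely illustrative sketch of that placement rule, the little Python below 
compares "everything in one site" with "each structure in the CF at the site that 
drives most of its traffic". The structure names, request rates and the 17 km / 
10 us-per-km penalty are invented numbers, not a real configuration.

# Put each structure in the CF at the site that drives most of its traffic,
# so as little CF traffic as possible has to cross the inter-site distance.
US_PER_KM_ROUND_TRIP = 10.0
DISTANCE_KM = 17.0              # roughly the 16-18 km quoted above

# requests/sec per structure from each site (made-up numbers)
traffic = {
    "DB2_LOCK1":  {"SITE1": 9000, "SITE2": 3000},
    "LOGGER_STR": {"SITE1": 1500, "SITE2":  500},
    "MQ_ADMIN":   {"SITE1":  200, "SITE2":  800},
}

def remote_delay_us_per_sec(placement):
    """Total extra microseconds per second spent crossing sites for a placement."""
    penalty = DISTANCE_KM * US_PER_KM_ROUND_TRIP
    return sum(rate * penalty
               for structure, site_rates in traffic.items()
               for site, rate in site_rates.items()
               if site != placement[structure])

everything_in_site1 = {s: "SITE1" for s in traffic}
busiest_site_wins   = {s: max(rates, key=rates.get) for s, rates in traffic.items()}

print("all structures in SITE1:", remote_delay_us_per_sec(everything_in_site1), "us/s of added wait")
print("placed by busiest site :", remote_delay_us_per_sec(busiest_site_wins), "us/s of added wait")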

Kees.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Alan(GMAIL)Watthey
> Sent: 07 January, 2018 8:43
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: SYSPLEX distance
> 
> Thanks to everyone for their insights and pointers on this matter.  It
> is
> obviously going to be very complicated to predict what might happen if
> we
> increase from our current 0.3km to something like (say) 20km.
> 
> The IBM Redbook I mentioned suggests an IBM service to analyse some data
> (presumably SMF) that can give some information.  If that were to
> highlight
> our particularly bad transactions it would be very useful.  I suspect we
> have some badly written ones that would be particularly susceptible to
> longer CF response times.  Does anyone know if this service still exists
> and
> where one might find it?
> 
> I'll see if I can find the 2017 information Timothy mentioned below as
> this
> is new to me (any pointers - here, offline or Sametime as appropriate).
> The
> Asynchronous CF feature was mentioned in an earlier response but we will
> have to upgrade our software to get there.  However, that was already in
> the
> planning.
> 
> I have no idea where the question originally came from but maybe they
> feel
> that with the two sites so close together, if they lose one system then
> they
> could very easily lose the other as well.  This would affect our
> Business
> Continuity (Metro Mirror).  Our DR site (Global Mirror) is safe being
> much
> further away but of course would realistically take at least an hour (on
> a
> good day and with a following wind) to get the end users connected in
> to.
> 
> Regards,
> Alan Watthey
> 
> -Original Message-
> From: Timothy Sipples [mailto:sipp...@sg.ibm.com]
> Sent: 04 January 2018 8:42 am
> Subject: Re: SYSPLEX distance
> 
> Please make sure you take one recent (late 2016) innovation into
> consideration: Asynchronous CF Lock Duplexing. My understanding is that
> this recently introduced Coupling Facility feature offers performance
> improvements in many scenarios, including some distance "stretched"
> Parallel Sysplexes. IBM published some related performance test data
> only
> last year (2017). If you're looking at older references, you might be
> missing a lot.
> 
> It could be helpful to understand the motivation(s) behind the question.
> As
> a notable example, does somebody want to create (or maintain) a
> "BronzePlex" to satisfy Parallel Sysplex aggregation rules? (Those rules
> are becoming less relevant now, at least, but that's a separate point.)
> As
> another example, if the focus is on protecting and preserving data, then
> it
> might make sense to stretch the storage but not the Sysplex.
> 
> 
> 
> 
> Timothy Sipples
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-10 Thread Jesse 1 Robinson
My reservation about running a sysplex over a network (as opposed to direct 
link cables) is not merely latency. We went with XRC for DR because it's 
asynchronous and therefore fairly tolerant of network delays. For me the real 
problem is that an entire extended sysplex may be at risk in case of network 
disconnection. In the early days for us (late 90s) we experienced fairly 
frequent network interruptions that caused mirroring to suspend. There was no 
impact on the production sysplex itself, and mirroring could be resumed with 
minimum effort. Losing XCF connection to a sysplex member would be a whole 
nother level of impact that I've never been willing to sign up for even though 
our network today is far more reliable than it was 20 years ago. Likewise 
connection to common sysplex DASD would be subject to the same level of 
uncertainty. 

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Rob Schramm
Sent: Wednesday, January 10, 2018 8:09 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: SYSPLEX distance

I was part of a group that ran a parallel sysplex that was 13 miles apart.
The time it takes the light to travel adds up.  This was back in 2007.   It
worked.  It ran that way for at least a few years.  It was not a GDPS setup.  I 
think it was EMC disk at the time.

Rob Schramm



On Sun, Jan 7, 2018 at 12:13 PM Jesse 1 Robinson <jesse1.robin...@sce.com>
wrote:

> Our DR site is >100KM from our production site. In the early days of 
> serious mirroring for DR (mid/late 90s), running a single sysplex 
> across that distance was out of the question. It wasn't just a timing 
> issue--although that was enough reason not to try--but network 
> technology of the day was too flaky to run a single sysplex reliably. 
> Connectivity is far better today, but I would not bet day-to-day 
> production continuity on it.
>
> I think you would find a lot of complexity in trying to run a single 
> sysplex. In particular, how would you handle mirrored DASD? The remote 
> sysplex member would surely have to use the same physical DASD as the 
> local member(s). If the local glass house failed, the DASD would 
> presumably be unusable for the remote member. You would then have to 
> re-IPL with the mirrored copy, so there would be an outage of 
> uncertain duration. In our case, the goal is to be up and running 
> within four hours, which includes time for applications to recover and 
> verify the environment. That's a lot longer than your hypothetical one 
> hour, but even with a remote member up and running, how long would it take to 
> switch over to it?
>
> I don't know where you're located, but we live in earthquake country 
> where disruption can be widespread. The more you need significant 
> separation for contingency, the less opportunity you have for running a 
> single sysplex.
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 <(323)%20715-0595> Mobile
> 626-543-6132 <(626)%20543-6132> Office ⇐=== NEW robin...@sce.com
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] 
> On Behalf Of Alan(GMAIL)Watthey
> Sent: Saturday, January 06, 2018 11:43 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: (External):Re: SYSPLEX distance
>
> Thanks to everyone for their insights and pointers on this matter.  It 
> is obviously going to be very complicated to predict what might happen 
> if we increase from our current 0.3km to something like (say) 20km.
>
> The IBM Redbook I mentioned suggests an IBM service to analyse some 
> data (presumably SMF) that can give some information.  If that were to 
> highlight our particularly bad transactions it would be very useful.  
> I suspect we have some badly written ones that would be particularly 
> susceptible to longer CF response times.  Does anyone know if this 
> service still exists and where one might find it?
>
> I'll see if I can find the 2017 information Timothy mentioned below as 
> this is new to me (any pointers - here, offline or Sametime as 
> appropriate).  The Asynchronous CF feature was mentioned in an earlier 
> response but we will have to upgrade our software to get there.  
> However, that was already in the planning.
>
> I have no idea where the question originally came from but maybe they 
> feel that with the two sites so close together, if they lose one 
> system then they could very easily lose the other as well.  This would 
> affect our Business Continuity (Metro 

Re: SYSPLEX distance

2018-01-10 Thread Rob Schramm
I was part of a group that ran a parallel sysplex that was 13 miles apart.
The time it takes the light to travel adds up.  This was back in 2007.   It
worked.  It ran that way for at least a few years.  It was not a GDPS
setup.  I think it was EMC disk at the time.

Rob Schramm



On Sun, Jan 7, 2018 at 12:13 PM Jesse 1 Robinson <jesse1.robin...@sce.com>
wrote:

> Our DR site is >100KM from our production site. In the early days of
> serious mirroring for DR (mid/late 90s), running a single sysplex across
> that distance was out of the question. It wasn't just a timing
> issue--although that was enough reason not to try--but network technology
> of the day was too flaky to run a single sysplex reliably. Connectivity is
> far better today, but I would not bet day-to-day production continuity on
> it.
>
> I think you would find a lot of complexity in trying to run a single
> sysplex. In particular, how would you handle mirrored DASD? The remote
> sysplex member would surely have to use the same physical DASD as the local
> member(s). If the local glass house failed, the DASD would presumably be
> unusable for the remote member. You would then have to re-IPL with the
> mirrored copy, so there would be an outage of uncertain duration. In our
> case, the goal is to be up and running within four hours, which includes
> time for applications to recover and verify the environment. That's a lot
> longer than your hypothetical one hour, but even with a remote member up
> and running, how long would it take to switch over to it?
>
> I don't know where you're located, but we live in earthquake country where
> disruption can be widespread. The more you need significant separation for
> contingency, the less opportunity you have for running a single sysplex.
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 <(323)%20715-0595> Mobile
> 626-543-6132 <(626)%20543-6132> Office ⇐=== NEW
> robin...@sce.com
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Alan(GMAIL)Watthey
> Sent: Saturday, January 06, 2018 11:43 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: (External):Re: SYSPLEX distance
>
> Thanks to everyone for their insights and pointers on this matter.  It is
> obviously going to be very complicated to predict what might happen if we
> increase from our current 0.3km to something like (say) 20km.
>
> The IBM Redbook I mentioned suggests an IBM service to analyse some data
> (presumably SMF) that can give some information.  If that were to highlight
> our particularly bad transactions it would be very useful.  I suspect we
> have some badly written ones that would be particularly susceptible to
> longer CF response times.  Does anyone know if this service still exists
> and where one might find it?
>
> I'll see if I can find the 2017 information Timothy mentioned below as
> this is new to me (any pointers - here, offline or Sametime as
> appropriate).  The Asynchronous CF feature was mentioned in an earlier
> response but we will have to upgrade our software to get there.  However,
> that was already in the planning.
>
> I have no idea where the question originally came from but maybe they feel
> that with the two sites so close together, if they lose one system then
> they could very easily lose the other as well.  This would affect our
> Business Continuity (Metro Mirror).  Our DR site (Global Mirror) is safe
> being much further away but of course would realistically take at least an
> hour (on a good day and with a following wind) to get the end users
> connected in to.
>
> Regards,
> Alan Watthey
>
> -Original Message-
> From: Timothy Sipples [mailto:sipp...@sg.ibm.com]
> Sent: 04 January 2018 8:42 am
> Subject: Re: SYSPLEX distance
>
> Please make sure you take one recent (late 2016) innovation into
> consideration: Asynchronous CF Lock Duplexing. My understanding is that
> this recently introduced Coupling Facility feature offers performance
> improvements in many scenarios, including some distance "stretched"
> Parallel Sysplexes. IBM published some related performance test data only
> last year (2017). If you're looking at older references, you might be
> missing a lot.
>
> It could be helpful to understand the motivation(s) behind the question.
> As a notable example, does somebody want to create (or maintain) a
> "BronzePlex" to satisfy Parallel Sysplex aggregation rules? (Those rules
> are becoming less relevant now, at least, but that's a separate point.) As
> another example, if the focus is on protecting and preserving data, then it
> mi

Re: SYSPLEX distance

2018-01-07 Thread Jesse 1 Robinson
Our DR site is >100KM from our production site. In the early days of serious 
mirroring for DR (mid/late 90s), running a single sysplex across that distance 
was out of the question. It wasn't just a timing issue--although that was 
enough reason not to try--but network technology of the day was too flaky to 
run a single sysplex reliably. Connectivity is far better today, but I would 
not bet day-to-day production continuity on it. 

I think you would find a lot of complexity in trying to run a single sysplex. 
In particular, how would you handle mirrored DASD? The remote sysplex member 
would surely have to use the same physical DASD as the local member(s). If the 
local glass house failed, the DASD would presumably be unusable for the remote 
member. You would then have to re-IPL with the mirrored copy, so there would be 
an outage of uncertain duration. In our case, the goal is to be up and running 
within four hours, which includes time for applications to recover and verify 
the environment. That's a lot longer than your hypothetical one hour, but even 
with a remote member up and running, how long would it take to switch over to it?

I don't know where you're located, but we live in earthquake country where 
disruption can be widespread. The more you need significant separation for 
contingency, the less opportunity you have for running a single sysplex. 

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Alan(GMAIL)Watthey
Sent: Saturday, January 06, 2018 11:43 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: SYSPLEX distance

Thanks to everyone for their insights and pointers on this matter.  It is 
obviously going to be very complicated to predict what might happen if we 
increase from our current 0.3km to something like (say) 20km.

The IBM Redbook I mentioned suggests an IBM service to analyse some data 
(presumably SMF) that can give some information.  If that were to highlight our 
particularly bad transactions it would be very useful.  I suspect we have some 
badly written ones that would be particularly susceptible to longer CF response 
times.  Does anyone know if this service still exists and where one might find 
it?

I'll see if I can find the 2017 information Timothy mentioned below as this is 
new to me (any pointers - here, offline or Sametime as appropriate).  The 
Asynchronous CF feature was mentioned in an earlier response but we will have 
to upgrade our software to get there.  However, that was already in the 
planning.

I have no idea where the question originally came from but maybe they feel that 
with the two sites so close together, if they lose one system then they could 
very easily lose the other as well.  This would affect our Business Continuity 
(Metro Mirror).  Our DR site (Global Mirror) is safe being much further away 
but of course would realistically take at least an hour (on a good day and with 
a following wind) to get the end users connected in to.

Regards,
Alan Watthey

-Original Message-
From: Timothy Sipples [mailto:sipp...@sg.ibm.com]
Sent: 04 January 2018 8:42 am
Subject: Re: SYSPLEX distance

Please make sure you take one recent (late 2016) innovation into
consideration: Asynchronous CF Lock Duplexing. My understanding is that this 
recently introduced Coupling Facility feature offers performance improvements 
in many scenarios, including some distance "stretched"
Parallel Sysplexes. IBM published some related performance test data only last 
year (2017). If you're looking at older references, you might be missing a lot.

It could be helpful to understand the motivation(s) behind the question. As a 
notable example, does somebody want to create (or maintain) a "BronzePlex" to 
satisfy Parallel Sysplex aggregation rules? (Those rules are becoming less 
relevant now, at least, but that's a separate point.) As another example, if 
the focus is on protecting and preserving data, then it might make sense to 
stretch the storage but not the Sysplex.



Timothy Sipples


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-06 Thread Alan(GMAIL)Watthey
Thanks to everyone for their insights and pointers on this matter.  It is
obviously going to be very complicated to predict what might happen if we
increase from our current 0.3km to something like (say) 20km.

The IBM Redbook I mentioned suggests an IBM service to analyse some data
(presumably SMF) that can give some information.  If that were to highlight
our particularly bad transactions it would be very useful.  I suspect we
have some badly written ones that would be particularly susceptible to
longer CF response times.  Does anyone know if this service still exists and
where one might find it?

I'll see if I can find the 2017 information Timothy mentioned below as this
is new to me (any pointers - here, offline or Sametime as appropriate).  The
Asynchronous CF feature was mentioned in an earlier response but we will
have to upgrade our software to get there.  However, that was already in the
planning.

I have no idea where the question originally came from but maybe they feel
that with the two sites so close together, if they lose one system then they
could very easily lose the other as well.  This would affect our Business
Continuity (Metro Mirror).  Our DR site (Global Mirror) is safe being much
further away but of course would realistically take at least an hour (on a
good day and with a following wind) to get the end users connected in to.

Regards,
Alan Watthey

-Original Message-
From: Timothy Sipples [mailto:sipp...@sg.ibm.com] 
Sent: 04 January 2018 8:42 am
Subject: Re: SYSPLEX distance

Please make sure you take one recent (late 2016) innovation into
consideration: Asynchronous CF Lock Duplexing. My understanding is that
this recently introduced Coupling Facility feature offers performance
improvements in many scenarios, including some distance "stretched"
Parallel Sysplexes. IBM published some related performance test data only
last year (2017). If you're looking at older references, you might be
missing a lot.

It could be helpful to understand the motivation(s) behind the question. As
a notable example, does somebody want to create (or maintain) a
"BronzePlex" to satisfy Parallel Sysplex aggregation rules? (Those rules
are becoming less relevant now, at least, but that's a separate point.) As
another example, if the focus is on protecting and preserving data, then it
might make sense to stretch the storage but not the Sysplex.



Timothy Sipples

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Edward Gould
> On Jan 3, 2018, at 7:55 PM, Mike Schwab  wrote:
> 
> Searched for ibm zos cryptography.  First 10 results seemed useful.  A
> share expo, some intros, some PDF manuals.
> https://www.google.com/search?q=ibm+zos+cryptography
> 
(off list)
I glanced through the google list and a couple of entries. Maybe I am just too 
thick.

I still have yet to figure out how, if one system writes the dataset and another 
system goes to read it, the reader will know the key that was used during dataset 
creation. It becomes increasingly complex with GDPS, no?
Ed

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Timothy Sipples
Please make sure you take one recent (late 2016) innovation into
consideration: Asynchronous CF Lock Duplexing. My understanding is that
this recently introduced Coupling Facility feature offers performance
improvements in many scenarios, including some distance "stretched"
Parallel Sysplexes. IBM published some related performance test data only
last year (2017). If you're looking at older references, you might be
missing a lot.

It could be helpful to understand the motivation(s) behind the question. As
a notable example, does somebody want to create (or maintain) a
"BronzePlex" to satisfy Parallel Sysplex aggregation rules? (Those rules
are becoming less relevant now, at least, but that's a separate point.) As
another example, if the focus is on protecting and preserving data, then it
might make sense to stretch the storage but not the Sysplex.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z and LinuxONE, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Mike Schwab
Searched for ibm zos cryptography.  First 10 results seemed useful.  A
share expo, some intros, some PDF manuals.
https://www.google.com/search?q=ibm+zos+cryptography

On Wed, Jan 3, 2018 at 6:53 PM, Edward Gould  wrote:
>> On Jan 3, 2018, at 3:42 PM, Mike Schwab  wrote:
>>
>> It says global, but limited intercontinental links limit you to one 
>> continent.
>>
>> On Wed, Jan 3, 2018 at 3:31 PM, Edward Gould  wrote:
>
> Mike,
>
> At one previous employer's, the place wanted to span continents (US and India), I 
> believe. I am not sure how much of this actually happened, as I left while it was 
> still in the thinking process.
>
> I do vaguely remember that they had issues just going from Chicago to the 
> west coast. There were technical issues I believe (but was not privy to them).
>
> I am *still* unclear as to how cryptography works in any sysplex, whether it 
> be local or remote. Is there a Redpiece on this somewhere?
>
> Ed
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Edward Gould
> On Jan 3, 2018, at 3:42 PM, Mike Schwab  wrote:
> 
> It says global, but limited intercontinental links limit you to one continent.
> 
> On Wed, Jan 3, 2018 at 3:31 PM, Edward Gould  wrote:

Mike,

At one previous employer's, the place wanted to span continents (US and India), I 
believe. I am not sure how much of this actually happened, as I left while it was 
still in the thinking process.

I do vaguely remember that they had issues just going from Chicago to the west 
coast. There were technical issues I believe (but was not privy to them).

I am *still* unclear as to how cryptography works in any sysplex, whether it be 
local or remote. Is there a Redpiece on this somewhere?

Ed

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Mike Schwab
It says global, but limited intercontinental links limit you to one continent.

On Wed, Jan 3, 2018 at 3:31 PM, Edward Gould  wrote:
>> On Jan 3, 2018, at 2:51 AM, Mike Schwab  wrote:
>>
>> https://www.ibm.com/it-infrastructure/z/technologies/gdps
>>
>> https://www.infoworld.com/article/2614033/disaster-recovery/data-centers-under-water--what--me-worry-.html
> Mike,
>
> Thank you a LOT for this! Management will be getting a copy today.
>
> Ed
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Edward Gould
> On Jan 3, 2018, at 2:51 AM, Mike Schwab  wrote:
> 
> https://www.ibm.com/it-infrastructure/z/technologies/gdps
> 
> https://www.infoworld.com/article/2614033/disaster-recovery/data-centers-under-water--what--me-worry-.html
Mike,

Thank you a LOT for this! Management will be getting a copy today.

Ed


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Dana Mitchell
Also keep in mind, if you are reading old Redbooks, whether they reference Sysplex 
Timers. The change from Sysplex Timer to STP moved timer signalling from discrete 
ETR links to CF links, which may have an influence on distance and latency planning.
Dana

On Wed, 3 Jan 2018 09:18:50 +0300, Alan(GMAIL)Watthey  
wrote:

> I've
>dug up an old IBM Redbook on the issue where they did tests with a sysplex
>at 0, 20, 40, 60 and 80 kms apart.  Physical limitations (eg. FICON) don't
>seem to be an issue.  We are a CICS and DB2 shop so the manual certainly
>addressed issues that we might see but it is dated 2008 so has anything
>changed since then?  
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread mario bezzi

Alan,

after the redbook the team and I have been involved in many projects to 
distribute workloads across distance.


The main issue is the secondary effect of distance, which depends on how 
your applications are written. In my experience this is mainly related 
to data access and serialization. I have large customers happily running 
at 30+ km distance, and others who gave up at 5 km because of their 
applications' locking rates, which make them very sensitive to distance.
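
A crude way to see the locking-rate effect in numbers (Python, with invented 
request counts, ignoring sync-to-async conversion, duplexing and any middleware 
caching; the only firm figure is the ~10 us of round-trip delay per km):

# Each synchronous CF request that has to cross sites picks up roughly 10 us of
# round-trip propagation delay per km.  Request counts per transaction are invented.

US_PER_KM_ROUND_TRIP = 10.0

transactions = {
    "well-behaved transaction":   50,   # remote sync CF/lock requests per transaction (assumed)
    "lock-heavy transaction":   2000,
}

for name, cf_requests in transactions.items():
    for km in (5, 30):
        added_ms = cf_requests * km * US_PER_KM_ROUND_TRIP / 1000.0
        print(f"{name} at {km:>2} km: ~{added_ms:6.1f} ms added per transaction")

With those invented counts the well-behaved transaction barely notices 30 km, 
while the lock-heavy one is already paying on the order of a tenth of a second 
at 5 km, which is the kind of difference that decides who can stretch and who 
gives up.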


A review of your current transactions' response times by component could 
tell you if there are serious inhibitors, but in most cases these only 
show up when going over distance. That's why we always recommend testing 
as there is no accurate way to predict the impact of distance on 
response times.


Attention must be paid to batch processing, which benefits less from 
caching mechanisms provided by middleware. Is your batch window 
constrained? Do you plan to run batch across sites or locally in the 
site which will host primary DASD and CFs? Batch can be a more sensitive 
workload than online.


Finally, and I try to address this before starting any technological 
discussion: What are you trying to achieve by going over distance? Most 
customers I worked with equate a multi-site parallel sysplex with even 
workload distribution to non-stop operation. This may or may not be 
true, and depending on your actual needs a slightly different 
configuration might do with less performance impact.


Asynchronous CF locking is the major enhancement in the area of 
performance over distance. It was made available early last year, and it 
requires DB2 V12, z/OS Version 2 Release 2 + APAR OA47796, and CFCC level 21 
with Service Level 02.16 or later.


Hope this helps,
mario

On 01/03/2018 07:17 AM, Alan (GMAIL) Watthey wrote:

I have had a strange request from management as to how far apart we can move
our production systems.  I know there are limitations on how far the (in
this case) two systems (and two coupling facilities) can be apart and I've
dug up an old IBM Redbook on the issue where they did tests with a sysplex
at 0, 20, 40, 60 and 80 kms apart.  Physical limitations (eg. FICON) don't
seem to be an issue.  We are a CICS and DB2 shop so the manual certainly
addressed issues that we might see but it is dated 2008 so has anything
changed since then?  CICS and DB2 have moved on a long way in that time.

  


I was thinking of saying up to 10km but this is really a finger in the air
value.  Maybe it's only 5 and maybe it's 20 or 30.  Can we just throw CPU and
memory at it, as I would think we would have a lot more transactions running
at the same time, with some (but not all, apparently) incurring extra delays?
The transaction-time increases suggested in that manual are measured in
milliseconds, and these days (with all the distributed systems involved) the
users are happy to get 10-second response times.  Admittedly, this can involve
many, many transactions behind the scenes from the application servers to
populate their crowded browser screens.  Gone are the days of data-entry
pools and sub-second responses.

  


What I was wondering is whether there is anyone out there with real-life experience on
this kind of activity.  How far apart do people run their sysplex systems?
What gotchas sprang up to relieve them of their sanity?  Any pointers would
be gratefully appreciated.

  


Regards,

Alan Watthey

  


From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of IBM-MAIN automatic digest system
Sent: 03 January 2018 8:00 am
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: IBM-MAIN Digest - 1 Jan 2018 to 2 Jan 2018 (#2018-2)

  

  



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Parwez Hamid
Some additional sources of info:

System z Parallel Sysplex Best Practices: 
http://www.redbooks.ibm.com/redpieces/abstracts/sg247817.html

Parallel Sysplex on IBM Z: 
https://www.ibm.com/it-infrastructure/z/technologies/parallel-sysplex

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSPLEX distance

2018-01-03 Thread Mike Schwab
https://www.ibm.com/it-infrastructure/z/technologies/gdps

https://www.infoworld.com/article/2614033/disaster-recovery/data-centers-under-water--what--me-worry-.html

On Wed, Jan 3, 2018 at 12:18 AM, Alan(GMAIL)Watthey  wrote:
> I have had a strange request from management as to how far apart we can move
> our production systems.  I know there are limitations on how far the (in
> this case) two systems (and two coupling facilities) can be apart and I've
> dug up an old IBM Redbook on the issue where they did tests with a sysplex
> at 0, 20, 40, 60 and 80 kms apart.  Physical limitations (eg. FICON) don't
> seem to be an issue.  We are a CICS and DB2 shop so the manual certainly
> addressed issues that we might see but it is dated 2008 so has anything
> changed since then?  CICS and DB2 have moved on a long way in that time.
>
>
>
> I was thinking of saying up to 10km but this is really a finger in the air
> value.  Maybe it's only 5 and maybe it's 20 or 30.  Can we just throw CPU and
> memory at it, as I would think we would have a lot more transactions running
> at the same time, with some (but not all, apparently) incurring extra delays?
> The transaction-time increases suggested in that manual are measured in
> milliseconds, and these days (with all the distributed systems involved) the
> users are happy to get 10-second response times.  Admittedly, this can involve
> many, many transactions behind the scenes from the application servers to
> populate their crowded browser screens.  Gone are the days of data-entry
> pools and sub-second responses.
>
>
>
> What I was wondering is whether there is anyone out there with real-life experience on
> this kind of activity.  How far apart do people run their sysplex systems?
> What gotchas sprang up to relieve them of their sanity?  Any pointers would
> be gratefully appreciated.
>
>
>
> Regards,
>
> Alan Watthey
>
>
>
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of IBM-MAIN automatic digest system
> Sent: 03 January 2018 8:00 am
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: IBM-MAIN Digest - 1 Jan 2018 to 2 Jan 2018 (#2018-2)
>
>
>
>
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


SYSPLEX distance

2018-01-02 Thread Alan(GMAIL)Watthey
I have had a strange request from management as to how far apart we can move
our production systems.  I know there are limitations on how far the (in
this case) two systems (and two coupling facilities) can be apart and I've
dug up an old IBM Redbook on the issue where they did tests with a sysplex
at 0, 20, 40, 60 and 80 kms apart.  Physical limitations (eg. FICON) don't
seem to be an issue.  We are a CICS and DB2 shop so the manual certainly
addressed issues that we might see but it is dated 2008 so has anything
changed since then?  CICS and DB2 have moved on a long way in that time.

 

I was thinking of saying up to 10km but this is really a finger in the air
value.  Maybe it's only 5 and maybe it's 20 or 30.  Can we just throw CPU and
memory at it, as I would think we would have a lot more transactions running
at the same time, with some (but not all, apparently) incurring extra delays?
The transaction-time increases suggested in that manual are measured in
milliseconds, and these days (with all the distributed systems involved) the
users are happy to get 10-second response times.  Admittedly, this can involve
many, many transactions behind the scenes from the application servers to
populate their crowded browser screens.  Gone are the days of data-entry
pools and sub-second responses.

 

What I was wondering is whether there is anyone out there with real-life experience on
this kind of activity.  How far apart do people run their sysplex systems?
What gotchas sprang up to relieve them of their sanity?  Any pointers would
be gratefully appreciated.

 

Regards,

Alan Watthey

 

From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of IBM-MAIN automatic digest system
Sent: 03 January 2018 8:00 am
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: IBM-MAIN Digest - 1 Jan 2018 to 2 Jan 2018 (#2018-2)

 

 


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN