The 'fiber suitcase' is nothing more than a cable reel holding 10 or 20 km of 
fiber. You plug it into your fiber configuration and start measuring the 
what-if situation you are interested in.
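
Before you wheel the suitcase in, you can already do the arithmetic: the rule 
of thumb quoted further down this thread is roughly 10 usec of added 
round-trip delay per km of fibre. A rough sketch of that estimate (plain 
Python, only a back-of-envelope figure, not an IBM tool):

    # Back-of-envelope estimate of what an extra reel of fibre does to a
    # synchronous CF request. The ~10 usec of round-trip delay per km and
    # the ~5 usec local service time are the figures quoted elsewhere in
    # this thread; real results depend on link type, DWDM equipment and
    # protocol overhead, so treat this as a starting point, not a prediction.
    ROUND_TRIP_USEC_PER_KM = 10.0

    def estimated_sync_service_usec(baseline_usec, added_km):
        """Baseline synchronous service time plus the delay of the added fibre."""
        return baseline_usec + added_km * ROUND_TRIP_USEC_PER_KM

    for km in (0, 10, 20):
        print("%2d km reel: ~%6.1f usec per request"
              % (km, estimated_sync_service_usec(5.0, km)))

The suitcase then shows you how far reality deviates from that straight-line 
estimate for your particular links and workload.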

Kees.

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Martin Packer
Sent: 23 December, 2015 9:17
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Coupling Facility Structure Re-sizing

Skip, I'd share your scepticism about sites 100+ km apart. I don't know of 
anybody doing anything remotely stressful in CF terms over that distance.

All my customers who are doing e.g. Data Sharing over distance plan and 
measure extremely carefully - and they're doing it over a very few tens of 
km.

I've heard of something called something like a "fibre suitcase" for 
measuring in test.

Could someone who has such a thing tell me its proper name and a little 
more about it? Thanks!

I've actually blogged extensively about the RMF 74-4 latency number 
(relatively new) - which I think is useful in checking distance and 
hinting at routing. While not wanting to advertise the posts, I think this 
latency number is one people should check occasionally.
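
For anyone who wants the arithmetic behind that check: using the roughly 
10 usec of round-trip delay per km of fibre mentioned further down this 
thread, an observed latency translates into an upper bound on the cable 
distance. A back-of-envelope sketch (plain Python, my own rough figures, not 
an official formula; it assumes the latency is a round-trip value):

    # Rough conversion from an observed CF link latency (in microseconds)
    # to an approximate cable distance, assuming ~10 usec of round-trip
    # delay per km of fibre. Routing, DWDM equipment and protocol overhead
    # all inflate the latency, so the result is an upper bound on how much
    # cable can be in the path, not a measurement of it.
    ROUND_TRIP_USEC_PER_KM = 10.0

    def approx_distance_km(latency_usec):
        return latency_usec / ROUND_TRIP_USEC_PER_KM

    for latency_usec in (50.0, 150.0, 300.0):
        print("%5.0f usec latency -> at most ~%4.0f km of cable"
              % (latency_usec, approx_distance_km(latency_usec)))

If the number comes out much bigger than the distance the network team 
quotes, that's a hint the routing is worth a closer look.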

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   Skip Robinson <jo.skip.robin...@att.net>
To:     IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 23:59
Subject:        Re: Coupling Facility Structure Re-sizing
Sent by:        IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



I made a lame assumption based on 20 years of parallel sysplex. Our
sysplexes have always consisted of boxes a few meters apart. I have
(rather unkindly) scoffed at suggestions that we build a single sysplex
between our data centers 100+ km apart. It's not as much about speed as
about the fallibility of network connections. The DWDM links that
transport XRC connections are wicked fast, but they hiccup occasionally
for usually unfathomable reasons. We can handle XRC suspend/resume, but
having a sysplex go hard down in such circumstances is not acceptable.
Maybe I'm behind the times, but that 'conversation with the boss' I
alluded to in a previous post looms large in my imagination.

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Tuesday, December 22, 2015 12:11 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> One crucial parameter: at what distance are the CFs?
> There must be a noticeable difference between 5 usecs for an unduplexed
> local CF and a number of 150-usec signals between CFs at 15 km distance.
> 
> Kees.
> 
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Martin Packer
> Sent: 22 December, 2015 8:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> We're not going to BLANKET recommend System-Managed Duplexing for high-
> volume, high-stringency structures such as LOCK1. SCA has little traffic.
> 
> But I've seen MANY customers (including the one I worked with yesterday
> here in Istanbul) who successfully use it. And I support their use of it.
> Other customers:
> 
> 1) Have a failure-isolated CF for such structures.
> 
> Or
> 
> 2) Take the risk of doing neither.
> 
> I've seen all 3 architectures even in the past 6 months. And your local
> IBMer is normally willing to give their view, hopefully backed up by data
> and people who know what they're talking about. :-)
> 
> Cheers, Martin
> 
> Martin Packer,
> zChampion, Principal Systems Investigator, Worldwide Cloud & Systems
> Performance, IBM
> 
> +44-7802-245-584
> 
> email: martin_pac...@uk.ibm.com
> 
> Twitter / Facebook IDs: MartinPacker
> Blog:
> https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker
> 
> 
> 
> From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
> To:     IBM-MAIN@LISTSERV.UA.EDU
> Date:   22/12/2015 07:39
> Subject:        Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> Sent by:        IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>
> 
> 
> 
> Of course 'it depends'.
> 
> At least on the distance between the CFs. Signals are delayed by 10
> usec/km. The number of signals traveling for SMCFSD has indeed been
> optimized since the beginning, but it still makes a difference whether
> the CFs are 1 or 15 km apart. Our latest research from this year is that
> IBM still does not recommend SMCFSD for Lock and SCA.
> 
> What is your configuration? If a CEC fails, the other DB2s in the group
> should do the recovery without delay. Did all your CECs and DB2s fail?
> Our experience is that a group restart is very fast, at most 2-3 minutes,
> and those are also IBM's figures.
> Altogether, we still see advantages in not using SMCFSD for Lock and SCA.
> 
> Why did you decide differently?
> 
> Kees.
> 
> 
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Skip Robinson
> Sent: 21 December, 2015 20:32
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> I'm talking from experience. The two hours-long CEC failures we had--most
> recently in the fall of 2014--took down all CICS and DB2 applications as
> well as three ICFs on the box that failed. The secondary 'penalty' box
> stayed up and kept live copies of structures so that after hardware
> repair, all LPARs--host and CF--came up with no recovery needed. In
> particular, no DB2 log processing, which is the worst case for recovery.
> 
> As for processing overhead, that's why IBM delayed SMCFSD. We're as
> concerned with performance as any shop. Millions of CICS/DB2 transactions
> per hour. For DASD mirroring, we went with XRC (async) rather than PPRC
> (sync) for that reason. Today we see no visible delays from SMCFSD. This
> is predicated on having enough CF engines to do the job. As previously
> stated, beware of putting CF LPARs on hardware that's slower than the
> exploiters. Note that CF, zIIP, and IFL engines run at full rated speed
> even on a box that's 'downsized' to run GP engines at less than maximum
> speed--to save software costs. That's why we're happy to put ICFs on
> otherwise slower penalty boxes.
> .
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> jo.skip.robin...@att.net
> jo.skip.robin...@gmail.com
> 
> 
> > -----Original Message-----
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> > On Behalf Of Vernooij, CP (ITOPT1) - KLM
> > Sent: Sunday, December 20, 2015 11:35 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> >
> > Your last statement is far too general in my opinion. SMCFSD is not
> > free: besides memory, which indeed is cheap these days, it will cost
> > performance, like PPRC does.
> > So one must always make the decision between high availability and
> > high performance.
> > Even without SMCFSD, structure availability is very high. And in the
> > rare event of a CF failure (when was your last one?) each exploiter of
> > CF structures should be able to recover from that failure. In my
> > experience they all do, except MQ.
> > If you have a CF failure, the structures are recovered within seconds
> > or minutes. If you can't bear the recovery delay, you can use
> > Duplexing. Besides that, if you have a CF failure, what other problems
> > do you have? Do you still need the zero recovery delay then?
> >
> > Kees.
> >
> > -----Original Message-----
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> > On Behalf Of Skip Robinson
> > Sent: 19 December, 2015 5:57
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: Coupling Facility Structure Re-sizing
> >
> > Wow, I feel so ancient. In the History of the World Part II, there are
> > two kinds of duplexing. The latecomer is System Managed Duplexing,
> > which is provided by z/OS - XCF - XES. The exploiter does not need to
> > participate in SMD (my acronym); he just reaps the benefits. But SMD
> > for customer use was delayed for quite a while because IBM could not
> > get it working. (More history.)
> >
> > Meanwhile DB2 could not wait for SMD and developed its own duplexing
> > mechanism. Hence DB2/IRLM does not need/use SMD. I forgot that when I
> > mentioned DB2 recovery. So I recommend that DUPLEX be specified for
> > all other structures that need SMD.
> >
> > .
> > .
> > .
> > J.O.Skip Robinson
> > Southern California Edison Company
> > Electric Dragon Team Paddler
> > SHARE MVS Program Co-Manager
> > 626-302-7535 Office
> > 323-715-0595 Mobile
> > jo.skip.robin...@att.net
> > jo.skip.robin...@gmail.com
> >
> > -----Original Message-----
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> > On Behalf Of phil yogendran
> > Sent: Friday, December 18, 2015 12:19 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> >
> > The increases recommended by the CF Sizer are marginal. Our structures
> > in production are generously sized and we have lots of storage in the
> > new CFs so that's not a concern. I will however look out for messages
> > as suggested.
> >
> > Most of our structures are duplexed. Some, like the structure for the
> > IRLM lock, are not. I have a note to investigate the product-specific
> > doc to understand this better.
> >
> > I also need to check on the performance of CF links as we're going to
> > ICB links now.
> >
> > Thanks for the info.
> >
> >
> >
> >
> > On Fri, Dec 18, 2015 at 12:42 PM, Skip Robinson
> > <jo.skip.robin...@att.net> wrote:
> >
> > > In case you're curious, the parameters 'missing' from your old
> > > definitions were added over the years since the advent of coupling
> > > facility. The new parameters all have defaults such that they do not
> > > actually require specification, but using them may give you better
> > > control over structure sizes. Some additional points:
> > >
> > > -- At any time, the CF Sizer makes recommendations based on the
> > > latest hardware with the latest microcode. Newer hardware or newer
> > > microcode typically requires larger structures to accomplish the
> > > same work even with no changes to the exploiters.
> > >
> > > -- In my experience, CF Sizer makes very generous recommendations.
> > > Memory is cheaper now than ever, but watch out for gratuitous
> > > over-allocation.
> > > Especially on an external CF, you might be constrained.
> > >
> > > -- Several structures require that you input data to CF Sizer on how
> > > busy you expect the structure to be. For most, this has less to do
> > > with the number of sysplex members than the amount of data the
> > > structure has to handle. This is seldom easy to determine. Make your
> > > best SWAG and monitor the results.
> > >
> > > -- The worst case is when a structure is too small for the exploiter
> > > to initialize. I have not seen this for some time; maybe the big
> > > exploiters have been (re)designed to come up regardless. But watch
> > > for messages indicating that a structure needed more than the
> > > specified minimum size at the outset.
> > >
> > > -- A parameter you did not ask about is DUPLEX. Even if you have
> > > only one box for CF use, I recommend two CF LPARs on that box with
> > > duplexing for relevant structures. Better of course would be two
> > > boxes. The best thing about sysplex is its ability to survive
> > > disruptions. Over the years we have had two CEC failures. In both
> > > cases, the second CF allowed all applications to resume with zero
> > > data recovery efforts. Note that some structures do not require
> > > duplexing, notably GRS. If a host dies, so do all of its enqueues.
> > >
> > >
> > > .
> > > .
> > > .
> > > J.O.Skip Robinson
> > > Southern California Edison Company
> > > Electric Dragon Team Paddler
> > > SHARE MVS Program Co-Manager
> > > 323-715-0595 Mobile
> > > jo.skip.robin...@att.net
> > > jo.skip.robin...@gmail.com
> > >
> > > -----Original Message-----
> > > From: IBM Mainframe Discussion List
> > > [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of phil yogendran
> > > Sent: Friday, December 18, 2015 07:39 AM
> > > To: IBM-MAIN@LISTSERV.UA.EDU
> > > Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> > >
> > > Thank you all for your replies. I will take your suggestions into
> > > consideration going forward. We are in the process of upgrading from
> > > z10 -> z12 -> z13 over the next few months. The CF upgrade is a part
> > > of this project. The CFs are going from 2097/E10 and 2098/E12 to
> > > 2817/M15.
> > >
> > > I expect to see better structure response with these changes and
> > > will be surprised to see anything otherwise. Will keep you posted.
> > > Thanks again.
> > >
> > > On Fri, Dec 18, 2015 at 8:16 AM, Richards, Robert B. <
> > > robert.richa...@opm.gov> wrote:
> > >
> > > > The archives probably have it, but simply put and if IIRC, there
> > > > was an old 9674 being used with z990s. Waiting on CF structure
> > > > response was horrific as compared to the speed of the z990
> > > > processor response.
> > > >
> > > > -----Original Message-----
> > > > From: IBM Mainframe Discussion List
> > > > [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Elardus Engelbrecht
> > > > Sent: Friday, December 18, 2015 7:55 AM
> > > > To: IBM-MAIN@LISTSERV.UA.EDU
> > > > Subject: Re: Coupling Facility Structure Re-sizing
> > > >
> > > > Richards, Robert B. wrote:
> > > >
> > > > >The last thing you want is for your CFs to be slower than the CPs.
> > > > >BTDTGTS
> > > >
> > > > Ouch. Could you be so kind as to tell us about it? Are there any
> > > > manuals describing that trouble? Any configuration changes to avoid?
> > > > Or is it about the sizes or quantity of LPARs involved?
> > > >
> > > > TIA!
> > > >
> > > > Groete / Greetings
> > > > Elardus Engelbrecht


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
