Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Martin Packer
And did I get the right name, Kees?

And I also hear of a reflectometer for measuring signalling latency - to 
independently confirm what the Infiniband / ICA-SR cards are saying. 
"Every home should have one." :-)

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   23/12/2015 08:33
Subject:Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



The 'fiber suitcase' is nothing more than a cable reel with 10 or 20 km 
of fiber. You plug it into your fiber configuration and start measurements 
for the what-if situation you are interested in.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
Behalf Of Martin Packer
Sent: 23 December, 2015 9:17
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Coupling Facility Structure Re-sizing

Skip, I'd share your scepticism about 100+ km apart. I don't know of 
anybody doing anything remotely stressful in CF terms over that distance.

All my customers who are doing e.g. Data Sharing over distance plan and 
measure extremely carefully - and they're doing it over a very few tens of 
km.

I've heard of something called something like a "fibre suitcase" for 
measuring in test.

Could someone who has such a thing tell me its proper name and a little 
more about it? Thanks!

I've actually blogged extensively about the RMF 74-4 latency number 
(relatively new) - which I think is useful in checking distance and 
hinting at routing. While not wanting to advertise the posts I think this 
latency number is one people should check occasionally.

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   Skip Robinson <jo.skip.robin...@att.net>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 23:59
Subject:Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



I made a lame assumption based on 20 years of parallel sysplex. Our 
sysplexes have always consisted of boxes a few meters apart. I have (rather 
unkindly) scoffed at suggestions that we build a single sysplex between our 
data centers 100+ km apart. It's not as much about speed as about the 
fallibility of network connections. The DWDM links that transport XRC 
connections are wicked fast, but they hiccup occasionally for usually 
unfathomable reasons. We can handle XRC suspend/resume, but having a 
sysplex go hard down in such circumstances is not acceptable. Maybe I'm 
behind the times, but that 'conversation with the boss' I alluded to in a 
previous post looms large in my imagination. 

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Tuesday, December 22, 2015 12:11 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> One crucial parameter: at what distance are the CFs?
> There must be a noticeable difference between 5 usecs for an unduplexed
> local CF and a number of 150 usecs signals between CFs at 15 km distance.
> 
> Kees.
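Kees's figures above follow from simple propagation arithmetic: the rule of thumb used in this thread is roughly 10 microseconds of round-trip delay per km of fiber. A back-of-the-envelope sketch (the 5-microsecond local service time is taken from Kees's figure; both constants are rough assumptions, not measurements):

```python
# Rough CF signal-latency estimate: local service time plus
# round-trip fiber propagation delay (~10 us per km, the thread's
# rule of thumb; light travels ~5 us/km one way in glass).
ROUND_TRIP_US_PER_KM = 10
LOCAL_SERVICE_US = 5  # assumed unduplexed local CF service time

def cf_signal_latency_us(distance_km):
    """Estimated microseconds per CF signal at a given distance."""
    return LOCAL_SERVICE_US + ROUND_TRIP_US_PER_KM * distance_km

for km in (0, 1, 15):
    print(f"{km:>3} km: ~{cf_signal_latency_us(km)} us per signal")
```

At 15 km this gives roughly 155 us per signal, in line with the 150 usecs figure quoted above.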
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Martin Packer
> Sent: 22 December, 2015 8:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> We're not going to BLANKET recommend System-Managed Duplexing for high-
> volume, high stringency structures such as LOCK1. SCA has little traffic.
> 
> But I've seen MANY customers (including the one I worked with yesterday
> here in Istanbul) that successfully use it. And I support their use of it.
> Other customers:
> 
> 1) Have a failure-isolated CF for such structures.
> 
> Or
> 
> 2) Take the risk of doing neither.
> 
> I've seen all 3 architectures even in the past 6 months. And your local
> IBMer is normally willing to give their view, hopefully backed up by data
> and people who know what they're talking about. :-)
> 
> Cheers, Martin
> 
> Martin Pa

Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Elardus Engelbrecht
Martin Packer wrote:

>And I also hear of a reflectometer for measuring signalling latency - to 
>independently confirm what the Infiniband / ICA-SR cards are saying.

Or this thing? https://en.wikipedia.org/wiki/Optical_time-domain_reflectometer

PS: of course I never handled it or observed someone handling those toys.

>"Every home should have one." :-)

Including fridges of course. ;-D

Your favourite drinks must have a nice cold and dark place to hide from thirsty 
people... ;-D

Groete / Greetings
Elardus Engelbrecht

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Vernooij, CP (ITOPT1) - KLM
I don't think the cable reel has a fancy name. It just gives you distance.

Besides that, there are devices that measure the quality of the cable 
connection, like the number and location of welds and their delays, 
attenuation in the fiber, etc. DWDM devices seem to be able to do this too. 

I still don't have a name for them, nor have I seen one; I know we used them.

Kees.



Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Vernooij, CP (ITOPT1) - KLM
Of course the cable reel provides you with xx km of ideal fiber. Real-world 
fiber will have welds, attenuation, etc., as mentioned earlier, and therefore 
more delay.

Kees.


Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Mike Schwab
On Tue, Dec 22, 2015 at 5:58 PM, Skip Robinson  wrote:

One wrong swipe from a backhoe could have a cross-campus / city / state / 
country / continent sysplex down for a day or two. A phone company building 
fire could be a month or more (the 1988 Hinsdale, IL fire):
http://articles.chicagotribune.com/1989-03-11/news/8903250918_1_state-fire-marshal-alarm-electrical-power

-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Shmuel Metz (Seymour J.)
In
<874b151289704e46a874bf2ae6fdd8d1310d6...@kl126r4b.cs.ad.klmcorp.net>,
on 12/23/2015
   at 03:18 PM, "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
said:

>I was not asking for all the possibilities on all possible platforms,
>I was trying to ensure that it was readable and understandable on
>each platform that the message could appear on.

What current platform won't display µ if the MIME header
fields are correct and the charset is one of the ISO-8859 character
sets? How many currently supported platforms can't handle
charset=UTF-8?
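Shmuel's point about MIME header fields can be illustrated with a short sketch using Python's standard email package (the message text here is invented): a non-ASCII body gets an explicit charset parameter, which is what lets a conforming client render µ correctly.

```python
# Build a text/plain MIME part containing a non-ASCII character;
# EmailMessage records the charset in the Content-Type header so a
# conforming client knows how to decode and display the micro sign.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "CF signal latency"
msg.set_content("Round trip is about 150 \u00b5s at 15 km.")

# The charset parameter tells the receiving client how to
# interpret the bytes of the body.
print(msg["Content-Type"])  # text/plain; charset="utf-8"
```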
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see <http://patriot.net/~shmuel/resume/brief.html> 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Shmuel Metz (Seymour J.)
In , on 12/22/2015
   at 11:59 PM, "Robert A. Rosenberg"  said:

>Via Email just go UTF-8. The same for Web Pages, or just use the 
>Unicode codepoint.

Using an entity reference for µ is appropriate for web pages, but not for
text in e-mail. HTML in e-mail is the sin for which there is no forgiveness,
although multipart/alternative with a proper text/plain subpart is
okay.
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see <http://patriot.net/~shmuel/resume/brief.html> 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Shmuel Metz (Seymour J.)
In <5679d5c0.3090...@bremultibank.com.pl>, on 12/22/2015
   at 11:59 PM, "R.S."  said:

>It is kind for recipients to use "lowest common denominator"

The least common denominator is ASCII, but as long as you have the
right MIME header fields pretty much everybody can read the ISO 8859-*
pages. Whether they can read non-English words is a separate issue.

While my e-mail client can't handle UTF-8, it's clearly the direction
things are going.
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see <http://patriot.net/~shmuel/resume/brief.html> 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Shmuel Metz (Seymour J.)
In <p06240405d29fd8ea06d7@[192.168.1.242]>, on 12/22/2015
   at 11:54 PM, "Robert A. Rosenberg" <hal9...@panix.com> said:

>On a Mac it is Option-m.

On a PC there are multiple keyboard layouts. On OS/2 with the US
International layout, Right-Alt-M gets µ. Depending on the layout R.S.
is using, it may or may not be easy. I wouldn't consider Alt-ddd to be
easy.
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see <http://patriot.net/~shmuel/resume/brief.html> 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Vernooij, CP (ITOPT1) - KLM
This is a pointless drift away from the case (besides the fact that the case 
was actually CF structure sizes). 
I was not asking for all the possibilities on all possible platforms; I was 
trying to ensure that it was readable and understandable on each platform the 
message could appear on. The simplest characters stand the best chance there. 
Unless someone can assure me that there is a form of mu that will always be 
displayed correctly on every platform.
Kees.







Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Ed Gould

** WARNING: not current information **

About 15 years ago I worked for a bank that mirrored more than 100 
3390-type volumes at 1000+ mile distances.
It sort of worked most of the time, but when it didn't, nobody noticed it, 
and things began to get interesting (politically); I think the government 
got involved.
I left in the middle of the mess and never found out what had happened 
(mostly rumors).


Ed
From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 07:39
Subject:    Re: [Bulk] Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



Of course 'it depends'.

At least on the distance between the CFs. Signals are delayed by 10 
usec/km. The number of signals traveling for SMCFSD has indeed been 
optimized since the beginning, but it still makes a difference whether the 
CFs are 1 or 15 km apart. Our latest research from this year is that IBM 
still does not recommend SMCFSD for Lock and SCA.

What is your configuration?

Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread R.S.

On 2015-12-23 at 10:42, Martin Packer wrote:

And did I get the right name, Kees?


It's just "distance in the suitcase". :-)
It is NOT an electronic device, it is PURE FIBRE OPTIC CABLE.
Of course you can stack several suitcases to get the desired distance.
Note - welds or connectors can be substituted with additional distance.
Note 2 - the quality of the FO cable should not be checked during PoC 
tests. Reason: during a PoC we analyze delays; cable quality should simply 
be OK. If it's not, the problem is in a quite different layer - whether the 
link is reliable.


BTW: there are electronic devices for copper lines; you can set the delay 
using a knob. AFAIR other parameters like signal attenuation are also 
customisable.


--
Radoslaw Skorupka
Lodz, Poland










Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Pinnacle

On 12/23/2015 2:53 PM, Tom Brennan wrote:

I'm somewhat involved in a distance test scheduled for next month, and I
believe it will be using the "Fiber Lab Flex" box on this page:
http://www.m2optics.com/fiber-test-boxes/multi-spool-enclosures

The main plan is to check 20km+ between two machines currently right
next to each other.


Martin,

Check out Meral Temel's presentation on distance testing (WTW):

https://share.confex.com/share/125/webprogram/Handout/Session17920/SHAREOrlandoMeralv5.pdf

Regards,
Tom Conley



Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Tom Brennan
I'm somewhat involved in a distance test scheduled for next month, and I 
believe it will be using the "Fiber Lab Flex" box on this page:

http://www.m2optics.com/fiber-test-boxes/multi-spool-enclosures

The main plan is to check 20km+ between two machines currently right 
next to each other.


Martin Packer wrote:
Skip, I'd share your scepticism about 100+ km apart. I don't know of 
anybody doing anything remotely stressful in CF terms over that distance.


All my customers who are doing e.g. Data Sharing over distance plan and 
measure extremely carefully - and they're doing it over a very few tens of 
km.


I've heard of something called something like a "fibre suitcase" for 
measuring in test.


Could someone who has such a thing tell me its proper name and a little 
more about it? Thanks!


I've actually blogged extensively about the RMF 74-4 latency number 
(relatively new) - which I think is useful in checking distance and 
hinting at routing. While not wanting to advertise the posts I think this 
latency number is one people should check occasionally.


Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker




From:   Skip Robinson <jo.skip.robin...@att.net>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 23:59
Subject:Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



I made a lame assumption based on 20 years of parallel sysplex. Our
sysplexes have always consisted of boxes a few meters apart. I have (rather
unkindly) scoffed at suggestions that we build a single sysplex between our
data centers 100+ KM apart. It's not as much about speed as about the
fallibility of network connections. The DWDM links that transport XRC
connections are wicked fast, but they hiccup occasionally for usually
unfathomable reasons. We can handle XRC suspend/resume, but having a sysplex
go hard down in such circumstances is not acceptable. Maybe I'm behind the
times, but that 'conversation with the boss' I alluded to in a previous post
looms large in my imagination. 


.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager

323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com




-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Vernooij, CP (ITOPT1) - KLM
Sent: Tuesday, December 22, 2015 12:11 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [Bulk] Re: [Bulk] Re: Coupling Facility Structure Re-sizing

One crucial parameter: at what distance are the CFs?
There must be a noticable difference between 5 usecs for an unduplexed local
CF or a number of 150 usecs signals between CFs at 15 km distance.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Martin Packer
Sent: 22 December, 2015 8:55
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing

We're not going to BLANKET recommend System-Managed Duplexing for high-
volume, high stringency structures such as LOCK1. SCA has little traffic.

But I've seen MANY customers (including the one I worked with yesterday here
in Istanbul) that successfully use it. And I support their use of it.
Other customers:

1) Have a failure-isolated CF for such structures.

Or

2) Take the risk of doing neither.

I've seen all 3 architectures even in the past 6 months. And your local
IBMer is normally willing to give their view, hopefully backed up by data
and people who know what they're talking about. :-)

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator, Worldwide Cloud & Systems
Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 07:39
Subject:Re: [Bulk] Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



Of course 'it depends'.

At least on the distance between the CFs. Signals are delayed by 10 usec/km.
The number of signals traveling for SMCFSD have indeed been optimized since
the beginning, but it still makes a difference if the CF's are 1 or 15 kms
apart. Our latest researches from this year is that IBM still does not
recommend SMCFSD for Lock and SCA.

What is your configuration? If a CEC fails, others DB2's in the group should
do the recovery without delay. Did all your CECs and DB2s fail? Our e

Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Vernooij, CP (ITOPT1) - KLM
The 'fiber-suitcase' is nothing more than a cable reel with 10 or 20 km of 
fiber. You plug this in your fiber configuration and start measurements for the 
what-if situation you are interested in.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Martin Packer
Sent: 23 December, 2015 9:17
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Coupling Facility Structure Re-sizing

Skip, I'd share your scepticism about 100+ km apart. I don't know of 
anybody doing anything remotely stressful in CF terms over that distance.

All my customers who are doing e.g. Data Sharing over distance plan and 
measure extremely carefully - and they're doing it over a very few tens of 
km.

I've heard of something called something like a "fibre suitcase" for 
measuring in test.

Could someone who has such a thing tell me its proper name and a little 
more about it? Thanks!

I've actually blogged extensively about the RMF 74-4 latency number 
(relatively new) - which I think is useful in checking distance and 
hinting at routing. While not wanting to advertise the posts I think this 
latency number is one people should check occasionally.

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   Skip Robinson <jo.skip.robin...@att.net>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 23:59
Subject:    Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



I made a lame assumption based on 20 years of parallel sysplex. Our
sysplexes have always consisted of boxes a few meters apart. I have (rather
unkindly) scoffed at suggestions that we build a single sysplex between our
data centers 100+ KM apart. It's not as much about speed as about the
fallibility of network connections. The DWDM links that transport XRC
connections are wicked fast, but they hiccup occasionally for usually
unfathomable reasons. We can handle XRC suspend/resume, but having a sysplex
go hard down in such circumstances is not acceptable. Maybe I'm behind the
times, but that 'conversation with the boss' I alluded to in a previous post
looms large in my imagination. 

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Tuesday, December 22, 2015 12:11 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> One crucial parameter: at what distance are the CFs?
> There must be a noticable difference between 5 usecs for an unduplexed local
> CF or a number of 150 usecs signals between CFs at 15 km distance.
> 
> Kees.
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Martin Packer
> Sent: 22 December, 2015 8:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> We're not going to BLANKET recommend System-Managed Duplexing for high-
> volume, high stringency structures such as LOCK1. SCA has little traffic.
> 
> But I've seen MANY customers (including the one I worked with yesterday here
> in Istanbul) that successfully use it. And I support their use of it.
> Other customers:
> 
> 1) Have a failure-isolated CF for such structures.
> 
> Or
> 
> 2) Take the risk of doing neither.
> 
> I've seen all 3 architectures even in the past 6 months. And your local
> IBMer is normally willing to give their view, hopefully backed up by data and
> people who know what they're talking about. :-)
> 
> Cheers, Martin
> 
> Martin Packer,
> zChampion, Principal Systems Investigator, Worldwide Cloud & Systems
> Performance, IBM
> 
> +44-7802-245-584
> 
> email: martin_pac...@uk.ibm.com
> 
> Twitter / Facebook IDs: MartinPacker
> Blog:
> https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker
> 
> 
> 
> From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date:   22/12/2015 07:39
> Subject:Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>
> 
> 
> 
> Of course 'it depends'.
> 
> At least on the distance between the CFs. Signals are delayed by 10 usec/km.
> The number of s

Re: Coupling Facility Structure Re-sizing

2015-12-23 Thread Martin Packer
Skip, I'd share your scepticism about 100+ km apart. I don't know of 
anybody doing anything remotely stressful in CF terms over that distance.

All my customers who are doing e.g. Data Sharing over distance plan and 
measure extremely carefully - and they're doing it over a very few tens of 
km.

I've heard of something called something like a "fibre suitcase" for 
measuring in test.

Could someone who has such a thing tell me its proper name and a little 
more about it? Thanks!

I've actually blogged extensively about the RMF 74-4 latency number 
(relatively new) - which I think is useful in checking distance and 
hinting at routing. While not wanting to advertise the posts I think this 
latency number is one people should check occasionally.

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   Skip Robinson <jo.skip.robin...@att.net>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 23:59
Subject:    Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



I made a lame assumption based on 20 years of parallel sysplex. Our
sysplexes have always consisted of boxes a few meters apart. I have (rather
unkindly) scoffed at suggestions that we build a single sysplex between our
data centers 100+ KM apart. It's not as much about speed as about the
fallibility of network connections. The DWDM links that transport XRC
connections are wicked fast, but they hiccup occasionally for usually
unfathomable reasons. We can handle XRC suspend/resume, but having a sysplex
go hard down in such circumstances is not acceptable. Maybe I'm behind the
times, but that 'conversation with the boss' I alluded to in a previous post
looms large in my imagination. 

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Tuesday, December 22, 2015 12:11 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> One crucial parameter: at what distance are the CFs?
> There must be a noticable difference between 5 usecs for an unduplexed local
> CF or a number of 150 usecs signals between CFs at 15 km distance.
> 
> Kees.
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Martin Packer
> Sent: 22 December, 2015 8:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> We're not going to BLANKET recommend System-Managed Duplexing for high-
> volume, high stringency structures such as LOCK1. SCA has little traffic.
> 
> But I've seen MANY customers (including the one I worked with yesterday here
> in Istanbul) that successfully use it. And I support their use of it.
> Other customers:
> 
> 1) Have a failure-isolated CF for such structures.
> 
> Or
> 
> 2) Take the risk of doing neither.
> 
> I've seen all 3 architectures even in the past 6 months. And your local
> IBMer is normally willing to give their view, hopefully backed up by data and
> people who know what they're talking about. :-)
> 
> Cheers, Martin
> 
> Martin Packer,
> zChampion, Principal Systems Investigator, Worldwide Cloud & Systems
> Performance, IBM
> 
> +44-7802-245-584
> 
> email: martin_pac...@uk.ibm.com
> 
> Twitter / Facebook IDs: MartinPacker
> Blog:
> https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker
> 
> 
> 
> From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date:   22/12/2015 07:39
> Subject:Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>
> 
> 
> 
> Of course 'it depends'.
> 
> At least on the distance between the CFs. Signals are delayed by 10 usec/km.
> The number of signals traveling for SMCFSD have indeed been optimized since
> the beginning, but it still makes a difference if the CF's are 1 or 15 kms
> apart. Our latest researches from this year is that IBM still does not
> recommend SMCFSD for Lock and SCA.
> 
> What is your configuration? If a CEC fails, others DB2's in the group should
> do the recovery without delay. Did all your CECs and DB2s fail? Our
> experience is tha

Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread R.S.

W dniu 2015-12-22 o 21:13, Paul Gilmartin pisze:

On 2015-12-22 11:15, R.S. wrote:

It would be better to define usec at the first occurence.
Or use full name: 'microsecond'.
BTW: 'us' seems to be more cryptic, while it's more correct than usec.


what about "μs"
That's the best and the most accurate, but... it can still be problematic 
since not everything can use Greek letters.



BTW: some time ago in Poland there was a netiquette rule to AVOID using 
Polish characters (ąćęłńóśżź) since they were rarely understood. Not to 
mention we have many codepages: CP870, ISO8859-2, CP1250, CP852, to name a 
few popular ones.


--
Radoslaw Skorupka
Lodz, Poland






--
Treść tej wiadomości może zawierać informacje prawnie chronione Banku 
przeznaczone wyłącznie do użytku służbowego adresata. Odbiorcą może być jedynie 
jej adresat z wyłączeniem dostępu osób trzecich. Jeżeli nie jesteś adresatem 
niniejszej wiadomości lub pracownikiem upoważnionym do jej przekazania 
adresatowi, informujemy, że jej rozpowszechnianie, kopiowanie, rozprowadzanie 
lub inne działanie o podobnym charakterze jest prawnie zabronione i może być 
karalne. Jeżeli otrzymałeś tę wiadomość omyłkowo, prosimy niezwłocznie 
zawiadomić nadawcę wysyłając odpowiedź oraz trwale usunąć tę wiadomość 
włączając w to wszelkie jej kopie wydrukowane lub zapisane na dysku.

This e-mail may contain legally privileged information of the Bank and is 
intended solely for business use of the addressee. This e-mail may only be 
received by the addressee and may not be disclosed to any third parties. If you 
are not the intended addressee of this e-mail or the employee authorized to 
forward it to the addressee, be advised that any dissemination, copying, 
distribution or any other similar activity is legally prohibited and may be 
punishable. If you received this e-mail by mistake please advise the sender 
immediately by using the reply facility in your e-mail software and delete 
permanently this e-mail including any copies of it either printed or saved to 
hard drive.

mBank S.A. z siedzibą w Warszawie, ul. Senatorska 18, 00-950 Warszawa, 
www.mBank.pl, e-mail: kont...@mbank.pl
Sąd Rejonowy dla m. st. Warszawy XII Wydział Gospodarczy Krajowego Rejestru 
Sądowego, nr rejestru przedsiębiorców KRS 025237, NIP: 526-021-50-88. 
Według stanu na dzień 01.01.2015 r. kapitał zakładowy mBanku S.A. (w całości 
wpłacony) wynosi 168.840.228 złotych.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Mike Schwab
On Tue, Dec 22, 2015 at 2:39 PM, R.S.  wrote:
> W dniu 2015-12-22 o 21:13, Paul Gilmartin pisze:
>>
>> On 2015-12-22 11:15, R.S. wrote:
>>>
>>> It would be better to define usec at the first occurence.
>>> Or use full name: 'microsecond'.
>>> BTW: 'us' seems to be more cryptic, while it's more correct than usec.
>>>
>> what about "μs"
>
> That's the best, the most accurate, but ...still can be problematic since
> not everything can use greek letters.
>
>
> BTW: some time ago in Poland there was netiquette rule to AVOID using polish
> characters (ąćęłńóśżź) since it was rearely understood. Not to mention we
> have many codepages: CP870, ISO8859-2, CP1250, CP852 to name few popular
> ones.
>
> --
> Radoslaw Skorupka
> Lodz, Poland
>
UTF-8 includes them all.  But UTF-EBCDIC is only a suggested
transformation, not a storage format.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Skip Robinson
I made a lame assumption based on 20 years of parallel sysplex. Our
sysplexes have always consisted of boxes a few meters apart. I have (rather
unkindly) scoffed at suggestions that we build a single sysplex between our
data centers 100+ KM apart. It's not as much about speed as about the
fallibility of network connections. The DWDM links that transport XRC
connections are wicked fast, but they hiccup occasionally for usually
unfathomable reasons. We can handle XRC suspend/resume, but having a sysplex
go hard down in such circumstances is not acceptable. Maybe I'm behind the
times, but that 'conversation with the boss' I alluded to in a previous post
looms large in my imagination. 

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Tuesday, December 22, 2015 12:11 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> One crucial parameter: at what distance are the CFs?
> There must be a noticable difference between 5 usecs for an unduplexed local
> CF or a number of 150 usecs signals between CFs at 15 km distance.
> 
> Kees.
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Martin Packer
> Sent: 22 December, 2015 8:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> We're not going to BLANKET recommend System-Managed Duplexing for high-
> volume, high stringency structures such as LOCK1. SCA has little traffic.
> 
> But I've seen MANY customers (including the one I worked with yesterday here
> in Istanbul) that successfully use it. And I support their use of it.
> Other customers:
> 
> 1) Have a failure-isolated CF for such structures.
> 
> Or
> 
> 2) Take the risk of doing neither.
> 
> I've seen all 3 architectures even in the past 6 months. And your local
> IBMer is normally willing to give their view, hopefully backed up by data and
> people who know what they're talking about. :-)
> 
> Cheers, Martin
> 
> Martin Packer,
> zChampion, Principal Systems Investigator, Worldwide Cloud & Systems
> Performance, IBM
> 
> +44-7802-245-584
> 
> email: martin_pac...@uk.ibm.com
> 
> Twitter / Facebook IDs: MartinPacker
> Blog:
> https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker
> 
> 
> 
> From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date:   22/12/2015 07:39
> Subject:Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>
> 
> 
> 
> Of course 'it depends'.
> 
> At least on the distance between the CFs. Signals are delayed by 10 usec/km.
> The number of signals traveling for SMCFSD have indeed been optimized since
> the beginning, but it still makes a difference if the CF's are 1 or 15 kms
> apart. Our latest researches from this year is that IBM still does not
> recommend SMCFSD for Lock and SCA.
> 
> What is your configuration? If a CEC fails, others DB2's in the group should
> do the recovery without delay. Did all your CECs and DB2s fail? Our
> experience is that a group-restart is very fast, at max. 2 - 3 minutes and
> that are also IBMs figures.
> Altogether, we still see advantages in not using SMCFSD for Lock and SCA.
> 
> Why did you decide different?
> 
> Kees.
> 
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Skip Robinson
> Sent: 21 December, 2015 20:32
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> I'm talking from experience. The two hours-long CEC failures we had--most
> recently in the fall of 2014--took down all CICS and DB2 applications as
> well as three ICFs on the box that failed. The secondary 'penalty' box
> stayed up and kept live copies of structures so that after hardware repair,
> all LPARs--host and CF--came up with no recovery needed. In particular, no
> DB2 log processing, which is the worst case for recovery.
> 
> As for processing overhead, that's why IBM delayed SMCFSD. We're as
> concerned with performance as any shop. Millions of CICS/DB2 transactions
> per hour. For DASD mirroring, we went with XRC (async) rather than PPRC
> (sync) for that reason. Today we see no visible delays from SMCFSD. This is
> predicated on having

Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread R.S.

W dniu 2015-12-22 o 21:58, Mike Schwab pisze:

UTF-8 includes them all.  But UTF-EBCDIC is only suggested
transformations, not storage.

Compatibility.
It is kind to recipients to use the "lowest common denominator" instead of 
"my standard is the best one". I have no problems with viewing the mu letter 
(it's much harder to type it), but I can imagine others still may have some.


BTW: AFAIK mu is the only Greek letter used as an SI prefix. It wasn't a 
good choice.


Note: there is (was?) also Å, as in ångström (Ångström), which is non-SI.  
It's 10^-10 m.
Obviously it's a name, from a 19th-century Swedish scientist. The letter Å 
is also not the best choice.


--
Radoslaw Skorupka
Lodz, Poland








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Paul Gilmartin
On 2015-12-22 11:15, R.S. wrote:
> It would be better to define usec at the first occurence.
> Or use full name: 'microsecond'.
> BTW: 'us' seems to be more cryptic, while it's more correct than usec.
> 
what about "μs"

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Robert A. Rosenberg
At 23:59 +0100 on 12/22/2015, R.S. wrote about Re: [Bulk] Re: 
Coupling Facility Structure Re-sizing:


I have no problems with viewing mu letter (it's much harder to type 
it), but I can imagine others still may have ones.


HARD TO TYPE? On a Mac it is Option-m. On a Windows machine, there is 
a similar key combination (alt-m I think).


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Robert A. Rosenberg
At 03:58 -0600 on 12/22/2015, Elardus Engelbrecht wrote about Re: 
[Bulk] Re: Coupling Facility Structure Re-sizing:



Vernooij, CP (ITOPT1) - KLM wrote:

Yes, u as a replacement of the greek letter mu, used to indicate 
the micro prefix, where the greek letter cannot be used.


Many thanks. Much appreciated. I agree it can be somewhat 
troublesome to send Greek and Russian (or Japanese) characters via 
e-mail or webpage. Either you use the wrong font / codepage or it is 
enforced up you... groan...


Via email just go UTF-8. The same for web pages, or just use the 
Unicode codepoint () or . BTW: this symbol is part of 
ISO-8859-1 as codepoint b5 (181), so it should be easy to add to normal 
email - µ.
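A small sketch of the point being made here: there are two distinct Unicode code points that both render as "mu", and only the MICRO SIGN (U+00B5) has an ISO-8859-1 mapping. This is illustrative Python, not part of the original discussion:

```python
micro_sign = "\u00b5"  # MICRO SIGN, also in ISO-8859-1 at 0xB5
greek_mu = "\u03bc"    # GREEK SMALL LETTER MU, Greek block only

print(micro_sign.encode("utf-8"))    # b'\xc2\xb5'
print(greek_mu.encode("utf-8"))      # b'\xce\xbc'
print(micro_sign.encode("latin-1"))  # b'\xb5'

try:
    greek_mu.encode("latin-1")       # fails: no Latin-1 mapping
except UnicodeEncodeError:
    print("Greek mu has no ISO-8859-1 encoding")
```

This is why "µs" survives a Latin-1 mail gateway while "μs" may not.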




I will try to post this greek letter via IB-MAIN web-page below this 
line and see what happens (and send a copy to my e-mail address:

μs


It came through as a UTF-8 character.



I just wonder why only microsecond has this greek letter, but nano, 
pico, femto, etc. don't have greek letters.


I think I will not spend one more microsecond of my time on this thread...

Thanks Kees for your kind answer! Much appreciated.

Groete / Greetings
Elardus Engelbrecht



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Elardus Engelbrecht
Vernooij, CP (ITOPT1) - KLM wrote:

>Yes, u as a replacement of the greek letter mu, used to indicate the micro 
>prefix, where the greek letter cannot be used. 
 
Many thanks. Much appreciated. I agree it can be somewhat troublesome to send 
Greek and Russian (or Japanese) characters via e-mail or webpage. Either you 
use the wrong font / codepage or it is enforced upon you... groan...

I will try to post this greek letter via the IBM-MAIN web page below this line 
and see what happens (and send a copy to my e-mail address): 
μs

I just wonder why only microsecond has this greek letter, but nano, pico, 
femto, etc. don't have greek letters.

I think I will not spend one more microsecond of my time on this thread... ;-)

Thanks Kees for your kind answer! Much appreciated.

Groete / Greetings 
Elardus Engelbrecht 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Vernooij, CP (ITOPT1) - KLM
The mu was readable, but for some reason I hesitate to rely on it.
When both milli and micro would abbreviate to the same letter m (and mega 
already abbreviates to M), you must invent something. No technician would 
want to guess whether mseconds or mmeters means the milli or the micro 
version. 

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Elardus Engelbrecht
Sent: 22 December, 2015 10:59
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing

Vernooij, CP (ITOPT1) - KLM wrote:

>Yes, u as a replacement of the greek letter mu, used to indicate the micro 
>prefix, where the greek letter cannot be used. 
 
Many thanks. Much appreciated. I agree it can be somewhat troublesome to send 
Greek and Russian (or Japanese) characters via e-mail or webpage. Either you 
use the wrong font / codepage or it is enforced up you... groan...

I will try to post this greek letter via IB-MAIN web-page below this line and 
see what happens (and send a copy to my e-mail address: 
μs

I just wonder why only microsecond has this greek letter, but nano, pico, 
femto, etc. don't have greek letters.

I think I will not spend one more microsecond of my time on this thread... ;-)

Thanks Kees for your kind answer! Much appreciated.

Groete / Greetings 
Elardus Engelbrecht 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt. 
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered number 
33014286




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Elardus Engelbrecht
Vernooij, CP (ITOPT1) - KLM wrote:

>One crucial parameter: at what distance are the CFs? 

Distance is indeed important.

>... 5 usecs  ... 150 usecs ...

First time I see 'usecs' here [1] on IBM-MAIN. After looking in Wikipedia, I 
want to know - is this microseconds (the SI unit of time equal to one 
millionth of a second)?

Just curious please.

Groete / Greetings
Elardus Engelbrecht

[1] - from the context - this is not a Linux variable.



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Vernooij, CP (ITOPT1) - KLM
One crucial parameter: at what distance are the CFs? 
There must be a noticeable difference between 5 usecs for an unduplexed local CF 
and a number of 150 usecs signals between CFs at 15 km distance.
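Kees's figures imply the usual rule of thumb of roughly 10 microseconds of extra round-trip time per km of fiber. A quick back-of-envelope sketch (the 5 usec base service time and the distances are illustrative values taken from this message, not measurements):

```python
# Rough CF synchronous service time vs. distance, using the thread's
# rule of thumb of ~10 microseconds of round-trip delay per km of fiber.
# The 5 us base service time is the local (0 km) figure quoted above;
# all numbers are illustrative, not measured.

DELAY_PER_KM_US = 10.0  # round-trip propagation delay, microseconds per km

def cf_service_time_us(base_us, km, signals=1):
    """Estimated service time for a request needing `signals` round trips."""
    return base_us + signals * km * DELAY_PER_KM_US

for km in (0, 1, 15):
    print(f"{km:>2} km -> {cf_service_time_us(5.0, km):6.1f} us")
```

At 15 km this lands near the 150 usec figure quoted above; duplexed requests multiply the penalty because they need additional signal exchanges per operation.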

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Martin Packer
Sent: 22 December, 2015 8:55
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing

We're not going to BLANKET recommend System-Managed Duplexing for 
high-volume, high-stringency structures such as LOCK1. SCA has little 
traffic.

But I've seen MANY customers (including the one I worked with yesterday 
here in Istanbul) that successfully use it. And I support their use of it. 
Other customers:

1) Have a failure-isolated CF for such structures.

Or

2) Take the risk of doing neither.

I've seen all 3 architectures even in the past 6 months. And your local 
IBMer is normally willing to give their view, hopefully backed up by data 
and people who know what they're talking about. :-)

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 07:39
Subject:    Re: [Bulk] Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



Of course 'it depends'.

At least on the distance between the CFs. Signals are delayed by 10 
usec/km. The number of signals traveling for SMCFSD has indeed been 
optimized since the beginning, but it still makes a difference whether the 
CFs are 1 or 15 km apart. Our latest research from this year is that IBM 
still does not recommend SMCFSD for Lock and SCA.

What is your configuration? If a CEC fails, the other DB2s in the group 
should do the recovery without delay. Did all your CECs and DB2s fail? Our 
experience is that a group restart is very fast, at most 2-3 minutes, and 
those are also IBM's figures.
Altogether, we still see advantages in not using SMCFSD for Lock and SCA.

Why did you decide differently?

Kees.


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
Behalf Of Skip Robinson
Sent: 21 December, 2015 20:32
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing

I'm talking from experience. The two hours-long CEC failures we had--most 
recently in the fall of 2014--took down all CICS and DB2 applications as 
well as three ICFs on the box that failed. The secondary 'penalty' box 
stayed up and kept live copies of structures so that after hardware 
repair, all LPARs--host and CF--came up with no recovery needed. In 
particular, no DB2 log processing, which is the worst case for recovery.

As for processing overhead, that's why IBM delayed SMCFSD. We're as 
concerned with performance as any shop. Millions of CICS/DB2 transactions 
per hour. For DASD mirroring, we went with XRC (async) rather than PPRC 
(sync) for that reason. Today we see no visible delays from SMCFSD. This 
is predicated on having enough CF engines to do the job. As previously 
stated, beware of putting CF LPARs on hardware that's slower than the 
exploiters. Note that CF, zIIP, and IFL engines run at full rated speed 
even on a box that's 'downsized' to run GP engines at less than maximum 
speed--to save software costs. That's why we're happy to put ICFs on 
otherwise slower penalty boxes. 
.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Sunday, December 20, 2015 11:35 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> Your last statement is far too general in my opinion. SMCFSD is not
> free: besides memory, which indeed is cheap these days, it will cost
> performance, like PPRC does.
> So one must always make the decision about having high availability or
> high performance.
> Even without SMCFSD, Structure availability is very high. And in the
> rare event of a CF failure (when was your last one?) each exploiter of
> CF Structures should be able to recover from that failure. In my
> experience they all do, except MQ.
> If you have a CF failure, the structures are recovered within seconds
> or minutes.
> If you can't bear the recovery delay, you can use Duplexing. Besides
> that, if you have a CF failure, what other pr

Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Vernooij, CP (ITOPT1) - KLM
Yes, 'u' as a replacement for the Greek letter mu, used to indicate the micro 
prefix where the Greek letter itself cannot be typed.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Elardus Engelbrecht
Sent: 22 December, 2015 10:16
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing

Vernooij, CP (ITOPT1) - KLM wrote:

>One crucial parameter: at what distance are the CFs? 

Distance is indeed important.

>... 5 usecs  ... 150 usecs ...

First time I see 'usecs' here [1] on IBM-MAIM. After looking in Wikipedia, I 
want to know - is this microseconds (the SI unit of time equal to one millionth 
of a second)?

Just curious please.

Groete / Greetings
Elardus Engelbrecht

[1] - from the context - this is not a Linux variable.



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-22 Thread Shmuel Metz (Seymour J.)
In <2623328940257590.wa.elardus.engelbrechtsita.co...@listserv.ua.edu>,
on 12/22/2015 at 03:15 AM, Elardus Engelbrecht said:

>First time I see 'usecs' here [1] on IBM-MAIM.

I've never seen usec or µsec[1] on IBM-MAIM. I have, however, seen
them on IBM-MAIN.

[1] That should come out µsec


--
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-21 Thread Skip Robinson
I'm talking from experience. The two hours-long CEC failures we had--most 
recently in the fall of 2014--took down all CICS and DB2 applications as well 
as three ICFs on the box that failed. The secondary 'penalty' box stayed up and 
kept live copies of structures so that after hardware repair, all LPARs--host 
and CF--came up with no recovery needed. In particular, no DB2 log processing, 
which is the worst case for recovery.

As for processing overhead, that's why IBM delayed SMCFSD. We're as concerned 
with performance as any shop. Millions of CICS/DB2 transactions per hour. For 
DASD mirroring, we went with XRC (async) rather than PPRC (sync) for that 
reason. Today we see no visible delays from SMCFSD. This is predicated on 
having enough CF engines to do the job. As previously stated, beware of putting 
CF LPARs on hardware that's slower than the exploiters. Note that CF, zIIP, and 
IFL engines run at full rated speed even on a box that's 'downsized' to run GP 
engines at less than maximum speed--to save software costs. That's why we're 
happy to put ICFs on otherwise slower penalty boxes. 
.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Sunday, December 20, 2015 11:35 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> Your last statement is far too general in my opinion. SMCFSD is not free: 
> besides
> memory, which indeed is cheap these days, it will cost performance, like PPRC
> does.
> So one must always make the decision about having high availability or high
> performance.
> Even without SMCFSD, Structure availability is very high. And in the rare 
> event of
> a CF failure (when was your last one?) each exploiter of CF Structures should 
> be
> able to recover from that failure. In my experience they all do, except MQ.
> If  you have a CF failure, the structures are recovered within seconds or 
> minutes.
> If you can't bear the recovery delay, you can use Duplexing. Besides that, if 
> you
> have a CF failure, what other problems do you have? Do you still need the zero
> recovery delay then?
> 
> Kees.
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Skip Robinson
> Sent: 19 December, 2015 5:57
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Coupling Facility Structure Re-sizing
> 
> Wow, I feel so ancient. In the History of the World Part II, there are two 
> kinds of
> duplexing. The late comer is System Managed Duplexing, which is provided by
> z/OS - XCF - XES. The exploiter does not need to participate in SMD (my
> acronym); he just reaps the benefits. But SMD for customer use was delayed for
> quite a while because IBM could not get it working. (More history.)
> 
> Meanwhile DB2 could not wait for SMD and developed their own duplexing
> mechanism. Hence DB2/IRLM does not need/use SMD. I forgot that when I
> mentioned DB2 recovery. So I recommend that DUPLEX be specified for all other
> structures that need SMD.
> 
> .
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 626-302-7535 Office
> 323-715-0595 Mobile
> jo.skip.robin...@att.net
> jo.skip.robin...@gmail.com
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of phil yogendran
> Sent: Friday, December 18, 2015 12:19 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> The increases recommended by the CF Sizer are marginal. Our structures in
> production are generously sized and we have lots of storage in the new CFs so
> that's not a concern. I will however lookout for messages as suggested.
> 
> Most of our structures are duplexed. Some like the structure for the IRLM lock
> are not. I have a note to investigate the product specific doc to understand 
> this
> better.
> 
> I also need to check on the performance of CF links as we're going to ICB 
> links
> now.
> 
> Thanks for the info.
> 
> 
> 
> 
> On Fri, Dec 18, 2015 at 12:42 PM, Skip Robinson <jo.skip.robin...@att.net>
> wrote:
> 
> > In case you're  curious, the parameters 'missing' from your old
> > definitions were added over the years since the advent of coupling
> > facility. The new parameters all have defaults such that they do not
> > actually require specification, but using them may give yo

Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-21 Thread Vernooij, CP (ITOPT1) - KLM
Of course 'it depends'.

At least on the distance between the CFs. Signals are delayed by 10 usec/km. 
The number of signals traveling for SMCFSD has indeed been optimized since the 
beginning, but it still makes a difference whether the CFs are 1 or 15 km 
apart. Our latest research from this year is that IBM still does not recommend 
SMCFSD for Lock and SCA.

What is your configuration? If a CEC fails, the other DB2s in the group should 
do the recovery without delay. Did all your CECs and DB2s fail? Our experience 
is that a group restart is very fast, at most 2-3 minutes, and those are also 
IBM's figures.
Altogether, we still see advantages in not using SMCFSD for Lock and SCA.

Why did you decide differently?

Kees.


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Skip Robinson
Sent: 21 December, 2015 20:32
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing

I'm talking from experience. The two hours-long CEC failures we had--most 
recently in the fall of 2014--took down all CICS and DB2 applications as well 
as three ICFs on the box that failed. The secondary 'penalty' box stayed up and 
kept live copies of structures so that after hardware repair, all LPARs--host 
and CF--came up with no recovery needed. In particular, no DB2 log processing, 
which is the worst case for recovery.

As for processing overhead, that's why IBM delayed SMCFSD. We're as concerned 
with performance as any shop. Millions of CICS/DB2 transactions per hour. For 
DASD mirroring, we went with XRC (async) rather than PPRC (sync) for that 
reason. Today we see no visible delays from SMCFSD. This is predicated on 
having enough CF engines to do the job. As previously stated, beware of putting 
CF LPARs on hardware that's slower than the exploiters. Note that CF, zIIP, and 
IFL engines run at full rated speed even on a box that's 'downsized' to run GP 
engines at less than maximum speed--to save software costs. That's why we're 
happy to put ICFs on otherwise slower penalty boxes. 
.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Sunday, December 20, 2015 11:35 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> Your last statement is far too general in my opinion. SMCFSD is not free: 
> besides
> memory, which indeed is cheap these days, it will cost performance, like PPRC
> does.
> So one must always make the decision about having high availability or high
> performance.
> Even without SMCFSD, Structure availability is very high. And in the rare 
> event of
> a CF failure (when was your last one?) each exploiter of CF Structures should 
> be
> able to recover from that failure. In my experience they all do, except MQ.
> If  you have a CF failure, the structures are recovered within seconds or 
> minutes.
> If you can't bear the recovery delay, you can use Duplexing. Besides that, if 
> you
> have a CF failure, what other problems do you have? Do you still need the zero
> recovery delay then?
> 
> Kees.
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Skip Robinson
> Sent: 19 December, 2015 5:57
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Coupling Facility Structure Re-sizing
> 
> Wow, I feel so ancient. In the History of the World Part II, there are two 
> kinds of
> duplexing. The late comer is System Managed Duplexing, which is provided by
> z/OS - XCF - XES. The exploiter does not need to participate in SMD (my
> acronym); he just reaps the benefits. But SMD for customer use was delayed for
> quite a while because IBM could not get it working. (More history.)
> 
> Meanwhile DB2 could not wait for SMD and developed their own duplexing
> mechanism. Hence DB2/IRLM does not need/use SMD. I forgot that when I
> mentioned DB2 recovery. So I recommend that DUPLEX be specified for all other
> structures that need SMD.
> 
> .
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 626-302-7535 Office
> 323-715-0595 Mobile
> jo.skip.robin...@att.net
> jo.skip.robin...@gmail.com
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of phil yogendran
> Sent: Friday, December 18, 2015 12:19 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> The increases 

Re: [Bulk] Re: Coupling Facility Structure Re-sizing

2015-12-21 Thread Martin Packer
We're not going to BLANKET recommend System-Managed Duplexing for 
high-volume, high-stringency structures such as LOCK1. SCA has little 
traffic.

But I've seen MANY customers (including the one I worked with yesterday 
here in Istanbul) that successfully use it. And I support their use of it. 
Other customers:

1) Have a failure-isolated CF for such structures.

Or

2) Take the risk of doing neither.

I've seen all 3 architectures even in the past 6 months. And your local 
IBMer is normally willing to give their view, hopefully backed up by data 
and people who know what they're talking about. :-)

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   "Vernooij, CP (ITOPT1) - KLM" <kees.verno...@klm.com>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/12/2015 07:39
Subject:    Re: [Bulk] Re: Coupling Facility Structure Re-sizing
Sent by:IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



Of course 'it depends'.

At least on the distance between the CFs. Signals are delayed by 10 
usec/km. The number of signals traveling for SMCFSD has indeed been 
optimized since the beginning, but it still makes a difference whether the 
CFs are 1 or 15 km apart. Our latest research from this year is that IBM 
still does not recommend SMCFSD for Lock and SCA.

What is your configuration? If a CEC fails, the other DB2s in the group 
should do the recovery without delay. Did all your CECs and DB2s fail? Our 
experience is that a group restart is very fast, at most 2-3 minutes, and 
those are also IBM's figures.
Altogether, we still see advantages in not using SMCFSD for Lock and SCA.

Why did you decide differently?

Kees.


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
Behalf Of Skip Robinson
Sent: 21 December, 2015 20:32
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [Bulk] Re: Coupling Facility Structure Re-sizing

I'm talking from experience. The two hours-long CEC failures we had--most 
recently in the fall of 2014--took down all CICS and DB2 applications as 
well as three ICFs on the box that failed. The secondary 'penalty' box 
stayed up and kept live copies of structures so that after hardware 
repair, all LPARs--host and CF--came up with no recovery needed. In 
particular, no DB2 log processing, which is the worst case for recovery.

As for processing overhead, that's why IBM delayed SMCFSD. We're as 
concerned with performance as any shop. Millions of CICS/DB2 transactions 
per hour. For DASD mirroring, we went with XRC (async) rather than PPRC 
(sync) for that reason. Today we see no visible delays from SMCFSD. This 
is predicated on having enough CF engines to do the job. As previously 
stated, beware of putting CF LPARs on hardware that's slower than the 
exploiters. Note that CF, zIIP, and IFL engines run at full rated speed 
even on a box that's 'downsized' to run GP engines at less than maximum 
speed--to save software costs. That's why we're happy to put ICFs on 
otherwise slower penalty boxes. 
.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Vernooij, CP (ITOPT1) - KLM
> Sent: Sunday, December 20, 2015 11:35 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: [Bulk] Re: Coupling Facility Structure Re-sizing
> 
> Your last statement is far too general in my opinion. SMCFSD is not
> free: besides memory, which indeed is cheap these days, it will cost
> performance, like PPRC does.
> So one must always make the decision about having high availability or
> high performance.
> Even without SMCFSD, Structure availability is very high. And in the
> rare event of a CF failure (when was your last one?) each exploiter of
> CF Structures should be able to recover from that failure. In my
> experience they all do, except MQ.
> If you have a CF failure, the structures are recovered within seconds
> or minutes.
> If you can't bear the recovery delay, you can use Duplexing. Besides
> that, if you have a CF failure, what other problems do you have? Do you
> still need the zero recovery delay then?
> 
> Kees.
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Skip Robinson
> Sent: 19 December, 2015 5:57
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Coupling Facility Structure Re-sizing
> 
> Wow, I feel so ancient. In the History of the World Part II, there are 
two kinds of
> duple

Re: Coupling Facility Structure Re-sizing

2015-12-21 Thread Bill Neiman
Just to clarify a few points from this thread:

The CFRM policy should specify INITSIZE for a structure only when that 
structure supports alter.  It's not meaningful otherwise.

The reason for the recommendation that SIZE should never be more than 1.5 - 2 
times INITSIZE is that when the CF initially allocates a structure, it must 
provide sufficient internal control objects to support the structure's eventual 
maximum size.  If SIZE is excessive relative to INITSIZE, it may be impossible 
to allocate the structure at size INITSIZE and still provide those internal 
control objects.  Allocation may fail entirely, or it may create a structure 
with so much of its storage consumed by internal controls that it provides 
insufficient objects (entries, elements, etc.) for application use.
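As an illustration of that INITSIZE/SIZE relationship, a CFRM policy statement for the IXCMIAPU utility might look like the sketch below. The policy name, structure name, CF names, and sizes are all invented; sizes are given in 1 KB units, and SIZE is kept at exactly twice INITSIZE per the guidance above:

```
DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(POLICY01) REPLACE(YES)
    STRUCTURE NAME(MYGRP_LOCK1)
      INITSIZE(32768)
      SIZE(65536)
      MINSIZE(32768)
      ALLOWAUTOALT(YES)
      PREFLIST(CF01,CF02)
```

Since INITSIZE is meaningful only for structures that support alter, ALLOWAUTOALT(YES) is shown alongside it for illustration; keeping SIZE within twice INITSIZE lets the CF reserve its internal control objects without starving the initial allocation.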

Only DB2 GBP structures exploit user-managed duplexing.  DB2 (IRLM) lock 
structures and DB2 list structures use system-managed duplexing.

CFSizer and the SIZER utility are two different things, both available at the 
CFSizer web site (http://www.ibm.com/systems/support/z/cfsizer/).  The SIZER 
utility is the tool of choice for migrations of the type described by the 
original post.  It collects information about attributes and object counts for 
all currently allocated structures, and determines what size would be required 
to support the same attributes and counts in all CFs connected to the system 
where the utility is being run.  It is useful when you are satisfied that the 
currently allocated structures are adequately sized for the existing workload.  
If you wish to verify that a structure is adequately sized, or if you're 
introducing a new workload type or changing an existing workload, that's when 
you use CFSizer.   CFSizer requires you to provide input describing the 
application workload (peak ENQ count, message arrival rate and retention time, 
data base size, etc., specific to the application) and returns a size estimate 
based on that workload description.  As noted in a previous append, it 
deliberately produces generous size estimates, because an undersized structure 
can cause serious problems while it's practically impossible to go wrong by 
moderately over-sizing a structure (assuming you don't exceed the CF's 
available storage).

Bill Neiman
IBM Parallel Sysplex development



Re: Coupling Facility Structure Re-sizing

2015-12-20 Thread Vernooij, CP (ITOPT1) - KLM
Your last statement is far too general in my opinion. SMCFSD is not free: 
besides memory, which indeed is cheap these days, it will cost performance, 
like PPRC does.
So one must always make the decision about having high availability or high 
performance. 
Even without SMCFSD, Structure availability is very high. And in the rare event 
of a CF failure (when was your last one?) each exploiter of CF Structures should 
be able to recover from that failure. In my experience they all do, except MQ. 
If  you have a CF failure, the structures are recovered within seconds or 
minutes. If you can't bear the recovery delay, you can use Duplexing. Besides 
that, if you have a CF failure, what other problems do you have? Do you still 
need the zero recovery delay then?

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Skip Robinson
Sent: 19 December, 2015 5:57
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Coupling Facility Structure Re-sizing

Wow, I feel so ancient. In the History of the World Part II, there are two 
kinds of duplexing. The late comer is System Managed Duplexing, which is 
provided by z/OS - XCF - XES. The exploiter does not need to participate in SMD 
(my acronym); he just reaps the benefits. But SMD for customer use was delayed 
for quite a while because IBM could not get it working. (More history.)

Meanwhile DB2 could not wait for SMD and developed their own duplexing 
mechanism. Hence DB2/IRLM does not need/use SMD. I forgot that when I mentioned 
DB2 recovery. So I recommend that DUPLEX be specified for all other structures 
that need SMD. 

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of phil yogendran
Sent: Friday, December 18, 2015 12:19 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [Bulk] Re: Coupling Facility Structure Re-sizing

The increases recommended by the CF Sizer are marginal. Our structures in 
production are generously sized and we have lots of storage in the new CFs so 
that's not a concern. I will however lookout for messages as suggested.

Most of our structures are duplexed. Some like the structure for the IRLM lock 
are not. I have a note to investigate the product specific doc to understand 
this better.

I also need to check on the performance of CF links as we're going to ICB links 
now.

Thanks for the info.




On Fri, Dec 18, 2015 at 12:42 PM, Skip Robinson <jo.skip.robin...@att.net>
wrote:

> In case you're  curious, the parameters 'missing' from your old 
> definitions were added over the years since the advent of coupling 
> facility. The new parameters all have defaults such that they do not 
> actually require specification, but using them may give you better 
> control over structure sizes. Some additional points:
>
> -- At any time, the CF Sizer makes recommendations based on the latest 
> hardware with the latest microcode. Newer hardware or newer microcode 
> typically requires larger structures to accomplish the same work even 
> with no changes to the exploiters.
>
> -- In my experience, CF Sizer makes very generous recommendations. 
> Memory is cheaper now than ever, but watch out for gratuitous over allocation.
> Especially on an external CF, you might be constrained.
>
> -- Several structures require that you input data to CF Sizer on how 
> busy you expect the structure to be. For most, this has less to do 
> with the number of sysplex members than the amount of data the 
> structure has to handle. This is seldom easy to determine. Make your 
> best SWAG and monitor the results.
>
> -- The worst case is when a structure is too small for the exploiter 
> to initialize. I have not seen this for some time; maybe the big 
> exploiters have been (re)designed to come up regardless. But watch for 
> messages indicating that a structure needed more than the specified 
> minimum size at the outset.
>
> -- A parameter you did not ask about is DUPLEX. Even if you have only 
> one box for CF use, I recommend two CF LPARs on that box with 
> duplexing for relevant structures. Better of course would be two 
> boxes. The best thing about sysplex is its ability to survive 
> disruptions. Over the years we have had two CEC failures. In both 
> cases, the second CF allowed all applications to resume with zero data 
> recovery efforts. Note that some structures do not require duplexing, notably 
> GRS. If a host dies, so do all of its enqueues.
>
>
> .
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile

Re: Coupling Facility Structure Re-sizing

2015-12-18 Thread nitz-ibm
> 1) Should INITSIZE and MINSIZE always be specified?
I have always specified both, and I have always made them equal. I had some 
unpleasant surprises when MINSIZE was smaller than INITSIZE. And I have always 
set INITSIZE to half of SIZE.

> 2) Are all structures 'eligible' to be defined with these parameters?
IIRC yes. If not, IXCMIAPU will tell you.

> 3) Besides the ratio specified above, are there any guidelines for the
> definitions?
Check 'Setting up a sysplex' for general guidelines, considerations for 
individual structures are usually found where the product documentation is. 
Don't bother going to the PRISM guide (if that is still referenced in sysplex 
setup) - the CFSizer is pretty good instead.

> 4) The sizer tool is based on current allocation which may not be ideal. Is
> there a way to confirm if I have the best fit?
Check the SMF records once things are set and change as needed.

Barbara



Re: Coupling Facility Structure Re-sizing

2015-12-18 Thread Richards, Robert B.
Phil,

Since no one else has asked, why are you going from internal to external CFs? 
What is the hardware involved for both your regular lpars and your CF lpars?

The last thing you want is for your CFs to be slower than the CPs. BTDTGTS

Bob

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of phil yogendran
Sent: Thursday, December 17, 2015 4:03 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Coupling Facility Structure Re-sizing

Hello,

I am in the middle of attempting to go from internal to external CFs. I ran the 
sizer tool and have the info for the new sizes. However, the INITSIZE and 
MINSIZE values have me challenged.
For most structures, the current allocation doesn't specify a value for either 
parameter, or the specified value appears to be a poor choice. For instance, the 
book says that SIZE should be no more than 1.5 - 2.0 times INITSIZE, but we have 
some where SIZE is 30 times INITSIZE. My questions are:

1) Should INITSIZE and MINSIZE always be specified?
2) Are all structures 'eligible' to be defined with these parameters?
3) Besides the ratio specified above, are there any guidelines for the 
definitions?
4) The sizer tool is based on current allocation which may not be ideal. Is 
there a way to confirm if I have the best fit?
5) Besides the manual "Setting up a Sysplex" is there more doc where I can find 
additional info?

Any suggestions or thoughts will be appreciated. Thanks.

Phil



Re: Coupling Facility Structure Re-sizing

2015-12-18 Thread Elardus Engelbrecht
Richards, Robert B. wrote:

>The last thing you want is for your CFs to be slower than the CPs. BTDTGTS 

Ouch. Could you be kind to tell us about it? Are there any manuals stating that 
trouble? Any configuration changes to avoid? Or is it about the sizes or 
quantity of LPARs involved? 

TIA!

Groete / Greetings
Elardus Engelbrecht



Re: Coupling Facility Structure Re-sizing

2015-12-18 Thread Richards, Robert B.
The archives probably have it, but simply put and if IIRC, there was an old 
9674 being used with z990s. Waiting on CF structure response was horrific as 
compared to the speed of the z990 processor response. 



Re: Coupling Facility Structure Re-sizing

2015-12-18 Thread phil yogendran
Thank you all for your replies. I will take your suggestions into
consideration going forward. We are in the process of upgrading from z10 ->
z12 -> z13 over the next few months. The CF upgrade is a part of this
project. The CFs are going from 2097/E10 and 2098/E12 to 2817/M15.

I expect to see better structure response with these changes and will be
surprised to see anything otherwise. Will keep you posted. Thanks again.



Re: Coupling Facility Structure Re-sizing

2015-12-18 Thread phil yogendran
The increases recommended by the CF Sizer are marginal. Our structures in
production are generously sized and we have lots of storage in the new CFs,
so that's not a concern. I will, however, look out for messages as suggested.

Most of our structures are duplexed. Some like the structure for the IRLM
lock are not. I have a note to investigate the product specific doc to
understand this better.

I also need to check on the performance of CF links as we're going to ICB
links now.

Thanks for the info.




On Fri, Dec 18, 2015 at 12:42 PM, Skip Robinson <jo.skip.robin...@att.net>
wrote:

> In case you're  curious, the parameters 'missing' from your old
> definitions were added over the years since the advent of coupling
> facility. The new parameters all have defaults such that they do not
> actually require specification, but using them may give you better control
> over structure sizes. Some additional points:
>
> -- At any time, the CF Sizer makes recommendations based on the latest
> hardware with the latest microcode. Newer hardware or newer microcode
> typically requires larger structures to accomplish the same work even with
> no changes to the exploiters.
>
> -- In my experience, CF Sizer makes very generous recommendations. Memory
> is cheaper now than ever, but watch out for gratuitous over allocation.
> Especially on an external CF, you might be constrained.
>
> -- Several structures require that you input data to CF Sizer on how busy
> you expect the structure to be. For most, this has less to do with the
> number of sysplex members than the amount of data the structure has to
> handle. This is seldom easy to determine. Make your best SWAG and monitor
> the results.
>
> -- The worst case is when a structure is too small for the exploiter to
> initialize. I have not seen this for some time; maybe the big exploiters
> have been (re)designed to come up regardless. But watch for messages
> indicating that a structure needed more than the specified minimum size at
> the outset.
>
> -- A parameter you did not ask about is DUPLEX. Even if you have only one
> box for CF use, I recommend two CF LPARs on that box with duplexing for
> relevant structures. Better of course would be two boxes. The best thing
> about sysplex is its ability to survive disruptions. Over the years we have
> had two CEC failures. In both cases, the second CF allowed all applications
> to resume with zero data recovery efforts. Note that some structures do not
> require duplexing, notably GRS. If a host dies, so do all of its enqueues.
>
>
> .
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> jo.skip.robin...@att.net
> jo.skip.robin...@gmail.com


Re: Coupling Facility Structure Re-sizing

2015-12-18 Thread Skip Robinson
Wow, I feel so ancient. In the History of the World Part II, there are two 
kinds of duplexing. The latecomer is System Managed Duplexing, which is 
provided by z/OS (XCF/XES). The exploiter does not need to participate in SMD 
(my acronym); it just reaps the benefits. But SMD for customer use was delayed 
for quite a while because IBM could not get it working. (More history.)

Meanwhile DB2 could not wait for SMD and developed its own duplexing 
mechanism. Hence DB2/IRLM does not need/use SMD. I forgot that when I mentioned 
DB2 recovery. So I recommend that DUPLEX be specified for all other structures 
that need SMD. 
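In the CFRM policy, the distinction is just the DUPLEX keyword. A sketch with
placeholder sizes (APPL_CACHE1 is a hypothetical structure name; ISGLOCK is
the GRS star lock structure, which as noted does not need duplexing):

```jcl
    STRUCTURE NAME(APPL_CACHE1)
              SIZE(65536)
              INITSIZE(32768)
              PREFLIST(CF01,CF02)
              DUPLEX(ENABLED)

    STRUCTURE NAME(ISGLOCK)
              SIZE(33792)
              INITSIZE(33792)
              PREFLIST(CF01,CF02)
```

DUPLEX(ENABLED) asks the system to start system-managed duplexing
automatically; DUPLEX(ALLOWED) would permit it only when started by operator
command; omitting the keyword leaves the structure simplex.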

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@att.net
jo.skip.robin...@gmail.com


Re: Coupling Facility Structure Re-sizing

2015-12-17 Thread Elardus Engelbrecht
phil yogendran wrote:

>I am in the middle of attempting to go from internal to external CFs. I ran 
>the sizer tool and have the info for the new sizes. However, the INITSIZE and 
>MINSIZE values have me challenged. For most structures, the  current 
>allocation doesn't specify a value for either parameter or the specified value 
>appears to be a poor choice. For instance, the book says that SIZE should be 
>no more than 1.5 - 2.0 times INITSIZE but we have some where SIZE is 30 times 
>INITSIZE. My questions are:

>1) Should INITSIZE and MINSIZE always be specified?
>2) Are all structures 'eligible' to be defined with these parameters?
>3) Besides the ratio specified above, are there any guidelines for the
>definitions?
>4) The sizer tool is based on current allocation which may not be ideal. Is
>there a way to confirm if I have the best fit?
>5) Besides the manual "Setting up a Sysplex" is there more doc where I can
>find additional info?

>Any suggestions or thoughts will be appreciated. Thanks.

As always, YMMV. In particular, you need to take into account how many LPARs are 
connected to that CF.

The Sizer Tool's recommendations are based on typical sizes and estimates. Only 
you know which sizes are best for you.

Use D XCF,STRUCTURE,STRNAME=??? and run program IXCMIAPU to check your current 
usage.

Modify as needed. Wash/Rinse/Repeat if needed.
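As a sketch of that check (the structure name in the display command is a
placeholder), D XCF,STRUCTURE,STRNAME=structname shows the current allocation
from the console, and a minimal IXCMIAPU report job dumps the active CFRM
administrative data:

```jcl
//CFRMRPT  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
/*
```

Compare the actual sizes and usage in the display output against the
SIZE/INITSIZE values in the report, then adjust the policy and re-activate it.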

Groete / Greetings
Elardus Engelbrecht



Re: Coupling Facility Structure Re-sizing

2015-12-17 Thread Vernooij, CP (ITOPT1) - KLM
I will try to answer some questions.

1) No, you should specify them only when you need their function.
2) I have a different approach: if a structure is allowed to occupy its SIZE at 
any moment, you had better have the memory available when it does. Specifying 
INITSIZEs and gambling that not all structures will ask for their SIZE at the 
same moment could let you define less memory for the CF LPAR than the sum of 
all SIZEs, but I consider this a dangerous gamble with few benefits.
5) 'Setting up a Sysplex' is THE source of information. Plus the archives of 
this group.

Kees.


