Re: Internal Coupling Channel on z13

2019-02-06 Thread Allan Staller
I disagree. The answer is "it depends".

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Seymour J Metz
Sent: Tuesday, February 5, 2019 2:50 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Internal Coupling Channel on z13

It is possible to convert ENQ with SYSTEMS to RESERVE, but the performance 
ramifications are unacceptable.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: Internal Coupling Channel on z13

2019-02-05 Thread Seymour J Metz
It is possible to convert ENQ with SYSTEMS to RESERVE, but the performance 
ramifications are unacceptable.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3




Re: Internal Coupling Channel on z13

2019-02-04 Thread Tony Harminc
On Wed, 30 Jan 2019 at 01:10, Brian Westerman
 wrote:
>
> Do you have any figures for how much "more" friendly the CPU usage is?

Funny thing... More than 40 years ago VM/370 was able to detect the
standard TIO/BC loop when issued in a virtual machine, and instead of
allowing it to eat CPU time, dispatched other work until the needed
I/O interrupt occurred or the virtual device otherwise became
available. One might think that PR/SM or whatever that code is called
these days could do the analogous thing without needing any changes to
the coupling code...

Tony H.



Re: Internal Coupling Channel on z13

2019-02-04 Thread Tony Harminc
On Wed, 30 Jan 2019 at 01:53, Ed Jaffe  wrote:

> Is it no longer possible to use "old school" shared DASD RESERVE/RELEASE
> to protect data? I know it won't work for sharing PDSE, but for
> old-school PDS and sequential, it should still work.

Reserve/Release works only if someone issues those CCWs. DADSM issues
them to protect the VTOC, and maybe catalog management does (I don't
know), and the Binder does if SYSLMOD is on a shared device. But data
management generally does not, so e.g. if you have two jobs on
separate LPARs and each has DISP=OLD for the same dataset using QSAM
or BPAM, say, nothing protects the data itself from being written from
both sides at the same time.

You can put the Reserve/Release in your application program (via ENQ),
but that's not going to work very well for your typical COBOL program,
and unless you have extreme data separation by volume, performance
will suck, to put it politely.

Typically what is wanted is the notion of ENQ with the SYSTEMS option,
and that's just not available (and is silently ignored if requested)
without either GRS or one of the products popular in the 1980s that
implemented GRS-like behaviour using Reserve/Release on some kind of
control dataset. I don't know if any of those products are still on
the market.
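For readers who haven't used these macros, the difference under discussion can be sketched roughly as follows (the qname/rname values, lengths, and register choice are made up, and operand details are from memory; check the z/OS MVS Assembler Services Reference for the exact syntax):

```
* Cross-system ENQ: honored only when GRS (or an equivalent
* product) is active; otherwise the SYSTEMS scope quietly degrades.
         ENQ   (MAJOR,MINOR,E,8,SYSTEMS)
*        ...serialized work...
         DEQ   (MAJOR,MINOR,8,SYSTEMS)
* Hardware RESERVE: serializes at the device without needing GRS,
* but locks out every other dataset on the volume while held.
         RESERVE (MAJOR,MINOR,E,8,SYSTEMS),UCB=(R2)
*        ...serialized work...
         DEQ   (MAJOR,MINOR,8,SYSTEMS),UCB=(R2)
MAJOR    DC    CL8'SYSZTEST'
MINOR    DC    CL8'MYRSRC'
```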

Or why not, as another poster suggested, use a GRS ring. This can
protect applications against dataset corruption at on-the-box CTC
speed, and I don't see any obvious reason it would be slower than a
timeshared CF on the same machine. Of course you get GRS - not
Parallel Sysplex, and there are various things such as JES2 shared
SPOOL that need the Sysplex.

Tony H.



Re: Internal Coupling Channel on z13

2019-01-31 Thread Martin Packer
I would form a view as to how important CF request performance is for Dev. 
If it's important I'd be tempted to turn DYNDISP off for Dev and let 
Sandbox suffer.

Cheers, Martin

Martin Packer

zChampion, Systems Investigator & Performance Troubleshooter, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker

Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker

Podcast Series (With Marna Walle): https://developer.ibm.com/tv/mpt/or 
  
https://itunes.apple.com/gb/podcast/mainframe-performance-topics/id1127943573?mt=2


Youtube channel: https://www.youtube.com/channel/UCu_65HaYgksbF6Q8SQ4oOvA





Re: Internal Coupling Channel on z13

2019-01-31 Thread Jesse 1 Robinson
We share CFs between Sandbox and Dev. The latter is not classified here as 
Prod, though its usage is a lot more prod-ish than is Sandbox. We currently set 
all the shared-CP LPARs to THIN based on some Q in a SHARE session. Should we 
revisit that question?

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com




Re: Internal Coupling Channel on z13

2019-01-31 Thread Jesse 1 Robinson
Issuing the DYNDISP THIN command on a CF with no shared CF engines gets:

CF0505I DYNDISP command cancelled 





Re: Internal Coupling Channel on z13

2019-01-31 Thread Martin Packer
(This thread has got quite long so pardon me if I repeat something someone 
else said.)

If you must run a PRODUCTION Coupling Facility LPAR on SHARED engines I 
would generally recommend turning DYNDISP off for that one LPAR. Set it to 
THIN for the non-Production ones it's forced to share with.

The reason is you don't want the Production CF to give up the engine in a 
particularly timely fashion - because another request might come in soon 
after the last one. For the Non-Prod ones you want THIN because you want 
them to get out of the way ASAP. However, the fact of sharing at all is 
the major destructive factor. THIN just mitigates it.
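Concretely, the settings Martin describes are made per-CF from the coupling facility console; a sketch using the command forms mentioned elsewhere in the thread (exact syntax varies by CFCC level, so verify on your machine):

```
=> DYNDISP OFF        production CF forced to share engines
=> DYNDISP THIN       non-production CFs it shares with
=> DISPLAY DYNDISP    confirm the active setting
```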

(The instrumentation to guide you on this is SMF74INT, R744PBSY and 
R744PWAI - which can be examined down to the engine level. Sometimes with 
value.)

But, as always, actual mainframe estate shape governs the decisions.

Cheers, Martin






Re: Internal Coupling Channel on z13

2019-01-31 Thread Ravi Gaur
DYNDISP with THIN INTERRUPT is recommended for non-dedicated CF CPUs. For example, our sandbox/dev systems run on non-dedicated engines and we keep them at THIN, while for production, which has a dedicated CF CPU, it's a good idea to keep it OFF. The z14 configuration setup guide explains this well. Note that these changes have to be made from the HMC.



Re: Internal Coupling Channel on z13

2019-01-30 Thread Brian Westerman
Was this on single-CP systems, or did you have multiple CPs or specialty processors handling things?

Brian



Re: Internal Coupling Channel on z13

2019-01-30 Thread Ed Jaffe

On 1/30/2019 3:19 PM, Jesse 1 Robinson wrote:

Hence we were running with the default DYNDISP OFF. We set DYNDISP to THIN 
INTERRUPT via CF command, and suddenly all was well again. Night and day.


Haha! No kidding! DYNDISP=OFF is basically a TIGHT CPU LOOP!


--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/





Re: Internal Coupling Channel on z13

2019-01-30 Thread Jesse 1 Robinson
(Mentioned this in a previous post.) We replaced two CECs recently. Once 
everything was running, we discovered that the two-member sandbox was running 
horribly. Unusable really. Turned out to be configuration. We had copied CEC, 
LPAR, and Load profiles from old to new boxes. But there is no exportable 
'profile' to govern shared CF, i.e. no DYNDISP. Hence we were running with the 
default DYNDISP OFF. We set DYNDISP to THIN INTERRUPT via CF command, and 
suddenly all was well again. Night and day. 





Re: Internal Coupling Channel on z13

2019-01-30 Thread Scott Chapman
I haven't seen many single-CP boxes in general, and haven't seen one running 
both CFCC and z/OS on that single CP. My expectation is that this would perform 
poorly. Sync requests would be impossible since PR/SM can't have both the z/OS 
and CFCC dispatched on the single CP at the same time, so all requests would 
have to be converted to async. 

My expectation is that just using CTCs would probably be faster. I would 
exercise caution in testing this though, and if it was my production system I 
probably wouldn't even try. But I'd certainly be curious about the results if 
you do try it. 

Note that my reluctance applies specifically to single-CP machines. If I had 
even two CPs, then I'd be much more willing to give it a shot, depending on 
current utilization levels and so forth. With dyndisp=thin of course. 

Scott Chapman

On Wed, 30 Jan 2019 00:10:26 -0600, Brian Westerman 
 wrote:

>Do you have any figures for how much "more" friendly the CPU usage is?
>
>This box is a single CPU, no ICF, Zipp or zapp.
>
>Brian
>


Re: Internal Coupling Channel on z13

2019-01-30 Thread Parwez Hamid
Others have already said this: with just a single CP on the system, it's not a good idea to have a CFCC LPAR sharing the same CP that is also being used for z/OS LPARs.

If you have not already read the doc at the link below, I suggest you do. It is very comprehensive and covers a lot more than you need.

https://www.ibm.com/downloads/cas/JZB2E38Q

If the link doesn't work, search for ZSW01971-USEN-26 on Google. In addition to this doc, there are various Redbooks and of course Knowledge Center :-)

I know you are looking for possible use cases/recommendations for what you 
would like to do with a single CP system. I will be very surprised if anyone 
has such a configuration on a 'production' system.



Re: Internal Coupling Channel on z13

2019-01-30 Thread Vernooij, Kees (ITOP NM) - KLM
Re: "If they can't afford a second CP, they can't afford an ICF."
AFAIK an ICF is cheaper than a CP (I believe about half the price).

Met vriendelijke groet,
Kees Vernooij
KLM Information Services
z/OS Systems
Tel +31 6 10 14 58 78


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Ed Jaffe
> Sent: 30 January, 2019 7:53
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Internal Coupling Channel on z13
> 
> On 1/29/2019 10:17 PM, Brian Westerman wrote:
> > This particular box has just a single CP, no specialty processors, 3
> LPARs, one of them production, one application programmer test, and the
> other a sandbox that is extremely low use and in any case shares only
> the res volume.
> >
> > They "need" to run GRS because it's not really safe to run without it
> (especially since there is nothing to keep a lockout from occurring but
> to be careful), but they can't afford to add a specialty processor, at
> least not at this time.
> >
> > I'm trying to find out if they can install the microcode CF and
> assuming that the CPU use for the CF is now "low" if they could use it
> for DASD sharing only.  There is no real data sharing involved (no DB2,
> no CICS).  So we are just talking about GRS shipping the ENQs around.
> 
> In theory, coupling facility thin interrupts is faster than previous CF
> technologies on shared CPs:
> https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102400
> 
> ICF specialty engines are ridiculously priced (i.e., way too high).
> If they can't afford a second CP, they can't afford an ICF. Too bad,
> because that would make all the difference.
> 
> Is it no longer possible to use "old school" shared DASD RESERVE/RELEASE
> to protect data? I know it won't work for sharing PDSE, but for
> old-school PDS and sequential, it should still work.
> 
> 
> --
> Phoenix Software International
> Edward E. Jaffe
> 831 Parkview Drive North
> El Segundo, CA 90245
> https://www.phoenixsoftware.com/



Re: Internal Coupling Channel on z13

2019-01-29 Thread Ed Jaffe

On 1/29/2019 10:07 PM, Brian Westerman wrote:

No, just one single CP, no specialty processors are available.



We have two CF LPARs (CF01 and CF02) sharing a single ICF engine. From 
both CF consoles I see:



2019029 22:56:50 => display dyndisp
2019029 22:56:50 CF0512I Dynamic CF Dispatching is THINinterrupts.

Right now things are pretty quiet on the system, and from my z13s 
monitoring dashboard I see our ICF only 1% utilized with THIN 
interrupts.


When I change to DYNDISP ON instead of DYNDISP THIN, I still see only 1% 
utilized.


Of course, when I switch to DYNDISP OFF utilization jumps to 100%.

DYNDISP THIN should be more responsive and less CPU hungry than DYNDISP 
ON, but I can't really tell the difference with my simple test. Both 
seem pretty thrifty.
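For anyone wanting to repeat the experiment above, the mode switches are entered at the CFCC operator console. A sketch of the commands as commonly documented (verify the exact syntax against your CFCC level; the `=>` prompt is just the console prompt shown in the output above):

```
=> DISPLAY DYNDISP       show the current dynamic dispatching mode
=> DYNDISP THIN          coupling thin interrupts (z13/z13s CFCC and later)
=> DYNDISP ON            timer-based dynamic dispatching
=> DYNDISP OFF           active wait: CFCC polls continuously, engine runs 100% busy
```

This is a reminder of the available knobs, not authoritative syntax.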



--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/



This e-mail message, including any attachments, appended messages and the
information contained therein, is for the sole use of the intended
recipient(s). If you are not an intended recipient or have otherwise
received this email message in error, any use, dissemination, distribution,
review, storage or copying of this e-mail message and the information
contained therein is strictly prohibited. If you are not an intended
recipient, please contact the sender by reply e-mail and destroy all copies
of this email message and do not otherwise utilize or retain this email
message or any or all of the information contained therein. Although this
email message and any attachments or appended messages are believed to be
free of any virus or other defect that might affect any computer system into
which it is received and opened, it is the responsibility of the recipient
to ensure that it is virus free and no responsibility is accepted by the
sender for any loss or damage arising in any way from its opening or use.



Re: Internal Coupling Channel on z13

2019-01-29 Thread Mike Schwab
What is the percent busy at peak times?  How big a percent do you
need?  10% for the ICF partition?  Would you save that much by
converting Ring to Star?

On Wed, Jan 30, 2019 at 12:07 AM Brian Westerman
 wrote:
>
> No, just one single CP, no specialty processors are available.
>
> Brian
>



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?



Re: Internal Coupling Channel on z13

2019-01-29 Thread Ed Jaffe

On 1/29/2019 10:17 PM, Brian Westerman wrote:

This particular box has just a single CP, no specialty processors, 3 LPARs: one 
of them production, one application programmer test, and the other a sandbox 
that is extremely low use and in any case shares only the res volume.

They "need" to run GRS because it's not really safe to run without it 
(especially since there is nothing but carefulness to keep a lockout from occurring), 
but they can't afford to add a specialty processor, at least not at this time.

I'm trying to find out whether they can install the microcode CF and, assuming the 
CPU use for the CF is now "low", whether they could use it for DASD sharing only.  There is 
no real data sharing involved (no DB2, no CICS).  So we are just talking about GRS 
shipping the ENQs around.


In theory, coupling facility thin interrupts are faster than previous CF 
technologies on shared CPs: 
https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102400


ICF specialty engines are ridiculously priced (i.e., way too high). 
If they can't afford a second CP, they can't afford an ICF. Too bad, 
because that would make all the difference.


Is it no longer possible to use "old school" shared DASD RESERVE/RELEASE 
to protect data? I know it won't work for sharing PDSE, but for 
old-school PDS and sequential, it should still work.



--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/





Re: Internal Coupling Channel on z13

2019-01-29 Thread Brian Westerman
This particular box has just a single CP, no specialty processors, 3 LPARs: one 
of them production, one application programmer test, and the other a sandbox 
that is extremely low use and in any case shares only the res volume.

They "need" to run GRS because it's not really safe to run without it 
(especially since there is nothing but carefulness to keep a lockout from 
occurring), but they can't afford to add a specialty processor, at least not at 
this time.  

I'm trying to find out whether they can install the microcode CF and, assuming 
the CPU use for the CF is now "low", whether they could use it for DASD sharing 
only.  There is no real data sharing involved (no DB2, no CICS).  So we are 
just talking about GRS shipping the ENQs around.

Has anyone tried this yet?
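For the record, what GRS "ships around" are SYSTEMS-scope ENQ requests, the serialization calls that must be seen sysplex-wide. An illustrative HLASM sketch only; the major/minor names here are invented examples, not real resources:

```
         ENQ   (MAJOR,MINOR,E,8,SYSTEMS)   exclusive, sysplex-wide request
*        ... access the shared resource here ...
         DEQ   (MAJOR,MINOR,8,SYSTEMS)     release the resource
MAJOR    DC    CL8'APPLMAJ'                major (queue) name - example only
MINOR    DC    CL8'RESOURCE'               minor (resource) name - example only
```

In a ring, each such request circulates to every member; in star mode it is resolved through the lock structure in the CF, which is where the performance difference comes from.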

Brian



Re: Internal Coupling Channel on z13

2019-01-29 Thread Brian Westerman
I agree that before the current CF20 implementation of the microcode-only 
version, it was supposed to be almost fatal to try it, but with the new z13s 
and z14 it's "supposed" to be "low" impact. The documentation doesn't really say 
how low the impact is, or whether it can be done with a single CP and no specialty 
processors.



Re: Internal Coupling Channel on z13

2019-01-29 Thread Brian Westerman
Do you have any figures for how much "more" friendly the CPU usage is?

This box is a single CPU, no ICF, zIIP, or zAAP.

Brian



Re: Internal Coupling Channel on z13

2019-01-29 Thread Brian Westerman
No, I'm trying to use the microcode implementation; it has no CTCs and is just 
memory-to-memory between LPARs on the same physical box.

Brian



Re: Internal Coupling Channel on z13

2019-01-29 Thread Brian Westerman
No, just one single CP, no specialty processors are available.

Brian



Re: Internal Coupling Channel on z13

2019-01-29 Thread Timothy Sipples
I agree with the recommendation to get a CF engine as the best/first choice
all around. I also agree to proceed with caution if you're going to share
one or more CPs between the Coupling Facility Control Code (CFCC) and z/OS
and/or other operating systems. "Proceed with caution" is not the same
thing as "don't." You just want to be extra careful, test well, and back
off the idea if it's not suitable in your environment since there are some
potential issues that could surface.

However, if you have one or more CPs (general purpose processors) that
you're willing to *dedicate* to a CFCC LPAR, this caution doesn't apply. In
this case you're using a dedicated CP as if it were a CF engine, and that's
perfectly fine. You might be dedicating a sub-capacity CP (or more than
one) with different capacity characteristics than a CF engine, but there's
no particular issue with that as long as you have sufficient capacity for
your needs.

Or, if you have some "trivial" z/OS workload sharing that CP with the CFCC,
such as a "sandbox" z/OS LPAR for system programmers that sees little
activity, that might be fine.

There are certain upgrade scenarios when it can make a great deal of sense
to use one or more CPs for the CFCC for relatively brief periods of time.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z & LinuxONE
E-Mail: sipp...@sg.ibm.com



Re: Internal Coupling Channel on z13

2019-01-29 Thread Jesse 1 Robinson
We've used ICF on pretty much every model of hardware since z10. We're 
currently on z14 and z13s. We use internal coupling links with external links 
to another CEC. 

We do have--and always have had--CF engines shared among non-prod LPARs using the 
thin interrupt protocol. I would hesitate to suggest buying CFs just for GRS star, 
but if you already have them, star so far outperforms ring that there should be 
no hesitation in moving there. I also would not consider using regular CPs. 
They would work, but poorly. 

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Dana Mitchell
Sent: Tuesday, January 29, 2019 9:28 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: Internal Coupling Channel on z13

Brian,

We've actually been thinking about this too, as we evaluate moving to z14s. We 
currently have a 5-member GRS ring that could really benefit from moving to GRS 
Star, but nobody wants to spend any money on an ICF.

As of the BC12 generation, IBM was still recommending against using GPs for CF 
processors. According to an IBM paper titled "Coupling Thin Interrupts and CF 
Performance in Shared Processor Environments":

While it is possible to use general purpose processors, CPs, as CF processors 
and share the CPs with z/OS and other similarly defined CFs, this is not a 
recommended configuration. Using CPs as CF processors creates interaction 
between z/OS and the CF in terms of cache reuse and other factors that may 
impact performance. 

Dana


On Tue, 29 Jan 2019 00:23:56 -0600, Brian Westerman 
 wrote:

>Hi,
>
>Has anyone had any experience with using the internal coupling channels on a 
>z13.  "supposedly" IBM has removed the active wait problems (where the CF lpar 
>would try to use 100% of whatever it gets from PR/SM), but I was wondering if 
>it's ready for prime time yet.  I have a z13s (single CPU) with 3 LPARs on it, 
>they are fairly low use, but I think it'a about time the started to use GRS 
>since they share DASD (it was small, but now more and more as time goes on).  
>They have been lucky so far, but I would rather not rely on luck.  They don't 
>have the cash to add a specialty processor, and if IBM has really reduced the 
>overhead of ICF, then it might be a good fit for them.



Re: Internal Coupling Channel on z13

2019-01-29 Thread Dana Mitchell
Brian,

We've actually been thinking about this too, as we evaluate moving to z14s. We 
currently have a 5-member GRS ring that could really benefit from moving to GRS 
Star, but nobody wants to spend any money on an ICF.

As of the BC12 generation, IBM was still recommending against using GPs for CF 
processors. According to an IBM paper titled "Coupling Thin Interrupts and CF 
Performance in Shared Processor Environments":

While it is possible to use general purpose processors, CPs, as CF processors
and share the CPs with z/OS and other similarly defined CFs, this is not a
recommended configuration. Using CPs as CF processors creates interaction
between z/OS and the CF in terms of cache reuse and other factors that may
impact performance. 

Dana
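For anyone costing the ring-to-star move mentioned above: star mode is an IPL-time choice plus a CFRM policy update. A minimal sketch, assuming the standard GRS lock structure name ISGLOCK; the CF names and size are placeholders (size the structure with CFSizer or the GRS planning documentation, not this example):

```
/* IEASYSxx: request GRS star mode at IPL */
GRS=STAR

/* CFRM policy (IXCMIAPU DATA TYPE(CFRM)): define the GRS lock structure */
STRUCTURE NAME(ISGLOCK)
          SIZE(17M)              /* placeholder - size per CFSizer      */
          PREFLIST(CF01,CF02)    /* placeholder CF names                */
```

All systems in the sysplex must have connectivity to the CF holding ISGLOCK before star mode will initialize.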


On Tue, 29 Jan 2019 00:23:56 -0600, Brian Westerman 
 wrote:

>Hi,
>
>Has anyone had any experience with using the internal coupling channels on a 
>z13.  "supposedly" IBM has removed the active wait problems (where the CF lpar 
>would try to use 100% of whatever it gets from PR/SM), but I was wondering if 
>it's ready for prime time yet.  I have a z13s (single CPU) with 3 LPARs on it, 
>they are fairly low use, but I think it'a about time the started to use GRS 
>since they share DASD (it was small, but now more and more as time goes on).  
>They have been lucky so far, but I would rather not rely on luck.  They don't 
>have the cash to add a specialty processor, and if IBM has really reduced the 
>overhead of ICF, then it might be a good fit for them.
>



Re: Internal Coupling Channel on z13 [EXTERNAL]

2019-01-29 Thread Feller, Paul
If the LPARs are in a base sysplex you can use XCF to carry the GRS traffic and 
still be in a RING setup.  GRS using XCF is better than letting GRS try to 
handle the links itself.

We have been using both an internal and external CF for years.  Currently the 
internal is on a z13.  All the LPARs on that CEC talk to the CF over internal 
links.  Just for completeness, our external CF is sitting on a z13s.

Thanks..

Paul Feller
AGT Mainframe Technical Support


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Allan Staller
Sent: Tuesday, January 29, 2019 7:40 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Internal Coupling Channel on z13 [EXTERNAL]

GRS RING can/will run over CTCs. Not sure I would want to do that in a 
three-member ring; it was bad enough in a two-member ring.

I am currently using an ICF on a zBC12 with no issue. However, I am not sure an 
ICF is the same thing you are describing.



-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Brian Westerman
Sent: Tuesday, January 29, 2019 12:24 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Internal Coupling Channel on z13

Hi,

Has anyone had any experience with using the internal coupling channels on a 
z13? "Supposedly" IBM has removed the active wait problems (where the CF LPAR 
would try to use 100% of whatever it gets from PR/SM), but I was wondering if 
it's ready for prime time yet.  I have a z13s (single CPU) with 3 LPARs on it; 
they are fairly low use, but I think it's about time they started to use GRS 
since they share DASD (it was small, but now more and more as time goes on).  
They have been lucky so far, but I would rather not rely on luck.  They don't 
have the cash to add a specialty processor, and if IBM has really reduced the 
overhead of ICF, then it might be a good fit for them.

Anyone using the microcode ICF on a z13 or z14?  If so, does it play well with 
everyone else?

Brian

::DISCLAIMER::
--
The contents of this e-mail and any attachment(s) are confidential and intended 
for the named recipient(s) only. E-mail transmission is not guaranteed to be 
secure or error-free as information could be intercepted, corrupted, lost, 
destroyed, arrive late or incomplete, or may contain viruses in transmission. 
The e mail and its contents (with or without referred errors) shall therefore 
not attach any liability on the originator or HCL or its affiliates. Views or 
opinions, if any, presented in this email are solely those of the author and 
may not necessarily reflect the views or opinions of HCL or its affiliates. Any 
form of reproduction, dissemination, copying, disclosure, modification, 
distribution and / or publication of this message without the prior written 
consent of authorized representative of HCL is strictly prohibited. If you have 
received this email in error please delete it and notify the sender 
immediately. Before opening any email and/or attachments, please check them for 
viruses and other defects.
--



Re: Internal Coupling Channel on z13

2019-01-29 Thread R.S.

W dniu 2019-01-29 o 07:23, Brian Westerman pisze:

Hi,

Has anyone had any experience with using the internal coupling channels on a 
z13? "Supposedly" IBM has removed the active wait problems (where the CF LPAR 
would try to use 100% of whatever it gets from PR/SM), but I was wondering if 
it's ready for prime time yet.  I have a z13s (single CPU) with 3 LPARs on it; 
they are fairly low use, but I think it's about time they started to use GRS 
since they share DASD (it was small, but now more and more as time goes on).  
They have been lucky so far, but I would rather not rely on luck.  They don't 
have the cash to add a specialty processor, and if IBM has really reduced the 
overhead of ICF, then it might be a good fit for them.

Anyone using the microcode ICF on a z13 or z14?  If so, does it play well with 
everyone else?


A few remarks:
1. Internal channels (ICP) have *nothing to do* with the CPU consumption of 
an internal CF.
2. An internal CF is just a CF. There is no difference between them except 
the failure domain.
3. Yes, IBM changed the way CFCC uses the processor. Instead of the legacy 
"active wait" (meaning 100% CPU all the time, even when serving nothing), 
there is now a more CPU-friendly mode.


--
Radoslaw Skorupka
Lodz, Poland




==


If you are not the addressee of this message:

- let us know by replying to this e-mail (thank you!),
- delete this message permanently (including all the copies which you have 
printed out or saved).
This message may contain legally protected information, which may be used 
exclusively by the addressee.Please be reminded that anyone who disseminates 
(copies, distributes) this message or takes any similar action, violates the 
law and may be penalised.

mBank S.A. with its registered office in Warsaw, ul. Senatorska 18, 00-950 
Warszawa,www.mBank.pl, e-mail: kont...@mbank.pl. District Court for the Capital 
City of Warsaw, 12th Commercial Division of the National Court Register, KRS 
025237, NIP: 526-021-50-88. Fully paid-up share capital amounting to PLN 
169,248,488 as at 1 January 2018.



Re: Internal Coupling Channel on z13

2019-01-29 Thread Allan Staller
GRS RING can/will run over CTCs. Not sure I would want to do that in a 
three-member ring; it was bad enough in a two-member ring.

I am currently using an ICF on a zBC12 with no issue. However, I am not sure an 
ICF is the same thing you are describing.



-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Brian Westerman
Sent: Tuesday, January 29, 2019 12:24 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Internal Coupling Channel on z13

Hi,

Has anyone had any experience with using the internal coupling channels on a 
z13? "Supposedly" IBM has removed the active wait problems (where the CF LPAR 
would try to use 100% of whatever it gets from PR/SM), but I was wondering if 
it's ready for prime time yet.  I have a z13s (single CPU) with 3 LPARs on it; 
they are fairly low use, but I think it's about time they started to use GRS 
since they share DASD (it was small, but now more and more as time goes on).  
They have been lucky so far, but I would rather not rely on luck.  They don't 
have the cash to add a specialty processor, and if IBM has really reduced the 
overhead of ICF, then it might be a good fit for them.

Anyone using the microcode ICF on a z13 or z14?  If so, does it play well with 
everyone else?

Brian



Re: Internal Coupling Channel on z13

2019-01-28 Thread Martin Packer


Hi Brian.

Are you referring to Coupling Facility Thin Interrupts? It sounds like you
might be.

Also, are you running a dedicated ICF engine? Or sharing?

Thanks, Martin

Sent from my iPad

> On 29 Jan 2019, at 06:24, Brian Westerman 
wrote:
>
> Hi,
>
> Has anyone had any experience with using the internal coupling channels
on a z13? "Supposedly" IBM has removed the active wait problems (where the
CF LPAR would try to use 100% of whatever it gets from PR/SM), but I was
wondering if it's ready for prime time yet.  I have a z13s (single CPU)
with 3 LPARs on it; they are fairly low use, but I think it's about time
they started to use GRS since they share DASD (it was small, but now more
and more as time goes on).  They have been lucky so far, but I would rather
not rely on luck.  They don't have the cash to add a specialty processor,
and if IBM has really reduced the overhead of ICF, then it might be a good
fit for them.
>
> Anyone using the microcode ICF on a z13 or z14?  If so, does it play well
with everyone else?
>
> Brian
>
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU




Internal Coupling Channel on z13

2019-01-28 Thread Brian Westerman
Hi,

Has anyone had any experience with using the internal coupling channels on a 
z13? "Supposedly" IBM has removed the active wait problems (where the CF LPAR 
would try to use 100% of whatever it gets from PR/SM), but I was wondering if 
it's ready for prime time yet.  I have a z13s (single CPU) with 3 LPARs on it; 
they are fairly low use, but I think it's about time they started to use GRS 
since they share DASD (it was small, but now more and more as time goes on).  
They have been lucky so far, but I would rather not rely on luck.  They don't 
have the cash to add a specialty processor, and if IBM has really reduced the 
overhead of ICF, then it might be a good fit for them.

Anyone using the microcode ICF on a z13 or z14?  If so, does it play well with 
everyone else?

Brian
