A switch can relieve the limitation, because it is really a limitation of the SFP (the electro-optical converter). You can replace the SFPs in the switch, so your new CPC connects to a 16 or even 32 Gbps SFP on the switch while your old CU is still connected to a 4 Gbps SFP, which can negotiate down to 1 Gbps. And the switch can use a 128 Gbps ISL.
Note: feasible is not wise or optimal. It is just feasible.
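
A minimal sketch in Python (hypothetical port speed steps, for illustration only) of why the switch relieves the mismatch: each port pair negotiates its own speed independently, so the end-to-end transfer is gated by the slowest hop, not by the fastest SFP in the path.

# Sketch only: speed steps below are assumptions, not vendor specs.

def negotiated_gbps(port_a_steps, port_b_steps):
    """Highest speed (Gbps) that both ports support, or None if no overlap."""
    common = set(port_a_steps) & set(port_b_steps)
    return max(common) if common else None

# New CPC channel into a 16 Gbps switch SFP; old CU on a 4 Gbps switch SFP.
cpc_to_switch = negotiated_gbps([16, 8, 4], [16, 8, 4])  # -> 16
switch_to_cu  = negotiated_gbps([4, 2, 1], [4, 2, 1])    # -> 4
isl           = 128                                      # Gbps between switches

# The slowest hop gates the path: 4 Gbps here, despite the 16G SFP and 128G ISL.
print("effective path speed:", min(cpc_to_switch, switch_to_cu, isl), "Gbps")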

BTW: It's a pity IBM does not allow replacing the SFP in a FICON card. That could address SX vs. LX problems as well as speed limitations (without requiring a switch).

--
Radoslaw Skorupka
Lodz, Poland







On 2018-02-20 at 16:48, Mike Schwab wrote:
Printed page numbers 63-64 of
https://www.redbooks.ibm.com/redbooks/pdfs/sg245444.pdf
FICON Express 16 will negotiate down to 8 or 4 Gbps.
FICON Express 8 will negotiate down to 4 or 2 Gbps.

You will need to upgrade the DASD FICON cards or downgrade the mainframe
FICON Express cards.
I don't know whether there is a speed-adjusting director.

On Tue, Feb 20, 2018 at 4:41 AM, Tommy Tsui <tommyt...@gmail.com> wrote:
Hi Ron,
What happens if our FICON card is 16 Gb and the FCP connection is 2 Gb? I
tried the simulation on a monoplex LPAR and the result is fine. Now we
suspect GRS or another system parm that would increase the disconnect time.

Ron Hawkins <ronjhawk...@sbcglobal.net> wrote on Thursday, 15 February 2018:

Tommy,

This should not be a surprise. The name "Synchronous Remote Copy" implies
the overhead that you are seeing, namely the time for the synchronous write
to the remote site.

PPRC will more than double the response time of random writes, because the
host write to cache incurs the additional time of controller latency,
round-trip delay, and block transfer before the write is complete. On IBM
and HDS (not sure about EMC) the impact is greater for single blocks, as
chained sequential writes have some overlap between the host write and the
synchronous write.
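
A rough back-of-envelope model makes the doubling concrete for the 30 km case in this thread. All the component times below are assumptions for illustration, not measured vendor figures.

# Back-of-envelope model of one synchronous PPRC random write.
# local_write_us and ctl_latency_us are assumed values, not vendor data.

KM = 30                    # site separation mentioned in this thread
US_PER_KM = 5.0            # ~5 microseconds per km of fiber, one way

local_write_us = 200       # host write into primary cache (assumed)
ctl_latency_us = 100       # secondary controller processing (assumed)
round_trip_us  = 2 * KM * US_PER_KM          # 300 us of pure propagation
# one 27 KB block over a 2 Gbps FCP link; 2 Gbps = 2000 bits per microsecond
block_xfer_us  = (27_648 * 8) / 2_000        # ~111 us

pprc_write_us = local_write_us + ctl_latency_us + round_trip_us + block_xfer_us
print(f"write without PPRC: {local_write_us} us")
print(f"write with PPRC   : {pprc_write_us:.0f} us")   # ~711 us, well over 2x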

Some things to check:

1) Buffer credits on the ISLs between the sites. If there are no ISLs, then
check the settings on the storage host ports to cater for 30 km of B2B
credits (see the sketch after this list).
2) Channel speed step-down - If your FICON channels are 8 Gb and the FCP
connections are 2 Gb, then PPRC writes will take up to four times longer to
transfer. It depends on the block size (also covered in the sketch below).
3) Unbalanced ISLs - ISLs do not automatically rebalance after one drops.
The more concurrent IO there is on an ISL, the longer the transfer time for
each PPRC write. There may be one or more ISLs that are not being used,
while others are overloaded.
4) Switch board connections not optimal - talk to your switch vendor
5) Host adapter ports connections not optimal - talk to your storage vendor
6) Sysplex tuning may identify IO that can convert from disk to Sysplex
caching. Not my expertise, but I'm sure there are some Redbooks.
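
For items 1 and 2, here is a hedged sketch of the arithmetic behind B2B credit sizing and the speed step-down. The 2112-byte frame and 27 KB block are assumed sizes for illustration, not a vendor sizing rule.

# Sketch for items 1 and 2 above; sizes are illustrative assumptions.

FRAME_BITS = 2112 * 8        # one full Fibre Channel frame, in bits
RTT_US = 2 * 30 * 5.0        # 30 km each way at ~5 us/km -> 300 us round trip

def bb_credits_needed(gbps):
    """Frames that must be in flight so the sender never stalls on R_RDY."""
    frame_time_us = FRAME_BITS / (gbps * 1000)   # Gbps -> bits per microsecond
    return RTT_US / frame_time_us

print(f"B2B credits at 8 Gbps: {bb_credits_needed(8):.0f}")   # ~142
print(f"B2B credits at 2 Gbps: {bb_credits_needed(2):.0f}")   # ~36

# Item 2: the same block takes 4x longer over the 2 Gbps FCP link.
BLOCK_BITS = 27_648 * 8
for gbps in (8, 2):
    print(f"{gbps} Gbps block transfer: {BLOCK_BITS / (gbps * 1000):.0f} us")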

There is good information on PPRC activity in the RMF Type 78 records. You
may want to do some analysis of these to see how transfer rates and PPRC
write response time correlate with your DASD disconnect time.

Final comment: do you really need synchronous remote copy? If your company
requires zero data loss, then you don't get this from synchronous
replication alone. You must use the Critical=Yes option, which has its own
set of risks and challenges. If you are not using GDPS and HyperSwap for
hot failover, then synchronous is not much better than asynchronous.
Rolling disasters, transaction rollback, and options that turn off
in-flight data set recovery can all see synchronous recovery end up
with the same RPO as asynchronous.

Ron






-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Tommy Tsui
Sent: Thursday, February 15, 2018 12:41 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] DASD problem

Hi,
The distance is around 30 km. Do you know of any settings in the sysplex
environment, such as GRS and the JES2 checkpoint, that we need to be aware
of? The DASD connects via a SAN switch to the DR site on a 2 Gbps interface.
We checked with the vendor; they didn't find any problem on the SAN switch
or the DASD, so I suspect the system settings.

Alan(GMAIL)Watthey <a.watt...@gmail.com> wrote on Thursday, 15 February 2018:

Tommy,

This sounds like the PPRC links might be a bit slow or there are not
enough of them.

What do you have?  Direct DASD to DASD, via a single SAN switch, or
even cascaded?  What speeds (Gbps) are all the interfaces running at
(you can ask the switch for its ports and RMF for the DASD)?

What type of fibre are they - LX or SX?  What length are they?

Any queueing?

There are so many variables that can affect the latency.  Are there
any of the above that you can improve on?

I can't remember what IBM recommends, but 80% sounds a little high to me.
The PPRC links are only used for writes (not reads).

Regards,
Alan Watthey

-----Original Message-----
From: Tommy Tsui [mailto:tommyt...@gmail.com]
Sent: 15 February 2018 12:15 am
Subject: DASD problem

Hi all,

Our shop found that most jobs' elapsed times are prolonged with PPRC
synchronization versus without PPRC mode. Jobs run almost 4 times
faster without PPRC synchronization. Are there any parameters we need
to tune on the z/OS or disk subsystem side? We found the % disk util
in the RMF report is over 80. Any help will be appreciated. Many thanks


