Re: DASD problem

2018-02-21 Thread Allan Staller
GRS - Ring or Star?

Re: DASD problem

2018-02-21 Thread Tommy Tsui
After I switched zHPF to NO, the elapsed time dropped from 12 minutes to 7 minutes. How come? Is it that the switch or director does not support concurrent I/O with zHPF?

Re: DASD problem

2018-02-21 Thread Tommy Tsui
We use HDS storage, not IBM. We reported this case to IBM and ran the same operation on a monoplex LPAR: writing 28 GB of data with the IEBDG utility takes about 7 minutes there, but 12 minutes on the sysplex LPAR with the same DASD. The only anomaly we can find is high disconnect time in the RMF report.
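
A rough sketch of the effective write rates implied by those elapsed times, assuming "28 GB" means 28 x 10^9 bytes (only the 7-minute and 12-minute figures come from the post above):

```python
# Effective write rate implied by the reported elapsed times.
# Assumes "28 GB" means 28 * 10**9 bytes; adjust if it is really GiB.
DATA_BYTES = 28 * 10**9

def mb_per_sec(elapsed_minutes: float) -> float:
    """Average write rate in MB/s for the given elapsed time."""
    return DATA_BYTES / (elapsed_minutes * 60) / 10**6

print(f"monoplex LPAR  (7 min): {mb_per_sec(7):.0f} MB/s")   # ~67 MB/s
print(f"sysplex LPAR  (12 min): {mb_per_sec(12):.0f} MB/s")  # ~39 MB/s
```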

Re: DASD problem

2018-02-20 Thread Ron hawkins
Tommy, The round-trip delay (RTD) at 30 km is quite small, so the benefit of write spoofing will be small. There is an option to turn on write spoofing with the FCP PPRC links on IBM storage, but you should check with IBM that it is a benefit at such a short distance, on your model of storage, at all write rates. Ron

Re: DASD problem

2018-02-20 Thread Ron hawkins
Rob, The sweet spot for the Basic Access Methods (BAM) and half-track blocking used to be 16 buffers. It is much larger with zHPF. BSAM and QSAM are limited to a chain length of 240 KiB (please correct me if I remember incorrectly), so the sweet spot used to be FLOOR(240 KiB / blocksize) * 2. For a 27998-byte blocksize that is FLOOR(245760 / 27998) = 8 blocks per chain, i.e. 16 buffers.
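
A minimal sketch of that rule of thumb, assuming the 240 KiB chain limit quoted above (check the DFSMS documentation for your release before relying on it):

```python
import math

CHAIN_LIMIT_BYTES = 240 * 1024   # BSAM/QSAM chain length quoted above

def sweet_spot_buffers(blocksize: int) -> int:
    """FLOOR(240 KiB / blocksize) * 2: blocks per chain, doubled so one
    chain can be filled while the previous one is still being written."""
    return math.floor(CHAIN_LIMIT_BYTES / blocksize) * 2

print(sweet_spot_buffers(27998))   # half-track blocking on 3390 -> 16
print(sweet_spot_buffers(32760))   # maximum blocksize -> 14
```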

Re: DASD problem

2018-02-20 Thread Ron hawkins
Tommy, That's a really big question. You're talking about a complete batch tuning exercise. The increased buffering is an old tuning hack, but with synchronous replication, the primary benefit is for sequential writes where you can achieve the Host/RIO overlap. It works equally well for the
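
A toy model of that host/remote-I/O overlap for a sequential write job; the per-chain timings below are illustrative assumptions, not measurements from this thread:

```python
# Toy pipeline model of sequential writes under synchronous replication.
# All per-chain timings are illustrative placeholders.
CHAINS = 1000              # channel-program chains to write
HOST_MS_PER_CHAIN = 3.0    # host time to fill buffers and start the I/O
REMOTE_MS_PER_CHAIN = 4.0  # added wait for the synchronous remote copy

def elapsed_single_buffered_s() -> float:
    """Too few buffers: every chain pays host time plus remote-copy time."""
    return CHAINS * (HOST_MS_PER_CHAIN + REMOTE_MS_PER_CHAIN) / 1000

def elapsed_overlapped_s() -> float:
    """Enough buffers to overlap the two stages: the slower one dominates."""
    return (HOST_MS_PER_CHAIN
            + CHAINS * max(HOST_MS_PER_CHAIN, REMOTE_MS_PER_CHAIN)) / 1000

print(f"no overlap:   {elapsed_single_buffered_s():.1f} s")  # 7.0 s
print(f"with overlap: {elapsed_overlapped_s():.1f} s")       # ~4.0 s
```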

Re: DASD problem

2018-02-20 Thread Tommy Tsui
Is there any way to improve the PPRC command latency and round-trip delay time? Is there anything we can tune on the DASD hardware or switch side? Anything we can tune on the OS side? BUFNO?

Re: DASD problem

2018-02-20 Thread Rob Schramm
It used to be 20 or 25 buffers to establish the I/O sweet spot. Maybe with the faster DASD the amount is different. Rob

Re: DASD problem

2018-02-20 Thread Tommy Tsui
Hi Ron, You are right. When I changed BUFNO to 255, the overall elapsed time dropped from 12 minutes to 6 minutes. So what can I do now? Change BUFNO only? How about VSAM or DB2 performance?

Re: DASD problem

2018-02-20 Thread Ron hawkins
Tommy, With PPRC, TrueCopy, or SRDF synchronous, the FICON and FCP speeds are independent of one another, but the stepped-down speed elongates the remote I/O. In simple terms, a block that you write from the host to the P-VOL takes 0.5 ms to transfer on 16 Gb FICON, but then you do the synchronous
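
A rough illustration of that step-down, plus the distance round trip that gets added to every synchronous write. The 960 KiB payload is an assumption chosen so that the 16 Gb case lands near the 0.5 ms figure above, and ~5 microseconds per km is a typical figure for light in fibre:

```python
# Serialization time for one transfer at different link speeds, plus the
# propagation round trip for the 30 km distance mentioned in this thread.
# Payload size and the 5 us/km figure are assumptions for illustration.
PAYLOAD_BYTES = 960 * 1024
ONE_WAY_US_PER_KM = 5.0
DISTANCE_KM = 30

def transfer_ms(gbits_per_sec: float) -> float:
    """Time to push the payload over a link of the given speed."""
    return PAYLOAD_BYTES * 8 / (gbits_per_sec * 1e9) * 1000

rtd_ms = 2 * DISTANCE_KM * ONE_WAY_US_PER_KM / 1000

print(f"16 Gb FICON transfer: {transfer_ms(16):.2f} ms")  # ~0.49 ms
print(f" 2 Gb PPRC transfer : {transfer_ms(2):.2f} ms")   # ~3.93 ms
print(f"30 km round trip    : {rtd_ms:.2f} ms")           # ~0.30 ms
```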

Re: DASD problem

2018-02-20 Thread R.S.
A switch can relieve the limitation, because it is really a limitation of the SFP (the electro-optical converter). You can replace the SFPs in the switch, so your new CPC is connected to a 16 or even 32 Gbps SFP in the switch while your old CU is still connected to a 4 Gbps SFP, which can negotiate down as far as 1 Gbps. And

Re: DASD problem

2018-02-20 Thread Mike Schwab
See printed pages 63-64 of https://www.redbooks.ibm.com/redbooks/pdfs/sg245444.pdf. FICON Express16 will negotiate down to 8 or 4 Gbps. FICON Express8 will negotiate down to 4 or 2 Gbps. You will need to upgrade the DASD FICON cards or downgrade the mainframe FICON Express cards. I don't know if there is a

Re: DASD problem

2018-02-20 Thread Tommy Tsui
Hi Ron, What happens if our FICON card is 16 Gb and the FCP connection is 2 Gb? I tried the same simulation on a monoplex LPAR and the result is fine, so now we suspect GRS or some other system parameter is increasing the disconnect time.

Re: DASD problem

2018-02-15 Thread John Eells
Disclaimer: I am not a performance expert, so take this with a large grain of salt. I agree with what Ron wrote: That synchronously replicated disk I/O write response times are longer than those for volumes that are not replicated is not surprising. For basic PPRC it will be higher to start

Re: DASD problem

2018-02-15 Thread Ron hawkins
Tommy, This should not be a surprise. The name "Synchronous Remote Copy" implies the overhead that you are seeing, namely the time for the synchronous write to the remote site. PPRC will more than double the response time of random writes because the host write to cache has the
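
A back-of-the-envelope sketch of why a synchronous write costs so much more than an unreplicated write hit: it pays the local cache write, the fibre round trip, and the secondary's acceptance of the data. All numbers below are assumed for illustration, not measurements from this configuration:

```python
# Illustrative synchronous-write response-time breakdown (values assumed).
LOCAL_WRITE_MS = 0.3     # write hit into the local control-unit cache
REMOTE_WRITE_MS = 0.3    # secondary control unit accepting the write
DISTANCE_KM = 30
RTD_MS = 2 * DISTANCE_KM * 5e-3   # ~5 us per km each way over fibre

unreplicated_ms = LOCAL_WRITE_MS
pprc_sync_ms = LOCAL_WRITE_MS + RTD_MS + REMOTE_WRITE_MS

print(f"unreplicated write: {unreplicated_ms:.2f} ms")
print(f"PPRC sync write   : {pprc_sync_ms:.2f} ms "
      f"({pprc_sync_ms / unreplicated_ms:.1f}x)")
```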

Re: DASD problem

2018-02-15 Thread Tommy Tsui
Hi, The distance is around 30 km. Do you know of any settings in the sysplex environment, such as GRS and the JES2 checkpoint, that we need to be aware of? The DASD goes via a SAN switch to the DR site over a 2 Gbps interface. We checked with the vendor and they didn't find any problem on the SAN switch or the DASD, so I suspect the system settings

Re: DASD problem

2018-02-14 Thread Alan(GMAIL)Watthey
Tommy, This sounds like the PPRC links might be a bit slow or there are not enough of them. What do you have? Direct DASD to DASD, or via a single SAN switch, or even cascaded? What speeds (Gbps) are all the interfaces running at (you can ask the switch about the switch ports and RMF about the DASD)?

Re: DASD problem

2018-02-14 Thread Tommy Tsui
We can't use XRC mode because of the RTO and RPO requirements; all DASD must be in sync mode.

Re: DASD problem

2018-02-14 Thread Jesse 1 Robinson
When we started mirroring for DR around 2000, we opted for XRC (now z/OS Global Mirror) over PPRC because of possible I/O impact. XRC was asynchronous, so mirroring might get behind but not impact production. J.O. Skip Robinson, Southern California Edison Company