We have the same problem here. Both IBM and the vendor, HDS, say there is no
problem at all. Whether to use it or not is up to you...

Jousma, David <david.jou...@53.com> wrote on Friday, 23 February 2018:

> If you are running with non-IBM DASD, you might want to work with the
> vendor to look into their zHPF support on the DASD back end.  We have had it
> on for over a year, but in that year we've had a few occasions where I/O
> slows to a crawl.  Most recently it was so bad that the application
> self-terminated (a banking app that detects when its audit logs can't keep
> up and shuts down to avoid losing financial data).  We saw 800+ ms of
> connect time, with all 128 dynamic PAVs exhausted for the control unit.
> We are turning it off this weekend.  We have open tickets with both IBM and
> the other vendor.  IBM says they don't see a problem from the mainframe/OS
> perspective; the DASD vendor says they see no problems.....
>
> _________________________________________________________________
> Dave Jousma
> Manager Mainframe Engineering, Assistant Vice President
> david.jou...@53.com
> 1830 East Paris, Grand Rapids, MI  49546 MD RSCB2H
> p 616.653.8429
> f 616.653.2717
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Tommy Tsui
> Sent: Friday, February 23, 2018 8:26 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: DASD problem
>
> Hi all,
> Is there anything we need to be aware of when turning on zHPF? Any
> experience you can share, such as configuration? Thanks, all.
>
> Tommy Tsui <tommyt...@gmail.com> wrote on Wednesday, 21 February 2018:
>
> > Star
> >
> >
> > Allan Staller <allan.stal...@hcl.com> wrote on Wednesday, 21 February 2018:
> >
> >> GRS- Ring or Star?
> >>
> >> -----Original Message-----
> >> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> >> On Behalf Of Tommy Tsui
> >> Sent: Wednesday, February 21, 2018 2:33 AM
> >> To: IBM-MAIN@LISTSERV.UA.EDU
> >> Subject: Re: DASD problem
> >>
> >> We use HDS, not IBM, and we have reported this case to IBM. Performing
> >> the same operation on a monoplex LPAR, writing 28 GB of data with the
> >> IEBDG utility takes around 7 minutes, but it takes 12 minutes on a
> >> sysplex LPAR with the same DASD. The only difference we can find is
> >> high disconnect time in the RMF report.
> >>
> >> Ron Hawkins <ronjhawk...@sbcglobal.net> wrote on Wednesday, 21 February 2018:
> >>
> >> > Tommy,
> >> >
> >> > The RTD at 30 km is quite small, and the benefit of write spoofing
> >> > will be small.
> >> >
> >> > There is an option to turn on write spoofing with the FCP PPRC links
> >> > on IBM storage, but you should check with them whether it is a
> >> > benefit at small distances on your model of storage at all write
> >> > rates.
> >> >
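> >> > As a back-of-the-envelope check (illustrative numbers, not measured
> >> > in this thread): light travels in fibre at roughly 200,000 km/s,
> >> > about 5 microseconds per km one way, so
> >> >
> >> > $$ \mathrm{RTD} \approx 2 \times 30~\mathrm{km} \times 5~\mu\mathrm{s/km} = 0.3~\mathrm{ms} $$
> >> >
> >> > per protocol exchange. The distance itself contributes little; a slow
> >> > or congested PPRC link contributes far more.
> >> >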
> >> > Ron
> >> >
> >> > -----Original Message-----
> >> > From: IBM Mainframe Discussion List
> >> > [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Tommy Tsui
> >> > Sent: Tuesday, February 20, 2018 8:22 PM
> >> > To: IBM-MAIN@LISTSERV.UA.EDU
> >> > Subject: Re: [IBM-MAIN] DASD problem
> >> >
> >> > Is there any way to improve the PPRC command latency and round-trip
> >> > delay time?
> >> > Is there anything we can tune on the DASD hardware or switch side?
> >> > Anything we can tune on the z/OS side, such as BUFNO?
> >> >
> >> > Rob Schramm <rob.schr...@gmail.com> wrote on Wednesday, 21 February 2018:
> >> >
> >> > > It used to be that 20 or 25 buffers hit the I/O sweet spot.
> >> > > Maybe with faster DASD the number is different.
> >> > >
> >> > > Rob
> >> > >
> >> > > On Tue, Feb 20, 2018, 7:53 PM Tommy Tsui <tommyt...@gmail.com> wrote:
> >> > >
> >> > > > Hi Ron,
> >> > > > You are right. When I changed BUFNO to 255, the overall elapsed
> >> > > > time dropped from 12 minutes to 6 minutes. So what should I do
> >> > > > now? Change BUFNO only? What about VSAM or DB2 performance?
> >> > > >
> >> > > >
> >> > > > Ron Hawkins <ronjhawk...@sbcglobal.net> wrote on Wednesday, 21 February 2018:
> >> > > >
> >> > > > > Tommy,
> >> > > > >
> >> > > > > With PPRC, TrueCopy, or SRDF synchronous replication, the FICON
> >> > > > > and FCP speeds are independent of one another, but the
> >> > > > > stepped-down speed elongates the remote I/O.
> >> > > > >
> >> > > > > In simple terms, a block that you write from the host to the
> >> > > > > P-VOL takes 0.5 ms to transfer on 16Gb FICON, but when you do
> >> > > > > the synchronous write on 2Gb FCP to the S-VOL it takes 4 ms, or
> >> > > > > 8 times longer to transfer. This time is in addition to command
> >> > > > > latency and round-trip delay time. As described below, this
> >> > > > > impact will be less for long, chained writes because of the
> >> > > > > host/PPRC overlap.
> >> > > > >
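> >> > > > > To see where those numbers come from (an illustrative
> >> > > > > simplification that ignores protocol overhead): transfer time is
> >> > > > > roughly block size divided by effective link rate, so the same
> >> > > > > block takes
> >> > > > >
> >> > > > > $$ \frac{t_{\mathrm{2Gb\ FCP}}}{t_{\mathrm{16Gb\ FICON}}} \approx \frac{16~\mathrm{Gb/s}}{2~\mathrm{Gb/s}} = 8 $$
> >> > > > >
> >> > > > > times longer on the PPRC link, i.e. 0.5 ms becomes about 4 ms.
> >> > > > >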
> >> > > > > I'm not sure how you simulate this on your monoplex, but I
> >> > > > > assume you set up a PPRC pair to the remote site. If you are
> >> > > > > testing with BSAM or QSAM (like OLDGENER), then set SYSUT2
> >> > > > > BUFNO=1 to see the single-block impact. If you are using zHPF,
> >> > > > > I think you can vary the BUFNO or NCP to get up to 255 chained
> >> > > > > blocks.
> >> > > > >
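> >> > > > > A minimal sketch of such a test (my own example, not from this
> >> > > > > thread; dataset names, space, and DCB attributes are
> >> > > > > placeholders). Run it once with BUFNO=1 and again with
> >> > > > > BUFNO=255, then compare elapsed and disconnect time in RMF:
> >> > > > >
> >> > > > > //BUFTEST  EXEC PGM=IEBGENER
> >> > > > > //SYSPRINT DD SYSOUT=*
> >> > > > > //SYSIN    DD DUMMY
> >> > > > > //SYSUT1   DD DSN=YOUR.INPUT.DATASET,DISP=SHR
> >> > > > > //* SYSUT2 lives on a PPRC primary volume; BUFNO controls how
> >> > > > > //* many buffers QSAM uses, and so how much write chaining can
> >> > > > > //* overlap the synchronous copy.
> >> > > > > //SYSUT2   DD DSN=YOUR.PPRC.TEST.DATASET,DISP=(NEW,CATLG,DELETE),
> >> > > > > //            UNIT=SYSDA,SPACE=(CYL,(500,50),RLSE),
> >> > > > > //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,BUFNO=1)
> >> > > > >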
> >> > > > > I'm not aware of anything in GRS that adds to remote I/O
> >> > > > > disconnect time.
> >> > > > >
> >> > > > > Ron
> >> > > > >
> >> > > > > -----Original Message-----
> >> > > > > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> >> > > > > On Behalf Of Tommy Tsui
> >> > > > > Sent: Tuesday, February 20, 2018 2:42 AM
> >> > > > > To: IBM-MAIN@LISTSERV.UA.EDU
> >> > > > > Subject: Re: [IBM-MAIN] DASD problem
> >> > > > >
> >> > > > > Hi Ron,
> >> > > > > What happens if our FICON card is 16Gb and the FCP connection is
> >> > > > > 2Gb? I tried the simulation on a monoplex LPAR and the result
> >> > > > > is fine. Now we suspect GRS or some other system parameter is
> >> > > > > increasing the disconnect time.
> >> > > > > Ron Hawkins <ronjhawk...@sbcglobal.net> wrote on Thursday,
> >> > > > > 15 February 2018:
> >> > > > >
> >> > > > > > Tommy,
> >> > > > > >
> >> > > > > > This should not be a surprise. The name "Synchronous Remote
> >> > > > > > Copy" implies the overhead that you are seeing, namely the
> >> > > > > > time for the synchronous write to the remote site.
> >> > > > > >
> >> > > > > > PPRC will more than double the response time of random writes
> >> > > > > > because the host write to cache has the additional time of
> >> > > > > > controller latency, round-trip delay, and block transfer
> >> > > > > > before the write is complete. On IBM and HDS (not sure about
> >> > > > > > EMC) the impact is greater for single blocks, as chained
> >> > > > > > sequential writes have some overlap between the host write
> >> > > > > > and the synchronous write.
> >> > > > > >
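> >> > > > > > As a rough model of a single synchronous random write (my
> >> > > > > > shorthand, not a vendor formula):
> >> > > > > >
> >> > > > > > $$ t_{\mathrm{write}} \approx t_{\mathrm{local\ cache\ write}} + t_{\mathrm{command\ latency}} + t_{\mathrm{round\ trip}} + t_{\mathrm{FCP\ transfer}} $$
> >> > > > > >
> >> > > > > > Everything after the first term is replication overhead, which
> >> > > > > > is why random writes more than double while long chained
> >> > > > > > writes suffer less.
> >> > > > > >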
> >> > > > > > Some things to check:
> >> > > > > >
> >> > > > > > 1) Buffer credits on ISLs between the sites. If there are no
> >> > > > > > ISLs, then the settings on the storage host ports need to
> >> > > > > > cater for 30km B2B credits (a rough estimate follows after
> >> > > > > > this list).
> >> > > > > > 2) Channel speed step-down - If your FICON channels are 8Gb
> >> > > > > > and the FCP connections are 2Gb, then PPRC writes will take
> >> > > > > > up to four times longer to transfer. It depends on the block
> >> > > > > > size.
> >> > > > > > 3) Unbalanced ISLs - ISLs do not automatically rebalance
> >> > > > > > after one drops. The more concurrent I/O there is on an ISL,
> >> > > > > > the longer the transfer time for each PPRC write. There may
> >> > > > > > be one or more ISLs that are not being used, while others are
> >> > > > > > overloaded.
> >> > > > > > 4) Switch board connections not optimal - talk to your switch
> >> > > > > > vendor.
> >> > > > > > 5) Host adapter port connections not optimal - talk to your
> >> > > > > > storage vendor.
> >> > > > > > 6) Sysplex tuning may identify I/O that can convert from disk
> >> > > > > > to sysplex caching. Not my expertise, but I'm sure there are
> >> > > > > > some Redbooks.
> >> > > > > >
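> >> > > > > > On point 1, a common rule of thumb (my own arithmetic,
> >> > > > > > assuming full-size ~2KB frames): one frame takes about 10
> >> > > > > > microseconds to transmit at 2Gb/s, and the 30km round trip is
> >> > > > > > about 300 microseconds, so keeping a 2Gb link full needs
> >> > > > > > roughly
> >> > > > > >
> >> > > > > > $$ \mathrm{credits} \approx \frac{300~\mu\mathrm{s}}{10~\mu\mathrm{s}} = 30 $$
> >> > > > > >
> >> > > > > > buffer-to-buffer credits per port, and proportionally more at
> >> > > > > > higher link speeds.
> >> > > > > >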
> >> > > > > > There is good information on PPRC activity in the RMF Type
> >> > > > > > 78 records. You may want to do some analysis of these to see
> >> > > > > > how transfer rates and PPRC write response time correlate
> >> > > > > > with your DASD disconnect time.
> >> > > > > >
> >> > > > > > Final comment: do you really need synchronous remote copy?
> >> > > > > > If your company requires zero data loss, then you don't get
> >> > > > > > this from synchronous replication alone. You must use the
> >> > > > > > Critical=Yes option, which has its own set of risks and
> >> > > > > > challenges. If you are not using GDPS and HyperSwap for hot
> >> > > > > > failover, then synchronous is not much better than
> >> > > > > > asynchronous. Rolling disasters, transaction rollback, and
> >> > > > > > options that turn off in-flight data set recovery can all see
> >> > > > > > a synchronous setup end up with the same RPO as asynchronous.
> >> > > > > >
> >> > > > > > Ron
> >> > > > > >
> >> > > > > > -----Original Message-----
> >> > > > > > From: IBM Mainframe Discussion List
> >> > > > > > [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Tommy Tsui
> >> > > > > > Sent: Thursday, February 15, 2018 12:41 AM
> >> > > > > > To: IBM-MAIN@LISTSERV.UA.EDU
> >> > > > > > Subject: Re: [IBM-MAIN] DASD problem
> >> > > > > >
> >> > > > > > Hi,
> >> > > > > > The distance is around 30 km. Do you know of any settings in
> >> > > > > > the sysplex environment, such as GRS and the JES2 checkpoint,
> >> > > > > > that we need to be aware of? The DASD connects to the DR site
> >> > > > > > via a SAN switch over a 2Gbps interface. We checked with the
> >> > > > > > vendor and they didn't find any problem on the SAN switch or
> >> > > > > > the DASD, so I suspect the system settings.
> >> > > > > >
> >> > > > > > Alan(GMAIL)Watthey <a.watt...@gmail.com> wrote on Thursday,
> >> > > > > > 15 February 2018:
> >> > > > > >
> >> > > > > > > Tommy,
> >> > > > > > >
> >> > > > > > > This sounds like the PPRC links might be a bit slow or
> >> > > > > > > there are not enough of them.
> >> > > > > > >
> >> > > > > > > What do you have?  Direct DASD to DASD, via a single SAN
> >> > > > > > > switch, or even cascaded?  What speeds (Gbps) are all the
> >> > > > > > > interfaces running at (you can ask the switch for the
> >> > > > > > > switch ports and RMF for the DASD)?
> >> > > > > > >
> >> > > > > > > What type of fibre are they?  LX or SX?  What kind of
> >> > > > > > > length are they?
> >> > > > > > >
> >> > > > > > > Any queueing?
> >> > > > > > >
> >> > > > > > > There are so many variables that can affect the latency.
> >> > > > > > > Are there any of the above that you can improve on?
> >> > > > > > >
> >> > > > > > > I can't remember what IBM recommends, but 80% sounds a
> >> > > > > > > little high to me.  The PPRC links are only used for writes
> >> > > > > > > (not reads).
> >> > > > > > >
> >> > > > > > > Regards,
> >> > > > > > > Alan Watthey
> >> > > > > > >
> >> > > > > > > -----Original Message-----
> >> > > > > > > From: Tommy Tsui [mailto:tommyt...@gmail.com]
> >> > > > > > > Sent: 15 February 2018 12:15 am
> >> > > > > > > Subject: DASD problem
> >> > > > > > >
> >> > > > > > > Hi all,
> >> > > > > > >
> >> > > > > > > Our shop found that most job elapsed times are prolonged
> >> > > > > > > with PPRC synchronization compared with running without
> >> > > > > > > PPRC; jobs are almost 4 times faster without PPRC
> >> > > > > > > synchronization. Are there any parameters we need to tune
> >> > > > > > > on the z/OS or disk subsystem side? We found the % disk
> >> > > > > > > util in the RMF report is over 80. Any help will be
> >> > > > > > > appreciated. Many thanks.
> >> > > > > > >
> >> > > --
> >> > >
> >> > > Rob Schramm
> >> > >
> >
>

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
