Joel,

That's when you start looking at the SMF Type 42 subtype 6 records. Read
disconnect time is reported separately from write, so you can establish
whether the problem is the TrueCopy links or Sibling Pend.
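
If you want to automate that first cut, a rough sketch along these lines
works (Python). The field names and the one-millisecond threshold are
placeholder assumptions, not the actual SMF 42.6 layout -- map them from
whatever tool unloads your SMF data:

    # Triage sketch: read_disc_ms / write_disc_ms are average disconnect
    # times per interval, mapped from SMF 42.6 by your reporting tool.
    # The names and the 1.0 ms threshold are assumptions for illustration.
    def classify_disconnect(read_disc_ms, write_disc_ms, threshold_ms=1.0):
        if write_disc_ms > threshold_ms and read_disc_ms <= threshold_ms:
            # Synchronous remote copy only delays writes, so write-only
            # elongation points at the TrueCopy/SRDF links.
            return "suspect remote copy links"
        if read_disc_ms > threshold_ms:
            # Remote copy never touches reads, so elongated read
            # disconnect points at back-end contention (Sibling Pend).
            return "suspect Sibling Pend / back-end contention"
        return "disconnect looks normal"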

Ron

> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf Of
> Joel Wolpert
> Sent: Thursday, July 29, 2010 2:37 PM
> To: [email protected]
> Subject: Re: [IBM-MAIN] Any ROT for DASD Response time
> 
> Disconnect time can also reflect synchronous remote copy (SRDF). If the
> links are saturated, the wait elongates the DISC time and therefore the RT.
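> 
> A back-of-envelope illustration of that (all numbers invented): RT is
> the sum IOSQ + PEND + DISC + CONN, so whatever the saturated links add
> to DISC lands one-for-one in the RT. In Python:
> 
>     # Response time components in milliseconds; values are made up
>     # purely for illustration.
>     iosq, pend, conn = 0.1, 0.2, 0.5
>     disc_local = 0.3    # DISC with healthy replication links
>     link_delay = 2.0    # extra wait when the SRDF links saturate
> 
>     rt_healthy = iosq + pend + disc_local + conn                   # 1.1 ms
>     rt_saturated = iosq + pend + (disc_local + link_delay) + conn  # 3.1 ms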
> 
> 
> Joel Wolpert
> Performance and Capacity Planning consultant
> WEBSITE: www.perfconsultant.com
> ----- Original Message -----
> From: "Ron Hawkins" <[email protected]>
> Newsgroups: bit.listserv.ibm-main
> To: <[email protected]>
> Sent: Thursday, July 29, 2010 11:59 AM
> Subject: Re: Any ROT for DASD Response time
> 
> 
> > J,
> >
> > This is what Pat describes as "SIBLING PEND", and the symptom is easy
> > to spot because it shows up as Disconnect Time.
> >
> > I'm sure IBM and EMC have something like HDS's Performance Monitor.
> > It's a GUI with exportable data that you can use to monitor the skewed
> > activity across parity groups that leads to the elongated seek effect
> > you describe.
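> >
> > As a toy example of what that skew looks like in the exported data
> > (Python; group names and I/O rates invented), a simple max-to-mean
> > ratio is enough to flag the hot parity group:
> >
> >     # I/O rate per parity group, as you might export it from the GUI.
> >     io_by_group = {"PG-01": 1200.0, "PG-02": 150.0, "PG-03": 90.0}
> >
> >     mean = sum(io_by_group.values()) / len(io_by_group)  # 480.0
> >     skew = max(io_by_group.values()) / mean              # 2.5; 1.0 is even
> >     hot = [g for g, r in io_by_group.items() if r > 2 * mean]  # ["PG-01"]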
> >
> > Ron
> >
> >> -----Original Message-----
> >> From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf Of
> >> J Ellis
> >> Sent: Thursday, July 29, 2010 4:47 AM
> >> To: [email protected]
> >> Subject: Re: [IBM-MAIN] Any ROT for DASD Response time
> >>
> >> Also, as some very wise people on this list and in other lists/papers
> >> have pointed out: "Know thy data." Some of our worst DASD response
> >> time issues were eliminated simply by looking at the logical volume
> >> placement and the data sets thereon. Moving some DB2 indexes into
> >> different array groups, and/or off logical volumes that were
> >> splattered out towards the middle of the small spinning back-end
> >> DASD, made a great deal of difference (know your chunk sizes and how
> >> the chunks are spread across the disks in the arrays, etc.). RAID
> >> technology doesn't (at least yet) make up for bad placement of
> >> storage group volumes and the data sets on them. As others pointed
> >> out, research Dr. Artis's papers on the subject -- just my opinion
> >> and experience.
> >> (Because we've always done it this way *BANG*) Running HSM migration
> >> management/space release, along with compactor jobs on a different
> >> LPAR in the plex, against logical volumes that sat on the front of
> >> the disks in the arrays absolutely killed the hot indexes that were
> >> in the middle of the disks in the same array.
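> >>
> >> As a toy illustration of the "know your chunk sizes" point (Python;
> >> the chunk size and disk count are invented -- substitute your array's
> >> real values), simple round-robin striping maps a logical offset to an
> >> array member like this, which is why two "distant" logical volumes
> >> can still land on the same spindles:
> >>
> >>     CHUNK_BYTES = 256 * 1024    # assumed chunk (stripe) size
> >>     DISKS_IN_ARRAY = 8          # assumed array width
> >>
> >>     def disk_for_offset(offset_bytes):
> >>         # Physical disk in the array that a logical offset lands on,
> >>         # assuming plain round-robin striping with no parity rotation.
> >>         return (offset_bytes // CHUNK_BYTES) % DISKS_IN_ARRAY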
> >>
> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
