The purpose of cache is to reduce the time elapsed between issuing a
read request and having data to process. If the data to be read is in
cache because someone just read or wrote it, or because the controller
was doing sequential read-ahead, I do not care: I did not have to wait
on the slow DASD to position and transfer the data. That is a valid read
hit in my book.
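To make that point concrete, here is a minimal Python sketch of a read-through cache (the latency numbers are invented for illustration): any read served from cache skips the simulated DASD access, regardless of whether the block got there via an earlier read, a write, or read-ahead.

```python
# Toy read-through cache: a hit skips the simulated "DASD" access,
# no matter how the data originally got into cache.
SLOW_DASD_MS = 20.0   # assumed device position + transfer time
CACHE_HIT_MS = 0.5    # assumed cache transfer time

cache = {}            # block number -> data
elapsed_ms = 0.0

def read(block):
    """Return data for a block, accumulating simulated elapsed time."""
    global elapsed_ms
    if block in cache:              # valid read hit: no DASD wait
        elapsed_ms += CACHE_HIT_MS
    else:                           # miss: wait on the device
        elapsed_ms += SLOW_DASD_MS
        cache[block] = f"data-{block}"
    return cache[block]

def write(block, data):
    """A write also populates the cache, so a later read of the
    same block is a hit."""
    cache[block] = data

write(7, "payload")
read(7)                 # hit: the write put it in cache
read(8)                 # miss: first touch goes to the device
read(8)                 # hit: the earlier read put it in cache
```

The three reads cost 0.5 + 20.0 + 0.5 = 21.0 simulated ms; without the cache they would have cost 60.0.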

Dennis Roach
GHG Corporation
Lockheed Martin Mission Services
Flight Design and Operations Contract
Address:
   2100 Space Park Drive 
   LM-15-4BH
   Houston, Texas 77058
Mail:
   P.O. Box 58487
   Mail Code H4C
   Houston, Texas 77258
Phone:
   Voice:  (281)336-5027
   Cell:   (713)591-1059
   Fax:    (281)336-5410
E-Mail:  [email protected]

All opinions expressed by me are mine and may not agree with my employer
or any person, company, or thing, living or dead, on or near this or any
other planet, moon, asteroid, or other spatial object, natural or
manufactured, since the beginning of time.

> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[email protected]] On
> Behalf Of Ron Hawkins
> Sent: Wednesday, February 18, 2009 12:58 PM
> To: [email protected]
> Subject: Re: High DASD disconnect time due to RANDOM READ
> 
> Rick,
> 
> You may find that sequential IO cache benefits are something of an
> illusion. While cache is involved, the benefit actually comes from the
> pre-fetch process, where the storage controller fetches tracks
> asynchronously to the host requests and stays ahead of the next host
> request. Even though cache statistics show a 100% cache hit ratio, you
> will find that almost every IO had to be read from the disk. Because
> sequential IO must always come from the disk, it will be one of the
> first things to suffer when there is sibling pend on an Array Group.
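Ron's pre-fetch point can be sketched with a toy model in Python (the track counts and read-ahead depth are invented): because staging stays ahead of the host, the host sees a 100% hit ratio, yet every track is still physically read from the disk.

```python
# Toy model of sequential pre-fetch: the controller stages tracks from
# disk ahead of the host, so the host sees only hits even though every
# track is physically read from the disk exactly once.
READ_AHEAD = 2                    # tracks staged ahead of the host

cache = set()
disk_reads = 0                    # tracks actually read from disk
host_hits = 0
host_ios = 0

def stage(track):
    """Simulated asynchronous pre-fetch of one track from disk."""
    global disk_reads
    if track not in cache:
        cache.add(track)
        disk_reads += 1

for track in range(100):          # host reads tracks 0..99 sequentially
    # controller keeps READ_AHEAD tracks staged in front of the host
    for ahead in range(track, track + READ_AHEAD + 1):
        stage(ahead)
    host_ios += 1
    if track in cache:            # always true once staging keeps up
        host_hits += 1

hit_ratio = host_hits / host_ios  # 1.0 -- "100% cache hit"
# yet disk_reads is 102: every track (plus read-ahead) came off the disk
```

The "hits" here measure only that staging kept ahead of the host; the physical disk did all the work, which is why sequential IO suffers first under sibling pend.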
> 
> The best cache candidates I have found are actually random IO. The
> first controller I used was a 3880-23 with 8MB of cache, and volumes
> like SYSRES, Checkpoint1, MIM, and RACF improved throughput radically.
> 350 IOPS for a controller was unheard of at that time.
> 
> The success of random IO depends on the locality of reference of the
> IO requests. I found that VSAM files with clusters of IO around
> particular keys - new, active customers vs. old, dormant customers, or
> some company phone numbers - get excellent benefit from cache, while
> databases with hashed keys like IMS Fast Path and CA-IDMS were nasty
> cache polluters. YMMV.
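The locality-of-reference effect can be illustrated with a small LRU cache simulation in Python (the key distributions, cache size, and hot-set size are invented): access clustered around a hot set of keys hits in cache far more often than uniformly hashed access over the same keyspace.

```python
import random
from collections import OrderedDict

def lru_hit_ratio(requests, cache_size):
    """Hit ratio of a simple LRU cache over a stream of key requests."""
    cache = OrderedDict()
    hits = 0
    for key in requests:
        if key in cache:
            hits += 1
            cache.move_to_end(key)          # mark as most recently used
        else:
            cache[key] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(requests)

rng = random.Random(42)
keyspace, n = 10_000, 50_000

# Clustered access: most IO lands on a small set of "active" keys.
hot = list(range(100))
clustered = [rng.choice(hot) if rng.random() < 0.9
             else rng.randrange(keyspace) for _ in range(n)]

# Hashed access: requests spread uniformly over the whole keyspace.
hashed = [rng.randrange(keyspace) for _ in range(n)]

# With a 500-entry cache, clustered access hits far more often.
print(lru_hit_ratio(clustered, 500))   # roughly 0.9
print(lru_hit_ratio(hashed, 500))      # roughly 0.05
```

The hashed stream's hit ratio is about cache_size / keyspace, which is why hashed-key databases pollute the cache without benefiting from it.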
> 
> Back to Jason's original idea of excluding some IO from cache: I'm
> afraid you're out of luck. At best you can use DCME with MSR in SMS to
> influence how some controllers will retain or discard an IO once it is
> in cache, but in all vendor offerings - HDS, IBM, SUN, and EMC -
> everything must go through cache. Inhibit Cache Load and Bypass Cache
> do not do what they say.
> 
> Jason, I'd be interested in the paper that says NORMAL read IO is not
> recommended for cache, as in most shops I have analyzed, random read IO
> represents greater than 80% of all cache hits - it is what cache was
> designed for. Your references may need updating.
> 
> Ron
> 
> 
> 
> > ---------------------------------<snip>---------------------------------
> > I have discovered we have been experiencing high disconnect time on
> > most of our LCUs/DASDs due to NORMAL/RANDOM read CACHE misses (only a
> > 50% hit ratio). I have read somewhere that NORMAL read is normally not
> > recommended for CACHING and was suggested to be excluded. My question
> > is: has anyone here implemented this exclusion of NORMAL reads through
> > the SMS storage class? After the exclusion, does it really improve the
> > IO performance? How do you handle files that have both normal and
> > sequential reads? TIA.
> > --------------------------------<unsnip>---------------------------------
> > In my experience, sequential datasets get the MOST benefit from the
> > use of cache, while randomly accessed files get the lowest benefit.
> > When I first started using cache, if I got a 50% cache hit ratio, I
> > tended to add more cache and watch the hit ratio rise.
> >
> > I most HIGHLY recommend that you use cache for ALL sequential read
> > processing. LOW rates of random access might not suffer too much from
> > being uncached, but medium to high rates of random access might be
> > more adversely affected by not using cache.
> >
> > You could experiment, carefully, to see the effects on your random
> > access files, but I don't think there are ANY good grounds for not
> > caching all sequential access datasets. Ditto for partitioned
> > datasets.
> 
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html
