Rick,

You may find that sequential IO cache benefits are something of an illusion.
While cache is involved, the benefit actually comes from the pre-fetch
process: the storage controller fetches tracks asynchronously to the host
requests and stays ahead of the next host request. Even though cache
statistics show a 100% cache hit ratio, you will find that almost every IO
had to be read from the disk. Because sequential IO must always come from
the disk, it will be one of the first things to suffer when there is
sibling pend on an Array Group.
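To make the point concrete, here is a toy model of read-ahead (not any
vendor's actual staging algorithm; the read-ahead depth and function names
are made up for illustration). The host sees a perfect hit ratio, yet the
disk still services every track:

```python
# Toy model of sequential prefetch: the controller stages upcoming
# tracks before the host asks for them, so the host sees 100% cache
# hits even though every track was read from disk exactly once.

def run_sequential(tracks, readahead=2):
    cache = set()
    disk_reads = 0
    cache_hits = 0
    for t in range(tracks):
        # Controller stays ahead of the host: stage the next tracks.
        for ahead in range(t, min(t + readahead + 1, tracks)):
            if ahead not in cache:
                cache.add(ahead)
                disk_reads += 1   # staged from disk, asynchronously
        # By the time the host request arrives, the track is in cache.
        if t in cache:
            cache_hits += 1
    return cache_hits, disk_reads

hits, disk = run_sequential(1000)
print(f"host cache hits: {hits}/1000, tracks read from disk: {disk}")
```

Every host IO counts as a cache hit, but the disk arms did all the work -
which is why sequential "hits" evaporate the moment the Array Group is busy.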

The best cache candidates I have found are actually random IO. The first
controller I used was a 3880-23 with 8MB of cache and volumes like SYSRES,
Checkpoint1, MIM and RACF improved throughput radically. 350 IOPS for a
controller was unheard of at that time.

The success of random IO depends on the locality of reference of the IO
requests. I found that VSAM files with IO clustered around particular keys -
new, active customers versus old, dormant customers, or certain company
phone numbers - got excellent benefit from cache, while databases with
hashed keys, like IMS Fast Path and CA-IDMS, were nasty cache polluters.
YMMV.

Back to Jason's original idea of excluding some IO from cache; I'm afraid
you're out of luck. At best you can use DCME with MSR in SMS to influence
how some controllers retain or discard an IO once it is in cache, but in
all vendor offerings - HDS, IBM, SUN and EMC - everything must go through
cache. Inhibit Cache Load and Bypass Cache do not do what they say.

Jason, I'd be interested in the paper that says NORMAL read IO is not
recommended for cache; in most shops I have analyzed, random read IO
represents more than 80% of all cache hits - it is what cache was designed
for. Your references may need updating.

Ron



> ------------------------------------<snip>------------------------------
> I have discovered we have been experiencing high disconnect time to most
> of our LCUs/DASD due to NORMAL/RANDOM read cache misses (only a 50% hit
> ratio). I have read somewhere that NORMAL read is normally not
> recommended for caching and was suggested to be excluded. My question is:
> has anyone here implemented this, excluding NORMAL read through the SMS
> storage class? After the exclusion, does it really improve IO
> performance? How do you handle files that have both normal and sequential
> reads? TIA.
> -------------------------------------<unsnip>-----------------------------
> In my experience, sequential datasets get the MOST benefit from the use
> of cache, while randomly accessed files get the lowest benefit. When I
> first started using cache, if I got a 50% cache hit ratio, I tended to
> add more cache and watch the hit ratio rise.
> 
> I most HIGHLY recommend that you use cache for ALL sequential read
> processing. LOW rates of random access might not suffer too much from
> being uncached, but medium to high rates of random access might be more
> adversely affected by not using cache.
> 
> You could experiment, carefully, for the effects on your random access
> files, but I don't think there are ANY good grounds for not caching all
> sequential access datasets. Ditto for partitioned datasets.
> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
