------------------------------------<snip>------------------------------
I have discovered that we have been experiencing high disconnect time on
most of our LCUs/DASDs due to NORMAL (random) read cache misses (only a
50% hit ratio). I have read that NORMAL reads are generally not
recommended for caching and should be excluded. My question is: has
anyone here excluded NORMAL reads through the SMS storage class? After
the exclusion, did it really improve I/O performance? And how do you
handle datasets that have both random and sequential reads?
TIA.
-------------------------------------<unsnip>-----------------------------
In my experience, sequential datasets get the MOST benefit from the use
of cache, while randomly accessed files get the least. When I first
started using cache, if I got a 50% cache hit ratio, I tended to add
more cache and watch the hit ratio rise.
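To see why raising the hit ratio matters so much, the effective response
time is just a weighted average of the fast cache-hit time and the slow
back-end miss time. A minimal sketch of that arithmetic, with purely
illustrative timings (the 0.5 ms and 10 ms figures are assumptions, not
measurements from any real subsystem):

```python
def effective_response_ms(hit_ratio, hit_ms=0.5, miss_ms=10.0):
    """Weighted-average I/O response time for a given cache hit ratio.

    hit_ms and miss_ms are assumed, illustrative service times.
    """
    return hit_ratio * hit_ms + (1.0 - hit_ratio) * miss_ms

# At a 50% hit ratio, half the reads pay the full back-end cost.
print(effective_response_ms(0.50))   # 5.25 ms with these assumed timings
print(effective_response_ms(0.90))   # ~1.45 ms -- a higher hit ratio helps a lot
```

With these assumed numbers, going from a 50% to a 90% hit ratio cuts the
average response time by more than two thirds, which is why "add more
cache and watch the hit ratio rise" pays off.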
I most HIGHLY recommend that you use cache for ALL sequential read
processing. LOW rates of random access might not suffer much from being
uncached, but medium to high rates of random access may be more
adversely affected by not using cache.
You could experiment, carefully, to see the effect on your random-access
files, but I don't think there are ANY good grounds for not caching all
sequentially accessed datasets. Ditto for partitioned datasets.
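If you do decide to steer specific random-access datasets away from
cache, one common approach is to route them to a separate storage class
in the storage class ACS routine, where that class has been defined in
ISMF with cache-discouraging attributes (e.g. its millisecond response
and bias settings). A rough sketch only: the class names SCNOCACH and
SCCACHE and the PROD.RANDOM.** mask below are made up for illustration,
and you should verify the exact attribute settings for your DASD
subsystem before using anything like this.

```
 PROC STORCLAS
   FILTLIST RANDOMDS INCLUDE(PROD.RANDOM.**)    /* hypothetical mask   */
   SELECT
     WHEN (&DSN EQ &RANDOMDS)
       SET &STORCLAS = 'SCNOCACH'   /* hypothetical class defined in   */
                                    /* ISMF to discourage cache use    */
     OTHERWISE
       SET &STORCLAS = 'SCCACHE'    /* hypothetical cache-friendly     */
                                    /* default class                   */
   END
 END
```

Datasets with mixed random/sequential access are the hard case; if they
benefit from cache during sequential processing, that argues for leaving
them in the cached class.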
--
Rick
--
Remember that if you’re not the lead dog, the view never changes.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html