----------------------------------<snip>-----------------------------
You may find that sequential IO cache benefits are something of an illusion. While the cache is involved, the benefit actually comes from the pre-fetch process: the storage controller fetches tracks asynchronously to the host requests and stays ahead of the next host request. Even though the cache statistics show a 100% cache hit rate, you will find that almost every IO had to be read from the disk. Because sequential IO must always come from the disk, it will be one of the first things to suffer when there is sibling pend on an Array Group.
---------------------------------<unsnip>------------------------------
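The pre-fetch effect described above can be modeled in a few lines of Python. This is only a sketch with made-up names and a fixed stage-ahead depth, not any real controller's algorithm:

```python
# Minimal model of controller pre-fetch on a sequential read stream:
# the host sees a 100% cache hit rate, yet every track is still read
# from disk exactly once by the controller's stage-ahead task.

def sequential_stream(tracks, stage_ahead=4):
    cache = set()      # tracks currently staged in cache
    disk_reads = 0     # physical reads off the array
    cache_hits = 0     # what the cache statistics would report
    next_stage = 0
    for t in range(tracks):
        # the controller stages tracks asynchronously, ahead of the host
        while next_stage < tracks and next_stage <= t + stage_ahead:
            cache.add(next_stage)
            disk_reads += 1
            next_stage += 1
        if t in cache:
            cache_hits += 1
    return cache_hits, disk_reads

hits, reads = sequential_stream(1000)
assert hits == 1000 and reads == 1000   # "100% hit", all from disk
```

The point of the toy model is exactly the quoted observation: the hit counter and the physical-read counter both end up at 100% of the IOs.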
We tested with a pair of purpose-built programs: first on a 3990-6 controller, then on RAMAC II and RAMAC III, and again on our SHARK box, with substantially the same results each time.

The first program re-read the same 100-cylinder sequential dataset over and over, for a total of 100 passes. The RAMAC and SHARK tests weren't complete, 'cuz we couldn't "turn off the cache" there, but on the 3990 the results were stunning: enabling the cache cut elapsed time by 65%, and the device response time, as shown by RMF, was halved.
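For a rough flat-file analogue of that first program, a Python sketch like the one below times repeated sequential passes. The file size and pass count are scaled way down from the original 100 cylinders and 100 passes:

```python
# Re-read the same file sequentially, start to finish, for a number
# of passes, and time the whole run.
import os
import tempfile
import time

def sequential_reread(path, passes, bufsize=64 * 1024):
    start = time.perf_counter()
    for _ in range(passes):
        with open(path, "rb") as f:
            while f.read(bufsize):
                pass
    return time.perf_counter() - start

# build a scratch file as a stand-in dataset, then run a short demo
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(1024 * 1024))   # 1 MiB of filler
    path = tf.name
elapsed = sequential_reread(path, passes=5)
os.unlink(path)
```

On modern systems the OS page cache plays much the role the 3990 cache played in the original test, so passes after the first come back far faster.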

The second program used a random-number generator to select records from a DIRECT dataset, using BDAM access. Improvement was noticeable with cache enabled, but nowhere near as surprising as the results from our sequential dataset tests.
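A scaled-down sketch of that second program's access pattern, with illustrative record size and counts (BDAM itself, of course, addresses records by relative record number against a DIRECT dataset):

```python
# Random reads of fixed-length records by relative record number,
# akin to BDAM direct access: offset = RRN * record size.
import os
import random
import tempfile

RECSZ = 512          # illustrative; real half-track records are far larger

def read_record(f, rrn):
    f.seek(rrn * RECSZ)
    return f.read(RECSZ)

# preload a scratch dataset: RRN in the first 6 bytes, random filler after
with tempfile.NamedTemporaryFile(delete=False) as tf:
    nrecs = 100
    for rrn in range(nrecs):
        tf.write(str(rrn).zfill(6).encode() + os.urandom(RECSZ - 6))
    path = tf.name

rng = random.Random(42)
checked = 0
with open(path, "rb") as f:
    for _ in range(20):
        rrn = rng.randrange(nrecs)
        rec = read_record(f, rrn)
        assert rec[:6] == str(rrn).zfill(6).encode()
        checked += 1
os.unlink(path)
```

With no sequential pattern to pre-fetch against, a cache can only help on re-reads of recently touched records, which is why the random test gained less than the sequential one.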

In both sets of tests the datasets were preloaded with 100 cylinders of half-track records. IEBDG was used to create records with the relative record number in the first 6 bytes and random strings in the remainder of the record.
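That record layout is easy to mock up. The sketch below uses an illustrative 512-byte record (real half-track records are far larger) with the zero-padded RRN in the first 6 bytes, as IEBDG produced:

```python
# Mock-up of the IEBDG-style test records: relative record number,
# zero-padded to 6 bytes, followed by random filler.
import os

def make_record(rrn, recsz=512):
    head = str(rrn).zfill(6).encode("ascii")   # 6-byte RRN field
    return head + os.urandom(recsz - len(head))

recs = [make_record(i) for i in range(10)]
assert all(len(r) == 512 for r in recs)
assert recs[3][:6] == b"000003"
```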

Subsequent experience showed very good improvements in our IDMS database performance, though since we used several small areas for each database, spread across multiple volumes, locality of reference was fairly high. We actually reached a point where the IDMS CV was inhibited because it couldn't get LOG records written out fast enough. As you note, RACF and JES2 checkpoint performance gained amazing improvements, again due to high locality of reference.

-----------------------------------------<snip>-------------------------------
Jason, I'd be interested in the paper that says NORMAL read IO is not recommended for cache; in most shops I have analyzed, random read IO represents greater than 80% of all cache hits - it is what cache was designed for. Your references may need updating.
-----------------------------------------<unsnip>-------------------------------
I'd like to see that as well.

--
Rick
--
Remember that if you’re not the lead dog, the view never changes.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html