In a message dated 6/13/2005 4:47:56 A.M. Central Daylight Time, [EMAIL PROTECTED] writes:
> Then along came RAID arrays.

You skip one historic event: cache in control units. Other major events that have greatly muddied the waters:

(1) The evolution of Channel Measurement Data. With the advent of S/370-XA in 1982, IBM created new measurement data - Pending, Connected, and Disconnected timings - calculated by the channel subsystem and reported in RMF data. This made real tuning possible. Over the years, as various vendors (EMC, STK, and Hitachi, e.g.) put more and more proprietary function into their compatible control units, they did not always report their controllers' internal states in quite the same way, and neither did IBM. E.g., time spent searching a cache directory on certain controllers is reported as disconnected time when in fact the path is connected. And with the advent of RAID, all bets were off as to what Connected and Disconnected time meant. IBM has recently added much new measurement data; see macros IOSCMB, IRACMB, and IRAECMB especially. And RAID controllers have greatly reduced RPS reconnect time, which used to be a large component of disconnected time, with volume-level buffers and extensive use of ECKD architecture channel programs.

(2) Software caches. Even if a control unit keeps needed data in a cache, it still takes about 3/4 of a millisecond to read in a 4K block. But if the data can be kept in virtual storage somewhere, the I/O is avoided entirely and replaced with a storage-to-storage move that takes about 1/1000 as much elapsed time. Whole catalogs and PDS directories, inter alia, can now be forced to be storage-resident. And with VLF you can write your own application to manage, in virtual storage caches, other kinds of data that vanilla z/OS would not be able to cache. (I always wondered why some developer wag did not name the main retrieval macro COFITUP. They did a wag job with Godzilla.)
And DB2 has a sophisticated method for using virtual storage buffers to hold oft-referenced data.

(3) Parallel Access Volumes. These are dynamically managed by WLM to improve I/O performance for critical applications whose data is on these volumes.

I believe I/O tuning is still possible, but it should only be undertaken for critical applications that are known to be suffering from I/O waits or contention. First make sure there is a real problem before doing any tuning.

Bill Fairchild

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

