This is a performance issue. So that ASM doesn't have to care about free space, pages are distributed (roughly equally) across all available page datasets. If the algorithm were to take the available space into account: 1) there would be the additional overhead of keeping track of it (probably minuscule per I/O, but imagine paging rates of hundreds or even thousands per second; this actually happened in "the good old days"); 2) the concentration of pages in the larger page dataset would create a "hot spot" in the aux storage subsystem and give uneven performance, depending on where your stolen pages were.
<snip> Wouldn't it make more sense if "distributed equally" were defined as a percentage of available space rather than number of pages? </snip>

All that would do is accommodate human error when the page datasets are not sized equally!
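To make the trade-off concrete, here is a minimal C sketch of the two policies. This is purely illustrative: ASM's real slot-selection code looks nothing like this, and the structure and function names are made up. The round-robin picker needs no per-dataset state beyond a cursor, while the space-weighted picker must scan free-space counts on every page-out and steers nearly all I/O at the largest dataset until occupancies even out:

    #include <stddef.h>

    /* Hypothetical descriptor for one local page dataset. */
    struct page_ds {
        size_t total_slots;
        size_t used_slots;
    };

    /* Round-robin selection, roughly the behavior described above:
     * each page-out goes to the next dataset in turn, so no
     * free-space bookkeeping is needed and I/O spreads evenly
     * across the devices. */
    size_t pick_round_robin(size_t ndatasets, size_t *cursor)
    {
        size_t ds = *cursor;
        *cursor = (*cursor + 1) % ndatasets;
        return ds;
    }

    /* Space-weighted alternative (the suggestion being argued
     * against): always target the dataset with the most free slots.
     * Every call scans the current counts, and until the sizes
     * converge almost every page-out lands on the one oversized
     * dataset, making that device a hot spot. */
    size_t pick_most_free(const struct page_ds ds[], size_t ndatasets)
    {
        size_t best = 0;
        for (size_t i = 1; i < ndatasets; i++) {
            size_t free_i = ds[i].total_slots - ds[i].used_slots;
            size_t free_b = ds[best].total_slots - ds[best].used_slots;
            if (free_i > free_b)
                best = i;
        }
        return best;
    }

At hundreds or thousands of page-outs per second, even that small scan (plus keeping the counts current) is overhead the round-robin scheme avoids entirely.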

