----------------------------------<snip>-------------------------

I typically give 1/3 of a 3390-3 volume for a single paging dataset,
populating the remainder of the volume with low-use datasets.

That's about 1 GB per page DS.  If you have ten of them that's only 10 GB of
local paging space.  I think that's pretty small today.  And then you'd need
another 10 to 20 GB of "low use" data sets to come close to filling the
packs.  Non-managed data sets, at that.  I'd go with full pack data sets on
mod-3 volumes or about 4GB data sets on larger volumes.
----------------------------------<unsnip>------------------------------
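The sizing arithmetic in the quote above checks out against standard 3390 geometry. A quick sketch (the geometry figures are the usual 3390 values; "1/3 of a volume" is the poster's rule of thumb):

```python
# Rough 3390-3 sizing arithmetic for the quoted rule of thumb.
BYTES_PER_TRACK = 56_664      # 3390 track capacity
TRACKS_PER_CYL = 15
CYLS_3390_3 = 3_339           # 3390 model 3

volume_bytes = CYLS_3390_3 * TRACKS_PER_CYL * BYTES_PER_TRACK
page_ds_bytes = volume_bytes // 3   # one third of the volume

print(f"3390-3 volume: {volume_bytes / 1e9:.2f} GB")
print(f"1/3-volume page data set: {page_ds_bytes / 1e9:.2f} GB")
print(f"ten such data sets: {10 * page_ds_bytes / 1e9:.1f} GB")
```

That comes to roughly 0.95 GB per page data set and about 9.5 GB for ten of them, matching the "about 1 GB" and "only 10 GB" figures above.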
That's true, Tom, but we never had anything that made use of storage above the bar, and very little that used storage above the 16M line.

----------------------------------<snip>----------------------------------

I also
plan on being ready to add at least two more page data sets of similar
size, in case an emergency shortage arises.

Yes, you might as well keep all your page data sets the same size, including
spares, if you keep them. Set PAGTOTL to 255. The cost is very small. IIRC, 96 bytes of ESQA for the PARTE for each page data set that is not
actually used.  The cost of not being able to add another may be an IPL.
--------------------------------<unsnip>---------------------------------
Setting PAGTOTL to 255 was something I always did. In 23 years at Clearing, I only had to use spares twice, because of inappropriate use of VIO datasets. (Some fool was copying SMF tapes into a VIO dataset.) Our application mix was rather I/O-bound, as opposed to storage-intensive, and IDMS just couldn't keep up sometimes.
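For anyone who hasn't done this: PAGTOTL goes in the IEASYSxx parmlib member alongside the PAGE list. A minimal sketch along these lines (the data set names are placeholders, not from this thread), assuming four local page data sets defined at IPL with room reserved to ADD more later:

```
PAGE=(SYS1.LOCAL1.PAGE,
      SYS1.LOCAL2.PAGE,
      SYS1.LOCAL3.PAGE,
      SYS1.LOCAL4.PAGE,L),
PAGTOTL=(255)
```

At 96 bytes of ESQA per unused PARTE, reserving all 255 slots costs under 25 KB, which is cheap insurance against the IPL you'd otherwise need to raise the limit.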

-----------------------------------<snip>---------------------------
Then again, if you are keeping page data sets around to add in case you need them, why not just add them?
----------------------------------<unsnip>----------------------------
Old habits die hard. We were doing our paging to RAMAC devices and trying not to affect performance by overloading the individual "drawers". We spent a lot of time talking to the storage guys at Tucson and learning the details of how RAMAC performance could be optimized. We carried much of the methodology to the 2105 when we upgraded, trying to equalize loading across the various internal strings. When you're on a budget as tight as ours was, you learn all sorts of little "wrinkles" to make the best possible use of what you have, even to the point of diminishing returns.

---------------------------------<snip>-----------------------------------

Running roughly 300 address spaces, I found that four of these page
datasets was about right, with no more than 50% utilization of any one
of them.

IIRC, four is not considered a sufficiently robust paging subsystem to deal
with spikes that occur when taking SVCDUMPs.  A minimum of six is
recommended.  And 50% is too high.  IIRC, beyond about 30% it becomes more
difficult for ASM to find contiguous free pages for efficient page-out
operations.
-------------------------------------<unsnip>-----------------------------------
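A back-of-the-envelope check on the quote above: for a fixed amount of paged-out data spread evenly, per-data-set utilization falls as 1/N, so the same load that fills four data sets to 50% fills six to about 33%, right at the suggested ceiling. A sketch (the load figure is illustrative, not from the thread):

```python
# Per-data-set utilization for a fixed paging load spread evenly
# across N equally sized local page data sets.  The load is
# expressed in units of one data set's capacity.
load_in_dataset_units = 2.0   # e.g. four data sets at 50% full

for n in (4, 6, 8):
    utilization = load_in_dataset_units / n
    print(f"{n} page data sets -> {utilization:.0%} each")
```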
With regard to SVCDUMPs, the number of those we took in a month could be counted on one hand, with fingers left over. And since they mostly occurred at night, senior management turned a deaf ear. All schedules were being met, with time left over, so they just didn't give a d***. All they cared about was "The Bottom Line" and to H*** with the details, and the future. The only times that paging rose were during overnight processing, and again, since the online systems weren't in use, management didn't care.

I know, it's a lousy way to run a shop, but when politics are allowed to rule, what can the poor peons do?

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html