Kees Vernooy wrote on 11/02/2006 10:06:26 AM:

> SAD has been optimized in the past to dump only what is needed, but you
> must count on a substantial part of CS plus what it needs from the page
> datasets. I would suggest at least 20 GB in your situation. There are
> also several options to speed up SAD to DASD by using parallelism.

Left to its own preferences, SADMP will dump all of main storage plus
things like LSQA and SWA from auxiliary storage.  All-zero pages of
storage are summarized rather than dumped as distinct records, so SADMPs
taken early in an IPL will be much smaller than those taken after the
full workload is in flight.  If some existing dumps were written to tape
from this LPAR while a lot of system activity was going on, their size
would be a better estimate than the 20 GB you proposed.  Otherwise,
20 GB wouldn't be a bad starting point.

Multi-volume SADMP data sets are the recommended targets for SADMP today.
Extended-format data sets broke the 64K-tracks-per-volume barrier in
their earliest release, and their use of BLTs rather than TTRs allows
them to hold more blocks per volume than older programs can address, so
we tend to recommend their use.  If you can be generous with the volume
count and careful to keep the paths to each volume independent, you'll
see a significant reduction in the time needed to record a SADMP and get
back into production.
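
As a rough illustration only, allocating a multi-volume extended-format
data set looks something like the JCL below.  The data set name, volume
serials, volume count, and space figures are all placeholders, and the
DCB attributes shown are not guaranteed to match what SADMP requires --
in practice you would let the supplied AMDSADDD exec allocate and
initialize the dump data set rather than hand-code the allocation:

//ALLOCDMP JOB ...
//* Hypothetical sketch: a 3-volume extended-format data set.
//* DSNTYPE=EXTREQ requests extended format; VOL=(,,,3,SER=...)
//* supplies the volume count and (placeholder) serials.
//ALLOC    EXEC PGM=IEFBR14
//SADMP    DD  DSN=SYS1.SADMP,DISP=(NEW,CATLG),
//             DSNTYPE=EXTREQ,DSORG=PS,
//             UNIT=(3390,3),
//             VOL=(,,,3,SER=(DMP001,DMP002,DMP003)),
//             SPACE=(CYL,(3000,300))

Again, AMDSADDD is the supported path; it sets the correct attributes
and initializes the data set so SADMP will accept it.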

Bob Wright - MVS Service Aids
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html