> Think of a standalone dump written striped to - say - 5 volumes. Each 
> volume has a data set in format FBS, but only one of the volumes can
> have a short record. SAdump knows that, and IPCS knows it, too. The 
> utilities don't. So assume that you took a complete sadump to 5 
> volumes and the short record happens to be on the first volume. Then 
> you use a utility (ICEGENER is my favourite) to copy it somewhere else.
> You end up with a severely truncated sadump. One fifth, to be exact.
> IPCS will read the truncated dump to the best of its abilities, but 
> you will get all kinds of 'storage not available' warnings when 
> looking at the dump.
> 
> Last time a customer sent me an sadump, it had 27000cyl. I got all 
> kinds of warnings and got lucky in that the sadump messages were 
> clearly truncated and didn't show the 'successfully finished' 
> message. It turned out that the wrong utility was used for copying, 
> and the actual dump had 63000 cyls. That became visible when IPCS 
> COPYDUMP was used for copying. IPCS knows that a striped sadump can 
> have the short record "earlier".

  SAdump never writes short blocks.  When it wants to write a block
which is not full, it pads the block with dummy records (which IPCS
knows to ignore).

> I am somewhat at a loss to understand how some of the problems you are 
> detailing happened.  The only way it could have would be with an 
> ill-behaving user-written program or process.

>If I remember this correctly (and I am on shaky ground here), sadump 
>writes a 'special striping', understood fully only by IPCS. Mind you, 
>sadump is a standalone application that does not use standard z/OS 
>services because they are not available. Either here in old posts or even 
>somewhere in the docs the behaviour I detailed is described, including the 
>warning NOT to use standard utilities when copying a 'striped' (multi 
>DASD volume) sadump. Suffice it to say, the full 63000 cyls (from my recent 
>customer) could only be copied when the customer used IPCS COPYDUMP. I was 
>not told how they copied it when they sent me the 27000 cyls.

>Just thought that a reminder about sadump was in order. After all, who 
>would want to take an sadump only to be told by support that the necessary 
>data are not in the dump?

  SAdump does not support data sets whose stripe count is greater than 1.
For multivolume data sets, sadump writes to all of the volumes 
concurrently, but not in the same way that DFSMS does for a striped 
data set.
 
  We recommend that a multivolume sadump data set always be 
copied using COPYDUMP before doing further processing, for two reasons:

1.  COPYDUMP OPENs all of the volumes concurrently, and merges them
  back into an approximation of the logical dumping order.  This 
  reduces the number of storage map entries which IPCS will need to 
  create in the dump directory, and improves IPCS performance.

2.  If the sadump failed to complete, in a manner which did not
   allow it to close the data set extent on each volume, IPCS
   dump initialization or a copying utility will most likely
   terminate when it encounters an error at the end of the data
   dumped on the first volume, or copy residual data from 
   a prior dump.  COPYDUMP will copy all of the relevant data 
   from each volume.
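  For readers who have not run COPYDUMP in batch before, the job below is 
a rough sketch of one common way to do it, via the TSO/E terminal monitor 
program.  All data set names, the dump directory, and the space figures 
are made-up placeholders, not taken from the case discussed above; check 
the IPCS Commands book for the full COPYDUMP option list before relying 
on this.

```jcl
//COPYDMP  JOB (ACCT),'MERGE SADUMP',CLASS=A,MSGCLASS=H
//* Run IPCS COPYDUMP under the TSO/E batch TMP (IKJEFT01).
//* All DSNs below are hypothetical examples.
//IPCS     EXEC PGM=IKJEFT01,REGION=0M
//IPCSDDIR DD  DISP=SHR,DSN=MYUSER.DUMP.DIR       existing dump directory
//SADUMP   DD  DISP=SHR,DSN=SYS1.SADMP.STRIPED    multivolume sadump input
//MERGED   DD  DSN=MYUSER.SADMP.MERGED,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(63000,1000),RLSE)
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  IPCS NOPARM
  COPYDUMP INFILE(SADUMP) OUTFILE(MERGED) NOCONFIRM
  END
/*
```

Because COPYDUMP reads all of the input volumes itself, the output data 
set ends up in approximate logical dumping order, which is what makes the 
later IPCS session faster.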

>Sorry Barbara, I forgot about SADUMP.  I wonder, does the same problem 
exist with a console dump or a SYSMDUMP? 

  SVC Dump (except when DCB= is specified on the SDUMP macro),
SYSMDUMP, and IEATDUMP all use BSAM to write to dump data sets.  So 
they support anything that BSAM supports (like DFSMS striped
data sets, and zHPF).  There is generally no requirement to use COPYDUMP
for these types of dumps, unless you need to process a data set which
was not closed because the system crashed while it was being written.
However, COPYDUMP may still be beneficial, if you want to make use
of the INIT or INITAPPEND options.  INITAPPEND is new in z/OS 2.1.


Jim Mulder   z/OS System Test   IBM Corp.  Poughkeepsie,  NY

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
