I have a large file that I need to split into smaller files of
50,000 records each. I started off using batch REXX so that I could
dynamically allocate a large number of output files, each with a
different name. The same REXX also creates the SORT control cards
(see below), incrementing SKIPREC by 50,000 each time.
OPTION NULLOUT=RC4
SORT FIELDS=COPY,STOPAFT=50000,SKIPREC=150000
So far so good: the first data set has records 1 through 50,000, the
next has 50,001 through 100,000, and so on.
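For illustration, the successive generated card sets differ only in
SKIPREC (the cards shown above, with SKIPREC=150000, are for the
fourth split; the first split simply omits SKIPREC):

first split:  SORT FIELDS=COPY,STOPAFT=50000
second split: SORT FIELDS=COPY,STOPAFT=50000,SKIPREC=50000
third split:  SORT FIELDS=COPY,STOPAFT=50000,SKIPREC=100000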
Now they want a single header record at the top of each new data set;
all the header records will be identical. I could just create a
one-record header data set, concatenate it ahead of each output, and
copy again to a new data set, but I am hoping there is a better way to
do this in one execution of DFSORT per data set. So far I have not
been able to find one. Any help here?
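I did notice that OUTFIL has HEADER1 and REMOVECC operands. A sketch
of what I think the generated cards would become (untested at my
level, and the header text here is just a placeholder):

OPTION NULLOUT=RC4
SORT FIELDS=COPY,STOPAFT=50000,SKIPREC=150000
OUTFIL REMOVECC,HEADER1='HEADER RECORD'

My understanding is that HEADER1 writes a header as the first record
of the OUTFIL data set, and REMOVECC keeps the ANSI carriage control
character out of column 1. Is that the right direction?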
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html