Raj,

If your ultimate goal is to break up the one large file into multiple
smaller ones, you can do this without COUNT.  There are a couple of ways
to do it.  The easiest is probably the SPLITBY parameter of the OUTFIL
control statement, which should work with SyncSort as well as other sort
products.  SPLITBY=n writes groups of n records at a time in rotation
among the OUTFIL output data sets.  The following control statements
will copy the first 1,00,000 records (that is, 100,000 in the Indian
grouping style) to the first data set, the next 1,00,000 to the next
data set, and so on.  The only thing you need to be careful of is to
allocate enough data sets.  If you need 6 data sets but only allocate 5,
the group after the 5th, the one that starts with the 5,00,001st record,
will be written to the first data set again and the rotation continues.
If you allocate 6 data sets but only need 4, the 5th and 6th data sets
will be empty.

The control cards to do this are: 

  SORT FIELDS=COPY  
  OUTFIL  FILES=(01,02,03,04,05,06,07,08),SPLITBY=100000

If you would prefer to sort the data in addition to breaking it up,
replace FIELDS=COPY with your sort control fields.
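
For example, if the key happened to be a 10-byte character field
starting in position 1 (purely an illustration; substitute your actual
key positions, format, and order), the statements would be:

  SORT FIELDS=(1,10,CH,A)
  OUTFIL  FILES=(01,02,03,04,05,06,07,08),SPLITBY=100000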

You will need to allocate SORTOF01, SORTOF02, etc.  Be sure to include a
reference to each data set in the FILES= parameter of OUTFIL.  If you
would like further help with this please feel free to contact me
directly. 
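
As a rough sketch (the data set names, SPACE figures, and UNIT here are
placeholders; adjust them to your installation's standards and expected
volumes), the DD statements would look something like:

  //SORTOF01 DD DSN=YOUR.SPLIT.FILE01,DISP=(NEW,CATLG,DELETE),
  //            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)
  //SORTOF02 DD DSN=YOUR.SPLIT.FILE02,DISP=(NEW,CATLG,DELETE),
  //            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)
  //*           ... and so on through SORTOF08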

Sincerely,
John Reda
Software Services Manager
Syncsort Inc.
201-930-8260

> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Rajeev Vasudevan
> Sent: Monday, May 14, 2007 1:08 PM
> To: [email protected]
> Subject: Dynamic splitting of file
> 
> Hello,
> 
> 
> Please provide me your suggestions/solutions to achieve the following:
> 
> A production job runs daily and creates a huge file with 'n' number of
> records.  I want to use a utility (assuming SYNCSORT with COUNT) to
> know the 'n' number of records from this file and want to split the
> file into equal output files (each output file should have 1,00,000
> records).  How to achieve it dynamically if records vary on a daily
> basis?  On a given day we may get 5,00,000 and on the other day we may
> get 8,00,000 records.  So, depending on the count I need to split the
> input file into 5 or 8 pieces for further processing.  After this
> processing (suppose a COBOL program) I may again get 5 or 8 files.
> 
> Please provide your suggestions/solutions/ideas to this problem.
> Please let me know if you need more inputs/details.
> 
>   Thanks,
>   Raj

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
