Re: Dynamic splitting of file

2007-05-14 Thread Kenneth E Tomiak
Raj,

These options handle the splitting for you. You will also need to search the posts 
about dynamically allocating datasets, since the final process you design will 
need to keep allocating additional datasets based on the input.

For a non-production, or 'I really do not care about performance', solution, I 
would turn to REXX, where I can allocate a new dataset, start reading the 
input, write however many records I want to the output file, close it, 
allocate a new dataset, and continue reading and writing records to this 
dataset until I hit my limit; then allocate another new dataset, and so on 
until end-of-file is reached. Existing posts explain how to do this in COBOL, too.
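A rough sketch of that REXX loop might look like the following (dataset names, 
LRECL, space values, and the 100,000-record limit are illustrative assumptions, 
not tested code):

  /* REXX - split an input file into 100,000-record pieces (sketch) */
  limit = 100000
  piece = 0
  "ALLOC F(INDD) DA('PROD.DAILY.FILE') SHR REUSE"
  done = 0
  do until done
    piece = piece + 1
    outdsn = "'MY.SPLIT.FILE.P"piece"'"
    "ALLOC F(OUTDD) DA("outdsn") NEW CATALOG REUSE",
      "SPACE(50,50) TRACKS RECFM(F B) LRECL(80)"
    "EXECIO" limit "DISKR INDD (STEM REC."
    if rc <> 0 then done = 1       /* nonzero RC here means EOF was hit */
    "EXECIO" rec.0 "DISKW OUTDD (STEM REC. FINIS"
    "FREE F(OUTDD)"
  end
  "EXECIO 0 DISKR INDD (FINIS"
  "FREE F(INDD)"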



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Dynamic splitting of file

2007-05-14 Thread Frank Yaeger
> The easiest is probably using the SPLITBY parameter of the
> OUTFIL control statement.  This should work with either SyncSort or
> other sort products.
> ...

Raj,
If you have access to DFSORT, you can use its SPLIT1R=n parameter instead
of SPLITBY=n.  Whereas SPLITBY=n rotates the records back to the first
output file (which is not desirable in your case), SPLIT1R=n will continue
to write the extra records to the last output file so each output file will
have contiguous records from the input file.
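For example, a copy job splitting the input into five contiguous pieces of 
1,00,000 records each might use control statements like these (the DD names 
OUT1-OUT5 are illustrative; you would allocate them to match your counts):

  SORT FIELDS=COPY
  OUTFIL FNAMES=(OUT1,OUT2,OUT3,OUT4,OUT5),SPLIT1R=100000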

For complete details on DFSORT's SPLIT1R=n parameter, see:

www.ibm.com/servers/storage/support/software/sort/mvs/peug/

For another way to split the records evenly and contiguously, see the
"Split a file to n output files dynamically" Smart DFSORT Trick at:

http://www.ibm.com/servers/storage/support/software/sort/mvs/tricks/

Frank Yaeger - DFSORT Development Team (IBM) - [EMAIL PROTECTED]
Specialties: PARSE, JFY, SQZ, ICETOOL, IFTHEN, OVERLAY, Symbols, Migration

 => DFSORT/MVS is on the Web at http://www.ibm.com/storage/dfsort/



Re: Dynamic splitting of file

2007-05-14 Thread GAVIN Darren * OPS EAS
Another technique is to select records based on some data, such as
the last digit (or next-to-last digit) of an account number or employee
ID, and write records to separate files based on the digit, or range of
digits, in that position.

This has the advantage of not needing to know any counts of records at
all.
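As an untested sketch of this approach, assuming the last digit of the account 
number sits in position 10 of each record, OUTFIL INCLUDE statements could 
route records by that digit (the position and DD names are assumptions):

  SORT FIELDS=COPY
  OUTFIL FNAMES=OUT1,INCLUDE=(10,1,CH,EQ,C'0',OR,10,1,CH,EQ,C'1')
  OUTFIL FNAMES=OUT2,INCLUDE=(10,1,CH,EQ,C'2',OR,10,1,CH,EQ,C'3')
  OUTFIL FNAMES=OUT3,INCLUDE=(10,1,CH,EQ,C'4',OR,10,1,CH,EQ,C'5')
  OUTFIL FNAMES=OUT4,INCLUDE=(10,1,CH,EQ,C'6',OR,10,1,CH,EQ,C'7')
  OUTFIL FNAMES=OUT5,INCLUDE=(10,1,CH,EQ,C'8',OR,10,1,CH,EQ,C'9')

If the digits are evenly distributed, this gives roughly equal files without 
ever counting the input.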

Darren





Re: Dynamic splitting of file

2007-05-14 Thread Anthony Saul Babonas
Set up a job that copies 1,000,000 records per pass, then submits itself
to the internal reader (INTRDR) if RC=0.
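The self-submitting step would write its own JCL to an internal reader DD, 
sketched like this (DD name and class are illustrative):

  //SUBMIT   DD SYSOUT=(*,INTRDR)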


 


   



Re: Dynamic splitting of file

2007-05-14 Thread Reda, John
Raj,

If your ultimate goal is to break up the one large file into multiple
smaller ones, you can do this without COUNT.  There are a couple of ways
to do this.  The easiest is probably using the SPLITBY parameter of the
OUTFIL control statement.  This should work with either SyncSort or
other sort products.  The SPLITBY=n parameter writes groups of records
in rotation among multiple output data sets and distributes multiple
records at a time among the OUTFIL data sets. N specifies the number of
records to split by.  The following control statements will copy the
first 1,00,000 (not sure if this is a typo or if it should be 1,000,000)
records to the first data set, the next 1,00,000 to the next data set,
and so on.  The only thing you need to be careful of is to allocate
enough data sets.  If you need 6 data sets but only allocate 5, the next
group after the 5th, the one that starts with the 6,00,001st record,
will be written to the first data set again and the rotation continues.
If you allocate 6 data sets but only need 4, the 5th and 6th data sets
will be empty.

The control cards to do this are: 

  SORT FIELDS=COPY  
  OUTFIL  FILES=(01,02,03,04,05,06,07,08),SPLITBY=100000

If you prefer to sort the data in addition to breaking it up then
replace the FIELDS=COPY with your sort control fields. 

You will need to allocate SORTOF01, SORTOF02, etc.  Be sure to include a
reference to each data set in the FILES= parameter of OUTFIL.  If you
would like further help with this please feel free to contact me
directly. 
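For example, the SORTOFxx DD statements might look like this (the dataset 
names and space values are illustrative, not tested JCL):

  //SORTOF01 DD DSN=MY.SPLIT.FILE1,DISP=(NEW,CATLG,DELETE),
  //            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)
  //SORTOF02 DD DSN=MY.SPLIT.FILE2,DISP=(NEW,CATLG,DELETE),
  //            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)

and so on for each data set named in the FILES= parameter.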

Sincerely,
John Reda
Software Services Manager
Syncsort Inc.
201-930-8260




Dynamic splitting of file

2007-05-14 Thread Rajeev Vasudevan
Hello,

Please provide me your suggestions/solutions to achieve the following: 

A production job runs daily and creates a huge file with 'n' records. I want 
to use a utility (assuming SYNCSORT with COUNT) to find the number of records 
'n' in this file and then split the file into equal output files (each output 
file should have 1,00,000 records). How can this be achieved dynamically when 
the record count varies from day to day? On a given day we may get 5,00,000 
records and on another day 8,00,000. So, depending on the count, I need to 
split the input file into 5 or 8 pieces for further processing. After this 
processing (by a COBOL program, say) I may again get 5 or 8 files.

Please provide your suggestions/solutions/ideas for this problem, and let me 
know if you need more details.
   
  Thanks,
  Raj

   
