First I want to thank everyone for all of their ideas.  After due consideration, here is what I think we'll end up doing.

I'm going to define two GDG bases:
/* IDCAMS COMMAND */
   DEFINE GENERATIONDATAGROUP -
     (NAME(PROD.XMIT.TXNFILE) -
      LIMIT(6))

/* IDCAMS COMMAND */
   DEFINE GENERATIONDATAGROUP -
     (NAME(PROD.APPL.TXNFILE) -
      LIMIT(5) FOR(7))
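
These would run through IDCAMS in a batch job something like the sketch below (the JOB card details are placeholders, and the SYSIN is just the two DEFINEs above):

//DEFGDG   JOB NOTIFY=&SYSUID,MSGCLASS=X,CLASS=A
//DEFINE   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ... the two DEFINE commands above ...
/*
//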

Every day we'll receive a file via FTP as a new generation of PROD.XMIT.TXNFILE.
On non-business days we won't do any further processing of the file.
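
On the sending side this could be a plain batch FTP PUT into a relative generation.  Here's a sketch, assuming the sender is also a z/OS batch FTP client; the host name, credentials, local dataset name, and SITE allocation values are all placeholders:

//FTPSEND  EXEC PGM=FTP,PARM='prodhost (EXIT'
//OUTPUT   DD SYSOUT=*
//INPUT    DD *
ftpuser
ftppass
SITE CYLINDERS PRIMARY=5 SECONDARY=5 RECFM=FB LRECL=80
PUT DAILY.TXNS 'PROD.XMIT.TXNFILE(+1)'
QUIT
/*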

On a business day we'll have two jobs.  The first one will look very much like this:
//GDGTST3A JOB NOTIFY=&SYSUID,MSGCLASS=X,CLASS=A
//* Reading the GDG base name concatenates every cataloged generation
//BACKUP   EXEC PGM=IEBGENER
//SYSUT1   DD DISP=(OLD,DELETE,KEEP),DSN=PROD.XMIT.TXNFILE
//SYSUT2   DD DISP=(NEW,CATLG,DELETE),DSN=PROD.APPL.TXNFILE(+1),
//           SPACE=(...)
//SYSIN    DD DUMMY
//SYSPRINT DD DUMMY
//

This will copy all of the data from all of the generations of the "XMIT" file to a new generation of the "APPL" file.  If the copy succeeds, all of the generations of the XMIT file are deleted.  If it fails (out of space, or whatever), the XMIT generations remain so we can rerun when the issue is resolved.
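
One thing the SPACE=(...) above glosses over: a brand-new generation has to get its DCB attributes from somewhere.  With SMS a data class can supply them (on a non-SMS system you'd point DCB= at a model DSCB); otherwise the SYSUT2 DD would carry them explicitly.  A sketch, assuming FB 80-byte records, which is purely a guess:

//SYSUT2   DD DISP=(NEW,CATLG,DELETE),DSN=PROD.APPL.TXNFILE(+1),
//            SPACE=(CYL,(50,50),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)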

Job 2 (a totally separate job from job 1) will actually process the newest generation of the APPL file, with a DD something like:
//TXNFILE   DD DISP=SHR,DSN=PROD.APPL.TXNFILE(0)

Job 2 can be rerun as necessary.
We'll keep 5 generations of the APPL file (hence the LIMIT(5)) for testing purposes.

This is pretty simple to understand, does not require any special software 
or "clever" coding, and seems to be close to bulletproof.  The only concerns I 
have are:
1) What if the same file is transmitted twice?
This requires someone realizing it occurred, but the solution seems to be as simple as deleting the duplicate generation before job 1 runs.
If nobody catches it before job 1 runs (thus copying the duplicate generation to the APPL file and deleting the XMIT generations), then we'd have to retransmit all the generations to be processed.  But this doesn't happen often, and it's almost always because the automated transmission failed and a manual resend created the duplicate.
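
The delete itself could be a one-liner.  An IEFBR14 sketch, assuming the duplicate is the newest generation:

//DELDUP   EXEC PGM=IEFBR14
//DUPGEN   DD DISP=(OLD,DELETE),DSN=PROD.XMIT.TXNFILE(0)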

2) As I think was mentioned recently, when a GDG base is read without a generation qualifier, the generations are concatenated newest to oldest.  Annoying!  They should go oldest to newest.
But in the end my process has to sort the records anyway, so it doesn't really 
matter that they are not in chronological order.
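
If the order ever did matter, the IEBGENER in job 1 could be swapped for a DFSORT copy that sorts on the way across.  A sketch; the sort key position and length are purely assumed:

//SORTCOPY EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DISP=(OLD,DELETE,KEEP),DSN=PROD.XMIT.TXNFILE
//SORTOUT  DD DISP=(NEW,CATLG,DELETE),DSN=PROD.APPL.TXNFILE(+1),
//            SPACE=(...)
//SYSIN    DD *
* Assumed: key is a timestamp in columns 1-10
  SORT FIELDS=(1,10,CH,A)
/*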

Can anyone think of anything I missed?  This almost seems a bit too simple.  :-)

Thanks!
Frank
