On Tue, 3 Feb 2009 11:25:05 +0000, Gerry Anstey <[email protected]> wrote:
>OK, bad design; we have a lot of cack here, probably due to hiring cheap
>programmers. Anyway, I digress.
>
>Here's the SDSF summary:
>
>PREFIX=GCSPROCP  DEST=(ALL)  OWNER=*  SYSNAME=FMVS
>NP   DDNAME    STEPNAME  PROCSTEP  DSID  OWNER   C  DEST    REC-CNT
>     JESMSGLG  JES2                   2  PSTGCS  T  LOCAL        52
>     JESJCL    JES2                   3  PSTGCS  T  LOCAL        44
>     JESYSMSG  JES2                   4  PSTGCS  T  LOCAL        94
>     DDPRINT   GCSPROCP             106  PSTGCS  0  LOCAL         3
>     CMPRINT   GCSPROCP             107  PSTGCS  0  LOCAL    28,559
>     CMPRT01   GCSPROCP             108  PSTGCS  T  LOCAL       25M
>
>We had a need to extract some of the records in CMPRT01, so I wrote a job
>to run SDSF in batch and use the PRINT ODSN command to extract the data to
>a data set.
>
>Then I read the data set with Filemaster and extracted the desired records
>into a smaller file.
>
>My questions are:
>
>1. Any ideas why SDSF takes approx 90 minutes (120,000 EXCPs + 31 mins of
>CPU) to read and write out the data, while Filemaster takes about 3 minutes
>to read 25 million records and write about 1.5 million?
>
>2. Is there any way to make the SDSF extract faster?

Hope you don't mind me stealing your thread, but it made me think of
something we need to deal with. We are on the front end of a project to
move from VSE to z/OS. We currently archive our reports in IBM Content
Manager OnDemand (the Windows version, not the z/OS version). Basically, we
write all of our reports to the VSE/POWER list queue, and then we run a job
that FTPs them out of the list queue to the OnDemand server, which then
loads them into OnDemand.

Can you FTP from the JES spool? And even if you can, is it a good idea? Or
should we just go ahead and write the reports to sequential disk files
instead of SYSOUT?

Frank

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
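For reference, the batch SDSF extract Gerry describes (PRINT ODSN against a
spool data set) generally takes the shape below. The output data set name,
the line range, and the exact FIND targets are illustrative, and the action
characters (`++?` to open the job's data set list, `++S` to browse one data
set) should be checked against the SDSF manual for your release:

```
//SDSFBAT  EXEC PGM=SDSF
//ISFOUT   DD SYSOUT=*
//ISFIN    DD *
PREFIX GCSPROCP
OWNER *
ST
FIND GCSPROCP
++?
FIND CMPRT01
++S
PRINT ODSN 'HLQ.EXTRACT.DATA' * NEW
PRINT 1 99999999
PRINT CLOSE
/*
```

The flow is: filter the ST panel down to the job, open its data set list,
browse the large output (CMPRT01 in the example above), open the target
data set with PRINT ODSN, copy the lines with PRINT, and close the output.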
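On Frank's question: the z/OS FTP server does provide a JES interface
(SITE FILETYPE=JES), so spool output can be pulled with an ordinary FTP
client. A sketch of a client session follows; the host name and job ID are
made up, and whether you can fetch an individual spool data set by number
(JOBnnnnn.n) rather than the whole job depends on the JESINTERFACELEVEL
setting in the server's FTP.DATA:

```
ftp zoshost.example.com
ftp> quote site filetype=jes jesjobname=GCSPROCP jesowner=* jesstatus=output
ftp> dir
ftp> get JOB12345.108 cmprt01.txt
ftp> quote site filetype=seq
```

In JES mode, DIR lists jobs on the spool matching the JESJOBNAME/JESOWNER/
JESSTATUS filters, and GET retrieves spool output; the final SITE command
switches the session back to ordinary data set mode.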

