Tony,

ODMAXBF specifies the maximum buffer space DFSORT can use for each OUTFIL 
data set. Since you are creating so many OUTFIL data sets, it is a good 
idea to lower the buffer space for each one.  See this link, which 
explains the ODMAXBF parameter in detail (search for ODMAXBF on that page):

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ice1ca60/3.14
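As a rough back-of-the-envelope sketch (using the 321 OUTFILs from your failing test, the 2M default, and the 100K override from my sample JCL), here is why lowering ODMAXBF matters:

```python
# Back-of-the-envelope OUTFIL buffer arithmetic.
# 321 is the OUTFIL count from the failing test in this thread; 2M is
# the ODMAXBF default and 100K the override suggested in the sample JCL.
MB = 1024 * 1024
KB = 1024

def total_buffer(n_outfils, odmaxbf):
    """Worst-case buffer space across all OUTFIL data sets."""
    return n_outfils * odmaxbf

print(total_buffer(321, 2 * MB) // MB)    # 642 (MB with the 2M default)
print(total_buffer(321, 100 * KB) // MB)  # 31  (MB with ODMAXBF=100K)
```

Actual usage depends on DFSORT's storage management, but the worst case shrinks by roughly a factor of 20.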

Thanks,
Kolusu
DFSORT Development
IBM Corporation



From:   TonyIcloud-OPERA <tonybabo...@icloud.com>
To:     IBM-MAIN@LISTSERV.UA.EDU
Date:   08/21/2014 09:43 AM
Subject:        Re: DF/SORT question (challenge?)
Sent by:        IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



Geez, I really should read:
1. the sample provided.
2. the book

RC=0

Now I need to find out more about ODMAXBF.  I feel like the guy who bought 
the toaster and didn't plug it in.






On Thu, 21 Aug 2014 11:35:25 -0500, Sri h Kolusu <skol...@us.ibm.com> 
wrote:

> Tony,
>
> Did you have the ODMAXBF override in place?  In my sample JCL I coded
> ODMAXBF=100K, overriding the default value of 2M. Can you rerun the job
> with the ODMAXBF parm?
>
> Thanks,
> Kolusu
> DFSORT Development
> IBM Corporation
>
> IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> wrote on
> 08/21/2014 09:18:40 AM:
>
>> From: TonyIcloud-OPERA <tonybabo...@icloud.com>
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Date: 08/21/2014 09:25 AM
>> Subject: Re: DF/SORT question (challenge?)
>> Sent by: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>
>>
>> Before I tried the solution cited below (TYVM BTW), I set up a test
>> manually, with some interesting results.  The input file is not sorted,
>> and I'm only doing SORT FIELDS=COPY.  All I want to do is break up the
>> file into a bunch of little files.
>>
>> Test #1: Read my input file (229,762 records, LRECL=200, BLKSIZE=27800,
>> FB) with 310 DD statements and 310 OUTFIL INCLUDE statements.  RC=0,
>> 310 members created, all the data checks out.
>>
>> Test #2: Read the same input file with 321 DD statements and 321 OUTFIL
>> INCLUDE statements; the job abends:
>>
>> 10.09.26 JOB08830 IEC036I 002-B4,IGC0005E,IDSXSB7A,AA,VS09610,4BB9,SHRE16, 452
>>   452 IDSX00S.IDSXSB7.SYSE.FIDX03.PDSE
>>
>> ICE185A 0 AN S002 ABEND WAS ISSUED BY DFSORT, ANOTHER PROGRAM OR AN EXIT
>> (PHASE C 3)
>> QuickRef provides:
>> B4 - Unable to create a system buffer required for PDSE processing.
>>
>> I retried with REGION=16M, 32M, and 0M; same result.  Not sure whether
>> our local storage-police exit chokes off my attempt at 0M.
>>
>>
>> Interestingly, even though the abend occurred, the output PDSE was
>> populated with all 321 members; however, the last 8 members contain 0
>> records.  It looks like DFSORT was tripped up at the 313 mark by some
>> system limitation. I'm going to consult with my sysprogs before we go
>> to IBM for help.
>>
>> P.S. My old successful attempt at writing 1,000 members occurred at a
>> different company, a much smaller shop, oddly enough.
>>
>>
>>
>>
>>
>>
>> On Wed, 20 Aug 2014 12:01:00 -0500, Sri h Kolusu <skol...@us.ibm.com>
>> wrote:
>>
>> > Tony,
>> >
>> > It is quite easy to split the groups of records into multiple members.
>> > Here is a sample JCL which will give you the desired result of
>> > splitting the first 999 groups of records into a PDSE, with each
>> > member containing one group of records. I assumed your input is
>> > already sorted on the field you want to split on, and that the split
>> > field is 44 bytes in length. If it is different, change it in ALL the
>> > places referred to by KEYBEGIN.
>> > This job creates dynamic JCL which is then submitted via INTRDR.
>> > Take a look at the output from STEP0200 and, if everything looks OK,
>> > change the statement
>> > //SORTOUT  DD SYSOUT=*  to  //SORTOUT  DD SYSOUT=(*,INTRDR),RECFM=FB
>> > If you have more than 999 groups of records, we will copy the rest of
>> > the records into another file, which will then be used as the input
>> > file for further splitting. I will show you how to build the dynamic
>> > JCLs based on that. I chose a split of 999 groups because the maximum
>> > number of DD statements per job step is 3273, based on the number of
>> > single DD statements allowed for a TIOT (task input output table)
>> > control block size of 64K. This limit can be different depending on
>> > the installation-defined TIOT size; the IBM-supplied default TIOT
>> > size is 32K.
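The arithmetic behind that 3273-DD figure can be sketched as follows; the ~20-byte size of a single-unit TIOT DD entry and the small fixed header are my assumptions, not figures from this thread:

```python
# Rough TIOT arithmetic behind the 3273-DD-per-step limit quoted above.
# Assumes ~20 bytes per single-unit DD entry; a small fixed TIOT header
# accounts for the remaining difference.
DD_ENTRY_BYTES = 20

def max_dds(tiot_bytes, entry_bytes=DD_ENTRY_BYTES):
    """Approximate single-unit DD statements that fit in a TIOT."""
    return tiot_bytes // entry_bytes

print(max_dds(64 * 1024))  # ~3276; the header brings the usable count to 3273
print(max_dds(32 * 1024))  # the IBM-supplied default TIOT size
```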
>> >
>> >
>> > //*********************************************************************
>> > //*  BUILD DYNAMIC OUTFIL CARDS AND DDNAMES FOR EACH GROUP OF RECORDS *
>> > //*********************************************************************
>> > //STEP0100 EXEC PGM=SORT
>> > //SYSOUT   DD SYSOUT=*
>> > //SORTIN   DD DISP=SHR,DSN=Your Input FB 100 Byte file
>> > //*
>> > //OFCARDS  DD DSN=&&C,DISP=(,PASS),SPACE=(CYL,(20,20),RLSE)
>> > //DDNAMES  DD DSN=&&D,DISP=(,PASS),SPACE=(CYL,(20,20),RLSE)
>> > //SORTOUT  DD DUMMY
>> > //SYSIN    DD *
>> >   OPTION COPY,NULLOUT=RC4
>> >   OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(1,44),PUSH=(101:ID=4))
>> >  OUTFIL FNAMES=OFCARDS,REMOVECC,NODETAIL,BUILD=(80X),
>> >   INCLUDE=(101,4,ZD,LT,1000),
>> >   SECTIONS=(101,4,
>> >   TRAILER3=(3:'OUTFIL FNAMES=OUTF',101,4,',BUILD=(1,100),',
>> >            C'INCLUDE=(101,4,ZD,EQ,',101,4,')')),
>> >   TRAILER1=(3:'OUTFIL FNAMES=NGRP',101,4,',SAVE')
>> >  OUTFIL FNAMES=DDNAMES,REMOVECC,NODETAIL,BUILD=(80X),
>> >   INCLUDE=(101,4,ZD,LT,1000),
>> >   SECTIONS=(101,4,
>> >   TRAILER3=('//OUTF',101,4,' DD ',
>> >             'DISP=SHR,DSN=Your.Split.PDSE(OUTF',101,4,')')),
>> >   TRAILER1=('//NGRP',101,4,' DD ',
>> >             'DSN=HLQ.TONYCLD.NGRP',101,4,','/,
>> >             '//',15:'DISP=(NEW,CATLG,DELETE),',/,
>> >             '//',15:'SPACE=(CYL,(100,40),RLSE)',/,
>> >             '//*')
>> > //*
>> >
>> > //*********************************************************************
>> > //*  SUBMIT THE SPLIT JOB TO INTRDR WITH THE ABOVE OUTPUT             *
>> > //*********************************************************************
>> > //STEP0200 EXEC  PGM=SORT,COND=(4,EQ,STEP0100)
>> > //SYSOUT   DD SYSOUT=*
>> > //SYSIN    DD *
>> >    OPTION COPY
>> > //*SORTOUT  DD SYSOUT=(*,INTRDR),RECFM=FB
>> > //SORTOUT  DD SYSOUT=*
>> > //SORTIN   DD DATA,DLM=$$
>> > //SPLTTONY JOB (DA26,001,098,J69),'TONY',
>> > //             CLASS=A,
>> > //             MSGCLASS=H,
>> > //             MSGLEVEL=(1,1),
>> > //             TIME=(,15),
>> > //             NOTIFY=USERID
>> > //*
>> > //SPLTSTEP EXEC PGM=SORT,REGION=0M
>> > //SYSOUT   DD SYSOUT=*
>> > //SORTIN   DD DISP=SHR,DSN=Your Input FB 100 Byte file
>> > $$
>> > //         DD DSN=&D,DISP=(OLD,PASS)
>> > //         DD DATA,DLM=$$
>> > //SYSIN    DD *
>> >   OPTION COPY,ODMAXBF=100K
>> >   OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(1,44),PUSH=(101:ID=4))
>> > $$
>> > //         DD DSN=&C,DISP=(OLD,PASS)
>> > //*
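Away from JCL, the core technique the generated job implements (batch consecutive records that share a leading key, then route each batch to its own output) can be sketched in Python; the names here are hypothetical:

```python
# Sketch of what the generated split job does: records sharing a
# leading key (KEYBEGIN=(1,44) in the JCL) are grouped and each group
# goes to its own output.  itertools.groupby, like WHEN=GROUP, batches
# *consecutive* equal keys, so the input must already be in key order.
from itertools import groupby

def split_by_key(records, key_len=44):
    """Yield (key, list-of-records) for each run of equal keys."""
    for key, grp in groupby(records, key=lambda r: r[:key_len].rstrip()):
        yield key, list(grp)

sample = [
    "value1 other data",
    "value1 other data",
    "value2 other data",
]
for key, grp in split_by_key(sample, key_len=6):
    print(key, len(grp))  # value1 2, then value2 1
```

In the real job each group instead becomes a PDSE member (the generated OUTFnnnn DD statements), but the grouping logic is the same.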
>> > If you have any further questions, please let me know.
>> >
>> > Thanks,
>> > Kolusu
>> > DFSORT Development
>> > IBM Corporation
>> >
>> > IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> wrote on
>> > 08/20/2014 07:34:48 AM:
>> >
>> >> From: TonyIcloud-OPERA <tonybabo...@icloud.com>
>> >> To: IBM-MAIN@LISTSERV.UA.EDU
>> >> Date: 08/20/2014 07:35 AM
>> >> Subject: DF/SORT question (challenge?)
>> >> Sent by: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>
>> >>
>> >> I have a dataset whose records contain a field by which I need to
>> >> create a separate dataset containing all occurrences of each value
>> >> of that field. For example, the file (FB 100) looks like:
>> >>
>> >> value1 other data.....
>> >> value1 other data.....
>> >> value1 other data.....
>> >> value2 other data.....
>> >> value2 other data.....
>> >> value2 other data.....
>> >> value3 other data.....
>> >> value3 other data.....
>> >> value3 other data.....
>> >> value4 other data.....
>> >> value4 other data.....
>> >> value4 other data.....
>> >>
>> >> My final product must be a series of datasets:
>> >>
>> >> hlq.value1.records
>> >> hlq.value2.records
>> >> hlq.value3.records
>> >> hlq.value4.records
>> >>
>> >> There may be hundreds or thousands of possible values, hence
>> >> hundreds or thousands of datasets.  I have accomplished this in 3
>> >> phases: the first pass reads the data and uses ICETOOL OCCUR to list
>> >> the values; the second phase reads that output and formats DD
>> >> statements and OUTFIL OUTREC statements; the third phase reads the
>> >> original data to create the numerous output files.  I used a newly
>> >> created PDSE as the output file, whereupon the third phase created
>> >> several thousand members.
>> >>
>> >> It works, after a fashion, but I'd like a simpler solution.
>> >>
>> >>
>> >>
>> >> ----------------------------------------------------------------------
>> >> For IBM-MAIN subscribe / signoff / archive access instructions,
>> >> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>> >>
>> >
>>
>>
>> --
>> Using Opera's mail client: http://www.opera.com/mail/
>>
>>
>






