On Nov 11, 2005, at 9:48 AM, Frank Yaeger wrote:
---------------------SNIP-----------------------------------
Ed,
I believe in this case, DFSORT was able to determine the filesize (it
usually can), but that the number of work data sets was too small for
that filesize. I told Skip how to increase the number of work data
sets using:
//DFSPARM DD *
OPTION DYNALLOC=(,n)
/*
Note that DFSORT only uses the FILSZ=En value if it can't determine
the filesize. There aren't too many cases of that these days (the
most common one is when there's no SORTIN and an E15 passes all of
the records to DFSORT). DFSORT issues message ICE118I when it can't
determine the filesize. The doc for that message discusses what to do
in that case:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ICE1CM10/2.2.114?SHELF=&DT=20050119125222&CASE=
Thanks for mentioning that. I had forgotten it. But it still leaves
the question: how do you pick the value of n?
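For what it's worth, a minimal sketch of a DFSPARM that both raises the
work data set count and supplies an estimated filesize (the unit name
SYSDA, the count of 32, and the record estimate are made-up values for
illustration, not recommendations):

```
//DFSPARM DD *                                    illustrative values only
  OPTION DYNALLOC=(SYSDA,32)                      request 32 dynamically
                                                  allocated work data sets
  OPTION FILSZ=E20000000                          estimate 20 million recs
                                                  (used only if DFSORT
                                                  can't determine filesize)
/*
```

As Frank notes above, the FILSZ=En estimate only comes into play when
DFSORT can't work out the filesize itself (ICE118I), so in most jobs the
DYNALLOC count is the knob that matters.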
A LONG time ago we had a need to sort 25+ 6250 BPI tapes at one swat.
It could not be broken up (I don't remember why). This was a weekly
job. The records were also large (18K, IIRC). The first couple of
tries were a PITA. We got it to work week in and week out after a lot
of JCL changing, and I think (IIRC) the final winner was to use
SIZE=E20000000 on the SORT statement (20 million). We had to sort the
name and address file for a large publishing house. Like I said, it's
been ages, so I don't know if they ever found a better way. I remember
distinctly asking the programmer to write a note in the run doc that
if the number got over 20 million, to update the sort control card
(and increase the SORTWKxx's).
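For anyone who hasn't seen that style, the shape of the deck Ed
describes would be roughly the following (the key field, space
quantities, and DD count are illustrative guesses, not the original
job):

```
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(500,100))    explicit work data sets,
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(500,100))    sized by trial and error
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(500,100))    (add more as volume grows)
//SYSIN    DD *
  SORT FIELDS=(1,30,CH,A),SIZE=E20000000          E20000000 = estimate of
                                                  20 million input records
/*
```

The run-doc note Ed mentions maps directly onto those two spots: bump
the SIZE=En estimate and add/enlarge the SORTWKxx DDs when the record
count outgrows them.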
Ed
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html