Anyone,

I have a mainframe assembler application that invokes UNIX System Services 
to get the names of all of the files in an NFS-mounted folder. The 
application dynamically allocates these files and logically concatenates 
them into one giant dataset, then uses QSAM macros to read it.
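
The read side is ordinary QSAM against the concatenated DD; trimmed down, it 
looks something like the sketch below (the 256-byte record area and the empty 
processing loop are placeholders for this note, and RECFM/LRECL are left to 
be picked up at OPEN):

READCAT  CSECT
READCAT  AMODE 31
READCAT  RMODE 24                       Keep the DCB below the line
         YREGS
         SAVE  (14,12)                  Standard entry linkage
         LR    R12,R15
         USING READCAT,R12
         ST    R13,SAVEAREA+4
         LA    R13,SAVEAREA
         OPEN  (MYDCB,(INPUT))          Open concatenated DD MYFILE
GETLOOP  GET   MYDCB,RECAREA            Move-mode GET, next record
*        ...process RECAREA here...
         B     GETLOOP
ATEOF    CLOSE (MYDCB)                  End of the whole concatenation
         L     R13,SAVEAREA+4
         RETURN (14,12),RC=0
*
MYDCB    DCB   DDNAME=MYFILE,DSORG=PS,MACRF=GM,EODAD=ATEOF
SAVEAREA DS    18F
RECAREA  DS    CL256                    Assumes LRECL <= 256
         END   READCAT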

The DYNALLOC calls work this way: first, I dynamically allocate the first 
file in the folder with DDNAME MYFILE. Then the program enters a loop, 
performing these steps for each remaining file in the folder:
1) Dynamically allocate the file, asking the system to provide the DDNAME 
(I observe that these get the ddnames SYS00001, SYS00002, etc.).
2) Dynamically concatenate MYFILE with the SYSxxxxx dataset just allocated 
(with the "permanently allocated" attribute on).

This works beautifully; when I exit the loop, I can OPEN and GET all the 
records successfully from MYFILE.
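
To make the question concrete, one pass through the loop builds two SVC 99 
requests roughly like the ones below. This is a simplified, static sketch 
rather than my actual code: the DSN shown is just a placeholder, I've assumed 
allocation by data set name (DALDSNAM) with DISP=SHR rather than by path, and 
I've rendered the "permanent" attribute as DCCPERMC on the concatenation 
request; the real code builds the text units dynamically and examines 
S99ERROR/S99INFO whenever the return code is nonzero.

ALCCAT   CSECT
ALCCAT   AMODE 31
ALCCAT   RMODE 24
         YREGS
         SAVE  (14,12)                Standard entry linkage
         LR    R12,R15
         USING ALCCAT,R12
         ST    R13,SAVEAREA+4
         LA    R13,SAVEAREA
*   Allocate one file, let the system assign the ddname (verb X'01')
         LA    R1,ALCRBP              R1 -> request block pointer
         DYNALLOC
         LTR   R15,R15
         BNZ   EXIT                   Allocation failed, bail with RC
*   Concatenate MYFILE with the ddname just returned (verb X'03')
         MVC   CONDD2,RTDDN           Copy returned SYSnnnnn ddname
         LA    R1,CONRBP
         DYNALLOC
EXIT     L     R13,SAVEAREA+4         Real code checks S99ERROR here
         RETURN (14,12),RC=(15)
*   SVC 99 request: allocate by dsname, DISP=SHR, return the ddname
ALCRBP   DC    0F'0',X'80',AL3(ALCRB)   RB pointer, high bit on
ALCRB    DC    AL1(20),X'01',XL2'00'    RB length, verb 01 (allocate)
         DC    XL2'00',XL2'00'          S99ERROR/S99INFO (returned)
         DC    A(ALCTUP)                Text unit pointer list
         DC    A(0),XL4'00'             Reserved / FLAG2
ALCTUP   DC    A(ALCTU1)                -> DALDSNAM
         DC    A(ALCTU2)                -> DALSTATS (SHR)
         DC    X'80',AL3(ALCTU3)        -> DALRTDDN (last pointer)
ALCTU1   DC    AL2(DALDSNAM),AL2(1),AL2(L'DSN)
DSN      DC    C'NFS.MOUNT.HLQ.FILE0001'   Placeholder DSN
ALCTU2   DC    AL2(DALSTATS),AL2(1),AL2(1),X'08'   DISP=SHR
ALCTU3   DC    AL2(DALRTDDN),AL2(1),AL2(8)
RTDDN    DC    CL8' '                   System returns SYSnnnnn here
*   SVC 99 request: concatenate MYFILE + new ddname, permanently
CONRBP   DC    0F'0',X'80',AL3(CONRB)   RB pointer, high bit on
CONRB    DC    AL1(20),X'03',XL2'00'    RB length, verb 03 (concatenate)
         DC    XL2'00',XL2'00'          S99ERROR/S99INFO (returned)
         DC    A(CONTUP)                Text unit pointer list
         DC    A(0),XL4'00'             Reserved / FLAG2
CONTUP   DC    A(CONTU1)                -> DCCDDNAM
         DC    X'80',AL3(CONTU2)        -> DCCPERMC (last pointer)
CONTU1   DC    AL2(DCCDDNAM),AL2(2)     Two ddnames to concatenate
         DC    AL2(6),C'MYFILE'         Existing concatenation first
         DC    AL2(8)
CONDD2   DC    CL8' '                   Overlaid with RTDDN each pass
CONTU2   DC    AL2(DCCPERMC),AL2(0)     Permanently concatenated
*
SAVEAREA DS    18F
         IEFZB4D2
         END   ALCCAT

Since MYFILE is the first ddname in the DCCDDNAM list, the concatenation 
keeps the name MYFILE, which is why the final OPEN works against that DD.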

The problem is that I have reached a practical limit of approximately 540 
files in the folder, because when I reach that point I get a dynamic 
concatenation ABEND due to the TIOT filling up. I am told that our TIOT size 
is the default of 32K, which (at roughly 20 bytes per single-unit DD entry) 
would allow for a maximum of 1,635 DDs in a job step. It would seem, however, 
that something in my allocation/concatenation loop is preventing me from 
reaching that number of files. There are only a handful of other DDs 
allocated to the step (e.g., STEPLIB, etc.).

If I were able to handle up to 750 (or perhaps 1,000) files at a time, it would 
be of immense help. At the moment, our only option seems to be to split up 
the files into multiple folders of 500 files each.

Do I have any other options? Thanks so much.

David

