"malan" <[EMAIL PROTECTED]> wrote in message news:<[EMAIL PROTECTED]>... > The main purpose is to have the more efficient process with the best > performances. Also, I'm currently working for a company where a lot of > jobs (old jobs) are written in assembler. Today I should modify one and > add an access to a big sequential file, about 13,000,000 of records. > Each of them should be read. I can't rewrite the job.
Instead of reinventing the wheel, why not take a look at the buffering, blocksize, and other options? Facilities already exist in z/OS and DFSMS to improve performance without going back to writing assembler code that reads a track at a time. Proper buffering values will accomplish the same purpose, and there are plenty of other techniques. Maybe the programs reading the data are poorly written; don't assume you need a new "module" to do the I/O - fix the old programs.

Thanks,
Mark Thomen
Catalog/IDCAMS/VSAM Development

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
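As a rough illustration of the buffering point above: raising BUFNO on the DD statement is often all it takes to speed up a large sequential read, with no change to the program itself. The step, program, and dataset names below are made up, and the best BUFNO value depends on blocksize and available region, so treat this as a sketch rather than a tuned recommendation:

```jcl
//* Hypothetical step: an existing program reads BIGFILE with QSAM.
//* BUFNO=50 gives QSAM 50 buffers, so it can read many blocks per
//* channel program and overlap I/O with processing - no assembler
//* changes needed, just an override on the DD statement.
//STEP1    EXEC PGM=OLDPGM
//BIGFILE  DD   DSN=PROD.BIG.SEQ.FILE,DISP=SHR,
//              DCB=(BUFNO=50)
//SYSOUT   DD   SYSOUT=*
```

The DCB override only takes effect if the program does not hard-code BUFNO in its DCB macro, which is one more reason to check how the old programs were written before adding new I/O code.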

