David Cole wrote:
Ok, just FYI, here's my specific situation. My need is for controlling the updates, assemblies and linkedits that I need to run when regenerating z/XDC. (It has nothing to do with the installation process at customer sites.)

My typical gen jobstream is:
   * One update job.
   * Followed by anywhere from 1 to 175 assemblies.
   * Followed by one multi-step linkedit job.
The assemblies may run in any order, but none may run before the update job; they must run one at a time, and all must complete before the linkedit job.

Hi Dave,

a very simple solution that GUTS used to compile several hundred csects back in the old MVT and OS/VS1 days went roughly like this. The whole product was compiled and linked in a single job containing only a few steps.

step1: build a huge temporary sequential dataset from the source PDS containing all the csects, with a pseudo-csect embedded after each one to punch the lked control cards via a single ' PUNCH '' NAME xxx(R)'' ' statement (plus an asm END statement)

step2: a single assembler step with option BATCH, producing a single SYSPUNCH containing all the object decks, each followed by its trailing ' NAME xxx(R)' card.
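With today's High Level Assembler, that single batch assembly step might be sketched roughly as below. This is only an illustration, not Jenoe's original JCL: the temporary dataset names, space figures, and the choice of ASMA90 with PARM='BATCH,DECK,NOOBJECT' are my assumptions. DECK sends the object decks to SYSPUNCH, where they interleave with the punched NAME cards in source order.

   //ASMALL  EXEC PGM=ASMA90,PARM='BATCH,DECK,NOOBJECT'
   //SYSLIB   DD DSN=SYS1.MACLIB,DISP=SHR
   //*  SYSIN: the big sequential dataset built in step1,
   //*  all csects with their END stmts and PUNCH pseudo-csects
   //SYSIN    DD DSN=&&BIGSRC,DISP=(OLD,DELETE)
   //SYSPUNCH DD DSN=&&OBJPCH,DISP=(,PASS),UNIT=SYSDA,
   //            SPACE=(CYL,(10,5)),
   //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
   //SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
   //SYSPRINT DD SYSOUT=*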

step3: linkedit each csect into a separate member of an intermediate load library, using option NCAL
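Step3 can feed that SYSPUNCH straight to the linkage editor: each ' NAME xxx(R)' card ends one input module, so a single pass produces one member per csect. A hedged sketch (program name IEWL, the temporary library, and the space figures are my assumptions):

   //LKNCAL  EXEC PGM=IEWL,PARM='NCAL,LIST,LET'
   //*  SYSLIN: the batch-assembly SYSPUNCH; the NAME cards
   //*  make the linkage editor create one member per csect
   //SYSLIN   DD DSN=&&OBJPCH,DISP=(OLD,DELETE)
   //SYSLMOD  DD DSN=&&NCALLIB,DISP=(,PASS),UNIT=SYSDA,
   //            SPACE=(CYL,(10,5,100))
   //SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
   //SYSPRINT DD SYSOUT=*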

step4: a final linkedit that builds only the composite modules into the final loadlib, using the final lked control statements and option CALL, letting autocall do the work.

---

Notes: I have adopted a similar method for other, smaller projects. Depending on the number of csects, the sequential input was either generated automatically or, for smaller projects, prepared manually without such utilities by simply concatenating the source members.
Here are some details, as far as memory serves:

Step1 used a small utility that copied each member of the source PDS to the same single temporary sequential dataset, adding the lked control statements after each csect, as follows:

<source member A>  (including the ASM END stmt)
  PUNCH '  NAME A(R)'
  END
<member B>
  PUNCH '  NAME B(R)'
  END
...

Step2: ASM-G was quick enough for several hundred CSECTs, even though they used plenty of AMODGEN macros. With the BATCH option, macros already referenced were kept pre-processed by the assembler, so the complete assembly took only a fraction of the time of assembling each csect in a separate step. That was acceptable on 370 machines, so it should be no problem for you today, even if you re-run the whole big assembly just because of an error in a single csect. Such errors were rare anyway, since this big compile was only done to prepare a consistent distribution version. For development changes, only the changed csect was assembled and link-edited into the intermediate loadlib, and the final load module was then rebuilt.

Note that there was also a small utility that split the big sequential SYSPRINT into separate members, one per csect, in a SYSPRINT PDS.

Step3: split the single sequential SYSPUNCH back into separate members, one load module per csect (the trailing NAME cards drive the linkage editor to do this in one pass).

Step4: For each final composite load module there was a separate, manually prepared LKEDIN member. The LKED SYSLIN was again generated by copying the corresponding member of this LKEDIN PDS into a temporary sequential SYSLIN dataset, like
   INCLUDE LOADLIB(A)
   INCLUDE LOADLIB(B)
   ORDER A,B
   NAME  MAINPGM(R)
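The final linkedit step that consumes such a SYSLIN could look roughly like this. Again a sketch under my own assumptions (dataset names, options): the key point is SYSLIB pointing at the intermediate NCAL library, so autocall resolves any external references not covered by the INCLUDE statements.

   //LKFINAL EXEC PGM=IEWL,PARM='LIST,MAP,XREF'
   //*  CALL (autocall) is the default; SYSLIB supplies it
   //SYSLIB   DD DSN=&&NCALLIB,DISP=(OLD,PASS)
   //*  SYSLIN: the generated INCLUDE/ORDER/NAME statements
   //SYSLIN   DD DSN=&&LKEDIN,DISP=(OLD,DELETE)
   //SYSLMOD  DD DSN=MY.FINAL.LOADLIB,DISP=SHR
   //SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
   //SYSPRINT DD SYSOUT=*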


I hope this is clear enough, in spite of it being quite late here.
Sorry for the late reply; I can't keep up with the volume on the list.

Best regards, Jenoe

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
