This is the way MVS has handled multiple OPENs for output on the same PDS for well over a decade. I'm not sure what the recent maintenance would have had to do with it unless you were migrating from some version of MVS older than OS/390.

Since writing a member to a PDS writes just past the old high-water mark and only one such value is retained for the PDS, it has always been illegal to try to write two new members to the same PDS at the same time (only possible if both users of the PDS allocated it with DISP=SHR). For over a decade MVS has detected and prevented this error by failing a concurrent second OPEN for output to the same PDS (with a 213-30 OPEN failure, I believe). Which of the two jobs fails, or whether you even have a failure, is strictly a matter of chance timing between the two job executions and whether the actual OPENs on the dataset within the job steps overlap. Prior to this check being put into MVS (over a decade ago), it used to be possible for two such jobs to run to successful completion, but the result could be randomly trashed members and/or a trashed PDS directory.
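For illustration (the dataset and member names here are hypothetical), a minimal IEBGENER step that adds a member under DISP=SHR looks like this; if two such jobs run concurrently against the same PDS and their actual OPENs overlap, one of them can fail at OPEN as described above:

```jcl
//ADDMEM   EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  *
SOME INPUT RECORD
/*
//* DISP=SHR permits concurrent allocation by other jobs, so a second
//* job writing a different member at the same time risks an OPEN
//* failure (S213-30) in one job or the other, depending on timing
//SYSUT2   DD  DSN=MY.SHARED.PDS(NEWMEMA),DISP=SHR
```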

ISPF, the binder, IEBCOPY, and probably some other PDS utilities prevent this problem by doing additional internal enqueues on a PDS just before an OPEN for output to ensure that all actual OPENs for output are single-threaded even though the dataset was allocated with SHR. If you must write to a PDS with a utility that doesn't follow these ISPF enqueue conventions (like IEBGENER, which only understands sequential data sets), then your only choices are to write a front-end to the utility that obtains the required enqueues before linking to the utility, to use DISP=OLD, or to tolerate random failures.
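As a sketch of the DISP=OLD alternative (again with hypothetical names), changing only the output DD serializes the jobs at allocation time via the exclusive SYSDSN enqueue, so the second job waits for the dataset instead of abending at OPEN:

```jcl
//* DISP=OLD requests exclusive use of the whole PDS; MVS allocation
//* (exclusive SYSDSN enqueue) holds the second job until the first
//* one frees the dataset, at the cost of serializing all access
//SYSUT2   DD  DSN=MY.SHARED.PDS(NEWMEMA),DISP=OLD
```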

A PDSE dataset does support concurrent writes to multiple members; a PDS never has (and never will). IBM's original concept must have been that users would use DISP=OLD to update a PDS safely, but users obviously found this too restrictive and circumventions had to be devised.
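If converting the library to a PDSE is an option, the allocation differs mainly in DSNTYPE (a hedged sketch; the dataset name and space values are just examples):

```jcl
//* DSNTYPE=LIBRARY allocates a PDSE rather than a classic PDS; a
//* PDSE does not need the directory-block subparameter of SPACE
//NEWLIB   DD  DSN=MY.NEW.PDSE,DISP=(NEW,CATLG),
//             DSNTYPE=LIBRARY,
//             SPACE=(CYL,(10,10)),
//             RECFM=FB,LRECL=80,UNIT=SYSDA
```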

Even a single DISP=SHR IEBGENER PDS update job could run into or induce random OPEN failures if the PDS in question is also updated by ISPF users and the job by bad luck overlaps a write attempt to the same PDS from some ISPF user.


Magen Margalit wrote:
Parallel running results in RC 0 for one job
and RC 12 for the second, since the PDS was already enqueued.

It seems that MIM is not involved, since it is used for ENQ propagation and the second job doesn't issue an ENQ but is abended by z/OS
before it reaches MIM (no MIM msgs; the problem recurs when MIM is down)...

Magen
...


--
Joel C. Ewing, Fort Smith, AR        [EMAIL PROTECTED]

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
