There is a delay between the building of the list and the actual backup of each 
dataset on the list.  Generally, no enqueue is set at the time the list is 
built - as you said, it may be hours before each dataset is actually backed up.
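For reference, DSS's serialization behavior on a logical dump is controlled by 
keywords on the DUMP command itself.  A minimal sketch (the dataset names, DD 
names, and filter are made up - adjust to your shop's standards):

```
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUTDD    DD DSN=BACKUP.WEEKLY.DUMP,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD *
  DUMP DATASET(INCLUDE(PROD.**)) -
       OUTDDNAME(OUTDD)          -
       SHARE                     -
       TOL(ENQF)
/*
```

SHARE requests shared rather than exclusive serialization, and TOL(ENQF) tells 
DSS to dump a dataset even when the enqueue fails.  Neither moves the enqueue 
earlier - they only determine what happens when DSS finally gets to the dataset.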

Do you run DSS natively, or are you calling it from another product, perhaps 
CA-Disk?  If you are using DSS under the covers of CA-Disk, you can pass a parm 
(on the SYSPARMS DD statement) to enqueue the dataset.  That may cause problems 
for your batch processing - enqueues can force jobs to wait, which can lead to 
time-outs or other problems, depending on what those jobs are doing.  You 
could also have CA-Disk retry, which in your case could result in the new 
version of the dataset being backed up.  That might or might not be okay, 
depending on what you need to do.  

I would think that a better approach would be to look at the backup needs for 
that dataset and make changes accordingly.  You might move the backup step into 
the job that deletes and rebuilds the dataset, or tell your scheduling system 
not to run one job until the other has finished - whatever fits your situation 
best.  You might also look at breaking the backup job up into multiple jobs, 
each backing up one storage group or the contents of one catalog at a time 
(just as an example).
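To sketch that last idea (the storage group and output dataset names here are 
made up), each smaller job would drive ADRDSSU against a single storage group:

```
//SGDUMP1  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUTDD    DD DSN=BACKUP.SGPROD1.DUMP,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD *
  DUMP DATASET(INCLUDE(**)) -
       STORGRP(SGPROD1)     -
       OUTDDNAME(OUTDD)
/*
```

A second job would name SGPROD2, and so on.  Each job's window is shorter, 
which narrows the gap between building the list and actually dumping the data.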

Linda Mooney

-------------- Original message -------------- 
From: Johnny Luo <[EMAIL PROTECTED]> 

> Hi, 
> 
> We encountered a problem on our production system. 
> 
> A job was using DSS to back up a lot of data sets (logical dump), and we got 
> ADR321E for one extended-format PS data set: the data set was not on the 
> expected volume. 
> 
> This job runs for more than 3 hours, and we found that another job deletes 
> the data set and re-creates it during that time (1.5 hours after the dump 
> job starts).  That might be the cause of the ADR321E. 
> 
> And from the archive I found a similar thread: 
> 
> It is entirely possible that when DFDSS was building his process list the 
> dataset was on the volume. Between that time the dataset probably got deleted 
> and re-allocated. Naturally when DFDSS went to copy the dataset later on, it 
> wasn't there. Hence the error message. 
> 
> Moral of the story: Don't delete datasets until after DFDSS has copied them. 
> 
> Does that mean there is a time interval between when DSS locates the data set 
> via the catalog and when DSS actually places the ENQUEUE on it? 
> 
> If so, from our experience this interval can be very long for a big dump 
> job, which gives another job the chance to delete the target data set.  Is 
> there any way to ask DSS to ENQUEUE earlier? 
> 
> Thanks. 
> 
> -- 
> Best Regards, 
> Johnny Luo 
> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
