Ravi Gaur wrote:

>We had a very weird situation where a TSO user compressed a PDS with 6000+ 
>members, and eventually the JCL in all of those members ended up nearly 
>identical. So it looks like, during the STOW, the same memory was somehow 
>overlaid or the directory TTRs went wrong...

The REGION size may be too small, or there was no workspace for it. Or your TSO 
session was interrupted somehow. Now and then a compress by IEBCOPY can crash 
if you're not careful.


>We had no backup of the dataset, so we had to create a new one from our best 
>knowledge, or from members recovered from 2011 ...

You have NO backup? Is it a production PDS? Ouch.... (We have a trophy just for 
such disasters. The new owner keeps it until another person makes an error...)


>Now the challenge has been to figure out what really happened, and whether 
>there is any way to restore it to the state it was in before the compress 
>(I've been told it's not possible... but thought to bring it to the table)..

As documented in 'DFSMSdfp Utilities':

<snip>
It is a good idea to make a copy of the data set that you intend to 
compress-in-place before you actually do so. You can use the same execution of 
IEBCOPY to do both, and a following job step could delete the backup copy if 
the IEBCOPY job step ends with a return code of 0.

Attention: A partitioned data set can be destroyed if IEBCOPY is interrupted 
during processing, for example, by a power failure, abend, TSO attention, or 
I/O error. Keep a backup of the partitioned data set until the successful 
completion of a compress-in-place.

Attention: Do not compress a partitioned data set currently being used by more 
than one user. If you do, the other users will see the data set as damaged, 
even though it is not. If any of these other users update the partitioned data 
set after you compress it, then their update will actually damage the 
partitioned data set. 
<end snip>

IMHO: I would replace 'a good idea' with 'mandatory'. And I would NOT delete 
the backup copy, because I'm paranoid and I don't want that trophy. ;-[
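For what it's worth, here is a minimal sketch of such a backup-then-compress 
job along the lines the manual describes. The dataset names and JOB card are 
made up; adjust the allocation to your shop's standards. The second step only 
runs if the backup step ended with RC=0, and (true to my paranoia) nothing 
deletes the backup:

```jcl
//BACKCOMP JOB (ACCT),'BACKUP+COMPRESS',CLASS=A,MSGCLASS=X
//* Step 1: back up the PDS before touching it (names are examples)
//BACKUP   EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//IN       DD DSN=MY.PROD.JCLLIB,DISP=SHR
//OUT      DD DSN=MY.PROD.JCLLIB.BACKUP,
//            DISP=(NEW,CATLG,DELETE),
//            LIKE=MY.PROD.JCLLIB
//SYSIN    DD *
  COPY OUTDD=OUT,INDD=IN
/*
//* Step 2: compress in place, but only if the backup got RC=0
//COMPRESS EXEC PGM=IEBCOPY,COND=(0,NE,BACKUP)
//SYSPRINT DD SYSOUT=*
//PDS      DD DSN=MY.PROD.JCLLIB,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=PDS,INDD=PDS
/*
```

Note DISP=OLD on the compress step, so nobody else has the PDS allocated 
while it is being compressed in place.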

Sorry, but I can't really help you here. Perhaps you can ask your storage 
admins whether they can use HRECOVER or have a dump of your dataset somewhere...
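If DFSMShsm was taking backup versions of that dataset, something like this 
TSO command (dataset name made up, and assuming you or the storage admin are 
authorized) might bring back the most recent backup version:

```
HRECOVER 'MY.PROD.JCLLIB' GENERATION(0) REPLACE
```

GENERATION(0) asks for the most recent backup version; REPLACE overwrites the 
existing (damaged) dataset, so think twice before using it.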

Groete / Greetings
Elardus Engelbrecht

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN