From a Logger perspective there is no single clear answer, but there are a number of tools to help get off the data sets.
Logger data sets are of the form HLQ.lsname.suffix, and there are staging data sets and offload data sets.

To get off of staging data sets, you can change the duplex method for the log streams to STG_DUPLEX(NO); that will get off the data set. Then change it back, and Logger can re-allocate a staging data set on the new DASD. You'll have to go through a user-managed rebuild after each duplex change for the change to take effect.

For offload data sets, Logger keeps the most recently used offload data set for each log stream allocated on each system. The data sets remain allocated until a new offload data set is needed because the previous one filled. This is a tricky condition to force.

1) You can force an offload with the samplib proc IXGOFLDS: S IXGOFLDS,LOGSTRM=lsname. This is a relatively risk-free option, but if there is not enough log data in primary storage, it may not cause a new data set allocation to get off the current one.

2) D LOGGER,C,LSN=lsname,D will show you the jobs that are using the log stream. You can then use the recommended method to quiesce those applications.

3) Many, but not all, Logger applications will automatically reconnect if they are disconnected. You can do a SETLOGR FORCE,DISC,LSName=x to disconnect from the log stream on a system; you might have to do this from multiple systems and for multiple log streams to free all the data sets used on a volume. Disconnecting should cause the data sets to be unallocated. However, if applications don't like being disconnected, they may have to be restarted.

You'll have to investigate each Logger exploiter's behavior to see how it tolerates these conditions, and hopefully some combination of the above will get you what you need.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
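P.S. For the staging-data-set approach, the STG_DUPLEX change is made with the IXCMIAPU administrative data utility. A minimal sketch, assuming a CF-structure log stream; the job name, accounting info, and log stream name are placeholders you'd replace with your own:

```
//LOGRUPD  JOB (ACCT),'UPDATE LOGSTREAM',CLASS=A,MSGCLASS=H
//*  Update the log stream definition in the LOGR couple data set
//STEP1    EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  UPDATE LOGSTREAM NAME(your.lsname) STG_DUPLEX(NO)
/*
```

Then drive the user-managed rebuild (for example, SETXCF START,REBUILD,STRNAME=strname) so the change takes effect, and later run the same job with STG_DUPLEX(YES) plus another rebuild once the new DASD is in place.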