Running TSM 5.5 on z/OS. We run nightly backups to disk storage pools, then migrate them to tape the next day.
Last night, one of our disk storage pools filled up due to a mass dump of data from one of the clients backed up to that pool. The high and low migration thresholds on this pool were set to 85% and 60% respectively. Automatic migration of this storage pool kicked in as configured, with four processes starting to migrate the pool's data to tape. Three of the processes finished within a couple of hours. The fourth, however, is still running seven hours later. This morning I tried to start another migration process on this pool, but it will not initiate; only the one still running from last night is active.

My questions:

1) Since this is an automatic migration kicked off by the thresholds, does it have to complete before any new migration processes can start? If so, is there any way to change or override that? Is only one process running because the data remaining in the pool appears, as far as I can see, to belong to a single node?

2) Could I safely cancel this migration process, then restart four (or six, or eight) new migration processes? It sometimes takes as long to cancel a migration process as it would to let it complete, and I wouldn't want to spend another six hours canceling it just to get back where I started.

3) Will this automatic migration stop when the pool reaches the lower threshold (60%), or will it keep going until the pool is empty?

----------------
Kevin Kinder
State of WV
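P.S. For reference, here is a sketch of the admin commands I have been looking at for this, run from a dsmadmc session. The pool name DISKPOOL and process number 123 are placeholders, not our actual names:

```
/* Check pool utilization and the current migration settings */
query stgpool DISKPOOL format=detailed

/* See the active migration process and how much data it has moved */
query process

/* Raise the number of parallel migration processes used by
   threshold-triggered migration on future runs */
update stgpool DISKPOOL migprocess=4

/* Cancel the long-running process by its process number,
   then drive migration manually down to an empty pool */
cancel process 123
migrate stgpool DISKPOOL lowmig=0 wait=no
```

I'm mainly unsure whether canceling and rerunning as shown above is safe, or whether the threshold-triggered process has to run to completion first.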
