Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
The problem was corrected, but I have no idea how.

I stopped the migration, and replication kicked off. I let the replication run
for an hour or so while I was at lunch. When I got back, I stopped replication
and restarted the migration. I have no idea how, but this time it used the
correct volumes.

If this makes any sense, by all means please explain it to me.

I appreciate all the help.
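
For anyone chasing the same symptom, a quick sanity check on where a restarted
migration is actually writing might look like the following (a sketch using the
pool names from this thread; adjust names and times to your environment):

   q proc
   q vol stgpool=ddstgpool4500 status=filling
   q actlog begint=-01:00 se=DDSTGPOOL4500

If new scratch volumes are being defined in the source pool rather than in the
next pool, that usually shows up right away in the scratch-volume messages in
the activity log.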

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Ron 
Delaware
Sent: Wednesday, September 21, 2016 12:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Ricky,

Could you please send the output of the following commands:
1. Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way the stgpool would migrate back to itself would be if there
were a loop, meaning your disk pool points to the tape pool as the next
stgpool, and your tape pool points to the disk pool as the next stgpool. If
your tape pool were to hit the high migration mark, it would start a migration
back to the disk pool.



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe)
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" 



Nothing out of the normal; the output below is kind of odd, but I'm not sure
it has anything to do with my problem.

Now here is something: I have another disk pool that is supposed to migrate
to the tapepool, and now it's doing the same thing, migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
                      deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
                      deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
                      deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
>  

Re: TSM Migration Question

2016-09-21 Thread Ron Delaware
Ricky,

Could you please send the output of the following commands:
1. Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way the stgpool would migrate back to itself would be if there
were a loop, meaning your disk pool points to the tape pool as the next
stgpool, and your tape pool points to the disk pool as the next stgpool. If
your tape pool were to hit the high migration mark, it would start a migration
back to the disk pool.
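
For reference, the next-pool chain described above can also be checked in one
pass with a SELECT against the server's STGPOOLS view (a sketch, written as it
might appear in an admin macro; the column names are assumed from the standard
view and may differ by server level):

   q mount
   q act begint=-12 se=Migration
   /* list every pool and the pool it migrates to, to spot a loop */
   select stgpool_name, nextstgpool from stgpools

Any pool whose NEXTSTGPOOL chain eventually points back at itself, directly or
through another pool, would produce the back-and-forth migration described
above.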



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" 



Nothing out of the normal; the output below is kind of odd, but I'm not sure
it has anything to do with my problem.

Now here is something: I have another disk pool that is supposed to migrate
to the tapepool, and now it's doing the same thing, migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
                      deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
                      deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
                      deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
> Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
> Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7

Re: TSM Migration Question

2016-09-21 Thread Gee, Norman
Is it migrating, or reclaiming at 70% as defined?
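
A quick way to tell the two apart is to look at the process list; if it turns
out to be reclamation, it can be paused while the pool move finishes by raising
the threshold (a sketch using the pool name from this thread):

   q proc
   /* temporarily disable reclamation on the source pool */
   update stgpool ddstgpool reclaim=100

Dropping the threshold back to 70 afterwards restores the behavior shown in
the Q STG output quoted below.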

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Wednesday, September 21, 2016 8:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Migration Question

Nothing out of the normal; the output below is kind of odd, but I'm not sure
it has anything to do with my problem.

Now here is something: I have another disk pool that is supposed to migrate
to the tapepool, and now it's doing the same thing, migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
> Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
> Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
Nothing out of the normal; the output below is kind of odd, but I'm not sure
it has anything to do with my problem.

Now here is something: I have another disk pool that is supposed to migrate
to the tapepool, and now it's doing the same thing, migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
> Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
> Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 0
>  Reclamation in Progress?: No
>Last Update by (administrator): RPLAIR
> Last 

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?
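
If the intent is to hold everything on DDSTGPOOL4500 until the pool-to-pool
move is finished, one option (a sketch, not necessarily required here) is to
keep its high migration threshold at 100 or clear the next pool temporarily:

   update stgpool ddstgpool4500 highmig=100
   /* or detach the tape pool until the move is complete */
   update stgpool ddstgpool4500 nextstgpool=""

Either setting can be reverted once the data has landed in the new pool.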

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 0
>  Reclamation in Progress?: No
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 08:38:58
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Skylar Thompson
> Sent: Wednesday, September 21, 2016 11:19 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TSM Migration Question
>
> Can you post the output of "Q STG F=D" for each of those pools?
>
> On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> > Within TSM I am migrating an old 

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
OLD STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool f=d

Storage Pool Name: DDSTGPOOL
Storage Pool Type: Primary
Device Class Name: DDFILE
   Estimated Capacity: 402,224 G
   Space Trigger Util: 69.4
 Pct Util: 70.4
 Pct Migr: 70.4
  Pct Logical: 95.9
 High Mig Pct: 100
  Low Mig Pct: 95
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 26
Reclamation Processes: 10
Next Storage Pool: DDSTGPOOL4500
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 2,947
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 4,560
 Reclamation in Progress?: Yes
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 09:05:51
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No



NEW STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d

Storage Pool Name: DDSTGPOOL4500
Storage Pool Type: Primary
Device Class Name: DDFILE1
   Estimated Capacity: 437,159 G
   Space Trigger Util: 21.4
 Pct Util: 6.7
 Pct Migr: 6.7
  Pct Logical: 100.0
 High Mig Pct: 90
  Low Mig Pct: 70
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool: TAPEPOOL
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 0
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 0
 Reclamation in Progress?: No
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 08:38:58
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No
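
For comparing just the migration-related settings of the two pools side by
side, a SELECT along these lines can be handy (a sketch; the column names are
assumed from the standard STGPOOLS view and may differ by server level):

   select stgpool_name, nextstgpool, highmig, lowmig, migprocess from stgpools where stgpool_name like 'DDSTGPOOL%'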


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct, and migration is hi=0 lo=0 with 25 migration
> processes, but I had to stop it.
>
> Now when I restart the migration process, it is migrating to the old storage
> volumes instead of the new ones. Basically it's just migrating from one disk
> volume inside the ddstgpool to another disk volume in the ddstgpool.
>
> It is not using the next pool parameter,  has anyone seen this problem 

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct, and migration is hi=0 lo=0 with 25 migration
> processes, but I had to stop it.
>
> Now when I restart the migration process, it is migrating to the old storage
> volumes instead of the new ones. Basically it's just migrating from one disk
> volume inside the ddstgpool to another disk volume in the ddstgpool.
>
> It is not using the next pool parameter; has anyone seen this problem before?
>
> I appreciate the help.
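
For context, a pool-to-pool move like the one described is usually driven
either by dropping the migration thresholds on the source pool or with an
explicit MIGRATE STGPOOL command; a sketch with the pool names from this
thread (the parameter values are illustrative, not what was actually run):

   update stgpool ddstgpool nextstgpool=ddstgpool4500 migprocess=25
   update stgpool ddstgpool highmig=0 lowmig=0
   /* or drive it as a one-off administrative command */
   migrate stgpool ddstgpool lowmig=0

In either case the data should land in whatever pool NEXTSTGPOOL names at the
time the migration processes start.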

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: TSM Migration Question

2006-08-22 Thread Kevin Kinder
Gerry,

We migrated from 5.1.7.0 to 5.3.0.0, and then to 5.3.3.3 on z/OS.

If you have any questions, feel free to contact me directly.

Kevin Kinder
admin(at)wvadmin.gov

 [EMAIL PROTECTED] 8/22/06 1:57 PM 
Good Day To All.

 

Currently, we are running 5.2.3.3 on a z/OS platform and we
are preparing to migrate to 5.3.

 

Couple of questions:

 

1) Has anyone in the z/OS world gone to 5.3.3 yet?

2) From 5.2.3.3, can we migrate directly to 5.3.3? Or do we
migrate from 5.2.3.3 ... to 5.3 ... to 5.3.1 ... to 5.3.3?

 

Gerry P. Weaver