Uwe also asked whether it is unwise to run the external and internal migrations in an uncoordinated fashion, so that some files might be migrated to external storage before they undergo migration from one internal pool (pool0) to the other (pool1).
That's up to the admin. IOW, coordinate it as you like, or not at all, depending on what you're trying to accomplish. But the admin should understand how it works...

Whether you use mmchattr -P newpool or an mmapplypolicy rule with MIGRATE ... TO POOL 'newpool' to do an internal, GPFS-pool-to-GPFS-pool migration, there are two steps:

A) Mark the newly chosen, preferred newpool in the file's inode. From then on, as long as any data blocks are on GPFS disks that are NOT in newpool, the file is considered "ill-placed".

B) Migrate every data block of the file to 'newpool': allocate a block in newpool, copy a block of data, update the file's data pointers, etc, etc.

If you say "-I defer", only (A) is done. You can force (B) later with a restripe command (mmrestripefs or mmrestripefile). If you default or say "-I yes", then (A) is done and (B) is done as part of the work of the same command (mmchattr or mmapplypolicy). (If the command is interrupted, (B) may have happened for some subset of the data blocks, leaving the file "ill-placed".)

Putting "external" storage into the mix: you can save time and go faster if you migrate completely and directly from the original pool -- skip the "internal" migrate! If you're migrating but leaving a first-block "stub", you'll want to migrate to external first and then migrate just the one block "internally"...

On the other hand, if you're going to keep the whole file on GPFS storage for a while but want to free up space in the original pool, you'll want to migrate the data to newpool at some point... In that case you might also want to pre-migrate (make a copy on HSM but not free the GPFS copy). Should you pre-migrate from the original pool or from newpool? Your choice! Maybe you arrange things so you pre-migrate while the data is on the faster pool. Maybe it doesn't make much difference, so you don't even think about it anymore, now that you understand that GPFS doesn't care either! ;-)
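To make the two-step (A)/(B) mechanics concrete, here is a minimal sketch of the two ways to drive an internal migration. The filesystem name "fs0", the policy file path, and the sample file path are assumptions for illustration; the pool names pool0/pool1 come from the discussion above. Run nothing like this without checking it against your own cluster and the command references first.

```
# Hypothetical policy file /tmp/mig.pol -- one rule moving data
# from pool0 to pool1 (step (A) marks the inode; step (B) moves blocks):
#
#   RULE 'toPool1' MIGRATE FROM POOL 'pool0' TO POOL 'pool1'

# (A) and (B) together in one pass (-I yes is the default):
mmapplypolicy fs0 -P /tmp/mig.pol -I yes

# Or do (A) only, leaving the files "ill-placed":
mmapplypolicy fs0 -P /tmp/mig.pol -I defer

# ...and force (B) later by restriping the ill-placed data:
mmrestripefs fs0 -p

# Per-file alternative to the policy run, same two-step behavior:
mmchattr -P pool1 -I defer /fs0/somefile
```

The deferred form is what makes the ordering question above interesting: between the mmapplypolicy -I defer and the mmrestripefs, files sit marked for pool1 but with blocks still in pool0, and an external (HSM) migration can happen in that window without GPFS caring.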
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss
