Hi Marc, thanks. I understand that being premigrated to an external pool should not affect the internal migration of a file.

FYI: this is not the typical "gold - silver - bronze" setup with a one-dimensional migration path. Instead, one of the internal pools (pool0) is used to receive files written in very small records; the other (pool1) is the "normal" pool and receives all other files. Files written to pool0 should move to pool1 once they are closed (i.e. complete), but pool0 has enough capacity to live without off-migration to pool1 for a few days, so I thought to run that migration no more than once per day.

The external pool serves as a remote async mirror, to achieve some resiliency against FS failures and also against unintentional file deletion (metadata / SOBAR backups, and file listings to keep the HPSS coordinates of GPFS files, are done regularly); only in the long run will data be purged from pool1. Thus, migration to external should be done at shorter intervals.

Sounds like I can go ahead without hesitation.
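For what it's worth, the internal leg of that setup can be written as a single policy rule. This is only a sketch: the rule name and the one-day age test are my own placeholders, and since the policy SQL has no notion of "file is closed", modification age is used as a proxy for "complete":

```
/* Hypothetical sketch of the daily pool0 -> pool1 migration described above.
   Rule name and age test are illustrative, not from the thread. */
RULE 'p0_to_p1' MIGRATE
     FROM POOL 'pool0'
     TO POOL 'pool1'
     WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) > INTERVAL '1' DAY
```

One would presumably run this from cron via mmapplypolicy once a day, matching the intended frequency.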
Mit freundlichen Grüßen / Kind regards

Dr. Uwe Falke
IT Specialist
High Performance Computing Services / Integrated Technology Services / Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: [email protected]
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: Thomas Wolter, Sven Schooß
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122

From: "Marc A Kaplan" <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 24/04/2018 23:10
Subject: Re: [gpfsug-discuss] ILM: migrating between internal pools while premigrated to external
Sent by: [email protected]

Uwe also asked: whether it is unwise to have the external and the internal migrations in an uncoordinated fashion, so that it might happen that some files have been migrated to external before they undergo migration from one internal pool (pool0) to the other (pool1)

That's up to the admin. IOW, coordinate it as you like or not at all, depending on what you're trying to accomplish. But the admin should understand...

Whether you use mmchattr -P newpool or mmapplypolicy/MIGRATE TO POOL 'newpool' to do an internal, GPFS-pool-to-GPFS-pool migration, there are two steps:

A) Mark the newly chosen, preferred newpool in the file's inode. Then, as long as any data blocks are on GPFS disks that are NOT in newpool, the file is considered "ill-placed".

B) Migrate every data block of the file to 'newpool' by allocating a block in newpool, copying a block of data, updating the file's data pointers, etc., etc.

If you say "-I defer", then only (A) is done. You can force (B) later with a restripeXX command.
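A minimal sketch of that deferred variant, assuming a file system device gpfs0 and a policy file migrate.pol (both names invented here for illustration):

```
# Step (A) only: record the new target pool in each file's inode;
# the files become "ill-placed" but no data blocks move yet.
mmapplypolicy gpfs0 -P migrate.pol -I defer

# Step (B) later, e.g. off-peak: move the ill-placed data blocks
# to their designated pools.
mmrestripefs gpfs0 -p
```

Deferring (B) this way lets the expensive data movement happen in one bulk restripe at a convenient time, rather than inline with the policy scan.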
If you default or say "-I yes", then (A) is done and (B) is done as part of the work of the same command (mmchattr or mmapplypolicy). (If the command is interrupted, (B) may have happened for only some subset of the data blocks, leaving the file "ill-placed".)

Putting "external" storage into the mix -- you can save time and go faster if you migrate completely and directly from the original pool -- skip the "internal" migrate! Maybe, if you're migrating but leaving a first-block "stub", you'll want to migrate to external first and then migrate just the one block "internally"...

On the other hand, if you're going to keep the whole file on GPFS storage for a while but want to free up space in the original pool, you'll want to migrate the data to a newpool at some point... In that case you might also want to pre-migrate (make a copy on HSM but not free the GPFS copy). Should you pre-migrate from the original pool or the newpool? Your choice! Maybe you arrange things so you pre-migrate while the data is on the faster pool. Maybe it doesn't make much difference, so you don't even think about it anymore, now that you understand that GPFS doesn't care either! ;-)

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
