Hi

While maybe not a solution to your exact problem, GPFS does allow heat-based tiering, which seems to me like a more correct way to ensure efficient utilisation of fast SSD space.

https://www.ibm.com/docs/en/storage-scale/5.0.5?topic=scale-file-heat-tracking-file-access-temperature
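
As a very rough sketch (assuming file heat tracking has been enabled first, e.g. with mmchconfig fileHeatPeriodMinutes=1440; the pool names and limit below are just placeholders), a migration rule can then weight candidates by the FILE_HEAT attribute:

RULE 'promote-hot' MIGRATE FROM POOL 'data' WEIGHT(FILE_HEAT) TO POOL 'system' LIMIT(90)

The actual data movement is driven by mmapplypolicy runs, not at file placement time.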

Best,

Ott Oopkaup
University of Tartu, High Performance Computing Centre
Systems Administrator

On 6/29/23 12:01, Alec wrote:
Yeah, that kind of placement isn't possible, because you can only use attributes known at the time of inode creation. When a file is created it gets the current timestamp, and the times are only updated later (usually after the copy finishes). If the majority of your data is going to be older than 365 days, you may want to make your file placement default to pool2; then, once you've finished copying all your older data and want to freshen it, update the placement policy to the proper pool so new data hits the high-speed disk.
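
For instance (the filesystem and policy file names here are just placeholders), the temporary policy during the bulk copy could be as simple as:

RULE 'default' SET POOL 'pool2'

and once the historical data is in, you would install the real placement policy again with something like "mmchpolicy fs1 placement.pol" so that new files land on the fast tier.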

You can use a file path/name in the placement policy, and some copy engines do give inodes temporary names before renaming them to their proper name. For example, rsync will start a file off with a leading "." (plus a random suffix) until the file is completely transferred, then rename it to the destination filename, and only then update the date, time, and ownership on that inode. So you could have your placement policy put anything starting with a "." into pool2, and then migrate fresher files back up to your higher storage tier if desired.
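
A hedged sketch of that idea, if your copy tool really does use dot-prefixed temporary names:

RULE 'copytmp' SET POOL 'pool2' WHERE NAME LIKE '.%'
RULE 'default' SET POOL 'system'

Worth verifying against your actual copy engine first, since not every tool names its temporary files this way.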

Not sure if any of that helps.

Alec

On Wed, Jun 28, 2023 at 10:12 PM Timm Stamer <[email protected]> wrote:

    Hello Prasad,

    we use this in our weekly policy run:

    RULE 'migrate cold data' MIGRATE FROM POOL 'system' TO POOL 'data'
    WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '30' DAYS
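
    For completeness, the weekly run itself is essentially just (filesystem
    and file names here are examples):

    mmapplypolicy gpfs0 -P weekly-policy.pol -I yes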


    I do not know if a direct placement based on timestamps is possible.



    Kind regards
    Timm Stamer


    On Wednesday, 28.06.2023 at 23:18 +0000, Prasad Surampudi wrote:
    >
    > Can we set up a file placement policy based on creation and
    > modification times when copying data from Windows into GPFS? It
    > looks like the placement policy accepts only CREATION_TIME and not
    > MODIFICATION_TIME or ACCESS_TIME. If I try to use the latter, I get
    > a message saying they are not supported in this context (placement?).
    > But even the policy with CREATION_TIME is not working properly. We
    > wanted files whose CREATION_TIME is more than 365 days ago to go to
    > a pool other than 'system', but when we copy files it dumps
    > everything into the system pool, even though the creation time looks
    > correct on the file after it is copied into GPFS. Does it check the
    > file's CREATION_TIME when a file gets copied over to GPFS?
    >
    > Here is the placement policy:
    > RULE 'tiering' SET POOL 'pool2'
    > WHERE ( DAYS(CURRENT_TIMESTAMP) - DAYS(CREATION_TIME) > 365 )
    > RULE 'default' SET POOL 'system'
    >
    >
    >
    > Prasad Surampudi | Sr. Systems Engineer
    > [email protected] | 302.419.5833
    >
    > Innovative IT consulting & modern infrastructure solutions
    > www.theatsgroup.com <http://www.theatsgroup.com>


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
