Christian,

Like any AFM deployment in Spectrum Scale, you define how much local working 
space you want to allocate in the Spectrum Scale filesystem; files are then 
cached on demand into this 'working set' and will never exceed that upper bound.
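
Purely as a sketch of how that cap is usually expressed (the filesystem name 
'fs1', fileset name 'objcache' and the sizes below are placeholders, and the 
options vary by release, so check the mmsetquota and mmchfileset man pages):

  # Cap the local working set with a fileset block quota (soft:hard)
  mmsetquota fs1:objcache --block 8T:10T

  # Optionally let AFM evict cached data automatically once the soft
  # quota is exceeded (fileset-level AFM parameter)
  mmchfileset fs1 objcache -p afmEnableAutoEviction=yes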

If you create a new file on this filesystem (in this fileset), it will 
automatically be queued to be replicated to the backend. If you make 
continuous changes to a file, setting afmAsyncDelay can reduce the number 
of writes to the backend object store.
The local copy, though, will be kept and only stubbed as space runs out (or 
when some other explicit rule is applied, such as eviction via mmafmctl evict).
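
Again only as a sketch (same placeholder names; afmAsyncDelay is in seconds, 
and the evict options differ slightly between releases, so verify against the 
mmafmctl man page):

  # Let writes coalesce for five minutes before they are flushed to the
  # object backend, reducing repeated PUTs for frequently changed files
  mmchfileset fs1 objcache -p afmAsyncDelay=300

  # Explicitly stub cached copies ahead of the quota, oldest first;
  # the safe-limit target below is a placeholder value
  mmafmctl fs1 evict -j objcache --order LRU --safe-limit 8000000000000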



Daniel

Daniel Kidger
HPC Storage Solutions Architect, EMEA
[email protected]
+44 (0)7818 522266

hpe.com


________________________________
From: gpfsug-discuss <[email protected]> on behalf of Christian 
Petersson <[email protected]>
Sent: 12 May 2022 13:47
To: IBM Spectrum Scale <[email protected]>
Cc: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] TCT Premigration Question

Thanks for the answer,
We did look at AFM to Cloud Object Storage, but we had some concerns about it.
The base design was to replicate a copy from our primary cluster to the backup 
cluster, and from there push that data directly to S3.
Our second cluster has much less disk space than our primary cluster, and our 
primary cluster has an HSM function enabled to a tape library.

If I start using AFM to Object, will that delete the file from my disk and keep 
only a stub file or similar on disk?
This is why we are looking at TCT instead of AFM to Object: we don't have enough 
disk space to keep multiple copies of the data on disk, so we need to use other 
storage layers such as S3 and tape.

/C

On Thu, 12 May 2022 at 14:38, IBM Spectrum Scale 
<[email protected]> wrote:
Some thoughts to consider regarding your question.

        1. If this is a “new” deployment, it is recommended that you evaluate 
the AFM object support, since the TCT feature is stabilized and no future 
changes or enhancements will be made to it.
        2. TCT doesn’t support Ceph cloud object storage. So, even though it 
is an S3-compliant interface and things are working, if any issues arise in 
the future IBM would require that you reproduce the problem in a supported 
environment.
        3. If a file is skipped because of errors and the (pre)migration is 
policy driven, the next policy run will pick up the file again. Even within a 
single (pre)migration, TCT retries five times on failure. So the file will be 
retried at both levels (within TCT as well as at the policy level); a rough 
sketch follows below.
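
As a rough illustration of the policy-level retry only (the filesystem name, 
policy file path and the EXTERNAL POOL EXEC script below are placeholders; 
start from the TCT sample policies shipped with the product rather than from 
this sketch):

  # Minimal illustrative policy file; the EXEC path is a placeholder
  printf "%s\n" \
    "RULE EXTERNAL POOL 'tct' EXEC '/opt/ibm/MCStore/bin/mcstore'" \
    "RULE 'premig' MIGRATE FROM POOL 'system' TO POOL 'tct'" \
    > /tmp/tct-premigrate.pol

  # Re-running the same policy after a partial run picks up the files
  # that were skipped in the previous pass
  mmapplypolicy fs1 -P /tmp/tct-premigrate.pol -I yes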


Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale 
(GPFS), then please post it to the public IBM developerWorks Forum at 
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and 
you have an IBM software maintenance contract please contact  1-800-237-5511 in 
the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for 
priority messages to the Spectrum Scale (GPFS) team.



From:        "Christian Petersson" <[email protected]>
To:        [email protected]
Date:        05/11/2022 03:34 AM
Subject:        [EXTERNAL] [gpfsug-discuss] TCT Premigration Question
Sent by:        "gpfsug-discuss" <[email protected]>
________________________________



Hi,
I have set up a new Spectrum Scale cluster where we are using TCT premigration 
to a Ceph S3 cluster. The premigration itself works fine and data is reaching 
our S3 storage.
But from time to time I hit our S3 max-sessions limit and get the following 
errors in our policy run.

<1> MCSTG000130E: Command Failed with following reason: Request failed with 
error : com.ibm.gpfsconnector.messages.GpfsConnectorException: Unable to 
migrate data to the cloud, an unexpected exception occurred : 
com.amazonaws.ResetException: The request to the service failed with a 
retryable reason, but resetting the request input stream has failed. See 
exception.getExtraInfo or debug-level logging for the original failure that 
caused this retry.;  If the request involves an input stream, the maximum 
stream buffer size can be configured via 
request.getRequestClientOptions().setReadLimit(int).

But I wonder: if a file gets skipped, will Spectrum Scale automatically retry 
that file later, or will it stay skipped, and how can I catch those files for a 
new migration?

--
Kind regards
Christian Petersson

E-mail: [email protected]
Mobile: 070-3251577






--
Kind regards
Christian Petersson

E-mail: [email protected]
Mobile: 070-3251577

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
