Gabor Bota created HADOOP-16279:
-----------------------------------
Summary: S3Guard: Implement time-based (TTL) expiry for entries
(and tombstones)
Key: HADOOP-16279
URL: https://issues.apache.org/jira/browse/HADOOP-16279
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Reporter: Gabor Bota
In HADOOP-15621 we implemented TTL for authoritative directory listings and
added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}},
which extends {{ExpirableMetadata}}, so every metadata entry in DynamoDB can
already carry an expiry, but the expiry logic itself is not implemented yet.
To complete this feature the following should be done:
* Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
* Implement metadata entry and tombstone expiry
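As a minimal sketch of what the expiry check could look like (this is not the
actual Hadoop code; the {{Entry}} class and field names here are hypothetical,
standing in for the {{ExpirableMetadata}} hierarchy), an entry or tombstone
would be considered expired once the configured TTL has elapsed since it was
last written:

```java
import java.util.concurrent.TimeUnit;

// Sketch only: a stand-in for an ExpirableMetadata-style entry.
// The real implementation lives in the PathMetadata/DDBPathMetadata
// hierarchy; names and fields here are illustrative assumptions.
public class ExpirySketch {
    static class Entry {
        final long lastUpdatedMillis; // when the entry was written to the store

        Entry(long lastUpdatedMillis) {
            this.lastUpdatedMillis = lastUpdatedMillis;
        }

        /** Expired once ttl has elapsed since the last update.
         *  The same check would apply to entries and tombstones alike. */
        boolean isExpired(long ttlMillis, long nowMillis) {
            return lastUpdatedMillis + ttlMillis <= nowMillis;
        }
    }

    public static void main(String[] args) {
        long ttl = TimeUnit.MINUTES.toMillis(15); // one TTL for entries and tombstones
        Entry entry = new Entry(1_000_000L);
        System.out.println(entry.isExpired(ttl, 1_000_000L + ttl - 1)); // false
        System.out.println(entry.isExpired(ttl, 1_000_000L + ttl));     // true
    }
}
```

Passing the current time in as a parameter (rather than calling
{{System.currentTimeMillis()}} inside the check) keeps the expiry decision
trivially testable, which the new {{ITestS3GuardTtl}} cases would benefit from.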
I would like to start a debate on whether we need separate expiry times for
entries and tombstones. I'm +1 on not using separate settings - so a single
config name and value covering both.
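For illustration, the single-setting option could look like the following
config fragment (the property name here is hypothetical - the actual name
would be settled during review):

```xml
<!-- Hypothetical property, for illustration only: one TTL value
     applied to both metadata entries and tombstones. -->
<property>
  <name>fs.s3a.s3guard.metadata.ttl</name>
  <value>15m</value>
</property>
```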
Notes:
* In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, using
an existing feature in guava's cache implementation. Expiry is set with
{{fs.s3a.s3guard.local.ttl}}.
* This is not the same as, and does not use, DynamoDB's native TTL feature
[(DOCS)|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
We need stronger consistency guarantees than DynamoDB promises: [cleanup once
a day by a background
job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
is not usable for this feature - although it could be used as a general cleanup
mechanism, separately and independently from S3Guard.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)