[
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854914#comment-16854914
]
Steve Loughran commented on HADOOP-16279:
-----------------------------------------
Reviewed the code; I don't see any fundamental issues (at least none that I
can understand). Suggested more tests.
Regarding the failing committer test: that's listed in HADOOP-16207.
In HADOOP-15183 I actually log the time the file was recorded as deleted, which
is a bit more informative.
But: I have not seen this failure myself, which worries me, as I can't then say
"it's gone away". Again, in HADOOP-15183 I'm doing more work on the put
operation when committing files, but a lot of that is actually reducing the
number of parent directory markers created. Currently, committing a file seems
to put() something for each of them.
IMO, the only way I could see ITestDirectoryCommitMRJob failing is if we didn't
reinstate the deleted file.
What I will do (right now) is see if I can enhance that test by always creating
and then deleting the destination directory (to guarantee that a tombstone
marker is always added), then committing work underneath it. Something like the
sketch below.
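A rough sketch of the setup, not the actual test code (the class and method
names here are placeholders):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TombstoneSetupSketch {
  // Force a tombstone on the destination before committing: create
  // the directory, then delete it, so S3Guard records a tombstone
  // for that path in the metadata store.
  public static void forceTombstone(Configuration conf, Path destDir)
      throws Exception {
    FileSystem fs = destDir.getFileSystem(conf);
    fs.mkdirs(destDir);        // entry created in the store
    fs.delete(destDir, true);  // replaced by a tombstone
    // ...then commit work under destDir and assert that the
    // committed files are visible despite the tombstone.
  }
}
{code}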
> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> -----------------------------------------------------------------------
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Gabor Bota
> Assignee: Gabor Bota
> Priority: Major
> Attachments: Screenshot 2019-05-17 at 13.21.26.png
>
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}},
> which extends {{ExpirableMetadata}}, so every metadata entry in ddb can
> expire, but the implementation is not done yet.
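>
> A minimal sketch of the expiry check this implies (field and method names
> here are illustrative, not the actual {{ExpirableMetadata}} API):
> {code:java}
> // Illustrative only: an entry is expired once its last-update time
> // is older than "now" minus the configured TTL.
> public class ExpirySketch {
>   private final long lastUpdatedMillis;  // stand-in for last_updated
>
>   public ExpirySketch(long lastUpdatedMillis) {
>     this.lastUpdatedMillis = lastUpdatedMillis;
>   }
>
>   public boolean isExpired(long ttlMillis, long nowMillis) {
>     return lastUpdatedMillis + ttlMillis <= nowMillis;
>   }
> }
> {code}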
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry
> I would like to start a debate on whether we need separate expiry times for
> entries and tombstones. I'm +1 on not using separate settings - so only one
> config name and value, as in the sketch below.
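>
> For illustration, reading a single setting might look like this (the config
> key name below is a placeholder, not a committed name):
> {code:java}
> import java.util.concurrent.TimeUnit;
> import org.apache.hadoop.conf.Configuration;
>
> public class TtlConfigSketch {
>   public static void main(String[] args) {
>     // One TTL drives both entry and tombstone expiry.
>     Configuration conf = new Configuration();
>     long ttlMillis = conf.getTimeDuration(
>         "fs.s3a.metadatastore.metadata.ttl", // placeholder key name
>         TimeUnit.MINUTES.toMillis(15),       // illustrative default
>         TimeUnit.MILLISECONDS);
>     System.out.println("TTL (ms): " + ttlMillis);
>   }
> }
> {code}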
> ----
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore,
> using an existing feature in guava's cache implementation. Expiry is set with
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL are different. That TTL uses the
> guava cache's internal support for expiring entries. This is an
> S3AFileSystem-level solution in S3Guard, a layer above all metadata stores.
> * This is not the same as, and does not use, [DDB's TTL
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
> We need different behavior from what ddb promises: [cleaning once a day
> with a background
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
> is not usable for this feature - although it can be used as a general
> cleanup solution, separately and independently from S3Guard.
> * Use the same TTL for entries and authoritative directory listings.
> * All entries can expire. When one does, the metadata returned from the MS
> will be null.
> * Add two new methods, {{pruneExpiredTtl()}} and
> {{pruneExpiredTtl(String keyPrefix)}}, to the MetadataStore interface. These
> methods will delete all expired metadata from the MS (see the sketch after
> these notes).
> * Use the {{last_updated}} field in the MS for both file metadata and
> authoritative directory expiry.
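>
> As a sketch, the interface additions could look like this (the interface
> name here is a stand-in; the real methods would go on {{MetadataStore}}):
> {code:java}
> import java.io.IOException;
>
> // Proposed additions: delete all entries whose last_updated is
> // older than the configured TTL allows.
> public interface MetadataStorePruneSketch {
>   void pruneExpiredTtl() throws IOException;
>   void pruneExpiredTtl(String keyPrefix) throws IOException;
> }
> {code}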