[
https://issues.apache.org/jira/browse/HADOOP-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14651506#comment-14651506
]
Thomas Demoor commented on HADOOP-12292:
----------------------------------------
Oh, now I get what you ([[email protected]] and [~ndimiduk]) meant by TTL:
object expiration through bucket lifecycles.
I'm not sure that approach is easy; there are several non-trivial issues. Some
that immediately come to mind:
* You are limited to 1000 policy rules per bucket
* Rules are prefix-based, not exact-key matches. Consider the following
sequence (see the sketch after this list):
{{PUT Object: mybucket/object}} -> write a file
{{PUT Bucket lifecycle: mybucket, Expiration, 1 day, prefix=object}} ->
asynchronously delete this file
{{PUT Object: mybucket/object2}} -> write another file
The next day BOTH files are automatically deleted, because {{object}} is a
prefix of {{object2}}. Moreover, all future writes which share the prefix will
also be deleted automatically after a day.
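For concreteness, here is a minimal sketch of the sequence above using the AWS
SDK for Java 1.x ({{withPrefix}} is the old-style prefix filter; bucket and
rule names are made up):
{code:java}
import java.util.Arrays;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;

public class LifecycleTtlSketch {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Expire everything whose key STARTS WITH "object" after 1 day.
        BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
                .withId("ttl-object")
                .withPrefix("object")  // prefix match, NOT an exact key match
                .withExpirationInDays(1)
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        s3.setBucketLifecycleConfiguration("mybucket",
                new BucketLifecycleConfiguration().withRules(Arrays.asList(rule)));

        // Both keys below match the "object" prefix, so BOTH expire after a
        // day, as will any future key sharing that prefix.
        s3.putObject("mybucket", "object", "a file");
        s3.putObject("mybucket", "object2", "another file");
    }
}
{code}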
> Make use of DeleteObjects optional
> ----------------------------------
>
> Key: HADOOP-12292
> URL: https://issues.apache.org/jira/browse/HADOOP-12292
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Thomas Demoor
> Assignee: Thomas Demoor
>
> The {{DeleteObjectsRequest}} was not part of the initial S3 API, but was
> added later. This patch allows one to configure s3a to replace each
> multi-delete request with consecutive single deletes. Naturally, this setting
> is disabled by default, as single deletes are slower.
> The main motivation is to enable legacy S3-compatible object stores to make
> the transition from s3n (which does not use multidelete) to s3a, clearing
> the way for the planned s3n deprecation.
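For illustration, a minimal sketch of how such a switch might be used from a
client, assuming the property name {{fs.s3a.multiobjectdelete.enable}} (an
assumption on my part; the patch defines the actual key):
{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SingleDeleteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed property name: fall back to one DELETE request per key
        // instead of a single multi-object delete.
        conf.setBoolean("fs.s3a.multiobjectdelete.enable", false);

        FileSystem fs = FileSystem.get(URI.create("s3a://mybucket/"), conf);
        // With multi-object delete disabled, each object under the directory
        // is removed with its own DELETE call.
        fs.delete(new Path("/data"), true);
        fs.close();
    }
}
{code}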