[
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran updated HADOOP-16085:
------------------------------------
Release Note:
S3Guard will now track the etag of uploaded files and, if the S3 bucket is
versioned, the object version. You can then control how S3A reacts to a
mismatch between the data in the DynamoDB table and that in the store: warn,
fail, or, when versioning is in use, return the original value.
This adds two new columns to the DynamoDB table: etag and version. The change
is transparent to older S3A clients, but when such clients add or update
entries in the S3Guard table they will not set these values. As a result, the
etag/version checks will not work with files uploaded by older clients.
For a consistent experience, upgrade all clients to the latest Hadoop version.
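
As a rough sketch of how a client could opt in to version-based change
detection (this assumes the fs.s3a.change.detection.* configuration keys
documented for S3A; exact names, values and defaults may vary by release):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3AChangeDetectionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Use the S3 object version id, rather than the etag, as the change marker.
        conf.set("fs.s3a.change.detection.source", "versionid");
        // "server" asks S3 to enforce the constraint on the GET itself;
        // "warn" only logs a mismatch, "none" disables the check.
        conf.set("fs.s3a.change.detection.mode", "server");

        try (FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf)) {
          // If the object in S3 no longer matches what S3Guard recorded,
          // the read fails (or warns, depending on the mode above).
          fs.open(new Path("s3a://example-bucket/data/part-00000")).close();
        }
      }
    }

The bucket and path above are placeholders; the point is only that the
mismatch policy is a per-client configuration choice, which is why mixing
older and newer clients weakens the check.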
> S3Guard: use object version or etags to protect against inconsistent read
> after replace/overwrite
> -------------------------------------------------------------------------------------------------
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Ben Roling
> Assignee: Ben Roling
> Priority: Major
> Attachments: HADOOP-16085-003.patch, HADOOP-16085_002.patch,
> HADOOP-16085_3.2.0_001.patch
>
>
> Currently S3Guard doesn't track S3 object versions. If a file is written
> through S3A with S3Guard enabled and then subsequently overwritten, there is
> no protection against the next reader seeing the old version of the file
> instead of the new one.
> It seems like the S3Guard metadata could track the S3 object version. When a
> file is created or updated, the object version could be written to the
> S3Guard metadata. When a file is read, the GET from S3 could request that
> specific object version, ensuring the correct version is retrieved.
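> As a rough illustration of the proposed read path, here is a minimal sketch
> using the AWS SDK for Java v1 directly (bucket, key and version id are
> placeholder values; this is not the actual S3A code):
>
>     import com.amazonaws.services.s3.AmazonS3;
>     import com.amazonaws.services.s3.AmazonS3ClientBuilder;
>     import com.amazonaws.services.s3.model.GetObjectRequest;
>     import com.amazonaws.services.s3.model.S3Object;
>
>     public class VersionedReadSketch {
>       public static void main(String[] args) {
>         AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
>         // The version id would be the one recorded in the S3Guard metadata
>         // when the file was created or overwritten.
>         GetObjectRequest request =
>             new GetObjectRequest("example-bucket", "data/part-00000", "EXAMPLE-VERSION-ID");
>         // S3 serves exactly that version, even if the key has since been overwritten.
>         S3Object object = s3.getObject(request);
>         System.out.println("read version " + object.getObjectMetadata().getVersionId());
>       }
>     }
>
> This only works on buckets with versioning enabled; on unversioned buckets
> the etag would have to serve as the change marker instead.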
> I don't have a lot of direct experience with this yet, but this is my
> impression from looking through the code. My organization is looking to
> shift some datasets stored in HDFS over to S3 and is concerned about this
> potential issue as there are some cases in our codebase that would do an
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite
> track down any JIRAs discussing it. If there is one, feel free to close this
> with a reference to it.
> Am I understanding things correctly? Is this idea feasible? Any feedback
> that could be provided would be appreciated. We may consider crafting a
> patch.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)