[ https://issues.apache.org/jira/browse/SPARK-47618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun updated SPARK-47618:
----------------------------------
Parent Issue: SPARK-54248 (was: SPARK-51166)
> Use Magic Committer for all S3 buckets by default
> -------------------------------------------------
>
> Key: SPARK-47618
> URL: https://issues.apache.org/jira/browse/SPARK-47618
> Project: Spark
> Issue Type: Sub-task
> Components: Spark Core
> Affects Versions: 4.1.0
> Reporter: Dongjoon Hyun
> Assignee: Dongjoon Hyun
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.1.0
>
>
> This issue aims to use the Apache Hadoop `Magic Committer` for all S3 buckets by
> default in Apache Spark 4.1.0.
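> For reference, a minimal sketch of the equivalent manual configuration in releases where this is not yet the default (assumes the `spark-hadoop-cloud` module is on the classpath; these are the S3A committer settings documented in Spark's cloud-integration guide):
> {code}
> # spark-defaults.conf: enable the S3A Magic Committer explicitly
> spark.hadoop.fs.s3a.committer.magic.enabled  true
> spark.hadoop.fs.s3a.committer.name           magic
> # route Spark SQL job commits through Hadoop's PathOutputCommitter machinery
> spark.sql.sources.commitProtocolClass        org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
> spark.sql.parquet.output.committer.class     org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
> {code}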
> The Apache Hadoop `Magic Committer` has been used for S3 buckets to get the best
> performance since [S3 became fully consistent on December 1st,
> 2020|https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/].
> - https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html#ConsistencyModel
> bq. Amazon S3 provides strong read-after-write consistency for PUT and DELETE
> requests of objects in your Amazon S3 bucket in all AWS Regions. This
> behavior applies to both writes to new objects as well as PUT requests that
> overwrite existing objects and DELETE requests. In addition, read operations
> on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object
> Tags, and object metadata (for example, the HEAD object) are strongly
> consistent.