This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch branch-4.1
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-4.1 by this push:
new edb2ac786c6b [SPARK-51165][DOCS][FOLLOWUP] Fix typo in migration guide
edb2ac786c6b is described below
commit edb2ac786c6bf93520ef4ee9e62f8cb8e84399e9
Author: Cheng Pan <[email protected]>
AuthorDate: Fri Dec 5 10:17:37 2025 -0800
[SPARK-51165][DOCS][FOLLOWUP] Fix typo in migration guide
### What changes were proposed in this pull request?
Fix a typo.
### Why are the changes needed?
Fix a typo.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Review; it's a docs-only change.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #53340 from pan3793/SPARK-51165-followup.
Authored-by: Cheng Pan <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
(cherry picked from commit 1679aaf28455855785ba19552b6f7335f7f97704)
Signed-off-by: Dongjoon Hyun <[email protected]>
---
docs/core-migration-guide.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/core-migration-guide.md b/docs/core-migration-guide.md
index 1d55c4c3e66d..415404322641 100644
--- a/docs/core-migration-guide.md
+++ b/docs/core-migration-guide.md
@@ -24,7 +24,7 @@ license: |
## Upgrading from Core 4.0 to 4.1
-- Since Spark 4.1, Spark Master deamon provides REST API by default. To restore the behavior before Spark 4.1, you can set `spark.master.rest.enabled` to `false`.
+- Since Spark 4.1, Spark Master daemon provides REST API by default. To restore the behavior before Spark 4.1, you can set `spark.master.rest.enabled` to `false`.
- Since Spark 4.1, Spark will compress RDD checkpoints by default. To restore the behavior before Spark 4.1, you can set `spark.checkpoint.compress` to `false`.
- Since Spark 4.1, Spark uses Apache Hadoop Magic Committer for all S3 buckets by default. To restore the behavior before Spark 4.0, you can set `spark.hadoop.fs.s3a.committer.magic.enabled=false`.
- Since Spark 4.1, `java.lang.InternalError` encountered during file reading will no longer fail the task if the configuration `spark.sql.files.ignoreCorruptFiles` or the data source option `ignoreCorruptFiles` is set to `true`.
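
As a minimal sketch (not part of this commit), an application could pin the pre-4.1 defaults named in the bullets above through its session configuration; the object and app names here are illustrative. Note that `spark.master.rest.enabled` is read by the standalone Master daemon at startup, so it belongs in the Master's `conf/spark-defaults.conf` rather than in application code.

```scala
import org.apache.spark.sql.SparkSession

object RestorePre41Defaults { // illustrative name, not from the commit
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("restore-pre-4.1-defaults") // illustrative app name
      // Write RDD checkpoints uncompressed, as before Spark 4.1.
      .config("spark.checkpoint.compress", "false")
      // Opt out of the Hadoop Magic Committer default for S3 buckets.
      .config("spark.hadoop.fs.s3a.committer.magic.enabled", "false")
      .getOrCreate()

    // ... application logic ...

    spark.stop()
  }
}
```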
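
The last bullet's `ignoreCorruptFiles` switch can also be applied per read as a data source option instead of session-wide via `spark.sql.files.ignoreCorruptFiles`; the input path below is hypothetical:

```scala
// Per the note above, with this option corrupt files (including reads that
// raise java.lang.InternalError, as of Spark 4.1) are skipped instead of
// failing the task. "/data/events" is a hypothetical location.
val df = spark.read
  .option("ignoreCorruptFiles", "true")
  .parquet("/data/events")
```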
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]