This is an automated email from the ASF dual-hosted git repository.
yihua pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 17d5c6aab36 [DOCS] Fix minor website typos (#12865)
17d5c6aab36 is described below
commit 17d5c6aab3680f2853ec355038b90bcc3b775dc9
Author: Ilya Kharlamov <[email protected]>
AuthorDate: Wed Feb 26 21:27:41 2025 +0100
[DOCS] Fix minor website typos (#12865)
Co-authored-by: Y Ethan Guo <[email protected]>
---
website/docs/quick-start-guide.md | 9 +++++----
website/docs/write_operations.md | 2 +-
website/versioned_docs/version-1.0.1/quick-start-guide.md | 9 +++++----
website/versioned_docs/version-1.0.1/write_operations.md | 2 +-
4 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/website/docs/quick-start-guide.md b/website/docs/quick-start-guide.md
index e179d7464c3..befe36116a5 100644
--- a/website/docs/quick-start-guide.md
+++ b/website/docs/quick-start-guide.md
@@ -2,7 +2,7 @@
title: "Spark Quick Start"
sidebar_position: 2
toc: true
-last_modified_at: 2023-08-23T21:14:52+09:00
+last_modified_at: 2025-02-21T03:17:02+09:00
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -458,10 +458,11 @@ values={[
val adjustedFareDF = spark.read.format("hudi").
load(basePath).limit(2).
withColumn("fare", col("fare") * 10)
+
adjustedFareDF.write.format("hudi").
-option("hoodie.datasource.write.payload.class","com.payloads.CustomMergeIntoConnector").
-mode(Append).
-save(basePath)
+  option("hoodie.datasource.write.payload.class","com.payloads.CustomMergeIntoConnector").
+  mode(Append).
+  save(basePath)
// Notice Fare column has been updated but all other columns remain intact.
spark.read.format("hudi").load(basePath).show()
```
diff --git a/website/docs/write_operations.md b/website/docs/write_operations.md
index eac036ff26d..1a0ea4560dc 100644
--- a/website/docs/write_operations.md
+++ b/website/docs/write_operations.md
@@ -68,7 +68,7 @@ Hudi supports migrating your existing large tables into a Hudi table using the `
### INSERT_OVERWRITE
**Type**: _Batch_, **Action**: _REPLACE_COMMIT (CoW + MoR)_
-This operation is used to rerwrite the all the partitions that are present in the input. This operation can be faster
+This operation is used to rewrite the all the partitions that are present in the input. This operation can be faster
than `upsert` for batch ETL jobs, that are recomputing entire target partitions at once (as opposed to incrementally
updating the target tables). This is because, we are able to bypass indexing, precombining and other repartitioning
steps in the upsert write path completely. This comes in handy if you are doing any backfill or any such type of use-cases.
diff --git a/website/versioned_docs/version-1.0.1/quick-start-guide.md b/website/versioned_docs/version-1.0.1/quick-start-guide.md
index e179d7464c3..befe36116a5 100644
--- a/website/versioned_docs/version-1.0.1/quick-start-guide.md
+++ b/website/versioned_docs/version-1.0.1/quick-start-guide.md
@@ -2,7 +2,7 @@
title: "Spark Quick Start"
sidebar_position: 2
toc: true
-last_modified_at: 2023-08-23T21:14:52+09:00
+last_modified_at: 2025-02-21T03:17:02+09:00
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -458,10 +458,11 @@ values={[
val adjustedFareDF = spark.read.format("hudi").
load(basePath).limit(2).
withColumn("fare", col("fare") * 10)
+
adjustedFareDF.write.format("hudi").
-option("hoodie.datasource.write.payload.class","com.payloads.CustomMergeIntoConnector").
-mode(Append).
-save(basePath)
+  option("hoodie.datasource.write.payload.class","com.payloads.CustomMergeIntoConnector").
+  mode(Append).
+  save(basePath)
// Notice Fare column has been updated but all other columns remain intact.
spark.read.format("hudi").load(basePath).show()
```
diff --git a/website/versioned_docs/version-1.0.1/write_operations.md b/website/versioned_docs/version-1.0.1/write_operations.md
index eac036ff26d..1a0ea4560dc 100644
--- a/website/versioned_docs/version-1.0.1/write_operations.md
+++ b/website/versioned_docs/version-1.0.1/write_operations.md
@@ -68,7 +68,7 @@ Hudi supports migrating your existing large tables into a Hudi table using the `
### INSERT_OVERWRITE
**Type**: _Batch_, **Action**: _REPLACE_COMMIT (CoW + MoR)_
-This operation is used to rerwrite the all the partitions that are present in the input. This operation can be faster
+This operation is used to rewrite the all the partitions that are present in the input. This operation can be faster
than `upsert` for batch ETL jobs, that are recomputing entire target partitions at once (as opposed to incrementally
updating the target tables). This is because, we are able to bypass indexing, precombining and other repartitioning
steps in the upsert write path completely. This comes in handy if you are doing any backfill or any such type of use-cases.