This is an automated email from the ASF dual-hosted git repository.

xushiyan pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 7e626613d344 chore(site): reorganize pages for release notes (#17735)
7e626613d344 is described below

commit 7e626613d3444205282e50f2a8a81b5d2c0d15e3
Author: deepakpanda93 <[email protected]>
AuthorDate: Tue Dec 30 21:12:24 2025 +0530

    chore(site): reorganize pages for release notes (#17735)
    
    The release notes page for a major release will now contain all release
    notes of the corresponding minor/patch releases. Also hide 1.0.0 beta release
    pages.
    
    ---------
    
    Co-authored-by: Shiyan Xu <[email protected]>
---
 website/blog/2023-11-01-record-level-index.md      |   2 +-
 ...023-12-28-apache-hudi-2023-a-year-in-review.mdx |  20 +-
 .../2024-12-06-non-blocking-concurrency-control.md |   2 +-
 website/blog/2024-12-16-announcing-hudi-1-0-0.mdx  |   6 +-
 ...024-12-29-apache-hudi-2024-a-year-in-review.mdx |   2 +-
 website/blog/2025-09-17-hudi-auto-gen-keys.mdx     |   2 +-
 ...5-11-25-apache-hudi-release-1-1-announcement.md |   6 +-
 ...025-12-29-apache-hudi-2025-a-year-in-review.mdx |   4 +-
 website/docs/sql_dml.md                            |   4 +-
 website/docusaurus.config.js                       |  27 +-
 website/releases/download.md                       |  80 +++---
 website/releases/release-0.14.1.md                 |  47 ----
 .../{release-0.14.0.md => release-0.14.md}         | 304 ++++++++++++---------
 .../{release-0.15.0.md => release-0.15.md}         |  16 +-
 website/releases/release-1.0.1.md                  |  46 ----
 website/releases/release-1.0.2.md                  |  48 ----
 .../releases/{release-1.0.0.md => release-1.0.md}  | 118 +++++++-
 .../releases/{release-1.1.1.md => release-1.1.md}  |  17 +-
 website/sidebarsReleases.js                        |  13 +-
 website/src/pages/roadmap.md                       |   2 +-
 website/versioned_docs/version-0.15.0/sql_dml.md   |   4 +-
 website/versioned_docs/version-1.0.0/sql_dml.md    |   4 +-
 website/versioned_docs/version-1.0.1/sql_dml.md    |   4 +-
 website/versioned_docs/version-1.0.2/sql_dml.md    |   4 +-
 website/versioned_docs/version-1.1.1/sql_dml.md    |   4 +-
 25 files changed, 398 insertions(+), 388 deletions(-)
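Most of the patch below is mechanical renames and anchor updates; the behavioral piece is the set of client-side redirects added in `website/docusaurus.config.js` so retired per-patch release URLs keep resolving to the consolidated per-minor pages. A minimal standalone sketch of the mapping those entries encode (the `resolveRedirect` helper is hypothetical, for illustration only; at build time `@docusaurus/plugin-client-redirects` consumes the `redirects` data directly):

```javascript
// The redirect rules added in website/docusaurus.config.js, as plain data.
// Each retired per-patch release page maps to its consolidated minor-release page.
const redirects = [
  {
    from: [
      "/releases/release-1.0.2",
      "/releases/release-1.0.1",
      "/releases/release-1.0.0",
    ],
    to: "/releases/release-1.0",
  },
  { from: ["/releases/release-0.15.0"], to: "/releases/release-0.15" },
  {
    from: ["/releases/release-0.14.1", "/releases/release-0.14.0"],
    to: "/releases/release-0.14",
  },
];

// Hypothetical helper mirroring what the redirect plugin does at build time:
// resolve an old path to its new page, or null if no rule matches.
function resolveRedirect(path) {
  const rule = redirects.find((r) => r.from.includes(path));
  return rule ? rule.to : null;
}

console.log(resolveRedirect("/releases/release-1.0.1")); // "/releases/release-1.0"
```

Deep links into the old pages survive because each merged page keeps per-release heading anchors (e.g. `/releases/release-1.0#release-100`), which is why the blog posts below are updated to anchor-qualified URLs.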

diff --git a/website/blog/2023-11-01-record-level-index.md b/website/blog/2023-11-01-record-level-index.md
index 4732c3fbf3eb..66089dce5d91 100644
--- a/website/blog/2023-11-01-record-level-index.md
+++ b/website/blog/2023-11-01-record-level-index.md
@@ -22,7 +22,7 @@ or traffic patterns, where a specific index may be more suitable for simpler ope
 Users often face trade-offs when selecting index types for different tables, since there hasn't been
 a generally performant index capable of facilitating both writes and reads with minimal operational overhead.
 
-Starting from [Hudi 0.14.0](https://hudi.apache.org/releases/release-0.14.0), we are thrilled to announce a 
+Starting from [Hudi 0.14.0](https://hudi.apache.org/releases/release-0.14#release-0140), we are thrilled to announce a 
 general purpose index for Apache Hudi - the Record Level Index (RLI). This innovation not only dramatically boosts
 write efficiency but also improves read efficiency for relevant queries. Integrated seamlessly within the table storage layer,
 RLI can easily work without any additional operational efforts.
diff --git a/website/blog/2023-12-28-apache-hudi-2023-a-year-in-review.mdx b/website/blog/2023-12-28-apache-hudi-2023-a-year-in-review.mdx
index c744178cd3b5..644f44130f5c 100644
--- a/website/blog/2023-12-28-apache-hudi-2023-a-year-in-review.mdx
+++ b/website/blog/2023-12-28-apache-hudi-2023-a-year-in-review.mdx
@@ -33,20 +33,20 @@ exciting developments and accomplishments that have defined the year 2023 for th
 
 The year 2023 has been exceptionally productive for Hudi, marked by significant advancements and innovations.
 There have been three major releases: [0.13.0](https://hudi.apache.org/releases/release-0.13.0),
-[0.14.0](https://hudi.apache.org/releases/release-0.14.0), and the trailblazing
-[1.0.0-beta1](https://hudi.apache.org/releases/release-1.0.0-beta1) that have collectively reshaped the
+[0.14.0](https://hudi.apache.org/releases/release-0.14#release-0140), and the trailblazing
+[1.0.0-beta1](https://hudi.apache.org/releases/release-1.0#release-100-beta1) that have collectively reshaped the
 database experience for Hudi data lakehouses. Here are some brief summaries highlighting key features introduced:
 
 ### Indexing has elevated to a whole new level
 
-Hudi's new [Record Level Index](https://hudi.apache.org/releases/release-0.14.0#record-level-index)
+Hudi's new [Record Level Index](https://hudi.apache.org/releases/release-0.14#record-level-index)
 is a game-changing feature that boosts write performance for large tables. It achieves this by efficiently
 storing per-record locations, enabling rapid retrieval during index look-ups. Benchmarks indicate a 72%
 improvement in write latency compared to the Global Simple Index, alongside notable reductions in query latency
-for equality-matching queries. The new [Consistent Hash Index](https://hudi.apache.org/releases/release-0.14.0#consistent-hashing-index-support)
+for equality-matching queries. The new [Consistent Hash Index](https://hudi.apache.org/releases/release-0.14#consistent-hashing-index-support)
 dynamically scales the buckets for hash-based indexing schemes. By addressing data skew issues inherent in bucket
 index, it can achieve blazing fast look-up similar to the Record Level Index during the write process.
-[Functional Index](https://hudi.apache.org/releases/release-1.0.0-beta1#functional-index)
+[Functional Index](https://hudi.apache.org/releases/release-1.0#functional-index)
 enables the creation and deletion of indexes on specific columns, providing users with additional means to
 speed up queries and adjust partitioning.
 
@@ -57,7 +57,7 @@ The community has continued innovations on write performance including
 [Early-conflict detection for OCC](https://hudi.apache.org/releases/release-0.13.0#early-conflict-detection-for-multi-writer)
 which proactively validates concurrent writes before they are written to disk, avoiding significant resource wastage
 and enhancing throughput. Up-leveling this, the
-[Non-Blocking Concurrency Control](https://hudi.apache.org/releases/release-1.0.0-beta1#concurrency-control)
+[Non-Blocking Concurrency Control](https://hudi.apache.org/releases/release-1.0#concurrency-control-1)
 introduced in 1.0 further optimizes multi-writer throughput by allowing conflicts to be resolved later in query
 or via compaction. Responding to popular community requests,
 [partial update capability](https://hudi.apache.org/releases/release-0.13.0#support-for-partial-payload-update)
@@ -69,15 +69,15 @@ tables that are usually super wide.
 
 [HoodieRecordMerger](https://hudi.apache.org/releases/release-0.13.0#optimizing-record-payload-handling)
 is a new abstraction that unifies the merging semantics and makes use of the engine-native representation for
 records in the process. Benchmark shows a ballpark of 10-20% boost for upsert performance.
-[File Group Reader](https://hudi.apache.org/releases/release-1.0.0-beta1#new-filegroup-reader)
+[File Group Reader](https://hudi.apache.org/releases/release-1.0#new-filegroup-reader)
 is another API that standardizes File Group access, reducing MoR tables' read latencies by approximately 20%.
 Enabling position-based merging and page-skipping can further accelerate snapshot queries by 5.7 times.
 
 ### Usability receives significant attention
 
-[Table-valued function `hudi_table_changes`](https://hudi.apache.org/releases/release-0.14.0#table-valued-function-named-hudi_table_changes-designed-for-incremental-reading-through-spark-sql)
+[Table-valued function `hudi_table_changes`](https://hudi.apache.org/releases/release-0.14#table-valued-function-named-hudi_table_changes-designed-for-incremental-reading-through-spark-sql)
 simplifies performing incremental queries via SQLs.
-[Auto-generated keys](https://hudi.apache.org/releases/release-0.14.0#support-for-hudi-tables-with-autogenerated-keys)
+[Auto-generated keys](https://hudi.apache.org/releases/release-0.14#support-for-hudi-tables-with-autogenerated-keys)
 allows users to omit providing a record key field, especially useful for append-only tables. Among many other
 user-friendly updates, two more notable ones are the addition of a
 [`hudi-cli-bundle` jar](https://hudi.apache.org/releases/release-0.13.0#hudi-cli-bundle)
@@ -91,7 +91,7 @@ and `after` images, can be served through incremental queries, offering rich ana
 [Metaserver](https://hudi.apache.org/releases/release-0.13.0#metaserver)
 offers centralized management services for operating numerous tables in lakehouse projects, signifying a major
 step in Hudi's platform features.
-[`HoodieStreamer`](https://hudi.apache.org/releases/release-0.14.0#hoodiedeltastreamer-renamed-to-hoodiestreamer)
+[`HoodieStreamer`](https://hudi.apache.org/releases/release-0.14#hoodiedeltastreamer-renamed-to-hoodiestreamer)
 (formerly `HoodieDeltaStreamer`) remains a highly popular tool for data ingestion:
 [new sources](https://hudi.apache.org/releases/release-0.13.0#new-source-support-in-deltastreamer)
 such as Protobuf Kafka source, GCS incremental source, and Pulsar source were added, further expanding
diff --git a/website/blog/2024-12-06-non-blocking-concurrency-control.md b/website/blog/2024-12-06-non-blocking-concurrency-control.md
index b8e515d8fadb..685c010d5adc 100644
--- a/website/blog/2024-12-06-non-blocking-concurrency-control.md
+++ b/website/blog/2024-12-06-non-blocking-concurrency-control.md
@@ -20,7 +20,7 @@ Another very common scenario is multiple stream sources joined together to suppl
 stream is taking records with partial table schema fields. Common and strong demand for multi-stream concurrent ingestion has always been there. 
 The Hudi community has collected so many feedbacks from users ever since the day Hudi supported streaming ingestion and processing.
 
-Starting from [Hudi 1.0.0](https://hudi.apache.org/releases/release-1.0.0), we are thrilled to announce a new general-purpose 
+Starting from [Hudi 1.0.0](https://hudi.apache.org/releases/release-1.0#release-100), we are thrilled to announce a new general-purpose 
 concurrency model for Apache Hudi - the Non-blocking Concurrency Control (NBCC)- aimed at the stream processing or high-contention/frequent writing scenarios. 
 In contrast to [Optimistic Concurrency Control](/blog/2021/12/16/lakehouse-concurrency-control-are-we-too-optimistic/), where writers abort the transaction 
 if there is a hint of contention, this innovation allows multiple streaming writes to the same Hudi table without any overhead of conflict resolution, while 
diff --git a/website/blog/2024-12-16-announcing-hudi-1-0-0.mdx b/website/blog/2024-12-16-announcing-hudi-1-0-0.mdx
index b06fe52e91c2..e4f470a29e5d 100644
--- a/website/blog/2024-12-16-announcing-hudi-1-0-0.mdx
+++ b/website/blog/2024-12-16-announcing-hudi-1-0-0.mdx
@@ -22,7 +22,7 @@ We are thrilled to announce the release of Apache Hudi 1.0, a landmark achieveme
   <img src="/assets/images/blog/hudi-innovation-timeline.jpg" alt="innovation timeline" />
 </div>
 
-This [release](/releases/release-1.0.0) is more than just a version increment—it advances the breadth of Hudi’s feature set and its architecture's robustness while bringing fresh innovation to shape the future. This post reflects on how technology and the surrounding ecosystem have evolved, making a case for a holistic “***Data Lakehouse Management System***” (***DLMS***) as the new Northstar. For most of this post, we will deep dive into the latest capabilities of Hudi 1.0 that make thi [...]
+This [release](/releases/release-1.0#release-100) is more than just a version increment—it advances the breadth of Hudi’s feature set and its architecture's robustness while bringing fresh innovation to shape the future. This post reflects on how technology and the surrounding ecosystem have evolved, making a case for a holistic “***Data Lakehouse Management System***” (***DLMS***) as the new Northstar. For most of this post, we will deep dive into the latest capabilities of Hudi 1.0 tha [...]
 
 ## Evolution of the Data Lakehouse
 
@@ -189,7 +189,7 @@ If you are wondering: “All of this sounds cool, but how do I upgrade?” we ha
 ![Indexes](/assets/images/backwards-compat-writing.png)
 <p align = "center">Figure: 4-step process for painless rolling upgrades to Hudi 1.0</p>
 
-Hudi 1.0 introduces backward-compatible writing to achieve this in 4 steps, as described above. Hudi 1.0 also automatically handles any checkpoint translation necessary as we switch to completion time-based processing semantics for incremental and CDC queries. The Hudi metadata table has to be temporarily disabled during this upgrade process but can be turned on once the upgrade is completed successfully. Please read the [release notes](/releases/release-1.0.0) carefully to plan your migration.
+Hudi 1.0 introduces backward-compatible writing to achieve this in 4 steps, as described above. Hudi 1.0 also automatically handles any checkpoint translation necessary as we switch to completion time-based processing semantics for incremental and CDC queries. The Hudi metadata table has to be temporarily disabled during this upgrade process but can be turned on once the upgrade is completed successfully. Please read the [release notes](/releases/release-1.0#release-100) carefully to pla [...]
 
 ## What’s Next?
 
@@ -210,7 +210,7 @@ Are you ready to experience the future of data lakehouses? Here’s how you can
 
 * Documentation: Explore Hudi’s [Documentation](/docs/overview) and learn the [concepts](/docs/hudi_stack).
 * Quickstart Guide: Follow the [Quickstart Guide](/docs/quick-start-guide) to set up your first Hudi project.
-* Upgrading from a previous version?  Follow the [migration guide](/releases/release-1.0.0#migration-guide) and contact the Hudi OSS community for help.
+* Upgrading from a previous version?  Follow the [migration guide](/releases/release-1.0#migration-guide-2) and contact the Hudi OSS community for help.
* Join the Community: Participate in discussions on the [Hudi Mailing List](https://hudi.apache.org/community/get-involved/), [Slack](https://join.slack.com/t/apache-hudi/shared_invite/zt-2ggm1fub8-_yt4Reu9djwqqVRFC7X49g) and [GitHub](https://github.com/apache/hudi/issues).
 * Follow us on social media: [Linkedin](https://www.linkedin.com/company/apache-hudi/?viewAsMember=true), [X/Twitter](https://twitter.com/ApacheHudi).
 
diff --git a/website/blog/2024-12-29-apache-hudi-2024-a-year-in-review.mdx b/website/blog/2024-12-29-apache-hudi-2024-a-year-in-review.mdx
index 95399d11a5c9..0ace1231068a 100644
--- a/website/blog/2024-12-29-apache-hudi-2024-a-year-in-review.mdx
+++ b/website/blog/2024-12-29-apache-hudi-2024-a-year-in-review.mdx
@@ -33,7 +33,7 @@ Our community presence expanded significantly across various platforms:
 
 ### Apache Hudi 1.0 Release
 
-2024 marked a historic moment with the [release of Apache Hudi 1.0](https://hudi.apache.org/releases/release-1.0.0), representing a major evolution in data lakehouse technology. This release brought several groundbreaking features:
+2024 marked a historic moment with the [release of Apache Hudi 1.0](https://hudi.apache.org/releases/release-1.0#release-100), representing a major evolution in data lakehouse technology. This release brought several groundbreaking features:
 
 - **Secondary Indexing**: First of its kind in lakehouses, enabling database-like query acceleration with demonstrated 95% latency reduction on 10TB TPC-DS for low-moderate selectivity queries
 - **Logical Partitioning via Expression Indexes**: Introducing PostgreSQL-style expression indexes for more efficient partition management
diff --git a/website/blog/2025-09-17-hudi-auto-gen-keys.mdx b/website/blog/2025-09-17-hudi-auto-gen-keys.mdx
index 9f0c24a89c6d..335169726863 100644
--- a/website/blog/2025-09-17-hudi-auto-gen-keys.mdx
+++ b/website/blog/2025-09-17-hudi-auto-gen-keys.mdx
@@ -30,7 +30,7 @@ Apache Hudi was the first lakehouse storage project to introduce the notion of r
 
 Append-only writes are very common in the data lakehouse, such as ingesting application logs streamed continuously from numerous servers or capturing clickstream events from user interactions on a website. Even for this kind of scenario, having record keys is beneficial in scenarios like concurrently running data-fixing backfill writers (e.g., a GDPR deletion process) with ongoing writers to the same table. Without record keys, engineers typically had to coordinate the backfill to run on [...]
 
-Given the advantages of supporting record keys, Hudi required users to set one or multiple record key fields when creating a table prior to [release 0.14](https://hudi.apache.org/releases/release-0.14.0). However, this requirement created friction for users in cases where there were no natural record keys in the incoming stream for simply setting another config variable. Even for users who understood the benefits of record keys, they had to put careful thought into their record key gener [...]
+Given the advantages of supporting record keys, Hudi required users to set one or multiple record key fields when creating a table prior to [release 0.14](https://hudi.apache.org/releases/release-0.14#release-0140). However, this requirement created friction for users in cases where there were no natural record keys in the incoming stream for simply setting another config variable. Even for users who understood the benefits of record keys, they had to put careful thought into their recor [...]
 
 ## Automatic Key Generation
 
diff --git a/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md b/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md
index 80cde0ee9d9c..b349a902d466 100644
--- a/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md
+++ b/website/blog/2025-11-25-apache-hudi-release-1-1-announcement.md
@@ -11,7 +11,7 @@ tags:
   - performance
 ---
 
-The Hudi community is excited to announce the [release of Hudi 1.1](https://hudi.apache.org/releases/release-1.1.1), a major milestone that sets the stage for the next generation of data lakehouse capabilities. This release represents months of focused engineering on foundational improvements, engine-specific optimizations, and key architectural enhancements, laying the foundation for ambitious features coming in future releases.
+The Hudi community is excited to announce the [release of Hudi 1.1](https://hudi.apache.org/releases/release-1.1#release-111), a major milestone that sets the stage for the next generation of data lakehouse capabilities. This release represents months of focused engineering on foundational improvements, engine-specific optimizations, and key architectural enhancements, laying the foundation for ambitious features coming in future releases.
 
 Hudi continues to evolve rapidly, with contributions from a vibrant community of developers and users. The 1.1 release brings over 800 commits addressing performance bottlenecks, expanding engine support, and introducing new capabilities that make Hudi tables more reliable, faster, and easier to operate. Let’s dive into the highlights.
 
@@ -153,7 +153,7 @@ The default behavior is adaptive: if no ordering field (`hoodie.table.ordering.f
 
 ### Custom Mergers—The Flexible Approach
 
-For complex merging logic—such as field-level reconciliation, aggregating counters, or preserving audit fields—the `HoodieRecordMerger` interface provides a modern, engine-native alternative to payload classes. You need to set the merge mode to `CUSTOM` and provide your own implementation of `HoodieRecordMerger`. By using the new API, you can achieve consistent merging across all code paths: precombine, updating writes, compaction, and snapshot reads—you are strongly encouraged to migrat [...]
+For complex merging logic—such as field-level reconciliation, aggregating counters, or preserving audit fields—the `HoodieRecordMerger` interface provides a modern, engine-native alternative to payload classes. You need to set the merge mode to `CUSTOM` and provide your own implementation of `HoodieRecordMerger`. By using the new API, you can achieve consistent merging across all code paths: precombine, updating writes, compaction, and snapshot reads—you are strongly encouraged to migrat [...]
 
 ## Apache Spark Integration Improvements
 
@@ -235,4 +235,4 @@ Hudi 1.1 introduces [native integration with Polaris](https://hudi.apache.org/do
 
 The future of Hudi is incredibly exciting, and we're building it together with a vibrant, global community of contributors. Building on the strong foundation of 1.1, we're actively developing transformative AI/ML-focused capabilities for Hudi 1.2 and beyond—unstructured data types and column groups for efficient storage of embeddings and documents, Lance, Vortex, blob-optimized Parquet support, and vector search capabilities for lakehouse tables. This is just the beginning—we're reimagin [...]
 
-Join us in building the future. Check out the [1.1 release notes](https://hudi.apache.org/releases/release-1.1.1) to get started, join our [Slack space](https://hudi.apache.org/slack/), follow us on [LinkedIn](https://www.linkedin.com/company/apache-hudi) and [X (twitter)](http://x.com/apachehudi), and subscribe (send an empty email) to the [mailing list](mailto:[email protected])—let's build the next generation of Hudi together.
+Join us in building the future. Check out the [1.1 release notes](https://hudi.apache.org/releases/release-1.1#release-111) to get started, join our [Slack space](https://hudi.apache.org/slack/), follow us on [LinkedIn](https://www.linkedin.com/company/apache-hudi) and [X (twitter)](http://x.com/apachehudi), and subscribe (send an empty email) to the [mailing list](mailto:[email protected])—let's build the next generation of Hudi together.
diff --git a/website/blog/2025-12-29-apache-hudi-2025-a-year-in-review.mdx b/website/blog/2025-12-29-apache-hudi-2025-a-year-in-review.mdx
index 60175066a341..e483fe1ef274 100644
--- a/website/blog/2025-12-29-apache-hudi-2025-a-year-in-review.mdx
+++ b/website/blog/2025-12-29-apache-hudi-2025-a-year-in-review.mdx
@@ -25,13 +25,13 @@ The project celebrated new milestones in contributor recognition. [Yue Zhang](ht
 
 ## Development Highlights
 
-[Hudi 1.1](https://hudi.apache.org/releases/release-1.1.1) landed with over 800 commits from 50+ contributors. The headline feature is the pluggable table format framework — Hudi's storage engine is now pluggable, allowing its battle-tested transaction management, indexing, and concurrency control to work while storing data in Hudi's native format or other table formats like [Apache Iceberg](https://iceberg.apache.org/) via [Apache XTable (incubating)](https://xtable.apache.org/).
+[Hudi 1.1](https://hudi.apache.org/releases/release-1.1) landed with over 800 commits from 50+ contributors. The headline feature is the pluggable table format framework — Hudi's storage engine is now pluggable, allowing its battle-tested transaction management, indexing, and concurrency control to work while storing data in Hudi's native format or other table formats like [Apache Iceberg](https://iceberg.apache.org/) via [Apache XTable (Incubating)](https://xtable.apache.org/).
 
 <img src="/assets/images/blog/2025-12-29-apache-hudi-2025-a-year-in-review/03-hudi11.jpg" alt="drawing" style={{width:'80%', display:'block', marginLeft:'auto', marginRight:'auto', marginTop:'18pt', marginBottom:'18pt'}} />
 
 Performance saw major gains across the board. Parquet binary copy for clustering delivered 10-15x faster execution with 95% compute reduction. Apache Flink writer achieved 2-3x improved throughput with Avro conversion eliminated in the write path. Apache Spark metadata-table streaming ran ~18% faster for update-heavy workloads. Indexing enhancements — partitioned record index, partition-level bucket index, HFile caching, and Bloom filters — delivered up to 4x speedup for lookups on massi [...]
 
-Spark 4.0 and Flink 2.0 support were added. [Apache Polaris (incubating)](https://polaris.apache.org/) catalog integration enabled multi-engine queries with unified governance. Operational simplicity improved with storage-based locking that eliminated external dependencies. New merge modes replaced legacy payload classes with declarative options, and SQL procedures enhanced table management directly in Spark SQL. See more details in the [release blog](https://hudi.apache.org/blog/2025/11 [...]
+Spark 4.0 and Flink 2.0 support were added. [Apache Polaris (Incubating)](https://polaris.apache.org/) catalog integration enabled multi-engine queries with unified governance. Operational simplicity improved with storage-based locking that eliminated external dependencies. New merge modes replaced legacy payload classes with declarative options, and SQL procedures enhanced table management directly in Spark SQL. See more details in the [release blog](https://hudi.apache.org/blog/2025/11 [...]
 
 [Hudi-rs](https://github.com/apache/hudi-rs) expanded its feature support — release 0.3.0 introduced Merge-on-Read and incremental queries, while 0.4.0 added C++ bindings and Avro log file support. The native Rust implementation now powers Ray Data and Daft integrations for ML and multi-cloud analytics.
 
diff --git a/website/docs/sql_dml.md b/website/docs/sql_dml.md
index 81ad924289f0..90f407644386 100644
--- a/website/docs/sql_dml.md
+++ b/website/docs/sql_dml.md
@@ -489,7 +489,7 @@ This is done to ensure that the compaction and cleaning services are not execute
 
 We have introduced the Consistent Hashing Bucket Index since [0.13.0 release](/releases/release-0.13.0#consistent-hashing-index). This is one of three [bucket index](indexes.md#additional-writer-side-indexes) variants available in Hudi. The consistent hashing bucket index offers dynamic scalability of data buckets for the writer. 
 You can find the [RFC](https://github.com/apache/hudi/blob/master/rfc/rfc-42/rfc-42.md) for the design of this feature.
-In the 0.13.X release, the Consistent Hashing Index is supported only for Spark engine. And since [release 0.14.0](/releases/release-0.14.0#consistent-hashing-index-support), the index is supported for Flink engine.
+In the 0.13.X release, the Consistent Hashing Index is supported only for Spark engine. And since [release 0.14.0](/releases/release-0.14#consistent-hashing-index-support), the index is supported for Flink engine.
 
 To utilize this feature, configure the option `index.type` as `BUCKET` and set `hoodie.index.bucket.engine` to `CONSISTENT_HASHING`.
 When enabling the consistent hashing index, it's important to enable clustering scheduling within the writer. During this process, the writer will perform dual writes for both the old and new data buckets while the clustering is pending. Although the dual write does not impact correctness, it is strongly recommended to execute clustering as quickly as possible.
@@ -546,7 +546,7 @@ select * from t1 limit 20;
 ```
 
 :::caution
-Consistent Hashing Index is supported for Flink engine since [release 0.14.0](/releases/release-0.14.0#consistent-hashing-index-support) and currently there are some limitations to use it as of 0.14.0:
+Consistent Hashing Index is supported for Flink engine since [release 0.14.0](/releases/release-0.14#consistent-hashing-index-support) and currently there are some limitations to use it as of 0.14.0:
 
 - This index is supported only for MOR table. This limitation also exists even if using Spark engine.
 - It does not work with metadata table enabled. This limitation also exists even if using Spark engine.
diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js
index 82c2118b256a..85904ea98f8c 100644
--- a/website/docusaurus.config.js
+++ b/website/docusaurus.config.js
@@ -146,11 +146,32 @@ module.exports = {
           },
           {
             from: ["/docs/releases", "/docs/next/releases"],
-            to: "/releases/release-1.1.1",
+            to: "/releases/release-1.1",
           },
           {
             from: ["/releases"],
-            to: "/releases/release-1.1.1",
+            to: "/releases/release-1.1",
+          },
+          {
+            from: [
+              "/releases/release-1.0.2",
+              "/releases/release-1.0.1",
+              "/releases/release-1.0.0",
+            ],
+            to: "/releases/release-1.0",
+          },
+          {
+            from: [
+              "/releases/release-0.15.0",
+            ],
+            to: "/releases/release-0.15",
+          },
+          {
+            from: [
+              "/releases/release-0.14.1",
+              "/releases/release-0.14.0",
+            ],
+            to: "/releases/release-0.14",
           },
         ],
       },
@@ -323,7 +344,7 @@ module.exports = {
             },
             {
               label: "Releases",
-              to: "/releases/release-1.1.1",
+              to: "/releases/release-1.1",
             },
             {
               label: "Download",
diff --git a/website/releases/download.md b/website/releases/download.md
index 7878a361d733..0c774b28f04c 100644
--- a/website/releases/download.md
+++ b/website/releases/download.md
@@ -8,24 +8,24 @@ toc: true
 ## Release 1.1.1
 
 * Source Release : [Apache Hudi 1.1.1 Source Release](https://downloads.apache.org/hudi/1.1.1/hudi-1.1.1.src.tgz) ([asc](https://downloads.apache.org/hudi/1.1.1/hudi-1.1.1.src.tgz.asc), [sha512](https://downloads.apache.org/hudi/1.1.1/hudi-1.1.1.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 1.1.1](/releases/release-1.1.1))
+* Release Note : ([Release Note for Apache Hudi 1.1.1](/releases/release-1.1#release-111))
 
 * Maven Artifacts:
 
   <details>
   <summary><strong>Spark Bundles</strong></summary>
   * **Spark 4.0**
-    * [hudi-spark4.0-bundle_2.13](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-spark4.0-bundle_2.13/1.1.1/hudi-spark4.0-bundle_2.13-1.1.1.jar)
-  
+    * [hudi-spark4.0-bundle_2.13](https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark4.0-bundle_2.13/1.1.1/hudi-spark4.0-bundle_2.13-1.1.1.jar)
+
   * **Spark 3.5**
-    * [hudi-spark3.5-bundle_2.13](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-spark3.5-bundle_2.13/1.1.1/hudi-spark3.5-bundle_2.13-1.1.1.jar)
-    * [hudi-spark3.5-bundle_2.12](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-spark3.5-bundle_2.12/1.1.1/hudi-spark3.5-bundle_2.12-1.1.1.jar)
+    * [hudi-spark3.5-bundle_2.13](https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark3.5-bundle_2.13/1.1.1/hudi-spark3.5-bundle_2.13-1.1.1.jar)
+    * [hudi-spark3.5-bundle_2.12](https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark3.5-bundle_2.12/1.1.1/hudi-spark3.5-bundle_2.12-1.1.1.jar)
 
   * **Spark 3.4**
-    * [hudi-spark3.4-bundle_2.12](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-spark3.4-bundle_2.12/1.1.1/hudi-spark3.4-bundle_2.12-1.1.1.jar)
+    * [hudi-spark3.4-bundle_2.12](https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark3.4-bundle_2.12/1.1.1/hudi-spark3.4-bundle_2.12-1.1.1.jar)
 
   * **Spark 3.3**
-    * [hudi-spark3.3-bundle_2.12](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-spark3.3-bundle_2.12/1.1.1/hudi-spark3.3-bundle_2.12-1.1.1.jar)
+    * [hudi-spark3.3-bundle_2.12](https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark3.3-bundle_2.12/1.1.1/hudi-spark3.3-bundle_2.12-1.1.1.jar)
 
   </details>
 
@@ -33,13 +33,13 @@ toc: true
   <summary><strong>Flink Bundles</strong></summary>
 
   * **Flink 2.x**
-    * [hudi-flink2.0-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-flink2.0-bundle/1.1.1/hudi-flink2.0-bundle-1.1.1.jar)
+    * [hudi-flink2.0-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-flink2.0-bundle/1.1.1/hudi-flink2.0-bundle-1.1.1.jar)
 
   * **Flink 1.x**
-    * [hudi-flink1.20-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-flink1.20-bundle/1.1.1/hudi-flink1.20-bundle-1.1.1.jar)
-    * [hudi-flink1.19-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-flink1.19-bundle/1.1.1/hudi-flink1.19-bundle-1.1.1.jar)
-    * [hudi-flink1.18-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-flink1.18-bundle/1.1.1/hudi-flink1.18-bundle-1.1.1.jar)
-    * [hudi-flink1.17-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-flink1.17-bundle/1.1.1/hudi-flink1.17-bundle-1.1.1.jar)
+    * [hudi-flink1.20-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-flink1.20-bundle/1.1.1/hudi-flink1.20-bundle-1.1.1.jar)
+    * [hudi-flink1.19-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-flink1.19-bundle/1.1.1/hudi-flink1.19-bundle-1.1.1.jar)
+    * [hudi-flink1.18-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-flink1.18-bundle/1.1.1/hudi-flink1.18-bundle-1.1.1.jar)
+    * [hudi-flink1.17-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-flink1.17-bundle/1.1.1/hudi-flink1.17-bundle-1.1.1.jar)
 
   </details>
 
@@ -47,13 +47,13 @@ toc: true
   <summary><strong>Query Engines</strong></summary>
 
   * **Presto**
-    * [hudi-presto-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-presto-bundle/1.1.1/hudi-presto-bundle-1.1.1.jar)
+    * 
[hudi-presto-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-presto-bundle/1.1.1/hudi-presto-bundle-1.1.1.jar)
 
   * **Trino**
-    * 
[hudi-trino-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-trino-bundle/1.1.1/hudi-trino-bundle-1.1.1.jar)
+    * 
[hudi-trino-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-trino-bundle/1.1.1/hudi-trino-bundle-1.1.1.jar)
 
   * **Hive**
-    * 
[hudi-hadoop-mr-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-hadoop-mr-bundle/1.1.1/hudi-hadoop-mr-bundle-1.1.1.jar)
+    * 
[hudi-hadoop-mr-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-hadoop-mr-bundle/1.1.1/hudi-hadoop-mr-bundle-1.1.1.jar)
 
   </details>
 
@@ -61,14 +61,14 @@ toc: true
   <summary><strong>Utilities & Tools</strong></summary>
 
   * **Hudi Utilities**
-    * 
[hudi-utilities-bundle_2.13](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-utilities-bundle_2.13/1.1.1/hudi-utilities-bundle_2.13-1.1.1.jar)
-    * 
[hudi-utilities-bundle_2.12](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-utilities-bundle_2.12/1.1.1/hudi-utilities-bundle_2.12-1.1.1.jar)
-    * 
[hudi-utilities-slim-bundle_2.13](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-utilities-slim-bundle_2.13/1.1.1/hudi-utilities-slim-bundle_2.13-1.1.1.jar)
-    * 
[hudi-utilities-slim-bundle_2.12](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-utilities-slim-bundle_2.12/1.1.1/hudi-utilities-slim-bundle_2.12-1.1.1.jar)
+    * 
[hudi-utilities-bundle_2.13](https://repo1.maven.org/maven2/org/apache/hudi/hudi-utilities-bundle_2.13/1.1.1/hudi-utilities-bundle_2.13-1.1.1.jar)
+    * 
[hudi-utilities-bundle_2.12](https://repo1.maven.org/maven2/org/apache/hudi/hudi-utilities-bundle_2.12/1.1.1/hudi-utilities-bundle_2.12-1.1.1.jar)
+    * 
[hudi-utilities-slim-bundle_2.13](https://repo1.maven.org/maven2/org/apache/hudi/hudi-utilities-slim-bundle_2.13/1.1.1/hudi-utilities-slim-bundle_2.13-1.1.1.jar)
+    * 
[hudi-utilities-slim-bundle_2.12](https://repo1.maven.org/maven2/org/apache/hudi/hudi-utilities-slim-bundle_2.12/1.1.1/hudi-utilities-slim-bundle_2.12-1.1.1.jar)
 
   * **Hudi CLI**
-    * 
[hudi-cli-bundle_2.13](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-cli-bundle_2.13/1.1.1/hudi-cli-bundle_2.13-1.1.1.jar)
-    * 
[hudi-cli-bundle_2.12](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-cli-bundle_2.12/1.1.1/hudi-cli-bundle_2.12-1.1.1.jar)
+    * 
[hudi-cli-bundle_2.13](https://repo1.maven.org/maven2/org/apache/hudi/hudi-cli-bundle_2.13/1.1.1/hudi-cli-bundle_2.13-1.1.1.jar)
+    * 
[hudi-cli-bundle_2.12](https://repo1.maven.org/maven2/org/apache/hudi/hudi-cli-bundle_2.12/1.1.1/hudi-cli-bundle_2.12-1.1.1.jar)
 
   </details>
 
@@ -76,65 +76,55 @@ toc: true
   <summary><strong>Platform Integrations</strong></summary>
 
   * **AWS**
-    * 
[hudi-aws-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-aws-bundle/1.1.1/hudi-aws-bundle-1.1.1.jar)
+    * 
[hudi-aws-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-aws-bundle/1.1.1/hudi-aws-bundle-1.1.1.jar)
 
   * **Google Cloud**
-    * 
[hudi-gcp-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-gcp-bundle/1.1.1/hudi-gcp-bundle-1.1.1.jar)
+    * 
[hudi-gcp-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-gcp-bundle/1.1.1/hudi-gcp-bundle-1.1.1.jar)
 
   * **Data Catalogs**
-    * 
[hudi-hive-sync-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-hive-sync-bundle/1.1.1/hudi-hive-sync-bundle-1.1.1.jar)
-    * 
[hudi-datahub-sync-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-datahub-sync-bundle/1.1.1/hudi-datahub-sync-bundle-1.1.1.jar)
-  
+    * 
[hudi-hive-sync-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-hive-sync-bundle/1.1.1/hudi-hive-sync-bundle-1.1.1.jar)
+    * 
[hudi-datahub-sync-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-datahub-sync-bundle/1.1.1/hudi-datahub-sync-bundle-1.1.1.jar)
+
   * **Kafka Connect**
-    * 
[hudi-kafka-connect-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-kafka-connect-bundle/1.1.1/hudi-kafka-connect-bundle-1.1.1.jar)
+    * 
[hudi-kafka-connect-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-kafka-connect-bundle/1.1.1/hudi-kafka-connect-bundle-1.1.1.jar)
 
   * **Timeline Server**
-    * 
[hudi-timeline-server-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-timeline-server-bundle/1.1.1/hudi-timeline-server-bundle-1.1.1.jar)
+    * 
[hudi-timeline-server-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-timeline-server-bundle/1.1.1/hudi-timeline-server-bundle-1.1.1.jar)
 
   * **Metaserver**
-    * 
[hudi-metaserver-server-bundle](https://repository.apache.org/content/repositories/releases/org/apache/hudi/hudi-metaserver-server-bundle/1.1.1/hudi-metaserver-server-bundle-1.1.1.jar)
+    * 
[hudi-metaserver-server-bundle](https://repo1.maven.org/maven2/org/apache/hudi/hudi-metaserver-server-bundle/1.1.1/hudi-metaserver-server-bundle-1.1.1.jar)
 
   </details>
 
 ## Release 1.0.2
 
 * Source Release : [Apache Hudi 1.0.2 Source 
Release](https://downloads.apache.org/hudi/1.0.2/hudi-1.0.2.src.tgz) 
([asc](https://downloads.apache.org/hudi/1.0.2/hudi-1.0.2.src.tgz.asc), 
[sha512](https://downloads.apache.org/hudi/1.0.2/hudi-1.0.2.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
1.0.2](/releases/release-1.0.2))
+* Release Note : ([Release Note for Apache Hudi 
1.0.2](/releases/release-1.0#release-102))
 
 ## Release 1.0.1
 
 * Source Release : [Apache Hudi 1.0.1 Source 
Release](https://downloads.apache.org/hudi/1.0.1/hudi-1.0.1.src.tgz) 
([asc](https://downloads.apache.org/hudi/1.0.1/hudi-1.0.1.src.tgz.asc), 
[sha512](https://downloads.apache.org/hudi/1.0.1/hudi-1.0.1.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
1.0.1](/releases/release-1.0.1))
+* Release Note : ([Release Note for Apache Hudi 
1.0.1](/releases/release-1.0#release-101))
 
 ## Release 1.0.0
 
 * Source Release : [Apache Hudi 1.0.0 Source 
Release](https://downloads.apache.org/hudi/1.0.0/hudi-1.0.0.src.tgz) 
([asc](https://downloads.apache.org/hudi/1.0.0/hudi-1.0.0.src.tgz.asc), 
[sha512](https://downloads.apache.org/hudi/1.0.0/hudi-1.0.0.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
1.0.0](/releases/release-1.0.0))
-
-## Release 1.0.0-beta2
-
-* Source Release : [Apache Hudi 1.0.0-beta2 Source 
Release](https://downloads.apache.org/hudi/1.0.0-beta2/hudi-1.0.0-beta2.src.tgz)
 
([asc](https://downloads.apache.org/hudi/1.0.0-beta2/hudi-1.0.0-beta2.src.tgz.asc),
 
[sha512](https://downloads.apache.org/hudi/1.0.0-beta2/hudi-1.0.0-beta2.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
1.0.0-beta2](/releases/release-1.0.0-beta2))
-
-## Release 1.0.0-beta1
-
-* Source Release : [Apache Hudi 1.0.0-beta1 Source 
Release](https://www.apache.org/dyn/closer.lua/hudi/1.0.0-beta1/hudi-1.0.0-beta1.src.tgz)
 
([asc](https://downloads.apache.org/hudi/1.0.0-beta1/hudi-1.0.0-beta1.src.tgz.asc),
 
[sha512](https://downloads.apache.org/hudi/1.0.0-beta1/hudi-1.0.0-beta1.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
1.0.0-beta1](/releases/release-1.0.0-beta1))
+* Release Note : ([Release Note for Apache Hudi 
1.0.0](/releases/release-1.0#release-100))
 
 ## Release 0.15.0
 
 * Source Release : [Apache Hudi 0.15.0 Source 
Release](https://downloads.apache.org/hudi/0.15.0/hudi-0.15.0.src.tgz) 
([asc](https://downloads.apache.org/hudi/0.15.0/hudi-0.15.0.src.tgz.asc), 
[sha512](https://downloads.apache.org/hudi/0.15.0/hudi-0.15.0.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
0.15.0](/releases/release-0.15.0))
+* Release Note : ([Release Note for Apache Hudi 
0.15.0](/releases/release-0.15#release-0150))
 
 ## Release 0.14.1
 
 * Source Release : [Apache Hudi 0.14.1 Source 
Release](https://downloads.apache.org/hudi/0.14.1/hudi-0.14.1.src.tgz) 
([asc](https://downloads.apache.org/hudi/0.14.1/hudi-0.14.1.src.tgz.asc), 
[sha512](https://downloads.apache.org/hudi/0.14.1/hudi-0.14.1.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
0.14.1](/releases/release-0.14.1))
+* Release Note : ([Release Note for Apache Hudi 
0.14.1](/releases/release-0.14#release-0141))
 
 ## Release 0.14.0
 
 * Source Release : [Apache Hudi 0.14.0 Source 
Release](https://downloads.apache.org/hudi/0.14.0/hudi-0.14.0.src.tgz) 
([asc](https://downloads.apache.org/hudi/0.14.0/hudi-0.14.0.src.tgz.asc), 
[sha512](https://downloads.apache.org/hudi/0.14.0/hudi-0.14.0.src.tgz.sha512))
-* Release Note : ([Release Note for Apache Hudi 
0.14.0](/releases/release-0.14.0))
+* Release Note : ([Release Note for Apache Hudi 
0.14.0](/releases/release-0.14#release-0140))
 
 ## End-of-life Releases
 
diff --git a/website/releases/release-0.14.1.md 
b/website/releases/release-0.14.1.md
deleted file mode 100644
index abf4204e8a65..000000000000
--- a/website/releases/release-0.14.1.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title: "Release 0.14.1"
-layout: releases
-toc: true
-last_modified_at: 2023-05-25T13:00:00-08:00
----
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## [Release 0.14.1](https://github.com/apache/hudi/releases/tag/release-0.14.1)
-
-## Migration Guide
-
-* This release (0.14.1) does not introduce any new table version, thus no 
migration is needed if you are on 0.14.0.
-* If migrating from an older release, please check the migration guide from 
the previous release notes, specifically
-  the upgrade instructions in [0.6.0](/releases/release-0.6.0),
-  [0.9.0](/releases/release-0.9.0), [0.10.0](/releases/release-0.10.0),
-  [0.11.0](/releases/release-0.11.0), [0.12.0](/releases/release-0.12.0), 
[0.13.0](/releases/release-0.13.0), and
-  [0.14.0](/releases/release-0.14.0)
-
-### Bug fixes
-
-0.14.1 release is mainly intended for bug fixes and stability. The fixes span 
across many components, including
-
-* Hudi Streamer
-* Spark SQL
-* Spark datasource writer
-* Table services
-* Meta Syncs
-* Flink engine
-* Unit, functional, integration tests and CI
-
-## Known Regressions
-We discovered a regression in Hudi 0.14.1 release related to Complex Key gen 
when record key consists of one field. 
-It can silently ingest duplicates if table is upgraded from previous versions.
-
-:::tip
-Avoid upgrading any existing table to 0.14.1 if you are using 
ComplexKeyGenerator with single field as record key and multiple partition 
fields.
-:::
-
-## Raw Release Notes
-
-The raw release notes are available 
[here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12353493)
-
-:::tip
-0.14.1 release also contains all the new features and bug fixes from 0.14.0, 
of which the release notes are [here](/releases/release-0.14.0)
-:::
diff --git a/website/releases/release-0.14.0.md 
b/website/releases/release-0.14.md
similarity index 68%
rename from website/releases/release-0.14.0.md
rename to website/releases/release-0.14.md
index c0276c3b5697..854b4990bc1f 100644
--- a/website/releases/release-0.14.0.md
+++ b/website/releases/release-0.14.md
@@ -1,31 +1,38 @@
 ---
-title: "Release 0.14.0"
+title: "Release 0.14"
 layout: releases
 toc: true
 ---
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-## [Release 0.14.0](https://github.com/apache/hudi/releases/tag/release-0.14.0)
-Apache Hudi 0.14.0 marks a significant milestone with a range of new 
functionalities and enhancements. 
-These include the introduction of Record Level Index, automatic generation of 
record keys, the `hudi_table_changes` 
-function for incremental reads, and more. Notably, this release also 
incorporates support for Spark 3.4. On the Flink 
-front, version 0.14.0 brings several exciting features such as consistent 
hashing index support, Flink 1.17 support, and 
+This page contains release notes for all Apache Hudi 0.14.x releases, 
including:
+
+- [Release 0.14.0](#release-0140)
+- [Release 0.14.1](#release-0141)
+
+---
+
+## [Release 
0.14.0](https://github.com/apache/hudi/releases/tag/release-0.14.0) 
{#release-0140}
+Apache Hudi 0.14.0 marks a significant milestone with a range of new 
functionalities and enhancements.
+These include the introduction of Record Level Index, automatic generation of 
record keys, the `hudi_table_changes`
+function for incremental reads, and more. Notably, this release also 
incorporates support for Spark 3.4. On the Flink
+front, version 0.14.0 brings several exciting features such as consistent 
hashing index support, Flink 1.17 support, and
 Update and Delete statement support. Additionally, this release upgrades the 
Hudi table version, prompting users to consult
 the Migration Guide provided below. We encourage users to review the [release 
highlights](#release-highlights),
-[breaking changes](#breaking-changes), and [behavior 
changes](#behavior-changes) before 
+[breaking changes](#breaking-changes), and [behavior 
changes](#behavior-changes) before
 adopting the 0.14.0 release.
 
 
 
 ## Migration Guide
 In version 0.14.0, we've made changes such as the removal of compaction plans 
from the ".aux" folder and the introduction
-of a new log block version. As part of this release, the table version is 
updated to version `6`. When running a Hudi job 
-with version 0.14.0 on a table with an older table version, an automatic 
upgrade process is triggered to bring the table 
+of a new log block version. As part of this release, the table version is 
updated to version `6`. When running a Hudi job
+with version 0.14.0 on a table with an older table version, an automatic 
upgrade process is triggered to bring the table
 up to version `6`. This upgrade is a one-time occurrence for each Hudi table, 
as the `hoodie.table.version` is updated in
-the property file upon completion of the upgrade. Additionally, a command-line 
tool for downgrading has been included, 
-allowing users to move from table version `6` to `5`, or revert from Hudi 
0.14.0 to a version prior to 0.14.0. To use this 
-tool, execute it from a 0.14.0 environment. For more details, refer to the 
+the property file upon completion of the upgrade. Additionally, a command-line 
tool for downgrading has been included,
+allowing users to move from table version `6` to `5`, or revert from Hudi 
0.14.0 to a version prior to 0.14.0. To use this
+tool, execute it from a 0.14.0 environment. For more details, refer to the
 [hudi-cli](/docs/cli/#upgrade-and-downgrade-table).
 
 :::caution
@@ -36,24 +43,24 @@ sequence.
 ### Bundle Updates
 
 #### New Spark Bundles
-In this release, we've expanded our support to include bundles for both Spark 
3.4 
-([hudi-spark3.4-bundle_2.12](https://mvnrepository.com/artifact/org.apache.hudi/hudi-spark3.4-bundle_2.12))
 
+In this release, we've expanded our support to include bundles for both Spark 
3.4
+([hudi-spark3.4-bundle_2.12](https://mvnrepository.com/artifact/org.apache.hudi/hudi-spark3.4-bundle_2.12))
 and Spark 3.0 
([hudi-spark3.0-bundle_2.12](https://mvnrepository.com/artifact/org.apache.hudi/hudi-spark3.0-bundle_2.12)).
-Please note that, the support for Spark 3.0 had been discontinued after Hudi 
version 0.10.1, but due to strong community 
+Please note that support for Spark 3.0 was discontinued after Hudi version 0.10.1, but due to strong community
 interest, it has been reinstated in this release.
 
 ### Breaking Changes
 
 #### INSERT INTO behavior with Spark SQL
-Before version 0.14.0, data ingested through `INSERT INTO` in Spark SQL 
followed the upsert flow, where multiple versions 
-of records would be merged into one version. However, starting from 0.14.0, 
we've altered the default behavior of 
-`INSERT INTO` to utilize the `insert` flow internally. This change 
significantly enhances write performance as it 
+Before version 0.14.0, data ingested through `INSERT INTO` in Spark SQL 
followed the upsert flow, where multiple versions
+of records would be merged into one version. However, starting from 0.14.0, 
we've altered the default behavior of
+`INSERT INTO` to utilize the `insert` flow internally. This change 
significantly enhances write performance as it
 bypasses index lookups.
 
-If a table is created with a *preCombine* key, the default operation for 
`INSERT INTO` remains as `upsert`. Conversely, 
-if no *preCombine* key is set, the underlying write operation for `INSERT 
INTO` defaults to `insert`. Users have the 
-flexibility to override this behavior by explicitly setting values for the 
config 
-[`hoodie.spark.sql.insert.into.operation`](https://hudi.apache.org/docs/configurations#hoodiesparksqlinsertintooperation)
 
+If a table is created with a *preCombine* key, the default operation for 
`INSERT INTO` remains as `upsert`. Conversely,
+if no *preCombine* key is set, the underlying write operation for `INSERT 
INTO` defaults to `insert`. Users have the
+flexibility to override this behavior by explicitly setting values for the 
config
+[`hoodie.spark.sql.insert.into.operation`](https://hudi.apache.org/docs/configurations#hoodiesparksqlinsertintooperation)
 as per their requirements. Possible values for this config include `insert`, 
`bulk_insert`, and `upsert`.
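
As a minimal Spark SQL sketch (the table and column names are hypothetical), overriding the default operation could look like:

```sql
-- Hypothetical session-level override: route INSERT INTO through bulk_insert
-- instead of the default insert/upsert selection described above.
SET hoodie.spark.sql.insert.into.operation = bulk_insert;

INSERT INTO hudi_orders_tbl
SELECT order_id, customer_id, amount, order_ts FROM staging_orders;
```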
 
 Additionally, in version 0.14.0, we have **deprecated** two related older 
configs:
@@ -63,22 +70,22 @@ Additionally, in version 0.14.0, we have **deprecated** two 
related older config
 ### Behavior changes
 
 #### Simplified duplicates handling with Inserts in Spark SQL
-In cases where the operation type is configured as `insert` for the Spark SQL 
`INSERT INTO` flow, users now have the 
-option to enforce a duplicate policy using the configuration setting 
-[`hoodie.datasource.insert.dup.policy`](https://hudi.apache.org/docs/configurations#hoodiedatasourceinsertduppolicy).
 
-This policy determines the action taken when incoming records being ingested 
already exist in storage. The available 
+In cases where the operation type is configured as `insert` for the Spark SQL 
`INSERT INTO` flow, users now have the
+option to enforce a duplicate policy using the configuration setting
+[`hoodie.datasource.insert.dup.policy`](https://hudi.apache.org/docs/configurations#hoodiedatasourceinsertduppolicy).
+This policy determines the action taken when incoming records being ingested 
already exist in storage. The available
 values for this configuration are as follows:
 
 - `none`: No specific action is taken, allowing duplicates to exist in the 
Hudi table if the incoming records contain duplicates.
 - `drop`: Matching records from the incoming writes will be dropped, and the 
remaining ones will be ingested.
-- `fail`: The write operation will fail if the same records are re-ingested. 
In essence, a given record, as determined 
-by the key generation policy, can only be ingested once into the target table.
+- `fail`: The write operation will fail if the same records are re-ingested. 
In essence, a given record, as determined
+  by the key generation policy, can only be ingested once into the target 
table.
 
-With this addition, an older related configuration setting, 
-[`hoodie.datasource.write.insert.drop.duplicates`](https://hudi.apache.org/docs/configurations#hoodiedatasourcewriteinsertdropduplicates),
 
-will be deprecated. The newer configuration will take precedence over the old 
one when both are specified. If no specific 
-configurations are provided, the default value for the newer configuration 
will be assumed. Users are strongly encouraged 
-to migrate to the use of these newer configurations when using Spark SQL. 
+With this addition, an older related configuration setting,
+[`hoodie.datasource.write.insert.drop.duplicates`](https://hudi.apache.org/docs/configurations#hoodiedatasourcewriteinsertdropduplicates),
+will be deprecated. The newer configuration will take precedence over the old 
one when both are specified. If no specific
+configurations are provided, the default value for the newer configuration 
will be assumed. Users are strongly encouraged
+to migrate to the use of these newer configurations when using Spark SQL.
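
A minimal sketch of the newer policy in Spark SQL (table and column names are hypothetical):

```sql
-- Hypothetical example: with the insert operation, drop incoming records
-- whose keys already exist in storage rather than writing duplicates.
SET hoodie.spark.sql.insert.into.operation = insert;
SET hoodie.datasource.insert.dup.policy = drop;

INSERT INTO hudi_events_tbl
SELECT event_id, event_type, event_ts FROM staging_events;
```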
 
 :::caution
 This is only applicable to Spark SQL writing.
@@ -86,26 +93,26 @@ This is only applicable to Spark SQL writing.
 
 
 #### Compaction with MOR table
-For Spark batch writers (both the Spark datasource and Spark SQL), compaction 
is automatically enabled by default for 
-MOR (Merge On Read) tables, unless users explicitly override this behavior. 
Users have the option to disable compaction 
-explicitly by setting 
[`hoodie.compact.inline`](https://hudi.apache.org/docs/configurations#hoodiecompactinline)
 to false. 
-In case users do not override this configuration, compaction may be triggered 
for MOR tables approximately once every 
-5 delta commits (the default value for 
+For Spark batch writers (both the Spark datasource and Spark SQL), compaction 
is automatically enabled by default for
+MOR (Merge On Read) tables, unless users explicitly override this behavior. 
Users have the option to disable compaction
+explicitly by setting 
[`hoodie.compact.inline`](https://hudi.apache.org/docs/configurations#hoodiecompactinline)
 to false.
+In case users do not override this configuration, compaction may be triggered 
for MOR tables approximately once every
+5 delta commits (the default value for
 
[`hoodie.compact.inline.max.delta.commits`](https://hudi.apache.org/docs/configurations#hoodiecompactinlinemaxdeltacommits)).
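
A hedged sketch of opting out of this default in a Spark SQL session:

```sql
-- Disable automatic inline compaction for MOR table writes in this session;
-- the commented line shows tuning the trigger instead (default is 5).
SET hoodie.compact.inline = false;
-- SET hoodie.compact.inline.max.delta.commits = 3;
```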
 
 #### `HoodieDeltaStreamer` renamed to `HoodieStreamer` (Hudi Streamer)
 
 Starting from version 0.14.0, we have renamed 
[HoodieDeltaStreamer](https://github.com/apache/hudi/blob/84a80e21b5f0cdc1f4a33957293272431b221aa9/hudi-utilities/src/main/java/org/apache/hudi/utilities/deltastreamer/HoodieDeltaStreamer.java)
-to 
[`HoodieStreamer`](https://github.com/apache/hudi/blob/84a80e21b5f0cdc1f4a33957293272431b221aa9/hudi-utilities/src/main/java/org/apache/hudi/utilities/streamer/HoodieStreamer.java).
 
-We have ensured backward compatibility so that existing user jobs remain 
unaffected. However, in upcoming 
-releases, support for Deltastreamer might be discontinued. Hence, we strongly 
advise users to transition to using 
+to 
[`HoodieStreamer`](https://github.com/apache/hudi/blob/84a80e21b5f0cdc1f4a33957293272431b221aa9/hudi-utilities/src/main/java/org/apache/hudi/utilities/streamer/HoodieStreamer.java).
+We have ensured backward compatibility so that existing user jobs remain 
unaffected. However, in upcoming
+releases, support for DeltaStreamer might be discontinued. Hence, we strongly advise users to transition to using
 `HoodieStreamer` instead.
 
 
-#### MERGE INTO JOIN condition 
-Starting from version 0.14.0, Hudi has the capability to automatically 
generate primary record keys when users do not 
-provide explicit specifications. This enhancement enables the `MERGE INTO 
JOIN` clause to reference any data column for 
-the join condition in Hudi tables where the primary keys are generated by Hudi 
itself. However, in cases where users 
+#### MERGE INTO JOIN condition
+Starting from version 0.14.0, Hudi has the capability to automatically 
generate primary record keys when users do not
+provide explicit specifications. This enhancement enables the `MERGE INTO 
JOIN` clause to reference any data column for
+the join condition in Hudi tables where the primary keys are generated by Hudi 
itself. However, in cases where users
 configure the primary record key, the join condition still expects the primary 
key fields as specified by the user.
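
As an illustrative sketch (table and column names are hypothetical), a MERGE on a table with Hudi-generated keys could join on an ordinary data column:

```sql
-- Hypothetical example: order_id is a plain data column, usable in the join
-- condition because the record keys are generated by Hudi itself.
MERGE INTO hudi_orders_tbl AS t
USING staging_orders AS s
ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```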
 
 
@@ -113,29 +120,29 @@ configure the primary record key, the join condition 
still expects the primary k
 
 ### Record Level Index
 Hudi version 0.14.0 introduces a new index implementation -
-[Record Level 
Index](https://github.com/apache/hudi/blob/master/rfc/rfc-8/rfc-8.md#rfc-8-metadata-based-record-index).
 
-The Record level Index significantly enhances write performance for large 
tables by efficiently storing per-record 
-locations and enabling swift retrieval during index lookup operations. It can 
effectively replace other 
-[Global 
indices](https://hudi.apache.org/docs/next/indexing#global-and-non-global-indexes)
 like Global_bloom, 
+[Record Level 
Index](https://github.com/apache/hudi/blob/master/rfc/rfc-8/rfc-8.md#rfc-8-metadata-based-record-index).
+The Record Level Index significantly enhances write performance for large tables by efficiently storing per-record
+locations and enabling swift retrieval during index lookup operations. It can 
effectively replace other
+[Global 
indices](https://hudi.apache.org/docs/next/indexing#global-and-non-global-indexes)
 like Global_bloom,
 Global_Simple, or Hbase, commonly used in Hudi.
 
-Bloom and Simple Indexes exhibit slower performance for large datasets due to 
the high costs associated with gathering 
-index data from various data files during lookup. Moreover, these indexes do 
not preserve a one-to-one record-key to 
-record file path mapping; instead, they deduce the mapping through an 
optimized search at lookup time. The per-file 
+Bloom and Simple Indexes exhibit slower performance for large datasets due to 
the high costs associated with gathering
+index data from various data files during lookup. Moreover, these indexes do 
not preserve a one-to-one record-key to
+record file path mapping; instead, they deduce the mapping through an 
optimized search at lookup time. The per-file
 overhead required by these indexes makes them less effective for datasets with 
a larger number of files or records.
 
-On the other hand, the Hbase Index saves a one-to-one mapping for each record 
key, resulting in fast performance that 
-scales with the dataset size. However, it necessitates a separate HBase 
cluster for maintenance, which is operationally 
+On the other hand, the HBase Index saves a one-to-one mapping for each record key, resulting in fast performance that
+scales with the dataset size. However, it necessitates a separate HBase 
cluster for maintenance, which is operationally
 challenging and resource-intensive, requiring specialized expertise.
 
-The Record Index combines the speed and scalability of the HBase Index without 
its limitations and overhead. Being a 
-part of the HUDI Metadata Table, any future performance enhancements in writes 
and queries will automatically translate 
-into improved performance for the Record Index. Adopting the Record Level 
Index has the potential to boost index lookup 
+The Record Index combines the speed and scalability of the HBase Index without 
its limitations and overhead. Being a
+part of the HUDI Metadata Table, any future performance enhancements in writes 
and queries will automatically translate
+into improved performance for the Record Index. Adopting the Record Level 
Index has the potential to boost index lookup
 performance by 4 to 10 times, depending on the workload, even for extremely 
large-scale datasets (e.g., 1TB).
 
-With the Record Level Index, significant performance improvements can be 
observed for large datasets, as latency is 
-directly proportional to the amount of data being ingested. This is in 
contrast to other Global indices where index 
-lookup time increases linearly with the table size. The Record Level Index is 
specifically designed to efficiently 
+With the Record Level Index, significant performance improvements can be 
observed for large datasets, as latency is
+directly proportional to the amount of data being ingested. This is in 
contrast to other Global indices where index
+lookup time increases linearly with the table size. The Record Level Index is 
specifically designed to efficiently
 handle lookups for such large-scale data without a linear increase in lookup 
times as the table size grows.
 
 To harness the benefits of this lightning-fast index, users need to enable two 
configurations:
@@ -144,85 +151,85 @@ To harness the benefits of this lightning-fast index, 
users need to enable two c
 
 
 ### Support for Hudi tables with Autogenerated keys
-Since the initial official version of Hudi, the primary key was a mandatory 
field that users needed to configure for any 
-Hudi table. Starting 0.14.0, we are relaxing this constraint. This enhancement 
addresses a longstanding need within the 
-community, where certain use-cases didn't naturally possess an intrinsic 
primary key. Version 0.14.0 now offers the 
-flexibility for users to create a Hudi table without the need to explicitly 
configure a primary key (by omitting the 
+Since the initial official version of Hudi, the primary key was a mandatory 
field that users needed to configure for any
+Hudi table. Starting with 0.14.0, we are relaxing this constraint. This enhancement addresses a longstanding need within the
+community, where certain use-cases didn't naturally possess an intrinsic 
primary key. Version 0.14.0 now offers the
+flexibility for users to create a Hudi table without the need to explicitly 
configure a primary key (by omitting the
 configuration setting -
-[`hoodie.datasource.write.recordkey.field`](https://hudi.apache.org/docs/configurations#hoodiedatasourcewriterecordkeyfield)).
 
-Hudi will **automatically generate the primary keys** in such cases. This 
feature is applicable only for new tables and 
+[`hoodie.datasource.write.recordkey.field`](https://hudi.apache.org/docs/configurations#hoodiedatasourcewriterecordkeyfield)).
+Hudi will **automatically generate the primary keys** in such cases. This 
feature is applicable only for new tables and
 cannot be altered for existing ones.
 
 
-This functionality is available in all spark writers with certain limitations. 
For append only type of use cases, Inserts and 
-bulk_inserts are allowed with all four writers - Spark Datasource, Spark SQL, 
Spark Streaming, Hudi Streamer. Updates and 
-Deletes are supported only using spark-sql `MERGE INTO` , `UPDATE` and 
`DELETE` statements. With Spark Datasource, `UPDATE` 
-and `DELETE` are supported only when the source dataframe contains Hudi's meta 
fields. Please check out our 
-[quick start guide](https://hudi.apache.org/docs/quick-start-guide) for code 
snippets on Hudi table CRUD operations where 
+This functionality is available in all Spark writers with certain limitations. 
For append-only use cases, inserts and
+bulk_inserts are allowed with all four writers - Spark Datasource, Spark SQL, 
Spark Streaming, Hudi Streamer. Updates and
+Deletes are supported only using Spark SQL `MERGE INTO`, `UPDATE` and 
`DELETE` statements. With Spark Datasource, `UPDATE`
+and `DELETE` are supported only when the source dataframe contains Hudi's meta 
fields. Please check out our
+[quick start guide](https://hudi.apache.org/docs/quick-start-guide) for code 
snippets on Hudi table CRUD operations where
 keys are autogenerated.
 
 ### Spark 3.4 version support
 
-Spark 3.4 support is added; users who are on Spark 3.4 can use 
-[hudi-spark3.4-bundle](https://mvnrepository.com/artifact/org.apache.hudi/hudi-spark3.4-bundle).
 Spark 3.2, Spark 3.1, 
-Spark3.0 and Spark 2.4 will continue to be supported. Please check the 
migration guide for bundle updates. To quickly get 
+Spark 3.4 support is added; users who are on Spark 3.4 can use
+[hudi-spark3.4-bundle](https://mvnrepository.com/artifact/org.apache.hudi/hudi-spark3.4-bundle).
 Spark 3.2, Spark 3.1,
+Spark 3.0 and Spark 2.4 will continue to be supported. Please check the 
migration guide for bundle updates. To quickly get
 started with Hudi and Spark 3.4, you can explore our [quick start 
guide](https://hudi.apache.org/docs/quick-start-guide).
 
 ### Query side improvements:
 
 #### Metadata table support with Athena
 
-Users now have the ability to utilize Hudi’s [Metadata 
table](https://hudi.apache.org/docs/metadata/) seamlessly with Athena. 
-The file listing index removes the need for recursive file system calls like 
"list files" by retrieving information 
-from an index that maintains a mapping of partitions to files. This approach 
proves to be highly efficient, particularly 
-when dealing with extensive datasets. With Hudi 0.14.0, users can activate 
file listing based on the metadata table when 
-performing Glue catalog synchronization for their Hudi tables. To enable this 
functionality, users can configure 
+Users now have the ability to utilize Hudi’s [Metadata 
table](https://hudi.apache.org/docs/metadata/) seamlessly with Athena.
+The file listing index removes the need for recursive file system calls like 
"list files" by retrieving information
+from an index that maintains a mapping of partitions to files. This approach 
proves to be highly efficient, particularly
+when dealing with extensive datasets. With Hudi 0.14.0, users can activate 
file listing based on the metadata table when
+performing Glue catalog synchronization for their Hudi tables. To enable this 
functionality, users can configure
 `hoodie.datasource.meta.sync.glue.metadata_file_listing` and set it to true 
during the Glue sync process.
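As a sketch of the flag described above, the following checks whether a set of sync properties enables metadata-table-based file listing. Only the config key `hoodie.datasource.meta.sync.glue.metadata_file_listing` comes from the text; the dict shape and helper are illustrative, not part of Hudi's API.

```python
# Illustrative only: the config key is from the release note above; the
# surrounding dict and helper are assumptions, not a Hudi API.
glue_sync_options = {
    "hoodie.datasource.meta.sync.glue.metadata_file_listing": "true",
}

def uses_metadata_file_listing(options: dict) -> bool:
    """True when Glue sync is configured to list files via the metadata table."""
    return options.get(
        "hoodie.datasource.meta.sync.glue.metadata_file_listing", "false"
    ) == "true"
```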
 
 #### Leverage Parquet bloom filters w/ read queries
-In Hudi 0.14.0, users can now utilize the native 
-[Parquet bloom 
filters](https://github.com/apache/parquet-format/blob/1603152f8991809e8ad29659dffa224b4284f31b/BloomFilter.md),
 
-provided their compute engine supports Apache Parquet 1.12.0 or higher. This 
support covers both the writing and reading 
-of datasets. Hudi facilitates the use of native Parquet bloom filters through 
Hadoop configuration. Users are required 
-to set a Hadoop configuration with a specific key representing the column for 
which the bloom filter is to be applied. 
-For example, `parquet.bloom.filter.enabled#rider=true` creates a bloom filter 
for the rider column. Whenever a query 
+In Hudi 0.14.0, users can now utilize the native
+[Parquet bloom 
filters](https://github.com/apache/parquet-format/blob/1603152f8991809e8ad29659dffa224b4284f31b/BloomFilter.md),
+provided their compute engine supports Apache Parquet 1.12.0 or higher. This 
support covers both the writing and reading
+of datasets. Hudi facilitates the use of native Parquet bloom filters through 
Hadoop configuration. Users are required
+to set a Hadoop configuration with a specific key representing the column for 
which the bloom filter is to be applied.
+For example, `parquet.bloom.filter.enabled#rider=true` creates a bloom filter 
for the rider column. Whenever a query
 involves a predicate on the rider column, the bloom filter comes into play, 
enhancing read performance.
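The per-column key format described above (column name appended to the key after a `#`) can be built like this; the helper itself is hypothetical, only the key format and the `rider` example come from the text.

```python
def parquet_bloom_filter_entry(column: str, enabled: bool = True) -> tuple:
    # The column name is appended to the Hadoop config key after a '#',
    # e.g. parquet.bloom.filter.enabled#rider=true enables it for 'rider'.
    return (
        f"parquet.bloom.filter.enabled#{column}",
        "true" if enabled else "false",
    )
```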
 
 #### Incremental queries with multi-writers
-In multi-writer scenarios, there can be instances of gaps in the timeline 
(requests or inflight instants that are not 
-the latest instant) due to concurrent writing activities. These gaps may 
result in inconsistent outcomes when 
-performing incremental queries. To address this issue, Hudi 0.14.0 introduces 
a new configuration setting, 
-[`hoodie.read.timeline.holes.resolution.policy`](https://hudi.apache.org/docs/configurations#hoodiereadtimelineholesresolutionpolicy),
 
+In multi-writer scenarios, there can be instances of gaps in the timeline 
(requests or inflight instants that are not
+the latest instant) due to concurrent writing activities. These gaps may 
result in inconsistent outcomes when
+performing incremental queries. To address this issue, Hudi 0.14.0 introduces 
a new configuration setting,
+[`hoodie.read.timeline.holes.resolution.policy`](https://hudi.apache.org/docs/configurations#hoodiereadtimelineholesresolutionpolicy),
 specifically designed for handling these inconsistencies in incremental 
queries. The configuration provides three possible policies:
 - `FAIL`: This serves as the default policy and throws an exception when such 
timeline gaps are identified during an incremental query.
-- `BLOCK`: In this policy, the results of an incremental query are limited to 
the time range between the holes in the 
-   timeline. For instance, if a gap is detected at instant t1 within the 
incremental query range from t0 to t2, the 
-   query will only display results between t0 and t1 without failing.
-- `USE_TRANSITION_TIME`: This policy is experimental and involves using the 
state transition time, which is based on the 
-   file modification time of commit metadata files in the timeline, during the 
incremental query.
+- `BLOCK`: In this policy, the results of an incremental query are limited to 
the time range between the holes in the
+  timeline. For instance, if a gap is detected at instant t1 within the 
incremental query range from t0 to t2, the
+  query will only display results between t0 and t1 without failing.
+- `USE_TRANSITION_TIME`: This policy is experimental and involves using the 
state transition time, which is based on the
+  file modification time of commit metadata files in the timeline, during the 
incremental query.
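The three policies above can be modeled roughly as follows; this is a toy illustration of the described behavior, not Hudi's implementation.

```python
def resolve_incremental_range(policy, start, end, gap=None):
    """Toy model of hoodie.read.timeline.holes.resolution.policy semantics."""
    if gap is None or not (start < gap < end):
        return (start, end)  # no hole inside the query range
    if policy == "FAIL":
        raise RuntimeError(f"timeline gap at instant {gap}")
    if policy == "BLOCK":
        return (start, gap)  # truncate at the first hole (the t0..t1 example)
    if policy == "USE_TRANSITION_TIME":
        # Hudi switches to state transition time here; modeled as keeping
        # the full range for illustration.
        return (start, end)
    raise ValueError(f"unknown policy: {policy}")
```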
 
 #### Timestamp support with Hive 3.x
-For quite some time, Hudi users encountered 
[challenges](https://issues.apache.org/jira/browse/HUDI-83) regarding reading 
Timestamp type columns written by Spark and 
-subsequently attempting to read them with Hive 3.x. While in Hudi 0.13.x, we 
introduced a 
+For quite some time, Hudi users encountered 
[challenges](https://issues.apache.org/jira/browse/HUDI-83) regarding reading 
Timestamp type columns written by Spark and
+subsequently attempting to read them with Hive 3.x. While in Hudi 0.13.x, we 
introduced a
 
[workaround](https://github.com/apache/hudi/commit/cd314b8cfa58c32f731f7da2aa6377a09df4c6f9#diff-cff4dfc264f7abcac63a5ba5db55b38115177fe279ab35807d345c2b8872475e)
 to mitigate this issue, version 0.14.0 now ensures full compatibility of 
HiveAvroSerializer with Hive 3.x to resolve this.
 
 #### Google BigQuery sync enhancements
-With 0.14.0, the [BigQuerySyncTool](https://hudi.apache.org/docs/gcp_bigquery) 
supports syncing table to BigQuery 
+With 0.14.0, the [BigQuerySyncTool](https://hudi.apache.org/docs/gcp_bigquery) 
supports syncing tables to BigQuery
 using 
[manifests](https://cloud.google.com/blog/products/data-analytics/bigquery-manifest-file-support-for-open-table-format-queries).
-This is expected to have better query performance compared to legacy way. 
Schema evolution is supported with the manifest approach. 
-Partition column no longer needs to be dropped from the files due to new 
schema handling improvements. To enable this 
-feature, users can set 
-[`hoodie.gcp.bigquery.sync.use_bq_manifest_file`](https://hudi.apache.org/docs/configurations#hoodiegcpbigquerysyncuse_bq_manifest_file)
 
+This is expected to have better query performance compared to the legacy way. 
Schema evolution is supported with the manifest approach.
+Partition column no longer needs to be dropped from the files due to new 
schema handling improvements. To enable this
+feature, users can set
+[`hoodie.gcp.bigquery.sync.use_bq_manifest_file`](https://hudi.apache.org/docs/configurations#hoodiegcpbigquerysyncuse_bq_manifest_file)
 to true.
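A minimal sketch of that toggle, assuming a plain dict of sync properties; only the config key `hoodie.gcp.bigquery.sync.use_bq_manifest_file` is from the text above.

```python
def bigquery_sync_options(use_manifest: bool) -> dict:
    # Only the config key is from the release note; the helper is a sketch.
    return {
        "hoodie.gcp.bigquery.sync.use_bq_manifest_file": (
            "true" if use_manifest else "false"
        )
    }
```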
 
 ### Spark read side improvements
 
 #### Snapshot read support for MOR Bootstrap tables
-With 0.14.0, MOR snapshot read support is added for Bootstrapped tables. The 
default behavior has been changed in several 
-ways to match the behavior of non-bootstrapped MOR tables. Snapshot reads will 
now be the default reading mode. Use 
-`hoodie.datasource.query.type=read_optimized` for read optimized queries which 
was previously the default behavior. 
-Hive sync for such tables will result in both _ro and _rt suffixed to the 
table name to signify read optimized and snapshot 
+With 0.14.0, MOR snapshot read support is added for Bootstrapped tables. The 
default behavior has been changed in several
+ways to match the behavior of non-bootstrapped MOR tables. Snapshot reads will 
now be the default reading mode. Use
+`hoodie.datasource.query.type=read_optimized` for read optimized queries which 
was previously the default behavior.
+Hive sync for such tables will result in both _ro and _rt suffixes appended to the 
table name, signifying read-optimized and snapshot
 reading respectively.
 
 ####  Table-valued function named hudi_table_changes designed for incremental 
reading through Spark SQL
@@ -254,53 +261,53 @@ Checkout the 
[quickstart](/docs/quick-start-guide#incremental-query) for more ex
 
 #### New MOR file format reader in Spark:
 Based on a proposal from [RFC-72](https://github.com/apache/hudi/pull/9235) 
aimed at redesigning Hudi-Spark integration,
-we are introducing an experimental file format reader for MOR (Merge On Read) 
tables. This reader is expected to 
-significantly reduce read latencies by 20 to 40% when compared to the older 
file format, particularly for snapshot and 
-bootstrap queries. The goal is to bring the latencies closer to those of the 
COW (Copy On Write) file format. To utilize 
+we are introducing an experimental file format reader for MOR (Merge On Read) 
tables. This reader is expected to
+significantly reduce read latencies by 20 to 40% when compared to the older 
file format, particularly for snapshot and
+bootstrap queries. The goal is to bring the latencies closer to those of the 
COW (Copy On Write) file format. To utilize
 this new file format, users need to set 
`hoodie.datasource.read.use.new.parquet.file.format=true`. It's important to 
note
-that this feature is still experimental and comes with a few limitations. For 
more details and if you're interested in 
+that this feature is still experimental and comes with a few limitations. For 
more details and if you're interested in
 contributing, please refer to [this GitHub 
issue](https://github.com/apache/hudi/issues/16112).
 
 ### Spark write side improvements
 
 #### Bulk_Insert and row writer enhancements
 The 0.14.0 release provides support for using the bulk insert operation while 
performing SQL operations like `INSERT OVERWRITE TABLE`
-and `INSERT OVERWRITE PARTITION`. To enable bulk insert, set config 
-[`hoodie.spark.sql.insert.into.operation`](https://hudi.apache.org/docs/configurations#hoodiesparksqlinsertintooperation)
 
-to value `bulk_insert`. Bulk insert has better write performance compared to 
insert operation. Row writer support is 
+and `INSERT OVERWRITE PARTITION`. To enable bulk insert, set config
+[`hoodie.spark.sql.insert.into.operation`](https://hudi.apache.org/docs/configurations#hoodiesparksqlinsertintooperation)
+to value `bulk_insert`. Bulk insert has better write performance compared to 
the insert operation. Row writer support is
 also added for Simple bucket index.
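The config switch above can be sketched as follows; the `bulk_insert` value and the config key are from the text, while the plain `insert` fallback and the helper shape are assumptions.

```python
def sql_insert_options(bulk: bool) -> dict:
    # Config key and the `bulk_insert` value are from the release note above;
    # the `insert` fallback and this helper are illustrative assumptions.
    return {
        "hoodie.spark.sql.insert.into.operation": (
            "bulk_insert" if bulk else "insert"
        )
    }
```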
 
 ### Hudi Streamer enhancements
 
 #### Dynamic configuration updates
-When Hudi Streamer is run in continuous mode, the properties can be 
refreshed/updated before each sync calls. 
+When Hudi Streamer is run in continuous mode, the properties can be 
refreshed/updated before each sync call.
 Interested users can implement 
`org.apache.hudi.utilities.streamer.ConfigurationHotUpdateStrategy` to leverage 
this.
 
 #### SQL File based source for Hudi Streamer
-A new source - 
[SqlFileBasedSource](https://github.com/apache/hudi/blob/30146d61f5544f06e2100234b9bf9c5e4bc2a97f/hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/SqlFileBasedSource.java),
 
+A new source - 
[SqlFileBasedSource](https://github.com/apache/hudi/blob/30146d61f5544f06e2100234b9bf9c5e4bc2a97f/hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/SqlFileBasedSource.java),
 has been added to Hudi Streamer designed to facilitate one-time backfill 
scenarios.
 
 ### Flink Enhancements
 Below are the Flink Engine based enhancements in the 0.14.0 release.
 
 #### Consistent hashing index support
-In comparison to the static hashing index (BUCKET index), the consistent 
hashing index offers dynamic scalability of 
-data buckets for the writer. To utilize this feature, configure the option 
`index.type` as `BUCKET` and set 
+In comparison to the static hashing index (BUCKET index), the consistent 
hashing index offers dynamic scalability of
+data buckets for the writer. To utilize this feature, configure the option 
`index.type` as `BUCKET` and set
 `hoodie.index.bucket.engine` to `CONSISTENT_HASHING`.
 
-When enabling the consistent hashing index, it's important to activate 
clustering scheduling within the writer. 
-The clustering plan should be executed through an offline Spark job. During 
this process, the writer will perform dual writes 
-for both the old and new data buckets while the clustering is pending. 
Although the dual write does not impact correctness, 
+When enabling the consistent hashing index, it's important to activate 
clustering scheduling within the writer.
+The clustering plan should be executed through an offline Spark job. During 
this process, the writer will perform dual writes
+for both the old and new data buckets while the clustering is pending. 
Although the dual write does not impact correctness,
 it is strongly recommended to execute clustering as quickly as possible.
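Putting the two options named above together, a consistent hashing setup looks like this; the dict form is a sketch, not a Flink API.

```python
# Both option keys and values below are taken from the release note above;
# collecting them in a dict is illustrative only.
consistent_hashing_options = {
    "index.type": "BUCKET",
    "hoodie.index.bucket.engine": "CONSISTENT_HASHING",
}
```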
 
 #### Dynamic partition pruning for streaming read
-Before 0.14.0, the Flink streaming reader can not prune the datetime 
partitions correctly when the queries have 
-predicates with constant datetime filtering. Since this release, the Flink 
streaming queries have been fixed to support 
+Before 0.14.0, the Flink streaming reader could not prune the datetime 
partitions correctly when the queries have
+predicates with constant datetime filtering. Since this release, the Flink 
streaming queries have been fixed to support
 any pattern of filtering predicates, including but not limited to the datetime 
filtering.
 
 #### Simple bucket index table query speed up (with index fields)
-For a simple bucket index table, if the query takes equality filtering 
predicates on index key fields, Flink engine 
-would optimize the planning to only include the source data files from a very 
specific data bucket; such queries expect 
+For a simple bucket index table, if the query takes equality filtering 
predicates on index key fields, the Flink engine
+would optimize the planning to only include the source data files from a very 
specific data bucket; such queries expect
 to have nearly `hoodie.bucket.index.num.buckets` times performance improvement 
on average.
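The pruning described above can be modeled with a toy bucket function: an equality predicate on the index key maps to exactly one bucket, so only roughly 1/N of the data files need to be scanned. The hash below is a stand-in; Hudi's actual bucket hashing differs.

```python
def bucket_of(record_key: str, num_buckets: int) -> int:
    # Stand-in hash for illustration; Hudi's real bucket hashing differs.
    return sum(record_key.encode()) % num_buckets

def files_to_scan(record_key: str, num_buckets: int, files_per_bucket: dict) -> list:
    # An equality predicate on the index key selects exactly one bucket,
    # so only ~1/num_buckets of the data files are read.
    return files_per_bucket[bucket_of(record_key, num_buckets)]
```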
 
 #### Flink 1.17 support
@@ -308,7 +315,7 @@ Flink 1.17 is supported with a new compile maven profile 
`flink1.17`, adding pro
 Flink Hudi bundle jar to enable the integration with Flink 1.17.
 
 #### Update deletes statement for Flink
-`UPDATE` and `DELETE` statements have been integrated since this release for 
batch queries. Current only table that 
+`UPDATE` and `DELETE` statements have been integrated since this release for 
batch queries. Currently, only tables that
 define primary keys can handle these statements correctly.
 
 ```
@@ -322,18 +329,59 @@ UPDATE hudi_table SET age=19 WHERE UUID in ('id1', 'id2');
 DELETE FROM hudi_table WHERE age > 23;
 ```
 
-
-
 ### Java Enhancements
-Lot of write operations have been extended to support Java engine to bring it 
to parity with other engines. For eg, 
+
+Many write operations have been extended to support the Java engine to bring it 
to parity with other engines. For example,
 compaction, clustering, and metadata table support have been added to the Java 
engine in 0.14.0.
 
 ## Known Regressions
+
 In Hudi 0.14.0, when querying a table that uses ComplexKeyGenerator or 
CustomKeyGenerator, partition values are returned
-as string. Note that there is no type change on the storage i.e. partition 
fields are written in the user-defined type 
+as strings. Note that there is no type change on the storage, i.e., partition 
fields are written in the user-defined type
 on storage. However, this is a breaking change for the aforementioned key 
generators and will be fixed in 0.14.1 -
 [tracking issue](https://github.com/apache/hudi/issues/16251)
 
 ## Raw Release Notes
 
 The raw release notes are available 
[here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12352700).
+
+---
+
+## [Release 
0.14.1](https://github.com/apache/hudi/releases/tag/release-0.14.1) 
{#release-0141}
+
+## Migration Guide
+
+* This release (0.14.1) does not introduce any new table version, thus no 
migration is needed if you are on 0.14.0.
+* If migrating from an older release, please check the migration guide from 
the previous release notes, specifically
+  the upgrade instructions in [0.6.0](/releases/release-0.6.0),
+  [0.9.0](/releases/release-0.9.0), [0.10.0](/releases/release-0.10.0),
+  [0.11.0](/releases/release-0.11.0), [0.12.0](/releases/release-0.12.0), 
[0.13.0](/releases/release-0.13.0), and
+  [0.14.0](/releases/release-0.14#release-0140)
+
+### Bug fixes
+
+The 0.14.1 release is mainly intended for bug fixes and stability. The fixes span 
across many components, including
+
+* Hudi Streamer
+* Spark SQL
+* Spark datasource writer
+* Table services
+* Meta Syncs
+* Flink engine
+* Unit, functional, integration tests and CI
+
+## Known Regressions
+We discovered a regression in the Hudi 0.14.1 release related to ComplexKeyGenerator 
when the record key consists of one field.
+It can silently ingest duplicates if the table is upgraded from previous versions.
+
+:::tip
+Avoid upgrading any existing table to 0.14.1 if you are using 
ComplexKeyGenerator with a single field as record key and multiple partition 
fields.
+:::
+
+## Raw Release Notes
+
+The raw release notes are available 
[here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12353493)
+
+:::tip
+The 0.14.1 release also contains all the new features and bug fixes from 0.14.0, 
of which the release notes are [here](/releases/release-0.14#release-0140)
+:::
diff --git a/website/releases/release-0.15.0.md 
b/website/releases/release-0.15.md
similarity index 98%
rename from website/releases/release-0.15.0.md
rename to website/releases/release-0.15.md
index 37bce091b3d1..38cb2f11fd90 100644
--- a/website/releases/release-0.15.0.md
+++ b/website/releases/release-0.15.md
@@ -1,12 +1,17 @@
 ---
-title: "Release 0.15.0"
+title: "Release 0.15"
 layout: releases
 toc: true
 ---
-
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
+This page contains release notes for all Apache Hudi 0.15.x releases, 
including:
+
+- [Release 0.15.0](#release-0150)
+
+---
+
 ## [Release 0.15.0](https://github.com/apache/hudi/releases/tag/release-0.15.0)
 
 Apache Hudi 0.15.0 release brings enhanced engine integration, new features, 
and improvements in several areas. These
@@ -18,7 +23,7 @@ relevant [module and API changes](#module-and-api-changes) and
 
 ## Migration Guide
 
-This release keeps the same table version (`6`) as [0.14.0 
release](/releases/release-0.14.0), and there is no need for
+This release keeps the same table version (`6`) as [0.14.0 
release](/releases/release-0.14#release-0140), and there is no need for
 a table version upgrade if you are upgrading from 0.14.0. There are a
 few [module and API changes](#module-and-api-changes)
 and [behavior changes](#behavior-changes) as
@@ -317,8 +322,9 @@ partition `s3` scheme fixes the issue. We have added a fix 
to use `s3` scheme fo
 Catalog sync ([HUDI-7362](https://issues.apache.org/jira/browse/HUDI-7362)).
 
 ## Known Regressions
-The Hudi 0.15.0 release introduces a regression related to Complex Key 
generation when the record key consists of a 
-single field. This issue was also present in version 0.14.1. When upgrading a 
table from previous versions, 
+
+The Hudi 0.15.0 release introduces a regression related to Complex Key 
generation when the record key consists of a
+single field. This issue was also present in version 0.14.1. When upgrading a 
table from previous versions,
 it may silently ingest duplicate records.
 
 :::tip
diff --git a/website/releases/release-1.0.1.md 
b/website/releases/release-1.0.1.md
deleted file mode 100644
index 180b260e9fed..000000000000
--- a/website/releases/release-1.0.1.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: "Release 1.0.1"
-layout: releases
-toc: true
-last_modified_at: 2024-02-10T13:00:00-08:00
----
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## [Release 1.0.1](https://github.com/apache/hudi/releases/tag/release-1.0.1)
-
-## Migration Guide
-
-* This release (1.0.1) does not introduce any new table version, thus no 
migration is needed if you are on 1.0.0.
-* If migrating from an older release, please check the migration guide from 
the previous release notes, specifically
-  the upgrade instructions in [0.6.0](/releases/release-0.6.0),
-  [0.9.0](/releases/release-0.9.0), [0.10.0](/releases/release-0.10.0),
-  [0.11.0](/releases/release-0.11.0), [0.12.0](/releases/release-0.12.0), 
[0.13.0](/releases/release-0.13.0),
-  [0.14.0](/releases/release-0.14.0) and [1.0.0](/releases/release-1.0.0)
-
-### Bug fixes
-
-1.0.1 release is mainly intended for bug fixes and stability. The fixes span 
across many components, including
-
-* Hudi Streamer
-* Spark SQL
-* Spark datasource writer
-* Table services
-* Backwards compatible writer
-* Flink engine
-* Unit, functional, integration tests and CI
-
-## Known Regressions
-We have a ComplexKeyGenerator related regression reported 
[here](release-0.14.1#known-regressions). Please refrain from migrating, if you 
have single field as record key and multiple partition fields.
-
-:::tip
-Avoid upgrading any existing table to 1.0.1 if you are using 
ComplexKeyGenerator with single record key configured.
-:::
-
-## Raw Release Notes
-
-The raw release notes are available 
[here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12355195)
-
-:::tip
-1.0.1 release also contains all the new features and bug fixes from 1.0.0, of 
which the release notes are [here](/releases/release-1.0.0)
-:::
diff --git a/website/releases/release-1.0.2.md 
b/website/releases/release-1.0.2.md
deleted file mode 100644
index f2bf7854e681..000000000000
--- a/website/releases/release-1.0.2.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: "Release 1.0.2"
-layout: releases
-toc: true
-last_modified_at: 2024-05-02T18:00:00-08:00
----
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## [Release 1.0.2](https://github.com/apache/hudi/releases/tag/release-1.0.2)
-
-## Migration Guide
-
-* This release (1.0.2) does not introduce any new table version, thus no 
migration is needed if you are on 1.0.1.
-* If migrating from an older release, please check the migration guide from 
the previous release notes, specifically
-  the upgrade instructions in [0.6.0](/releases/release-0.6.0),
-  [0.9.0](/releases/release-0.9.0), [0.10.0](/releases/release-0.10.0),
-  [0.11.0](/releases/release-0.11.0), [0.12.0](/releases/release-0.12.0), 
[0.13.0](/releases/release-0.13.0),
-  [0.14.0](/releases/release-0.14.0) and [1.0.0](/releases/release-1.0.0)
-
-### Bug fixes
-
-The 1.0.2 release primarily focuses on bug fixes, stability enhancements, and 
critical improvements, particularly around migration and backwards 
compatibility. The changes span across various components, including:
-
-* **Metadata Table (MDT):** Numerous fixes and improvements related to 
validation, writing, reading, compaction, indexing (column stats), and 
backwards compatibility (especially for table version 6).
-* **Spark Integration:** Enhancements and fixes for Spark SQL (MERGE INTO, 
query behavior), datasource reader/writer, schema handling, performance, and 
backward compatibility.
-* **Backwards Compatibility:** Significant effort ensuring compatibility with 
older table versions (specifically v6) and smoother upgrades from 0.x versions, 
including dedicated writers/readers.
-* **File Group Reader:** Validation, fixes, and feature completeness 
improvements, including making it default for table version 6.
-* **Flink Engine:** Fixes and improvements related to streamer checkpoints and 
bundle validation.
-* **Compaction and Table Services:** Fixes related to compaction scheduling, 
execution (especially with global index or RLI), archival, and cleanup.
-* **Indexing:** Fixes and enhancements for Column Stats, Record Level Index 
(RLI), and Bloom Filters.
-* **Performance:** Optimizations in areas like log file writing, schema reuse, 
and metadata initialization.
-* **Testing, CI, and Dependencies:** Fixes for flaky tests, improved code 
coverage, bundle validation, dependency cleanup (HBase removal), and extensive 
release testing.
-
-## Known Regressions
-We have a ComplexKeyGenerator related regression reported 
[here](release-0.14.1#known-regressions). Please refrain from migrating, if you 
have single field as record key and mutiple partition fields.
-
-:::tip
-Avoid upgrading any existing table to 1.0.2 if you are using 
ComplexKeyGenerator with single record key configured.
-:::
-
-## Raw Release Notes
-
-The raw release notes are available 
[here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12355558)
-
-:::tip
-1.0.2 release also contains all the new features and bug fixes from 1.0.1, of 
which the release notes are [here](/releases/release-1.0.1)
-:::
diff --git a/website/releases/release-1.0.0.md b/website/releases/release-1.0.md
similarity index 62%
rename from website/releases/release-1.0.0.md
rename to website/releases/release-1.0.md
index a5a2e824fb5b..8de908cbc0ec 100644
--- a/website/releases/release-1.0.0.md
+++ b/website/releases/release-1.0.md
@@ -1,41 +1,48 @@
 ---
-title: "Release 1.0.0"
+title: "Release 1.0"
 layout: releases
 toc: true
 ---
-
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-## [Release 1.0.0](https://github.com/apache/hudi/releases/tag/release-1.0.0)
+This page contains release notes for all Apache Hudi 1.0.x releases, including:
+
+- [Release 1.0.0](#release-100)
+- [Release 1.0.1](#release-101)
+- [Release 1.0.2](#release-102)
+
+---
+
+## [Release 1.0.0](https://github.com/apache/hudi/releases/tag/release-1.0.0) 
{#release-100}
 
-Apache Hudi 1.0.0 is a major milestone release of Apache Hudi. This release 
contains significant format changes and new exciting features 
+Apache Hudi 1.0.0 is a major milestone release of Apache Hudi. This release 
contains significant format changes and new exciting features
 as we will see below.
 
 ## Migration Guide
 
 We encourage users to try the **1.0.0** features on new tables first. The 1.0 
general availability (GA) release will
 support automatic table upgrades from 0.x versions while also ensuring full 
backward compatibility when reading 0.x
-Hudi tables using 1.0, ensuring a seamless migration experience. 
+Hudi tables using 1.0, ensuring a seamless migration experience.
 
 This release comes with **backward compatible writes** i.e. 1.0.0 can write in 
both the table version 8 (latest) and older
 table version 6 (corresponds to 0.14 & above) formats. Automatic upgrades for 
tables from 0.x versions are fully
-supported, minimizing migration challenges. Until all the readers are 
upgraded, users can still deploy 1.0.0 binaries 
+supported, minimizing migration challenges. Until all the readers are 
upgraded, users can still deploy 1.0.0 binaries
 for the writers and leverage backward compatible writes to continue writing 
the tables in the older format. Once the readers
-are fully upgraded, users can switch to the latest format through a config 
change. We recommend users to follow the upgrade 
+are fully upgraded, users can switch to the latest format through a config 
change. We recommend that users follow the upgrade
 steps mentioned in the [migration guide](/docs/deployment#upgrading-to-100) to 
ensure a smooth transition.
 
 :::caution
-Most things are seamlessly handled by the auto upgrade process, but there are 
some limitations. Please read through the 
-limitations of the upgrade downgrade process before proceeding to migrate. 
Please check the [migration guide](/docs/deployment#upgrading-to-100) 
+Most things are seamlessly handled by the auto upgrade process, but there are 
some limitations. Please read through the
+limitations of the upgrade/downgrade process before proceeding to migrate. 
Please check the [migration guide](/docs/deployment#upgrading-to-100)
 and 
[RFC-78](https://github.com/apache/hudi/blob/master/rfc/rfc-78/rfc-78.md#support-matrix-for-different-readers-and-writers)
 for more details.
 :::
 
 ## Bundle Updates
 
- - Same bundles supported in the [0.15.0 
release](release-0.15.0#new-spark-bundles) are still supported.
- - New Flink Bundles to support Flink 1.19 and Flink 1.20.
- - In this release, we have deprecated support for Spark 3.2 or lower version 
in Spark 3.
+- Same bundles supported in the [0.15.0 
release](release-0.15#new-spark-bundles) are still supported.
+- New Flink Bundles to support Flink 1.19 and Flink 1.20.
+- In this release, we have deprecated support for Spark 3.2 and lower 
versions of Spark 3.
 
 ## Highlights
 
@@ -171,10 +178,91 @@ experience the new features and enhancements.
 
 ## Known Regressions
 - We discovered a regression in Hudi 1.0.0 release for backwards compatible 
writer for MOR table.
-It can silently deletes committed data after upgrade when new data is ingested 
to the table.
-- We also have a ComplexKeyGenerator related regression reported 
[here](release-0.14.1#known-regressions). Please refrain from migrating, if you 
have single field as record key and multiple fields as partition fields.
+  It can silently delete committed data after upgrade when new data is 
ingested to the table.
+- We also have a ComplexKeyGenerator-related regression reported 
[here](release-0.14#known-regressions). Please refrain from migrating if you 
have a single field as the record key and multiple fields as partition fields.
+
+:::tip
+Avoid upgrading any existing table to 1.0.0 if any of the above scenarios 
matches your workload. In the case of the backwards compatible writer for MOR 
tables, you can safely upgrade to the 1.0.2 release.
+:::
+
+---
+
+## [Release 1.0.1](https://github.com/apache/hudi/releases/tag/release-1.0.1) 
{#release-101}
+
+## Migration Guide
+
+* This release (1.0.1) does not introduce any new table version, thus no 
migration is needed if you are on 1.0.0.
+* If migrating from an older release, please check the migration guide from 
the previous release notes, specifically
+  the upgrade instructions in [0.6.0](/releases/release-0.6.0),
+  [0.9.0](/releases/release-0.9.0), [0.10.0](/releases/release-0.10.0),
+  [0.11.0](/releases/release-0.11.0), [0.12.0](/releases/release-0.12.0), 
[0.13.0](/releases/release-0.13.0),
+  [0.14.0](/releases/release-0.14#release-0140) and 
[1.0.0](/releases/release-1.0#release-100)
+
+### Bug fixes
+
+The 1.0.1 release is mainly intended for bug fixes and stability. The fixes 
span many components, including:
+
+* Hudi Streamer
+* Spark SQL
+* Spark datasource writer
+* Table services
+* Backwards compatible writer
+* Flink engine
+* Unit, functional, integration tests and CI
+
+## Known Regressions
+We have a ComplexKeyGenerator-related regression reported 
[here](release-0.14#known-regressions). Please refrain from migrating if you 
have a single field as the record key and multiple partition fields.
+
+:::tip
+Avoid upgrading any existing table to 1.0.1 if you are using 
ComplexKeyGenerator with single record key configured.
+:::
+
+## Raw Release Notes
+
+The raw release notes are available 
[here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12355195)
 
 :::tip
-Avoid upgrading any existing table to 1.0.0 if any of the above scenario 
matches your workload. Incase of backwards compatible writer for MOR table, you 
are good to upgrade to 1.0.2 release. 
+The 1.0.1 release also contains all the new features and bug fixes from 1.0.0; 
the corresponding release notes are [here](/releases/release-1.0#release-100).
 :::
 
+---
+
+## [Release 1.0.2](https://github.com/apache/hudi/releases/tag/release-1.0.2) 
{#release-102}
+
+## Migration Guide
+
+* This release (1.0.2) does not introduce any new table version, thus no 
migration is needed if you are on 1.0.1.
+* If migrating from an older release, please check the migration guide from 
the previous release notes, specifically
+  the upgrade instructions in [0.6.0](/releases/release-0.6.0),
+  [0.9.0](/releases/release-0.9.0), [0.10.0](/releases/release-0.10.0),
+  [0.11.0](/releases/release-0.11.0), [0.12.0](/releases/release-0.12.0), 
[0.13.0](/releases/release-0.13.0),
+  [0.14.0](/releases/release-0.14#release-0140) and 
[1.0.0](/releases/release-1.0#release-100)
+
+### Bug fixes
+
+The 1.0.2 release primarily focuses on bug fixes, stability enhancements, and 
critical improvements, particularly around migration and backwards 
compatibility. The changes span across various components, including:
+
+* **Metadata Table (MDT):** Numerous fixes and improvements related to 
validation, writing, reading, compaction, indexing (column stats), and 
backwards compatibility (especially for table version 6).
+* **Spark Integration:** Enhancements and fixes for Spark SQL (MERGE INTO, 
query behavior), datasource reader/writer, schema handling, performance, and 
backward compatibility.
+* **Backwards Compatibility:** Significant effort ensuring compatibility with 
older table versions (specifically v6) and smoother upgrades from 0.x versions, 
including dedicated writers/readers.
+* **File Group Reader:** Validation, fixes, and feature completeness 
improvements, including making it default for table version 6.
+* **Flink Engine:** Fixes and improvements related to streamer checkpoints and 
bundle validation.
+* **Compaction and Table Services:** Fixes related to compaction scheduling, 
execution (especially with global index or RLI), archival, and cleanup.
+* **Indexing:** Fixes and enhancements for Column Stats, Record Level Index 
(RLI), and Bloom Filters.
+* **Performance:** Optimizations in areas like log file writing, schema reuse, 
and metadata initialization.
+* **Testing, CI, and Dependencies:** Fixes for flaky tests, improved code 
coverage, bundle validation, dependency cleanup (HBase removal), and extensive 
release testing.
+
+## Known Regressions
+We have a ComplexKeyGenerator-related regression reported 
[here](release-0.14#known-regressions). Please refrain from migrating if you 
have a single field as the record key and multiple partition fields.
+
+:::tip
+Avoid upgrading any existing table to 1.0.2 if you are using 
ComplexKeyGenerator with single record key configured.
+:::
+
+## Raw Release Notes
+
+The raw release notes are available 
[here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12355558)
+
+:::tip
+The 1.0.2 release also contains all the new features and bug fixes from 1.0.1; 
the corresponding release notes are [here](/releases/release-1.0#release-101).
+:::
diff --git a/website/releases/release-1.1.1.md b/website/releases/release-1.1.md
similarity index 98%
rename from website/releases/release-1.1.1.md
rename to website/releases/release-1.1.md
index d2cf4472a75d..11f31edfa3b6 100644
--- a/website/releases/release-1.1.1.md
+++ b/website/releases/release-1.1.md
@@ -1,7 +1,16 @@
 ---
-title: "Release 1.1.1"
+title: "Release 1.1"
 layout: releases
 toc: true
+last_modified_at: 2024-05-02T18:00:00-08:00
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This page contains release notes for all Apache Hudi 1.1.x releases, including:
+
+- [Release 1.1.1](#release-111)
+
 ---
 
 ## [Release 1.1.1](https://github.com/apache/hudi/releases/tag/release-1.1.1)
@@ -369,9 +378,3 @@ Bucket index now supports only UPSERT operations and cannot 
be used with append
 As of this release, Hudi versions prior to 0.14.0 have reached end of life. 
Users on these older versions should plan to upgrade to 1.1.1 or later to 
receive ongoing support, bug fixes, and new features. The Hudi community will 
focus support efforts on versions 0.14.0 and later.
 
 For more details, see the [community 
discussion](https://github.com/apache/hudi/discussions/13847).
-
----
-
-## Contributors
-
-Hudi 1.1.1 is the result of contributions from the entire Hudi community. We 
thank all contributors who made this release possible.
diff --git a/website/sidebarsReleases.js b/website/sidebarsReleases.js
index f2d1177f38e4..cee8f2791901 100644
--- a/website/sidebarsReleases.js
+++ b/website/sidebarsReleases.js
@@ -12,14 +12,9 @@
 module.exports = {
   releases: [
     'download',
-    'release-1.1.1',
-    'release-1.0.2',
-    'release-1.0.1',
-    'release-1.0.0',
-    'release-1.0.0-beta2',
-    'release-1.0.0-beta1',
-    'release-0.15.0',
-    'release-0.14.1',
-    'release-0.14.0',
+    'release-1.1',
+    'release-1.0',
+    'release-0.15',
+    'release-0.14',
   ],
 };
diff --git a/website/src/pages/roadmap.md b/website/src/pages/roadmap.md
index 1f8364326c35..99b051e92fcc 100644
--- a/website/src/pages/roadmap.md
+++ b/website/src/pages/roadmap.md
@@ -11,7 +11,7 @@ down by areas on our [stack](/docs/hudi_stack).
 
 ## Recent Release(s)
 
-[1.1.1](/releases/release-1.1.1) (Dec 2025)
+[1.1.1](/releases/release-1.1#release-111) (Dec 2025)
 
 ## Future Releases
 
diff --git a/website/versioned_docs/version-0.15.0/sql_dml.md 
b/website/versioned_docs/version-0.15.0/sql_dml.md
index 02df525335c5..0f09cb507e2a 100644
--- a/website/versioned_docs/version-0.15.0/sql_dml.md
+++ b/website/versioned_docs/version-0.15.0/sql_dml.md
@@ -397,7 +397,7 @@ This is done to ensure that the compaction and cleaning 
services are not execute
 
 We have introduced the Consistent Hashing Index since [0.13.0 
release](/releases/release-0.13.0#consistent-hashing-index). In comparison to 
the static hashing index ([Bucket 
Index](/releases/release-0.11.0#bucket-index)), the consistent hashing index 
offers dynamic scalability of data buckets for the writer. 
 You can find the 
[RFC](https://github.com/apache/hudi/blob/master/rfc/rfc-42/rfc-42.md) for the 
design of this feature.
-In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support), the index 
is supported for Flink engine.
+In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support), the index is 
supported for Flink engine.
 
 To utilize this feature, configure the option `index.type` as `BUCKET` and set 
`hoodie.index.bucket.engine` to `CONSISTENT_HASHING`.
 When enabling the consistent hashing index, it's important to enable 
clustering scheduling within the writer. During this process, the writer will 
perform dual writes for both the old and new data buckets while the clustering 
is pending. Although the dual write does not impact correctness, it is strongly 
recommended to execute clustering as quickly as possible.
@@ -454,7 +454,7 @@ select * from t1 limit 20;
 ```
 
 :::caution
-Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support) and 
currently there are some limitations to use it as of 0.14.0:
+Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support) and currently 
there are some limitations to use it as of 0.14.0:
 
 - This index is supported only for MOR table. This limitation also exists even 
if using Spark engine.
 - It does not work with metadata table enabled. This limitation also exists 
even if using Spark engine.
diff --git a/website/versioned_docs/version-1.0.0/sql_dml.md 
b/website/versioned_docs/version-1.0.0/sql_dml.md
index 75c4544c023c..c8acd510b323 100644
--- a/website/versioned_docs/version-1.0.0/sql_dml.md
+++ b/website/versioned_docs/version-1.0.0/sql_dml.md
@@ -478,7 +478,7 @@ This is done to ensure that the compaction and cleaning 
services are not execute
 
 We have introduced the Consistent Hashing Index since [0.13.0 
release](/releases/release-0.13.0#consistent-hashing-index). In comparison to 
the static hashing index ([Bucket 
Index](/releases/release-0.11.0#bucket-index)), the consistent hashing index 
offers dynamic scalability of data buckets for the writer. 
 You can find the 
[RFC](https://github.com/apache/hudi/blob/master/rfc/rfc-42/rfc-42.md) for the 
design of this feature.
-In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support), the index 
is supported for Flink engine.
+In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support), the index is 
supported for Flink engine.
 
 To utilize this feature, configure the option `index.type` as `BUCKET` and set 
`hoodie.index.bucket.engine` to `CONSISTENT_HASHING`.
 When enabling the consistent hashing index, it's important to enable 
clustering scheduling within the writer. During this process, the writer will 
perform dual writes for both the old and new data buckets while the clustering 
is pending. Although the dual write does not impact correctness, it is strongly 
recommended to execute clustering as quickly as possible.
@@ -535,7 +535,7 @@ select * from t1 limit 20;
 ```
 
 :::caution
-Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support) and 
currently there are some limitations to use it as of 0.14.0:
+Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support) and currently 
there are some limitations to use it as of 0.14.0:
 
 - This index is supported only for MOR table. This limitation also exists even 
if using Spark engine.
 - It does not work with metadata table enabled. This limitation also exists 
even if using Spark engine.
diff --git a/website/versioned_docs/version-1.0.1/sql_dml.md 
b/website/versioned_docs/version-1.0.1/sql_dml.md
index 71f49884a848..0d965419ddcf 100644
--- a/website/versioned_docs/version-1.0.1/sql_dml.md
+++ b/website/versioned_docs/version-1.0.1/sql_dml.md
@@ -487,7 +487,7 @@ This is done to ensure that the compaction and cleaning 
services are not execute
 
 We have introduced the Consistent Hashing Index since [0.13.0 
release](/releases/release-0.13.0#consistent-hashing-index). In comparison to 
the static hashing index ([Bucket 
Index](/releases/release-0.11.0#bucket-index)), the consistent hashing index 
offers dynamic scalability of data buckets for the writer. 
 You can find the 
[RFC](https://github.com/apache/hudi/blob/master/rfc/rfc-42/rfc-42.md) for the 
design of this feature.
-In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support), the index 
is supported for Flink engine.
+In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support), the index is 
supported for Flink engine.
 
 To utilize this feature, configure the option `index.type` as `BUCKET` and set 
`hoodie.index.bucket.engine` to `CONSISTENT_HASHING`.
 When enabling the consistent hashing index, it's important to enable 
clustering scheduling within the writer. During this process, the writer will 
perform dual writes for both the old and new data buckets while the clustering 
is pending. Although the dual write does not impact correctness, it is strongly 
recommended to execute clustering as quickly as possible.
@@ -544,7 +544,7 @@ select * from t1 limit 20;
 ```
 
 :::caution
-Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support) and 
currently there are some limitations to use it as of 0.14.0:
+Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support) and currently 
there are some limitations to use it as of 0.14.0:
 
 - This index is supported only for MOR table. This limitation also exists even 
if using Spark engine.
 - It does not work with metadata table enabled. This limitation also exists 
even if using Spark engine.
diff --git a/website/versioned_docs/version-1.0.2/sql_dml.md 
b/website/versioned_docs/version-1.0.2/sql_dml.md
index 6b257144287c..cb6fc5350ed5 100644
--- a/website/versioned_docs/version-1.0.2/sql_dml.md
+++ b/website/versioned_docs/version-1.0.2/sql_dml.md
@@ -489,7 +489,7 @@ This is done to ensure that the compaction and cleaning 
services are not execute
 
 We have introduced the Consistent Hashing Index since [0.13.0 
release](/releases/release-0.13.0#consistent-hashing-index). In comparison to 
the static hashing index ([Bucket 
Index](/releases/release-0.11.0#bucket-index)), the consistent hashing index 
offers dynamic scalability of data buckets for the writer. 
 You can find the 
[RFC](https://github.com/apache/hudi/blob/master/rfc/rfc-42/rfc-42.md) for the 
design of this feature.
-In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support), the index 
is supported for Flink engine.
+In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support), the index is 
supported for Flink engine.
 
 To utilize this feature, configure the option `index.type` as `BUCKET` and set 
`hoodie.index.bucket.engine` to `CONSISTENT_HASHING`.
 When enabling the consistent hashing index, it's important to enable 
clustering scheduling within the writer. During this process, the writer will 
perform dual writes for both the old and new data buckets while the clustering 
is pending. Although the dual write does not impact correctness, it is strongly 
recommended to execute clustering as quickly as possible.
@@ -546,7 +546,7 @@ select * from t1 limit 20;
 ```
 
 :::caution
-Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support) and 
currently there are some limitations to use it as of 0.14.0:
+Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support) and currently 
there are some limitations to use it as of 0.14.0:
 
 - This index is supported only for MOR table. This limitation also exists even 
if using Spark engine.
 - It does not work with metadata table enabled. This limitation also exists 
even if using Spark engine.
diff --git a/website/versioned_docs/version-1.1.1/sql_dml.md 
b/website/versioned_docs/version-1.1.1/sql_dml.md
index 81ad924289f0..90f407644386 100644
--- a/website/versioned_docs/version-1.1.1/sql_dml.md
+++ b/website/versioned_docs/version-1.1.1/sql_dml.md
@@ -489,7 +489,7 @@ This is done to ensure that the compaction and cleaning 
services are not execute
 
 We have introduced the Consistent Hashing Bucket Index since [0.13.0 
release](/releases/release-0.13.0#consistent-hashing-index). This is one of 
three [bucket index](indexes.md#additional-writer-side-indexes) variants 
available in Hudi. The consistent hashing bucket index offers dynamic 
scalability of data buckets for the writer. 
 You can find the 
[RFC](https://github.com/apache/hudi/blob/master/rfc/rfc-42/rfc-42.md) for the 
design of this feature.
-In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support), the index 
is supported for Flink engine.
+In the 0.13.X release, the Consistent Hashing Index is supported only for 
Spark engine. And since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support), the index is 
supported for Flink engine.
 
 To utilize this feature, configure the option `index.type` as `BUCKET` and set 
`hoodie.index.bucket.engine` to `CONSISTENT_HASHING`.
 When enabling the consistent hashing index, it's important to enable 
clustering scheduling within the writer. During this process, the writer will 
perform dual writes for both the old and new data buckets while the clustering 
is pending. Although the dual write does not impact correctness, it is strongly 
recommended to execute clustering as quickly as possible.
@@ -546,7 +546,7 @@ select * from t1 limit 20;
 ```
 
 :::caution
-Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14.0#consistent-hashing-index-support) and 
currently there are some limitations to use it as of 0.14.0:
+Consistent Hashing Index is supported for Flink engine since [release 
0.14.0](/releases/release-0.14#consistent-hashing-index-support) and currently 
there are some limitations to use it as of 0.14.0:
 
 - This index is supported only for MOR table. This limitation also exists even 
if using Spark engine.
 - It does not work with metadata table enabled. This limitation also exists 
even if using Spark engine.
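
The consistent hashing setup repeated across the sql_dml.md hunks above can be 
sketched in Flink SQL. This is a minimal illustration and not part of the diff; 
the table name, schema, and path are assumptions, while the `index.type`, 
`hoodie.index.bucket.engine`, and MOR requirements come from the docs text:

```sql
-- Hypothetical Flink SQL table enabling the consistent hashing bucket index.
-- Table name, columns, and path are illustrative only.
CREATE TABLE t1 (
  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
  name VARCHAR(10),
  ts   TIMESTAMP(3)
) WITH (
  'connector' = 'hudi',
  'path' = 'file:///tmp/t1',
  'table.type' = 'MERGE_ON_READ',                    -- index works only for MOR tables
  'index.type' = 'BUCKET',                           -- bucket index, as the docs configure
  'hoodie.index.bucket.engine' = 'CONSISTENT_HASHING'
);
```

Per the caution notes in the hunks, clustering scheduling should also be 
enabled on the writer so pending buckets are resized promptly.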
