This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/fluss.git


The following commit(s) were added to refs/heads/main by this push:
     new 4408d2c2d [hotfix][docs] Revert documentation link changes to remove "next" version references
4408d2c2d is described below

commit 4408d2c2d73b7a8bba5e0cd693b383bea2abebec
Author: Jark Wu <[email protected]>
AuthorDate: Tue Feb 3 17:32:10 2026 +0800

    [hotfix][docs] Revert documentation link changes to remove "next" version references
---
 website/blog/2024-11-29-fluss-open-source.md             |  2 +-
 website/blog/2025-06-01-partial-updates.md               |  4 ++--
 website/blog/releases/0.6.md                             |  2 +-
 website/blog/releases/0.7.md                             |  8 ++++----
 website/blog/releases/0.8.md                             | 16 ++++++++--------
 website/docs/maintenance/operations/rebalance.md         |  2 +-
 .../docs/maintenance/tiered-storage/lakehouse-storage.md |  2 +-
 website/src/pages/index.tsx                              |  2 +-
 8 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/website/blog/2024-11-29-fluss-open-source.md b/website/blog/2024-11-29-fluss-open-source.md
index 42e3df116..3e3effb41 100644
--- a/website/blog/2024-11-29-fluss-open-source.md
+++ b/website/blog/2024-11-29-fluss-open-source.md
@@ -40,7 +40,7 @@ Make sure to keep an eye on the project, give it a try and if you like it, don
 
 ### Getting Started
 - Visit the [GitHub repository](https://github.com/apache/fluss).
-- Check out the [quickstart guide](/docs/next/quickstart/flink/).
+- Check out the [quickstart guide](/docs/quickstart/flink/).
 
 ### Additional Resources
 - Announcement Blog Post: [Introducing Fluss: Unified Streaming Storage For Next-Generation Data Analytics](https://www.ververica.com/blog/introducing-fluss)
diff --git a/website/blog/2025-06-01-partial-updates.md b/website/blog/2025-06-01-partial-updates.md
index 29d60b8c8..55317235e 100644
--- a/website/blog/2025-06-01-partial-updates.md
+++ b/website/blog/2025-06-01-partial-updates.md
@@ -265,7 +265,7 @@ Flink SQL> SELECT * FROM user_rec_wide;
 
 Now let's switch to `batch` mode and query the current snapshot of the `user_rec_wide` table.
 
-But before that, let's start the [Tiering Service](/docs/next/maintenance/tiered-storage/lakehouse-storage/#start-the-datalake-tiering-service) that allows offloading the tables as `Lakehouse` tables.
+But before that, let's start the [Tiering Service](/docs/maintenance/tiered-storage/lakehouse-storage/#start-the-datalake-tiering-service) that allows offloading the tables as `Lakehouse` tables.
 
 **Step 7:** Open a new terminal 💻 in the `Coordinator Server` and run the following command to start the `Tiering Service`:
 ```shell
@@ -297,7 +297,7 @@ Flink SQL> SELECT * FROM user_rec_wide;
 ### Conclusion
 Partial updates in Fluss enable an alternative approach in how we design streaming data pipelines for enriching or joining data.
 
-When all your sources share a primary key - otherwise you can mix & match [streaming lookup joins](/docs/next/engine-flink/lookups/#lookup) - you can turn the problem on its head: update a unified table incrementally, rather than joining streams on the fly.
+When all your sources share a primary key - otherwise you can mix & match [streaming lookup joins](/docs/engine-flink/lookups/#lookup) - you can turn the problem on its head: update a unified table incrementally, rather than joining streams on the fly.
 
 The result is a more scalable, maintainable, and efficient pipeline. 
 Engineers can spend less time wrestling with Flink’s state, checkpoints and join mechanics, and more time delivering fresh, integrated data to power real-time analytics and applications.
diff --git a/website/blog/releases/0.6.md b/website/blog/releases/0.6.md
index 46b1a5624..287b5cb37 100644
--- a/website/blog/releases/0.6.md
+++ b/website/blog/releases/0.6.md
@@ -184,7 +184,7 @@ SELECT * FROM fluss_left_table INNER JOIN fluss_right_table
 Flink performs lookups on Fluss tables using the Join Key, which serves as the Bucket Key for the Fluss table.
 This allows it to leverage the prefix index of the primary key in the Fluss table, enabling highly efficient lookup queries.
 This feature in Fluss is referred to as Prefix Lookup. Currently, Prefix Lookup can also be used to perform one-to-many lookup queries.
-For more details, please refer to the [Prefix Lookup](/docs/next/engine-flink/lookups/#prefix-lookup) documentation.
+For more details, please refer to the [Prefix Lookup](/docs/engine-flink/lookups/#prefix-lookup) documentation.
 
 ## Stability & Performance Improvements
 
diff --git a/website/blog/releases/0.7.md b/website/blog/releases/0.7.md
index 74b3ba169..a8ebf1bc0 100644
--- a/website/blog/releases/0.7.md
+++ b/website/blog/releases/0.7.md
@@ -79,7 +79,7 @@ flink run /path/to/fluss-flink-tiering-0.7.0.jar \
     --datalake.paimon.warehouse /path/to/warehouse
 ```
 
-See more details in the [Streaming Lakehouse documentation](/docs/next/maintenance/tiered-storage/lakehouse-storage/).
+See more details in the [Streaming Lakehouse documentation](/docs/maintenance/tiered-storage/lakehouse-storage/).
 
 ## Streaming Partition Pruning
 Partitioning is a foundational technique in modern data warehouses and Lakehouse architectures for optimizing query performance by
@@ -131,7 +131,7 @@ CALL admin_catalog.sys.add_acl(
 );
 ```
 
-For details, please refer to the [Security documentation](/docs/next/security/overview/) and quickstarts.
+For details, please refer to the [Security documentation](/docs/security/overview/) and quickstarts.
 
 ## Flink DataStream Connector
 Fluss 0.7 officially introduces the DataStream Connector, supporting both Source and Sink for reading and writing log and primary key tables. Users can now seamlessly integrate Fluss tables into Flink DataStream pipelines.
@@ -155,7 +155,7 @@ DataStreamSource<Order> stream = env.fromSource(
 );
 ```
 
-For usage examples and configuration parameters, see the [DataStream Connector documentation](/docs/next/engine-flink/datastream/).
+For usage examples and configuration parameters, see the [DataStream Connector documentation](/docs/engine-flink/datastream/).
 
 
 ## Fluss Java Client
@@ -164,7 +164,7 @@ In this version, we officially release the Fluss Java Client, a client library d
 * **Table API:** For table-based data operations, supporting streaming reads/writes, updates, deletions, and point queries.
 * **Admin API:** For metadata management, including cluster management, table lifecycle, and access control.
 
-The client supports forward and backward compatibility, ensuring smooth upgrades across Fluss versions. With the Fluss Java Client, developers can build online applications and data ingestion services based on Fluss, as well as enterprise-level components such as Fluss management platforms and operations monitoring systems. For detailed usage instructions, please refer to the official documentation: [Fluss Java Client User Guide](/docs/next/apis/java-client/).
+The client supports forward and backward compatibility, ensuring smooth upgrades across Fluss versions. With the Fluss Java Client, developers can build online applications and data ingestion services based on Fluss, as well as enterprise-level components such as Fluss management platforms and operations monitoring systems. For detailed usage instructions, please refer to the official documentation: [Fluss Java Client User Guide](/docs/apis/java-client/).
 
 Fluss uses Apache Arrow as its underlying storage format, enabling efficient cross-language extensions. A **Fluss Python Client** is planned for future releases, leveraging the rich ecosystem of **PyArrow** to integrate with popular data analysis tools such as **Pandas** and **DuckDB**.
 This will further lower the barrier for real-time data exploration and analytics.
diff --git a/website/blog/releases/0.8.md b/website/blog/releases/0.8.md
index 963ab6749..c302685af 100644
--- a/website/blog/releases/0.8.md
+++ b/website/blog/releases/0.8.md
@@ -51,7 +51,7 @@ datalake.iceberg.type: hadoop
 datalake.iceberg.warehouse: /path/to/iceberg
 ```
 
-You can find more detailed instructions in the [Iceberg Lakehouse documentation](/docs/next/streaming-lakehouse/integrate-data-lakes/iceberg/).
+You can find more detailed instructions in the [Iceberg Lakehouse documentation](/docs/streaming-lakehouse/integrate-data-lakes/iceberg/).
 
 ## Real-Time Multimodal AI Analytics with Lance
 
@@ -80,7 +80,7 @@ datalake.lance.access_key_id: <access_key_id>
 datalake.lance.secret_access_key: <secret_access_key>
 ```
 
-See the [LanceDB blog post](https://lancedb.com/blog/fluss-integration/) for the full integration. You also can find more detailed instructions in the [Lance Lakehouse documentation](/docs/next/streaming-lakehouse/integrate-data-lakes/lance/).
+See the [LanceDB blog post](https://lancedb.com/blog/fluss-integration/) for the full integration. You also can find more detailed instructions in the [Lance Lakehouse documentation](/docs/streaming-lakehouse/integrate-data-lakes/lance/).
 
 ## Flink 2.1
 
@@ -102,7 +102,7 @@ Below is a performance comparison (CPU, memory, state size, checkpoint interval)
 ![](../assets/taobao_practice/performance_delta2.png)
 
 
-You can find more detailed instructions in the [Delta Join documentation](/docs/next/engine-flink/delta-joins/).
+You can find more detailed instructions in the [Delta Join documentation](/docs/engine-flink/delta-joins/).
 
 ### Materialized Table
 
@@ -135,7 +135,7 @@ WITH(
 );
 ```
 
-You can find more detailed instructions in the [Materialized Table documentation](/docs/next/engine-flink/ddl/#materialized-table).
+You can find more detailed instructions in the [Materialized Table documentation](/docs/engine-flink/ddl/#materialized-table).
 
 ## Stability
 
@@ -144,7 +144,7 @@ Through continuous validation across multiple business units within Alibaba Grou
 These improvements substantially enhance Fluss’s robustness in mission-critical streaming use cases.
 
 Key improvements include:
-- **[Graceful Shutdown](/docs/next/maintenance/operations/graceful-shutdown/)**: Fluss supports cluster rolling upgrade, and we introduced a graceful shutdown mechanism for TabletServers in this version. During shutdown, leadership is proactively migrated before termination, ensuring that read/write latency remains unaffected during rolling upgrades.
+- **[Graceful Shutdown](/docs/maintenance/operations/graceful-shutdown/)**: Fluss supports cluster rolling upgrade, and we introduced a graceful shutdown mechanism for TabletServers in this version. During shutdown, leadership is proactively migrated before termination, ensuring that read/write latency remains unaffected during rolling upgrades.
 - **Accelerated Coordinator Event Processing**: Optimized the Coordinator’s event handling mechanism through asynchronous processing and batched ZooKeeper operations. As a result, all events are now processed in milliseconds.
 - **Faster Coordinator Recovery**: Parallelized initialization cuts Coordinator startup time from 10 minutes to just 20 seconds in production-scale benchmarks, this dramatically improves service availability and recovery speed.
 - **Optimized Server Metrics**: Refined metric granularity and reporting logic to reduce telemetry volume by 90% while preserving full observability.
@@ -180,7 +180,7 @@ When you issue a `ALTER TABLE ... SET` command to update storage options on a ta
 
 This capability is especially useful for tuning performance, adapting to changing data patterns, or complying with evolving data governance requirements—all without service interruption.
 
-You can find more detailed instructions in the [Updating Configs documentation](/docs/next/maintenance/operations/updating-configs/).
+You can find more detailed instructions in the [Updating Configs documentation](/docs/maintenance/operations/updating-configs/).
 
 ## Helm Charts
 
@@ -188,7 +188,7 @@ This release also introduced Helm Charts. With this addition, users can now depl
 The Helm chart simplifies provisioning, upgrades, and scaling by packaging configuration, manifests, and dependencies into a single, versioned release.
 This should help users running Fluss on Kubernetes faster, more reliably, and with easier integration into existing CI/CD and observability setups, significantly lowering the barrier for teams adopting Fluss in production.
 
-You can find more detailed instructions in the [Deploying with Helm documentation](/docs/next/install-deploy/deploying-with-helm/).
+You can find more detailed instructions in the [Deploying with Helm documentation](/docs/install-deploy/deploying-with-helm/).
 
 ## Java Version Upgrade
 
@@ -214,7 +214,7 @@ The Fluss community is committed to delivering a smooth upgrade experience. This
 - Clients from version 0.7 can seamlessly connect to version 0.8 servers,
 - Clients from version 0.8 are also compatible with version 0.7 servers.
 
-However, Fluss 0.8 is the first official release since the project entered the Apache Incubator, and it includes changes such as package path updates (e.g., groupId and Java package names). As a result, applications that depend on the Fluss SDK will need to make corresponding code adjustments when upgrading to version 0.8. Please refer to the [upgrade notes](/docs/next/maintenance/operations/upgrade-notes-0.8/) for a comprehensive list of adjustments to make and issues to check during th [...]
+However, Fluss 0.8 is the first official release since the project entered the Apache Incubator, and it includes changes such as package path updates (e.g., groupId and Java package names). As a result, applications that depend on the Fluss SDK will need to make corresponding code adjustments when upgrading to version 0.8. Please refer to the [upgrade notes](/docs/maintenance/operations/upgrade-notes-0.8/) for a comprehensive list of adjustments to make and issues to check during the upg [...]
 
 For a detailed list of all changes in this release, please refer to the [release notes](https://github.com/apache/fluss/releases/tag/v0.8.0-incubating).
 
diff --git a/website/docs/maintenance/operations/rebalance.md b/website/docs/maintenance/operations/rebalance.md
index eafb19827..48d89ac40 100644
--- a/website/docs/maintenance/operations/rebalance.md
+++ b/website/docs/maintenance/operations/rebalance.md
@@ -210,7 +210,7 @@ public class RebalanceExample {
 
 ## Using Flink Stored Procedures
 
-For rebalancing operations, Fluss provides convenient Flink stored procedures that can be called directly from Flink SQL. See [Rebalance Procedures](../../../engine-flink/procedures#rebalance-procedures) for detailed documentation on using the following procedures:
+For rebalancing operations, Fluss provides convenient Flink stored procedures that can be called directly from Flink SQL. See [Rebalance Procedures](/docs/engine-flink/procedures.md#rebalance-procedures) for detailed documentation on using the following procedures:
 
 - **add_server_tag**: Tag servers before rebalancing
 - **remove_server_tag**: Remove tags after rebalancing
diff --git a/website/docs/maintenance/tiered-storage/lakehouse-storage.md b/website/docs/maintenance/tiered-storage/lakehouse-storage.md
index 89237c31c..35b2394d7 100644
--- a/website/docs/maintenance/tiered-storage/lakehouse-storage.md
+++ b/website/docs/maintenance/tiered-storage/lakehouse-storage.md
@@ -51,7 +51,7 @@ For example:
 - If you are using Paimon filesystem catalog with OSS filesystem, you need to put `paimon-oss-<paimon_version>.jar` into directory `${FLUSS_HOME}/plugins/paimon/`.
 - If you are using Paimon Hive catalog, you need to put [the flink sql hive connector jar](https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/table/hive/overview/#using-bundled-hive-jar) into directory `${FLUSS_HOME}/plugins/paimon/`.
 
-Additionally, when using Paimon with HDFS, you must also configure the Fluss server with the Hadoop environment. See the [HDFS setup guide](../../filesystems/hdfs) for detailed instructions.
+Additionally, when using Paimon with HDFS, you must also configure the Fluss server with the Hadoop environment. See the [HDFS setup guide](/docs/maintenance/filesystems/hdfs.md) for detailed instructions.
 
 ### Start The Datalake Tiering Service
 Then, you must start the datalake tiering service to tier Fluss's data to the lakehouse storage.
diff --git a/website/src/pages/index.tsx b/website/src/pages/index.tsx
index 5cf202b53..06aa2fcfc 100644
--- a/website/src/pages/index.tsx
+++ b/website/src/pages/index.tsx
@@ -38,7 +38,7 @@ function HomepageHeader() {
                 <div className={styles.buttons}>
                     <Link
                         className={clsx("hero_button button button--primary button--lg", styles.buttonWidth)}
-                        to="/docs/next/quickstart/flink">
+                        to="/docs/quickstart/flink">
                         Quick Start
                     </Link>
 
