This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new c1dd2266f60 [doc](file-cache) Best Practices for Cache Optimization in 
Read-Write Splitting Scenarios (#2934)
c1dd2266f60 is described below

commit c1dd2266f6022957d0f168b0685f7b7b734d5134
Author: bobhan1 <[email protected]>
AuthorDate: Tue Sep 30 15:39:54 2025 +0800

    [doc](file-cache) Best Practices for Cache Optimization in Read-Write 
Splitting Scenarios (#2934)
    
    modified based on https://github.com/apache/doris-website/pull/2745
    
    for https://github.com/apache/doris/pull/53540
    ## Versions
    
    - [x] dev
    - [ ] 3.0
    - [ ] 2.1
    - [ ] 2.0
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
    
    ---------
    
    Co-authored-by: Chen Zhang <[email protected]>
---
 .../file-cache-rw-compute-group-best-practice.md   | 171 +++++++++++++++++++++
 .../file-cache-rw-compute-group-best-practice.md   | 164 ++++++++++++++++++++
 sidebars.json                                      |   1 +
 3 files changed, 336 insertions(+)

diff --git 
a/docs/compute-storage-decoupled/file-cache-rw-compute-group-best-practice.md 
b/docs/compute-storage-decoupled/file-cache-rw-compute-group-best-practice.md
new file mode 100644
index 00000000000..b995e207f72
--- /dev/null
+++ 
b/docs/compute-storage-decoupled/file-cache-rw-compute-group-best-practice.md
@@ -0,0 +1,171 @@
+---
+{
+    "title": "Best Practices for Cache Optimization in Read-Write Splitting 
Scenarios",
+    "language": "en"
+}
+---
+
+When using Apache Doris's storage-compute separation architecture, especially 
in scenarios where multiple Compute Groups are deployed to implement read-write 
splitting, query performance is highly dependent on the File Cache hit rate. 
When a read-only compute group experiences a cache miss, it needs to pull data 
from remote object storage, leading to a significant increase in query latency.
+
+This document aims to detail how to effectively reduce cache miss issues 
caused by common scenarios such as **Compaction**, **Data Ingestion**, and 
**Schema Change** through cache warm-up and related configurations, thereby 
ensuring the query performance stability of the read-only cluster.
+
+## Core Issue: Cache Invalidation Caused by New Data Versions (Rowsets)
+
+In Doris, both background processes like Compaction / Schema Change and 
foreground data ingestion will generate new sets of data files (Rowsets). On 
the nodes of the write-only compute group responsible for writes, this data is 
written to the local File Cache by default, so its query performance is not 
affected.
+
+However, for a read-only compute group, when it synchronizes metadata and 
becomes aware of these new Rowsets, its local cache does not contain this new 
data. If a query then needs to access these new Rowsets, it will trigger a 
cache miss, leading to a performance degradation.
+
+To solve this problem, the core idea is: **to load data into the read-only 
compute group's cache in advance or intelligently before it is queried.**
+
+## I. Overview of Cache Warm-up Mechanisms
+
+Cache warm-up is the process of proactively loading data from remote storage into the File Cache of BE nodes. Doris provides the following two main warm-up methods:
+
+### 1. Proactive Incremental Warm-up (Recommended)
+
+This is a more intelligent and automated mechanism. It establishes a warm-up 
relationship between the write compute group and the read-only compute group. 
When a write/Compaction event generates a new Rowset, it actively notifies and 
triggers the associated read-only compute group to perform an asynchronous 
cache warm-up.
+
+**Applicable Scenarios:**
+
+- Most scenarios.
+- Requires user permission to configure warm-up relationships.
+
+> **[Documentation Link]**: For detailed information on how to configure and 
use proactive incremental warm-up, please refer to the official documentation 
**[FileCache Proactive Incremental Warm-up](./read-write-splitting.md)**.
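+
+As an illustration only (the exact statement syntax lives in the linked document, and `reader_group`, `writer_group`, and the table name below are hypothetical placeholders), a warm-up relationship is typically established and monitored with `WARM UP` statements along these lines:
+
+```sql
+-- One-off warm-up: pull the writer group's cached data into the reader group
+WARM UP COMPUTE GROUP reader_group WITH COMPUTE GROUP writer_group;
+
+-- Event-driven incremental warm-up on a single table, so that new Rowsets
+-- produced by loads / compaction are pushed to the reader group's cache
+WARM UP COMPUTE GROUP reader_group WITH TABLE sales_db.orders
+PROPERTIES ("sync_mode" = "event_driven");
+
+-- Inspect warm-up job progress
+SHOW WARM UP JOB;
+```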
+
+### 2. Read-Only Compute Group Automatic Warm-up
+
+This is a lightweight, automatic warm-up strategy. By enabling a configuration 
on the BE nodes of the **read-only compute group**, it automatically triggers 
an asynchronous warm-up task when it perceives a new Rowset.
+
+**Applicable Scenarios:**
+
+- The user does not have permission to configure warm-up relationships.
+- Non-MoW (Merge-on-Write) tables are in use.
+
+**Core Configuration:** In the `be.conf` of the read-only compute group, set:
+
+```
+enable_warmup_immediately_on_new_rowset = true
+```
+
+## II. Optimizing the Impact of Compaction / Schema Change on Query Performance
+
+Background Compaction merges old Rowsets and generates new ones. If the new 
Rowsets are not warmed up, the query performance of the read-only compute group 
will fluctuate due to cache misses. The following are two recommended solutions.
+
+### Solution 1: Proactive Incremental Warm-up + Delayed Commit (Recommended)
+
+This solution can **fundamentally prevent** the read-only compute group from 
querying new Rowsets generated by Compaction / Schema Change that have not yet 
been cached.
+
+**How it Works:**
+
+1. First, configure the **proactive incremental warm-up** relationship between 
the write compute group and the read-only compute group.
+2. On the BE nodes of the **write compute group**, enable the delayed commit 
feature for Compaction / Schema Change.
+
+**Core Configuration (Write Compute Group `be.conf`):**
+
+```
+enable_compaction_delay_commit_for_warm_up = true
+```
+
+3. **Workflow:**
+   1. A Compaction / Schema Change task completes on the write compute group 
and generates a new Rowset.
+   2. At this point, the Rowset is **not immediately committed** (i.e., it is 
not visible to the read-only compute group).
+   3. The system triggers the associated read-only compute group to warm up 
the cache for this new Rowset.
+   4. Only after all associated read-only compute groups have completed the 
warm-up, the new Rowset is finally committed and becomes visible to all compute 
groups.
+
+**Advantages:**
+
+- **Seamless Switching**: For the read-only compute group, all visible data 
post-Compaction is already in the cache, so query performance does not 
fluctuate.
+- **High Stability**: This is the most robust solution for ensuring query 
performance in read-write splitting scenarios.
+
+### Solution 2: Read-Only Compute Group Automatic Warm-up + Query Awareness
+
+This solution uses intelligent selection at the query layer to **try to skip** 
new Rowsets that have not yet been warmed up (Note: For Unique Key MoW tables, 
Rowsets from compaction cannot be skipped to ensure correctness).
+
+**How it Works:**
+
+1. On the BE nodes of the **read-only compute group**, enable automatic 
warm-up.
+
+**Core Configuration (Read-Only Compute Group `be.conf`):**
+
+```
+enable_warmup_immediately_on_new_rowset = true
+```
+
+2. During a query, enable the "prefer cached" Rowset selection strategy via a session variable or user property.
+
+**Set in the query session:**
+
+```sql
+SET enable_prefer_cached_rowset = true;
+```
+
+**Or set as a user property:**
+```sql
+SET property for "jack" enable_prefer_cached_rowset = true;
+```
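+
+To quickly confirm the setting took effect, a sketch (the user `jack` is from the example above, and the exact property key under which per-user session variables surface may vary by Doris version):
+
+```sql
+-- Session-level check
+SHOW VARIABLES LIKE 'enable_prefer_cached_rowset';
+-- User-property check
+SHOW PROPERTY FOR 'jack' LIKE '%prefer_cached_rowset%';
+```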
+
+3. **Workflow:**
+   1. When the read-only compute group perceives a new Rowset from Compaction, 
it asynchronously triggers a warm-up task.
+   2. With `enable_prefer_cached_rowset` enabled, the query planner, when 
selecting Rowsets to read, will prioritize those that are **already warmed up**.
+   3. It will automatically ignore new Rowsets that are still being warmed up, 
provided that this does not affect data consistency (i.e., the old Rowsets 
before the merge are still accessible).
+
+**Advantages:**
+
+- Relatively simple to configure, without needing to set up 
cross-compute-group warm-up relationships.
+- Effectively reduces performance impact in most cases.
+
+**Note:**
+
+> This solution is a "best-effort" strategy. If the old Rowsets corresponding 
to a new Rowset have already been cleaned up, or if the query must access the 
latest data version, the query will still have to wait for the warm-up to 
complete or access the cold data directly.
+
+## III. Optimizing the Impact of Data Ingestion on Query Performance
+
+High-frequency data ingestion (like `INSERT INTO`, `Stream Load`) continuously 
produces new small files (Rowsets), which also causes cache miss problems for 
the read-only compute group. If your business can tolerate data latency of 
seconds or even sub-seconds, you can adopt the following combined strategy to 
trade a tiny amount of "freshness" for a huge performance gain.
+
+**How it Works:** This strategy combines **automatic warm-up** with a **query 
freshness tolerance** setting, allowing the query planner to intelligently skip 
the latest data that has not been warmed up within a specified time window.
+
+**Implementation Steps:**
+
+1. **Enable a Warm-up Mechanism**:
+
+   1. Enable either **Proactive Incremental Warm-up** or **Read-Only Compute Group Automatic Warm-up** (`enable_warmup_immediately_on_new_rowset = true`) on the read-only compute group. This is the prerequisite for data to be loaded into the cache asynchronously.
+
+2. **Set Query Freshness Tolerance**:
+
+   1. In the query session of the read-only compute group, set the 
`query_freshness_tolerance_ms` variable.
+
+   2. **Set in the query session:**
+
+      ```sql
+      -- Set a tolerance for 1000 milliseconds (1 second) of data latency
+      SET query_freshness_tolerance_ms = 1000;
+      ```
+
+      **Or set as a user property:**
+      ```sql
+      SET property for "jack" query_freshness_tolerance_ms = 1000;
+      ```
+
+**Workflow:**
+
+- When a query starts, it checks the Rowsets it needs to access.
+- If a Rowset was generated within the **last 1000ms** and is **not yet warmed 
up**, the query planner will automatically skip it and access older, but 
already cached, data instead.
+- This way, the vast majority of queries will hit the cache, avoiding the 
performance degradation caused by reading the latest, cold data from recent 
writes.
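+
+Putting the session-level knobs together, a read-only session tuned for this strategy sets both variables introduced earlier (the values are illustrative, not recommendations):
+
+```sql
+-- Prefer already-cached Rowsets whenever correctness allows it
+SET enable_prefer_cached_rowset = true;
+-- Let queries skip not-yet-warmed data that is at most 1 second old
+SET query_freshness_tolerance_ms = 1000;
+```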
+
+**Fallback Mechanism:**
+
+> If the warm-up process for a Rowset is very slow and exceeds the time set by 
`query_freshness_tolerance_ms` (e.g., still not finished after 1000ms), the 
query will no longer skip it to ensure eventual data visibility. It will fall 
back to the default behavior: read the cold data directly.
+
+**Advantages:**
+
+- **Significant Performance Improvement**: Effectively eliminates query 
performance spikes in high-throughput write scenarios.
+- **High Flexibility**: Users can make a flexible trade-off between data 
freshness and query performance based on their business needs.
+
+## Summary and Recommendations
+
+| Solution | Applicable Scenarios | Expected Effect (impact of each write operation on cache hit rate) |
+| --- | --- | --- |
+| Proactive incremental warm-up + delayed commit + data freshness tolerance (optional) | Scenarios with very strict query latency requirements; requires permission to configure warm-up relationships | Compaction: none <br> Heavyweight schema change: none <br> Newly written data: depends on the freshness tolerance |
+| Read-only compute group automatic warm-up + prefer cached Rowsets + data freshness tolerance (optional) | The user has no permission to configure warm-up relationships <br> Without a freshness tolerance, ineffective for MoW primary key tables | Compaction: none <br> Heavyweight schema change: cache miss <br> Newly written data: depends on the freshness tolerance |
+
+By reasonably applying the above cache warm-up strategies and related 
configurations, you can effectively manage the cache behavior of Apache Doris 
in a read-write splitting architecture, minimize performance loss due to cache 
misses, and ensure the stability and efficiency of your read-only query 
services.
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/file-cache-rw-compute-group-best-practice.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/file-cache-rw-compute-group-best-practice.md
new file mode 100644
index 00000000000..56dfe91cb93
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/file-cache-rw-compute-group-best-practice.md
@@ -0,0 +1,164 @@
+---
+{
+    "title": "Best Practices for Cache Optimization in Read-Write Splitting Scenarios",
+    "language": "zh-CN"
+}
+---
+
+When using Apache Doris's storage-compute separation architecture, especially when multiple Compute Groups are deployed for read-write splitting, query performance depends heavily on the File Cache hit rate. When a read-only compute group misses the cache, it must pull data from remote object storage, which significantly increases query latency.
+
+This document explains how to use cache warm-up and related configuration to effectively reduce cache misses caused by common scenarios such as **Compaction**, **Data Ingestion**, and **Schema Change**, thereby keeping query performance on the read-only cluster stable.
+
+## Core Issue: Cache Invalidation Caused by New Data Versions (Rowsets)
+
+In Doris, both background Compaction / Schema Change and foreground data ingestion generate new sets of data files (Rowsets). On the nodes of the compute group responsible for writes, this data is written to the local File Cache by default, so that group's query performance is unaffected.
+
+For a read-only compute group, however, when it synchronizes metadata and learns of these new Rowsets, its local cache does not yet contain the new data. A query that touches these Rowsets then triggers a cache miss and performance degrades.
+
+The core idea for solving this problem is: **load data into the read-only compute group's cache ahead of time, or intelligently, before it is queried.**
+
+## I. Overview of Cache Warm-up Mechanisms
+
+Cache warm-up is the process of proactively loading data from remote storage into the File Cache of BE nodes. Doris provides the following two main warm-up methods:
+
+### 1. Proactive Incremental Warm-up (Recommended)
+
+This is a more intelligent and automated mechanism. It establishes a warm-up relationship between the write compute group and the read-only compute group; when a write or Compaction event produces a new Rowset, the associated read-only compute group is proactively notified and triggered to perform an asynchronous cache warm-up.
+
+**Applicable Scenarios:**
+
+- Most scenarios.
+- The user has permission to configure warm-up relationships.
+
+> **[Documentation Link]**: For details on configuring and using proactive incremental warm-up, see **[FileCache Proactive Incremental Warm-up](./read-write-splitting.md)**.
+
+### 2. Read-Only Compute Group Automatic Warm-up
+
+This is a lightweight, automatic warm-up strategy. Enabling a configuration option on the BE nodes of the **read-only compute group** makes it automatically trigger an asynchronous warm-up task whenever it perceives a new Rowset.
+
+**Applicable Scenarios:**
+
+- The user does not have permission to configure warm-up relationships.
+- Non-MoW tables are in use.
+
+**Core Configuration:** In the `be.conf` of the read-only compute group, set:
+
+```
+enable_warmup_immediately_on_new_rowset = true
+```
+
+## II. Optimizing the Impact of Compaction / Schema Change on Query Performance
+
+Background Compaction merges old Rowsets and generates new ones. If the new Rowsets are not warmed up, query performance on the read-only compute group will fluctuate due to cache misses. Two solutions are recommended.
+
+### Solution 1: Proactive Incremental Warm-up + Delayed Commit (Recommended)
+
+This solution **fundamentally prevents** the read-only compute group from querying new Rowsets generated by Compaction / Schema Change that have not yet been cached.
+
+**How it Works:**
+
+1. First, configure the **proactive incremental warm-up** relationship between the write compute group and the read-only compute group.
+2. On the BE nodes of the **write compute group**, enable delayed commit for Compaction / Schema Change.
+
+**Core Configuration (Write Compute Group `be.conf`):**
+
+```
+enable_compaction_delay_commit_for_warm_up = true
+```
+
+3. **Workflow:**
+   1. A Compaction / Schema Change task completes on the write compute group and generates a new Rowset.
+   2. At this point, the Rowset is **not committed immediately** (i.e., it is not visible to the read-only compute group).
+   3. The system triggers the associated read-only compute group to warm up the cache for this new Rowset.
+   4. Only after all associated read-only compute groups have finished warming up is the new Rowset finally committed and made visible to all compute groups.
+
+**Advantages:**
+
+- **Seamless switching**: for the read-only compute group, all data visible after Compaction is already in the cache, so query performance does not fluctuate.
+- **High stability**: this is the most robust solution for guaranteeing query performance in read-write splitting scenarios.
+
+### Solution 2: Read-Only Compute Group Automatic Warm-up + Query Awareness
+
+This solution makes intelligent choices at the query layer to **skip, where possible,** new Rowsets that have not finished warming up (for Unique Key MoW tables, Rowsets produced by compaction cannot be skipped, for correctness reasons).
+
+**How it Works:**
+
+1. On the BE nodes of the **read-only compute group**, enable automatic warm-up.
+
+**Core Configuration (Read-Only Compute Group `be.conf`):**
+
+```
+enable_warmup_immediately_on_new_rowset = true
+```
+
+2. At query time, enable the "prefer cached" Rowset selection strategy via a session variable or user property.
+
+**Set in the query session:**
+
+```sql
+SET enable_prefer_cached_rowset = true;
+```
+
+**Or set as a user property:**
+
+```sql
+SET property for "jack" enable_prefer_cached_rowset = true;
+```
+
+3. **Workflow:**
+   1. When the read-only compute group perceives a new Rowset produced by Compaction, it asynchronously triggers a warm-up task.
+   2. With `enable_prefer_cached_rowset` enabled, the query executor prefers versions that have **already finished warming up** when choosing which Rowsets to read.
+   3. It automatically ignores new Rowsets that are still warming up, provided this does not affect data consistency (i.e., the old pre-merge Rowsets are still accessible).
+
+**Advantages:**
+
+- Relatively simple to configure; no cross-compute-group warm-up relationship is needed.
+- Effectively reduces the performance impact in most cases.
+
+**Note:**
+
+> This is a "best-effort" strategy. If the old Rowsets corresponding to a new Rowset have already been cleaned up, or if the query must access the latest data version, the query still has to wait for the warm-up to finish or read the cold data directly.
+
+## III. Optimizing the Impact of Data Ingestion on Query Performance
+
+High-frequency data ingestion (e.g., `INSERT INTO`, `Stream Load`) continuously produces new small files (Rowsets), which likewise causes cache misses on the read-only compute group. If your business can tolerate second-level or even sub-second data latency, you can adopt the following combined strategy, trading a tiny amount of "freshness" for a large performance gain.
+
+**How it Works:** This strategy combines **automatic warm-up** with a **query-time freshness tolerance** setting, letting the query executor intelligently skip the newest data that has not finished warming up within a specified time window.
+
+**Implementation Steps:**
+
+1. **Enable a warm-up mechanism**:
+   1. Enable either **proactive incremental warm-up** or **read-only compute group automatic warm-up** (`enable_warmup_immediately_on_new_rowset = true`) on the read-only compute group. This is the prerequisite for data to be loaded into the cache asynchronously.
+2. **Set the query freshness tolerance**:
+   1. In the query session or user properties of the read-only compute group, set the `query_freshness_tolerance_ms` variable.
+   2. **Set in the query session:**
+      ```sql
+      -- Tolerate 1000 milliseconds (1 second) of data latency
+      SET query_freshness_tolerance_ms = 1000;
+      ```
+      **Or set as a user property:**
+      ```sql
+      SET property for "jack" query_freshness_tolerance_ms = 1000;
+      ```
+
+**Workflow:**
+
+- When a query starts executing, it inspects the Rowsets it needs to access.
+- If a Rowset was generated within the **last 1000 ms** and has **not finished warming up**, the query executor automatically skips it and reads older, already-cached data instead.
+- As a result, the vast majority of queries hit the cache, avoiding the performance drop caused by reading freshly written cold data.
+
+**Fallback Mechanism:**
+
+> If warming up a Rowset is very slow and exceeds the time set by `query_freshness_tolerance_ms` (e.g., still unfinished after 1000 ms), the query no longer skips it, to guarantee eventual data visibility; it falls back to the default behavior of reading the cold data directly.
+
+**Advantages:**
+
+- **Significant performance gains**: effectively eliminates query latency spikes in high-throughput write scenarios.
+- **High flexibility**: users can flexibly trade off data freshness against query performance according to business needs.
+
+## Summary and Recommendations
+
+| Solution | Applicable Scenarios | Expected Effect (impact of each write operation on cache hit rate) |
+| --- | --- | --- |
+| Proactive incremental warm-up + delayed commit + data freshness tolerance (optional) | Scenarios with very strict query latency requirements; requires permission to configure warm-up relationships | Compaction: none <br> Heavyweight schema change: none <br> Newly written data: depends on the freshness tolerance |
+| Read-only compute group automatic warm-up + prefer cached Rowsets + data freshness tolerance (optional) | The user has no permission to configure warm-up relationships <br> Without a freshness tolerance, ineffective for MoW primary key tables | Compaction: none <br> Heavyweight schema change: cache miss <br> Newly written data: depends on the freshness tolerance |
+
+By applying the cache warm-up strategies and configurations above, you can effectively manage Apache Doris's cache behavior in a read-write splitting architecture, minimize the performance loss caused by cache misses, and keep read-only query services stable and efficient.
diff --git a/sidebars.json b/sidebars.json
index 5502abf398a..20bdf3d3a03 100644
--- a/sidebars.json
+++ b/sidebars.json
@@ -531,6 +531,7 @@
                             ]
                         },
                         "compute-storage-decoupled/read-write-splitting",
+                        
"compute-storage-decoupled/file-cache-rw-compute-group-best-practice",
                         "compute-storage-decoupled/recycler",
                         "compute-storage-decoupled/upgrade"
                     ]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
