This is an automated email from the ASF dual-hosted git repository.

yiguolei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 85875683cef [documents]add grouping workload groups document (#1237)
85875683cef is described below

commit 85875683cef7e0c3bae4e8101f7af977d90c68b7
Author: wangbo <[email protected]>
AuthorDate: Wed Oct 30 17:56:24 2024 +0800

    [documents]add grouping workload groups document (#1237)
    
    # Versions
    
    - [x] dev
    - [x] 3.0
    - [x] 2.1
    - [ ] 2.0
    
    # Languages
    
    - [x] Chinese
    - [x] English
---
 .../resource-admin/group-workload-groups.md        | 157 +++++++++++++++++++++
 docs/admin-manual/resource-admin/workload-group.md |   6 +-
 .../resource-admin/group-workload-groups.md        | 150 ++++++++++++++++++++
 .../admin-manual/resource-admin/workload-group.md  |   5 +-
 .../resource-admin/group-workload-groups.md        | 150 ++++++++++++++++++++
 .../admin-manual/resource-admin/workload-group.md  |   5 +-
 .../resource-admin/group-workload-groups.md        | 150 ++++++++++++++++++++
 .../admin-manual/resource-admin/workload-group.md  |   5 +-
 sidebars.json                                      |   1 +
 .../images/workload-management/group_wg_add_be.png | Bin 0 -> 43134 bytes
 .../workload-management/group_wg_add_cluster.png   | Bin 0 -> 167190 bytes
 .../workload-management/group_wg_default.png       | Bin 0 -> 33413 bytes
 .../workload-management/group_wg_two_group.png     | Bin 0 -> 263812 bytes
 .../workload-management/rg1_rg2_workload_group.png | Bin 0 -> 307514 bytes
 .../resource-admin/group-workload-groups.md        | 157 +++++++++++++++++++++
 .../admin-manual/resource-admin/workload-group.md  |   5 +-
 .../resource-admin/group-workload-groups.md        | 157 +++++++++++++++++++++
 .../admin-manual/resource-admin/workload-group.md  |   5 +-
 versioned_sidebars/version-2.1-sidebars.json       |   1 +
 versioned_sidebars/version-3.0-sidebars.json       |   1 +
 20 files changed, 949 insertions(+), 6 deletions(-)

diff --git a/docs/admin-manual/resource-admin/group-workload-groups.md 
b/docs/admin-manual/resource-admin/group-workload-groups.md
new file mode 100644
index 00000000000..4d63c3c9fb6
--- /dev/null
+++ b/docs/admin-manual/resource-admin/group-workload-groups.md
@@ -0,0 +1,157 @@
+---
+{
+"title": "Grouping Workload Groups",
+"language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The Workload Group grouping function is commonly used when there are multiple 
physically isolated BE clusters in a Doris cluster. Workload Groups can be 
grouped, and different groups of Workload Groups can be bound to different BE 
clusters.
+
+## Recommended usage
+
+If there are currently two isolated BE sub-clusters in the cluster, named rg1 
and rg2, and these two groups are completely physically isolated, with no 
shared data or computation, the recommended configuration approach is as 
follows:
+
+1. Reduce the resource allocation for the normal group as much as possible and keep it as a fallback query group: if a query does not carry any Workload Group information, it automatically uses this default group, which avoids query failures.
+
+2. Create a corresponding Workload Group grouping for each sub-cluster and bind it to that sub-cluster. For instance, create the first grouping, named wg1, for the rg1 cluster; it contains Workload Group a and Workload Group b. Create the second grouping, named wg2, for the rg2 cluster; it contains Workload Group c and Workload Group d.
+
+The final effect will be as follows:
+
+![rg1_rg2_workload_group](/images/workload-management/rg1_rg2_workload_group.png)
+
+The operating process is as follows:
+
+Step 1: Bind the data replicas to the BE nodes, which essentially completes 
the division of the rg1 and rg2 sub-clusters, achieving isolation of the data 
replicas. If the cluster has already completed the division into sub-clusters, 
this step can be skipped, and you can proceed directly to Step 2.
+1. Bind the data replicas to the rg1 and rg2 clusters.
+```
+-- When creating tables for the rg1 cluster, specify that the replicas are distributed to rg1.
+create table table1
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg1:3"
+)
+
-- When creating tables for the rg2 cluster, specify that the replicas are distributed to rg2.
+create table table2
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg2:3"
+)
+```
+
+2. Bind the BE nodes to the rg1 and rg2 clusters.
+```
+-- Bind be1 and be2 to the rg1 cluster.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1");
+
+-- Bind be3 and be4 to the rg2 cluster.
+alter system modify backend "be3:9050" set ("tag.location" = "rg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2");
+```
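+
+After binding, the assignment can be checked by listing the backends (a sketch; the exact output columns of show backends vary by Doris version):
+```
+-- The Tag column of each BE should now contain its location, e.g. {"location" : "rg1"}.
+show backends;
+```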
+
+Step 2: Bind the workload group to the BE nodes.
+1. Create the workload groups and assign them to the wg1 and wg2 groupings respectively.
+```
+-- Create a workload group for the wg1 group.
+create workload group a properties ("memory_limit"="45%","tag"="wg1")
+create workload group b properties ("memory_limit"="45%","tag"="wg1")
+
+-- Create a workload group for the wg2 group.
+create workload group c properties ("memory_limit"="45%","tag"="wg2")
+create workload group d properties ("memory_limit"="45%","tag"="wg2")
+```
+
+2. Bind the BEs to wg1 and wg2. At this point, Workload Groups a and b will only take effect on be1 and be2, while Workload Groups c and d will only take effect on be3 and be4.
+
+(Note that tag.location is specified again here because the interface for modifying BE configurations does not currently support incremental updates; when adding new attributes, you must also carry over the existing ones.)
+```
+-- Bind be1 and be2 to wg1.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+
+-- Bind be3 and be4 to wg2.
+alter system modify backend "be3:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+```
+
+3. Reduce the resource usage of the normal workload group so that it serves as a fallback for queries that do not carry Workload Group information. Note that no tag attribute is specified for the normal group, so it takes effect on all BE nodes.
+```
+alter workload group normal properties("memory_limit"="1%")
+```
+To simplify maintenance, the BE's tag.location and tag.workload_group can use 
the same value, effectively merging rg1 with wg1 and rg2 with wg2 under a 
unified name. For example, set the BE's tag.workload_group to rg1, and also 
specify the tag for Workload Group a and b as rg1.
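+
+For instance, a unified naming scheme might be set up like this (a sketch; it assumes be1's only existing attribute is tag.location):
+```
+-- Reuse rg1 as both the replica location and the workload group tag.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1", "tag.workload_group" = "rg1");
+create workload group a properties ("memory_limit"="45%","tag"="rg1")
+```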
+
+
+## Principle explanation
+### Default situation
+The user has created a new Doris cluster with only one BE (defaulting to the 
default group). The system typically creates a group named normal by default. 
The user then creates a Workload Group A, with each group allocated 50% of the 
memory. At this point, the distribution of Workload Groups in the cluster is as 
follows:
+
+![group_wg_default](/images/workload-management/group_wg_default.png)
+
+If a new BE named BE2 is added at this point, the Workload Group distribution in the new BE will be as follows:
+
+![group_wg_add_be](/images/workload-management/group_wg_add_be.png)
+
+The distribution of Workload Groups in the new BE is the same as in the 
existing BE.
+
+### Add a new BE cluster
+Doris supports the feature of physical isolation for BE nodes. When a new BE 
node (named BE3) is added and assigned to a separate group (the new BE group is 
named vip_group), the distribution of Workload Groups is as follows:
+
+![group_wg_add_cluster](/images/workload-management/group_wg_add_cluster.png)
+
+It can be seen that by default, the Workload Group in the system is effective 
across all sub-clusters, which may have certain limitations in some scenarios.
+
+### Grouping Workload Groups
+Suppose there are two physically isolated BE clusters in the cluster: 
vip_group and default, serving different business entities. These two entities 
may have different requirements for load management. For instance, vip_group 
may need to create more Workload Groups, and the resource configurations for 
each Workload Group may differ significantly from those of the default group.
+
+In this case, the functionality of Workload Group grouping is needed to 
address this issue. For example, the vip_group cluster needs to create three 
Workload Groups, each of which can obtain equal resources.
+
+![group_wg_two_group](/images/workload-management/group_wg_two_group.png)
+
+The user has created three workload groups, named vip_wg_1, vip_wg_2, and 
vip_wg_3, and specified the tag for the workload groups as vip_wg. This means 
that these three workload groups are categorized into one group, and their 
combined memory resource allocation cannot exceed 100%.
+
+At the same time, the tag.workload_group attribute for BE3 is set to vip_wg, 
meaning that only Workload Groups with the tag attribute set to vip_wg will 
take effect on BE3.
+
+BE1 and BE2 have their tag.workload_group attribute set to default_wg, and the 
Workload Groups normal and A are also assigned the tag default_wg, so normal 
and A will only take effect on BE1 and BE2.
+
+It can be simply understood that BE1 and BE2 form one sub-cluster, which has 
two Workload Groups: normal and A; while BE3 forms another sub-cluster, which 
has three Workload Groups: vip_wg_1, vip_wg_2, and vip_wg_3.
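+
+The vip_wg setup described above can be sketched as follows (the BE address and the equal memory split are assumptions, not values from this document):
+```
+-- Three equally sized workload groups under the vip_wg tag.
+create workload group vip_wg_1 properties ("memory_limit"="30%","tag"="vip_wg")
+create workload group vip_wg_2 properties ("memory_limit"="30%","tag"="vip_wg")
+create workload group vip_wg_3 properties ("memory_limit"="30%","tag"="vip_wg")
+
+-- Only workload groups tagged vip_wg take effect on BE3.
+alter system modify backend "be3:9050" set ("tag.location" = "vip_group", "tag.workload_group" = "vip_wg");
+```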
+
+:::tip
+NOTE:
+
+It can be noted that the BE has two attributes: tag.location and 
tag.workload_group, which are not directly related.
+
+The tag.location is used to specify which data replica group the BE belongs 
to. The data replicas also have a location attribute, and the replicas are 
distributed to BEs with the same location attribute, thereby achieving physical 
resource isolation.
+
+The tag.workload_group is used to specify which Workload Group the BE belongs 
to. Workload Groups also have a tag attribute to indicate which group they 
belong to, and Workload Groups will only take effect on BEs with the specified 
grouping.
+
+In the Doris integrated storage and computing mode, data replicas and 
computation are typically bound together. Therefore, it is also recommended 
that the values of BE's tag.location and tag.workload_group be the same value.
+:::
+
+
+The current matching rules for the Workload Group tag and the BE's tag.workload_group are as follows:
+1. When the Workload Group tag is empty, the Workload Group can be sent to all BEs, regardless of whether the BE has specified a tag.
+2. When the Workload Group tag is not empty, the Workload Group will only be sent to BEs with the same tag.
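+
+These rules can be illustrated with a short sketch (the group names wg_untagged and wg_cn1 are hypothetical, used only for illustration):
+```
+-- wg_untagged has no tag, so it is sent to every BE.
+create workload group wg_untagged properties ("memory_limit"="10%")
+
+-- wg_cn1 is tagged cn1, so it is sent only to BEs whose tag.workload_group is cn1.
+create workload group wg_cn1 properties ("memory_limit"="10%","tag"="cn1")
+```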
+
+
diff --git a/docs/admin-manual/resource-admin/workload-group.md 
b/docs/admin-manual/resource-admin/workload-group.md
index 5f8c6632898..80a4f7a3c2d 100644
--- a/docs/admin-manual/resource-admin/workload-group.md
+++ b/docs/admin-manual/resource-admin/workload-group.md
@@ -90,14 +90,18 @@ Example:
 create workload group tag_wg properties('tag'='cn1');
 ```
 2. Modify the tag of a BE in the cluster to cn1. At this point, the tag_wg 
Workload Group will only be sent to this BE and any BE with no tag. The 
tag.workload_group attribute can specify multiple values, separated by commas.
+Note that the alter interface does not currently support incremental updates: each time BE attributes are modified, the full set of attributes must be provided. Therefore the statement below also sets the tag.location attribute ('default' is the system default value); in practice, specify the BE's existing attributes.
 ```
-alter system modify backend "localhost:9050" set ("tag.workload_group" = 
"cn1");
+alter system modify backend "localhost:9050" set ("tag.workload_group" = 
"cn1", "tag.location"="default");
 ```
 
 Workload Group and BE Matching Rules:
 If the Workload Group's tag is empty, the Workload Group can be sent to all 
BEs, regardless of whether the BE has a tag or not.
 If the Workload Group's tag is not empty, the Workload Group will only be sent 
to BEs with the same tag.
 
+You can refer to the recommended usage: [group-workload-groups](./group-workload-groups.md)
+
+
 ## Configure cgroup
 
 Doris 2.0 uses Doris's own scheduling to limit CPU resources, but since version 2.1, Doris defaults to using cgroup v1 to limit CPU resources. Therefore, if CPU resources are to be limited in version 2.1, cgroup must be installed on the node where the BE is located.
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/group-workload-groups.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/group-workload-groups.md
new file mode 100644
index 00000000000..9139ec05050
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/group-workload-groups.md
@@ -0,0 +1,150 @@
+---
+{
+"title": "Workload Group分组功能",
+"language": "zh-CN"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Workload Group分组功能常用于当一个Doris集群中有多个物理隔离的BE集群时,可以将Workload 
Group进行分组,不同分组的Workload Group可以绑定到不同的BE集群中。
+
+## 推荐用法
+假如目前集群中已有了两个隔离的BE子集群,命名为rg1和rg2,且这两个分组之间是完全物理隔离的,数据和计算不会有共享的情况。
+那么比较推荐的配置方式是:
+1. 把normal group的资源配置量尽量调小,作为保底的查询分组,比如查询如果不携带任何Workload 
Group信息,那么就会自动使用这个默认的group,作用是避免查询失败。
+2. 为这两个子集群分别创建对应的Workload Group,绑定到对应的子集群上。
+   例如,为rg1集群创建第一个名为wg1的Workload Group分组,包含Workload Group a和Workload Group 
b两个Workload Group。为rg2集群创建第二个名为wg2的Workload Group分组,包含Workload Group c和Workload 
Group d。
+那么最终效果如下:
+
+![rg1_rg2_workload_group](/images/workload-management/rg1_rg2_workload_group.png)
+
+操作流程如下:
+
+第一步:把数据副本绑定到BE节点,其实也就是完成rg1子集群和rg2子集群的划分,实现数据副本的隔离,如果集群已经完成了子集群的划分,那么可以跳过这个步骤,直接进入第二步。
+1. 把数据副本绑定到rg1集群和rg2集群
+```
+-- 为rg1集群建表时需要指定副本分布到rg1
+create table table1
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg1:3"
+)
+
+-- 为rg2集群建表时需要指定副本分布到rg2
+create table table2
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg2:3"
+)
+```
+
+2. 把BE节点绑定到rg1集群和rg2集群
+```
+-- 把be1和be2绑定到rg1集群
+alter system modify backend "be1:9050" set ("tag.location" = "rg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1");
+
+-- 把be3和be4绑定到rg2集群
+alter system modify backend "be3:9050" set ("tag.location" = "rg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2");
+```
+
+第二步:把workload group绑定到BE节点
+1. 新建workload group,并把workload group分别绑定到wg1和wg2
+```
+-- 创建wg1分组的workload group
+create workload group a properties ("memory_limit"="45%","tag"="wg1")
+create workload group b properties ("memory_limit"="45%","tag"="wg1")
+
+-- 创建wg2分组的workload group
+create workload group c properties ("memory_limit"="45%","tag"="wg2")
+create workload group d properties ("memory_limit"="45%","tag"="wg2")
+```
+
+2. 把BE绑定到wg1和wg2,此时Workload Group a和b只会在be1和be2上生效。Workload Group 
c和d只会在be3和be4上生效。
+(需要注意的是这里在修改时指定了tag.location,原因是修改BE配置的接口目前暂时不支持增量更新,因此在新加属性时要把存量的属性也携带上)
+```
+-- 把be1和be2绑定到wg1
+alter system modify backend "be1:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+
+-- 把be3和be4绑定到wg2
+alter system modify backend "be3:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+```
+
+3. 调小normal workload group的资源用量,作为用户不携带Workload Group信息时保底可用的Workload 
Group,可以看到没有为normal group指定tag属性,因此normal可以在所有BE生效。
+```
+alter workload group normal properties("memory_limit"="1%")
+```
+为了维护更加简单,BE的tag.location和tag.workload_group可以使用相同的值,也就是把rg1和wg1进行合并,rg2和wg2进行合并,统一使用一个名称。比如把BE的tag.workload_group设置为rg1,Workload
 Group a和b的tag也指定为rg1。
+
+
+## 原理讲解
+### 默认情况
+用户新建了一个Doris的集群,集群中只有一个BE(默认为default分组),系统通常默认会创建一个名为normal的group,然后用户又创建了一个Workload
 Group A,各自分配50%的内存,那么此时集群中Workload Group的分布情况如下:
+
+![group_wg_default](/images/workload-management/group_wg_default.png)
+
+如果此时添加一个名为BE2的新BE,那么新BE中的分布情况如下:
+
+![group_wg_add_be](/images/workload-management/group_wg_add_be.png)
+
+新增BE的Workload Group的分布和现有BE相同。
+
+### 添加新的BE集群
+Doris支持BE物理隔离的功能,当添加新的BE节点(名为BE3)并划分到独立的分组时(新的BE分组命名为vip_group),Workload 
Group的分组如下:
+
+![group_wg_add_cluster](/images/workload-management/group_wg_add_cluster.png)
+
+可以看到默认情况下,系统中的Workload Group会在所有的子集群生效,在有些场景下会具有一定的局限性。
+
+### 对Workload Group使用分组的功能
+假如集群中有vip_group和default两个物理隔离的BE集群,服务于不同的业务方,这两个业务方对于负载管理可能有不同的诉求。比如vip_group可能需要创建更多的Workload
 Group,每个Workload Group的资源配置和default分组的差异也比较大。
+
+此时就需要Workload Group分组的功能解决这个问题,比如vip_group集群需要创建三个Workload 
Group,每个group可以获得均等的资源。
+
+![group_wg_two_group](/images/workload-management/group_wg_two_group.png)
+
+用户新建了三个workload group,分别名为vip_wg_1, vip_wg_2, vip_wg_3,并指定workload 
group的tag为vip_wg,含义为这三个workload group划分为一个分组,它们的内存资源累加值不能超过100%。
+同时指定BE3的tag.workload_group属性为vip_wg,含义为只有指定了tag属性为vip_wg的Workload 
Group才会在BE3上生效。
+
+BE1和BE2指定了tag.workload_group属性为default_wg,Workload Group 
normal和A则指定了tag为default_wg,因此normal和A只会在BE1和BE2上生效。
+
+可以简单理解为,BE1和BE2是一个子集群,这个子集群拥有normal和A两个Workload 
Group;BE3是另一个子集群,这个子集群拥有vip_wg_1,vip_wg_2和vip_wg_3三个Workload Group。
+
+:::tip
+注意事项:
+
+可以注意到上文中BE有两个属性,tag.location和tag.workload_group,这两个属性没有什么直接的关联。
+tag.location用于指定BE归属于哪个数据副本分组,数据副本也有location属性,数据副本会被分发到具有相同location属性的BE,从而完成物理资源的隔离。
+
+tag.workload_group用于指定BE归属于哪个Workload Group的分组,Workload 
Group也具有tag属性用于指定Workload Group归属于哪个分组,Workload Group也只会在具有分组的BE上生效。
+Doris存算一体模式下,数据副本和计算通常是绑定的,因此也比较推荐BE的tag.location和tag.workload_group值是对齐的。
+:::
+
+目前Workload Group的tag和BE的tag.workload_group的匹配规则为:
+1. 当Workload Group的tag为空,那么这个Workload Group可以发送给所有的BE,不管该BE是否指定了tag。
+2. 当Workload Group的tag不为空,那么Workload Group只会发送给具有相同标签的BE。
+
+
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/workload-group.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/workload-group.md
index 24e993f2ba6..3e125a96b9f 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/workload-group.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/workload-group.md
@@ -95,14 +95,17 @@ Workload Group功能是对单台BE资源用量的划分。当用户创建了一
 create workload group tag_wg properties('tag'='cn1');
 ```
 2. 修改集群中一个BE的标签为cn1,此时tag_wg这个Workload 
Group就只会发送到这个BE以及标签为空的BE上。tag.workload_group属性可以指定多个,使用英文逗号分隔。
+需要注意的是,alter接口目前不支持增量更新,每次修改BE的属性都需要增加全量的属性,因此下面语句中添加了tag.location属性,default为系统默认值,实际修改时需要按照BE原有属性指定。
 ```
-alter system modify backend "localhost:9050" set ("tag.workload_group" = 
"cn1");
+alter system modify backend "localhost:9050" set ("tag.workload_group" = 
"cn1", "tag.location"="default");
 ```
 
 Workload Group和BE的匹配规则说明:
 1. 当Workload Group的Tag为空,那么这个Workload Group可以发送给所有的BE,不管该BE是否指定了tag。
 2. 当Workload Group的Tag不为空,那么Workload Group只会发送给具有相同标签的BE。
 
+推荐用法可以参考:[Workload Group分组功能](./group-workload-groups.md)
+
 ## 配置 cgroup 的环境
 Doris 的 2.0 版本使用基于 Doris 的调度实现 CPU 资源的限制,但是从 2.1 版本起,Doris 默认使用基于 CGroup v1 
版本对 CPU 资源进行限制,因此如果期望在 2.1 版本对 CPU 资源进行约束,那么需要 BE 所在的节点上已经安装好 CGroup 的环境。
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/resource-admin/group-workload-groups.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/resource-admin/group-workload-groups.md
new file mode 100644
index 00000000000..92eefb7ff14
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/resource-admin/group-workload-groups.md
@@ -0,0 +1,150 @@
+---
+{
+"title": "Workload Group分组功能",
+"language": "zh-CN"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Workload Group分组功能常用于当一个Doris集群中有多个物理隔离的BE集群时,可以将Workload 
Group进行分组,不同分组的Workload Group可以绑定到不同的BE集群中。
+
+## 推荐用法
+假如目前集群中已有了两个隔离的BE子集群,命名为rg1和rg2,且这两个分组之间是完全物理隔离的,数据和计算不会有共享的情况。
+那么比较推荐的配置方式是:
+1. 把normal group的资源配置量尽量调小,作为保底的查询分组,比如查询如果不携带任何Workload 
Group信息,那么就会自动使用这个默认的group,作用是避免查询失败。
+2. 为这两个子集群分别创建对应的Workload Group,绑定到对应的子集群上。
+   例如,为rg1集群创建第一个名为wg1的Workload Group分组,包含Workload Group a和Workload Group 
b两个Workload Group。为rg2集群创建第二个名为wg2的Workload Group分组,包含Workload Group c和Workload 
Group d。
+   那么最终效果如下:
+
+![rg1_rg2_workload_group](/images/workload-management/rg1_rg2_workload_group.png)
+
+操作流程如下:
+
+第一步:把数据副本绑定到BE节点,其实也就是完成rg1子集群和rg2子集群的划分,实现数据副本的隔离,如果集群已经完成了子集群的划分,那么可以跳过这个步骤,直接进入第二步。
+1. 把数据副本绑定到rg1集群和rg2集群
+```
+-- 为rg1集群建表时需要指定副本分布到rg1
+create table table1
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg1:3"
+)
+
+-- 为rg2集群建表时需要指定副本分布到rg2
+create table table2
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg2:3"
+)
+```
+
+2. 把BE节点绑定到rg1集群和rg2集群
+```
+-- 把be1和be2绑定到rg1集群
+alter system modify backend "be1:9050" set ("tag.location" = "rg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1");
+
+-- 把be3和be4绑定到rg2集群
+alter system modify backend "be3:9050" set ("tag.location" = "rg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2");
+```
+
+第二步:把workload group绑定到BE节点
+1. 新建workload group,并把workload group分别绑定到wg1和wg2
+```
+-- 创建wg1分组的workload group
+create workload group a properties ("memory_limit"="45%","tag"="wg1")
+create workload group b properties ("memory_limit"="45%","tag"="wg1")
+
+-- 创建wg2分组的workload group
+create workload group c properties ("memory_limit"="45%","tag"="wg2")
+create workload group d properties ("memory_limit"="45%","tag"="wg2")
+```
+
+2. 把BE绑定到wg1和wg2,此时Workload Group a和b只会在be1和be2上生效。Workload Group 
c和d只会在be3和be4上生效。
+   (需要注意的是这里在修改时指定了tag.location,原因是修改BE配置的接口目前暂时不支持增量更新,因此在新加属性时要把存量的属性也携带上)
+```
+-- 把be1和be2绑定到wg1
+alter system modify backend "be1:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+
+-- 把be3和be4绑定到wg2
+alter system modify backend "be3:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+```
+
+3. 调小normal workload group的资源用量,作为用户不携带Workload Group信息时保底可用的Workload 
Group,可以看到没有为normal group指定tag属性,因此normal可以在所有BE生效。
+```
+alter workload group normal properties("memory_limit"="1%")
+```
+为了维护更加简单,BE的tag.location和tag.workload_group可以使用相同的值,也就是把rg1和wg1进行合并,rg2和wg2进行合并,统一使用一个名称。比如把BE的tag.workload_group设置为rg1,Workload
 Group a和b的tag也指定为rg1。
+
+
+## 原理讲解
+### 默认情况
+用户新建了一个Doris的集群,集群中只有一个BE(默认为default分组),系统通常默认会创建一个名为normal的group,然后用户又创建了一个Workload
 Group A,各自分配50%的内存,那么此时集群中Workload Group的分布情况如下:
+
+![group_wg_default](/images/workload-management/group_wg_default.png)
+
+如果此时添加一个名为BE2的新BE,那么新BE中的分布情况如下:
+
+![group_wg_add_be](/images/workload-management/group_wg_add_be.png)
+
+新增BE的Workload Group的分布和现有BE相同。
+
+### 添加新的BE集群
+Doris支持BE物理隔离的功能,当添加新的BE节点(名为BE3)并划分到独立的分组时(新的BE分组命名为vip_group),Workload 
Group的分组如下:
+
+![group_wg_add_cluster](/images/workload-management/group_wg_add_cluster.png)
+
+可以看到默认情况下,系统中的Workload Group会在所有的子集群生效,在有些场景下会具有一定的局限性。
+
+### 对Workload Group使用分组的功能
+假如集群中有vip_group和default两个物理隔离的BE集群,服务于不同的业务方,这两个业务方对于负载管理可能有不同的诉求。比如vip_group可能需要创建更多的Workload
 Group,每个Workload Group的资源配置和default分组的差异也比较大。
+
+此时就需要Workload Group分组的功能解决这个问题,比如vip_group集群需要创建三个Workload 
Group,每个group可以获得均等的资源。
+
+![group_wg_two_group](/images/workload-management/group_wg_two_group.png)
+
+用户新建了三个workload group,分别名为vip_wg_1, vip_wg_2, vip_wg_3,并指定workload 
group的tag为vip_wg,含义为这三个workload group划分为一个分组,它们的内存资源累加值不能超过100%。
+同时指定BE3的tag.workload_group属性为vip_wg,含义为只有指定了tag属性为vip_wg的Workload 
Group才会在BE3上生效。
+
+BE1和BE2指定了tag.workload_group属性为default_wg,Workload Group 
normal和A则指定了tag为default_wg,因此normal和A只会在BE1和BE2上生效。
+
+可以简单理解为,BE1和BE2是一个子集群,这个子集群拥有normal和A两个Workload 
Group;BE3是另一个子集群,这个子集群拥有vip_wg_1,vip_wg_2和vip_wg_3三个Workload Group。
+
+:::tip
+注意事项:
+
+可以注意到上文中BE有两个属性,tag.location和tag.workload_group,这两个属性没有什么直接的关联。
+tag.location用于指定BE归属于哪个数据副本分组,数据副本也有location属性,数据副本会被分发到具有相同location属性的BE,从而完成物理资源的隔离。
+
+tag.workload_group用于指定BE归属于哪个Workload Group的分组,Workload 
Group也具有tag属性用于指定Workload Group归属于哪个分组,Workload Group也只会在具有分组的BE上生效。
+Doris存算一体模式下,数据副本和计算通常是绑定的,因此也比较推荐BE的tag.location和tag.workload_group值是对齐的。
+:::
+
+目前Workload Group的tag和BE的tag.workload_group的匹配规则为:
+1. 当Workload Group的tag为空,那么这个Workload Group可以发送给所有的BE,不管该BE是否指定了tag。
+2. 当Workload Group的tag不为空,那么Workload Group只会发送给具有相同标签的BE。
+
+
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/resource-admin/workload-group.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/resource-admin/workload-group.md
index 7a7659e033c..74d02f5244c 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/resource-admin/workload-group.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/resource-admin/workload-group.md
@@ -95,14 +95,17 @@ Workload Group功能是对单台BE资源用量的划分。当用户创建了一
 create workload group tag_wg properties('tag'='cn1');
 ```
 2. 修改集群中一个BE的标签为cn1,此时tag_wg这个Workload 
Group就只会发送到这个BE以及标签为空的BE上。tag.workload_group属性可以指定多个,使用英文逗号分隔。
+   
需要注意的是,alter接口目前不支持增量更新,每次修改BE的属性都需要增加全量的属性,因此下面语句中添加了tag.location属性,default为系统默认值,实际修改时需要按照BE原有属性指定。
 ```
-alter system modify backend "localhost:9050" set ("tag.workload_group" = 
"cn1");
+alter system modify backend "localhost:9050" set ("tag.workload_group" = 
"cn1", "tag.location"="default");
 ```
 
 Workload Group和BE的匹配规则说明:
 1. 当Workload Group的Tag为空,那么这个Workload Group可以发送给所有的BE,不管该BE是否指定了tag。
 2. 当Workload Group的Tag不为空,那么Workload Group只会发送给具有相同标签的BE。
 
+推荐用法可以参考:[Workload Group分组功能](./group-workload-groups.md)
+
 ## 配置 cgroup 的环境
 Doris 的 2.0 版本使用基于 Doris 的调度实现 CPU 资源的限制,但是从 2.1 版本起,Doris 默认使用基于 CGroup v1 
版本对 CPU 资源进行限制,因此如果期望在 2.1 版本对 CPU 资源进行约束,那么需要 BE 所在的节点上已经安装好 CGroup 的环境。
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/resource-admin/group-workload-groups.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/resource-admin/group-workload-groups.md
new file mode 100644
index 00000000000..92eefb7ff14
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/resource-admin/group-workload-groups.md
@@ -0,0 +1,150 @@
+---
+{
+"title": "Workload Group分组功能",
+"language": "zh-CN"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Workload Group分组功能常用于当一个Doris集群中有多个物理隔离的BE集群时,可以将Workload 
Group进行分组,不同分组的Workload Group可以绑定到不同的BE集群中。
+
+## 推荐用法
+假如目前集群中已有了两个隔离的BE子集群,命名为rg1和rg2,且这两个分组之间是完全物理隔离的,数据和计算不会有共享的情况。
+那么比较推荐的配置方式是:
+1. 把normal group的资源配置量尽量调小,作为保底的查询分组,比如查询如果不携带任何Workload 
Group信息,那么就会自动使用这个默认的group,作用是避免查询失败。
+2. 为这两个子集群分别创建对应的Workload Group,绑定到对应的子集群上。
+   例如,为rg1集群创建第一个名为wg1的Workload Group分组,包含Workload Group a和Workload Group 
b两个Workload Group。为rg2集群创建第二个名为wg2的Workload Group分组,包含Workload Group c和Workload 
Group d。
+   那么最终效果如下:
+
+![rg1_rg2_workload_group](/images/workload-management/rg1_rg2_workload_group.png)
+
+操作流程如下:
+
+第一步:把数据副本绑定到BE节点,其实也就是完成rg1子集群和rg2子集群的划分,实现数据副本的隔离,如果集群已经完成了子集群的划分,那么可以跳过这个步骤,直接进入第二步。
+1. 把数据副本绑定到rg1集群和rg2集群
+```
+-- 为rg1集群建表时需要指定副本分布到rg1
+create table table1
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg1:3"
+)
+
+-- 为rg2集群建表时需要指定副本分布到rg2
+create table table2
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg2:3"
+)
+```
+
+2. 把BE节点绑定到rg1集群和rg2集群
+```
+-- 把be1和be2绑定到rg1集群
+alter system modify backend "be1:9050" set ("tag.location" = "rg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1");
+
+-- 把be3和be4绑定到rg2集群
+alter system modify backend "be3:9050" set ("tag.location" = "rg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2");
+```
+
+第二步:把workload group绑定到BE节点
+1. 新建workload group,并把workload group分别绑定到wg1和wg2
+```
+-- 创建wg1分组的workload group
+create workload group a properties ("memory_limit"="45%","tag"="wg1")
+create workload group b properties ("memory_limit"="45%","tag"="wg1")
+
+-- 创建wg2分组的workload group
+create workload group c properties ("memory_limit"="45%","tag"="wg2")
+create workload group d properties ("memory_limit"="45%","tag"="wg2")
+```
+
+2. Bind the BEs to wg1 and wg2. After this, Workload Group a and b take effect only on be1 and be2, while Workload Group c and d take effect only on be3 and be4.
+   (Note that tag.location is repeated in these statements: the interface for modifying BE attributes does not yet support incremental updates, so when adding a new attribute you must also carry over the existing ones.)
+```
+-- Bind be1 and be2 to wg1.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+
+-- Bind be3 and be4 to wg2.
+alter system modify backend "be3:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+```
+
+3. Reduce the resource usage of the normal Workload Group so that it serves as the guaranteed fallback for users who carry no Workload Group information. Note that no tag property is set on the normal group, so normal takes effect on all BEs.
+```
+alter workload group normal properties("memory_limit"="1%");
+```
+To simplify maintenance, a BE's tag.location and tag.workload_group can use the same value, that is, merge rg1 with wg1 and rg2 with wg2 under a single name. For example, set the BE's tag.workload_group to rg1 and also set the tag of Workload Group a and b to rg1.
+
+
+## How It Works
+### Default Behavior
+Suppose a user creates a new Doris cluster with a single BE (in the default group). The system creates a group named normal by default, and the user then creates a Workload Group A, with each group allocated 50% of the memory. The Workload Group distribution in the cluster then looks like this:
+
+![group_wg_default](/images/workload-management/group_wg_default.png)
+
+If a new BE named BE2 is added at this point, the distribution on the new BE is as follows:
+
+![group_wg_add_be](/images/workload-management/group_wg_add_be.png)
+
+The Workload Group distribution on the new BE is the same as on the existing BE.
+
+### Adding a New BE Cluster
+Doris supports physical isolation of BEs. When a new BE node (named BE3) is added and assigned to a separate group (named vip_group), the Workload Group layout is as follows:
+
+![group_wg_add_cluster](/images/workload-management/group_wg_add_cluster.png)
+
+As shown, by default the Workload Groups in the system take effect on every sub-cluster, which can be a limitation in some scenarios.
+
+### Using Grouping for Workload Groups
+Suppose the cluster contains two physically isolated BE clusters, vip_group and default, serving different business parties that may have different workload-management needs. For example, vip_group may need more Workload Groups, each with resource settings quite different from those of the default group.
+
+Workload Group grouping solves this. For example, the vip_group cluster needs to create three Workload Groups, each receiving an equal share of resources.
+
+![group_wg_two_group](/images/workload-management/group_wg_two_group.png)
+
+The user creates three workload groups named vip_wg_1, vip_wg_2, and vip_wg_3, and sets their tag to vip_wg, meaning the three workload groups form one grouping whose memory limits must not add up to more than 100%.
+BE3's tag.workload_group attribute is likewise set to vip_wg, meaning only Workload Groups whose tag is vip_wg take effect on BE3.
+
+BE1 and BE2 have tag.workload_group set to default_wg, and the Workload Groups normal and A are tagged default_wg, so normal and A take effect only on BE1 and BE2.
+
+Put simply, BE1 and BE2 form one sub-cluster that owns the two Workload Groups normal and A, while BE3 forms another sub-cluster that owns the three Workload Groups vip_wg_1, vip_wg_2, and vip_wg_3.
+
+:::tip
+Note:
+
+As shown above, a BE has two attributes, tag.location and tag.workload_group, which are not directly related.
+tag.location specifies which data-replica group a BE belongs to. Data replicas also have a location attribute, and a replica is placed on BEs with the same location, which achieves physical resource isolation.
+
+tag.workload_group specifies which Workload Group grouping a BE belongs to. Workload Groups likewise have a tag attribute specifying their grouping, and a Workload Group only takes effect on BEs in the same grouping.
+In Doris's integrated storage-compute mode, data replicas and computation are usually bound together, so it is recommended to keep a BE's tag.location and tag.workload_group aligned.
+:::
+
+The current matching rules between a Workload Group's tag and a BE's tag.workload_group are:
+1. If a Workload Group's tag is empty, the Workload Group is sent to all BEs, whether or not a BE specifies a tag.
+2. If a Workload Group's tag is not empty, the Workload Group is sent only to BEs with the same tag.
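The two rules above can be sketched as a small Python function (a hypothetical illustration of the described behavior, not Doris source code; the BE names and the dictionary layout are assumptions):

```python
# Hypothetical sketch of the matching rules above (not Doris code).
def effective_backends(wg_tag, backends):
    """Return the BEs a Workload Group takes effect on.

    backends maps a BE name to the set of workload_group tags set on it
    (an empty set means the BE has no tag.workload_group).
    """
    if not wg_tag:
        # Rule 1: a Workload Group with an empty tag is sent to every BE.
        return sorted(backends)
    # Rule 2: otherwise it is sent only to BEs carrying the same tag.
    return sorted(be for be, tags in backends.items() if wg_tag in tags)

backends = {"be1": {"wg1"}, "be2": {"wg1"}, "be3": {"wg2"}, "be4": {"wg2"}}
print(effective_backends("wg1", backends))  # ['be1', 'be2']
print(effective_backends("", backends))     # all four BEs
```

With the setup on this page, Workload Group a (tag wg1) would land on be1 and be2 only, while the untagged normal group lands everywhere.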
+
+
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/resource-admin/workload-group.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/resource-admin/workload-group.md
index 7a7659e033c..74d02f5244c 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/resource-admin/workload-group.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/resource-admin/workload-group.md
@@ -95,14 +95,17 @@ Workload Group功能是对单台BE资源用量的划分。当用户创建了一
 create workload group tag_wg properties('tag'='cn1');
 ```
 2. Change the tag of one BE in the cluster to cn1. The tag_wg Workload Group is then sent only to this BE and to BEs whose tag is empty. The tag.workload_group attribute can take multiple values, separated by commas.
+   Note that the alter interface does not yet support incremental updates: every time a BE's attributes are modified, the full set of attributes must be supplied. The statement below therefore also carries tag.location, shown with the system default value "default"; in practice, specify the BE's actual existing attributes.
 ```
-alter system modify backend "localhost:9050" set ("tag.workload_group" = "cn1");
+alter system modify backend "localhost:9050" set ("tag.workload_group" = "cn1", "tag.location"="default");
 ```
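Since tag.workload_group may hold several comma-separated values, the BE-side check can be pictured like this (a hypothetical sketch of the behavior described above, not Doris code; the helper name is invented):

```python
# Hypothetical sketch (not Doris code): does a Workload Group with tag
# wg_tag take effect on a BE whose tag.workload_group value is raw_value?
def be_accepts(raw_value, wg_tag):
    be_tags = {t.strip() for t in raw_value.split(",") if t.strip()}
    # Per the text above: a BE with an empty tag accepts every Workload
    # Group, and an untagged Workload Group reaches every BE; otherwise
    # the BE's comma-separated tag list must contain the group's tag.
    if not be_tags or wg_tag == "":
        return True
    return wg_tag in be_tags

print(be_accepts("cn1,cn2", "cn1"))  # True
print(be_accepts("cn1,cn2", "cn3"))  # False
print(be_accepts("", "cn1"))         # True: untagged BE accepts all
```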
 
 Workload Group and BE matching rules:
 1. If a Workload Group's tag is empty, the Workload Group can be sent to all BEs, whether or not the BE specifies a tag.
 2. If a Workload Group's tag is not empty, the Workload Group is sent only to BEs with the same tag.
 
+Recommended usage is described in [Grouping Workload Groups](./group-workload-groups.md).
+
 ## Setting Up the cgroup Environment
 Doris 2.0 limits CPU resources through Doris's own scheduling, but since version 2.1 Doris uses CGroup v1 by default to limit CPU resources. Therefore, to constrain CPU resources in 2.1, the CGroup environment must already be installed on the node hosting the BE.
 
diff --git a/sidebars.json b/sidebars.json
index cf31941c153..9926087d247 100644
--- a/sidebars.json
+++ b/sidebars.json
@@ -390,6 +390,7 @@
                     "label": "Managing Resource",
                     "items": [
                         "admin-manual/resource-admin/workload-group",
+                        "admin-manual/resource-admin/group-workload-groups",
                         "admin-manual/resource-admin/workload-policy",
                         "admin-manual/resource-admin/workload-analysis",
                         "admin-manual/resource-admin/multi-tenant",
diff --git a/static/images/workload-management/group_wg_add_be.png 
b/static/images/workload-management/group_wg_add_be.png
new file mode 100644
index 00000000000..7021e350aaa
Binary files /dev/null and 
b/static/images/workload-management/group_wg_add_be.png differ
diff --git a/static/images/workload-management/group_wg_add_cluster.png 
b/static/images/workload-management/group_wg_add_cluster.png
new file mode 100644
index 00000000000..060e52a84f7
Binary files /dev/null and 
b/static/images/workload-management/group_wg_add_cluster.png differ
diff --git a/static/images/workload-management/group_wg_default.png 
b/static/images/workload-management/group_wg_default.png
new file mode 100644
index 00000000000..a79c944fc4e
Binary files /dev/null and 
b/static/images/workload-management/group_wg_default.png differ
diff --git a/static/images/workload-management/group_wg_two_group.png 
b/static/images/workload-management/group_wg_two_group.png
new file mode 100644
index 00000000000..97394d53ba6
Binary files /dev/null and 
b/static/images/workload-management/group_wg_two_group.png differ
diff --git a/static/images/workload-management/rg1_rg2_workload_group.png 
b/static/images/workload-management/rg1_rg2_workload_group.png
new file mode 100644
index 00000000000..a5d9bb6fba0
Binary files /dev/null and 
b/static/images/workload-management/rg1_rg2_workload_group.png differ
diff --git 
a/versioned_docs/version-2.1/admin-manual/resource-admin/group-workload-groups.md
 
b/versioned_docs/version-2.1/admin-manual/resource-admin/group-workload-groups.md
new file mode 100644
index 00000000000..4d63c3c9fb6
--- /dev/null
+++ 
b/versioned_docs/version-2.1/admin-manual/resource-admin/group-workload-groups.md
@@ -0,0 +1,157 @@
+---
+{
+"title": "Grouping Workload Groups",
+"language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The Workload Group grouping function is commonly used when there are multiple 
physically isolated BE clusters in a Doris cluster. Workload Groups can be 
grouped, and different groups of Workload Groups can be bound to different BE 
clusters.
+
+## Recommended usage
+
+If there are currently two isolated BE sub-clusters in the cluster, named rg1 
and rg2, and these two groups are completely physically isolated, with no 
shared data or computation, the recommended configuration approach is as 
follows:
+
+1. Reduce the resource allocation for the normal group as much as possible, 
serving as a fallback query group. For example, if a query does not carry any 
Workload Group information, it will automatically use this default group to 
avoid query failures.
+
+2. Create corresponding Workload Groups for these two sub-clusters and bind 
them to the respective sub-clusters. For instance, create the first Workload 
Group named wg1 for the rg1 cluster, which includes Workload Group a and 
Workload Group b. Create the second Workload Group named wg2 for the rg2 
cluster, which includes Workload Group c and Workload Group d.
+
+The final effect will be as follows:
+
+![rg1_rg2_workload_group](/images/workload-management/rg1_rg2_workload_group.png)
+
+The operating process is as follows:
+
+Step 1: Bind the data replicas to the BE nodes, which essentially completes 
the division of the rg1 and rg2 sub-clusters, achieving isolation of the data 
replicas. If the cluster has already completed the division into sub-clusters, 
this step can be skipped, and you can proceed directly to Step 2.
+1. Bind the data replicas to the rg1 and rg2 clusters.
+```
+-- When creating tables for the rg1 cluster, specify that the replicas are distributed to rg1.
+create table table1
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg1:3"
+);
+
+-- When creating tables for the rg2 cluster, specify that the replicas are distributed to rg2.
+create table table2
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg2:3"
+);
+```
+
+2. Bind the BE nodes to the rg1 and rg2 clusters.
+```
+-- Bind be1 and be2 to the rg1 cluster.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1");
+
+-- Bind be3 and be4 to the rg2 cluster.
+alter system modify backend "be3:9050" set ("tag.location" = "rg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2");
+```
+
+Step 2: Bind the workload group to the BE nodes.
+1. Create a new workload group and bind it to wg1 and wg2 respectively.
+```
+-- Create the workload groups in the wg1 grouping.
+create workload group a properties ("memory_limit"="45%","tag"="wg1");
+create workload group b properties ("memory_limit"="45%","tag"="wg1");
+
+-- Create the workload groups in the wg2 grouping.
+create workload group c properties ("memory_limit"="45%","tag"="wg2");
+create workload group d properties ("memory_limit"="45%","tag"="wg2");
+```
+
+2. Bind the BE to wg1 and wg2. At this point, Workload Group a and b will only 
take effect on be1 and be2, while Workload Group c and d will only take effect 
on be3 and be4.
+
+(Note that when modifying, the tag.location is specified here because the 
current interface for modifying BE configurations does not support incremental 
updates. Therefore, when adding new attributes, you must also carry over the 
existing attributes.)
+```
+-- Bind be1 and be2 to wg1.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+
+-- Bind be3 and be4 to wg2.
+alter system modify backend "be3:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+```
+
+3. Reduce the resource usage of the normal workload group, serving as a 
fallback workload group when users do not carry Workload Group information. It 
can be observed that no tag attributes have been specified for the normal 
group, allowing it to be effective on all BE nodes.
+```
+alter workload group normal properties("memory_limit"="1%");
+```
+To simplify maintenance, the BE's tag.location and tag.workload_group can use 
the same value, effectively merging rg1 with wg1 and rg2 with wg2 under a 
unified name. For example, set the BE's tag.workload_group to rg1, and also 
specify the tag for Workload Group a and b as rg1.
+
+
+## How It Works
+### Default situation
+The user has created a new Doris cluster with only one BE (defaulting to the 
default group). The system typically creates a group named normal by default. 
The user then creates a Workload Group A, with each group allocated 50% of the 
memory. At this point, the distribution of Workload Groups in the cluster is as 
follows:
+
+![group_wg_default](/images/workload-management/group_wg_default.png)
+
+If a new BE named BE2 is added at this point, the Workload Group distribution on the new BE will be as follows:
+
+![group_wg_add_be](/images/workload-management/group_wg_add_be.png)
+
+The distribution of Workload Groups in the new BE is the same as in the 
existing BE.
+
+### Add a new BE cluster
+Doris supports the feature of physical isolation for BE nodes. When a new BE 
node (named BE3) is added and assigned to a separate group (the new BE group is 
named vip_group), the distribution of Workload Groups is as follows:
+
+![group_wg_add_cluster](/images/workload-management/group_wg_add_cluster.png)
+
+It can be seen that by default, the Workload Group in the system is effective 
across all sub-clusters, which may have certain limitations in some scenarios.
+
+### Grouping Workload Groups
+Suppose there are two physically isolated BE clusters in the cluster: 
vip_group and default, serving different business entities. These two entities 
may have different requirements for load management. For instance, vip_group 
may need to create more Workload Groups, and the resource configurations for 
each Workload Group may differ significantly from those of the default group.
+
+In this case, the functionality of Workload Group grouping is needed to 
address this issue. For example, the vip_group cluster needs to create three 
Workload Groups, each of which can obtain equal resources.
+
+![group_wg_two_group](/images/workload-management/group_wg_two_group.png)
+
+The user has created three workload groups, named vip_wg_1, vip_wg_2, and 
vip_wg_3, and specified the tag for the workload groups as vip_wg. This means 
that these three workload groups are categorized into one group, and their 
combined memory resource allocation cannot exceed 100%.
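The 100% constraint can be expressed as a short check (a hypothetical sketch using this page's example; the 33% limits are assumptions and the helper is not a Doris API):

```python
# Hypothetical check (not a Doris API): the memory_limit values of all
# Workload Groups sharing one tag must not add up to more than 100%.
def tag_memory_within_limit(groups, tag):
    """groups: iterable of (name, tag, memory_limit_percent) tuples."""
    return sum(limit for _, t, limit in groups if t == tag) <= 100

groups = [
    ("vip_wg_1", "vip_wg", 33),
    ("vip_wg_2", "vip_wg", 33),
    ("vip_wg_3", "vip_wg", 33),
]
print(tag_memory_within_limit(groups, "vip_wg"))  # True: 33 * 3 = 99
```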
+
+At the same time, the tag.workload_group attribute for BE3 is set to vip_wg, 
meaning that only Workload Groups with the tag attribute set to vip_wg will 
take effect on BE3.
+
+BE1 and BE2 have their tag.workload_group attribute set to default_wg, and the 
Workload Groups normal and A are also assigned the tag default_wg, so normal 
and A will only take effect on BE1 and BE2.
+
+It can be simply understood that BE1 and BE2 form one sub-cluster, which has 
two Workload Groups: normal and A; while BE3 forms another sub-cluster, which 
has three Workload Groups: vip_wg_1, vip_wg_2, and vip_wg_3.
+
+:::tip
+NOTE:
+
+It can be noted that the BE has two attributes: tag.location and 
tag.workload_group, which are not directly related.
+
+The tag.location is used to specify which data replica group the BE belongs 
to. The data replicas also have a location attribute, and the replicas are 
distributed to BEs with the same location attribute, thereby achieving physical 
resource isolation.
+
+The tag.workload_group is used to specify which Workload Group the BE belongs 
to. Workload Groups also have a tag attribute to indicate which group they 
belong to, and Workload Groups will only take effect on BEs with the specified 
grouping.
+
+In Doris's integrated storage-compute mode, data replicas and computation are typically bound together, so it is also recommended that a BE's tag.location and tag.workload_group be set to the same value.
+:::
+
+
+The current matching rules between a Workload Group's tag and a BE's tag.workload_group are as follows:
+1. When the Workload Group tag is empty, this Workload Group can be sent to 
all BEs, regardless of whether the BE has specified a tag.
+2. When the Workload Group tag is not empty, the Workload Group will only be 
sent to BEs with the same tag.
+
+
diff --git 
a/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md 
b/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md
index 2df281f815e..b04422c4772 100644
--- a/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md
+++ b/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md
@@ -90,14 +90,17 @@ Example:
 create workload group tag_wg properties('tag'='cn1');
 ```
 2. Modify the tag of a BE in the cluster to cn1. At this point, the tag_wg 
Workload Group will only be sent to this BE and any BE with no tag. The 
tag.workload_group attribute can specify multiple values, separated by commas.
+   It is important to note that the alter interface currently does not support 
incremental updates. Each time the BE attributes are modified, the entire set 
of attributes needs to be provided. Therefore, in the statements below, the 
tag.location attribute is added, with 'default' as the system default value. In 
practice, the existing attributes of the BE should be specified accordingly.
 ```
-alter system modify backend "localhost:9050" set ("tag.workload_group" = "cn1");
+alter system modify backend "localhost:9050" set ("tag.workload_group" = "cn1", "tag.location"="default");
 ```
 
 Workload Group and BE Matching Rules:
 If the Workload Group's tag is empty, the Workload Group can be sent to all 
BEs, regardless of whether the BE has a tag or not.
 If the Workload Group's tag is not empty, the Workload Group will only be sent 
to BEs with the same tag.
 
+You can refer to the recommended usage in [Grouping Workload Groups](./group-workload-groups.md).
+
 ## Configure cgroup
 
 Doris 2.0 uses Doris's own scheduling to limit CPU resources, but since version 2.1, Doris defaults to CGroup v1 for CPU limits. Therefore, to limit CPU resources in version 2.1, CGroup must be installed on the node where the BE runs.
diff --git 
a/versioned_docs/version-3.0/admin-manual/resource-admin/group-workload-groups.md
 
b/versioned_docs/version-3.0/admin-manual/resource-admin/group-workload-groups.md
new file mode 100644
index 00000000000..4d63c3c9fb6
--- /dev/null
+++ 
b/versioned_docs/version-3.0/admin-manual/resource-admin/group-workload-groups.md
@@ -0,0 +1,157 @@
+---
+{
+"title": "Grouping Workload Groups",
+"language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The Workload Group grouping function is commonly used when there are multiple 
physically isolated BE clusters in a Doris cluster. Workload Groups can be 
grouped, and different groups of Workload Groups can be bound to different BE 
clusters.
+
+## Recommended usage
+
+If there are currently two isolated BE sub-clusters in the cluster, named rg1 
and rg2, and these two groups are completely physically isolated, with no 
shared data or computation, the recommended configuration approach is as 
follows:
+
+1. Reduce the resource allocation for the normal group as much as possible, 
serving as a fallback query group. For example, if a query does not carry any 
Workload Group information, it will automatically use this default group to 
avoid query failures.
+
+2. Create corresponding Workload Groups for these two sub-clusters and bind 
them to the respective sub-clusters. For instance, create the first Workload 
Group named wg1 for the rg1 cluster, which includes Workload Group a and 
Workload Group b. Create the second Workload Group named wg2 for the rg2 
cluster, which includes Workload Group c and Workload Group d.
+
+The final effect will be as follows:
+
+![rg1_rg2_workload_group](/images/workload-management/rg1_rg2_workload_group.png)
+
+The operating process is as follows:
+
+Step 1: Bind the data replicas to the BE nodes, which essentially completes 
the division of the rg1 and rg2 sub-clusters, achieving isolation of the data 
replicas. If the cluster has already completed the division into sub-clusters, 
this step can be skipped, and you can proceed directly to Step 2.
+1. Bind the data replicas to the rg1 and rg2 clusters.
+```
+-- When creating tables for the rg1 cluster, specify that the replicas are distributed to rg1.
+create table table1
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg1:3"
+);
+
+-- When creating tables for the rg2 cluster, specify that the replicas are distributed to rg2.
+create table table2
+(k1 int, k2 int)
+distributed by hash(k1) buckets 1
+properties(
+    "replication_allocation"="tag.location.rg2:3"
+);
+```
+
+2. Bind the BE nodes to the rg1 and rg2 clusters.
+```
+-- Bind be1 and be2 to the rg1 cluster.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1");
+
+-- Bind be3 and be4 to the rg2 cluster.
+alter system modify backend "be3:9050" set ("tag.location" = "rg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2");
+```
+
+Step 2: Bind the workload group to the BE nodes.
+1. Create a new workload group and bind it to wg1 and wg2 respectively.
+```
+-- Create the workload groups in the wg1 grouping.
+create workload group a properties ("memory_limit"="45%","tag"="wg1");
+create workload group b properties ("memory_limit"="45%","tag"="wg1");
+
+-- Create the workload groups in the wg2 grouping.
+create workload group c properties ("memory_limit"="45%","tag"="wg2");
+create workload group d properties ("memory_limit"="45%","tag"="wg2");
+```
+
+2. Bind the BE to wg1 and wg2. At this point, Workload Group a and b will only 
take effect on be1 and be2, while Workload Group c and d will only take effect 
on be3 and be4.
+
+(Note that when modifying, the tag.location is specified here because the 
current interface for modifying BE configurations does not support incremental 
updates. Therefore, when adding new attributes, you must also carry over the 
existing attributes.)
+```
+-- Bind be1 and be2 to wg1.
+alter system modify backend "be1:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+alter system modify backend "be2:9050" set ("tag.location" = "rg1", "tag.workload_group" = "wg1");
+
+-- Bind be3 and be4 to wg2.
+alter system modify backend "be3:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+alter system modify backend "be4:9050" set ("tag.location" = "rg2", "tag.workload_group" = "wg2");
+```
+
+3. Reduce the resource usage of the normal workload group, serving as a 
fallback workload group when users do not carry Workload Group information. It 
can be observed that no tag attributes have been specified for the normal 
group, allowing it to be effective on all BE nodes.
+```
+alter workload group normal properties("memory_limit"="1%");
+```
+To simplify maintenance, the BE's tag.location and tag.workload_group can use 
the same value, effectively merging rg1 with wg1 and rg2 with wg2 under a 
unified name. For example, set the BE's tag.workload_group to rg1, and also 
specify the tag for Workload Group a and b as rg1.
+
+
+## How It Works
+### Default situation
+The user has created a new Doris cluster with only one BE (defaulting to the 
default group). The system typically creates a group named normal by default. 
The user then creates a Workload Group A, with each group allocated 50% of the 
memory. At this point, the distribution of Workload Groups in the cluster is as 
follows:
+
+![group_wg_default](/images/workload-management/group_wg_default.png)
+
+If a new BE named BE2 is added at this point, the Workload Group distribution on the new BE will be as follows:
+
+![group_wg_add_be](/images/workload-management/group_wg_add_be.png)
+
+The distribution of Workload Groups in the new BE is the same as in the 
existing BE.
+
+### Add a new BE cluster
+Doris supports the feature of physical isolation for BE nodes. When a new BE 
node (named BE3) is added and assigned to a separate group (the new BE group is 
named vip_group), the distribution of Workload Groups is as follows:
+
+![group_wg_add_cluster](/images/workload-management/group_wg_add_cluster.png)
+
+It can be seen that by default, the Workload Group in the system is effective 
across all sub-clusters, which may have certain limitations in some scenarios.
+
+### Grouping Workload Groups
+Suppose there are two physically isolated BE clusters in the cluster: 
vip_group and default, serving different business entities. These two entities 
may have different requirements for load management. For instance, vip_group 
may need to create more Workload Groups, and the resource configurations for 
each Workload Group may differ significantly from those of the default group.
+
+In this case, the functionality of Workload Group grouping is needed to 
address this issue. For example, the vip_group cluster needs to create three 
Workload Groups, each of which can obtain equal resources.
+
+![group_wg_two_group](/images/workload-management/group_wg_two_group.png)
+
+The user has created three workload groups, named vip_wg_1, vip_wg_2, and 
vip_wg_3, and specified the tag for the workload groups as vip_wg. This means 
that these three workload groups are categorized into one group, and their 
combined memory resource allocation cannot exceed 100%.
+
+At the same time, the tag.workload_group attribute for BE3 is set to vip_wg, 
meaning that only Workload Groups with the tag attribute set to vip_wg will 
take effect on BE3.
+
+BE1 and BE2 have their tag.workload_group attribute set to default_wg, and the 
Workload Groups normal and A are also assigned the tag default_wg, so normal 
and A will only take effect on BE1 and BE2.
+
+It can be simply understood that BE1 and BE2 form one sub-cluster, which has 
two Workload Groups: normal and A; while BE3 forms another sub-cluster, which 
has three Workload Groups: vip_wg_1, vip_wg_2, and vip_wg_3.
+
+:::tip
+NOTE:
+
+It can be noted that the BE has two attributes: tag.location and 
tag.workload_group, which are not directly related.
+
+The tag.location is used to specify which data replica group the BE belongs 
to. The data replicas also have a location attribute, and the replicas are 
distributed to BEs with the same location attribute, thereby achieving physical 
resource isolation.
+
+The tag.workload_group is used to specify which Workload Group the BE belongs 
to. Workload Groups also have a tag attribute to indicate which group they 
belong to, and Workload Groups will only take effect on BEs with the specified 
grouping.
+
+In Doris's integrated storage-compute mode, data replicas and computation are typically bound together, so it is also recommended that a BE's tag.location and tag.workload_group be set to the same value.
+:::
+
+
+The current matching rules between a Workload Group's tag and a BE's tag.workload_group are as follows:
+1. When the Workload Group tag is empty, this Workload Group can be sent to 
all BEs, regardless of whether the BE has specified a tag.
+2. When the Workload Group tag is not empty, the Workload Group will only be 
sent to BEs with the same tag.
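Under these rules, and using this page's example BEs, the routing can be sketched as follows (a hypothetical illustration, not Doris code; the names come from the figures above):

```python
# Hypothetical sketch of the two matching rules (not Doris code).
def backends_for(wg_tag, be_tags):
    """be_tags maps a BE name to its tag.workload_group value ("" if unset)."""
    if wg_tag == "":
        # Rule 1: an untagged Workload Group is sent to every BE.
        return sorted(be_tags)
    # Rule 2: a tagged Workload Group is sent only to BEs with the same tag.
    return sorted(be for be, t in be_tags.items() if t == wg_tag)

be_tags = {"BE1": "default_wg", "BE2": "default_wg", "BE3": "vip_wg"}
print(backends_for("vip_wg", be_tags))      # ['BE3']
print(backends_for("default_wg", be_tags))  # ['BE1', 'BE2']
```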
+
+
diff --git 
a/versioned_docs/version-3.0/admin-manual/resource-admin/workload-group.md 
b/versioned_docs/version-3.0/admin-manual/resource-admin/workload-group.md
index d9600cc61d3..bb08c268081 100644
--- a/versioned_docs/version-3.0/admin-manual/resource-admin/workload-group.md
+++ b/versioned_docs/version-3.0/admin-manual/resource-admin/workload-group.md
@@ -90,14 +90,17 @@ Example:
 create workload group tag_wg properties('tag'='cn1');
 ```
 2. Modify the tag of a BE in the cluster to cn1. At this point, the tag_wg 
Workload Group will only be sent to this BE and any BE with no tag. The 
tag.workload_group attribute can specify multiple values, separated by commas.
+   It is important to note that the alter interface currently does not support 
incremental updates. Each time the BE attributes are modified, the entire set 
of attributes needs to be provided. Therefore, in the statements below, the 
tag.location attribute is added, with 'default' as the system default value. In 
practice, the existing attributes of the BE should be specified accordingly.
 ```
-alter system modify backend "localhost:9050" set ("tag.workload_group" = "cn1");
+alter system modify backend "localhost:9050" set ("tag.workload_group" = "cn1", "tag.location"="default");
 ```
 
 Workload Group and BE Matching Rules:
 If the Workload Group's tag is empty, the Workload Group can be sent to all 
BEs, regardless of whether the BE has a tag or not.
 If the Workload Group's tag is not empty, the Workload Group will only be sent 
to BEs with the same tag.
 
+You can refer to the recommended usage in [Grouping Workload Groups](./group-workload-groups.md).
+
 ## Configure cgroup
 
 Doris 2.0 uses Doris's own scheduling to limit CPU resources, but since version 2.1, Doris defaults to CGroup v1 for CPU limits. Therefore, to limit CPU resources in version 2.1, CGroup must be installed on the node where the BE runs.
diff --git a/versioned_sidebars/version-2.1-sidebars.json 
b/versioned_sidebars/version-2.1-sidebars.json
index 30d4d1c210a..1be1bc49fd2 100644
--- a/versioned_sidebars/version-2.1-sidebars.json
+++ b/versioned_sidebars/version-2.1-sidebars.json
@@ -339,6 +339,7 @@
                     "label": "Managing Resource",
                     "items": [
                         "admin-manual/resource-admin/workload-group",
+                        "admin-manual/resource-admin/group-workload-groups",
                         "admin-manual/resource-admin/workload-policy",
                         "admin-manual/resource-admin/workload-analysis",
                         "admin-manual/resource-admin/multi-tenant",
diff --git a/versioned_sidebars/version-3.0-sidebars.json 
b/versioned_sidebars/version-3.0-sidebars.json
index 8042c850e1a..5b6de9be6d6 100644
--- a/versioned_sidebars/version-3.0-sidebars.json
+++ b/versioned_sidebars/version-3.0-sidebars.json
@@ -390,6 +390,7 @@
                     "label": "Managing Resource",
                     "items": [
                         "admin-manual/resource-admin/workload-group",
+                        "admin-manual/resource-admin/group-workload-groups",
                         "admin-manual/resource-admin/workload-policy",
                         "admin-manual/resource-admin/workload-analysis",
                         "admin-manual/resource-admin/multi-tenant",

