This is an automated email from the ASF dual-hosted git repository.

critas pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/iotdb-docs.git


The following commit(s) were added to refs/heads/main by this push:
     new 84764f7d update kubernetes deployment (#861)
84764f7d is described below

commit 84764f7dad150cda8d6440cc44e853d9305a7535
Author: leto-b <[email protected]>
AuthorDate: Thu Dec 11 20:00:03 2025 +0800

    update kubernetes deployment (#861)
    
    * update kubernetes deployment
    
    * add github link
---
 .../Kubernetes_apache.md                           | 169 +++------------------
 .../Kubernetes_timecho.md                          |   9 --
 .../Kubernetes_apache.md                           | 169 +++------------------
 .../Kubernetes_timecho.md                          |   9 --
 .../Kubernetes_apache.md                           | 169 +++------------------
 .../Kubernetes_timecho.md                          |   9 --
 .../Kubernetes_apache.md                           | 169 +++------------------
 .../Kubernetes_timecho.md                          |   9 --
 .../Kubernetes_apache.md                           | 166 +++-----------------
 .../Kubernetes_timecho.md                          |  10 +-
 .../Kubernetes_apache.md                           | 166 +++-----------------
 .../Kubernetes_timecho.md                          |  10 +-
 .../Kubernetes_apache.md                           | 166 +++-----------------
 .../Kubernetes_timecho.md                          |  10 +-
 .../Kubernetes_apache.md                           | 166 +++-----------------
 .../Kubernetes_timecho.md                          |  10 +-
 16 files changed, 148 insertions(+), 1268 deletions(-)
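The hunks below drop the private-registry workflow and keep only the proxy cleanup before cloning the public Helm chart. A hedged sketch of that cleanup (the proxy URL is hypothetical, and the lowercase variants are cleared as an extra precaution beyond the docs' single `unset HTTPS_PROXY`):

```shell
# Simulate a configured proxy, then clear it the way the docs suggest before
# `git clone` (hypothetical proxy URL; lowercase variants cleared as a precaution).
export HTTPS_PROXY="http://proxy.example:8080"
unset HTTPS_PROXY HTTP_PROXY https_proxy http_proxy
env | grep -iE '^https?_proxy=' || echo "no proxy configured"
```

If `git clone https://github.com/apache/iotdb-extras.git` still fails with a gnutls_handshake error after this, the proxy may also be set at the git level; `git config --global --get http.proxy` will show it.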

diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md
index fca46c19..3cb32019 100644
--- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md
+++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -123,12 +123,11 @@ For installation steps, please refer to the[Helm Official Website.](https://helm
 
 ### 5.1 Clone IoTDB Kubernetes Deployment Code
 
-Please contact timechodb staff to obtain the IoTDB Helm Chart. If you encounter proxy issues, disable the proxy settings:
-
+Clone the IoTDB Helm chart: [Source Code](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 If encountering proxy issues, cancel proxy settings:
 
-> The git clone error is as follows, indicating that the proxy has been configured and needs to be turned off fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
+> If git clone fails with the following error, a proxy has been configured and needs to be turned off: fatal: unable to access 'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +144,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   # Name after installation
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    # Repository and version used
+  tag: latest    # Repository and version used
 
 storage:
   # Storage class name, if using local static storage, do not configure; if using dynamic storage, this must be set
@@ -184,85 +183,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. Configure Private Repository Information or Pre-Pull Images
-
-Configure private repository information on k8s as a prerequisite for the next helm install step.
-
-Option one is to pull the available iotdb images during helm install, while option two is to import the available iotdb images into containerd in advance.
-
-### 6.1 [Option 1] Pull Image from Private Repository
-
-#### 6.1.1 Create a Secret to Allow k8s to Access the IoTDB Helm Private Repository
-
-Replace xxxxxx with the IoTDB private repository account, password, and email.
-
-
-
-```Bash
-# Note the single quotes
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-  
-# View the secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# View and output as YAML
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# View and decrypt
-kubectl get secret timecho-nexus --output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 Load the Secret as a Patch to the Namespace iotdb-ns
-
-```Bash
-# Add a patch to include login information for nexus in this namespace
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": [{"name": "timecho-nexus"}]}'
-
-# View the information in this namespace
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 [Option 2] Import Image
+## 6. Install IoTDB
 
-This step is for scenarios where the customer cannot connect to the private repository and requires assistance from company implementation staff.
-
-#### 6.2.1  Pull and Export the Image:
-
-```Bash
-ctr images pull --user xxxxxxxx nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 View and Export the Image:
-
-```Bash
-# View
-ctr images ls 
-
-# Export
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 Import into the k8s Namespace:
-
-> Note that k8s.io is the namespace for ctr in the example environment; importing to other namespaces will not work.
-
-```Bash
-# Import into the k8s namespace
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar 
-```
-
-#### 6.2.4 View the Image:
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. Install IoTDB
-
-### 7.1  Install IoTDB
+### 6.1  Install IoTDB
 
 ```Bash
 # Enter the directory
@@ -272,14 +195,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 View Helm Installation List
+### 6.2 View Helm Installation List
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 View Pods
+### 6.3 View Pods
 
 ```Bash
 # View IoTDB pods
@@ -288,7 +211,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 After executing the command, if the output shows 6 Pods with confignode and datanode labels (3 each), it indicates a successful installation. Note that not all Pods may be in the Running state initially; inactive datanode Pods may keep restarting but will normalize after activation.
 
-### 7.4 Troubleshooting
+### 6.4 Troubleshooting
 
 ```Bash
 # View k8s creation logs
@@ -303,65 +226,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. Activate IoTDB
-
-### 8.1 Option 1: Activate Directly in the Pod (Quickest)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# Obtain the machine code and proceed with activation
-```
-
-### 8.2 Option 2: Activate Inside the ConfigNode Container
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# Obtain the machine code and proceed with activation
-# Exit the container
-```
-
-### Option 3: Manual Activation
-
-1. View ConfigNode details to determine the node:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# Example output:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. View PVC and find the corresponding Volume for ConfigNode to determine the path:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-# Example output:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO            local-storage   <unset>                 8h
-
-# To view multiple ConfigNodes, use the following:
-for i in {0..2}; do echo confignode-$i; kubectl describe pod confignode-${i} -n iotdb-ns | grep -e "Node:" -e "Path:"; done
-```
-
-3. View the Detailed Information of the Corresponding Volume to Determine the Physical Directory Location:
-
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# Example output:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. Locate the system-info file in the corresponding directory on the corresponding node, use this system-info as the machine code to generate an activation code, and create a new file named license in the same directory, writing the activation code into this file.
-
-## 9.  Verify IoTDB
+## 7.  Verify IoTDB
 
-### 9.1 Check the Status of Pods within the Namespace
+### 7.1 Check the Status of Pods within the Namespace
 
 View the IP, status, and other information of the pods in the iotdb-ns namespace to ensure they are all running normally.
 
@@ -378,7 +245,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88    <none>           <none>
 ```
 
-### 9.2 Check the Port Mapping within the Namespace
+### 7.2 Check the Port Mapping within the Namespace
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -390,7 +257,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP   7d8h
 ```
 
-### 9.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
+### 7.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
 
 Use the port of jdbc-balancer and the IP of any k8s node.
 
@@ -402,9 +269,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
 
-## 10. Scaling
+## 8. Scaling
 
-### 10.1  Add New PV
+### 8.1  Add New PV
 
 Add a new PV; scaling is only possible with available PVs.
 
@@ -415,7 +282,7 @@ Add a new PV; scaling is only possible with available PVs.
 **Reason**: The static storage hostPath mode is configured, and the script modifies the `iotdb-system.properties` file to set `dn_data_dirs` to `/iotdb6/iotdb_data,/iotdb7/iotdb_data`. However, the default storage path `/iotdb/data` is not mounted, leading to data loss upon restart.
 **Solution**: Mount the `/iotdb/data` directory as well, and ensure this setting is applied to both ConfigNode and DataNode to maintain data integrity and cluster stability.
 
-### 10.2 Scale ConfigNode
+### 8.2 Scale ConfigNode
 
 Example: Scale from 3 ConfigNodes to 4 ConfigNodes
 
@@ -428,7 +295,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 Scale DataNode
+### 8.3 Scale DataNode
 
 Example: Scale from 3 DataNodes to 4 DataNodes
 
@@ -438,7 +305,7 @@ Modify the values.yaml file in iotdb-cluster-k8s/helm to change the number of Da
 helm upgrade iotdb . -n iotdb-ns
 ```
 
-### 10.4 Verify IoTDB Status
+### 8.4 Verify IoTDB Status
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
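The values.yaml hunk above switches the chart from the private Nexus image to the public `apache/iotdb` image. A quick sanity check against an excerpt of the edited section (the excerpt below is reconstructed from the + lines; the `/tmp` path is arbitrary):

```shell
# Reconstruct the image section the patch writes into values.yaml and
# confirm the chart now points at the public apache/iotdb image.
cat > /tmp/iotdb-values-excerpt.yaml <<'EOF'
image:
  repository: apache/iotdb
  pullPolicy: IfNotPresent
  tag: latest
EOF
grep -E 'repository:|tag:' /tmp/iotdb-values-excerpt.yaml
```

In production it is usually safer to pin `tag` to a released version rather than `latest`, so image upgrades stay deliberate.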
diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md
index f16d4727..14b51ab8 100644
--- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -125,15 +125,6 @@ For installation steps, please refer to the[Helm Official Website.](https://helm
 
 Please contact timechodb staff to obtain the IoTDB Helm Chart. If you encounter proxy issues, disable the proxy settings:
 
-
-If encountering proxy issues, cancel proxy settings:
-
-> The git clone error is as follows, indicating that the proxy has been configured and needs to be turned off fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 Modify YAML Files
 
 > Ensure that the version used is supported (>=1.3.3.2):
diff --git a/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md b/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md
index fca46c19..3cb32019 100644
--- a/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md
+++ b/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -123,12 +123,11 @@ For installation steps, please refer to the[Helm Official Website.](https://helm
 
 ### 5.1 Clone IoTDB Kubernetes Deployment Code
 
-Please contact timechodb staff to obtain the IoTDB Helm Chart. If you encounter proxy issues, disable the proxy settings:
-
+Clone the IoTDB Helm chart: [Source Code](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 If encountering proxy issues, cancel proxy settings:
 
-> The git clone error is as follows, indicating that the proxy has been configured and needs to be turned off fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
+> If git clone fails with the following error, a proxy has been configured and needs to be turned off: fatal: unable to access 'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +144,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   # Name after installation
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    # Repository and version used
+  tag: latest    # Repository and version used
 
 storage:
   # Storage class name, if using local static storage, do not configure; if using dynamic storage, this must be set
@@ -184,85 +183,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. Configure Private Repository Information or Pre-Pull Images
-
-Configure private repository information on k8s as a prerequisite for the next helm install step.
-
-Option one is to pull the available iotdb images during helm install, while option two is to import the available iotdb images into containerd in advance.
-
-### 6.1 [Option 1] Pull Image from Private Repository
-
-#### 6.1.1 Create a Secret to Allow k8s to Access the IoTDB Helm Private Repository
-
-Replace xxxxxx with the IoTDB private repository account, password, and email.
-
-
-
-```Bash
-# Note the single quotes
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-  
-# View the secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# View and output as YAML
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# View and decrypt
-kubectl get secret timecho-nexus --output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 Load the Secret as a Patch to the Namespace iotdb-ns
-
-```Bash
-# Add a patch to include login information for nexus in this namespace
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": [{"name": "timecho-nexus"}]}'
-
-# View the information in this namespace
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 [Option 2] Import Image
+## 6. Install IoTDB
 
-This step is for scenarios where the customer cannot connect to the private repository and requires assistance from company implementation staff.
-
-#### 6.2.1  Pull and Export the Image:
-
-```Bash
-ctr images pull --user xxxxxxxx nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 View and Export the Image:
-
-```Bash
-# View
-ctr images ls 
-
-# Export
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 Import into the k8s Namespace:
-
-> Note that k8s.io is the namespace for ctr in the example environment; importing to other namespaces will not work.
-
-```Bash
-# Import into the k8s namespace
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar 
-```
-
-#### 6.2.4 View the Image:
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. Install IoTDB
-
-### 7.1  Install IoTDB
+### 6.1  Install IoTDB
 
 ```Bash
 # Enter the directory
@@ -272,14 +195,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 View Helm Installation List
+### 6.2 View Helm Installation List
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 View Pods
+### 6.3 View Pods
 
 ```Bash
 # View IoTDB pods
@@ -288,7 +211,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 After executing the command, if the output shows 6 Pods with confignode and datanode labels (3 each), it indicates a successful installation. Note that not all Pods may be in the Running state initially; inactive datanode Pods may keep restarting but will normalize after activation.
 
-### 7.4 Troubleshooting
+### 6.4 Troubleshooting
 
 ```Bash
 # View k8s creation logs
@@ -303,65 +226,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. Activate IoTDB
-
-### 8.1 Option 1: Activate Directly in the Pod (Quickest)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# Obtain the machine code and proceed with activation
-```
-
-### 8.2 Option 2: Activate Inside the ConfigNode Container
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# Obtain the machine code and proceed with activation
-# Exit the container
-```
-
-### Option 3: Manual Activation
-
-1. View ConfigNode details to determine the node:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# Example output:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. View PVC and find the corresponding Volume for ConfigNode to determine the path:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-# Example output:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO            local-storage   <unset>                 8h
-
-# To view multiple ConfigNodes, use the following:
-for i in {0..2}; do echo confignode-$i; kubectl describe pod confignode-${i} -n iotdb-ns | grep -e "Node:" -e "Path:"; done
-```
-
-3. View the Detailed Information of the Corresponding Volume to Determine the Physical Directory Location:
-
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# Example output:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. Locate the system-info file in the corresponding directory on the corresponding node, use this system-info as the machine code to generate an activation code, and create a new file named license in the same directory, writing the activation code into this file.
-
-## 9.  Verify IoTDB
+## 7.  Verify IoTDB
 
-### 9.1 Check the Status of Pods within the Namespace
+### 7.1 Check the Status of Pods within the Namespace
 
 View the IP, status, and other information of the pods in the iotdb-ns namespace to ensure they are all running normally.
 
@@ -378,7 +245,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88    <none>           <none>
 ```
 
-### 9.2 Check the Port Mapping within the Namespace
+### 7.2 Check the Port Mapping within the Namespace
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -390,7 +257,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP   7d8h
 ```
 
-### 9.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
+### 7.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
 
 Use the port of jdbc-balancer and the IP of any k8s node.
 
@@ -402,9 +269,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
 
-## 10. Scaling
+## 8. Scaling
 
-### 10.1  Add New PV
+### 8.1  Add New PV
 
 Add a new PV; scaling is only possible with available PVs.
 
@@ -415,7 +282,7 @@ Add a new PV; scaling is only possible with available PVs.
 **Reason**: The static storage hostPath mode is configured, and the script modifies the `iotdb-system.properties` file to set `dn_data_dirs` to `/iotdb6/iotdb_data,/iotdb7/iotdb_data`. However, the default storage path `/iotdb/data` is not mounted, leading to data loss upon restart.
 **Solution**: Mount the `/iotdb/data` directory as well, and ensure this setting is applied to both ConfigNode and DataNode to maintain data integrity and cluster stability.
 
-### 10.2 Scale ConfigNode
+### 8.2 Scale ConfigNode
 
 Example: Scale from 3 ConfigNodes to 4 ConfigNodes
 
@@ -428,7 +295,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 Scale DataNode
+### 8.3 Scale DataNode
 
 Example: Scale from 3 DataNodes to 4 DataNodes
 
@@ -438,7 +305,7 @@ Modify the values.yaml file in iotdb-cluster-k8s/helm to change the number of Da
 helm upgrade iotdb . -n iotdb-ns
 ```
 
-### 10.4 Verify IoTDB Status
+### 8.4 Verify IoTDB Status
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
diff --git a/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md b/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md
index f16d4727..14b51ab8 100644
--- a/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ b/src/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -125,15 +125,6 @@ For installation steps, please refer to the[Helm Official Website.](https://helm
 
 Please contact timechodb staff to obtain the IoTDB Helm Chart. If you encounter proxy issues, disable the proxy settings:
 
-
-If encountering proxy issues, cancel proxy settings:
-
-> The git clone error is as follows, indicating that the proxy has been configured and needs to be turned off fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 Modify YAML Files
 
 > Ensure that the version used is supported (>=1.3.3.2):
diff --git a/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md b/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md
index fca46c19..3cb32019 100644
--- a/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md
+++ b/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -123,12 +123,11 @@ For installation steps, please refer to the[Helm Official Website.](https://helm
 
 ### 5.1 Clone IoTDB Kubernetes Deployment Code
 
-Please contact timechodb staff to obtain the IoTDB Helm Chart. If you encounter proxy issues, disable the proxy settings:
-
+Clone the IoTDB Helm chart: [Source Code](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 If encountering proxy issues, cancel proxy settings:
 
-> The git clone error is as follows, indicating that the proxy has been configured and needs to be turned off fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
+> If git clone fails with the following error, a proxy has been configured and needs to be turned off: fatal: unable to access 'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +144,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   # Name after installation
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    # Repository and version used
+  tag: latest    # Repository and version used
 
 storage:
   # Storage class name, if using local static storage, do not configure; if using dynamic storage, this must be set
@@ -184,85 +183,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. Configure Private Repository Information or Pre-Pull Images
-
-Configure private repository information on k8s as a prerequisite for the next helm install step.
-
-Option one is to pull the available iotdb images during helm install, while option two is to import the available iotdb images into containerd in advance.
-
-### 6.1 [Option 1] Pull Image from Private Repository
-
-#### 6.1.1 Create a Secret to Allow k8s to Access the IoTDB Helm Private Repository
-
-Replace xxxxxx with the IoTDB private repository account, password, and email.
-
-
-
-```Bash
-# Note the single quotes
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-  
-# View the secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# View and output as YAML
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# View and decrypt
-kubectl get secret timecho-nexus --output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 Load the Secret as a Patch to the Namespace iotdb-ns
-
-```Bash
-# Add a patch to include login information for nexus in this namespace
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": [{"name": "timecho-nexus"}]}'
-
-# View the information in this namespace
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 [Option 2] Import Image
+## 6. Install IoTDB
 
-This step is for scenarios where the customer cannot connect to the private repository and requires assistance from company implementation staff.
-
-#### 6.2.1  Pull and Export the Image:
-
-```Bash
-ctr images pull --user xxxxxxxx nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 View and Export the Image:
-
-```Bash
-# View
-ctr images ls 
-
-# Export
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 Import into the k8s Namespace:
-
-> Note that k8s.io is the namespace for ctr in the example environment; importing to other namespaces will not work.
-
-```Bash
-# Import into the k8s namespace
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar 
-```
-
-#### 6.2.4 View the Image:
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. Install IoTDB
-
-### 7.1  Install IoTDB
+### 6.1  Install IoTDB
 
 ```Bash
 # Enter the directory
@@ -272,14 +195,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 View Helm Installation List
+### 6.2 View Helm Installation List
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 View Pods
+### 6.3 View Pods
 
 ```Bash
 # View IoTDB pods
@@ -288,7 +211,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 After executing the command, if the output shows 6 Pods with confignode and datanode labels (3 each), it indicates a successful installation. Note that not all Pods may be in the Running state initially; inactive datanode Pods may keep restarting but will normalize after activation.
 
-### 7.4 Troubleshooting
+### 6.4 Troubleshooting
 
 ```Bash
 # View k8s creation logs
@@ -303,65 +226,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. Activate IoTDB
-
-### 8.1 Option 1: Activate Directly in the Pod (Quickest)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# Obtain the machine code and proceed with activation
-```
-
-### 8.2 Option 2: Activate Inside the ConfigNode Container
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# Obtain the machine code and proceed with activation
-# Exit the container
-```
-
-### Option 3: Manual Activation
-
-1. View ConfigNode details to determine the node:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# Example output:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. View PVC and find the corresponding Volume for ConfigNode to determine the path:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-# Example output:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO            local-storage   <unset>                 8h
-
-# To view multiple ConfigNodes, use the following:
-for i in {0..2}; do echo confignode-$i; kubectl describe pod confignode-${i} -n iotdb-ns | grep -e "Node:" -e "Path:"; done
-```
-
-3. View the Detailed Information of the Corresponding Volume to Determine the Physical Directory Location:
-
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# Example output:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. Locate the system-info file in the corresponding directory on the corresponding node, use this system-info as the machine code to generate an activation code, and create a new file named license in the same directory, writing the activation code into this file.
-
-## 9.  Verify IoTDB
+## 7.  Verify IoTDB
 
-### 9.1 Check the Status of Pods within the Namespace
+### 7.1 Check the Status of Pods within the Namespace
 
 View the IP, status, and other information of the pods in the iotdb-ns namespace to ensure they are all running normally.
 
@@ -378,7 +245,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88    <none>           <none>
 ```
 
-### 9.2 Check the Port Mapping within the Namespace
+### 7.2 Check the Port Mapping within the Namespace
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -390,7 +257,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP   7d8h
 ```
 
-### 9.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
+### 7.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
 
 Use the port of jdbc-balancer and the IP of any k8s node.
 
@@ -402,9 +269,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
 
-## 10. Scaling
+## 8. Scaling
 
-### 10.1  Add New PV
+### 8.1  Add New PV
 
 Add a new PV; scaling is only possible with available PVs.
 
@@ -415,7 +282,7 @@ Add a new PV; scaling is only possible with available PVs.
 **Reason**: The static storage hostPath mode is configured, and the script modifies the `iotdb-system.properties` file to set `dn_data_dirs` to `/iotdb6/iotdb_data,/iotdb7/iotdb_data`. However, the default storage path `/iotdb/data` is not mounted, leading to data loss upon restart.
 **Solution**: Mount the `/iotdb/data` directory as well, and ensure this setting is applied to both ConfigNode and DataNode to maintain data integrity and cluster stability.
 
-### 10.2 Scale ConfigNode
+### 8.2 Scale ConfigNode
 
 Example: Scale from 3 ConfigNodes to 4 ConfigNodes
 
@@ -428,7 +295,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 Scale DataNode
+### 8.3 Scale DataNode
 
 Example: Scale from 3 DataNodes to 4 DataNodes
 
@@ -438,7 +305,7 @@ Modify the values.yaml file in iotdb-cluster-k8s/helm to 
change the number of Da
 helm upgrade iotdb . -n iotdb-ns
 ```
 
-### 10.4 Verify IoTDB Status
+### 8.4 Verify IoTDB Status
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
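[Editor's note] The pod-status check in section 7.1 of the updated doc ("ensure they are all running normally") can be scripted. A minimal sketch, illustrative only: it parses a captured `kubectl get pods -n iotdb-ns` listing (pod names taken from the example output above) so it runs without a cluster; on a live cluster you would capture the real command output instead of the heredoc.

```shell
# Minimal sketch: verify every IoTDB pod reports Running.
# Illustrative only -- the heredoc stands in for live output; in production use:
#   pods_output=$(kubectl get pods -n iotdb-ns)
pods_output=$(cat <<'EOF'
NAME           READY   STATUS    RESTARTS   AGE
confignode-0   1/1     Running   0          75m
confignode-1   1/1     Running   0          75m
confignode-2   1/1     Running   0          75m
datanode-0     1/1     Running   0          75m
datanode-1     1/1     Running   0          75m
datanode-2     1/1     Running   10         75m
EOF
)

# Count pods whose STATUS column (field 3) is not Running, skipping the header.
not_running=$(printf '%s\n' "$pods_output" | awk 'NR > 1 && $3 != "Running"' | wc -l)
if [ "$not_running" -eq 0 ]; then
  echo "all pods Running"
else
  echo "$not_running pod(s) not Running" >&2
fi
```

The `awk` column positions assume the default `kubectl get pods` table; with `-o wide` the STATUS column is still field 3, so the check holds for the command shown in the doc.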
diff --git 
a/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md 
b/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md
index f16d4727..14b51ab8 100644
--- a/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ b/src/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -125,15 +125,6 @@ For installation steps, please refer to the[Helm Official 
Website.](https://helm
 
 Please contact timechodb staff to obtain the IoTDB Helm Chart. If you 
encounter proxy issues, disable the proxy settings:
 
-
-If encountering proxy issues, cancel proxy settings:
-
-> The git clone error is as follows, indicating that the proxy has been 
configured and needs to be turned off fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 Modify YAML Files
 
 > Ensure that the version used is supported (>=1.3.3.2):
diff --git 
a/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md 
b/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md
index fca46c19..3cb32019 100644
--- a/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md
+++ b/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -123,12 +123,11 @@ For installation steps, please refer to the[Helm Official 
Website.](https://helm
 
 ### 5.1 Clone IoTDB Kubernetes Deployment Code
 
-Please contact timechodb staff to obtain the IoTDB Helm Chart. If you 
encounter proxy issues, disable the proxy settings:
-
+Clone the Helm chart: [Source 
Code](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 If you encounter proxy issues, disable the proxy settings:
 
-> The git clone error is as follows, indicating that the proxy has been 
configured and needs to be turned off fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
+> If `git clone` fails with the following error, a proxy is configured and 
must be disabled: fatal: unable to access 
'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The 
TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +144,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   # Name after installation
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    # Repository and version used
+  tag: latest    # Repository and version used
 
 storage:
   # Storage class name, if using local static storage, do not configure; if 
using dynamic storage, this must be set
@@ -184,85 +183,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. Configure Private Repository Information or Pre-Pull Images
-
-Configure private repository information on k8s as a prerequisite for the next 
helm install step.
-
-Option one is to pull the available iotdb images during helm insta, while 
option two is to import the available iotdb images into containerd in advance.
-
-### 6.1 [Option 1] Pull Image from Private Repository
-
-#### 6.1.1 Create a Secret to Allow k8s to Access the IoTDB Helm Private 
Repository
-
-Replace xxxxxx with the IoTDB private repository account, password, and email.
-
-
-
-```Bash
-# Note the single quotes
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-  
-# View the secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# View and output as YAML
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# View and decrypt
-kubectl get secret timecho-nexus 
--output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 Load the Secret as a Patch to the Namespace iotdb-ns
-
-```Bash
-# Add a patch to include login information for nexus in this namespace
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": 
[{"name": "timecho-nexus"}]}'
-
-# View the information in this namespace
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 [Option 2] Import Image
+## 6. Install IoTDB
 
-This step is for scenarios where the customer cannot connect to the private 
repository and requires assistance from company implementation staff.
-
-#### 6.2.1  Pull and Export the Image:
-
-```Bash
-ctr images pull --user xxxxxxxx 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 View and Export the Image:
-
-```Bash
-# View
-ctr images ls 
-
-# Export
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 Import into the k8s Namespace:
-
-> Note that k8s.io is the namespace for ctr in the example environment; 
importing to other namespaces will not work.
-
-```Bash
-# Import into the k8s namespace
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar 
-```
-
-#### 6.2.4 View the Image:
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. Install IoTDB
-
-### 7.1  Install IoTDB
+### 6.1  Install IoTDB
 
 ```Bash
 # Enter the directory
@@ -272,14 +195,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 View Helm Installation List
+### 6.2 View Helm Installation List
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 View Pods
+### 6.3 View Pods
 
 ```Bash
 # View IoTDB pods
@@ -288,7 +211,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 After executing the command, if the output shows 6 Pods with confignode and 
datanode labels (3 each), it indicates a successful installation. Note that not 
all Pods may be in the Running state initially; inactive datanode Pods may keep 
restarting but will normalize after activation.
 
-### 7.4 Troubleshooting
+### 6.4 Troubleshooting
 
 ```Bash
 # View k8s creation logs
@@ -303,65 +226,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. Activate IoTDB
-
-### 8.1 Option 1: Activate Directly in the Pod (Quickest)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# Obtain the machine code and proceed with activation
-```
-
-### 8.2 Option 2: Activate Inside the ConfigNode Container
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# Obtain the machine code and proceed with activation
-# Exit the container
-```
-
-### Option 3: Manual Activation
-
-1. View ConfigNode details to determine the node:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# Example output:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. View PVC and find the corresponding Volume for ConfigNode to determine the 
path:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-# Example output:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO          
  local-storage   <unset>                 8h
-
-# To view multiple ConfigNodes, use the following:
-for i in {0..2}; do echo confignode-$i; kubectl describe pod confignode-${i} 
-n iotdb-ns | grep -e "Node:" -e "Path:"
-```
-
-3. View the Detailed Information of the Corresponding Volume to Determine the 
Physical Directory Location:
-
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# Example output:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. Locate the system-info file in the corresponding directory on the 
corresponding node, use this system-info as the machine code to generate an 
activation code, and create a new file named license in the same directory, 
writing the activation code into this file.
-
-## 9.  Verify IoTDB
+## 7.  Verify IoTDB
 
-### 9.1 Check the Status of Pods within the Namespace
+### 7.1 Check the Status of Pods within the Namespace
 
 View the IP, status, and other information of the pods in the iotdb-ns 
namespace to ensure they are all running normally.
 
@@ -378,7 +245,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88   
 <none>           <none>
 ```
 
-### 9.2 Check the Port Mapping within the Namespace
+### 7.2 Check the Port Mapping within the Namespace
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -390,7 +257,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP 
  7d8h
 ```
 
-### 9.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
+### 7.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
 
 Use the port of jdbc-balancer and the IP of any k8s node.
 
@@ -402,9 +269,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
 
-## 10. Scaling
+## 8. Scaling
 
-### 10.1  Add New PV
+### 8.1  Add New PV
 
 Add a new PV; scaling is only possible with available PVs.
 
@@ -415,7 +282,7 @@ Add a new PV; scaling is only possible with available PVs.
 **Reason**: The static storage hostPath mode is configured, and the script 
modifies the `iotdb-system.properties` file to set `dn_data_dirs` to 
`/iotdb6/iotdb_data,/iotdb7/iotdb_data`. However, the default storage path 
`/iotdb/data` is not mounted, leading to data loss upon restart.
 **Solution**: Mount the `/iotdb/data` directory as well, and ensure this 
setting is applied to both ConfigNode and DataNode to maintain data integrity 
and cluster stability.
 
-### 10.2 Scale ConfigNode
+### 8.2 Scale ConfigNode
 
 Example: Scale from 3 ConfigNodes to 4 ConfigNodes
 
@@ -428,7 +295,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 Scale DataNode
+### 8.3 Scale DataNode
 
 Example: Scale from 3 DataNodes to 4 DataNodes
 
@@ -438,7 +305,7 @@ Modify the values.yaml file in iotdb-cluster-k8s/helm to 
change the number of Da
 helm upgrade iotdb . -n iotdb-ns
 ```
 
-### 10.4 Verify IoTDB Status
+### 8.4 Verify IoTDB Status
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
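[Editor's note] Section 7.3 of the updated doc says to use the jdbc-balancer port with `start-cli.sh`; extracting that NodePort can be sketched as follows. Illustrative only: the listing is the sample `kubectl get svc -n iotdb-ns` output shown above, and the `servicePort:nodePort/protocol` layout of the PORT(S) column is the standard kubectl table format.

```shell
# Minimal sketch: pull the jdbc-balancer NodePort out of `kubectl get svc` output.
# Illustrative only -- a captured listing is used so this runs without a cluster;
# in production use: svc_output=$(kubectl get svc -n iotdb-ns)
svc_output='NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP   7d8h'

# PORT(S) is field 5, formatted "servicePort:nodePort/protocol"; take the middle part.
node_port=$(printf '%s\n' "$svc_output" | awk '$1 == "jdbc-balancer" { split($5, p, "[:/]"); print p[2] }')
echo "$node_port"

# Then connect from any server, using the IP of any k8s node:
#   start-cli.sh -h <k8s-node-ip> -p "$node_port"
```

On a live cluster, `kubectl get svc jdbc-balancer -n iotdb-ns -o jsonpath='{.spec.ports[0].nodePort}'` would return the same value without text parsing.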
diff --git 
a/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md 
b/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md
index f16d4727..14b51ab8 100644
--- a/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ b/src/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -125,15 +125,6 @@ For installation steps, please refer to the[Helm Official 
Website.](https://helm
 
 Please contact timechodb staff to obtain the IoTDB Helm Chart. If you 
encounter proxy issues, disable the proxy settings:
 
-
-If encountering proxy issues, cancel proxy settings:
-
-> The git clone error is as follows, indicating that the proxy has been 
configured and needs to be turned off fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 Modify YAML Files
 
 > Ensure that the version used is supported (>=1.3.3.2):
diff --git 
a/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md 
b/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md
index a21e6df2..de184c3e 100644
--- 
a/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md
+++ 
b/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -124,11 +124,11 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 ### 5.1 克隆 IoTDB Kubernetes 部署代码
 
-请联系天谋工作人员获取IoTDB的Helm Chart
+下载 Helm : [源码](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 如果遇到代理问题,取消代理设置:
 
-> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
+> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The 
TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +145,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   #软件安装后的名称
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    #软件所用的仓库和版本
+  tag: latest    #软件所用的仓库和版本
 
 storage:
 # 存储类名称,如果使用本地静态存储storageClassName 不用配置,如果使用动态存储必需设置此项
@@ -184,83 +184,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. 配置私库信息或预先使用ctr拉取镜像
+## 6. 安装 IoTDB
 
-在k8s上配置私有仓库的信息,为下一步helm install的前置步骤。
-
-方案一即在helm insta时拉取可用的iotdb镜像,方案二则是提前将可用的iotdb镜像导入到containerd里。
-
-### 6.1 【方案一】从私有仓库拉取镜像
-
-#### 6.1.1 创建secret 使k8s可访问iotdb-helm的私有仓库
-
-下文中“xxxxxx”表示IoTDB私有仓库的账号、密码、邮箱。
-
-```Bash
-# 注意 单引号
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-  
-# 查看secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# 查看并输出为yaml
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# 查看并解密
-kubectl get secret timecho-nexus 
--output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 将secret作为一个patch加载到命名空间iotdb-ns
-
-```Bash
-# 添加一个patch,使该命名空间增加登陆nexus的登陆信息
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": 
[{"name": "timecho-nexus"}]}'
-
-# 查看命名空间的该条信息
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 【方案二】导入镜像
-
-该步骤用于客户无法连接私库的场景,需要联系公司实施同事辅助准备。
-
-#### 6.2.1  拉取并导出镜像:
-
-```Bash
-ctr images pull --user xxxxxxxx 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 查看并导出镜像:
-
-```Bash
-# 查看
-ctr images ls 
-
-# 导出
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 导入到k8s的namespace下:
-
-> 注意,k8s.io为示例环境中k8s的ctr的命名空间,导入到其他命名空间是不行的
-
-```Bash
-# 导入到k8s的namespace下
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar 
-```
-
-#### 6.2.4 查看镜像
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. 安装 IoTDB
-
-### 7.1 安装 IoTDB
+### 6.1 安装 IoTDB
 
 ```Bash
 # 进入文件夹
@@ -270,14 +196,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 查看 Helm 安装列表
+### 6.2 查看 Helm 安装列表
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 查看 Pods
+### 6.3 查看 Pods
 
 ```Bash
 # 查看 iotdb的pods
@@ -286,7 +212,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 
执行命令后,输出了带有confignode和datanode标识的各3个Pods,,总共6个Pods,即表明安装成功;需要注意的是,并非所有Pods都处于Running状态,未激活的datanode可能会持续重启,但在激活后将恢复正常。
 
-### 7.4 发现故障的排除方式
+### 6.4 发现故障的排除方式
 
 ```Bash
 # 查看k8s的创建log
@@ -301,65 +227,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. 激活 IoTDB
-
-### 8.1 方案1:直接在 Pod 中激活(最快捷)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# 拿到机器码后进行激活
-```
-
-### 8.2 方案2:进入confignode的容器中激活
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# 拿到机器码后进行激活
-# 退出容器
-```
-
-### 8.3 方案3:手动激活
-
-1. 查看 ConfigNode 详细信息,确定所在节点:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# 结果示例:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. 查看 PVC 并找到 ConfigNode 对应的 Volume,确定所在路径:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-
-# 结果示例:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO          
  local-storage   <unset>                 8h
-
-# 如果要查看多个confignode,使用如下:
-for i in {0..2}; do echo confignode-$i;kubectl describe pod confignode-${i} -n 
iotdb-ns | grep -e "Node:" -e "Path:"; echo "----"; done
-```
-
-3. 查看对应 Volume 的详细信息,确定物理目录的位置:
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# 结果示例:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. 从对应节点的对应目录下找到 system-info 文件,使用该 system-info 作为机器码生成激活码,并在同级目录新建文件 
license,将激活码写入到该文件。
-
-## 9. 验证 IoTDB
+## 7. 验证 IoTDB
 
-### 9.1 查看命名空间内的 Pods 状态
+### 7.1 查看命名空间内的 Pods 状态
 
 查看iotdb-ns命名空间内的IP、状态等信息,确定全部运行正常
 
@@ -376,7 +246,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88   
 <none>           <none>
 ```
 
-### 9.2 查看命名空间内的端口映射情况
+### 7.2 查看命名空间内的端口映射情况
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -388,7 +258,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP 
  7d8h
 ```
 
-### 9.3 在任意服务器启动 CLI 脚本验证 IoTDB 集群状态
+### 7.3 在任意服务器启动 CLI 脚本验证 IoTDB 集群状态
 
 端口即jdbc-balancer的端口,服务器为k8s任意节点的IP
 
@@ -400,9 +270,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
 
-## 10. 扩容
+## 8. 扩容
 
-### 10.1 新增pv
+### 8.1 新增pv
 
 新增pv,必须有可用的pv才可以扩容。
 
@@ -414,7 +284,7 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 **解决方案**:是将 `/iotdb/data` 目录也进行外挂操作,且 ConfigNode 和 DataNode 
均需如此设置,以确保数据完整性和集群稳定性。
 
-### 10.2 扩容confignode
+### 8.2 扩容confignode
 
 示例:3 confignode 扩容为 4 confignode
 
@@ -427,7 +297,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 扩容datanode
+### 8.3 扩容datanode
 
 示例:3 datanode 扩容为 4 datanode
 
@@ -437,7 +307,7 @@ helm upgrade iotdb . -n iotdb-ns
 helm upgrade iotdb . -n iotdb-ns
 ```
 
-### 10.4 验证IoTDB状态
+### 8.4 验证IoTDB状态
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
diff --git 
a/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md 
b/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md
index 2a6847ff..7fbc7be8 100644
--- 
a/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ 
b/src/zh/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -126,14 +126,6 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 请联系天谋工作人员获取IoTDB的Helm Chart
 
-如果遇到代理问题,取消代理设置:
-
-> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 修改 YAML 文件
 
 > 确保使用的是支持的版本 >=1.3.3.2
@@ -188,7 +180,7 @@ confignode:
 
 在k8s上配置私有仓库的信息,为下一步helm install的前置步骤。
 
-方案一即在helm insta时拉取可用的iotdb镜像,方案二则是提前将可用的iotdb镜像导入到containerd里。
+方案一即在 helm install 时拉取可用的iotdb镜像,方案二则是提前将可用的iotdb镜像导入到containerd里。
 
 ### 6.1 【方案一】从私有仓库拉取镜像
 
diff --git 
a/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md 
b/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md
index a21e6df2..de184c3e 100644
--- a/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md
+++ b/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -124,11 +124,11 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 ### 5.1 克隆 IoTDB Kubernetes 部署代码
 
-请联系天谋工作人员获取IoTDB的Helm Chart
+下载 Helm : [源码](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 如果遇到代理问题,取消代理设置:
 
-> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
+> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The 
TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +145,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   #软件安装后的名称
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    #软件所用的仓库和版本
+  tag: latest    #软件所用的仓库和版本
 
 storage:
 # 存储类名称,如果使用本地静态存储storageClassName 不用配置,如果使用动态存储必需设置此项
@@ -184,83 +184,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. 配置私库信息或预先使用ctr拉取镜像
+## 6. 安装 IoTDB
 
-在k8s上配置私有仓库的信息,为下一步helm install的前置步骤。
-
-方案一即在helm insta时拉取可用的iotdb镜像,方案二则是提前将可用的iotdb镜像导入到containerd里。
-
-### 6.1 【方案一】从私有仓库拉取镜像
-
-#### 6.1.1 创建secret 使k8s可访问iotdb-helm的私有仓库
-
-下文中“xxxxxx”表示IoTDB私有仓库的账号、密码、邮箱。
-
-```Bash
-# 注意 单引号
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-  
-# 查看secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# 查看并输出为yaml
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# 查看并解密
-kubectl get secret timecho-nexus 
--output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 将secret作为一个patch加载到命名空间iotdb-ns
-
-```Bash
-# 添加一个patch,使该命名空间增加登陆nexus的登陆信息
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": 
[{"name": "timecho-nexus"}]}'
-
-# 查看命名空间的该条信息
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 【方案二】导入镜像
-
-该步骤用于客户无法连接私库的场景,需要联系公司实施同事辅助准备。
-
-#### 6.2.1  拉取并导出镜像:
-
-```Bash
-ctr images pull --user xxxxxxxx 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 查看并导出镜像:
-
-```Bash
-# 查看
-ctr images ls 
-
-# 导出
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 导入到k8s的namespace下:
-
-> 注意,k8s.io为示例环境中k8s的ctr的命名空间,导入到其他命名空间是不行的
-
-```Bash
-# 导入到k8s的namespace下
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar 
-```
-
-#### 6.2.4 查看镜像
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. 安装 IoTDB
-
-### 7.1 安装 IoTDB
+### 6.1 安装 IoTDB
 
 ```Bash
 # 进入文件夹
@@ -270,14 +196,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 查看 Helm 安装列表
+### 6.2 查看 Helm 安装列表
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 查看 Pods
+### 6.3 查看 Pods
 
 ```Bash
 # 查看 iotdb的pods
@@ -286,7 +212,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 
执行命令后,输出了带有confignode和datanode标识的各3个Pods,,总共6个Pods,即表明安装成功;需要注意的是,并非所有Pods都处于Running状态,未激活的datanode可能会持续重启,但在激活后将恢复正常。
 
-### 7.4 发现故障的排除方式
+### 6.4 发现故障的排除方式
 
 ```Bash
 # 查看k8s的创建log
@@ -301,65 +227,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. 激活 IoTDB
-
-### 8.1 方案1:直接在 Pod 中激活(最快捷)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# 拿到机器码后进行激活
-```
-
-### 8.2 方案2:进入confignode的容器中激活
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# 拿到机器码后进行激活
-# 退出容器
-```
-
-### 8.3 方案3:手动激活
-
-1. 查看 ConfigNode 详细信息,确定所在节点:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# 结果示例:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. 查看 PVC 并找到 ConfigNode 对应的 Volume,确定所在路径:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-
-# 结果示例:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO          
  local-storage   <unset>                 8h
-
-# 如果要查看多个confignode,使用如下:
-for i in {0..2}; do echo confignode-$i;kubectl describe pod confignode-${i} -n 
iotdb-ns | grep -e "Node:" -e "Path:"; echo "----"; done
-```
-
-3. 查看对应 Volume 的详细信息,确定物理目录的位置:
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# 结果示例:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. 从对应节点的对应目录下找到 system-info 文件,使用该 system-info 作为机器码生成激活码,并在同级目录新建文件 
license,将激活码写入到该文件。
-
-## 9. 验证 IoTDB
+## 7. 验证 IoTDB
 
-### 9.1 查看命名空间内的 Pods 状态
+### 7.1 查看命名空间内的 Pods 状态
 
 查看iotdb-ns命名空间内的IP、状态等信息,确定全部运行正常
 
@@ -376,7 +246,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88   
 <none>           <none>
 ```
 
-### 9.2 查看命名空间内的端口映射情况
+### 7.2 查看命名空间内的端口映射情况
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -388,7 +258,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP 
  7d8h
 ```
 
-### 9.3 在任意服务器启动 CLI 脚本验证 IoTDB 集群状态
+### 7.3 在任意服务器启动 CLI 脚本验证 IoTDB 集群状态
 
 端口即jdbc-balancer的端口,服务器为k8s任意节点的IP
 
@@ -400,9 +270,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
 
-## 10. 扩容
+## 8. 扩容
 
-### 10.1 新增pv
+### 8.1 新增pv
 
 新增pv,必须有可用的pv才可以扩容。
 
@@ -414,7 +284,7 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 **解决方案**:是将 `/iotdb/data` 目录也进行外挂操作,且 ConfigNode 和 DataNode 
均需如此设置,以确保数据完整性和集群稳定性。
 
-### 10.2 扩容confignode
+### 8.2 扩容confignode
 
 示例:3 confignode 扩容为 4 confignode
 
@@ -427,7 +297,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 扩容datanode
+### 8.3 扩容datanode
 
 示例:3 datanode 扩容为 4 datanode
 
@@ -437,7 +307,7 @@ helm upgrade iotdb . -n iotdb-ns
 helm upgrade iotdb . -n iotdb-ns
 ```
 
-### 10.4 验证IoTDB状态
+### 8.4 验证IoTDB状态
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
diff --git 
a/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md 
b/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md
index 2a6847ff..7fbc7be8 100644
--- a/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ b/src/zh/UserGuide/V1.3.x/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -126,14 +126,6 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 请联系天谋工作人员获取IoTDB的Helm Chart
 
-如果遇到代理问题,取消代理设置:
-
-> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 修改 YAML 文件
 
 > 确保使用的是支持的版本 >=1.3.3.2
@@ -188,7 +180,7 @@ confignode:
 
 在k8s上配置私有仓库的信息,为下一步helm install的前置步骤。
 
-方案一即在helm insta时拉取可用的iotdb镜像,方案二则是提前将可用的iotdb镜像导入到containerd里。
+方案一即在 helm install 时拉取可用的iotdb镜像,方案二则是提前将可用的iotdb镜像导入到containerd里。
 
 ### 6.1 【方案一】从私有仓库拉取镜像
 
diff --git 
a/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md 
b/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md
index a21e6df2..de184c3e 100644
--- a/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md
+++ b/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -124,11 +124,11 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 ### 5.1 克隆 IoTDB Kubernetes 部署代码
 
-请联系天谋工作人员获取IoTDB的Helm Chart
+下载 Helm : [源码](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 如果遇到代理问题,取消代理设置:
 
-> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() 
failed: The TLS connection was non-properly terminated.
+> git clone报错如下,说明是配置了代理,需要把代理关掉 fatal: unable to access 
'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The 
TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +145,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   #软件安装后的名称
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    #软件所用的仓库和版本
+  tag: latest    #软件所用的仓库和版本
 
 storage:
 # 存储类名称,如果使用本地静态存储storageClassName 不用配置,如果使用动态存储必需设置此项
@@ -184,83 +184,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. 配置私库信息或预先使用ctr拉取镜像
+## 6. 安装 IoTDB
 
-在k8s上配置私有仓库的信息,为下一步helm install的前置步骤。
-
-方案一即在helm insta时拉取可用的iotdb镜像,方案二则是提前将可用的iotdb镜像导入到containerd里。
-
-### 6.1 【方案一】从私有仓库拉取镜像
-
-#### 6.1.1 创建secret 使k8s可访问iotdb-helm的私有仓库
-
-下文中“xxxxxx”表示IoTDB私有仓库的账号、密码、邮箱。
-
-```Bash
-# 注意 单引号
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-  
-# 查看secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# 查看并输出为yaml
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# 查看并解密
-kubectl get secret timecho-nexus 
--output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 将secret作为一个patch加载到命名空间iotdb-ns
-
-```Bash
-# 添加一个patch,使该命名空间增加登陆nexus的登陆信息
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": 
[{"name": "timecho-nexus"}]}'
-
-# 查看命名空间的该条信息
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 【方案二】导入镜像
-
-该步骤用于客户无法连接私库的场景,需要联系公司实施同事辅助准备。
-
-#### 6.2.1  拉取并导出镜像:
-
-```Bash
-ctr images pull --user xxxxxxxx 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 查看并导出镜像:
-
-```Bash
-# 查看
-ctr images ls 
-
-# 导出
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar 
nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 导入到k8s的namespace下:
-
-> 注意,k8s.io为示例环境中k8s的ctr的命名空间,导入到其他命名空间是不行的
-
-```Bash
-# 导入到k8s的namespace下
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar 
-```
-
-#### 6.2.4 查看镜像
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. 安装 IoTDB
-
-### 7.1 安装 IoTDB
+### 6.1 安装 IoTDB
 
 ```Bash
 # 进入文件夹
@@ -270,14 +196,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 查看 Helm 安装列表
+### 6.2 查看 Helm 安装列表
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 查看 Pods
+### 6.3 查看 Pods
 
 ```Bash
 # 查看 iotdb的pods
@@ -286,7 +212,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 
执行命令后,输出了带有confignode和datanode标识的各3个Pods,,总共6个Pods,即表明安装成功;需要注意的是,并非所有Pods都处于Running状态,未激活的datanode可能会持续重启,但在激活后将恢复正常。
 
-### 7.4 发现故障的排除方式
+### 6.4 发现故障的排除方式
 
 ```Bash
 # 查看k8s的创建log
@@ -301,65 +227,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. Activate IoTDB
-
-### 8.1 Option 1: Activate directly in the Pod (fastest)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# activate with the machine code obtained above
-```
-
-### 8.2 Option 2: Activate inside the confignode container
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# activate with the machine code obtained above
-# exit the container
-```
-
-### 8.3 Option 3: Manual activation
-
-1. View the ConfigNode details to determine which node it runs on:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# example output:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. View the PVC and find the Volume bound to the ConfigNode to determine its path:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-
-# example output:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO            local-storage   <unset>                 8h
-
-# to inspect multiple confignodes, use:
-for i in {0..2}; do echo confignode-$i; kubectl describe pod confignode-${i} -n iotdb-ns | grep -e "Node:" -e "Path:"; echo "----"; done
-```
-
-3. View the Volume details to locate the physical directory:
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# example output:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. On the corresponding node, find the system-info file in that directory, use it as the machine code to generate an activation code, then create a file named license in the same directory and write the activation code into it.
-
-## 9. Verify IoTDB
+## 7. Verify IoTDB
 
-### 9.1 View Pod status in the namespace
+### 7.1 View Pod status in the namespace
 
 Check the IPs, status, and other information of the Pods in the iotdb-ns namespace and confirm that everything is running normally.
 
@@ -376,7 +246,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88   <none>           <none>
 ```
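A quick health check is to count the Pods that are not in the Running state; zero means the namespace looks healthy. A hedged sketch over sample rows (in a real cluster, replace the sample text with the output of `kubectl get pods -n iotdb-ns --no-headers`):

```Bash
# sample rows standing in for: kubectl get pods -n iotdb-ns --no-headers
pods='confignode-0 1/1 Running 0 75m
datanode-0 1/1 Running 0 75m
datanode-1 1/1 Running 10 75m'

# column 3 is STATUS; count rows where it is not "Running"
not_running=$(echo "$pods" | awk '$3 != "Running" {c++} END {print c+0}')
echo "$not_running"
```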
 
-### 9.2 View port mappings in the namespace
+### 7.2 View port mappings in the namespace
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -388,7 +258,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP   7d8h
 ```
 
-### 9.3 Start the CLI script on any server to verify the IoTDB cluster status
+### 7.3 Start the CLI script on any server to verify the IoTDB cluster status
 
 The port is the jdbc-balancer port; the server can be the IP of any k8s node.
 
@@ -400,9 +270,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
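The NodePort in the `PORT(S)` column can also be pulled out of the `kubectl get svc` output programmatically. A minimal sketch using the sample entry from this guide (in a live cluster you would substitute the real command output):

```Bash
# PORT(S) value for jdbc-balancer from the example above: service port 6667
# is exposed as NodePort 31895 on every k8s node
portspec='6667:31895/TCP'

# strip the service port and the protocol to keep only the NodePort
nodeport="${portspec#*:}"   # drop "6667:"
nodeport="${nodeport%/*}"   # drop "/TCP"

echo "$nodeport"
```

`start-cli.sh -h <any-node-IP> -p $nodeport` would then reach the cluster through the load-balanced JDBC service.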
 
-## 10. Scale out
+## 8. Scale out
 
-### 10.1 Add PVs
+### 8.1 Add PVs
 
 Add PVs first; scaling out requires available PVs.
 
@@ -414,7 +284,7 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 **Solution**: mount the `/iotdb/data` directory externally as well, and configure this for both ConfigNode and DataNode, to ensure data integrity and cluster stability.
 
-### 10.2 Scale out confignode
+### 8.2 Scale out confignode
 
 Example: scale from 3 confignodes to 4
 
@@ -427,7 +297,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 Scale out datanode
+### 8.3 Scale out datanode
 
 Example: scale from 3 datanodes to 4
 
@@ -437,7 +307,7 @@ helm upgrade iotdb . -n iotdb-ns
 helm upgrade iotdb . -n iotdb-ns
 ```
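The scale-out itself presumably comes from raising the replica count in the chart's values file before running `helm upgrade`; the fragment and the key name `datanode.replicas` below are assumptions for illustration only, so check the actual values.yaml of your chart:

```Bash
# hypothetical values.yaml fragment; the real key name may differ
values='datanode:
  replicas: 3'

# bump the datanode replica count from 3 to 4, as one would edit values.yaml
bumped=$(echo "$values" | sed 's/replicas: 3/replicas: 4/')
echo "$bumped"
```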
 
-### 10.4 Verify IoTDB status
+### 8.4 Verify IoTDB status
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
diff --git a/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md b/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md
index 2a6847ff..7fbc7be8 100644
--- a/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ b/src/zh/UserGuide/dev-1.3/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -126,14 +126,6 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 Contact Timecho staff to obtain the IoTDB Helm Chart.
 
-If you run into proxy problems, unset the proxy:
-
-> A git clone error like the following means a proxy is configured and must be disabled: fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 Modify the YAML file
 
 > Make sure you are using a supported version >= 1.3.3.2
@@ -188,7 +180,7 @@ confignode:
 
 Configure the private registry information on k8s as a prerequisite for the helm install step that follows.
 
-Option 1 pulls a usable iotdb image during helm insta, while Option 2 imports a usable iotdb image into containerd in advance.
+Option 1 pulls a usable iotdb image during helm install, while Option 2 imports a usable iotdb image into containerd in advance.
 
 ### 6.1 [Option 1] Pull the image from the private registry
 
diff --git a/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md b/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md
index a21e6df2..de184c3e 100644
--- a/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md
+++ b/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_apache.md
@@ -124,11 +124,11 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 ### 5.1 Clone the IoTDB Kubernetes deployment code
 
-Contact Timecho staff to obtain the IoTDB Helm Chart.
+Download the Helm chart: [source](https://github.com/apache/iotdb-extras/tree/master/helm)
 
 If you run into proxy problems, unset the proxy:
 
-> A git clone error like the following means a proxy is configured and must be disabled: fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
+> A git clone error like the following means a proxy is configured and must be disabled: fatal: unable to access 'https://github.com/apache/iotdb-extras.git': gnutls_handshake() failed: The TLS connection was non-properly terminated.
 
 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +145,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb"   # name of the installed release
 
 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone    # repository and version used by the software
+  tag: latest    # repository and version used by the software
 
 storage:
 # storage class name; leave storageClassName unset for local static storage, but it must be set for dynamic storage
@@ -184,83 +184,9 @@ confignode:
   dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```
 
-## 6. Configure private registry info or pre-pull the image with ctr
+## 6. Install IoTDB
 
-Configure the private registry information on k8s as a prerequisite for the helm install step that follows.
-
-Option 1 pulls a usable iotdb image during helm install, while Option 2 imports a usable iotdb image into containerd in advance.
-
-### 6.1 [Option 1] Pull the image from the private registry
-
-#### 6.1.1 Create a secret so k8s can access the iotdb-helm private registry
-
-Below, "xxxxxx" stands for the account, password, and email of the IoTDB private registry.
-
-```Bash
-# note the single quotes
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-
-# view the secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# view and output as yaml
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# view and decode
-kubectl get secret timecho-nexus --output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 Load the secret into the iotdb-ns namespace as a patch
-
-```Bash
-# add a patch so the namespace carries the nexus login information
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": [{"name": "timecho-nexus"}]}'
-
-# view this entry of the namespace
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 [Option 2] Import the image
-
-This step is for scenarios where the customer cannot connect to the private registry; contact the company's delivery team for help with preparation.
-
-#### 6.2.1 Pull the image:
-
-```Bash
-ctr images pull --user xxxxxxxx nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 View and export the image:
-
-```Bash
-# list images
-ctr images ls
-
-# export
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 Import into the k8s namespace:
-
-> Note: k8s.io is the ctr namespace used by k8s in the example environment; importing into any other namespace will not work.
-
-```Bash
-# import into the k8s namespace
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar
-```
-
-#### 6.2.4 View the image
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. Install IoTDB
-
-### 7.1 Install IoTDB
+### 6.1 Install IoTDB
 
 ```Bash
 # enter the directory
@@ -270,14 +196,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```
 
-### 7.2 View the Helm release list
+### 6.2 View the Helm release list
 
 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```
 
-### 7.3 View the Pods
+### 6.3 View the Pods
 
 ```Bash
 # view the iotdb pods
@@ -286,7 +212,7 @@ kubectl get pods -n iotdb-ns -o wide
 
 
After running the command, the output shows 3 Pods each labeled confignode and datanode, 6 Pods in total, which indicates a successful installation. Note that not all Pods may be in the Running state: an unactivated datanode may restart repeatedly, but it returns to normal once activated.
 
-### 7.4 Troubleshooting
+### 6.4 Troubleshooting
 
 ```Bash
 # view the k8s creation log
@@ -301,65 +227,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```
 
-## 8. Activate IoTDB
-
-### 8.1 Option 1: Activate directly in the Pod (fastest)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# activate with the machine code obtained above
-```
-
-### 8.2 Option 2: Activate inside the confignode container
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# activate with the machine code obtained above
-# exit the container
-```
-
-### 8.3 Option 3: Manual activation
-
-1. View the ConfigNode details to determine which node it runs on:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# example output:
-# Node:          a87/172.20.31.87
-# Path:          /data/k8s-data/env/confignode/.env
-```
-
-2. View the PVC and find the Volume bound to the ConfigNode to determine its path:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-
-# example output:
-# map-confignode-confignode-0   Bound    iotdb-pv-04   10Gi       RWO            local-storage   <unset>                 8h
-
-# to inspect multiple confignodes, use:
-for i in {0..2}; do echo confignode-$i; kubectl describe pod confignode-${i} -n iotdb-ns | grep -e "Node:" -e "Path:"; echo "----"; done
-```
-
-3. View the Volume details to locate the physical directory:
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# example output:
-# Path:          /data/k8s-data/iotdb-pv-04
-```
-
-4. On the corresponding node, find the system-info file in that directory, use it as the machine code to generate an activation code, then create a file named license in the same directory and write the activation code into it.
-
-## 9. Verify IoTDB
+## 7. Verify IoTDB
 
-### 9.1 View Pod status in the namespace
+### 7.1 View Pod status in the namespace
 
 Check the IPs, status, and other information of the Pods in the iotdb-ns namespace and confirm that everything is running normally.
 
@@ -376,7 +246,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2     1/1     Running   10 (5m55s ago)   75m   10.20.191.76   a88   <none>           <none>
 ```
 
-### 9.2 View port mappings in the namespace
+### 7.2 View port mappings in the namespace
 
 ```Bash
 kubectl get svc -n iotdb-ns
@@ -388,7 +258,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer    LoadBalancer   10.10.191.209   <pending>     6667:31895/TCP   7d8h
 ```
 
-### 9.3 Start the CLI script on any server to verify the IoTDB cluster status
+### 7.3 Start the CLI script on any server to verify the IoTDB cluster status
 
 The port is the jdbc-balancer port; the server can be the IP of any k8s node.
 
@@ -400,9 +270,9 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>
 
-## 10. Scale out
+## 8. Scale out
 
-### 10.1 Add PVs
+### 8.1 Add PVs
 
 Add PVs first; scaling out requires available PVs.
 
@@ -414,7 +284,7 @@ start-cli.sh -h 172.20.31.88 -p 31895
 
 **Solution**: mount the `/iotdb/data` directory externally as well, and configure this for both ConfigNode and DataNode, to ensure data integrity and cluster stability.
 
-### 10.2 Scale out confignode
+### 8.2 Scale out confignode
 
 Example: scale from 3 confignodes to 4
 
@@ -427,7 +297,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>
 
 
-### 10.3 Scale out datanode
+### 8.3 Scale out datanode
 
 Example: scale from 3 datanodes to 4
 
@@ -437,7 +307,7 @@ helm upgrade iotdb . -n iotdb-ns
 helm upgrade iotdb . -n iotdb-ns
 ```
 
-### 10.4 Verify IoTDB status
+### 8.4 Verify IoTDB status
 
 ```Shell
 kubectl get pods -n iotdb-ns -o wide
diff --git a/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md b/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md
index 2a6847ff..7fbc7be8 100644
--- a/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md
+++ b/src/zh/UserGuide/latest/Deployment-and-Maintenance/Kubernetes_timecho.md
@@ -126,14 +126,6 @@ mkdir -p /data/k8s-data/iotdb-pv-02
 
 Contact Timecho staff to obtain the IoTDB Helm Chart.
 
-If you run into proxy problems, unset the proxy:
-
-> A git clone error like the following means a proxy is configured and must be disabled: fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 Modify the YAML file
 
 > Make sure you are using a supported version >= 1.3.3.2
@@ -188,7 +180,7 @@ confignode:
 
 Configure the private registry information on k8s as a prerequisite for the helm install step that follows.
 
-Option 1 pulls a usable iotdb image during helm insta, while Option 2 imports a usable iotdb image into containerd in advance.
+Option 1 pulls a usable iotdb image during helm install, while Option 2 imports a usable iotdb image into containerd in advance.
 
 ### 6.1 [Option 1] Pull the image from the private registry
 
