This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.12 by this push:
     new e022727  [hotfix][docs] Fix typo in Kubernetes HA services documentation
e022727 is described below

commit e022727fa9a2f016bd5f3739acb7e4b095b973b7
Author: Till Rohrmann <[email protected]>
AuthorDate: Mon Jan 11 10:56:29 2021 +0100

    [hotfix][docs] Fix typo in Kubernetes HA services documentation
---
 docs/deployment/ha/kubernetes_ha.md    | 2 +-
 docs/deployment/ha/kubernetes_ha.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/deployment/ha/kubernetes_ha.md b/docs/deployment/ha/kubernetes_ha.md
index c553a1a..c5c871e 100644
--- a/docs/deployment/ha/kubernetes_ha.md
+++ b/docs/deployment/ha/kubernetes_ha.md
@@ -66,7 +66,7 @@ high-availability.storageDir: hdfs:///flink/recovery
 
 ## High availability data clean up
 
-To keep HA data while restarting the Flink cluster, simply delete the deployment (via `kubectl delete deploy <cluster-id>`). 
+To keep HA data while restarting the Flink cluster, simply delete the deployment (via `kubectl delete deployment <cluster-id>`). 
 All the Flink cluster related resources will be deleted (e.g. JobManager Deployment, TaskManager pods, services, Flink conf ConfigMap). 
 HA related ConfigMaps will be retained because they do not set the owner reference. 
 When restarting the cluster, all previously running jobs will be recovered and restarted from the latest successful checkpoint.
diff --git a/docs/deployment/ha/kubernetes_ha.zh.md b/docs/deployment/ha/kubernetes_ha.zh.md
index 4ff8356..c1750a2 100644
--- a/docs/deployment/ha/kubernetes_ha.zh.md
+++ b/docs/deployment/ha/kubernetes_ha.zh.md
@@ -66,7 +66,7 @@ high-availability.storageDir: hdfs:///flink/recovery
 
 ## High availability data clean up
 
-To keep HA data while restarting the Flink cluster, simply delete the deployment (via `kubectl delete deploy <cluster-id>`). 
+To keep HA data while restarting the Flink cluster, simply delete the deployment (via `kubectl delete deployment <cluster-id>`). 
 All the Flink cluster related resources will be deleted (e.g. JobManager Deployment, TaskManager pods, services, Flink conf ConfigMap). 
 HA related ConfigMaps will be retained because they do not set the owner reference. 
 When restarting the cluster, all previously running jobs will be recovered and restarted from the latest successful checkpoint.
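For context, the cleanup flow the patched documentation describes can be sketched as a short shell session. This is an illustrative sketch only, not part of the commit; `<cluster-id>` is the same placeholder the docs use, and the `grep` step is an assumed verification step, not something the docs prescribe:

```shell
# Delete only the JobManager Deployment; resources that set an owner
# reference on it (TaskManager pods, services, flink-conf ConfigMap)
# are garbage-collected by Kubernetes, while the HA ConfigMaps survive
# because they deliberately set no owner reference.
kubectl delete deployment <cluster-id>

# Assumed check: confirm the HA ConfigMaps are still present before
# redeploying, so jobs can be recovered from the latest checkpoint.
kubectl get configmaps | grep <cluster-id>
```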
