RocMarshal commented on a change in pull request #16780:
URL: https://github.com/apache/flink/pull/16780#discussion_r688403652



##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍

Review comment:
       ```suggestion
   <a name="introduction"></a>
   
   ### 介绍
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。

Review comment:
       ```suggestion
   本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}})模式在 Kubernetes上部署 
Flink。
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式

Review comment:
       missing link tag?

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -184,18 +190,21 @@ flink-taskmanager-64847444ff-7rdl4   1/1     Running      
      3          3m28s
 flink-taskmanager-64847444ff-nnd6m   1/1     Running            3          
3m28s
 ```
 
-You can now access the logs by running `kubectl logs 
flink-jobmanager-589967dcfc-m49xv`
+现在可以通过运行 `kubectl logs flink-jobmanager-589967dcfc-m49xv` 来访问日志。

Review comment:
       ```suggestion
   现在你可以通过运行 `kubectl logs flink-jobmanager-589967dcfc-m49xv` 来访问日志。
   ```
   And would you mind supplementing the subject of some statements in other parts of the page?

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式
 
-A basic *Flink Application cluster* deployment in Kubernetes has three 
components:
+*Flink Application 集群* 是运行单个 Application 的专用集群,部署集群时要保证该 Application 可用。
 
-* an *Application* which runs a *JobManager*
-* a *Deployment* for a pool of *TaskManagers*
-* a *Service* exposing the *JobManager's* REST and UI ports
+在 Kubernetes 上部署一个基本的 *Flink Application 集群* 时,一般包括下面三个组件:
 
-Check [the Application cluster specific resource 
definitions](#application-cluster-resource-definitions) and adjust them 
accordingly:
+* *Application* 作业,同时在该 *Application* 中运行 *JobManager*;
+* 运行若干个 TaskManager 的 Deployment;
+* 暴露 JobManager 上 REST 和 UI 端口的 Service;
 
-The `args` attribute in the `jobmanager-job.yaml` has to specify the main 
class of the user job.
-See also [how to specify the JobManager arguments]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments) to understand
-how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
+检查 [Application 集群资源定义](#application-cluster-resource-definitions) 并相应地调整它们:
 
-The *job artifacts* should be available from the `job-artifacts-volume` in 
[the resource definition examples](#application-cluster-resource-definitions).
-The definition examples mount the volume as a local directory of the host 
assuming that you create the components in a minikube cluster.
-If you do not use a minikube cluster, you can use any other type of volume, 
available in your Kubernetes cluster, to supply the *job artifacts*.
-Alternatively, you can build [a custom image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) which already contains the artifacts instead.
+`jobmanager-job.yaml` 中的 `args` 属性必须指定用户作业的主类。也可以参考[如何设置 JobManager 参数]({{< 
ref "docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments)来了解如何将额外的 `args` 传递给 
`jobmanager-job.yaml` 配置中指定的 Flink 镜像。
 
-After creating [the common cluster 
components](#common-cluster-resource-definitions), use [the Application cluster 
specific resource definitions](#application-cluster-resource-definitions) to 
launch the cluster with the `kubectl` command:
+*job artifacts* 参数必须可以从 [资源定义示例](#application-cluster-resource-definitions) 中的 
`job-artifacts-volume` 处获取。假如是在 minikube 集群中创建这些组件,那么定义示例中的 
job-artifacts-volume 可以挂载为主机的本地目录。如果不使用 minikube 集群,那么可以使用 Kubernetes 
集群中任何其它可用类型的 volume 来提供 *job artifacts* 。此外,还可以构建一个已经包含 *job artifacts* 
参数的[自定义镜像](({{< ref "docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization))。
+
+在创建[通用集群组件](#common-cluster-resource-definitions)后,指定 [Application 
集群资源定义](#application-cluster-resource-definitions)文件,执行 `kubectl` 命令来启动 Flink 
Application 集群:
 
 ```sh
     $ kubectl create -f jobmanager-job.yaml
     $ kubectl create -f taskmanager-job-deployment.yaml
 ```
 
-To terminate the single application cluster, these components can be deleted 
along with [the common ones](#common-cluster-resource-definitions)
-with the `kubectl` command:
+要停止单个 application 集群,可以使用 `kubectl` 命令来删除相应组件以及 
[通用集群资源](#common-cluster-resource-definitions)对应的组件 :
 
 ```sh
     $ kubectl delete -f taskmanager-job-deployment.yaml
     $ kubectl delete -f jobmanager-job.yaml
 ```
 
-### Per-Job Cluster Mode
-Flink on Standalone Kubernetes does not support the Per-Job Cluster Mode.
+### Per-Job 集群模式

Review comment:
       missing link tag.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。
 
-This guide expects a Kubernetes environment to be present. You can ensure that 
your Kubernetes setup is working by running a command like `kubectl get nodes`, 
which lists all connected Kubelets. 
+### 准备
 
-If you want to run Kubernetes locally, we recommend using 
[MiniKube](https://minikube.sigs.k8s.io/docs/start/).
+本指南假设存在一个 Kubernets 的运行环境。可以通过运行 `kubectl get nodes` 命令来确保 Kubernetes 
环境运行正常,该命令展示所有连接到 Kubernets 集群的 node 节点信息。
+
+如果想在本地运行 Kubernetes,建议使用 [MiniKube](https://minikube.sigs.k8s.io/docs/start/)。
 
 {{< hint info >}}
-If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set 
docker0 promisc on'` before deploying a Flink cluster. Otherwise Flink 
components are not able to reference themselves through a Kubernetes service.
+如果使用 MiniKube,请确保在部署 Flink 集群之前先执行 `minikube ssh 'sudo ip link set docker0 
promisc on'`,否则 Flink 组件不能自动地将自己映射到 Kubernetes Service 中。
 {{< /hint >}}
 
-### Starting a Kubernetes Cluster (Session Mode)
+### Kubernetes 上的 Flink session 集群

Review comment:
       Same as mentioned above: missing link tag.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。
 
-This guide expects a Kubernetes environment to be present. You can ensure that 
your Kubernetes setup is working by running a command like `kubectl get nodes`, 
which lists all connected Kubelets. 
+### 准备
 
-If you want to run Kubernetes locally, we recommend using 
[MiniKube](https://minikube.sigs.k8s.io/docs/start/).
+本指南假设存在一个 Kubernets 的运行环境。可以通过运行 `kubectl get nodes` 命令来确保 Kubernetes 
环境运行正常,该命令展示所有连接到 Kubernets 集群的 node 节点信息。
+
+如果想在本地运行 Kubernetes,建议使用 [MiniKube](https://minikube.sigs.k8s.io/docs/start/)。

Review comment:
       ```suggestion
   如果你想在本地运行 Kubernetes,建议使用 
[MiniKube](https://minikube.sigs.k8s.io/docs/start/)。
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。
 
-This guide expects a Kubernetes environment to be present. You can ensure that 
your Kubernetes setup is working by running a command like `kubectl get nodes`, 
which lists all connected Kubelets. 
+### 准备
 
-If you want to run Kubernetes locally, we recommend using 
[MiniKube](https://minikube.sigs.k8s.io/docs/start/).
+本指南假设存在一个 Kubernets 的运行环境。可以通过运行 `kubectl get nodes` 命令来确保 Kubernetes 
环境运行正常,该命令展示所有连接到 Kubernets 集群的 node 节点信息。

Review comment:
       ```suggestion
   本指南假设存在一个 Kubernets 的运行环境。你可以通过运行 `kubectl get nodes` 命令来确保 Kubernetes 
环境运行正常,该命令展示所有连接到 Kubernets 集群的 node 节点信息。
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式
 
-A basic *Flink Application cluster* deployment in Kubernetes has three 
components:
+*Flink Application 集群* 是运行单个 Application 的专用集群,部署集群时要保证该 Application 可用。
 
-* an *Application* which runs a *JobManager*
-* a *Deployment* for a pool of *TaskManagers*
-* a *Service* exposing the *JobManager's* REST and UI ports
+在 Kubernetes 上部署一个基本的 *Flink Application 集群* 时,一般包括下面三个组件:
 
-Check [the Application cluster specific resource 
definitions](#application-cluster-resource-definitions) and adjust them 
accordingly:
+* *Application* 作业,同时在该 *Application* 中运行 *JobManager*;
+* 运行若干个 TaskManager 的 Deployment;
+* 暴露 JobManager 上 REST 和 UI 端口的 Service;
 
-The `args` attribute in the `jobmanager-job.yaml` has to specify the main 
class of the user job.
-See also [how to specify the JobManager arguments]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments) to understand
-how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
+检查 [Application 集群资源定义](#application-cluster-resource-definitions) 并相应地调整它们:

Review comment:
       What about 
   `检查 [Application 集群资源定义](#application-cluster-resource-definitions) 
并做出相应的调整:`?
   Maybe you could do it in a better way.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式
 
-A basic *Flink Application cluster* deployment in Kubernetes has three 
components:
+*Flink Application 集群* 是运行单个 Application 的专用集群,部署集群时要保证该 Application 可用。
 
-* an *Application* which runs a *JobManager*
-* a *Deployment* for a pool of *TaskManagers*
-* a *Service* exposing the *JobManager's* REST and UI ports
+在 Kubernetes 上部署一个基本的 *Flink Application 集群* 时,一般包括下面三个组件:
 
-Check [the Application cluster specific resource 
definitions](#application-cluster-resource-definitions) and adjust them 
accordingly:
+* *Application* 作业,同时在该 *Application* 中运行 *JobManager*;
+* 运行若干个 TaskManager 的 Deployment;
+* 暴露 JobManager 上 REST 和 UI 端口的 Service;
 
-The `args` attribute in the `jobmanager-job.yaml` has to specify the main 
class of the user job.
-See also [how to specify the JobManager arguments]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments) to understand
-how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
+检查 [Application 集群资源定义](#application-cluster-resource-definitions) 并相应地调整它们:
 
-The *job artifacts* should be available from the `job-artifacts-volume` in 
[the resource definition examples](#application-cluster-resource-definitions).
-The definition examples mount the volume as a local directory of the host 
assuming that you create the components in a minikube cluster.
-If you do not use a minikube cluster, you can use any other type of volume, 
available in your Kubernetes cluster, to supply the *job artifacts*.
-Alternatively, you can build [a custom image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) which already contains the artifacts instead.
+`jobmanager-job.yaml` 中的 `args` 属性必须指定用户作业的主类。也可以参考[如何设置 JobManager 参数]({{< 
ref "docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments)来了解如何将额外的 `args` 传递给 
`jobmanager-job.yaml` 配置中指定的 Flink 镜像。
 
-After creating [the common cluster 
components](#common-cluster-resource-definitions), use [the Application cluster 
specific resource definitions](#application-cluster-resource-definitions) to 
launch the cluster with the `kubectl` command:
+*job artifacts* 参数必须可以从 [资源定义示例](#application-cluster-resource-definitions) 中的 
`job-artifacts-volume` 处获取。假如是在 minikube 集群中创建这些组件,那么定义示例中的 
job-artifacts-volume 可以挂载为主机的本地目录。如果不使用 minikube 集群,那么可以使用 Kubernetes 
集群中任何其它可用类型的 volume 来提供 *job artifacts* 。此外,还可以构建一个已经包含 *job artifacts* 
参数的[自定义镜像](({{< ref "docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization))。

Review comment:
       ```suggestion
   *job artifacts* 参数必须可以从 [资源定义示例](#application-cluster-resource-definitions) 
中的 `job-artifacts-volume` 处获取。假如是在 minikube 集群中创建这些组件,那么定义示例中的 
job-artifacts-volume 可以挂载为主机的本地目录。如果不使用 minikube 集群,那么可以使用 Kubernetes 
集群中任何其它可用类型的 volume 来提供 *job artifacts*。此外,还可以构建一个已经包含 *job artifacts* 
参数的[自定义镜像](({{< ref "docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization))。
   ```
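
   As a side note for anyone following this thread (purely illustrative, not part of the suggestion above): the hunk says the `args` attribute in `jobmanager-job.yaml` has to name the main class of the user job. A hedged sketch of how one could double-check the arguments actually handed to the JobManager container after applying the manifests; the label selector follows the appendix manifests and the pod name is a placeholder:

   ```sh
   # list the pods created from jobmanager-job.yaml / taskmanager-job-deployment.yaml
   # (the appendix manifests label them with app=flink)
   $ kubectl get pods -l app=flink

   # print the container args of the JobManager pod (pod name is illustrative)
   $ kubectl get pod flink-jobmanager-xxxxx -o jsonpath='{.spec.containers[0].args}'
   ```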

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。
 
-This guide expects a Kubernetes environment to be present. You can ensure that 
your Kubernetes setup is working by running a command like `kubectl get nodes`, 
which lists all connected Kubelets. 
+### 准备
 
-If you want to run Kubernetes locally, we recommend using 
[MiniKube](https://minikube.sigs.k8s.io/docs/start/).
+本指南假设存在一个 Kubernets 的运行环境。可以通过运行 `kubectl get nodes` 命令来确保 Kubernetes 
环境运行正常,该命令展示所有连接到 Kubernets 集群的 node 节点信息。
+
+如果想在本地运行 Kubernetes,建议使用 [MiniKube](https://minikube.sigs.k8s.io/docs/start/)。
 
 {{< hint info >}}
-If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set 
docker0 promisc on'` before deploying a Flink cluster. Otherwise Flink 
components are not able to reference themselves through a Kubernetes service.
+如果使用 MiniKube,请确保在部署 Flink 集群之前先执行 `minikube ssh 'sudo ip link set docker0 
promisc on'`,否则 Flink 组件不能自动地将自己映射到 Kubernetes Service 中。
 {{< /hint >}}
 
-### Starting a Kubernetes Cluster (Session Mode)
+### Kubernetes 上的 Flink session 集群
 
-A *Flink Session cluster* is executed as a long-running Kubernetes Deployment. 
You can run multiple Flink jobs on a *Session cluster*.
-Each job needs to be submitted to the cluster after the cluster has been 
deployed.
+*Flink session 集群* 是以一种长期运行的 Kubernetes Deployment 形式执行的。可以在一个 *session 集群* 
上运行多个 Flink 作业。当然,只有 session 集群部署好以后才可以在上面提交 Flink 作业。

Review comment:
       ```suggestion
   *Flink session 集群* 是以一种长期运行的 Kubernetes Deployment 形式执行的。你可以在一个 *session 集群* 
上运行多个 Flink 作业。当然,只有 session 集群部署好以后才可以在上面提交 Flink 作业。
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式
 
-A basic *Flink Application cluster* deployment in Kubernetes has three 
components:
+*Flink Application 集群* 是运行单个 Application 的专用集群,部署集群时要保证该 Application 可用。
 
-* an *Application* which runs a *JobManager*
-* a *Deployment* for a pool of *TaskManagers*
-* a *Service* exposing the *JobManager's* REST and UI ports
+在 Kubernetes 上部署一个基本的 *Flink Application 集群* 时,一般包括下面三个组件:
 
-Check [the Application cluster specific resource 
definitions](#application-cluster-resource-definitions) and adjust them 
accordingly:
+* *Application* 作业,同时在该 *Application* 中运行 *JobManager*;
+* 运行若干个 TaskManager 的 Deployment;
+* 暴露 JobManager 上 REST 和 UI 端口的 Service;
 
-The `args` attribute in the `jobmanager-job.yaml` has to specify the main 
class of the user job.
-See also [how to specify the JobManager arguments]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments) to understand
-how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
+检查 [Application 集群资源定义](#application-cluster-resource-definitions) 并相应地调整它们:
 
-The *job artifacts* should be available from the `job-artifacts-volume` in 
[the resource definition examples](#application-cluster-resource-definitions).
-The definition examples mount the volume as a local directory of the host 
assuming that you create the components in a minikube cluster.
-If you do not use a minikube cluster, you can use any other type of volume, 
available in your Kubernetes cluster, to supply the *job artifacts*.
-Alternatively, you can build [a custom image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) which already contains the artifacts instead.
+`jobmanager-job.yaml` 中的 `args` 属性必须指定用户作业的主类。也可以参考[如何设置 JobManager 参数]({{< 
ref "docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments)来了解如何将额外的 `args` 传递给 
`jobmanager-job.yaml` 配置中指定的 Flink 镜像。
 
-After creating [the common cluster 
components](#common-cluster-resource-definitions), use [the Application cluster 
specific resource definitions](#application-cluster-resource-definitions) to 
launch the cluster with the `kubectl` command:
+*job artifacts* 参数必须可以从 [资源定义示例](#application-cluster-resource-definitions) 中的 
`job-artifacts-volume` 处获取。假如是在 minikube 集群中创建这些组件,那么定义示例中的 
job-artifacts-volume 可以挂载为主机的本地目录。如果不使用 minikube 集群,那么可以使用 Kubernetes 
集群中任何其它可用类型的 volume 来提供 *job artifacts* 。此外,还可以构建一个已经包含 *job artifacts* 
参数的[自定义镜像](({{< ref "docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization))。
+
+在创建[通用集群组件](#common-cluster-resource-definitions)后,指定 [Application 
集群资源定义](#application-cluster-resource-definitions)文件,执行 `kubectl` 命令来启动 Flink 
Application 集群:
 
 ```sh
     $ kubectl create -f jobmanager-job.yaml
     $ kubectl create -f taskmanager-job-deployment.yaml
 ```
 
-To terminate the single application cluster, these components can be deleted 
along with [the common ones](#common-cluster-resource-definitions)
-with the `kubectl` command:
+要停止单个 application 集群,可以使用 `kubectl` 命令来删除相应组件以及 
[通用集群资源](#common-cluster-resource-definitions)对应的组件 :
 
 ```sh
     $ kubectl delete -f taskmanager-job-deployment.yaml
     $ kubectl delete -f jobmanager-job.yaml
 ```
 
-### Per-Job Cluster Mode
-Flink on Standalone Kubernetes does not support the Per-Job Cluster Mode.
+### Per-Job 集群模式
+
+在 Kubernetes 上部署 Standalone 集群时不支持 Per-Job 集群模式。
 
-### Session Mode
+### Session 集群模式
 
-Deployment of a Session cluster is explained in the [Getting 
Started](#getting-started) guide at the top of this page.
+本页面顶部的[入门](#getting-started)指南中描述了 Session 集群模式的部署。

Review comment:
       ```suggestion
   本文档开始部分的[入门](#getting-started)指南中描述了 Session 集群模式的部署。
   ```
   Only a minor comment.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。
 
-This guide expects a Kubernetes environment to be present. You can ensure that 
your Kubernetes setup is working by running a command like `kubectl get nodes`, 
which lists all connected Kubelets. 
+### 准备
 
-If you want to run Kubernetes locally, we recommend using 
[MiniKube](https://minikube.sigs.k8s.io/docs/start/).
+本指南假设存在一个 Kubernets 的运行环境。可以通过运行 `kubectl get nodes` 命令来确保 Kubernetes 
环境运行正常,该命令展示所有连接到 Kubernets 集群的 node 节点信息。
+
+如果想在本地运行 Kubernetes,建议使用 [MiniKube](https://minikube.sigs.k8s.io/docs/start/)。
 
 {{< hint info >}}
-If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set 
docker0 promisc on'` before deploying a Flink cluster. Otherwise Flink 
components are not able to reference themselves through a Kubernetes service.
+如果使用 MiniKube,请确保在部署 Flink 集群之前先执行 `minikube ssh 'sudo ip link set docker0 
promisc on'`,否则 Flink 组件不能自动地将自己映射到 Kubernetes Service 中。
 {{< /hint >}}
 
-### Starting a Kubernetes Cluster (Session Mode)
+### Kubernetes 上的 Flink session 集群
 
-A *Flink Session cluster* is executed as a long-running Kubernetes Deployment. 
You can run multiple Flink jobs on a *Session cluster*.
-Each job needs to be submitted to the cluster after the cluster has been 
deployed.
+*Flink session 集群* 是以一种长期运行的 Kubernetes Deployment 形式执行的。可以在一个 *session 集群* 
上运行多个 Flink 作业。当然,只有 session 集群部署好以后才可以在上面提交 Flink 作业。
 
-A *Flink Session cluster* deployment in Kubernetes has at least three 
components:
+在 Kubernetes 上部署一个基本的 *Flink session 集群* 时,一般包括下面三个组件:
 
-* a *Deployment* which runs a [JobManager]({{< ref "docs/concepts/glossary" 
>}}#flink-jobmanager)
-* a *Deployment* for a pool of [TaskManagers]({{< ref "docs/concepts/glossary" 
>}}#flink-taskmanager)
-* a *Service* exposing the *JobManager's* REST and UI ports
+* 运行 [JobManager]({{< ref "docs/concepts/glossary" >}}#flink-jobmanager) 的 
*Deployment*;
+* 运行 [TaskManagers]({{< ref "docs/concepts/glossary" >}}#flink-taskmanager) 的 
*Deployment*;
+* 暴露 *JobManager* 上 REST 和 UI 端口的 *Service*;
 
-Using the file contents provided in the [the common resource 
definitions](#common-cluster-resource-definitions), create the following files, 
and create the respective components with the `kubectl` command:
+使用 [通用集群资源定义](#common-cluster-resource-definitions)中提供的文件内容来创建以下文件,并使用 
`kubectl` 命令来创建相应的组件:
 
 ```sh
-    # Configuration and service definition
+    # configmap 和 service 的定义

Review comment:
       Keep the raw content?
   `# Configuration 和 service 的定义`

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。
 
-This guide expects a Kubernetes environment to be present. You can ensure that 
your Kubernetes setup is working by running a command like `kubectl get nodes`, 
which lists all connected Kubelets. 
+### 准备

Review comment:
       ```suggestion
   <a name="preparation"></a>
   
   ### 准备
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门

Review comment:
       Maybe you could translate it in a better way. Only a minor comment.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式
 
-A basic *Flink Application cluster* deployment in Kubernetes has three 
components:
+*Flink Application 集群* 是运行单个 Application 的专用集群,部署集群时要保证该 Application 可用。
 
-* an *Application* which runs a *JobManager*
-* a *Deployment* for a pool of *TaskManagers*
-* a *Service* exposing the *JobManager's* REST and UI ports
+在 Kubernetes 上部署一个基本的 *Flink Application 集群* 时,一般包括下面三个组件:
 
-Check [the Application cluster specific resource 
definitions](#application-cluster-resource-definitions) and adjust them 
accordingly:
+* *Application* 作业,同时在该 *Application* 中运行 *JobManager*;
+* 运行若干个 TaskManager 的 Deployment;
+* 暴露 JobManager 上 REST 和 UI 端口的 Service;
 
-The `args` attribute in the `jobmanager-job.yaml` has to specify the main 
class of the user job.
-See also [how to specify the JobManager arguments]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments) to understand
-how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
+检查 [Application 集群资源定义](#application-cluster-resource-definitions) 并相应地调整它们:
 
-The *job artifacts* should be available from the `job-artifacts-volume` in 
[the resource definition examples](#application-cluster-resource-definitions).
-The definition examples mount the volume as a local directory of the host 
assuming that you create the components in a minikube cluster.
-If you do not use a minikube cluster, you can use any other type of volume, 
available in your Kubernetes cluster, to supply the *job artifacts*.
-Alternatively, you can build [a custom image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) which already contains the artifacts instead.
+`jobmanager-job.yaml` 中的 `args` 属性必须指定用户作业的主类。也可以参考[如何设置 JobManager 参数]({{< 
ref "docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments)来了解如何将额外的 `args` 传递给 
`jobmanager-job.yaml` 配置中指定的 Flink 镜像。
 
-After creating [the common cluster 
components](#common-cluster-resource-definitions), use [the Application cluster 
specific resource definitions](#application-cluster-resource-definitions) to 
launch the cluster with the `kubectl` command:
+*job artifacts* 参数必须可以从 [资源定义示例](#application-cluster-resource-definitions) 中的 
`job-artifacts-volume` 处获取。假如是在 minikube 集群中创建这些组件,那么定义示例中的 
job-artifacts-volume 可以挂载为主机的本地目录。如果不使用 minikube 集群,那么可以使用 Kubernetes 
集群中任何其它可用类型的 volume 来提供 *job artifacts* 。此外,还可以构建一个已经包含 *job artifacts* 
参数的[自定义镜像](({{< ref "docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization))。
+
+在创建[通用集群组件](#common-cluster-resource-definitions)后,指定 [Application 
集群资源定义](#application-cluster-resource-definitions)文件,执行 `kubectl` 命令来启动 Flink 
Application 集群:
 
 ```sh
     $ kubectl create -f jobmanager-job.yaml
     $ kubectl create -f taskmanager-job-deployment.yaml
 ```
 
-To terminate the single application cluster, these components can be deleted 
along with [the common ones](#common-cluster-resource-definitions)
-with the `kubectl` command:
+要停止单个 application 集群,可以使用 `kubectl` 命令来删除相应组件以及 
[通用集群资源](#common-cluster-resource-definitions)对应的组件 :
 
 ```sh
     $ kubectl delete -f taskmanager-job-deployment.yaml
     $ kubectl delete -f jobmanager-job.yaml
 ```
 
-### Per-Job Cluster Mode
-Flink on Standalone Kubernetes does not support the Per-Job Cluster Mode.
+### Per-Job 集群模式
+
+在 Kubernetes 上部署 Standalone 集群时不支持 Per-Job 集群模式。
 
-### Session Mode
+### Session 集群模式
 
-Deployment of a Session cluster is explained in the [Getting 
Started](#getting-started) guide at the top of this page.
+本页面顶部的[入门](#getting-started)指南中描述了 Session 集群模式的部署。
 
 {{< top >}}
 
-## Flink on Standalone Kubernetes Reference
+<a name="flink-on-standalone-kubernetes-reference"></a>
+
+## Kubernetes 上运行 Standalone 集群指南
+
+<a name="configuration"></a>
 
 ### Configuration
 
-All configuration options are listed on the [configuration page]({{< ref 
"docs/deployment/config" >}}). Configuration options can be added to the 
`flink-conf.yaml` section of the `flink-configuration-configmap.yaml` config 
map.
+所有配置项都展示在[配置页面]({{< ref "docs/deployment/config" >}})上。在 config map 配置文件 
`flink-configuration-configmap.yaml` 中,可以将配置添加在 `flink-conf.yaml` 部分。
 
-### Accessing Flink in Kubernetes
+<a name="accessing-flink-in-kubernetes"></a>
+
+### 在 Kubernets 上访问 Flink
+
+接下来可以访问 Flink UI 页面并通过不同的方式提交作业:
 
-You can then access the Flink UI and submit jobs via different ways:
 *  `kubectl proxy`:
 
-    1. Run `kubectl proxy` in a terminal.
-    2. Navigate to 
[http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy)
 in your browser.
+    1. 在终端运行 `kubectl proxy` 命令。
+    2. 在浏览器中导航到 
[http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy)。
 
 *  `kubectl port-forward`:
-    1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward 
your jobmanager's web ui port to local 8081.
-    2. Navigate to [http://localhost:8081](http://localhost:8081) in your 
browser.
-    3. Moreover, you can use the following command below to submit jobs to the 
cluster:
+    1. 运行 `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` 将 
jobmanager 的 web ui 端口映射到本地的 8081。
+    2. 在浏览器中导航到 [http://localhost:8081](http://localhost:8081)。
+    3. 此外,也可以使用如下命令向集群提交作业:
     ```bash
     $ ./bin/flink run -m localhost:8081 
./examples/streaming/TopSpeedWindowing.jar
     ```
 
-*  Create a `NodePort` service on the rest service of jobmanager:
-    1. Run `kubectl create -f jobmanager-rest-service.yaml` to create the 
`NodePort` service on jobmanager. The example of `jobmanager-rest-service.yaml` 
can be found in [appendix](#common-cluster-resource-definitions).
-    2. Run `kubectl get svc flink-jobmanager-rest` to know the `node-port` of 
this service and navigate to 
[http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>)
 in your browser.
-    3. If you use minikube, you can get its public ip by running `minikube ip`.
-    4. Similarly to the `port-forward` solution, you can also use the 
following command below to submit jobs to the cluster:
+*  基于 jobmanager 的 rest 服务上创建 `NodePort` service:
+    1. 运行 `kubectl create -f jobmanager-rest-service.yaml` 来基于 jobmanager 创建 
`NodePort` service。`jobmanager-rest-service.yaml` 的示例文件可以在 
[附录](#common-cluster-resource-definitions) 中找到。
+    2. 运行 `kubectl get svc flink-jobmanager-rest` 来查询 server 的 
`node-port`,然后再浏览器导航到 
[http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>)。
+    3. 如果使用 minikube 集群,可以执行 `minikube ip` 命令来查看 public ip。
+    4. 与 `port-forward` 方案类似,也可以使用如下命令向集群提交作业。
 
     ```bash
     $ ./bin/flink run -m <public-node-ip>:<node-port> 
./examples/streaming/TopSpeedWindowing.jar
     ```
 
-### Debugging and Log Access
 
-Many common errors are easy to detect by checking Flink's log files. If you 
have access to Flink's web user interface, you can access the JobManager and 
TaskManager logs from there.
 
-If there are problems starting Flink, you can also use Kubernetes utilities to 
access the logs. Use `kubectl get pods` to see all running pods.
-For the quickstart example from above, you should see three pods:
+<a name="debugging-and-log-access"></a>
+
+
+### 调试和访问日志
+
+通过查看 Flink 的日志文件,可以很轻松地发现许多常见错误。如果有权访问 Flink 的 Web 用户界面,那么可以在页面上访问 JobManager 
和 TaskManager 日志。
+
+如果启动 Flink 出现问题,也可以使用 Kubernetes 工具集访问日志。使用 `kubectl get pods` 命令查看所有运行的 pods 
资源。针对上面的快速入门示例,可以看到三个 pod:

Review comment:
       ```suggestion
   如果启动 Flink 出现问题,也可以使用 Kubernetes 工具集访问日志。使用 `kubectl get pods` 命令查看所有运行的 
pods 资源。针对上面的快速入门示例,你可以看到三个 pod:
   ```
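
   A small illustrative aside (not a requested change): for the log-access steps quoted above, a sketch of the `kubectl` calls one would typically reach for; the pod name is copied from the example output in the hunk and will differ in practice:

   ```sh
   # list the pods of the quickstart deployment
   $ kubectl get pods

   # tail the JobManager log
   $ kubectl logs -f flink-jobmanager-589967dcfc-m49xv

   # inspect scheduling/restart events if a pod never reaches Running
   $ kubectl describe pod flink-jobmanager-589967dcfc-m49xv
   ```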

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -25,61 +25,59 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Kubernetes Setup
+# Kubernetes 安装
 
-## Getting Started
+<a name="getting-started"></a>
 
-This *Getting Started* guide describes how to deploy a *Session cluster* on 
[Kubernetes](https://kubernetes.io).
+## 入门
 
-### Introduction
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
 
-This page describes deploying a [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) Flink cluster on 
top of Kubernetes, using Flink's standalone deployment.
-We generally recommend new users to deploy Flink on Kubernetes using [native 
Kubernetes deployments]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}).
+### 介绍
 
-### Preparation
+本文描述了如何使用 Flink standalone 部署模式在 Kubernetes 上部署 [standalone]({{< ref 
"docs/deployment/resource-providers/standalone/overview" >}}) 模式的 Flink 
集群。通常我们建议新用户使用 [native Kubernetes 部署]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 模式在 Kubernetes上部署 
Flink。
 
-This guide expects a Kubernetes environment to be present. You can ensure that 
your Kubernetes setup is working by running a command like `kubectl get nodes`, 
which lists all connected Kubelets. 
+### 准备
 
-If you want to run Kubernetes locally, we recommend using 
[MiniKube](https://minikube.sigs.k8s.io/docs/start/).
+本指南假设存在一个 Kubernets 的运行环境。可以通过运行 `kubectl get nodes` 命令来确保 Kubernetes 
环境运行正常,该命令展示所有连接到 Kubernets 集群的 node 节点信息。
+
+如果想在本地运行 Kubernetes,建议使用 [MiniKube](https://minikube.sigs.k8s.io/docs/start/)。
 
 {{< hint info >}}
-If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set 
docker0 promisc on'` before deploying a Flink cluster. Otherwise Flink 
components are not able to reference themselves through a Kubernetes service.
+如果使用 MiniKube,请确保在部署 Flink 集群之前先执行 `minikube ssh 'sudo ip link set docker0 
promisc on'`,否则 Flink 组件不能自动地将自己映射到 Kubernetes Service 中。
 {{< /hint >}}
 
-### Starting a Kubernetes Cluster (Session Mode)
+### Kubernetes 上的 Flink session 集群
 
-A *Flink Session cluster* is executed as a long-running Kubernetes Deployment. 
You can run multiple Flink jobs on a *Session cluster*.
-Each job needs to be submitted to the cluster after the cluster has been 
deployed.
+*Flink session 集群* 是以一种长期运行的 Kubernetes Deployment 形式执行的。可以在一个 *session 集群* 
上运行多个 Flink 作业。当然,只有 session 集群部署好以后才可以在上面提交 Flink 作业。
 
-A *Flink Session cluster* deployment in Kubernetes has at least three 
components:
+在 Kubernetes 上部署一个基本的 *Flink session 集群* 时,一般包括下面三个组件:
 
-* a *Deployment* which runs a [JobManager]({{< ref "docs/concepts/glossary" 
>}}#flink-jobmanager)
-* a *Deployment* for a pool of [TaskManagers]({{< ref "docs/concepts/glossary" 
>}}#flink-taskmanager)
-* a *Service* exposing the *JobManager's* REST and UI ports
+* 运行 [JobManager]({{< ref "docs/concepts/glossary" >}}#flink-jobmanager) 的 
*Deployment*;
+* 运行 [TaskManagers]({{< ref "docs/concepts/glossary" >}}#flink-taskmanager) 的 
*Deployment*;
+* 暴露 *JobManager* 上 REST 和 UI 端口的 *Service*;
 
-Using the file contents provided in the [the common resource 
definitions](#common-cluster-resource-definitions), create the following files, 
and create the respective components with the `kubectl` command:
+使用 [通用集群资源定义](#common-cluster-resource-definitions)中提供的文件内容来创建以下文件,并使用 
`kubectl` 命令来创建相应的组件:

Review comment:
       ```suggestion
   使用[通用集群资源定义](#common-cluster-resource-definitions)中提供的文件内容来创建以下文件,并使用 
`kubectl` 命令来创建相应的组件:
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式
 
-A basic *Flink Application cluster* deployment in Kubernetes has three 
components:
+*Flink Application 集群* 是运行单个 Application 的专用集群,部署集群时要保证该 Application 可用。
 
-* an *Application* which runs a *JobManager*
-* a *Deployment* for a pool of *TaskManagers*
-* a *Service* exposing the *JobManager's* REST and UI ports
+在 Kubernetes 上部署一个基本的 *Flink Application 集群* 时,一般包括下面三个组件:
 
-Check [the Application cluster specific resource 
definitions](#application-cluster-resource-definitions) and adjust them 
accordingly:
+* *Application* 作业,同时在该 *Application* 中运行 *JobManager*;

Review comment:
       IMO, `Application` is a special concept in K8s.
   Maybe keeping the raw content could be better.
   What about `* 一个运行 *JobManager* 的 *Application*`?

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -184,18 +190,21 @@ flink-taskmanager-64847444ff-7rdl4   1/1     Running      
      3          3m28s
 flink-taskmanager-64847444ff-nnd6m   1/1     Running            3          
3m28s
 ```
 
-You can now access the logs by running `kubectl logs 
flink-jobmanager-589967dcfc-m49xv`
+现在可以通过运行 `kubectl logs flink-jobmanager-589967dcfc-m49xv` 来访问日志。
 
-### High-Availability with Standalone Kubernetes
+<a name="high-availability-with-standalone-kubernetes"></a>
 
-For high availability on Kubernetes, you can use the [existing high 
availability services]({{< ref "docs/deployment/ha/overview" >}}).
+### Standalone 集群配置 HA

Review comment:
       What about '高可用的 Standalone Kubernetes'?
   Only a suggestion.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -184,18 +190,21 @@ flink-taskmanager-64847444ff-7rdl4   1/1     Running      
      3          3m28s
 flink-taskmanager-64847444ff-nnd6m   1/1     Running            3          
3m28s
 ```
 
-You can now access the logs by running `kubectl logs 
flink-jobmanager-589967dcfc-m49xv`
+现在可以通过运行 `kubectl logs flink-jobmanager-589967dcfc-m49xv` 来访问日志。
 
-### High-Availability with Standalone Kubernetes
+<a name="high-availability-with-standalone-kubernetes"></a>
 
-For high availability on Kubernetes, you can use the [existing high 
availability services]({{< ref "docs/deployment/ha/overview" >}}).
+### Standalone 集群配置 HA
 
-#### Kubernetes High-Availability Services
+对于在 Kubernetes 上实现HA,可以参考当前的 [Kubernets 高可用服务]({{< ref 
"docs/deployment/ha/overview" >}})。
 
-Session Mode and Application Mode clusters support using the [Kubernetes high 
availability service]({{< ref "docs/deployment/ha/kubernetes_ha" >}}).
-You need to add the following Flink config options to 
[flink-configuration-configmap.yaml](#common-cluster-resource-definitions).
+<a name="kubernetes-high-availability-services"></a>
 
-<span class="label label-info">Note</span> The filesystem which corresponds to 
the scheme of your configured HA storage directory must be available to the 
runtime. Refer to [custom Flink image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) and [enable plugins]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#using-filesystem-plugins) for more information.
+#### Kubernetes 高可用 Services
+
+Session 模式和 Application 模式集群支持使用 [Kubernetes 高可用服务]({{< ref 
"docs/deployment/ha/kubernetes_ha" >}})。需要在 
[flink-configuration-configmap.yaml](#common-cluster-resource-definitions) 
中添加如下 Flink 配置项。

Review comment:
       ```suggestion
   Session 模式和 Application 模式集群都支持使用 [Kubernetes 高可用服务]({{< ref 
"docs/deployment/ha/kubernetes_ha" >}})。需要在 
[flink-configuration-configmap.yaml](#common-cluster-resource-definitions) 
中添加如下 Flink 配置项。
   ```
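
   For readers of the thread, a hedged sketch of the "如下 Flink 配置项" this paragraph refers to, i.e. the HA options that go into the `flink-conf.yaml` section of `flink-configuration-configmap.yaml`; the keys come from the Flink configuration reference, the values are placeholders to adapt:

   ```sh
   # Append to the flink-conf.yaml section of flink-configuration-configmap.yaml
   # (cluster id and storage directory are placeholders):
   #
   #   kubernetes.cluster-id: standalone-k8s-ha
   #   high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
   #   high-availability.storageDir: hdfs:///flink/recovery
   #
   # then re-apply the config map:
   $ kubectl replace -f flink-configuration-configmap.yaml
   ```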

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -88,94 +86,102 @@ You can tear down the cluster using the following commands:
     $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-
 {{< top >}}
 
-## Deployment Modes
+<a name="deployment-modes"></a>
 
-### Deploy Application Cluster
+## 部署模式
 
-A *Flink Application cluster* is a dedicated cluster which runs a single 
application, which needs to be available at deployment time.
+### Application 集群模式
 
-A basic *Flink Application cluster* deployment in Kubernetes has three 
components:
+*Flink Application 集群* 是运行单个 Application 的专用集群,部署集群时要保证该 Application 可用。
 
-* an *Application* which runs a *JobManager*
-* a *Deployment* for a pool of *TaskManagers*
-* a *Service* exposing the *JobManager's* REST and UI ports
+在 Kubernetes 上部署一个基本的 *Flink Application 集群* 时,一般包括下面三个组件:
 
-Check [the Application cluster specific resource 
definitions](#application-cluster-resource-definitions) and adjust them 
accordingly:
+* *Application* 作业,同时在该 *Application* 中运行 *JobManager*;
+* 运行若干个 TaskManager 的 Deployment;
+* 暴露 JobManager 上 REST 和 UI 端口的 Service;
 
-The `args` attribute in the `jobmanager-job.yaml` has to specify the main 
class of the user job.
-See also [how to specify the JobManager arguments]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments) to understand
-how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
+检查 [Application 集群资源定义](#application-cluster-resource-definitions) 并相应地调整它们:
 
-The *job artifacts* should be available from the `job-artifacts-volume` in 
[the resource definition examples](#application-cluster-resource-definitions).
-The definition examples mount the volume as a local directory of the host 
assuming that you create the components in a minikube cluster.
-If you do not use a minikube cluster, you can use any other type of volume, 
available in your Kubernetes cluster, to supply the *job artifacts*.
-Alternatively, you can build [a custom image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) which already contains the artifacts instead.
+`jobmanager-job.yaml` 中的 `args` 属性必须指定用户作业的主类。也可以参考[如何设置 JobManager 参数]({{< 
ref "docs/deployment/resource-providers/standalone/docker" 
>}}#jobmanager-additional-command-line-arguments)来了解如何将额外的 `args` 传递给 
`jobmanager-job.yaml` 配置中指定的 Flink 镜像。
 
-After creating [the common cluster 
components](#common-cluster-resource-definitions), use [the Application cluster 
specific resource definitions](#application-cluster-resource-definitions) to 
launch the cluster with the `kubectl` command:
+*job artifacts* 参数必须可以从 [资源定义示例](#application-cluster-resource-definitions) 中的 
`job-artifacts-volume` 处获取。假如是在 minikube 集群中创建这些组件,那么定义示例中的 
job-artifacts-volume 可以挂载为主机的本地目录。如果不使用 minikube 集群,那么可以使用 Kubernetes 
集群中任何其它可用类型的 volume 来提供 *job artifacts* 。此外,还可以构建一个已经包含 *job artifacts* 
参数的[自定义镜像](({{< ref "docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization))。
+
+在创建[通用集群组件](#common-cluster-resource-definitions)后,指定 [Application 
集群资源定义](#application-cluster-resource-definitions)文件,执行 `kubectl` 命令来启动 Flink 
Application 集群:
 
 ```sh
     $ kubectl create -f jobmanager-job.yaml
     $ kubectl create -f taskmanager-job-deployment.yaml
 ```
 
-To terminate the single application cluster, these components can be deleted 
along with [the common ones](#common-cluster-resource-definitions)
-with the `kubectl` command:
+要停止单个 application 集群,可以使用 `kubectl` 命令来删除相应组件以及 
[通用集群资源](#common-cluster-resource-definitions)对应的组件 :
 
 ```sh
     $ kubectl delete -f taskmanager-job-deployment.yaml
     $ kubectl delete -f jobmanager-job.yaml
 ```
 
-### Per-Job Cluster Mode
-Flink on Standalone Kubernetes does not support the Per-Job Cluster Mode.
+### Per-Job 集群模式
+
+在 Kubernetes 上部署 Standalone 集群时不支持 Per-Job 集群模式。
 
-### Session Mode
+### Session 集群模式
 
-Deployment of a Session cluster is explained in the [Getting 
Started](#getting-started) guide at the top of this page.
+本页面顶部的[入门](#getting-started)指南中描述了 Session 集群模式的部署。
 
 {{< top >}}
 
-## Flink on Standalone Kubernetes Reference
+<a name="flink-on-standalone-kubernetes-reference"></a>
+
+## Kubernetes 上运行 Standalone 集群指南
+
+<a name="configuration"></a>
 
 ### Configuration
 
-All configuration options are listed on the [configuration page]({{< ref 
"docs/deployment/config" >}}). Configuration options can be added to the 
`flink-conf.yaml` section of the `flink-configuration-configmap.yaml` config 
map.
+所有配置项都展示在[配置页面]({{< ref "docs/deployment/config" >}})上。在 config map 配置文件 
`flink-configuration-configmap.yaml` 中,可以将配置添加在 `flink-conf.yaml` 部分。
 
-### Accessing Flink in Kubernetes
+<a name="accessing-flink-in-kubernetes"></a>
+
+### 在 Kubernets 上访问 Flink
+
+接下来可以访问 Flink UI 页面并通过不同的方式提交作业:
 
-You can then access the Flink UI and submit jobs via different ways:
 *  `kubectl proxy`:
 
-    1. Run `kubectl proxy` in a terminal.
-    2. Navigate to 
[http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy)
 in your browser.
+    1. 在终端运行 `kubectl proxy` 命令。
+    2. 在浏览器中导航到 
[http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy)。
 
 *  `kubectl port-forward`:
-    1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward 
your jobmanager's web ui port to local 8081.
-    2. Navigate to [http://localhost:8081](http://localhost:8081) in your 
browser.
-    3. Moreover, you can use the following command below to submit jobs to the 
cluster:
+    1. 运行 `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` 将 
jobmanager 的 web ui 端口映射到本地的 8081。
+    2. 在浏览器中导航到 [http://localhost:8081](http://localhost:8081)。
+    3. 此外,也可以使用如下命令向集群提交作业:
     ```bash
     $ ./bin/flink run -m localhost:8081 
./examples/streaming/TopSpeedWindowing.jar
     ```
 
-*  Create a `NodePort` service on the rest service of jobmanager:
-    1. Run `kubectl create -f jobmanager-rest-service.yaml` to create the 
`NodePort` service on jobmanager. The example of `jobmanager-rest-service.yaml` 
can be found in [appendix](#common-cluster-resource-definitions).
-    2. Run `kubectl get svc flink-jobmanager-rest` to know the `node-port` of 
this service and navigate to 
[http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>)
 in your browser.
-    3. If you use minikube, you can get its public ip by running `minikube ip`.
-    4. Similarly to the `port-forward` solution, you can also use the 
following command below to submit jobs to the cluster:
+*  基于 jobmanager 的 rest 服务上创建 `NodePort` service:
+    1. 运行 `kubectl create -f jobmanager-rest-service.yaml` 来基于 jobmanager 创建 
`NodePort` service。`jobmanager-rest-service.yaml` 的示例文件可以在 
[附录](#common-cluster-resource-definitions) 中找到。
+    2. 运行 `kubectl get svc flink-jobmanager-rest` 来查询 server 的 
`node-port`,然后再浏览器导航到 
[http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>)。
+    3. 如果使用 minikube 集群,可以执行 `minikube ip` 命令来查看 public ip。
+    4. 与 `port-forward` 方案类似,也可以使用如下命令向集群提交作业。
 
     ```bash
     $ ./bin/flink run -m <public-node-ip>:<node-port> 
./examples/streaming/TopSpeedWindowing.jar
     ```
 
-### Debugging and Log Access
 
-Many common errors are easy to detect by checking Flink's log files. If you 
have access to Flink's web user interface, you can access the JobManager and 
TaskManager logs from there.
 
-If there are problems starting Flink, you can also use Kubernetes utilities to 
access the logs. Use `kubectl get pods` to see all running pods.
-For the quickstart example from above, you should see three pods:
+<a name="debugging-and-log-access"></a>
+
+
+### 调试和访问日志
+
+通过查看 Flink 的日志文件,可以很轻松地发现许多常见错误。如果有权访问 Flink 的 Web 用户界面,那么可以在页面上访问 JobManager 
和 TaskManager 日志。

Review comment:
       ```suggestion
   通过查看 Flink 的日志文件,可以很轻松地发现许多常见错误。如果你有权访问 Flink 的 Web 用户界面,那么可以在页面上访问 
JobManager 和 TaskManager 日志。
   ```
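
   One more illustrative aside for the access section quoted above: after creating the `NodePort` service from `jobmanager-rest-service.yaml`, the assigned port can also be read back with a jsonpath query instead of scanning the full `kubectl get svc` output; the service name matches the appendix manifest:

   ```sh
   # resolve the node port assigned to the JobManager REST service
   $ kubectl get svc flink-jobmanager-rest -o jsonpath='{.spec.ports[0].nodePort}'

   # on minikube, the public IP of the single node
   $ minikube ip

   # then submit as in the hunk: ./bin/flink run -m <public-node-ip>:<node-port> ...
   ```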

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -184,18 +190,21 @@ flink-taskmanager-64847444ff-7rdl4   1/1     Running      
      3          3m28s
 flink-taskmanager-64847444ff-nnd6m   1/1     Running            3          
3m28s
 ```
 
-You can now access the logs by running `kubectl logs 
flink-jobmanager-589967dcfc-m49xv`
+现在可以通过运行 `kubectl logs flink-jobmanager-589967dcfc-m49xv` 来访问日志。
 
-### High-Availability with Standalone Kubernetes
+<a name="high-availability-with-standalone-kubernetes"></a>
 
-For high availability on Kubernetes, you can use the [existing high 
availability services]({{< ref "docs/deployment/ha/overview" >}}).
+### Standalone 集群配置 HA
 
-#### Kubernetes High-Availability Services
+对于在 Kubernetes 上实现HA,可以参考当前的 [Kubernets 高可用服务]({{< ref 
"docs/deployment/ha/overview" >}})。
 
-Session Mode and Application Mode clusters support using the [Kubernetes high 
availability service]({{< ref "docs/deployment/ha/kubernetes_ha" >}}).
-You need to add the following Flink config options to 
[flink-configuration-configmap.yaml](#common-cluster-resource-definitions).
+<a name="kubernetes-high-availability-services"></a>
 
-<span class="label label-info">Note</span> The filesystem which corresponds to 
the scheme of your configured HA storage directory must be available to the 
runtime. Refer to [custom Flink image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) and [enable plugins]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#using-filesystem-plugins) for more information.
+#### Kubernetes 高可用 Services
+
+Session 模式和 Application 模式集群支持使用 [Kubernetes 高可用服务]({{< ref 
"docs/deployment/ha/kubernetes_ha" >}})。需要在 
[flink-configuration-configmap.yaml](#common-cluster-resource-definitions) 
中添加如下 Flink 配置项。
+
+<span class="label label-info">Note</span> 配置了 HA 
存储目录相对应的文件系统必须在运行时可用。相关更多信息,请参阅 [自定义Flink 镜像]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) 和 [启用文件系统插件]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#using-filesystem-plugins) 。

Review comment:
       ```suggestion
   <span class="label label-info">Note</span> 配置了 HA 
存储目录相对应的文件系统必须在运行时可用。相关更多信息,请参阅 [自定义Flink 镜像]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization) 和 [启用文件系统插件]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#using-filesystem-plugins)。
   ```
   
   What about 
   `<span class="label label-info">Note</span> 配置了 HA 存储目录相对应的文件系统必须在运行时可用。请参阅 
[自定义Flink 镜像]({{< ref "docs/deployment/resource-providers/standalone/docker" 
>}}#advanced-customization)和[启用文件系统插件]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#using-filesystem-plugins) 获取更多相关信息。`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
