RocMarshal commented on a change in pull request #16780:
URL: https://github.com/apache/flink/pull/16780#discussion_r689437858



##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。

Review comment:
       ```suggestion
   此外,你必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。请查看[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)获取更多信息。
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -309,7 +337,7 @@ data:
     appender.rolling.strategy.type = DefaultRolloverStrategy
     appender.rolling.strategy.max = 10
 
-    # Suppress the irrelevant (wrong) warnings from the Netty channel handler
+    # 关闭 Netty channel handler 中的不相关(错误)警告

Review comment:
       `# 关闭 Netty channel handler 中不相关的(错误)警告` ?

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。
+
+当启用了高可用,Flink 会使用自己的 HA 服务进行服务发现。因此,JobManager Pod 会使用 IP 地址而不是 Kubernetes 的 
service 名称来作为 `jobmanager.rpc.address` 的配置项启动。完整配置请参考[附录](#appendix)。
+
+<a name="standby-jobManagers"></a>
+
+####  备用 JobManagers
+
+通常,只启动一个 JobManager pod 就足够了,因为一旦 pod 崩溃,Kubernetes 就会重新启动它。如果要实现更快的恢复,需要将 
`jobmanager-session-deployment-ha.yaml` 中的 `replicas` 配置 或 
`jobmanager-application-ha.yaml` 中的 `parallelism` 配置设定为大于 `1` 的值来启动备用 
JobManagers。
 
-When High-Availability is enabled, Flink will use its own HA-services for 
service discovery.
-Therefore, JobManager pods should be started with their IP address instead of 
a Kubernetes service as its `jobmanager.rpc.address`.
-Refer to the [appendix](#appendix) for full configuration.
+<a name="enabling-queryable-state"></a>
 
-#### Standby JobManagers
+### 启用 Queryable State
 
-Usually, it is enough to only start a single JobManager pod, because 
Kubernetes will restart it once the pod crashes.
-If you want to achieve faster recovery, configure the `replicas` in 
`jobmanager-session-deployment-ha.yaml` or `parallelism` in 
`jobmanager-application-ha.yaml` to a value greater than `1` to start standby 
JobManagers.
+如果为 TaskManager 创建一个 `NodePort` service,则可以访问 TaskManager 的 Queryable State 服务:
 
-### Enabling Queryable State
+  1. 运行 `kubectl create -f taskmanager-query-state-service.yaml` 来为 
`taskmanager` pod 创建 `NodePort` service。`taskmanager-query-state-service.yaml` 
的示例文件可以从[附录](#common-cluster-resource-definitions)中找到。
+  2. 运行 `kubectl get svc flink-taskmanager-query-state` 来查询 service 对应 
node-port 的端口号。然后可以创建 [QueryableStateClient(&lt;public-node-ip&gt;, 
&lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) 
来提交状态查询。
 
-You can access the queryable state of TaskManager if you create a `NodePort` 
service for it:
-  1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create 
the `NodePort` service for the `taskmanager` pod. The example of 
`taskmanager-query-state-service.yaml` can be found in 
[appendix](#common-cluster-resource-definitions).
-  2. Run `kubectl get svc flink-taskmanager-query-state` to get the 
`<node-port>` of this service. Then you can create the 
[QueryableStateClient(&lt;public-node-ip&gt;, &lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) to 
submit state queries.
+<a name="using-standalone-kubernetes-with-reactive-mode"></a>
 
-### Using Standalone Kubernetes with Reactive Mode
+### 在 Reactive 模式下使用 Standalone Kubernetes
 
-[Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) 
allows to run Flink in a mode, where the *Application Cluster* is always 
adjusting the job parallelism to the available resources. In combination with 
Kubernetes, the replica count of the TaskManager deployment determines the 
available resources. Increasing the replica count will scale up the job, 
reducing it will trigger a scale down. This can also be done automatically by 
using a [Horizontal Pod 
Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+[Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) 
允许在一种模式下运行 Flink,在这种模式下,*Application 集群* 始终根据可用资源调整作业并行度。与 Kubernetes 
结合使用,TaskManager 部署的副本数决定了可用资源。增加副本数将扩大作业规模,减少它会触发缩小。通过使用 [Horizontal Pod 
Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 也可以自动实现该功能。
 
-To use Reactive Mode on Kubernetes, follow the same steps as for [deploying a 
job using an Application Cluster](#deploy-application-cluster). But instead of 
`flink-configuration-configmap.yaml` use this config map: 
`flink-reactive-mode-configuration-configmap.yaml`. It contains the 
`scheduler-mode: reactive` setting for Flink.
+要在 Kubernetes 上使用 Reactive Mode,请按照[使用 Application 
集群部署作业](#deploy-application-cluster) 执行相同的操作。但是要使用 
`flink-reactive-mode-configuration-configmap.yaml` 配置文件来代替 
`flink-configuration-configmap.yaml`。该文件包含了针对 Flink 的 `scheduler-mode: 
reactive` 配置。

Review comment:
       ```suggestion
   要在 Kubernetes 上使用 Reactive Mode,请按照[使用 Application 
集群部署作业](#deploy-application-cluster) 完成相同的步骤。但是要使用 
`flink-reactive-mode-configuration-configmap.yaml` 配置文件来代替 
`flink-configuration-configmap.yaml`。该文件包含了针对 Flink 的 `scheduler-mode: 
reactive` 配置。
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。
+
+当启用了高可用,Flink 会使用自己的 HA 服务进行服务发现。因此,JobManager Pod 会使用 IP 地址而不是 Kubernetes 的 
service 名称来作为 `jobmanager.rpc.address` 的配置项启动。完整配置请参考[附录](#appendix)。
+
+<a name="standby-jobManagers"></a>
+
+####  备用 JobManagers
+
+通常,只启动一个 JobManager pod 就足够了,因为一旦 pod 崩溃,Kubernetes 就会重新启动它。如果要实现更快的恢复,需要将 
`jobmanager-session-deployment-ha.yaml` 中的 `replicas` 配置 或 
`jobmanager-application-ha.yaml` 中的 `parallelism` 配置设定为大于 `1` 的值来启动备用 
JobManagers。
 
-When High-Availability is enabled, Flink will use its own HA-services for 
service discovery.
-Therefore, JobManager pods should be started with their IP address instead of 
a Kubernetes service as its `jobmanager.rpc.address`.
-Refer to the [appendix](#appendix) for full configuration.
+<a name="enabling-queryable-state"></a>
 
-#### Standby JobManagers
+### 启用 Queryable State
 
-Usually, it is enough to only start a single JobManager pod, because 
Kubernetes will restart it once the pod crashes.
-If you want to achieve faster recovery, configure the `replicas` in 
`jobmanager-session-deployment-ha.yaml` or `parallelism` in 
`jobmanager-application-ha.yaml` to a value greater than `1` to start standby 
JobManagers.
+如果为 TaskManager 创建一个 `NodePort` service,则可以访问 TaskManager 的 Queryable State 服务:
 
-### Enabling Queryable State
+  1. 运行 `kubectl create -f taskmanager-query-state-service.yaml` 来为 
`taskmanager` pod 创建 `NodePort` service。`taskmanager-query-state-service.yaml` 
的示例文件可以从[附录](#common-cluster-resource-definitions)中找到。
+  2. 运行 `kubectl get svc flink-taskmanager-query-state` 来查询 service 对应 
node-port 的端口号。然后可以创建 [QueryableStateClient(&lt;public-node-ip&gt;, 
&lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) 
来提交状态查询。

Review comment:
       ```suggestion
     2. 运行 `kubectl get svc flink-taskmanager-query-state` 来查询 service 对应 
node-port 的端口号。然后你就可以创建 [QueryableStateClient(&lt;public-node-ip&gt;, 
&lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) 
来提交状态查询。
   ```
   Only a suggestion.
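The node-port lookup in step 2 can be sketched in shell. The service name `flink-taskmanager-query-state` comes from the docs under review; the captured output line below is illustrative, so the parsing can be shown without a running cluster:

```shell
# Illustrative line as `kubectl get svc flink-taskmanager-query-state` might print it
# (captured here as a string; a real run needs a cluster)
svc_line='flink-taskmanager-query-state   NodePort   10.96.12.34   <none>   6125:30025/TCP   2m'

# The PORT(S) column is "<port>:<node-port>/TCP"; extract the node-port part
node_port=$(echo "$svc_line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$node_port"   # prints 30025
```

The extracted value is what you would pass as `<node-port>` to `QueryableStateClient(<public-node-ip>, <node-port>)`.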

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -268,18 +296,18 @@ data:
     taskmanager.memory.process.size: 1728m
     parallelism.default: 2
   log4j-console.properties: |+
-    # This affects logging for both user code and Flink
+    # 如下配置会同时影响用户代码和 Flink 的日志行为
     rootLogger.level = INFO
     rootLogger.appenderRef.console.ref = ConsoleAppender
     rootLogger.appenderRef.rolling.ref = RollingFileAppender
 
-    # Uncomment this if you want to _only_ change Flink's logging
+    # 如果只想改变 Flink 的日志行为可以取消如下的注释符
     #logger.flink.name = org.apache.flink
     #logger.flink.level = INFO
 
-    # The following lines keep the log level of common libraries/connectors on
-    # log level INFO. The root logger does not override this. You have to 
manually
-    # change the log levels here.
+    # 下面几行将公共 libraries 或 connectors 的日志级别保持在 INFO 级别。
+    # root logger 的配置不会覆盖此处配置。
+    # 必须手动修改这里的日志级别。

Review comment:
       ```suggestion
       # 你必须手动修改这里的日志级别。
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。
+
+当启用了高可用,Flink 会使用自己的 HA 服务进行服务发现。因此,JobManager Pod 会使用 IP 地址而不是 Kubernetes 的 
service 名称来作为 `jobmanager.rpc.address` 的配置项启动。完整配置请参考[附录](#appendix)。
+
+<a name="standby-jobManagers"></a>
+
+####  备用 JobManagers
+
+通常,只启动一个 JobManager pod 就足够了,因为一旦 pod 崩溃,Kubernetes 就会重新启动它。如果要实现更快的恢复,需要将 
`jobmanager-session-deployment-ha.yaml` 中的 `replicas` 配置 或 
`jobmanager-application-ha.yaml` 中的 `parallelism` 配置设定为大于 `1` 的值来启动备用 
JobManagers。
 
-When High-Availability is enabled, Flink will use its own HA-services for 
service discovery.
-Therefore, JobManager pods should be started with their IP address instead of 
a Kubernetes service as its `jobmanager.rpc.address`.
-Refer to the [appendix](#appendix) for full configuration.
+<a name="enabling-queryable-state"></a>
 
-#### Standby JobManagers
+### 启用 Queryable State
 
-Usually, it is enough to only start a single JobManager pod, because 
Kubernetes will restart it once the pod crashes.
-If you want to achieve faster recovery, configure the `replicas` in 
`jobmanager-session-deployment-ha.yaml` or `parallelism` in 
`jobmanager-application-ha.yaml` to a value greater than `1` to start standby 
JobManagers.
+如果为 TaskManager 创建一个 `NodePort` service,则可以访问 TaskManager 的 Queryable State 服务:
 
-### Enabling Queryable State
+  1. 运行 `kubectl create -f taskmanager-query-state-service.yaml` 来为 
`taskmanager` pod 创建 `NodePort` service。`taskmanager-query-state-service.yaml` 
的示例文件可以从[附录](#common-cluster-resource-definitions)中找到。
+  2. 运行 `kubectl get svc flink-taskmanager-query-state` 来查询 service 对应 
node-port 的端口号。然后可以创建 [QueryableStateClient(&lt;public-node-ip&gt;, 
&lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) 
来提交状态查询。
 
-You can access the queryable state of TaskManager if you create a `NodePort` 
service for it:
-  1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create 
the `NodePort` service for the `taskmanager` pod. The example of 
`taskmanager-query-state-service.yaml` can be found in 
[appendix](#common-cluster-resource-definitions).
-  2. Run `kubectl get svc flink-taskmanager-query-state` to get the 
`<node-port>` of this service. Then you can create the 
[QueryableStateClient(&lt;public-node-ip&gt;, &lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) to 
submit state queries.
+<a name="using-standalone-kubernetes-with-reactive-mode"></a>
 
-### Using Standalone Kubernetes with Reactive Mode
+### 在 Reactive 模式下使用 Standalone Kubernetes
 
-[Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) 
allows to run Flink in a mode, where the *Application Cluster* is always 
adjusting the job parallelism to the available resources. In combination with 
Kubernetes, the replica count of the TaskManager deployment determines the 
available resources. Increasing the replica count will scale up the job, 
reducing it will trigger a scale down. This can also be done automatically by 
using a [Horizontal Pod 
Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+[Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) 
允许在一种模式下运行 Flink,在这种模式下,*Application 集群* 始终根据可用资源调整作业并行度。与 Kubernetes 
结合使用,TaskManager 部署的副本数决定了可用资源。增加副本数将扩大作业规模,减少它会触发缩小。通过使用 [Horizontal Pod 
Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 也可以自动实现该功能。
 
-To use Reactive Mode on Kubernetes, follow the same steps as for [deploying a 
job using an Application Cluster](#deploy-application-cluster). But instead of 
`flink-configuration-configmap.yaml` use this config map: 
`flink-reactive-mode-configuration-configmap.yaml`. It contains the 
`scheduler-mode: reactive` setting for Flink.
+要在 Kubernetes 上使用 Reactive Mode,请按照[使用 Application 
集群部署作业](#deploy-application-cluster) 执行相同的操作。但是要使用 
`flink-reactive-mode-configuration-configmap.yaml` 配置文件来代替 
`flink-configuration-configmap.yaml`。该文件包含了针对 Flink 的 `scheduler-mode: 
reactive` 配置。
 
-Once you have deployed the *Application Cluster*, you can scale your job up or 
down by changing the replica count in the `flink-taskmanager` deployment.
+一旦部署了 *Application 集群*,就可以通过修改 `flink-taskmanager` 的部署副本数量来扩大或缩小作业的并行度。

Review comment:
       ```suggestion
   一旦你部署了 *Application 集群*,就可以通过修改 `flink-taskmanager` 的部署副本数量来扩大或缩小作业的并行度。
   ```
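The scale-up/down described here amounts to changing one field of the `flink-taskmanager` Deployment; a minimal fragment as a sketch (the replica value is illustrative):

```yaml
# Fragment of the flink-taskmanager Deployment spec (sketch):
# raising replicas scales the job up, lowering it triggers a scale down.
spec:
  replicas: 4   # illustrative value
```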

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。
+
+当启用了高可用,Flink 会使用自己的 HA 服务进行服务发现。因此,JobManager Pod 会使用 IP 地址而不是 Kubernetes 的 
service 名称来作为 `jobmanager.rpc.address` 的配置项启动。完整配置请参考[附录](#appendix)。
+
+<a name="standby-jobManagers"></a>
+
+####  备用 JobManagers

Review comment:
       Keep the original content?

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。
+
+当启用了高可用,Flink 会使用自己的 HA 服务进行服务发现。因此,JobManager Pod 会使用 IP 地址而不是 Kubernetes 的 
service 名称来作为 `jobmanager.rpc.address` 的配置项启动。完整配置请参考[附录](#appendix)。
+
+<a name="standby-jobManagers"></a>
+
+####  备用 JobManagers
+
+通常,只启动一个 JobManager pod 就足够了,因为一旦 pod 崩溃,Kubernetes 就会重新启动它。如果要实现更快的恢复,需要将 
`jobmanager-session-deployment-ha.yaml` 中的 `replicas` 配置 或 
`jobmanager-application-ha.yaml` 中的 `parallelism` 配置设定为大于 `1` 的值来启动备用 
JobManagers。
 
-When High-Availability is enabled, Flink will use its own HA-services for 
service discovery.
-Therefore, JobManager pods should be started with their IP address instead of 
a Kubernetes service as its `jobmanager.rpc.address`.
-Refer to the [appendix](#appendix) for full configuration.
+<a name="enabling-queryable-state"></a>
 
-#### Standby JobManagers
+### 启用 Queryable State
 
-Usually, it is enough to only start a single JobManager pod, because 
Kubernetes will restart it once the pod crashes.
-If you want to achieve faster recovery, configure the `replicas` in 
`jobmanager-session-deployment-ha.yaml` or `parallelism` in 
`jobmanager-application-ha.yaml` to a value greater than `1` to start standby 
JobManagers.
+如果为 TaskManager 创建一个 `NodePort` service,则可以访问 TaskManager 的 Queryable State 服务:

Review comment:
       ```suggestion
   如果你为 TaskManager 创建了 `NodePort` service,那么你就可以访问 TaskManager 的 Queryable 
State 服务:
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -268,18 +296,18 @@ data:
     taskmanager.memory.process.size: 1728m
     parallelism.default: 2
   log4j-console.properties: |+
-    # This affects logging for both user code and Flink
+    # 如下配置会同时影响用户代码和 Flink 的日志行为
     rootLogger.level = INFO
     rootLogger.appenderRef.console.ref = ConsoleAppender
     rootLogger.appenderRef.rolling.ref = RollingFileAppender
 
-    # Uncomment this if you want to _only_ change Flink's logging
+    # 如果只想改变 Flink 的日志行为可以取消如下的注释符

Review comment:
       ```suggestion
       # 如果你只想改变 Flink 的日志行为则可以取消如下的注释部分
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -499,7 +533,7 @@ kind: Deployment
 metadata:
   name: flink-jobmanager
 spec:
-  replicas: 1 # Set the value to greater than 1 to start standby JobManagers
+  replicas: 1 # 通过设置大于 1 的值来开启备用 JobManager

Review comment:
       ```suggestion
     replicas: 1 # 通过设置大于 1 的整型值来开启备用 JobManager
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。
+
+当启用了高可用,Flink 会使用自己的 HA 服务进行服务发现。因此,JobManager Pod 会使用 IP 地址而不是 Kubernetes 的 
service 名称来作为 `jobmanager.rpc.address` 的配置项启动。完整配置请参考[附录](#appendix)。
+
+<a name="standby-jobManagers"></a>
+
+####  备用 JobManagers
+
+通常,只启动一个 JobManager pod 就足够了,因为一旦 pod 崩溃,Kubernetes 就会重新启动它。如果要实现更快的恢复,需要将 
`jobmanager-session-deployment-ha.yaml` 中的 `replicas` 配置 或 
`jobmanager-application-ha.yaml` 中的 `parallelism` 配置设定为大于 `1` 的值来启动备用 
JobManagers。
 
-When High-Availability is enabled, Flink will use its own HA-services for 
service discovery.
-Therefore, JobManager pods should be started with their IP address instead of 
a Kubernetes service as its `jobmanager.rpc.address`.
-Refer to the [appendix](#appendix) for full configuration.
+<a name="enabling-queryable-state"></a>
 
-#### Standby JobManagers
+### 启用 Queryable State
 
-Usually, it is enough to only start a single JobManager pod, because 
Kubernetes will restart it once the pod crashes.
-If you want to achieve faster recovery, configure the `replicas` in 
`jobmanager-session-deployment-ha.yaml` or `parallelism` in 
`jobmanager-application-ha.yaml` to a value greater than `1` to start standby 
JobManagers.
+如果为 TaskManager 创建一个 `NodePort` service,则可以访问 TaskManager 的 Queryable State 服务:
 
-### Enabling Queryable State
+  1. 运行 `kubectl create -f taskmanager-query-state-service.yaml` 来为 
`taskmanager` pod 创建 `NodePort` service。`taskmanager-query-state-service.yaml` 
的示例文件可以从[附录](#common-cluster-resource-definitions)中找到。
+  2. 运行 `kubectl get svc flink-taskmanager-query-state` 来查询 service 对应 
node-port 的端口号。然后可以创建 [QueryableStateClient(&lt;public-node-ip&gt;, 
&lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) 
来提交状态查询。
 
-You can access the queryable state of TaskManager if you create a `NodePort` 
service for it:
-  1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create 
the `NodePort` service for the `taskmanager` pod. The example of 
`taskmanager-query-state-service.yaml` can be found in 
[appendix](#common-cluster-resource-definitions).
-  2. Run `kubectl get svc flink-taskmanager-query-state` to get the 
`<node-port>` of this service. Then you can create the 
[QueryableStateClient(&lt;public-node-ip&gt;, &lt;node-port&gt;]({{< ref 
"docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) to 
submit state queries.
+<a name="using-standalone-kubernetes-with-reactive-mode"></a>
 
-### Using Standalone Kubernetes with Reactive Mode
+### 在 Reactive 模式下使用 Standalone Kubernetes
 
-[Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) 
allows to run Flink in a mode, where the *Application Cluster* is always 
adjusting the job parallelism to the available resources. In combination with 
Kubernetes, the replica count of the TaskManager deployment determines the 
available resources. Increasing the replica count will scale up the job, 
reducing it will trigger a scale down. This can also be done automatically by 
using a [Horizontal Pod 
Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+[Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) 
允许在一种模式下运行 Flink,在这种模式下,*Application 集群* 始终根据可用资源调整作业并行度。与 Kubernetes 
结合使用,TaskManager 部署的副本数决定了可用资源。增加副本数将扩大作业规模,减少它会触发缩小。通过使用 [Horizontal Pod 
Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 也可以自动实现该功能。

Review comment:
       ```suggestion
   [Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) 
允许在 *Application 集群* 始终根据可用资源调整作业并行度的模式下运行 Flink。与 Kubernetes 结合使用,TaskManager 
部署的副本数决定了可用资源。增加副本数将扩大作业规模,而减少副本数将会触发缩减作业规模。通过使用 [Horizontal Pod 
Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 也可以自动实现该功能。
   ```
   Only a suggestion. Maybe you could translate it better.
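The Horizontal Pod Autoscaler mentioned at the end can drive the TaskManager replica count automatically. A minimal manifest sketch, assuming the Deployment is named `flink-taskmanager` as in the docs; the bounds and CPU target are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flink-taskmanager
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flink-taskmanager
  minReplicas: 1
  maxReplicas: 10        # illustrative upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # illustrative CPU target
```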

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -215,38 +236,45 @@ data:
   ...
 ```
 
-Moreover, you have to start the JobManager and TaskManager pods with a service 
account which has the permissions to create, edit, delete ConfigMaps.
-See [how to configure service accounts for 
pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 for more information.
+此外,必须使用具有创建、编辑、删除 ConfigMap 权限的 service 账号启动 JobManager 和 TaskManager 
pod。更多信息,请参考[如何为 pod 配置 service 
账号](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
 。
+
+当启用了高可用,Flink 会使用自己的 HA 服务进行服务发现。因此,JobManager Pod 会使用 IP 地址而不是 Kubernetes 的 
service 名称来作为 `jobmanager.rpc.address` 的配置项启动。完整配置请参考[附录](#appendix)。
+
+<a name="standby-jobManagers"></a>
+
+####  备用 JobManagers
+
+通常,只启动一个 JobManager pod 就足够了,因为一旦 pod 崩溃,Kubernetes 就会重新启动它。如果要实现更快的恢复,需要将 
`jobmanager-session-deployment-ha.yaml` 中的 `replicas` 配置 或 
`jobmanager-application-ha.yaml` 中的 `parallelism` 配置设定为大于 `1` 的值来启动备用 
JobManagers。

Review comment:
       大于 `1` 的值来启动备用 JobManagers。--> 大于 `1` 的整型值来启动 Standby JobManagers。?
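For reference, the standby setup this comment discusses comes down to this field in `jobmanager-session-deployment-ha.yaml` (sketch):

```yaml
spec:
  replicas: 2   # one active JobManager plus one standby
```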

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -289,13 +317,13 @@ data:
     logger.zookeeper.name = org.apache.zookeeper
     logger.zookeeper.level = INFO
 
-    # Log all infos to the console
+    # 将所有 info 级别的日志输出到 console
     appender.console.name = ConsoleAppender
     appender.console.type = CONSOLE
     appender.console.layout.type = PatternLayout
     appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c 
%x - %m%n
 
-    # Log all infos in the given rolling file
+    # 将所有 info 级别的日志输出到 rolling file

Review comment:
       ```suggestion
       # 将所有 info 级别的日志输出到指定的 rolling file
   ```

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -359,13 +388,13 @@ data:
     logger.zookeeper.name = org.apache.zookeeper
     logger.zookeeper.level = INFO
 
-    # Log all infos to the console
+    # 将所有 info 级别的日志输出到 console
     appender.console.name = ConsoleAppender
     appender.console.type = CONSOLE
     appender.console.layout.type = PatternLayout
     appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c 
%x - %m%n
 
-    # Log all infos in the given rolling file
+    # 将所有 info 级别的日志输出到 rolling file

Review comment:
       here, too.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -338,18 +366,19 @@ data:
     scheduler-mode: reactive
     execution.checkpointing.interval: 10s
   log4j-console.properties: |+
-    # This affects logging for both user code and Flink
+    # 如下配置会同时影响用户代码和 Flink 的日志行为
     rootLogger.level = INFO
     rootLogger.appenderRef.console.ref = ConsoleAppender
     rootLogger.appenderRef.rolling.ref = RollingFileAppender
 
-    # Uncomment this if you want to _only_ change Flink's logging
+    # 如果只想改变 Flink 的日志行为可以取消如下的注释符
     #logger.flink.name = org.apache.flink
     #logger.flink.level = INFO
 
-    # The following lines keep the log level of common libraries/connectors on
-    # log level INFO. The root logger does not override this. You have to 
manually
-    # change the log levels here.
+
+    # 下面几行将公共 libraries 或 connectors 的日志级别保持在 INFO 级别。
+    # root logger 的配置不会覆盖此处配置。
+    # 必须手动修改这里的日志级别。

Review comment:
       here, too.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -659,7 +695,7 @@ kind: Job
 metadata:
   name: flink-jobmanager
 spec:
-  parallelism: 1 # Set the value to greater than 1 to start standby JobManagers
+  parallelism: 1 # 通过设置大于 1 的值来开启备用 JobManager

Review comment:
       here, too.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -379,12 +408,13 @@ data:
     appender.rolling.strategy.type = DefaultRolloverStrategy
     appender.rolling.strategy.max = 10
 
-    # Suppress the irrelevant (wrong) warnings from the Netty channel handler
+    # 关闭 Netty channel handler 中的不相关(错误)警告

Review comment:
       here, too.

##########
File path: 
docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -338,18 +366,19 @@ data:
     scheduler-mode: reactive
     execution.checkpointing.interval: 10s
   log4j-console.properties: |+
-    # This affects logging for both user code and Flink
+    # 如下配置会同时影响用户代码和 Flink 的日志行为
     rootLogger.level = INFO
     rootLogger.appenderRef.console.ref = ConsoleAppender
     rootLogger.appenderRef.rolling.ref = RollingFileAppender
 
-    # Uncomment this if you want to _only_ change Flink's logging
+    # 如果只想改变 Flink 的日志行为可以取消如下的注释符

Review comment:
       Same as mentioned above.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

