Myasuka commented on a change in pull request #16084:
URL: https://github.com/apache/flink/pull/16084#discussion_r646156954
##########
File path: docs/content.zh/docs/deployment/ha/overview.md
##########
@@ -26,56 +26,50 @@ specific language governing permissions and limitations
under the License.
-->
-# High Availability
+# 高可用
-JobManager High Availability (HA) hardens a Flink cluster against JobManager failures.
-This feature ensures that a Flink cluster will always continue executing your submitted jobs.
+JobManager 高可用 (HA) 加强了 Flink 集群防止 JobManager 故障的能力。
+此特性确保 Flink 集群将始终持续执行您提交的作业。
-## JobManager High Availability
+## JobManager 高可用
-The JobManager coordinates every Flink deployment.
-It is responsible for both *scheduling* and *resource management*.
+JobManager 协调每个 Flink 的部署。它同时负责 *调度* 和 *资源管理*。
-By default, there is a single JobManager instance per Flink cluster.
-This creates a *single point of failure* (SPOF): if the JobManager crashes, no new programs can be submitted and running programs fail.
+默认情况下,每个 Flink 集群只有一个 JobManager 实例。这会导致 *单点故障(SPOF)*:如果 JobManager 崩溃,则不能提交任何新程序,运行中的程序也会失败。
-With JobManager High Availability, you can recover from JobManager failures and thereby eliminate the *SPOF*.
-You can configure high availability for every cluster deployment.
-See the [list of available high availability services](#high-availability-services) for more information.
+使用 JobManager 高可用特性,您可以从 JobManager 失败中恢复,从而消除 SPOF。您可以为每个集群部署配置高可用特性。
+有关更多信息,请参阅 [高可用服务](#high-availability-services)。
-### How to make a cluster highly available
+### 如何启用集群高可用
-The general idea of JobManager High Availability is that there is a *single leading JobManager* at any time and *multiple standby JobManagers* to take over leadership in case the leader fails.
-This guarantees that there is *no single point of failure* and programs can make progress as soon as a standby JobManager has taken leadership.
+JobManager 高可用的一般概念是指,在任何时候都有 *一个领导者 JobManager*,如果领导者出现故障,则有多个备用 JobManager 来接管领导。这保证了 *不存在单点故障*,只要有备用 JobManager 担任领导者,程序就可以继续运行。
-As an example, consider the following setup with three JobManager instances:
+如下是一个使用三个 JobManager 实例的例子:
{{< img src="/fig/jobmanager_ha_overview.png" class="center" >}}
-Flink's [high availability services](#high-availability-services) encapsulate the required services to make everything work:
-* **Leader election**: Selecting a single leader out of a pool of `n` candidates
-* **Service discovery**: Retrieving the address of the current leader
-* **State persistence**: Persisting state which is required for the successor to resume the job execution (JobGraphs, user code jars, completed checkpoints)
+Flink 的 [高可用服务](#high-availability-services) 封装了所需的服务,使一切可以正常工作:
+* **领导者选举**:从 `n` 个候选者中选出一个领导者
+* **服务发现**:检索当前领导者的地址
+* **状态持久化**:继承程序恢复作业所需的持久化状态(JobGraphs、用户代码jar、已完成的检查点)
{{< top >}}
-## High Availability Services
+<a name="high-availability-services" />
-Flink ships with two high availability service implementations:
+## 高可用服务
-* [ZooKeeper]({{< ref "docs/deployment/ha/zookeeper_ha" >}}):
-ZooKeeper HA services can be used with every Flink cluster deployment.
-They require a running ZooKeeper quorum.
+Flink 提供了两种高可用服务实现:
-* [Kubernetes]({{< ref "docs/deployment/ha/kubernetes_ha" >}}):
-Kubernetes HA services only work when running on Kubernetes.
+
+* [ZooKeeper]({{< ref "docs/deployment/ha/zookeeper_ha" >}}):每个 Flink 集群部署都可以使用 ZooKeeper HA 服务。它们需要一个运行的 ZooKeeper 复制组(quorum)。
+
+* [Kubernetes]({{< ref "docs/deployment/ha/kubernetes_ha" >}}):Kubernetes HA 服务只能运行在 Kubernetes 上。
{{< top >}}
-## High Availability data lifecycle
+## 高可用数据生命周期
-In order to recover submitted jobs, Flink persists metadata and the job artifacts.
-The HA data will be kept until the respective job either succeeds, is cancelled or fails terminally.
-Once this happens, all the HA data, including the metadata stored in the HA services, will be deleted.
+为了恢复提交的作业,Flink 持久化元数据和 job 组件。高可用(HA)数据将一直保存,直到相应的作业执行成功、被取消或最终失败。一旦发生这种情况,将删除所有高可用(HA)数据,包括存储在高可用(HA)服务中的元数据。
Review comment:
       When translating `高可用`, does it need to be followed by `HA` every time?
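       For illustration only, a sketch of how the sentence above might read if the `(HA)` gloss were kept on the first mention and dropped afterwards (an editor's sketch, not the reviewer's wording):
       ```markdown
       为了恢复提交的作业,Flink 持久化元数据和 job 组件。高可用(HA)数据将一直保存,直到相应的作业执行成功、被取消或最终失败。一旦发生这种情况,将删除所有高可用数据,包括存储在高可用服务中的元数据。
       ```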
##########
File path: docs/content.zh/docs/deployment/ha/zookeeper_ha.md
##########
@@ -24,113 +24,104 @@ specific language governing permissions and limitations
under the License.
-->
-# ZooKeeper HA Services
+# ZooKeeper 高可用服务
-Flink's ZooKeeper HA services use [ZooKeeper](http://zookeeper.apache.org) for high availability services.
+Flink 的 ZooKeeper 高可用服务使用 [ZooKeeper](http://zookeeper.apache.org) 提供高可用服务。
-Flink leverages **[ZooKeeper](http://zookeeper.apache.org)** for *distributed coordination* between all running JobManager instances.
-ZooKeeper is a separate service from Flink, which provides highly reliable distributed coordination via leader election and light-weight consistent state storage.
-Check out [ZooKeeper's Getting Started Guide](http://zookeeper.apache.org/doc/current/zookeeperStarted.html) for more information about ZooKeeper.
-Flink includes scripts to [bootstrap a simple ZooKeeper](#bootstrap-zookeeper) installation.
+Flink 利用 **[ZooKeeper](http://zookeeper.apache.org)** 在所有运行的 JobManager 实例之间进行 *分布式协调*。ZooKeeper 是一个独立于 Flink 的服务,它通过 leader 选举和轻量级的一致性状态存储来提供高可靠的分布式协调。查看 [ZooKeeper入门指南](http://zookeeper.apache.org/doc/current/zookeeperStarted.html),了解更多关于 ZooKeeper 的信息。Flink 包含 [启动一个简单的ZooKeeper](#bootstrap-zookeeper) 的安装脚本。
-## Configuration
+## 配置
-In order to start an HA-cluster you have to configure the following configuration keys:
+为了启用高可用集群(HA-cluster),您必须设置以下配置项:
-- [high-availability]({{< ref "docs/deployment/config" >}}#high-availability-1) (required):
-The `high-availability` option has to be set to `zookeeper`.
+- [high-availability]({{< ref "docs/deployment/config" >}}#high-availability-1) (必要的):
+ `high-availability` 配置项必须设置为 `zookeeper`。
<pre>high-availability: zookeeper</pre>
-- [high-availability.storageDir]({{< ref "docs/deployment/config" >}}#high-availability-storagedir) (required):
-JobManager metadata is persisted in the file system `high-availability.storageDir` and only a pointer to this state is stored in ZooKeeper.
+- [high-availability.storageDir]({{< ref "docs/deployment/config" >}}#high-availability-storagedir) (必要的):
+ JobManager 元数据持久化到文件系统 `high-availability.storageDir` 配置的路径中,并且在 ZooKeeper 中只能有一个目录指向此位置。
<pre>high-availability.storageDir: hdfs:///flink/recovery</pre>
- The `storageDir` stores all metadata needed to recover a JobManager failure.
+ `storageDir` 存储要从 JobManager 失败恢复时所需的所有元数据。
-- [high-availability.zookeeper.quorum]({%link deployment/config.md %}#high-availability-zookeeper-quorum) (required):
-A *ZooKeeper quorum* is a replicated group of ZooKeeper servers, which provide the distributed coordination service.
+- [high-availability.zookeeper.quorum]({{< ref "docs/deployment/config" >}}#high-availability-zookeeper-quorum) (必要的):
+ *ZooKeeper quorum* 是一个提供分布式协调服务的复制组。
<pre>high-availability.zookeeper.quorum: address1:2181[,...],addressX:2181</pre>
- Each `addressX:port` refers to a ZooKeeper server, which is reachable by Flink at the given address and port.
+ 每个 `addressX:port` 指的是一个 ZooKeeper 服务器,它可以被 Flink 在给定的地址和端口上访问。
-- [high-availability.zookeeper.path.root]({{< ref "docs/deployment/config" >}}#high-availability-zookeeper-path-root) (recommended):
-The *root ZooKeeper node*, under which all cluster nodes are placed.
+- [high-availability.zookeeper.path.root]({{< ref "docs/deployment/config" >}}#high-availability-zookeeper-path-root) (推荐的):
+ *ZooKeeper 根节点*,集群的所有节点都放在该节点下。
<pre>high-availability.zookeeper.path.root: /flink</pre>
-- [high-availability.cluster-id]({{< ref "docs/deployment/config" >}}#high-availability-cluster-id) (recommended):
-The *cluster-id ZooKeeper node*, under which all required coordination data for a cluster is placed.
+- [high-availability.cluster-id]({{< ref "docs/deployment/config" >}}#high-availability-cluster-id) (推荐的):
+ *ZooKeeper cluster-id 节点*,在该节点下放置集群所需的协调数据。
<pre>high-availability.cluster-id: /default_ns # important: customize per cluster</pre>
- **Important**:
- You should not set this value manually when running on YARN, native Kubernetes or on another cluster manager.
- In those cases a cluster-id is being automatically generated.
- If you are running multiple Flink HA clusters on bare metal, you have to manually configure separate cluster-ids for each cluster.
+ **重要**:
+ 在 YARN、原生 Kubernetes 或其他集群管理器上运行时,不应该手动设置此值。在这些情况下,将自动生成一个集群id。如果在未使用集群管理器的机器上运行多个 Flink 高可用(HA)集群,则必须为每个集群手动配置单独的集群 ID(cluster-ids)。
-### Example configuration
+### 配置示例
-Configure high availability mode and ZooKeeper quorum in `conf/flink-conf.yaml`:
+在 `conf/flink-conf.yaml` 中配置高可用模式和 ZooKeeper 复制组(quorum):
```bash
high-availability: zookeeper
high-availability.zookeeper.quorum: localhost:2181
high-availability.zookeeper.path.root: /flink
-high-availability.cluster-id: /cluster_one # important: customize per cluster
+high-availability.cluster-id: /cluster_one # 重要: 每个集群自定义
high-availability.storageDir: hdfs:///flink/recovery
```
{{< top >}}
-## Configuring for ZooKeeper Security
+## ZooKeeper 安全配置
-If ZooKeeper is running in secure mode with Kerberos, you can override the following configurations in `flink-conf.yaml` as necessary:
+如果 ZooKeeper 使用 Kerberos 以安全模式运行,必要时可以在 `flink-conf.yaml` 中覆盖以下配置:
```bash
-# default is "zookeeper". If the ZooKeeper quorum is configured
-# with a different service name then it can be supplied here.
+# 默认配置为 "zookeeper". 如果 ZooKeeper quorum 配置了不同的服务名称,
+# 那么可以替换到这里。
-zookeeper.sasl.service-name: zookeeper
+zookeeper.sasl.service-name: zookeeper
-# default is "Client". The value needs to match one of the values
-# configured in "security.kerberos.login.contexts".
+# 默认配置为 "Client". 该值必须为 "security.kerberos.login.contexts" 项中配置的某一个值。
zookeeper.sasl.login-context-name: Client
```
-For more information on Flink configuration for Kerberos security, please refer to the [security section of the Flink configuration page]({{< ref "docs/deployment/config" >}}#security).
-You can also find further details on [how Flink sets up Kerberos-based security internally]({{< ref "docs/deployment/security/security-kerberos" >}}).
+有关用于 Kerberos 安全性的 Flink 配置的更多信息,请参阅 [Flink 配置页面的安全性部分]({{< ref "docs/deployment/config" >}}#security)。您还可以找到关于 [Flink 如何在内部设置基于 kerberos 的安全性]({{< ref "docs/deployment/security/security-kerberos" >}}) 的详细信息。
{{< top >}}
-## ZooKeeper Versions
+## ZooKeeper 版本
-Flink ships with separate ZooKeeper clients for 3.4 and 3.5, with 3.4 being in the `lib` directory of the distribution
-and thus used by default, whereas 3.5 is placed in the `opt` directory.
+Flink 附带了 3.4 和 3.5 的单独的 ZooKeeper 客户端,其中 3.4 位于发行版的 `lib` 目录中,为默认使用版本,而 3.5 位于 opt 目录中。
-The 3.5 client allows you to secure the ZooKeeper connection via SSL, but _may_ not work with 3.4- ZooKeeper installations.
+3.5 客户端允许您通过 SSL 保护 ZooKeeper 连接,但 _可能_ 不适用于 3.4 版本的 ZooKeeper 安装。
-You can control which version is used by Flink by placing either jar in the `lib` directory.
+您可以通过在 `lib` 目录中放置任意一个 jar 来控制 Flink 使用哪个版本。
{{< top >}}
-## Bootstrap ZooKeeper
+<a name="bootstrap-zookeeper" />
-If you don't have a running ZooKeeper installation, you can use the helper scripts, which ship with Flink.
+## 启动 ZooKeeper
-There is a ZooKeeper configuration template in `conf/zoo.cfg`.
-You can configure the hosts to run ZooKeeper on with the `server.X` entries, where X is a unique ID of each server:
+如果你没有安装 ZooKeeper,可以使用 Flink 附带的帮助脚本。
+
+在 `conf/zoo.cfg` 文件中有 ZooKeeper 的配置模板。您可以在 `server.X` 配置项中配置主机来运行 ZooKeeper。其中 X 是每个服务器的唯一 ID:
```bash
server.X=addressX:peerPort:leaderPort
[...]
server.Y=addressY:peerPort:leaderPort
```
-The script `bin/start-zookeeper-quorum.sh` will start a ZooKeeper server on each of the configured hosts.
-The started processes start ZooKeeper servers via a Flink wrapper, which reads the configuration from `conf/zoo.cfg` and makes sure to set some required configuration values for convenience.
-In production setups, it is recommended to manage your own ZooKeeper installation.
+脚本 `bin/start-zookeeper-quorum.sh` 将在每个配置的主机上启动一个 ZooKeeper 服务器。该进程是通过 Flink 包装器来启动的 ZooKeeper 服务器,该包装器从 `conf/zoo.cfg` 读取配置,并确保设置一些必要的配置值以方便使用。在生产设置中,建议您管理自己的 ZooKeeper 安装。
Review comment:
       I suggest putting the last sentence on its own line. A closer wording might be `在生产环境中,建议您自行管理 ZooKeeper 的安装与部署` (in production environments, it is recommended that you manage the installation and deployment of ZooKeeper yourself).
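       A sketch of how the closing paragraph might look after that split, using the reviewer's suggested sentence for the last line:
       ```markdown
       脚本 `bin/start-zookeeper-quorum.sh` 将在每个配置的主机上启动一个 ZooKeeper 服务器。该进程是通过 Flink 包装器来启动的 ZooKeeper 服务器,该包装器从 `conf/zoo.cfg` 读取配置,并确保设置一些必要的配置值以方便使用。

       在生产环境中,建议您自行管理 ZooKeeper 的安装与部署。
       ```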
##########
File path: docs/content.zh/docs/deployment/ha/kubernetes_ha.md
##########
@@ -24,52 +24,50 @@ specific language governing permissions and limitations
under the License.
-->
-# Kubernetes HA Services
+# Kubernetes 高可用服务
-Flink's Kubernetes HA services use [Kubernetes](https://kubernetes.io/) for high availability services.
+Flink 的 Kubernetes 高可用服务使用 [Kubernetes](https://kubernetes.io/) 提供高可用服务。
-Kubernetes high availability services can only be used when deploying to Kubernetes.
-Consequently, they can be configured when using [standalone Flink on Kubernetes]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}) or the [native Kubernetes integration]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}})
+Kubernetes 高可用服务只能在部署到 Kubernetes 时使用。因此,当使用 [在 Kubernetes 上独立部署 Flink]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}) 或 [原生的 Kubernetes 集成]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}) 两种模式时,可以对它们进行配置。
-## Prerequisites
+## 准备
-In order to use Flink's Kubernetes HA services you must fulfill the following prerequisites:
+为了使用 Flink 的 Kubernetes 高可用服务,您必须满足以下先决条件:
- Kubernetes >= 1.9.
-- Service account with permissions to create, edit, delete ConfigMaps.
- Take a look at how to configure a service account for [Flink's native Kubernetes integration]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}#rbac) and [standalone Flink on Kubernetes]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}#kubernetes-high-availability-services) for more information.
+- 具有创建、编辑、删除 ConfigMaps 权限的服务帐户。想了解更多信息,请查看如何使用 [原生的 Kubernetes 集成]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}#rbac) 和 [在 Kubernetes 上独立部署 Flink]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}#kubernetes-high-availability-services) 配置服务帐户。
Review comment:
       Translating `standalone` as `独立` does not seem quite right.
##########
File path: docs/content.zh/docs/deployment/ha/kubernetes_ha.md
##########
@@ -24,52 +24,50 @@ specific language governing permissions and limitations
under the License.
-->
-# Kubernetes HA Services
+# Kubernetes 高可用服务
-Flink's Kubernetes HA services use [Kubernetes](https://kubernetes.io/) for high availability services.
+Flink 的 Kubernetes 高可用服务使用 [Kubernetes](https://kubernetes.io/) 提供高可用服务。
-Kubernetes high availability services can only be used when deploying to Kubernetes.
-Consequently, they can be configured when using [standalone Flink on Kubernetes]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}) or the [native Kubernetes integration]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}})
+Kubernetes 高可用服务只能在部署到 Kubernetes 时使用。因此,当使用 [在 Kubernetes 上独立部署 Flink]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}) 或 [原生的 Kubernetes 集成]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}) 两种模式时,可以对它们进行配置。
-## Prerequisites
+## 准备
-In order to use Flink's Kubernetes HA services you must fulfill the following prerequisites:
+为了使用 Flink 的 Kubernetes 高可用服务,您必须满足以下先决条件:
- Kubernetes >= 1.9.
-- Service account with permissions to create, edit, delete ConfigMaps.
- Take a look at how to configure a service account for [Flink's native Kubernetes integration]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}#rbac) and [standalone Flink on Kubernetes]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}#kubernetes-high-availability-services) for more information.
+- 具有创建、编辑、删除 ConfigMaps 权限的服务帐户。想了解更多信息,请查看如何使用 [原生的 Kubernetes 集成]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}#rbac) 和 [在 Kubernetes 上独立部署 Flink]({{< ref "docs/deployment/resource-providers/standalone/kubernetes" >}}#kubernetes-high-availability-services) 配置服务帐户。
-## Configuration
+## 配置
-In order to start an HA-cluster you have to configure the following configuration keys:
+为了启动高可用集群(HA-cluster),您必须配置以下配置项:
-- [high-availability]({{< ref "docs/deployment/config" >}}#high-availability-1) (required):
-The `high-availability` option has to be set to `KubernetesHaServicesFactory`.
+- [high-availability]({{< ref "docs/deployment/config" >}}#high-availability-1) (必要的):
+`high-availability` 选项必须设置为 `KubernetesHaServicesFactory`.
-```yaml
-high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
-```
+ ```yaml
Review comment:
       These spaces are not necessary, and you could also check all additional spaces below.
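       For example, based on the removed lines in the hunk above, the added block could simply sit flush left without the extra indentation (a sketch, not the final PR content):
       ```yaml
       high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
       ```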