This is an automated email from the ASF dual-hosted git repository.
tangyun pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git
The following commit(s) were added to refs/heads/master by this push:
new fc73b3f [FLINK-22518][docs-zh] Translate the documents of High
Availability into Chinese (#16084)
fc73b3f is described below
commit fc73b3fe99951c91459842cad60236c537cd1517
Author: movesan <[email protected]>
AuthorDate: Fri Jun 18 16:59:18 2021 +0800
[FLINK-22518][docs-zh] Translate the documents of High Availability into
Chinese (#16084)
This fixes #16084.
---
docs/content.zh/docs/deployment/ha/_index.md | 4 +-
.../content.zh/docs/deployment/ha/kubernetes_ha.md | 49 ++++++------
docs/content.zh/docs/deployment/ha/overview.md | 54 ++++++-------
docs/content.zh/docs/deployment/ha/zookeeper_ha.md | 91 ++++++++++------------
4 files changed, 90 insertions(+), 108 deletions(-)
diff --git a/docs/content.zh/docs/deployment/ha/_index.md
b/docs/content.zh/docs/deployment/ha/_index.md
index 3eb199e..c88b724 100644
--- a/docs/content.zh/docs/deployment/ha/_index.md
+++ b/docs/content.zh/docs/deployment/ha/_index.md
@@ -1,5 +1,5 @@
---
-title: High Availablity
+title: 高可用
bookCollapseSection: true
weight: 6
---
@@ -20,4 +20,4 @@ software distributed under the License is distributed on an
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
--->
\ No newline at end of file
+-->
diff --git a/docs/content.zh/docs/deployment/ha/kubernetes_ha.md
b/docs/content.zh/docs/deployment/ha/kubernetes_ha.md
index 8bfb0ee..0aafa7c 100644
--- a/docs/content.zh/docs/deployment/ha/kubernetes_ha.md
+++ b/docs/content.zh/docs/deployment/ha/kubernetes_ha.md
@@ -1,5 +1,5 @@
---
-title: Kubernetes HA Services
+title: Kubernetes 高可用服务
weight: 3
type: docs
aliases:
@@ -24,52 +24,50 @@ specific language governing permissions and limitations
under the License.
-->
-# Kubernetes HA Services
+# Kubernetes 高可用服务
-Flink's Kubernetes HA services use [Kubernetes](https://kubernetes.io/) for
high availability services.
+Flink 的 Kubernetes 高可用模式使用 [Kubernetes](https://kubernetes.io/) 提供高可用服务。
-Kubernetes high availability services can only be used when deploying to
Kubernetes.
-Consequently, they can be configured when using [standalone Flink on
Kubernetes]({{< ref "docs/deployment/resource-providers/standalone/kubernetes"
>}}) or the [native Kubernetes integration]({{< ref
"docs/deployment/resource-providers/native_kubernetes" >}})
+Kubernetes 高可用服务只能在部署到 Kubernetes 时使用。因此,在使用 [Kubernetes 上的 standalone 模式部署 Flink]({{<
ref "docs/deployment/resource-providers/standalone/kubernetes" >}}) 或 [Flink 原生
Kubernetes 集成]({{< ref "docs/deployment/resource-providers/native_kubernetes"
>}}) 时,都可以对其进行配置。
-## Prerequisites
+## 先决条件
-In order to use Flink's Kubernetes HA services you must fulfill the following
prerequisites:
+为了使用 Flink 的 Kubernetes 高可用服务,你必须满足以下先决条件:
- Kubernetes >= 1.9.
-- Service account with permissions to create, edit, delete ConfigMaps.
- Take a look at how to configure a service account for [Flink's native
Kubernetes integration]({{< ref
"docs/deployment/resource-providers/native_kubernetes" >}}#rbac) and
[standalone Flink on Kubernetes]({{< ref
"docs/deployment/resource-providers/standalone/kubernetes"
>}}#kubernetes-high-availability-services) for more information.
+- 具有创建、编辑、删除 ConfigMaps 权限的服务账户。想了解更多信息,请查看如何在 [Flink 原生 Kubernetes 集成]({{<
ref "docs/deployment/resource-providers/native_kubernetes" >}}#rbac) 和 [Kubernetes
上的 standalone 模式]({{< ref
"docs/deployment/resource-providers/standalone/kubernetes"
>}}#kubernetes-high-availability-services) 中配置服务账户。
-## Configuration
+## 配置
-In order to start an HA-cluster you have to configure the following
configuration keys:
+为了启动一个高可用集群,你必须设置以下配置项:
-- [high-availability]({{< ref "docs/deployment/config"
>}}#high-availability-1) (required):
-The `high-availability` option has to be set to `KubernetesHaServicesFactory`.
+- [high-availability]({{< ref "docs/deployment/config"
>}}#high-availability-1) (必要的):
+`high-availability` 选项必须设置为 `KubernetesHaServicesFactory`。
```yaml
high-availability:
org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
```
-- [high-availability.storageDir]({{< ref "docs/deployment/config"
>}}#high-availability-storagedir) (required):
-JobManager metadata is persisted in the file system
`high-availability.storageDir` and only a pointer to this state is stored in
Kubernetes.
+- [high-availability.storageDir]({{< ref "docs/deployment/config"
>}}#high-availability-storagedir) (必要的):
+JobManager 元数据会持久化到 `high-availability.storageDir` 配置的文件系统路径中,Kubernetes
中只存储指向此状态的指针。
```yaml
high-availability.storageDir: s3:///flink/recovery
```
-The `storageDir` stores all metadata needed to recover a JobManager failure.
-
-- [kubernetes.cluster-id]({{< ref "docs/deployment/config"
>}}#kubernetes-cluster-id) (required):
-In order to identify the Flink cluster, you have to specify a
`kubernetes.cluster-id`.
+ `storageDir` 中存储了从 JobManager 故障中恢复所需的所有元数据。
+
+- [kubernetes.cluster-id]({{< ref "docs/deployment/config"
>}}#kubernetes-cluster-id) (必要的):
+为了标识 Flink 集群,你必须指定 `kubernetes.cluster-id`。
```yaml
kubernetes.cluster-id: cluster1337
```
-### Example configuration
+### 配置示例
-Configure high availability mode in `conf/flink-conf.yaml`:
+在 `conf/flink-conf.yaml` 中配置高可用模式:
```yaml
kubernetes.cluster-id: <cluster-id>
@@ -79,11 +77,8 @@ high-availability.storageDir: hdfs:///flink/recovery
{{< top >}}
-## High availability data clean up
+## 高可用数据清理
-To keep HA data while restarting the Flink cluster, simply delete the
deployment (via `kubectl delete deployment <cluster-id>`).
-All the Flink cluster related resources will be deleted (e.g. JobManager
Deployment, TaskManager pods, services, Flink conf ConfigMap).
-HA related ConfigMaps will be retained because they do not set the owner
reference.
-When restarting the cluster, all previously running jobs will be recovered and
restarted from the latest successful checkpoint.
+要在重新启动 Flink 集群时保留高可用数据,只需删除部署(通过 `kubectl delete deployment
<cluster-id>`)。所有与 Flink 集群相关的资源将被删除(例如:JobManager Deployment、TaskManager
pods、services、Flink conf ConfigMap)。高可用相关的 ConfigMaps
将被保留,因为它们没有设置所有者引用。当重新启动集群时,所有以前运行的作业将从最近成功的检查点恢复并重新启动。
-{{< top >}}
+{{< top >}}
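
上面 kubernetes_ha.md 的补丁分别翻译了各个必需配置项。把它们合在一起,Kubernetes 高可用模式下 `conf/flink-conf.yaml` 的一个完整示意大致如下(仅为草案,其中 cluster-id 为占位的假设值,存储路径沿用文中示例):

```yaml
# Kubernetes 高可用配置示意(占位值,请按实际环境替换)
kubernetes.cluster-id: my-flink-cluster   # 假设的集群 ID
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
# JobManager 元数据的持久化路径;Kubernetes 中只保存指向它的指针
high-availability.storageDir: hdfs:///flink/recovery
```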
diff --git a/docs/content.zh/docs/deployment/ha/overview.md
b/docs/content.zh/docs/deployment/ha/overview.md
index 0c5034b..b7aec63 100644
--- a/docs/content.zh/docs/deployment/ha/overview.md
+++ b/docs/content.zh/docs/deployment/ha/overview.md
@@ -26,56 +26,50 @@ specific language governing permissions and limitations
under the License.
-->
-# High Availability
+# 高可用
-JobManager High Availability (HA) hardens a Flink cluster against JobManager
failures.
-This feature ensures that a Flink cluster will always continue executing your
submitted jobs.
+JobManager 高可用(HA)增强了 Flink 集群应对 JobManager 故障的能力。
+此特性确保 Flink 集群将始终持续执行你提交的作业。
-## JobManager High Availability
+## JobManager 高可用
-The JobManager coordinates every Flink deployment.
-It is responsible for both *scheduling* and *resource management*.
+JobManager 协调每个 Flink 部署,它同时负责 *调度* 和 *资源管理*。
-By default, there is a single JobManager instance per Flink cluster.
-This creates a *single point of failure* (SPOF): if the JobManager crashes, no
new programs can be submitted and running programs fail.
+默认情况下,每个 Flink 集群只有一个 JobManager 实例。这会导致 *单点故障(SPOF)*:如果 JobManager
崩溃,则不能提交任何新程序,运行中的程序也会失败。
-With JobManager High Availability, you can recover from JobManager failures
and thereby eliminate the *SPOF*.
-You can configure high availability for every cluster deployment.
-See the [list of available high availability
services](#high-availability-services) for more information.
+使用 JobManager 高可用模式,你可以从 JobManager 失败中恢复,从而消除单点故障。你可以为每个集群部署配置高可用模式。
+有关更多信息,请参阅 [高可用服务](#high-availability-services)。
-### How to make a cluster highly available
+### 如何启用集群高可用
-The general idea of JobManager High Availability is that there is a *single
leading JobManager* at any time and *multiple standby JobManagers* to take over
leadership in case the leader fails.
-This guarantees that there is *no single point of failure* and programs can
make progress as soon as a standby JobManager has taken leadership.
+JobManager 高可用的总体思路是:任何时候都有 *一个领导者(leader)JobManager*,并有 *多个备用 JobManager*
在领导者失败时接管领导权。这保证了 *不存在单点故障*,一旦某个备用 JobManager 接管了领导权,程序就可以继续运行。
-As an example, consider the following setup with three JobManager instances:
+如下是一个使用三个 JobManager 实例的例子:
{{< img src="/fig/jobmanager_ha_overview.png" class="center" >}}
-Flink's [high availability services](#high-availability-services) encapsulate
the required services to make everything work:
-* **Leader election**: Selecting a single leader out of a pool of `n`
candidates
-* **Service discovery**: Retrieving the address of the current leader
-* **State persistence**: Persisting state which is required for the successor
to resume the job execution (JobGraphs, user code jars, completed checkpoints)
+Flink 的 [高可用服务](#high-availability-services) 封装了所需的服务,使一切可以正常工作:
+* **领导者选举**:从 `n` 个候选者中选出一个领导者
+* **服务发现**:检索当前领导者的地址
+* **状态持久化**:持久化继任者恢复作业执行所需的状态(JobGraphs、用户代码 jar、已完成的检查点)
{{< top >}}
-## High Availability Services
+<a name="high-availability-services" />
-Flink ships with two high availability service implementations:
+## 高可用服务
-* [ZooKeeper]({{< ref "docs/deployment/ha/zookeeper_ha" >}}):
-ZooKeeper HA services can be used with every Flink cluster deployment.
-They require a running ZooKeeper quorum.
+Flink 提供了两种高可用服务实现:
-* [Kubernetes]({{< ref "docs/deployment/ha/kubernetes_ha" >}}):
-Kubernetes HA services only work when running on Kubernetes.
+
+* [ZooKeeper]({{< ref "docs/deployment/ha/zookeeper_ha" >}}):每个 Flink
集群部署都可以使用 ZooKeeper HA 服务。它们需要一个运行的 ZooKeeper 复制组(quorum)。
+
+* [Kubernetes]({{< ref "docs/deployment/ha/kubernetes_ha" >}}):Kubernetes HA
服务只能运行在 Kubernetes 上。
{{< top >}}
-## High Availability data lifecycle
+## 高可用数据生命周期
-In order to recover submitted jobs, Flink persists metadata and the job
artifacts.
-The HA data will be kept until the respective job either succeeds, is
cancelled or fails terminally.
-Once this happens, all the HA data, including the metadata stored in the HA
services, will be deleted.
+为了恢复提交的作业,Flink 会持久化元数据和作业产物(artifacts)。高可用数据将一直保留,
直到相应的作业执行成功、被取消或最终失败。此后,所有高可用数据,包括存储在高可用服务中的元数据,都将被删除。
{{< top >}}
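
上面 overview.md 中提到 Flink 提供了 ZooKeeper 和 Kubernetes 两种高可用服务实现。作为补充,下面用一个配置片段对照示意两者是如何通过 `high-availability` 配置项选择的(仅为示意,quorum 地址为假设值,具体配置项以各实现的文档页为准):

```yaml
# 方式一:ZooKeeper 高可用,可用于任何部署方式,需要一个运行中的 ZooKeeper quorum
high-availability: zookeeper
high-availability.zookeeper.quorum: localhost:2181   # 假设的 quorum 地址

# 方式二:Kubernetes 高可用,仅适用于部署在 Kubernetes 上的集群
# high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory

# 两种实现都需要配置用于持久化 JobManager 元数据的存储路径
high-availability.storageDir: hdfs:///flink/recovery
```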
diff --git a/docs/content.zh/docs/deployment/ha/zookeeper_ha.md
b/docs/content.zh/docs/deployment/ha/zookeeper_ha.md
index 3ab017e..1e23d20 100644
--- a/docs/content.zh/docs/deployment/ha/zookeeper_ha.md
+++ b/docs/content.zh/docs/deployment/ha/zookeeper_ha.md
@@ -1,5 +1,5 @@
---
-title: ZooKeeper HA Services
+title: ZooKeeper 高可用服务
weight: 2
type: docs
aliases:
@@ -24,104 +24,97 @@ specific language governing permissions and limitations
under the License.
-->
-# ZooKeeper HA Services
+# ZooKeeper 高可用服务
-Flink's ZooKeeper HA services use [ZooKeeper](http://zookeeper.apache.org) for
high availability services.
+Flink 的 ZooKeeper 高可用模式使用 [ZooKeeper](http://zookeeper.apache.org) 提供高可用服务。
-Flink leverages **[ZooKeeper](http://zookeeper.apache.org)** for *distributed
coordination* between all running JobManager instances.
-ZooKeeper is a separate service from Flink, which provides highly reliable
distributed coordination via leader election and light-weight consistent state
storage.
-Check out [ZooKeeper's Getting Started
Guide](http://zookeeper.apache.org/doc/current/zookeeperStarted.html) for more
information about ZooKeeper.
-Flink includes scripts to [bootstrap a simple ZooKeeper](#bootstrap-zookeeper)
installation.
+Flink 利用 **[ZooKeeper](http://zookeeper.apache.org)** 在所有运行的 JobManager 实例之间进行
*分布式协调*。ZooKeeper 是一个独立于 Flink 的服务,它通过 leader 选举和轻量级的一致性状态存储来提供高可靠的分布式协调。查看
[ZooKeeper 入门指南](http://zookeeper.apache.org/doc/current/zookeeperStarted.html),了解更多关于
ZooKeeper 的信息。Flink 包含用于 [启动一个简单的 ZooKeeper](#bootstrap-zookeeper) 安装的脚本。
-## Configuration
+## 配置
-In order to start an HA-cluster you have to configure the following
configuration keys:
+为了启动一个高可用集群,你必须设置以下配置项:
-- [high-availability]({{< ref "docs/deployment/config"
>}}#high-availability-1) (required):
-The `high-availability` option has to be set to `zookeeper`.
+- [high-availability]({{< ref "docs/deployment/config"
>}}#high-availability-1) (必要的):
+ `high-availability` 配置项必须设置为 `zookeeper`。
<pre>high-availability: zookeeper</pre>
-- [high-availability.storageDir]({{< ref "docs/deployment/config"
>}}#high-availability-storagedir) (required):
-JobManager metadata is persisted in the file system
`high-availability.storageDir` and only a pointer to this state is stored in
ZooKeeper.
+- [high-availability.storageDir]({{< ref "docs/deployment/config"
>}}#high-availability-storagedir) (必要的):
+ JobManager 元数据会持久化到 `high-availability.storageDir` 配置的文件系统路径中,ZooKeeper
中只存储指向此状态的指针。
<pre>high-availability.storageDir: hdfs:///flink/recovery</pre>
- The `storageDir` stores all metadata needed to recover a JobManager failure.
+ `storageDir` 中存储了从 JobManager 故障中恢复所需的所有元数据。
-- [high-availability.zookeeper.quorum]({%link deployment/config.md
%}#high-availability-zookeeper-quorum) (required):
-A *ZooKeeper quorum* is a replicated group of ZooKeeper servers, which provide
the distributed coordination service.
+- [high-availability.zookeeper.quorum]({{< ref "docs/deployment/config"
>}}#high-availability-zookeeper-quorum) (必要的):
+ *ZooKeeper quorum* 是由多台 ZooKeeper 服务器组成的复制组,提供分布式协调服务。
<pre>high-availability.zookeeper.quorum:
address1:2181[,...],addressX:2181</pre>
- Each `addressX:port` refers to a ZooKeeper server, which is reachable by
Flink at the given address and port.
+ 每个 `addressX:port` 都对应一个 ZooKeeper 服务器,Flink 可以通过给定的地址和端口访问它。
-- [high-availability.zookeeper.path.root]({{< ref "docs/deployment/config"
>}}#high-availability-zookeeper-path-root) (recommended):
-The *root ZooKeeper node*, under which all cluster nodes are placed.
+- [high-availability.zookeeper.path.root]({{< ref "docs/deployment/config"
>}}#high-availability-zookeeper-path-root) (推荐的):
+ *ZooKeeper 根节点*,集群的所有节点都放在该节点下。
<pre>high-availability.zookeeper.path.root: /flink</pre>
-- [high-availability.cluster-id]({{< ref "docs/deployment/config"
>}}#high-availability-cluster-id) (recommended):
-The *cluster-id ZooKeeper node*, under which all required coordination data
for a cluster is placed.
+- [high-availability.cluster-id]({{< ref "docs/deployment/config"
>}}#high-availability-cluster-id) (推荐的):
+ *ZooKeeper cluster-id 节点*,在该节点下放置集群所需的协调数据。
<pre>high-availability.cluster-id: /default_ns # important: customize per
cluster</pre>
- **Important**:
- You should not set this value manually when running on YARN, native
Kubernetes or on another cluster manager.
- In those cases a cluster-id is being automatically generated.
- If you are running multiple Flink HA clusters on bare metal, you have to
manually configure separate cluster-ids for each cluster.
+ **重要**:
+ 在 YARN、原生 Kubernetes 或其他集群管理器上运行时,不应该手动设置此值。在这些情况下,将自动生成集群
ID。如果在裸机(bare metal)上运行多个 Flink 高可用集群,则必须为每个集群手动配置不同的 cluster-id。
-### Example configuration
+### 配置示例
-Configure high availability mode and ZooKeeper quorum in
`conf/flink-conf.yaml`:
+在 `conf/flink-conf.yaml` 中配置高可用模式和 ZooKeeper 复制组(quorum):
```bash
high-availability: zookeeper
high-availability.zookeeper.quorum: localhost:2181
high-availability.zookeeper.path.root: /flink
-high-availability.cluster-id: /cluster_one # important: customize per cluster
+high-availability.cluster-id: /cluster_one # 重要: 每个集群自定义
high-availability.storageDir: hdfs:///flink/recovery
```
{{< top >}}
-## Configuring for ZooKeeper Security
+## ZooKeeper 安全配置
-If ZooKeeper is running in secure mode with Kerberos, you can override the
following configurations in `flink-conf.yaml` as necessary:
+如果 ZooKeeper 使用 Kerberos 以安全模式运行,必要时可以在 `flink-conf.yaml` 中覆盖以下配置:
```bash
-# default is "zookeeper". If the ZooKeeper quorum is configured
-# with a different service name then it can be supplied here.
+# 默认值为 "zookeeper"。如果 ZooKeeper quorum 配置了
+# 不同的服务名称,那么可以在这里指定。
zookeeper.sasl.service-name: zookeeper
-# default is "Client". The value needs to match one of the values
-# configured in "security.kerberos.login.contexts".
+# 默认值为 "Client"。该值需要与 "security.kerberos.login.contexts" 中配置的某个值相匹配。
zookeeper.sasl.login-context-name: Client
```
-For more information on Flink configuration for Kerberos security, please
refer to the [security section of the Flink configuration page]({{< ref
"docs/deployment/config" >}}#security).
-You can also find further details on [how Flink sets up Kerberos-based
security internally]({{< ref "docs/deployment/security/security-kerberos" >}}).
+有关用于 Kerberos 安全性的 Flink 配置的更多信息,请参阅 [Flink 配置页面的安全性部分]({{< ref
"docs/deployment/config" >}}#security)。你还可以找到关于 [Flink 如何在内部设置基于 kerberos
的安全性]({{< ref "docs/deployment/security/security-kerberos" >}}) 的详细信息。
{{< top >}}
-## ZooKeeper Versions
+## ZooKeeper 版本
-Flink ships with separate ZooKeeper clients for 3.4 and 3.5, with 3.4 being in
the `lib` directory of the distribution
-and thus used by default, whereas 3.5 is placed in the `opt` directory.
+Flink 附带了适用于 3.4 和 3.5 的两个独立 ZooKeeper 客户端,其中 3.4 位于发行版的 `lib`
目录中,为默认使用的版本;3.5 位于 `opt` 目录中。
-The 3.5 client allows you to secure the ZooKeeper connection via SSL, but
_may_ not work with 3.4- ZooKeeper installations.
+3.5 客户端允许你通过 SSL 保护 ZooKeeper 连接,但 _可能_ 不兼容 3.4 及更早版本的 ZooKeeper 安装。
-You can control which version is used by Flink by placing either jar in the
`lib` directory.
+你可以通过将任一版本的 jar 放入 `lib` 目录来控制 Flink 使用哪个版本。
{{< top >}}
-## Bootstrap ZooKeeper
+<a name="bootstrap-zookeeper" />
-If you don't have a running ZooKeeper installation, you can use the helper
scripts, which ship with Flink.
+## 启动 ZooKeeper
-There is a ZooKeeper configuration template in `conf/zoo.cfg`.
-You can configure the hosts to run ZooKeeper on with the `server.X` entries,
where X is a unique ID of each server:
+如果你没有正在运行的 ZooKeeper,可以使用 Flink 附带的辅助脚本。
+
+在 `conf/zoo.cfg` 文件中有 ZooKeeper 的配置模板。你可以通过 `server.X` 条目配置运行 ZooKeeper
的主机,其中 X 是每个服务器的唯一 ID:
```bash
server.X=addressX:peerPort:leaderPort
@@ -129,8 +122,8 @@ server.X=addressX:peerPort:leaderPort
server.Y=addressY:peerPort:leaderPort
```
-The script `bin/start-zookeeper-quorum.sh` will start a ZooKeeper server on
each of the configured hosts.
-The started processes start ZooKeeper servers via a Flink wrapper, which reads
the configuration from `conf/zoo.cfg` and makes sure to set some required
configuration values for convenience.
-In production setups, it is recommended to manage your own ZooKeeper
installation.
+脚本 `bin/start-zookeeper-quorum.sh` 将在每个配置的主机上启动一个 ZooKeeper 服务器。这些进程通过 Flink
包装器启动 ZooKeeper 服务,该包装器从 `conf/zoo.cfg` 读取配置,并确保设置一些必要的配置值以方便使用。
+
+在生产环境中,建议你自行管理 ZooKeeper 的安装与部署。
-{{< top >}}
+{{< top >}}
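
作为上面 `server.X` 条目说明的补充,一个由三台服务器组成的 quorum 在 `conf/zoo.cfg` 中的配置片段大致如下(主机名均为假设的占位值,2888/3888 为 ZooKeeper 常用的 peerPort/leaderPort):

```bash
# conf/zoo.cfg 片段示意:三台服务器组成的 quorum
# 格式为 server.X=地址:peerPort:leaderPort,X 为每台服务器的唯一 ID
server.1=zk-host-1:2888:3888
server.2=zk-host-2:2888:3888
server.3=zk-host-3:2888:3888
```

配置完成后,即可使用文中提到的 `bin/start-zookeeper-quorum.sh` 在这些主机上启动 ZooKeeper 服务。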