RocMarshal commented on a change in pull request #16787:
URL: https://github.com/apache/flink/pull/16787#discussion_r695337362
##########
File path: docs/content.zh/docs/deployment/security/security-kerberos.md
##########
@@ -25,103 +25,111 @@ specific language governing permissions and limitations
under the License.
-->
-# Kerberos Authentication Setup and Configuration
+<a name="kerberos-authentication-setup-and-configuration"></a>
-This document briefly describes how Flink security works in the context of various deployment mechanisms (Standalone, native Kubernetes, YARN),
-filesystems, connectors, and state backends.
+# Kerberos 身份认证设置和配置
-## Objective
-The primary goals of the Flink Kerberos security infrastructure are:
+本文简要描述了 Flink 如何在各种部署机制(Standalone, native Kubernetes, YARN)、文件系统、connector 以及state backend 的上下文中安全工作。
Review comment:
```suggestion
本文简要描述了 Flink 如何在各种部署机制(Standalone, native Kubernetes, YARN)、文件系统、connector 以及 state backend 的上下文中安全工作。
```
##########
File path: docs/content.zh/docs/deployment/security/security-kerberos.md
##########
@@ -25,103 +25,111 @@ specific language governing permissions and limitations
under the License.
-->
-# Kerberos Authentication Setup and Configuration
+<a name="kerberos-authentication-setup-and-configuration"></a>
-This document briefly describes how Flink security works in the context of various deployment mechanisms (Standalone, native Kubernetes, YARN),
-filesystems, connectors, and state backends.
+# Kerberos 身份认证设置和配置
-## Objective
-The primary goals of the Flink Kerberos security infrastructure are:
+本文简要描述了 Flink 如何在各种部署机制(Standalone, native Kubernetes, YARN)、文件系统、connector 以及state backend 的上下文中安全工作。
-1. to enable secure data access for jobs within a cluster via connectors (e.g. Kafka)
-2. to authenticate to ZooKeeper (if configured to use SASL)
-3. to authenticate to Hadoop components (e.g. HDFS, HBase)
+<a name="objective"></a>
-In a production deployment scenario, streaming jobs are understood to run for long periods of time (days/weeks/months) and be able to authenticate to secure
-data sources throughout the life of the job. Kerberos keytabs do not expire in that timeframe, unlike a Hadoop delegation token
-or ticket cache entry.
+## 目标
+Flink Kerberos 安全框架的主要目标如下:
-The current implementation supports running Flink clusters (JobManager / TaskManager / jobs) with either a configured keytab credential
-or with Hadoop delegation tokens. Keep in mind that all jobs share the credential configured for a given cluster. To use a different keytab
-for a certain job, simply launch a separate Flink cluster with a different configuration. Numerous Flink clusters may run side-by-side in a Kubernetes or YARN
- environment.
+1. 在集群内使用 connector(例如 Kafka)时确保作业安全地访问数据;
+2. 对 zookeeper 进行身份认证(如果配置了 SASL);
+3. 对 Hadoop 组件进行身份认证(例如 HDFS,HBASE)。
-## How Flink Security works
-In concept, a Flink program may use first- or third-party connectors (Kafka, HDFS, Cassandra, Flume, Kinesis etc.) necessitating arbitrary authentication methods (Kerberos, SSL/TLS, username/password, etc.). While satisfying the security requirements for all connectors is an ongoing effort,
-Flink provides first-class support for Kerberos authentication only. The following services and connectors are supported for Kerberos authentication:
+生产部署场景中,流式作业通常会运行很长一段时间(天、周、月级别的时间段),并且需要在作业的整个生命周期中对其进行身份认证以保护数据源。与 Hadoop delegation token 和 ticket 缓存项不同,Kerberos keytab 不会在该时间段内过期。
+
+当前的实现支持使用可配置的 keytab credential 或 Hadoop delegation token 来运行 Flink 集群(JobManager / TaskManager / 作业)。
+
+请注意,所有作业都能共享为指定集群配置的凭据。如果想为一个作业使用不同的 keytab,只需单独启动一个具有不同配置的 Flink 集群。多个 Flink 集群可以在 Kubernetes 或 YARN 环境中并行运行。
+
+<a name="how-flink-security-works"></a>
+
+## Flink Security 如何工作
+
+理论上,Flink 程序可以使用自己的或第三方的 connector(Kafka、HDFS、Cassandra、Flume、Kinesis等),同时需要支持任意的认证方式(Kerberos、SSL/TLS、用户名/密码等)。满足所有 connector 的安全需求还在进行中,不过 Flink 提供了针对 Kerberos 身份认证的一流支持。Kerberos 身份认证支持以下服务和 connector:
Review comment:
```suggestion
理论上,Flink 程序可以使用自己的或第三方的 connector(Kafka、HDFS、Cassandra、Flume、Kinesis 等),同时需要支持任意的认证方式(Kerberos、SSL/TLS、用户名/密码等)。满足所有 connector 的安全需求还在进行中,不过 Flink 提供了针对 Kerberos 身份认证的一流支持。Kerberos 身份认证支持以下服务和 connector:
```
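As a side note on the keytab-based setup discussed in the hunk above (a single configured keytab credential shared by all jobs on a given cluster), the relevant knobs are Flink's standard `security.kerberos.login.*` options in `flink-conf.yaml`. A minimal sketch, with the keytab path and principal as placeholders (not taken from this PR):

```yaml
# Kerberos credentials used by the whole Flink cluster (JobManager / TaskManager / jobs).
# All jobs submitted to this cluster share this identity; to use a different keytab for
# a particular job, launch a separate Flink cluster with a different configuration.
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/flink.keytab       # placeholder path
security.kerberos.login.principal: flink-user@EXAMPLE.COM   # placeholder principal

# JAAS login contexts that should reuse these credentials, e.g. the ZooKeeper
# client ("Client") and the Kafka connector ("KafkaClient").
security.kerberos.login.contexts: Client,KafkaClient
```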
##########
File path: docs/content.zh/docs/deployment/security/security-kerberos.md
##########
@@ -25,103 +25,111 @@ specific language governing permissions and limitations
under the License.
-->
-# Kerberos Authentication Setup and Configuration
+<a name="kerberos-authentication-setup-and-configuration"></a>
-This document briefly describes how Flink security works in the context of various deployment mechanisms (Standalone, native Kubernetes, YARN),
-filesystems, connectors, and state backends.
+# Kerberos 身份认证设置和配置
-## Objective
-The primary goals of the Flink Kerberos security infrastructure are:
+本文简要描述了 Flink 如何在各种部署机制(Standalone, native Kubernetes, YARN)、文件系统、connector 以及state backend 的上下文中安全工作。
-1. to enable secure data access for jobs within a cluster via connectors (e.g. Kafka)
-2. to authenticate to ZooKeeper (if configured to use SASL)
-3. to authenticate to Hadoop components (e.g. HDFS, HBase)
+<a name="objective"></a>
-In a production deployment scenario, streaming jobs are understood to run for long periods of time (days/weeks/months) and be able to authenticate to secure
-data sources throughout the life of the job. Kerberos keytabs do not expire in that timeframe, unlike a Hadoop delegation token
-or ticket cache entry.
+## 目标
+Flink Kerberos 安全框架的主要目标如下:
-The current implementation supports running Flink clusters (JobManager / TaskManager / jobs) with either a configured keytab credential
-or with Hadoop delegation tokens. Keep in mind that all jobs share the credential configured for a given cluster. To use a different keytab
-for a certain job, simply launch a separate Flink cluster with a different configuration. Numerous Flink clusters may run side-by-side in a Kubernetes or YARN
- environment.
+1. 在集群内使用 connector(例如 Kafka)时确保作业安全地访问数据;
+2. 对 zookeeper 进行身份认证(如果配置了 SASL);
Review comment:
```suggestion
2. 对 zookeeper 进行身份认证(如果配置了 SASL);
```
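For context on item 2 of the goals list quoted above (authenticating to ZooKeeper when SASL is configured), the Kerberos credentials configured for the cluster are exposed to the ZooKeeper client through the `Client` JAAS context when ZooKeeper-based high availability is enabled. A minimal sketch, with the quorum address as a placeholder:

```yaml
# ZooKeeper-based high availability; the quorum address is a placeholder.
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host:2181   # placeholder quorum address

# Install the cluster's Kerberos credentials into the "Client" JAAS context,
# which the ZooKeeper client uses for SASL authentication.
security.kerberos.login.contexts: Client
zookeeper.sasl.service-name: zookeeper        # default value
zookeeper.sasl.login-context-name: Client     # default value
```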
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]