wuchong commented on a change in pull request #9764: [FLINK-12939][docs-zh] Translate "Apache Kafka Connector" page into Chinese
URL: https://github.com/apache/flink/pull/9764#discussion_r340045581
##########
File path: docs/dev/connectors/kafka.zh.md
##########
@@ -759,85 +586,50 @@ config.setWriteTimestampToKafka(true);
-## Kafka Connector metrics
-
-Flink's Kafka connectors provide some metrics through Flink's [metrics system]({{ site.baseurl }}/monitoring/metrics.html) to analyze
-the behavior of the connector.
-The producers export Kafka's internal metrics through Flink's metric system for all supported versions. The consumers export
-all metrics starting from Kafka version 0.9. The Kafka documentation lists all exported metrics
-in its [documentation](http://kafka.apache.org/documentation/#selector_monitoring).
+## Kafka Connector 指标
-In addition to these metrics, all consumers expose the `current-offsets` and `committed-offsets` for each topic partition.
-The `current-offsets` refers to the current offset in the partition. This refers to the offset of the last element that
-we retrieved and emitted successfully. The `committed-offsets` is the last committed offset.
+Flink 的 Kafka 连接器通过 Flink 的 [metric 系统]({{ site.baseurl }}/monitoring/metrics.html) 提供一些指标来分析 Kafka 连接器的状况。producer 通过 Flink 的 metrics 系统为所有支持的版本导出 Kafka 的内部指标。consumer 从 Kafka 0.9 版本开始导出所有指标。Kafka 的[文档](http://kafka.apache.org/documentation/#selector_monitoring)中列出了所有导出的指标。
-The Kafka Consumers in Flink commit the offsets back to Zookeeper (Kafka 0.8) or the Kafka brokers (Kafka 0.9+). If checkpointing
-is disabled, offsets are committed periodically.
-With checkpointing, the commit happens once all operators in the streaming topology have confirmed that they've created a checkpoint of their state.
-This provides users with at-least-once semantics for the offsets committed to Zookeeper or the broker. For offsets checkpointed to Flink, the system
-provides exactly once guarantees.
+除了这些指标之外,所有 consumer 都会公开每个主题分区的 `current-offsets` 和 `committed-offsets`。`current-offsets` 指的是分区中的当前偏移量,即我们成功检索并发出的最后一个元素的偏移量;`committed-offsets` 是最后提交的偏移量。
+Flink 中的 Kafka Consumer 会将偏移量提交回 Zookeeper(Kafka 0.8)或 Kafka broker(Kafka 0.9+)。如果禁用了 checkpoint,偏移量会定期提交;启用 checkpoint 后,只有当流拓扑中的所有算子都确认已为其状态创建了 checkpoint 时才会提交。这为用户提供了提交到 Zookeeper 或 broker 的偏移量的 at-least-once 语义。对于通过 Flink checkpoint 的偏移量,系统提供精确一次(exactly-once)语义。
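+下面是一个简短的示意(其中 broker 地址、topic 名称和 group id 均为假设值),展示启用 checkpoint 后 consumer 如何随 checkpoint 的完成提交偏移量:
+
+```java
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+// 每 5 秒触发一次 checkpoint;偏移量将在 checkpoint 完成时提交
+env.enableCheckpointing(5000);
+
+Properties properties = new Properties();
+properties.setProperty("bootstrap.servers", "localhost:9092");
+properties.setProperty("group.id", "test");
+
+FlinkKafkaConsumer<String> myConsumer =
+    new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), properties);
+// 启用 checkpoint 时该选项默认即为 true,这里仅为说明而显式设置
+myConsumer.setCommitOffsetsOnCheckpoints(true);
+
+env.addSource(myConsumer).print();
+```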
-The offsets committed to ZK or the broker can also be used to track the read progress of the Kafka consumer. The difference between
-the committed offset and the most recent offset in each partition is called the *consumer lag*. If the Flink topology is consuming
-the data slower from the topic than new data is added, the lag will increase and the consumer will fall behind.
-For large production deployments we recommend monitoring that metric to avoid increasing latency.
+提交到 ZK 或 broker 的偏移量也可以用来跟踪 Kafka consumer 的读取进度。每个分区中已提交的偏移量和最新偏移量之间的差值称为 *consumer lag*。如果 Flink 拓扑从 topic 中消费数据的速度慢于新数据写入的速度,这个延迟就会增大,consumer 就会滞后。对于大型生产部署,我们建议监控该指标,以避免延迟不断增大。
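+作为示意,下面的独立监控代码段(并非 Flink 提供的 API;broker 地址、topic 名称和 group id 均为假设值)使用 Kafka 客户端计算每个分区的 consumer lag:
+
+```java
+import java.util.*;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.clients.consumer.OffsetAndMetadata;
+import org.apache.kafka.common.PartitionInfo;
+import org.apache.kafka.common.TopicPartition;
+
+Properties props = new Properties();
+props.setProperty("bootstrap.servers", "localhost:9092");
+props.setProperty("group.id", "my-flink-group");
+props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
+props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
+
+try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
+    List<TopicPartition> partitions = new ArrayList<>();
+    for (PartitionInfo info : consumer.partitionsFor("topic")) {
+        partitions.add(new TopicPartition(info.topic(), info.partition()));
+    }
+    // 每个分区的 lag = 最新偏移量 - 已提交偏移量
+    Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
+    for (TopicPartition tp : partitions) {
+        OffsetAndMetadata committed = consumer.committed(tp);
+        long lag = endOffsets.get(tp) - (committed == null ? 0L : committed.offset());
+        System.out.println(tp + " lag = " + lag);
+    }
+}
+```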
-## Enabling Kerberos Authentication (for versions 0.9+ and above only)
+## 启用 Kerberos 身份验证(仅适用于 0.9 及以上版本)
-Flink provides first-class support through the Kafka connector to authenticate to a Kafka installation
-configured for Kerberos. Simply configure Flink in `flink-conf.yaml` to enable Kerberos authentication for Kafka like so:
+Flink 通过 Kafka 连接器提供了一流的支持,可以向配置了 Kerberos 的 Kafka 集群进行身份验证。只需按如下方式在 `flink-conf.yaml` 中配置 Flink,即可为 Kafka 启用 Kerberos 身份验证:
-1. Configure Kerberos credentials by setting the following -
- - `security.kerberos.login.use-ticket-cache`: By default, this is `true` and Flink will attempt to use Kerberos credentials in ticket caches managed by `kinit`.
- Note that when using the Kafka connector in Flink jobs deployed on YARN, Kerberos authorization using ticket caches will not work.
- This is also the case when deploying using Mesos, as authorization using ticket cache is not supported for Mesos deployments.
- - `security.kerberos.login.keytab` and `security.kerberos.login.principal`: To use Kerberos keytabs instead, set values for both of these properties.
+1. 通过设置以下内容来配置 Kerberos 凭据
+ - `security.kerberos.login.use-ticket-cache`:默认情况下,这个值是 `true`,Flink 将尝试使用 `kinit` 管理的票据缓存中的 Kerberos 凭据。请注意,在部署到 YARN 上的 Flink jobs 中使用 Kafka 连接器时,基于票据缓存的 Kerberos 授权将不起作用。使用 Mesos 进行部署时也是如此,因为 Mesos 部署不支持基于票据缓存的授权。
+ - `security.kerberos.login.keytab` 和 `security.kerberos.login.principal`:要改用 Kerberos keytab,需为这两个属性设置值。
-2. Append `KafkaClient` to `security.kerberos.login.contexts`: This tells Flink to provide the configured Kerberos credentials to the Kafka login context to be used for Kafka authentication.
+2. 将 `KafkaClient` 追加到 `security.kerberos.login.contexts`:这会告知 Flink 把已配置的 Kerberos 凭据提供给 Kafka 登录上下文,以用于 Kafka 身份验证(完整的配置示例见下)。
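+下面是一个把以上设置组合起来的 `flink-conf.yaml` 片段示意(keytab 路径和 principal 均为假设的占位值,请替换为实际值):
+
+```yaml
+# 使用 keytab 而非票据缓存(路径与 principal 为占位值)
+security.kerberos.login.use-ticket-cache: false
+security.kerberos.login.keytab: /path/to/kafka.keytab
+security.kerberos.login.principal: flink-user
+# 将 KafkaClient 追加到登录上下文,供 Kafka 身份验证使用
+security.kerberos.login.contexts: Client,KafkaClient
+```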
-Once Kerberos-based Flink security is enabled, you can authenticate to Kafka with either the Flink Kafka Consumer or Producer
-by simply including the following two settings in the provided properties configuration that is passed to the internal Kafka client:
+一旦启用了基于 Kerberos 的 Flink 安全性,只需在传递给内部 Kafka 客户端的属性配置中包含以下两个设置,即可使用 Flink Kafka Consumer 或 Producer 向 Kafka 进行身份验证:
-- Set `security.protocol` to `SASL_PLAINTEXT` (default `NONE`): The protocol used to communicate to Kafka brokers.
-When using standalone Flink deployment, you can also use `SASL_SSL`; please see how to configure the Kafka client for SSL [here](https://kafka.apache.org/documentation/#security_configclients).
-- Set `sasl.kerberos.service.name` to `kafka` (default `kafka`): The value for this should match the `sasl.kerberos.service.name` used for Kafka broker configurations.
-A mismatch in service name between client and server configuration will cause the authentication to fail.
+- 将 `security.protocol` 设置为 `SASL_PLAINTEXT`(默认为 `NONE`):用于与 Kafka broker 通信的协议。使用独立部署的 Flink 时,也可以使用 `SASL_SSL`;关于如何为 SSL 配置 Kafka 客户端,请参见[此处](https://kafka.apache.org/documentation/#security_configclients)。
+- 将 `sasl.kerberos.service.name` 设置为 `kafka`(默认为 `kafka`):此值应与 Kafka broker 配置中使用的 `sasl.kerberos.service.name` 相匹配。客户端和服务器配置之间的服务名称不匹配将导致身份验证失败(相应的属性设置示例见下)。
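+一个最小的示意(broker 地址、topic 名称和 group id 均为假设值),展示如何在传给 consumer 的属性中加入这两个设置:
+
+```java
+Properties properties = new Properties();
+properties.setProperty("bootstrap.servers", "broker1:9092");
+properties.setProperty("group.id", "test");
+// 启用 Kerberos 身份验证所需的两个设置
+properties.setProperty("security.protocol", "SASL_PLAINTEXT");
+properties.setProperty("sasl.kerberos.service.name", "kafka");
+
+FlinkKafkaConsumer<String> consumer =
+    new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), properties);
+```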
-For more information on Flink configuration for Kerberos security, please see [here]({{ site.baseurl}}/ops/config.html).
-You can also find [here]({{ site.baseurl}}/ops/security-kerberos.html) further details on how Flink internally setups Kerberos-based security.
+有关 Flink 的 Kerberos 安全性配置的更多信息,请参见[这里]({{ site.baseurl}}/ops/config.html)。你也可以在[这里]({{ site.baseurl}}/ops/security-kerberos.html)进一步了解 Flink 如何在内部设置基于 Kerberos 的安全性。
## Troubleshooting
Review comment:
```suggestion
## 问题排查
```