wuchong commented on a change in pull request #9764: [FLINK-12939][docs-zh] Translate "Apache Kafka Connector" page into Chinese
URL: https://github.com/apache/flink/pull/9764#discussion_r340027484
##########
File path: docs/dev/connectors/kafka.zh.md
##########
@@ -107,40 +100,36 @@ Then, import the connector in your maven project:
</dependency>
{% endhighlight %}
-Note that the streaming connectors are currently not part of the binary distribution.
-See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/projectsetup/dependencies.html).
+Note that the streaming connectors are currently not part of the binary distribution.
+See [here]({{ site.baseurl}}/zh/dev/projectsetup/dependencies.html) for how to link with them for cluster execution.
-## Installing Apache Kafka
+## Installing Apache Kafka
-* Follow the instructions from [Kafka's quickstart](https://kafka.apache.org/documentation.html#quickstart) to download the code and launch a server (launching a Zookeeper and a Kafka server is required every time before starting the application).
-* If the Kafka and Zookeeper servers are running on a remote machine, then the `advertised.host.name` setting in the `config/server.properties` file must be set to the machine's IP address.
+* Follow the instructions from the [Kafka quickstart](https://kafka.apache.org/documentation.html#quickstart) to download the code and launch a Kafka server (launching a Zookeeper and a Kafka server is required every time before starting the application).
+* If the Kafka and Zookeeper servers are running on a remote machine, the `advertised.host.name` setting in the `config/server.properties` file must be set to that machine's IP address (see the sketch after this list).
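For illustration, a minimal sketch of the relevant `config/server.properties` entry; the IP address below is a placeholder assumption:

```properties
# config/server.properties — placeholder address for the remote broker host
advertised.host.name=192.168.1.10
```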
-## Kafka 1.0.0+ Connector
+## Kafka 1.0.0+ Connector
-Starting with Flink 1.7, there is a new universal Kafka connector that does not track a specific Kafka major version.
-Rather, it tracks the latest version of Kafka at the time of the Flink release.
+Starting with Flink 1.7, there is a new universal Kafka connector that does not track a specific Kafka major version. Rather, it tracks the latest version of Kafka at the time of the Flink release.
+If your Kafka broker version is 1.0.0 or newer, you should use this Kafka connector.
+If you are using an older version of Kafka (0.11, 0.10, 0.9, or 0.8), you should use the connector corresponding to your Kafka broker version.
-If your Kafka broker version is 1.0.0 or newer, you should use this Kafka connector.
-If you use an older version of Kafka (0.11, 0.10, 0.9, or 0.8), you should use the connector corresponding to the broker version.
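For context, a minimal sketch of consuming from Kafka with the universal connector; the topic name, bootstrap servers, and group id are placeholder assumptions:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class UniversalKafkaExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Connection settings; the host/port and group id are placeholder values.
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("group.id", "test");

        // The universal connector ships the version-agnostic FlinkKafkaConsumer
        // (no version suffix such as FlinkKafkaConsumer011 in the class name).
        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), properties));

        stream.print();
        env.execute("Universal Kafka connector example");
    }
}
```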
+### Compatibility
-### Compatibility
+The universal Kafka connector is compatible with older and newer Kafka brokers through the compatibility guarantees of the Kafka client API and broker.
+It is compatible with Kafka broker 0.11.0 or newer, depending on the features used.
+For details on Kafka compatibility, please refer to the [Kafka documentation](https://kafka.apache.org/protocol.html#protocol_compatibility).
-The universal Kafka connector is compatible with older and newer Kafka brokers through the compatibility guarantees of the Kafka client API and broker.
-It is compatible with broker versions 0.11.0 or newer, depending on the features used.
-For details on Kafka compatibility, please refer to the [Kafka documentation](https://kafka.apache.org/protocol.html#protocol_compatibility).
+### Migrating Kafka Connector from 0.11 to universal
-### Migrating Kafka Connector from 0.11 to universal
+In order to perform the migration, please refer to the [upgrading jobs and Flink versions guide]({{ site.baseurl }}/ops/upgrading.html):
+* Use Flink 1.9 or newer for the entire process.
+* Do not upgrade Flink and the operators at the same time.
+* Make sure that the Kafka Consumer and/or Kafka Producer used in your job have assigned unique identifiers (uid); a sketch follows after this list.
+* Use the stop with savepoint feature to take a savepoint (for example by using `stop --withSavepoint`) [CLI command]({{ site.baseurl }}/ops/cli.html).
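For the uid step in the list above, a minimal sketch of assigning stable identifiers to a Kafka source and sink; the topic names, connection properties, and uid strings are placeholder assumptions:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class KafkaMigrationUidExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        properties.setProperty("group.id", "migration-example");       // placeholder

        // Stable uids let the savepoint taken before the migration be mapped
        // back onto the connector operators after the upgrade.
        env.addSource(new FlinkKafkaConsumer<>("in-topic", new SimpleStringSchema(), properties))
                .uid("kafka-source")
                .addSink(new FlinkKafkaProducer<>("out-topic", new SimpleStringSchema(), properties))
                .uid("kafka-sink");

        env.execute("Kafka migration uid example");
    }
}
```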
Review comment:
```suggestion
* Use the stop with savepoint feature to take a savepoint (for example by using `stop --withSavepoint`) [CLI command]({{ site.baseurl }}/zh/ops/cli.html).
```