wuchong commented on a change in pull request #9764: [FLINK-12939][docs-zh] Translate "Apache Kafka Connector" page into Chinese
URL: https://github.com/apache/flink/pull/9764#discussion_r340037621
##########
File path: docs/dev/connectors/kafka.zh.md
##########
@@ -577,171 +474,101 @@ stream.addSink(myProducer);
val stream: DataStream[String] = ...
val myProducer = new FlinkKafkaProducer011[String](
- "localhost:9092", // broker list
- "my-topic", // target topic
- new SimpleStringSchema) // serialization schema
+ "localhost:9092", // broker 列表
+ "my-topic", // 目标 topic
+ new SimpleStringSchema) // 序列化 schema
-// versions 0.10+ allow attaching the records' event timestamp when writing them to Kafka;
-// this method is not available for earlier Kafka versions
+// 0.10+ 版本的 Kafka 允许在将记录写入 Kafka 时附加记录的事件时间戳;
+// 此方法不适用于早期版本的 Kafka
myProducer.setWriteTimestampToKafka(true)
stream.addSink(myProducer)
{% endhighlight %}
</div>
</div>
-The above examples demonstrate the basic usage of creating a Flink Kafka Producer
-to write streams to a single Kafka target topic. For more advanced usages, there
-are other constructor variants that allow providing the following:
-
- * *Providing custom properties*:
- The producer allows providing a custom properties configuration for the internal `KafkaProducer`.
- Please refer to the [Apache Kafka documentation](https://kafka.apache.org/documentation.html) for
- details on how to configure Kafka Producers.
- * *Custom partitioner*: To assign records to specific
- partitions, you can provide an implementation of a `FlinkKafkaPartitioner` to the
- constructor. This partitioner will be called for each record in the stream
- to determine which exact partition of the target topic the record should be sent to.
- Please see [Kafka Producer Partitioning Scheme](#kafka-producer-partitioning-scheme) for more details.
- * *Advanced serialization schema*: Similar to the consumer,
- the producer also allows using an advanced serialization schema called `KeyedSerializationSchema`,
- which allows serializing the key and value separately. It also allows overriding the target topic,
- so that one producer instance can send data to multiple topics.
-
-### Kafka Producer Partitioning Scheme
-
-By default, if a custom partitioner is not specified for the Flink Kafka Producer, the producer will use
-a `FlinkFixedPartitioner` that maps each Flink Kafka Producer parallel subtask to a single Kafka partition
-(i.e., all records received by a sink subtask will end up in the same Kafka partition).
-
-A custom partitioner can be implemented by extending the `FlinkKafkaPartitioner` class. All
-Kafka versions' constructors allow providing a custom partitioner when instantiating the producer.
-Note that the partitioner implementation must be serializable, as it will be transferred across Flink nodes.
-Also, keep in mind that any state in the partitioner will be lost on job failures since the partitioner
-is not part of the producer's checkpointed state.
-
-It is also possible to completely avoid using any kind of partitioner, and simply let Kafka partition
-the written records by their attached key (as determined for each record using the provided serialization schema).
-To do this, provide a `null` custom partitioner when instantiating the producer. It is important
-to provide `null` as the custom partitioner; as explained above, if a custom partitioner is not specified
-the `FlinkFixedPartitioner` is used instead.
-
-### Kafka Producers and Fault Tolerance
+上面的例子演示了创建 Flink Kafka Producer 来将流消息写入单个 Kafka 目标 topic 的基本用法。
+对于更高级的用法,还有其他构造函数变体,允许提供以下内容:
+
+ * *提供自定义属性*:producer 允许为内部 `KafkaProducer` 提供自定义属性配置。有关如何配置 Kafka Producer 的详细信息,请参阅 [Apache Kafka 文档](https://kafka.apache.org/documentation.html)。
+ * *自定义分区器*:要将消息分配给特定的分区,可以向构造函数提供一个 `FlinkKafkaPartitioner` 的实现。这个分区器将被流中的每条记录调用,以确定消息应该发送到目标 topic 的哪个具体分区里。有关详细信息,请参阅 [Kafka Producer 分区方案](#kafka-producer-partitioning-scheme)。
Review comment:
```suggestion
 * *自定义分区器*:要将消息分配给特定的分区,可以向构造函数提供一个 `FlinkKafkaPartitioner` 的实现。这个分区器将被流中的每条记录调用,以确定消息应该发送到目标 topic 的哪个具体分区里。有关详细信息,请参阅 [Kafka Producer 分区方案](#kafka-producer-分区方案)。
```
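As a side note for readers of this thread: the section under translation describes constructor variants for custom properties and a custom partitioner. Here is a minimal Scala sketch of how they fit together, assuming the 0.11 connector's four-argument constructor; the `HashByPayloadPartitioner` class, the topic name, and the broker address are illustrative assumptions, not part of this patch:

```scala
import java.util.{Optional, Properties}

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner

// Hypothetical partitioner: routes each record by a hash of its payload
// instead of the default FlinkFixedPartitioner (one partition per subtask).
// It must be serializable, and any state it holds is lost on job failure,
// since it is not part of the producer's checkpointed state.
class HashByPayloadPartitioner extends FlinkKafkaPartitioner[String] {
  override def partition(
      record: String,
      key: Array[Byte],
      value: Array[Byte],
      targetTopic: String,
      partitions: Array[Int]): Int =
    // Map the record onto one of the partitions Kafka reports for the topic.
    partitions(Math.floorMod(record.hashCode, partitions.length))
}

// Custom properties handed through to the internal KafkaProducer.
val props = new Properties()
props.setProperty("bootstrap.servers", "localhost:9092")

val myProducer = new FlinkKafkaProducer011[String](
  "my-topic",             // default target topic
  new SimpleStringSchema, // serialization schema
  props,                  // custom properties
  Optional.of[FlinkKafkaPartitioner[String]](new HashByPayloadPartitioner))
```

Passing an empty `Optional` instead should fall back to Kafka's own key-based partitioning, which is the 0.11 counterpart of the `null` partitioner mentioned in the English text above.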
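Similarly, a sketch of the `KeyedSerializationSchema` from the *Advanced serialization schema* bullet, which serializes key and value separately and can override the target topic. The `"key:value"` record format and the `audit-topic` name are assumptions made up for illustration:

```scala
import java.nio.charset.StandardCharsets

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema

// Hypothetical schema for records formatted as "key:value".
class KeyValueSchema extends KeyedSerializationSchema[String] {
  // Split at the first ':'; records without one get an empty key.
  private def split(element: String): (String, String) = {
    val i = element.indexOf(':')
    if (i < 0) ("", element)
    else (element.substring(0, i), element.substring(i + 1))
  }

  override def serializeKey(element: String): Array[Byte] =
    split(element)._1.getBytes(StandardCharsets.UTF_8)

  override def serializeValue(element: String): Array[Byte] =
    split(element)._2.getBytes(StandardCharsets.UTF_8)

  // A non-null return value overrides the producer's default topic, so one
  // producer instance can send data to multiple topics; returning null keeps
  // the topic passed to the constructor.
  override def getTargetTopic(element: String): String =
    if (element.startsWith("audit:")) "audit-topic" else null
}
```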
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services