wuchong commented on a change in pull request #9764: [FLINK-12939][docs-zh] Translate "Apache Kafka Connector" page into Chinese
URL: https://github.com/apache/flink/pull/9764#discussion_r340031136
 
 

 ##########
 File path: docs/dev/connectors/kafka.zh.md
 ##########
 @@ -342,74 +300,51 @@ myConsumer.setStartFromSpecificOffsets(specificStartOffsets)
 </div>
 </div>
 
-The above example configures the consumer to start from the specified offsets for
-partitions 0, 1, and 2 of topic `myTopic`. The offset values should be the
-next record that the consumer should read for each partition. Note that
-if the consumer needs to read a partition which does not have a specified
-offset within the provided offsets map, it will fallback to the default
-group offsets behaviour (i.e. `setStartFromGroupOffsets()`) for that
-particular partition.
+上面的例子中使用的配置是指定从 `myTopic` 主题的 0 、1 和 2 分区的指定偏移量开始消费。offset 值是 consumer 应该为每个分区读取的下一条消息。请注意:如果 consumer 需要读取在提供的 offset 映射中没有指定 offset 的分区,那么它将回退到该特定分区的默认组偏移行为(即 `setStartFromGroupOffsets()`)。
+
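For readers following along, the quoted paragraph corresponds to the page's specific-offsets example. A minimal Java sketch of that setup, reusing the `myConsumer` name from the hunk header above; the broker address, group id, and offset values 23/31/43 are placeholders, not values from the PR:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
props.setProperty("group.id", "test");                    // assumed consumer group

FlinkKafkaConsumer<String> myConsumer =
    new FlinkKafkaConsumer<>("myTopic", new SimpleStringSchema(), props);

// Each value is the offset of the *next* record to read for that partition.
Map<KafkaTopicPartition, Long> specificStartOffsets = new HashMap<>();
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 0), 23L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 1), 31L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 2), 43L);

// Partitions missing from the map fall back to setStartFromGroupOffsets() behaviour.
myConsumer.setStartFromSpecificOffsets(specificStartOffsets);
```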
 
-Note that these start position configuration methods do not affect the start position when the job is
-automatically restored from a failure or manually restored using a savepoint.
-On restore, the start position of each Kafka partition is determined by the
-offsets stored in the savepoint or checkpoint
-(please see the next section for information about checkpointing to enable
-fault tolerance for the consumer).
+请注意:当 Job 从故障中自动恢复或使用 savepoint 手动恢复时,这些起始位置配置方法不会影响消费的起始位置。在恢复时,每个 Kafka 分区的起始位置由存储在 savepoint 或 checkpoint 中的 offset 确定(有关 checkpointing 的信息,请参阅下一节,以便为 consumer 启用容错功能)。
 
-### Kafka Consumers and Fault Tolerance
+### Kafka Consumer 和容错
 
-With Flink's checkpointing enabled, the Flink Kafka Consumer will consume records from a topic and periodically checkpoint all
-its Kafka offsets, together with the state of other operations, in a consistent manner. In case of a job failure, Flink will restore
-the streaming program to the state of the latest checkpoint and re-consume the records from Kafka, starting from the offsets that were
-stored in the checkpoint.
+伴随着启用 Flink 的 checkpointing 后,Flink Kafka Consumer 将使用 topic 中的记录,并以一致的方式定期检查其所有 Kafka offset 和其他算子的状态。``如果 Job 失败,Flink 会将流式程序恢复到最新 checkpoint 的状态,并从存储在 checkpoint 中的 offset 开始重新消费 Kafka 中的消息。
 
 Review comment:
   ```suggestion
   伴随着启用 Flink 的 checkpointing 后,Flink Kafka Consumer 将使用 topic 中的记录,并以一致的方式定期检查其所有 Kafka offset 和其他算子的状态。如果 Job 失败,Flink 会将流式程序恢复到最新 checkpoint 的状态,并从存储在 checkpoint 中的 offset 开始重新消费 Kafka 中的消息。
   ```
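For context on the behaviour the quoted paragraph describes: the consumer's offsets only become part of Flink's consistent snapshots once checkpointing is enabled on the environment. A minimal Java sketch, reusing the hypothetical `myConsumer` from the earlier snippet; the 5000 ms interval and job name are arbitrary choices:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Snapshot the consumer's Kafka offsets, together with other operator state,
// every 5000 ms.
env.enableCheckpointing(5000);

// On failure the job restores from the latest checkpoint and re-consumes Kafka
// starting from the offsets stored there, regardless of the configured start position.
env.addSource(myConsumer).print();

env.execute("kafka-consumer-sketch"); // job name is arbitrary
```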

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
