huxixiang commented on a change in pull request #16547:
URL: https://github.com/apache/flink/pull/16547#discussion_r680776455



##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -276,81 +270,65 @@ esSinkBuilder.setRestClientFactory(new RestClientFactory {
   }
 })
 
-// finally, build and add the sink to the job's pipeline
+// finally, build and add the sink to the job's pipeline
 input.addSink(esSinkBuilder.build)
 ```
 {{< /tab >}}
 {{< /tabs >}}
 
-For Elasticsearch versions that still uses the now deprecated `TransportClient` to communicate
-with the Elasticsearch cluster (i.e., versions equal or below 5.x), note how a `Map` of `String`s
-is used to configure the `ElasticsearchSink`. This config map will be directly
-forwarded when creating the internally used `TransportClient`.
-The configuration keys are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter that must correspond to
-the name of your cluster.
+For Elasticsearch versions that still use the now deprecated `TransportClient` to communicate with the Elasticsearch cluster (i.e., versions equal to or below 5.x), note how a `Map` of `String`s is used to configure the `ElasticsearchSink`.
+This config map will be forwarded directly when creating the internally used `TransportClient`.
+The configuration keys are documented in the Elasticsearch documentation [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
+Especially important is the `cluster.name` parameter, which must correspond to the name of your cluster.
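As a minimal sketch of the map-based configuration described above (the cluster name and the flush setting are illustrative values, not from the original docs):

```java
import java.util.HashMap;
import java.util.Map;

public class EsSinkConfig {
    public static Map<String, String> buildConfig() {
        Map<String, String> config = new HashMap<>();
        // must correspond to the name of your Elasticsearch cluster
        config.put("cluster.name", "my-cluster-name");
        // illustrative: flush the bulk buffer after every single action
        config.put("bulk.flush.max.actions", "1");
        return config;
    }
}
```

This map would then be handed to the `ElasticsearchSink` constructor for the 5.x-and-below connectors.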
 
-For Elasticsearch 6.x and above, internally, the `RestHighLevelClient` is used for cluster communication.
-By default, the connector uses the default configurations for the REST client. To have custom
-configuration for the REST client, users can provide a `RestClientFactory` implementation when
-setting up the `ElasticsearchClient.Builder` that builds the sink.
+For Elasticsearch 6.x and above, the `RestHighLevelClient` is used internally for cluster communication.
+By default, the connector uses the default configuration for the REST client.
+To use a custom configuration for the REST client, users can provide a `RestClientFactory` implementation when setting up the `ElasticsearchClient.Builder` that builds the sink.
 
-Also note that the example only demonstrates performing a single index
-request for each incoming element. Generally, the `ElasticsearchSinkFunction`
-can be used to perform multiple requests of different types (ex.,
-`DeleteRequest`, `UpdateRequest`, etc.). 
+Also note that the example only demonstrates performing a single index request for each incoming element.
+Generally, the `ElasticsearchSinkFunction` can be used to perform multiple requests of different types (e.g., `DeleteRequest`, `UpdateRequest`, etc.).
 
-Internally, each parallel instance of the Flink Elasticsearch Sink uses
-a `BulkProcessor` to send action requests to the cluster.
-This will buffer elements before sending them in bulk to the cluster. The `BulkProcessor`
-executes bulk requests one at a time, i.e. there will be no two concurrent
-flushes of the buffered actions in progress.
+Internally, each parallel instance of the Flink Elasticsearch Sink uses a `BulkProcessor` to send action requests to the cluster.
+This buffers elements before they are sent to the cluster in bulk.
+The `BulkProcessor` executes bulk requests one at a time, i.e. there will never be two concurrent flushes of the buffered actions in progress.
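The flush behaviour of that internal `BulkProcessor` can be tuned on the sink builder; a hedged configuration sketch (the threshold values below are illustrative, not defaults):

```java
// esSinkBuilder is the ElasticsearchSink.Builder from the example above
esSinkBuilder.setBulkFlushMaxActions(1000);  // flush once 1000 actions are buffered
esSinkBuilder.setBulkFlushMaxSizeMb(5);      // ...or once the buffer reaches 5 MB
esSinkBuilder.setBulkFlushInterval(60000L);  // ...or at the latest every 60 seconds
```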
 
-### Elasticsearch Sinks and Fault Tolerance
+### Elasticsearch Sinks and Fault Tolerance
 
-With Flink’s checkpointing enabled, the Flink Elasticsearch Sink guarantees
-at-least-once delivery of action requests to Elasticsearch clusters. It does
-so by waiting for all pending action requests in the `BulkProcessor` at the
-time of checkpoints. This effectively assures that all requests before the
-checkpoint was triggered have been successfully acknowledged by Elasticsearch, before
-proceeding to process more records sent to the sink.
+With Flink's checkpointing enabled, the Flink Elasticsearch Sink guarantees at-least-once delivery of action requests to Elasticsearch clusters.
+It does so by waiting for all pending action requests in the `BulkProcessor` at the time of checkpoints.
+This effectively assures that all requests issued before the checkpoint was triggered have been successfully acknowledged by Elasticsearch before proceeding to process more records sent to the sink.
 
-More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{< ref "docs/learn-flink/fault_tolerance" >}}).
+More details on checkpoints and fault tolerance can be found in the [fault tolerance docs]({{< ref "docs/learn-flink/fault_tolerance" >}}).
 
-To use fault tolerant Elasticsearch Sinks, checkpointing of the topology needs to be enabled at the execution environment:
+To use fault-tolerant Elasticsearch Sinks, checkpointing of the topology needs to be enabled at the execution environment:
 
 {{< tabs "d00d1e93-4844-40d7-b0ec-9ec37e73145e" >}}
 {{< tab "Java" >}}
 ```java
 final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.enableCheckpointing(5000); // checkpoint every 5000 msecs
+env.enableCheckpointing(5000); // checkpoint every 5000 msecs
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.enableCheckpointing(5000) // checkpoint every 5000 msecs
+env.enableCheckpointing(5000) // checkpoint every 5000 msecs
 ```
 {{< /tab >}}
 {{< /tabs >}}
 
 <p style="border-radius: 5px; padding: 5px" class="bg-danger">
-<b>NOTE</b>: Users can disable flushing if they wish to do so, by calling
-<b>disableFlushOnCheckpoint()</b> on the created <b>ElasticsearchSink</b>. Be aware
-that this essentially means the sink will not provide any strong
-delivery guarantees anymore, even with checkpoint for the topology enabled.
+<b>NOTE</b>: Users can disable flushing if they wish to do so, by calling

Review comment:
       Accepted, thanks.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

