wuchong commented on a change in pull request #9008: [FLINK-12942][docs-zh] Translate Elasticsearch Connector page into Ch…
URL: https://github.com/apache/flink/pull/9008#discussion_r300868369
########## File path: docs/dev/connectors/elasticsearch.zh.md ##########
@@ -281,43 +275,21 @@ input.addSink(esSinkBuilder.build)
 </div>
 </div>
 
-For Elasticsearch versions that still uses the now deprecated `TransportClient` to communicate
-with the Elasticsearch cluster (i.e., versions equal or below 5.x), note how a `Map` of `String`s
-is used to configure the `ElasticsearchSink`. This config map will be directly
-forwarded when creating the internally used `TransportClient`.
-The configuration keys are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter that must correspond to
-the name of your cluster.
+对于 Elasticsearch(即版本等于或低于5.x)的集群仍使用现已弃用的 `TransportClient` 进行通信,请注意 `ElasticsearchSink` 使用一个由 `String` 构成的 `Map` 来进行参数配置 。这个配置 map 将直接在内部创建使用`TransportClient` 时转发。配置关键参数在 Elasticsearch 文档中[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)可以查看。特别重要的是 `cluster.name` 参数必须对应你集群的名称。
 
-For Elasticsearch 6.x and above, internally, the `RestHighLevelClient` is used for cluster communication.
-By default, the connector uses the default configurations for the REST client. To have custom
-configuration for the REST client, users can provide a `RestClientFactory` implementation when
-setting up the `ElasticsearchClient.Builder` that builds the sink.
+对于 Elasticsearch 6.x及更高版本,在内部,使用 `RestHighLevelClient` 进行集群通信。默认情况下,连接器使用 REST 客户端的默认配置。要想自定义 REST 客户端的配置,用户可以在提供 `RestClientFactory` 实现时设置构建 sink 的 `ElasticsearchClient.Builder` 。
 
-Also note that the example only demonstrates performing a single index
-request for each incoming element. Generally, the `ElasticsearchSinkFunction`
-can be used to perform multiple requests of different types (ex.,
-`DeleteRequest`, `UpdateRequest`, etc.).
+另外请注意,该示例仅展示了执行单个索引请求的每个传入元素。通常,`ElasticsearchSinkFunction` 可用于执行不同类型的多个请求(例如,`DeleteRequest` ,`UpdateRequest` 等)。
 
-Internally, each parallel instance of the Flink Elasticsearch Sink uses
-a `BulkProcessor` to send action requests to the cluster.
-This will buffer elements before sending them in bulk to the cluster. The `BulkProcessor`
-executes bulk requests one at a time, i.e. there will be no two concurrent
-flushes of the buffered actions in progress.
+在内部,Flink Elasticsearch Sink 的每个并行实例都使用一个 `BulkProcessor` ,用于向集群发送动作请求。这将在批量发送到集群之前缓冲元素。 `BulkProcessor` 一次执行一个批量请求,即不会有两个并发刷新正在进行的缓冲操作。
 
-### Elasticsearch Sinks and Fault Tolerance
+### Elasticsearch Sinks and 容错处理
 
-With Flink’s checkpointing enabled, the Flink Elasticsearch Sink guarantees
-at-least-once delivery of action requests to Elasticsearch clusters. It does
-so by waiting for all pending action requests in the `BulkProcessor` at the
-time of checkpoints. This effectively assures that all requests before the
-checkpoint was triggered have been successfully acknowledged by Elasticsearch, before
-proceeding to process more records sent to the sink.
+在启用 Flink 的 Checkpoint 后,Flink Elasticsearch Sink 可以保证至少一次向 Elasticsearch 集群传递操作请求。确实如此所以通过等待 `BulkProcessor` 中的所有待处理操作请求 checkpoints 的时间。这有效地保证了之前的所有请求在触发 checkpoint 之前已被 Elasticsearch 成功接收确认继续处理发送到接收器的更多记录。
 
-More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{site.baseurl}}/internals/stream_checkpointing.html).
+更多有关 checkpoint 和容错的详细信息,请参见[容错相关文档]({{site.baseurl}}/zh/internals/stream_checkpointing.html)。
 
-To use fault tolerant Elasticsearch Sinks, checkpointing of the topology needs to be enabled at the execution environment:
+要使用容错机制的 Elasticsearch Sink,需要在执行环境中启用 topology 的 Checkpoint :

Review comment:

```suggestion
要使用容错机制的 Elasticsearch Sink,需要在执行环境中开启拓扑的 checkpoint 机制:
```
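For reviewers following the paragraphs quoted above, a few hedged Java sketches may help check the translated wording against what the code actually does. First, the `TransportClient` paragraph (Elasticsearch 5.x and below) is about the `Map<String, String>` handed to the sink; roughly, with placeholder values for the cluster name and one optional bulk-flush key shown for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Configuration map that is forwarded to the internally created TransportClient (ES <= 5.x).
Map<String, String> config = new HashMap<>();
// Must match the name of the target Elasticsearch cluster ("my-cluster-name" is a placeholder).
config.put("cluster.name", "my-cluster-name");
// Optional: flush the internal BulkProcessor after every single action (illustrative setting).
config.put("bulk.flush.max.actions", "1");
```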
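Second, the 6.x paragraph about customizing the REST client: the hook is `setRestClientFactory` on the sink builder (the English source's `ElasticsearchClient.Builder` presumably refers to `ElasticsearchSink.Builder`). A minimal sketch assuming the `flink-connector-elasticsearch6` API; the host, header, and path prefix values are placeholders:

```java
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;

import java.util.Arrays;
import java.util.List;

List<HttpHost> httpHosts = Arrays.asList(new HttpHost("127.0.0.1", 9200, "http"));

ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<>(
        httpHosts,
        new ElasticsearchSinkFunction<String>() {
            @Override
            public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
                // Requests for each incoming element would be built and added here.
            }
        });

// Replace the default REST client configuration used by the internal RestHighLevelClient.
esSinkBuilder.setRestClientFactory(restClientBuilder -> {
    restClientBuilder.setDefaultHeaders(
            new BasicHeader[] {new BasicHeader("Content-Type", "application/json")});
    restClientBuilder.setPathPrefix("/es"); // placeholder path prefix
});
```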
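Third, for the sentence about issuing requests other than plain index requests: a hedged sketch of an `ElasticsearchSinkFunction` that emits either an `UpdateRequest` (with upsert) or a `DeleteRequest`, assuming Elasticsearch 6.x request classes and a made-up `delete:` prefix convention; the index, type, and field names are placeholders:

```java
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.update.UpdateRequest;

import java.util.HashMap;
import java.util.Map;

public class MixedRequestSinkFunction implements ElasticsearchSinkFunction<String> {

    @Override
    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
        if (element.startsWith("delete:")) {
            // Delete the document whose id follows the hypothetical "delete:" prefix.
            indexer.add(new DeleteRequest("my-index", "my-type", element.substring("delete:".length())));
        } else {
            Map<String, Object> json = new HashMap<>();
            json.put("data", element);
            // Update the document, creating it via upsert if it does not exist yet.
            indexer.add(new UpdateRequest("my-index", "my-type", element).doc(json).upsert(json));
        }
    }
}
```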
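Finally, the sentence that the suggestion above rewords is followed in the English page by enabling checkpointing on the execution environment, which is what gives the sink its at-least-once guarantee. A minimal sketch; the 5000 ms interval is only an illustrative value:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Enable checkpointing for the topology; at each checkpoint the Elasticsearch sink waits
// until all action requests buffered in its BulkProcessor have been acknowledged.
env.enableCheckpointing(5000); // checkpoint every 5000 ms (illustrative interval)
```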
