RocMarshal commented on a change in pull request #16547:
URL: https://github.com/apache/flink/pull/16547#discussion_r680612732



##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -51,29 +49,25 @@ of the Elasticsearch installation:
         <td>{{< artifact flink-connector-elasticsearch6 withScalaVersion >}}</td>
     </tr>
     <tr>
-        <td>7 and later versions</td>
+        <td>7 及更高版本</td>
         <td>{{< artifact flink-connector-elasticsearch7 withScalaVersion >}}</td>
     </tr>
   </tbody>
 </table>
 
-Note that the streaming connectors are currently not part of the binary
-distribution. See [here]({{< ref "docs/dev/datastream/project-configuration" >}}) for information
-about how to package the program with the libraries for cluster execution.
+请注意,流连接器目前不是二进制发行版的一部分。
+有关如何将程序和用于集群执行的库一起打包,参考[此处]({{< ref "docs/dev/datastream/project-configuration" >}})
 
-## Installing Elasticsearch
+## 安装 Elasticsearch
 
-Instructions for setting up an Elasticsearch cluster can be found
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
-Make sure to set and remember a cluster name. This must be set when
-creating an `ElasticsearchSink` for requesting document actions against your cluster.
+Elasticsearch 集群的设置可以参考[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html)。
+确保设置并记住集群名称。这是在创建 `ElasticsearchSink` 请求集群文档操作时必须要设置的。
 
 ## Elasticsearch Sink
 
-The `ElasticsearchSink` uses a `TransportClient` (before 6.x) or `RestHighLevelClient` (starting with 6.x) to communicate with an
-Elasticsearch cluster.
+`ElasticsearchSink` 使用 `TransportClient` (6.x 之前) 或者 `RestHighLevelClient` (6.x 开始) 和 Elasticsearch 集群进行通信。

Review comment:
       ```suggestion
   `ElasticsearchSink` 使用 `TransportClient`(6.x 之前)或者 `RestHighLevelClient`(6.x 开始)和 Elasticsearch 集群进行通信。
   ```
   Just keep an English space between English words and Chinese.
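
For context on this hunk: a minimal Java sketch of the `RestHighLevelClient`-based path the sentence describes, using the `flink-connector-elasticsearch7` API (the host, index name, and JSON field are illustrative, not from the doc):

```java
DataStream<String> input = ...;

List<HttpHost> httpHosts = new ArrayList<>();
httpHosts.add(new HttpHost("127.0.0.1", 9200, "http")); // illustrative host

ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<>(
    httpHosts,
    (ElasticsearchSinkFunction<String>) (element, ctx, indexer) ->
        // index each incoming string as a one-field JSON document
        indexer.add(Requests.indexRequest()
            .index("my-index")
            .source(Collections.singletonMap("data", element))));

input.addSink(esSinkBuilder.build());
```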

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -51,29 +49,25 @@ of the Elasticsearch installation:
         <td>{{< artifact flink-connector-elasticsearch6 withScalaVersion >}}</td>
     </tr>
     <tr>
-        <td>7 and later versions</td>
+        <td>7 及更高版本</td>
         <td>{{< artifact flink-connector-elasticsearch7 withScalaVersion >}}</td>
     </tr>
   </tbody>
 </table>
 
-Note that the streaming connectors are currently not part of the binary
-distribution. See [here]({{< ref "docs/dev/datastream/project-configuration" >}}) for information
-about how to package the program with the libraries for cluster execution.
+请注意,流连接器目前不是二进制发行版的一部分。
+有关如何将程序和用于集群执行的库一起打包,参考[此处]({{< ref "docs/dev/datastream/project-configuration" >}})

Review comment:
       `[此处]` -> `[此文档]`?
   only a minor comment.

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -396,13 +374,13 @@ input.addSink(new ElasticsearchSink(
                 RequestIndexer indexer) {
 
             if (ExceptionUtils.findThrowable(failure, EsRejectedExecutionException.class).isPresent()) {
-                // full queue; re-add document for indexing
+                 // 队列已满;重新添加文档进行索引
                 indexer.add(action)
             } else if (ExceptionUtils.findThrowable(failure, ElasticsearchParseException.class).isPresent()) {
-                // malformed document; simply drop request without failing sink
+                 // 文档格式错误;简单地删除请求避免 sink 失败

Review comment:
       ```suggestion
                   // 文档格式错误;简单地删除请求避免 sink 失败
   ```
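
For reference, the handler this hunk is excerpted from, sketched as one self-contained unit (a sketch assuming the connector's `ActionRequestFailureHandler` interface; the branch behavior mirrors the hunk's comments):

```java
ActionRequestFailureHandler failureHandler = new ActionRequestFailureHandler() {
    @Override
    public void onFailure(ActionRequest action, Throwable failure, int restStatusCode, RequestIndexer indexer) throws Throwable {
        if (ExceptionUtils.findThrowable(failure, EsRejectedExecutionException.class).isPresent()) {
            // full queue; re-add the document so it is indexed on a later bulk request
            indexer.add(action);
        } else if (ExceptionUtils.findThrowable(failure, ElasticsearchParseException.class).isPresent()) {
            // malformed document; drop the request without failing the sink
        } else {
            // any other failure: rethrow to fail the sink
            throw failure;
        }
    }
};
```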

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -276,81 +270,65 @@ esSinkBuilder.setRestClientFactory(new RestClientFactory {
   }
 })
 
-// finally, build and add the sink to the job's pipeline
+// 最后,构建并添加 sink 到作业管道中
 input.addSink(esSinkBuilder.build)
 ```
 {{< /tab >}}
 {{< /tabs >}}
 
-For Elasticsearch versions that still uses the now deprecated `TransportClient` to communicate
-with the Elasticsearch cluster (i.e., versions equal or below 5.x), note how a `Map` of `String`s
-is used to configure the `ElasticsearchSink`. This config map will be directly
-forwarded when creating the internally used `TransportClient`.
-The configuration keys are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter that must correspond to
-the name of your cluster.
+对于仍然使用已被弃用的 `TransportClient` 和 Elasticsearch 集群通信的 Elasticsearch 版本 (即,小于或等于 5.x 的版本),
+请注意如何使用一个 `String` 类型的 `Map` 配置 `ElasticsearchSink`。在创建内部使用的 `TransportClient` 时将直接转发此配置映射。
+配置项参见[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)的 Elasticsearch 文档中。
+特别重要的是参数 `cluster.name` 必须和你的集群名称对应上。
 
-For Elasticsearch 6.x and above, internally, the `RestHighLevelClient` is used for cluster communication.
-By default, the connector uses the default configurations for the REST client. To have custom
-configuration for the REST client, users can provide a `RestClientFactory` implementation when
-setting up the `ElasticsearchClient.Builder` that builds the sink.
+对于 Elasticsearch 6.x 及以上版本,内部使用 `RestHighLevelClient` 和集群通信。
+默认情况下,连接器使用 REST 客户端的默认配置。
+如果要使用自定义配置的 REST 客户端,用户可以在设置构建 sink 的 `ElasticsearchClient.Builder` 时提供一个 `RestClientFactory` 的实现。
 
-Also note that the example only demonstrates performing a single index
-request for each incoming element. Generally, the `ElasticsearchSinkFunction`
-can be used to perform multiple requests of different types (ex.,
-`DeleteRequest`, `UpdateRequest`, etc.). 
+另外注意,该示例仅演示了对每个传入的元素执行单个索引请求。
+通常,`ElasticsearchSinkFunction` 可用于执行多个不同类型的请求(例如 `DeleteRequest`、 `UpdateRequest` 等)。
 
-Internally, each parallel instance of the Flink Elasticsearch Sink uses
-a `BulkProcessor` to send action requests to the cluster.
-This will buffer elements before sending them in bulk to the cluster. The `BulkProcessor`
-executes bulk requests one at a time, i.e. there will be no two concurrent
-flushes of the buffered actions in progress.
+在内部,Flink Elasticsearch Sink 的每个并行实例使用一个 `BulkProcessor` 向集群发送操作请求。
+这将使得元素在发送到集群之前进行批量缓存。
+`BulkProcessor` 一次执行一个批量请求,即不会存在两个并行刷新缓存的操作。
 
-### Elasticsearch Sinks and Fault Tolerance
+### Elasticsearch Sinks 和容错
 
-With Flink’s checkpointing enabled, the Flink Elasticsearch Sink guarantees
-at-least-once delivery of action requests to Elasticsearch clusters. It does
-so by waiting for all pending action requests in the `BulkProcessor` at the
-time of checkpoints. This effectively assures that all requests before the
-checkpoint was triggered have been successfully acknowledged by Elasticsearch, before
-proceeding to process more records sent to the sink.
+启用 Flink checkpoint 后,Flink Elasticsearch Sink 保证至少一次将操作请求发送到 Elasticsearch 集群。
+这是通过在进行 checkpoint 时等待 `BulkProcessor` 中所有挂起的操作请求来实现。
+这有效地保证了在触发 checkpoint 之前所有的请求被 Elasticsearch 成功确认,然后继续处理发送到 sink 的记录。
 
-More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{< ref "docs/learn-flink/fault_tolerance" >}}).
+关于 checkpoint 和容错的更多详细信息,请参见[容错文档]({{< ref "docs/learn-flink/fault_tolerance" >}})。
 
-To use fault tolerant Elasticsearch Sinks, checkpointing of the topology needs to be enabled at the execution environment:
+要使用具有容错特性的 Elasticsearch Sinks,需要在执行环境中启用作业拓扑的 checkpoint:
 
 {{< tabs "d00d1e93-4844-40d7-b0ec-9ec37e73145e" >}}
 {{< tab "Java" >}}
 ```java
 final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-env.enableCheckpointing(5000); // checkpoint every 5000 msecs
+env.enableCheckpointing(5000); // 每 5000 毫秒执行一次 checkpoint
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment()
-env.enableCheckpointing(5000) // checkpoint every 5000 msecs
+env.enableCheckpointing(5000) // 每 5000 毫秒执行一次 checkpoint
 ```
 {{< /tab >}}
 {{< /tabs >}}
 
 <p style="border-radius: 5px; padding: 5px" class="bg-danger">
-<b>NOTE</b>: Users can disable flushing if they wish to do so, by calling
-<b>disableFlushOnCheckpoint()</b> on the created <b>ElasticsearchSink</b>. Be aware
-that this essentially means the sink will not provide any strong
-delivery guarantees anymore, even with checkpoint for the topology enabled.
+<b>注意</b>: 如果用户愿意,可以通过在创建的

Review comment:
       ```suggestion
   <b>注意</b>:如果用户愿意,可以通过在创建的
   ```
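
A short sketch of the opt-out this note describes (assuming an `esSinkBuilder` and `input` as in the earlier snippets; `disableFlushOnCheckpoint()` is the method named in the hunk):

```java
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000); // checkpoint every 5000 ms

ElasticsearchSink<String> sink = esSinkBuilder.build();
// Opt out of waiting for pending bulk requests at checkpoints. After this call
// the sink no longer gives strong delivery guarantees, even with checkpointing on.
sink.disableFlushOnCheckpoint();
input.addSink(sink);
```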

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -411,61 +389,44 @@ input.addSink(new ElasticsearchSink(
 {{< /tab >}}
 {{< /tabs >}}
 
-The above example will let the sink re-add requests that failed due to
-queue capacity saturation and drop requests with malformed documents, without
-failing the sink. For all other failures, the sink will fail. If a `ActionRequestFailureHandler`
-is not provided to the constructor, the sink will fail for any kind of error.
+上面的示例 sink 重新添加由于队列容量已满而失败的请求,同时丢弃文档格式错误的请求,而不会使 sink 失败。
+对于其它故障,sink 将会失败。如果未向构造器提供一个 `ActionRequestFailureHandler`,那么任何类型的错误都会导致 sink 失败。
 
-Note that `onFailure` is called for failures that still occur only after the
-`BulkProcessor` internally finishes all backoff retry attempts.
-By default, the `BulkProcessor` retries to a maximum of 8 attempts with
-an exponential backoff. For more information on the behaviour of the
-internal `BulkProcessor` and how to configure it, please see the following section.
+注意,`onFailure` 仅在 `BulkProcessor` 内部完成所有延迟重试后仍发生故障时被调用。
+默认情况下,`BulkProcessor` 最多重试 8 次,两次重试之间的等待时间呈指数增长。有关 `BulkProcessor` 内部行为以及如何配置它的更多信息,请参阅以下部分。
 
-By default, if a failure handler is not provided, the sink uses a
-`NoOpFailureHandler` that simply fails for all kinds of exceptions. The
-connector also provides a `RetryRejectedExecutionFailureHandler` implementation
-that always re-add requests that have failed due to queue capacity saturation.
+默认情况下,如果未提供失败处理程序,那么 sink 使用 `NoOpFailureHandler` 来简单处理所有的异常。
+连接器还提供了一个 `RetryRejectedExecutionFailureHandler` 实现,它总是重新添加由于队列容量已满导致失败的请求。
 
 <p style="border-radius: 5px; padding: 5px" class="bg-danger">
-<b>IMPORTANT</b>: Re-adding requests back to the internal <b>BulkProcessor</b>
-on failures will lead to longer checkpoints, as the sink will also
-need to wait for the re-added requests to be flushed when checkpointing.
-For example, when using <b>RetryRejectedExecutionFailureHandler</b>, checkpoints
-will need to wait until Elasticsearch node queues have enough capacity for
-all the pending requests. This also means that if re-added requests never
-succeed, the checkpoint will never finish.
+<b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时, sink 还需要等待重新添加的请求被刷新。
+例如,当使用 <b>RetryRejectedExecutionFailureHandler</b> 时,
+checkpoint 需要等到 Elasticsearch 节点队列有足够的容量来处理所有挂起的请求。
+这也就意味着如果重新添加的请求永远不成功,checkpoint 也将永远不会完成。
 </p>
 
-### Configuring the Internal Bulk Processor
+### 配置内部批量处理器
 
-The internal `BulkProcessor` can be further configured for its behaviour
-on how buffered action requests are flushed, by setting the following values in
-the provided `Map<String, String>`:
+通过在提供的 `Map<String, String>` 中设置以下值,内部 `BulkProcessor` 可以进一步配置其如何刷新缓存操作请求的行为:
 
- * **bulk.flush.max.actions**: Maximum amount of actions to buffer before flushing.
- * **bulk.flush.max.size.mb**: Maximum size of data (in megabytes) to buffer before flushing.
- * **bulk.flush.interval.ms**: Interval at which to flush regardless of the amount or size of buffered actions.
- 
-For versions 2.x and above, configuring how temporary request errors are
-retried is also supported:
- 
- * **bulk.flush.backoff.enable**: Whether or not to perform retries with backoff delay for a flush
- if one or more of its actions failed due to a temporary `EsRejectedExecutionException`.
- * **bulk.flush.backoff.type**: The type of backoff delay, either `CONSTANT` or `EXPONENTIAL`
- * **bulk.flush.backoff.delay**: The amount of delay for backoff. For constant backoff, this
- is simply the delay between each retry. For exponential backoff, this is the initial base delay.
- * **bulk.flush.backoff.retries**: The amount of backoff retries to attempt.
+ * **bulk.flush.max.actions**:刷新前最大缓存的操作数。
+ * **bulk.flush.max.size.mb**:刷新前最大缓存的数据量(以兆字节为单位)。
+ * **bulk.flush.interval.ms**:刷新的时间间隔(不论缓存操作的数量或大小如何)。
 
-More information about Elasticsearch can be found [here](https://elastic.co).
+对于 2.x 及以上版本,还支持配置如何重试临时请求错误:
 
-## Packaging the Elasticsearch Connector into an Uber-Jar
+ * **bulk.flush.backoff.enable**:如果一个或多个请求由于临时的 `EsRejectedExecutionException` 而失败,是否为刷新执行带有延迟的重试操作。
+ * **bulk.flush.backoff.type**:延迟重试的类型,`CONSTANT` 或者 `EXPONENTIAL`。
+ * **bulk.flush.backoff.delay**:延迟重试的时间间隔。对于常量延迟来说,此值是每次重试间的间隔。对于指数延迟来说,此值是延迟的初始值。
+ * **bulk.flush.backoff.retries**:延迟重试次数。
 
-For the execution of your Flink program, it is recommended to build a
-so-called uber-jar (executable jar) containing all your dependencies
-(see [here]({{< ref "docs/dev/datastream/project-configuration" >}}) for further information).
+可以在[此处](https://elastic.co)找到 Elasticsearch 的更多信息。
 
-Alternatively, you can put the connector's jar file into Flink's `lib/` folder to make it available
-system-wide, i.e. for all job being run.
+## 将 Elasticsearch 连接器打包到 Uber-Jar 中
+
+为了执行你的 Flink 程序,建议构建一个 uber-jar (可执行的 jar),其中包含了你所有的依赖
+(更多信息参见[此处]({{< ref "docs/dev/datastream/project-configuration" >}}))。
+或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在全局范围内可用,即用于所有的作业。

Review comment:
       ```suggestion
   或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在全局范围内可用,即可用于所有的作业。
   ```
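
Putting the `bulk.flush.*` keys from this hunk together, a sketch of the `Map`-based configuration for the `TransportClient` versions (all values are illustrative, not recommendations):

```java
Map<String, String> config = new HashMap<>();
config.put("cluster.name", "my-cluster-name");

// flush thresholds: whichever limit is reached first triggers a bulk request
config.put("bulk.flush.max.actions", "1000"); // at most 1000 buffered actions
config.put("bulk.flush.max.size.mb", "5");    // ...or 5 MB of buffered data
config.put("bulk.flush.interval.ms", "2000"); // ...or every 2 seconds

// retry flushes that fail with a temporary EsRejectedExecutionException
config.put("bulk.flush.backoff.enable", "true");
config.put("bulk.flush.backoff.type", "EXPONENTIAL"); // or CONSTANT
config.put("bulk.flush.backoff.delay", "50");         // initial base delay (assumed ms)
config.put("bulk.flush.backoff.retries", "8");
```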

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -411,61 +389,44 @@ input.addSink(new ElasticsearchSink(
 {{< /tab >}}
 {{< /tabs >}}
 
-The above example will let the sink re-add requests that failed due to
-queue capacity saturation and drop requests with malformed documents, without
-failing the sink. For all other failures, the sink will fail. If a `ActionRequestFailureHandler`
-is not provided to the constructor, the sink will fail for any kind of error.
+上面的示例 sink 重新添加由于队列容量已满而失败的请求,同时丢弃文档格式错误的请求,而不会使 sink 失败。
+对于其它故障,sink 将会失败。如果未向构造器提供一个 `ActionRequestFailureHandler`,那么任何类型的错误都会导致 sink 失败。
 
-Note that `onFailure` is called for failures that still occur only after the
-`BulkProcessor` internally finishes all backoff retry attempts.
-By default, the `BulkProcessor` retries to a maximum of 8 attempts with
-an exponential backoff. For more information on the behaviour of the
-internal `BulkProcessor` and how to configure it, please see the following section.
+注意,`onFailure` 仅在 `BulkProcessor` 内部完成所有延迟重试后仍发生故障时被调用。
+默认情况下,`BulkProcessor` 最多重试 8 次,两次重试之间的等待时间呈指数增长。有关 `BulkProcessor` 内部行为以及如何配置它的更多信息,请参阅以下部分。
 
-By default, if a failure handler is not provided, the sink uses a
-`NoOpFailureHandler` that simply fails for all kinds of exceptions. The
-connector also provides a `RetryRejectedExecutionFailureHandler` implementation
-that always re-add requests that have failed due to queue capacity saturation.
+默认情况下,如果未提供失败处理程序,那么 sink 使用 `NoOpFailureHandler` 来简单处理所有的异常。
+连接器还提供了一个 `RetryRejectedExecutionFailureHandler` 实现,它总是重新添加由于队列容量已满导致失败的请求。
 
 <p style="border-radius: 5px; padding: 5px" class="bg-danger">
-<b>IMPORTANT</b>: Re-adding requests back to the internal <b>BulkProcessor</b>
-on failures will lead to longer checkpoints, as the sink will also
-need to wait for the re-added requests to be flushed when checkpointing.
-For example, when using <b>RetryRejectedExecutionFailureHandler</b>, checkpoints
-will need to wait until Elasticsearch node queues have enough capacity for
-all the pending requests. This also means that if re-added requests never
-succeed, the checkpoint will never finish.
+<b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时, sink 还需要等待重新添加的请求被刷新。

Review comment:
       ```suggestion
   <b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时,sink 还需要等待重新添加的请求被刷新。
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -396,13 +374,13 @@ input.addSink(new ElasticsearchSink(
                 RequestIndexer indexer) {
 
             if (ExceptionUtils.findThrowable(failure, EsRejectedExecutionException.class).isPresent()) {
-                // full queue; re-add document for indexing
+                 // 队列已满;重新添加文档进行索引

Review comment:
       ```suggestion
                   // 队列已满;重新添加文档进行索引
   ```

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -98,7 +92,7 @@ DataStream<String> input = ...;
 
 Map<String, String> config = new HashMap<>();
 config.put("cluster.name", "my-cluster-name");
-// This instructs the sink to emit after every element, otherwise they would be buffered
+// 这指示 sink 在接收每个元素之后立即提交,否则它们将被缓存

Review comment:
       Maybe you could translate it in a better way.
   `
   // 这指示 sink 在接收每个元素之后立即提交,否则它们将被缓存
   `
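
The per-element behavior the comment describes comes from a bulk-flush setting; a sketch using the `bulk.flush.max.actions` key listed later in this doc (the value 1 is what produces emit-per-element):

```java
Map<String, String> config = new HashMap<>();
config.put("cluster.name", "my-cluster-name");
// a buffer of at most 1 action makes the sink emit after every element,
// at the cost of one bulk request per record
config.put("bulk.flush.max.actions", "1");
```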

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -276,81 +270,65 @@ esSinkBuilder.setRestClientFactory(new RestClientFactory {
   }
 })
 
-// finally, build and add the sink to the job's pipeline
+// 最后,构建并添加 sink 到作业管道中
 input.addSink(esSinkBuilder.build)
 ```
 {{< /tab >}}
 {{< /tabs >}}
 
-For Elasticsearch versions that still uses the now deprecated `TransportClient` to communicate
-with the Elasticsearch cluster (i.e., versions equal or below 5.x), note how a `Map` of `String`s
-is used to configure the `ElasticsearchSink`. This config map will be directly
-forwarded when creating the internally used `TransportClient`.
-The configuration keys are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter that must correspond to
-the name of your cluster.
+对于仍然使用已被弃用的 `TransportClient` 和 Elasticsearch 集群通信的 Elasticsearch 版本 (即,小于或等于 5.x 的版本),
+请注意如何使用一个 `String` 类型的 `Map` 配置 `ElasticsearchSink`。在创建内部使用的 `TransportClient` 时将直接转发此配置映射。
+配置项参见[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)的 Elasticsearch 文档中。
+特别重要的是参数 `cluster.name` 必须和你的集群名称对应上。
 
-For Elasticsearch 6.x and above, internally, the `RestHighLevelClient` is used for cluster communication.
-By default, the connector uses the default configurations for the REST client. To have custom
-configuration for the REST client, users can provide a `RestClientFactory` implementation when
-setting up the `ElasticsearchClient.Builder` that builds the sink.
+对于 Elasticsearch 6.x 及以上版本,内部使用 `RestHighLevelClient` 和集群通信。
+默认情况下,连接器使用 REST 客户端的默认配置。
+如果要使用自定义配置的 REST 客户端,用户可以在设置构建 sink 的 `ElasticsearchClient.Builder` 时提供一个 `RestClientFactory` 的实现。
 
-Also note that the example only demonstrates performing a single index
-request for each incoming element. Generally, the `ElasticsearchSinkFunction`
-can be used to perform multiple requests of different types (ex.,
-`DeleteRequest`, `UpdateRequest`, etc.). 
+另外注意,该示例仅演示了对每个传入的元素执行单个索引请求。
+通常,`ElasticsearchSinkFunction` 可用于执行多个不同类型的请求(例如 `DeleteRequest`、 `UpdateRequest` 等)。
 
-Internally, each parallel instance of the Flink Elasticsearch Sink uses
-a `BulkProcessor` to send action requests to the cluster.
-This will buffer elements before sending them in bulk to the cluster. The `BulkProcessor`
-executes bulk requests one at a time, i.e. there will be no two concurrent
-flushes of the buffered actions in progress.
+在内部,Flink Elasticsearch Sink 的每个并行实例使用一个 `BulkProcessor` 向集群发送操作请求。
+这将使得元素在发送到集群之前进行批量缓存。

Review comment:
       ```suggestion
   这会在元素批量发送到集群之前进行缓存。
   ```
   Just a minor comment.
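
As an illustration of the `RestClientFactory` customization this hunk mentions, a minimal sketch (the header and path prefix are arbitrary examples; `configureRestClientBuilder` is the factory's single method, here written as a lambda):

```java
esSinkBuilder.setRestClientFactory(
    restClientBuilder -> {
        // any RestClientBuilder customization can go here, for example:
        restClientBuilder.setDefaultHeaders(
            new BasicHeader[] {new BasicHeader("Content-Type", "application/json")});
        restClientBuilder.setPathPrefix("/es"); // illustrative path prefix
    });
```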

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -276,81 +270,65 @@ esSinkBuilder.setRestClientFactory(new RestClientFactory {
   }
 })
 
-// finally, build and add the sink to the job's pipeline
+// 最后,构建并添加 sink 到作业管道中
 input.addSink(esSinkBuilder.build)
 ```
 {{< /tab >}}
 {{< /tabs >}}
 
-For Elasticsearch versions that still uses the now deprecated `TransportClient` to communicate
-with the Elasticsearch cluster (i.e., versions equal or below 5.x), note how a `Map` of `String`s
-is used to configure the `ElasticsearchSink`. This config map will be directly
-forwarded when creating the internally used `TransportClient`.
-The configuration keys are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter that must correspond to
-the name of your cluster.
+对于仍然使用已被弃用的 `TransportClient` 和 Elasticsearch 集群通信的 Elasticsearch 版本 (即,小于或等于 5.x 的版本),
+请注意如何使用一个 `String` 类型的 `Map` 配置 `ElasticsearchSink`。在创建内部使用的 `TransportClient` 时将直接转发此配置映射。
+配置项参见[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)的 Elasticsearch 文档中。
+特别重要的是参数 `cluster.name` 必须和你的集群名称对应上。

Review comment:
       ```suggestion
   特别重要的是参数 `cluster.name` 必须和你的集群名称对应。
   ```
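
To illustrate the "multiple request types" point from this hunk, a sketch of an `ElasticsearchSinkFunction` that issues updates and deletes (the tuple convention and index name are made up for the example; request classes as in the 7.x client):

```java
ElasticsearchSinkFunction<Tuple2<String, String>> upsertOrDelete =
    new ElasticsearchSinkFunction<Tuple2<String, String>>() {
        @Override
        public void process(Tuple2<String, String> element, RuntimeContext ctx, RequestIndexer indexer) {
            if (element.f1 == null) {
                // hypothetical convention: a null payload marks the id for deletion
                indexer.add(new DeleteRequest("my-index").id(element.f0));
            } else {
                // upsert the payload under the given id
                indexer.add(new UpdateRequest("my-index", element.f0)
                    .doc(Collections.singletonMap("data", element.f1))
                    .docAsUpsert(true));
            }
        }
    };
```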

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -166,10 +160,10 @@ ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<
     }
 );
 
-// configuration for the bulk requests; this instructs the sink to emit after every element, otherwise they would be buffered
+// 批量请求的配置;这指示 sink 在接收每个元素之后立即提交,否则它们将被缓存

Review comment:
       `指示`->`设置` ? 
   Free translation is easier to understand than literal translation. Only a minor comment.

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -51,29 +49,25 @@ of the Elasticsearch installation:
         <td>{{< artifact flink-connector-elasticsearch6 withScalaVersion >}}</td>
     </tr>
     <tr>
-        <td>7 and later versions</td>
+        <td>7 及更高版本</td>
         <td>{{< artifact flink-connector-elasticsearch7 withScalaVersion >}}</td>
     </tr>
   </tbody>
 </table>
 
-Note that the streaming connectors are currently not part of the binary
-distribution. See [here]({{< ref "docs/dev/datastream/project-configuration" >}}) for information
-about how to package the program with the libraries for cluster execution.
+请注意,流连接器目前不是二进制发行版的一部分。
+有关如何将程序和用于集群执行的库一起打包,参考[此处]({{< ref "docs/dev/datastream/project-configuration" >}})
 
-## Installing Elasticsearch
+## 安装 Elasticsearch
 
-Instructions for setting up an Elasticsearch cluster can be found
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
-Make sure to set and remember a cluster name. This must be set when
-creating an `ElasticsearchSink` for requesting document actions against your cluster.
+Elasticsearch 集群的设置可以参考[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html)。
+确保设置并记住集群名称。这是在创建 `ElasticsearchSink` 请求集群文档操作时必须要设置的。

Review comment:
       `确保`->`确认`?

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -276,81 +270,65 @@ esSinkBuilder.setRestClientFactory(new RestClientFactory {
   }
 })
 
-// finally, build and add the sink to the job's pipeline
+// 最后,构建并添加 sink 到作业管道中
 input.addSink(esSinkBuilder.build)
 ```
 {{< /tab >}}
 {{< /tabs >}}
 
-For Elasticsearch versions that still uses the now deprecated `TransportClient` to communicate
-with the Elasticsearch cluster (i.e., versions equal or below 5.x), note how a `Map` of `String`s
-is used to configure the `ElasticsearchSink`. This config map will be directly
-forwarded when creating the internally used `TransportClient`.
-The configuration keys are documented in the Elasticsearch documentation
-[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html).
-Especially important is the `cluster.name` parameter that must correspond to
-the name of your cluster.
+对于仍然使用已被弃用的 `TransportClient` 和 Elasticsearch 集群通信的 Elasticsearch 版本 (即,小于或等于 5.x 的版本),
+请注意如何使用一个 `String` 类型的 `Map` 配置 `ElasticsearchSink`。在创建内部使用的 `TransportClient` 时将直接转发此配置映射。

Review comment:
       Maybe we could translate it in a better way.
   

##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -411,61 +389,44 @@ input.addSink(new ElasticsearchSink(
 {{< /tab >}}
 {{< /tabs >}}
 
-The above example will let the sink re-add requests that failed due to
-queue capacity saturation and drop requests with malformed documents, without
-failing the sink. For all other failures, the sink will fail. If a `ActionRequestFailureHandler`
-is not provided to the constructor, the sink will fail for any kind of error.
+上面的示例 sink 重新添加由于队列容量已满而失败的请求,同时丢弃文档格式错误的请求,而不会使 sink 失败。
+对于其它故障,sink 将会失败。如果未向构造器提供一个 `ActionRequestFailureHandler`,那么任何类型的错误都会导致 sink 失败。
 
-Note that `onFailure` is called for failures that still occur only after the
-`BulkProcessor` internally finishes all backoff retry attempts.
-By default, the `BulkProcessor` retries to a maximum of 8 attempts with
-an exponential backoff. For more information on the behaviour of the
-internal `BulkProcessor` and how to configure it, please see the following section.
+注意,`onFailure` 仅在 `BulkProcessor` 内部完成所有延迟重试后仍发生故障时被调用。
+默认情况下,`BulkProcessor` 最多重试 8 次,两次重试之间的等待时间呈指数增长。有关 `BulkProcessor` 内部行为以及如何配置它的更多信息,请参阅以下部分。
 
-By default, if a failure handler is not provided, the sink uses a
-`NoOpFailureHandler` that simply fails for all kinds of exceptions. The
-connector also provides a `RetryRejectedExecutionFailureHandler` implementation
-that always re-add requests that have failed due to queue capacity saturation.
+默认情况下,如果未提供失败处理程序,那么 sink 使用 `NoOpFailureHandler` 来简单处理所有的异常。
+连接器还提供了一个 `RetryRejectedExecutionFailureHandler` 实现,它总是重新添加由于队列容量已满导致失败的请求。
 
 <p style="border-radius: 5px; padding: 5px" class="bg-danger">
-<b>IMPORTANT</b>: Re-adding requests back to the internal <b>BulkProcessor</b>
-on failures will lead to longer checkpoints, as the sink will also
-need to wait for the re-added requests to be flushed when checkpointing.
-For example, when using <b>RetryRejectedExecutionFailureHandler</b>, checkpoints
-will need to wait until Elasticsearch node queues have enough capacity for
-all the pending requests. This also means that if re-added requests never
-succeed, the checkpoint will never finish.
+<b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时, sink 还需要等待重新添加的请求被刷新。
+例如,当使用 <b>RetryRejectedExecutionFailureHandler</b> 时,
+checkpoint 需要等到 Elasticsearch 节点队列有足够的容量来处理所有挂起的请求。
+这也就意味着如果重新添加的请求永远不成功,checkpoint 也将永远不会完成。
 </p>
 
-### Configuring the Internal Bulk Processor
+### 配置内部批量处理器
 
-The internal `BulkProcessor` can be further configured for its behaviour
-on how buffered action requests are flushed, by setting the following values in
-the provided `Map<String, String>`:
+通过在提供的 `Map<String, String>` 中设置以下值,内部 `BulkProcessor` 可以进一步配置其如何刷新缓存操作请求的行为:
 
- * **bulk.flush.max.actions**: Maximum amount of actions to buffer before flushing.
- * **bulk.flush.max.size.mb**: Maximum size of data (in megabytes) to buffer before flushing.
- * **bulk.flush.interval.ms**: Interval at which to flush regardless of the amount or size of buffered actions.
- 
-For versions 2.x and above, configuring how temporary request errors are
-retried is also supported:
- 
- * **bulk.flush.backoff.enable**: Whether or not to perform retries with backoff delay for a flush
- if one or more of its actions failed due to a temporary `EsRejectedExecutionException`.
- * **bulk.flush.backoff.type**: The type of backoff delay, either `CONSTANT` or `EXPONENTIAL`
- * **bulk.flush.backoff.delay**: The amount of delay for backoff. For constant backoff, this
- is simply the delay between each retry. For exponential backoff, this is the initial base delay.
- * **bulk.flush.backoff.retries**: The amount of backoff retries to attempt.
+ * **bulk.flush.max.actions**:刷新前最大缓存的操作数。
+ * **bulk.flush.max.size.mb**:刷新前最大缓存的数据量(以兆字节为单位)。
+ * **bulk.flush.interval.ms**:刷新的时间间隔(不论缓存操作的数量或大小如何)。
 
-More information about Elasticsearch can be found [here](https://elastic.co).
+对于 2.x 及以上版本,还支持配置如何重试临时请求错误:
 
-## Packaging the Elasticsearch Connector into an Uber-Jar
+ * **bulk.flush.backoff.enable**:如果一个或多个请求由于临时的 `EsRejectedExecutionException` 而失败,是否为刷新执行带有延迟的重试操作。
+ * **bulk.flush.backoff.type**:延迟重试的类型,`CONSTANT` 或者 `EXPONENTIAL`。
+ * **bulk.flush.backoff.delay**:延迟重试的时间间隔。对于常量延迟来说,此值是每次重试间的间隔。对于指数延迟来说,此值是延迟的初始值。
+ * **bulk.flush.backoff.retries**:延迟重试次数。
 
-For the execution of your Flink program, it is recommended to build a
-so-called uber-jar (executable jar) containing all your dependencies
-(see [here]({{< ref "docs/dev/datastream/project-configuration" >}}) for further information).
+可以在[此处](https://elastic.co)找到 Elasticsearch 的更多信息。
 
-Alternatively, you can put the connector's jar file into Flink's `lib/` folder to make it available
-system-wide, i.e. for all job being run.
+## 将 Elasticsearch 连接器打包到 Uber-Jar 中
+
+为了执行你的 Flink 程序,建议构建一个 uber-jar (可执行的 jar),其中包含了你所有的依赖

Review comment:
       ```suggestion
   建议构建一个包含程序所有依赖的 uber-jar (可执行的 jar),以便更好地执行你的 Flink 程序。
   ```
   Only a minor suggestion.



