huxixiang commented on a change in pull request #16547:
URL: https://github.com/apache/flink/pull/16547#discussion_r680777787



##########
File path: docs/content.zh/docs/connectors/datastream/elasticsearch.md
##########
@@ -411,61 +389,44 @@ input.addSink(new ElasticsearchSink(
 {{< /tab >}}
 {{< /tabs >}}
 
-The above example will let the sink re-add requests that failed due to
-queue capacity saturation and drop requests with malformed documents, without
-failing the sink. For all other failures, the sink will fail. If a `ActionRequestFailureHandler`
-is not provided to the constructor, the sink will fail for any kind of error.
+上面的示例中,sink 会重新添加由于队列容量已满而失败的请求,同时丢弃文档格式错误的请求,而不会使 sink 失败。
+对于其它故障,sink 将会失败。如果未向构造器提供 `ActionRequestFailureHandler`,那么任何类型的错误都会导致 sink 失败。
 
-Note that `onFailure` is called for failures that still occur only after the
-`BulkProcessor` internally finishes all backoff retry attempts.
-By default, the `BulkProcessor` retries to a maximum of 8 attempts with
-an exponential backoff. For more information on the behaviour of the
-internal `BulkProcessor` and how to configure it, please see the following section.
+注意,`onFailure` 仅在 `BulkProcessor` 内部完成所有延迟重试后仍发生故障时才会被调用。
+默认情况下,`BulkProcessor` 最多重试 8 次,两次重试之间的等待时间呈指数增长。有关 `BulkProcessor` 内部行为以及如何配置它的更多信息,请参阅以下部分。
 
-By default, if a failure handler is not provided, the sink uses a
-`NoOpFailureHandler` that simply fails for all kinds of exceptions. The
-connector also provides a `RetryRejectedExecutionFailureHandler` implementation
-that always re-add requests that have failed due to queue capacity saturation.
+默认情况下,如果未提供失败处理程序,那么 sink 会使用 `NoOpFailureHandler`,它对任何类型的异常都会直接使 sink 失败。
+连接器还提供了一个 `RetryRejectedExecutionFailureHandler` 实现,它总是重新添加由于队列容量已满导致失败的请求。
 
 <p style="border-radius: 5px; padding: 5px" class="bg-danger">
-<b>IMPORTANT</b>: Re-adding requests back to the internal <b>BulkProcessor</b>
-on failures will lead to longer checkpoints, as the sink will also
-need to wait for the re-added requests to be flushed when checkpointing.
-For example, when using <b>RetryRejectedExecutionFailureHandler</b>, checkpoints
-will need to wait until Elasticsearch node queues have enough capacity for
-all the pending requests. This also means that if re-added requests never
-succeed, the checkpoint will never finish.
+<b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时,sink 还需要等待重新添加的请求被刷新。
+例如,当使用 <b>RetryRejectedExecutionFailureHandler</b> 时,checkpoint 需要等到 Elasticsearch 节点队列有足够的容量来处理所有挂起的请求。
+这也就意味着如果重新添加的请求永远不成功,checkpoint 也将永远不会完成。
 </p>
 
-### Configuring the Internal Bulk Processor
+### 配置内部批量处理器
 
-The internal `BulkProcessor` can be further configured for its behaviour
-on how buffered action requests are flushed, by setting the following values in
-the provided `Map<String, String>`:
+通过在提供的 `Map<String, String>` 中设置以下值,内部 `BulkProcessor` 可以进一步配置其如何刷新缓存操作请求的行为:
 
- * **bulk.flush.max.actions**: Maximum amount of actions to buffer before flushing.
- * **bulk.flush.max.size.mb**: Maximum size of data (in megabytes) to buffer before flushing.
- * **bulk.flush.interval.ms**: Interval at which to flush regardless of the amount or size of buffered actions.
- 
-For versions 2.x and above, configuring how temporary request errors are
-retried is also supported:
- 
- * **bulk.flush.backoff.enable**: Whether or not to perform retries with backoff delay for a flush
- if one or more of its actions failed due to a temporary `EsRejectedExecutionException`.
- * **bulk.flush.backoff.type**: The type of backoff delay, either `CONSTANT` or `EXPONENTIAL`
- * **bulk.flush.backoff.delay**: The amount of delay for backoff. For constant backoff, this
- is simply the delay between each retry. For exponential backoff, this is the initial base delay.
- * **bulk.flush.backoff.retries**: The amount of backoff retries to attempt.
+ * **bulk.flush.max.actions**:刷新前最大缓存的操作数。
+ * **bulk.flush.max.size.mb**:刷新前最大缓存的数据量(以兆字节为单位)。
+ * **bulk.flush.interval.ms**:刷新的时间间隔(不论缓存操作的数量或大小如何)。
 
-More information about Elasticsearch can be found [here](https://elastic.co).
+对于 2.x 及以上版本,还支持配置如何重试临时请求错误:
 
-## Packaging the Elasticsearch Connector into an Uber-Jar
+ * **bulk.flush.backoff.enable**:如果一个或多个请求由于临时的 `EsRejectedExecutionException` 而失败,是否为刷新执行带有延迟的重试操作。
+ * **bulk.flush.backoff.type**:延迟重试的类型,`CONSTANT` 或者 `EXPONENTIAL`。
+ * **bulk.flush.backoff.delay**:延迟重试的时间间隔。对于常量延迟来说,此值是每次重试间的间隔。对于指数延迟来说,此值是延迟的初始值。
+ * **bulk.flush.backoff.retries**:延迟重试次数。
 
-For the execution of your Flink program, it is recommended to build a
-so-called uber-jar (executable jar) containing all your dependencies
-(see [here]({{< ref "docs/dev/datastream/project-configuration" >}}) for further information).
+可以在[此处](https://elastic.co)找到 Elasticsearch 的更多信息。
 
-Alternatively, you can put the connector's jar file into Flink's `lib/` folder to make it available
-system-wide, i.e. for all job being run.
+## 将 Elasticsearch 连接器打包到 Uber-Jar 中
+
+为了执行你的 Flink 程序,建议构建一个 uber-jar (可执行的 jar),其中包含了你所有的依赖
+(更多信息参见[此处]({{< ref "docs/dev/datastream/project-configuration" >}}))。
+
+或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在全局范围内可用,即用于所有的作业。
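For reference, the `Map<String, String>` of bulk flush settings described in the hunk above can be sketched in plain Java. This is a minimal sketch: the key names come from the documentation, while the concrete values (and the `BulkFlushConfig` class name) are illustrative choices made here, not defaults mandated by the connector.

```java
import java.util.HashMap;
import java.util.Map;

public class BulkFlushConfig {

    // Build the Map<String, String> that configures the internal BulkProcessor.
    // Values below are illustrative, not connector defaults.
    public static Map<String, String> bulkFlushConfig() {
        Map<String, String> config = new HashMap<>();
        config.put("bulk.flush.max.actions", "1000");  // flush after 1000 buffered actions
        config.put("bulk.flush.max.size.mb", "5");     // or after 5 MB of buffered data
        config.put("bulk.flush.interval.ms", "60000"); // or at least once per minute
        // Backoff retries for temporary EsRejectedExecutionException (2.x and above):
        config.put("bulk.flush.backoff.enable", "true");
        config.put("bulk.flush.backoff.type", "EXPONENTIAL");
        config.put("bulk.flush.backoff.delay", "50");  // initial base delay in ms
        config.put("bulk.flush.backoff.retries", "8");
        return config;
    }

    public static void main(String[] args) {
        Map<String, String> config = bulkFlushConfig();
        System.out.println(config.get("bulk.flush.backoff.type")); // prints EXPONENTIAL
    }
}
```

Such a map would typically be passed to the sink builder's configuration hook before building the sink.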

Review comment:
       Accepted, thanks.



