XComp commented on a change in pull request #18910:
URL: https://github.com/apache/flink/pull/18910#discussion_r815901156
##########
File path: docs/content.zh/docs/deployment/overview.md
##########
@@ -152,7 +152,14 @@ When deploying Flink, there are often multiple options available for each building block.
</tbody>
</table>
-
+### Repeatable Resource Cleanup Strategy
+
+Once a job has reached a globally terminal state of either finished, failed or cancelled, the
+external component resources associated with the job are then cleaned up. In the event of a
+failure when cleaning up a resource, Flink will attempt to retry the cleanup based on
+a repeatable retry strategy. You can [configure]({{< ref "docs/deployment/config" >}}) this
+retry strategy to change parameters such as the number of retries, the delay between
+retries and whether retries follow a fixed-delay or an exponential-delay strategy.
Review comment:
```suggestion
retry strategy.
```
IMHO, this is redundant information. The provided link should be enough to
point to the configuration options. The risk of redundant documentation is that
we might miss updating it here when changing some behavior.
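For illustration only, a minimal sketch of the kind of cleanup-retry configuration the documented paragraph above talks about, expressed against Flink's `Configuration` API. The option keys and values below are assumptions for the sake of the example; the configuration page linked in the diff is the authoritative reference for the real names.
```java
import org.apache.flink.configuration.Configuration;

public class CleanupRetryConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Assumed option keys (illustrative only): choose the retry strategy and
        // tune its delay behaviour. Check the linked configuration page for the
        // exact key names and defaults.
        conf.setString("cleanup-strategy", "exponential-delay");
        conf.setString("cleanup-strategy.exponential-delay.initial-backoff", "1 s");
        conf.setString("cleanup-strategy.exponential-delay.max-backoff", "1 h");

        // In practice these keys would typically live in flink-conf.yaml rather
        // than being set programmatically; this just shows the shape of the options.
        System.out.println(conf);
    }
}
```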
##########
File path: docs/content.zh/docs/deployment/ha/overview.md
##########
@@ -73,3 +73,17 @@ Flink provides two implementations of high availability services:
In order to recover submitted jobs, Flink persists metadata and the job artifacts. The high availability data is kept until the corresponding job either finishes successfully, is cancelled, or ultimately fails; once that happens, all high availability data, including the metadata stored in the high availability services, is deleted.
{{< top >}}
+
+## JobResultStore
+
+In order to preserve a job's scheduling status across failover events and prevent erroneous
+re-execution of globally terminated (i.e. finished, cancelled or failed) jobs, Flink persists
+the status of terminated jobs to a filesystem using the JobResultStore.
+The JobResultStore allows job results to outlive a finished job, and can be used by
+Flink components involved in the recovery of a highly-available cluster in order to
+determine whether a job should be subject to recovery.
+
+The JobResultStore has sensible defaults for its behaviour, such as the result storage
+location, but these can be [configured]({{< ref "docs/deployment/config" >}}).
Review comment:
Also here, we might want to refer to the specific section:
`docs/deployment/config/#high-availability`
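As a companion to the JobResultStore paragraph quoted above, a minimal sketch of overriding its default storage location via Flink's `Configuration`. The `job-result-store.storage-path` key and the example path are assumptions for illustration; the high-availability configuration section is the authoritative reference.
```java
import org.apache.flink.configuration.Configuration;

public class JobResultStoreConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Assumed key (illustrative only): the filesystem location where the
        // results of globally terminated jobs are persisted. It should point to
        // storage reachable by all HA-relevant processes of the cluster.
        conf.setString("job-result-store.storage-path", "s3://my-bucket/flink/job-result-store");

        System.out.println(conf);
    }
}
```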
##########
File path: docs/content.zh/docs/deployment/overview.md
##########
@@ -152,7 +152,14 @@ When deploying Flink, there are often multiple options available for each building block.
</tbody>
</table>
-
+### Repeatable Resource Cleanup Strategy
+
+Once a job has reached a globally terminal state of either finished, failed or cancelled, the
+external component resources associated with the job are then cleaned up. In the event of a
+failure when cleaning up a resource, Flink will attempt to retry the cleanup based on
+a repeatable retry strategy. You can [configure]({{< ref "docs/deployment/config" >}}) this
Review comment:
```suggestion
a repeatable retry strategy. You can [configure]({{< ref "[docs/deployment/config](/docs/deployment/config/#retryable-cleanup" >}}) this
```
Not sure whether it works as is but we should use the anchor for Retryable
Cleanup `#retryable-cleanup` from PR #18913 here.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]