[
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17124607#comment-17124607
]
alex zhu commented on FLINK-11909:
----------------------------------
I believe most Flink users (myself included) focus on business logic or data
handling rather than low-level Java programming, so a built-in retry mechanism
would be a great usability improvement. Retrying each record individually is
enough; it covers common scenarios such as a service or database restarting.
A rough sketch of what such a per-record retry could look like today is below.
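By way of illustration only (not the requested built-in feature): a wrapper RichAsyncFunction that retries each record with a fixed delay up to a maximum number of attempts before falling back to today's fail-the-operator behavior. The async client call, queryClient, is a placeholder for whatever non-blocking lookup the user already has; all names here are hypothetical.

{code:java}
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

/**
 * Hypothetical per-record retry wrapper: re-issues the async lookup with a
 * fixed delay up to maxRetries times before failing the record (which, as of
 * today, fails the operator).
 */
public class FixedIntervalRetryFunction extends RichAsyncFunction<String, String> {

    private final int maxRetries;
    private final long retryDelayMs;
    private transient ScheduledExecutorService retryScheduler;

    public FixedIntervalRetryFunction(int maxRetries, long retryDelayMs) {
        this.maxRetries = maxRetries;
        this.retryDelayMs = retryDelayMs;
    }

    @Override
    public void open(Configuration parameters) {
        retryScheduler = Executors.newSingleThreadScheduledExecutor();
    }

    @Override
    public void close() {
        if (retryScheduler != null) {
            retryScheduler.shutdownNow();
        }
    }

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        attempt(key, resultFuture, 0);
    }

    private void attempt(String key, ResultFuture<String> resultFuture, int retriesSoFar) {
        queryClient(key).whenComplete((value, error) -> {
            if (error == null) {
                resultFuture.complete(Collections.singleton(value));
            } else if (retriesSoFar < maxRetries) {
                // Fixed-interval retry: schedule the next attempt after a constant delay.
                retryScheduler.schedule(
                        () -> attempt(key, resultFuture, retriesSoFar + 1),
                        retryDelayMs, TimeUnit.MILLISECONDS);
            } else {
                // Retries exhausted: fall back to the current fail-the-operator behavior.
                resultFuture.completeExceptionally(error);
            }
        });
    }

    /** Placeholder for the user's real non-blocking client call. */
    private CompletableFuture<String> queryClient(String key) {
        return CompletableFuture.completedFuture(key);
    }
}
{code}

Replacing retryDelayMs with a delay that doubles per attempt (capped at some maximum) would give the exponential-backoff variant discussed below, but a built-in, configurable policy would still save every user from re-implementing this.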
> Provide default failure/timeout/backoff handling strategy for AsyncIO
> functions
> -------------------------------------------------------------------------------
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
> Issue Type: Improvement
> Components: API / DataStream
> Reporter: Rong Rong
> Assignee: Rong Rong
> Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async
> function invocation fails [1]. It would be nice to have some default Async IO
> failure/timeout handling strategies, or to open up APIs that let the
> AsyncFunction timeout method interact with the AsyncWaitOperator. For example
> (quoting [~suez1224] in [2]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> The discussion also extended to introducing configuration options such as:
> * MAX_RETRY_COUNT
> * RETRY_FAILURE_POLICY
> REF:
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
> [2]
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Backoff-strategies-for-async-IO-functions-tt26580.html
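For reference, a minimal illustration of how the wait before retry attempt n could be computed under the two proposed retry policies quoted above, with MAX_RETRY_COUNT bounding the number of attempts. The class and method names are made up for illustration; this is not an existing Flink API.

{code:java}
/** Hypothetical helper computing the delay before retry attempt n. */
public final class RetryDelays {

    private RetryDelays() {}

    // FIX_INTERVAL_RETRY: every attempt waits the same configured interval.
    public static long fixedIntervalMs(long intervalMs, int attempt) {
        return intervalMs;
    }

    // EXP_BACKOFF_RETRY: the wait doubles with each attempt (baseMs * 2^attempt),
    // capped at maxMs so late attempts do not wait unboundedly long.
    public static long exponentialBackoffMs(long baseMs, long maxMs, int attempt) {
        long delay = baseMs << Math.min(attempt, 30); // bound the shift to avoid overflow
        return Math.min(delay, maxMs);
    }
}
{code}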