[
https://issues.apache.org/jira/browse/FLINK-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852289#comment-17852289
]
Yun Tang commented on FLINK-9043:
---------------------------------
[~mszacillo] Six years after this discussion was started, I think the current
[upgrade
mode|https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/concepts/controller-flow/#upgrademode-and-suspendcancel-behaviour]
in flink-kubernetes-operator is probably the better way to resume from a
previous checkpoint/savepoint. Letting an outer management system, rather than
Flink itself, decide how to upgrade may be the right choice.
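For context, the operator approach mentioned above configures resume behaviour per job in the FlinkDeployment custom resource. A minimal illustrative fragment (the deployment name and jar URI are placeholders, not from this issue):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-job
spec:
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    # "savepoint" makes the operator take a savepoint before an upgrade
    # and restore the new job from it; "last-state" resumes from the
    # latest checkpoint instead.
    upgradeMode: savepoint
```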
> Introduce a friendly way to resume the job from externalized checkpoints
> automatically
> --------------------------------------------------------------------------------------
>
> Key: FLINK-9043
> URL: https://issues.apache.org/jira/browse/FLINK-9043
> Project: Flink
> Issue Type: New Feature
> Components: Runtime / Checkpointing
> Reporter: Godfrey He
> Priority: Not a Priority
> Labels: auto-deprioritized-major, auto-deprioritized-minor,
> pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> I know a Flink job can recover from a checkpoint via the restart strategy,
> but it cannot recover at startup the way Spark Streaming jobs can.
> Every submitted Flink job is treated as a new job, whereas a Spark Streaming
> job first detects the checkpoint directory and then recovers from the latest
> successful checkpoint. Flink, by contrast, can only recover after the job has
> failed and is retried under the restart strategy.
>
> So, could Flink support recovering directly from a checkpoint when starting a new job?
> h2. New description by [~sihuazhou]
> Currently, recovering a job from an externalized checkpoint is not very
> user-friendly: the user has to find the dedicated directory for the job,
> which is not easy when there are many jobs. This ticket intends to
> introduce a friendlier way to let users recover from externalized
> checkpoints.
> The implementation steps are copied from the comments of [~StephanEwen]:
> - We could make this an option where you pass a flag (-r) to automatically
> look for the latest checkpoint in a given directory.
> - If more than one job checkpointed there before, this operation would fail.
> - We might also need a way to have jobs not create the UUID subdirectory,
> otherwise the scanning for the latest checkpoint would not easily work.
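The "look for the latest checkpoint in a given directory" step above can be sketched with plain JDK file APIs. This is a minimal illustration, not Flink's actual implementation; it assumes the standard filesystem checkpoint layout where each completed checkpoint lives in a `chk-<N>` subdirectory containing a `_metadata` file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

public class LatestCheckpointFinder {

    // Scans a job's checkpoint directory for subdirectories named "chk-<N>"
    // that contain a "_metadata" file (i.e. completed checkpoints) and
    // returns the one with the highest checkpoint id N, if any.
    static Optional<Path> findLatest(Path checkpointDir) throws IOException {
        try (Stream<Path> entries = Files.list(checkpointDir)) {
            return entries
                .filter(Files::isDirectory)
                .filter(p -> p.getFileName().toString().startsWith("chk-"))
                // Skip incomplete checkpoints that have no _metadata file.
                .filter(p -> Files.exists(p.resolve("_metadata")))
                .max(Comparator.comparingLong((Path p) ->
                    Long.parseLong(p.getFileName().toString().substring(4))));
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a fake checkpoint layout in a temp dir for demonstration.
        Path root = Files.createTempDirectory("ckpts");
        for (long n : new long[] {3, 17, 9}) {
            Path chk = Files.createDirectory(root.resolve("chk-" + n));
            Files.createFile(chk.resolve("_metadata"));
        }
        // chk-21 exists but is incomplete (no _metadata), so it is skipped.
        Files.createDirectory(root.resolve("chk-21"));

        System.out.println(findLatest(root).get().getFileName()); // prints chk-17
    }
}
```

Note how the UUID-subdirectory concern in the last bullet shows up here: this scan only works if the `chk-<N>` directories of exactly one job live directly under the given directory.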
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)