[
https://issues.apache.org/jira/browse/FLINK-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16596983#comment-16596983
]
ASF GitHub Bot commented on FLINK-9043:
---------------------------------------
GodfreyJohnson opened a new pull request #6633: [FLINK-9043] restore from the
latest job's completed checkpoint for h…
URL: https://github.com/apache/flink/pull/6633
- For
[FLINK-9043](https://issues.apache.org/jira/browse/FLINK-9043?filter=-6&jql=project%20%3D%20FLINK%20AND%20created%20%3E%3D%20-1w%20order%20by%20created%20DESC)
## What is the purpose of the change
What we aim to do is to recover automatically from the HDFS path using the
job's latest completed checkpoint. Currently, we can use 'run -s' with the
metadata path manually, which is easy for a single Flink job to recover. But we
manage a lot of Flink jobs, and we want each job to recover from the latest
completed checkpoint, just like Spark Streaming does with its getOrCreate
method, without losing records.
- Each Flink job has its own HDFS checkpoint path
- Only supported for HDFS (hdfs:// or viewfs://)
- Supports RocksDBStateBackend and FsStateBackend
- Supports both legacy mode and the new mode (dynamic scaling)
## Brief change log
- add HDFS utils to get the metadata path of the job's latest completed checkpoint
- recover from that metadata path in both legacy mode and the new mode
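The PR's actual HDFS utility is not shown in this message. Purely as an illustration of the idea, here is a minimal local-filesystem sketch: the class and method names are hypothetical, and a real implementation would use Hadoop's FileSystem API against hdfs:// or viewfs:// paths rather than java.nio.file. It relies on the fact that Flink writes each completed externalized checkpoint into a `chk-<n>` subdirectory containing a `_metadata` file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

public class LatestCheckpointFinder {

    // Scans a job's checkpoint directory for "chk-<n>" subdirectories that
    // contain a "_metadata" file (i.e. completed checkpoints) and returns the
    // metadata path with the highest checkpoint counter, if any.
    static Optional<Path> latestMetadata(Path jobCheckpointDir) throws IOException {
        try (Stream<Path> children = Files.list(jobCheckpointDir)) {
            return children
                    .filter(p -> p.getFileName().toString().matches("chk-\\d+"))
                    .filter(p -> Files.exists(p.resolve("_metadata")))
                    .max(Comparator.comparingLong(
                            p -> Long.parseLong(p.getFileName().toString().substring(4))))
                    .map(p -> p.resolve("_metadata"));
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo with a fake local layout: chk-2 and chk-7 completed, chk-10 in flight.
        Path dir = Files.createTempDirectory("ckpts");
        for (String name : new String[] {"chk-2", "chk-7"}) {
            Files.createDirectories(dir.resolve(name));
            Files.createFile(dir.resolve(name).resolve("_metadata"));
        }
        Files.createDirectories(dir.resolve("chk-10")); // no _metadata yet
        System.out.println(latestMetadata(dir).get().getParent().getFileName());
        // prints "chk-7": the latest *completed* checkpoint, not the in-flight chk-10
    }
}
```

Skipping `chk-` directories without `_metadata` is what guarantees the job never resumes from an in-progress or partially written checkpoint.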
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (don't know)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
- The S3 file system connector: (no)
## Documentation
- Does this pull request introduce a new feature? (yes)
- If yes, how is the feature documented? (not documented)
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Introduce a friendly way to resume the job from externalized checkpoints
> automatically
> --------------------------------------------------------------------------------------
>
> Key: FLINK-9043
> URL: https://issues.apache.org/jira/browse/FLINK-9043
> Project: Flink
> Issue Type: New Feature
> Reporter: godfrey johnson
> Assignee: Sihua Zhou
> Priority: Major
> Labels: pull-request-available
>
> I know a Flink job can recover from a checkpoint with a restart strategy, but it
> cannot recover the way Spark Streaming jobs do at startup.
> Every time, the submitted Flink job is regarded as a new job, whereas a Spark
> Streaming job can detect the checkpoint directory first and then recover from
> the latest successful checkpoint. Flink can only recover after the job has
> failed first, and then retries according to the restart strategy.
>
> So, would Flink support recovering directly from the checkpoint when a new job starts?
> h2. New description by [~sihuazhou]
> Currently, it is not very friendly for users to recover a job from an
> externalized checkpoint: the user needs to find the dedicated directory for the
> job, which is not easy when there are many jobs. This ticket intends to
> introduce a friendlier way to allow the user to use the externalized
> checkpoint to do recovery.
> The implementation steps are copied from the comments of [~StephanEwen]:
> - We could make this an option where you pass a flag (-r) to automatically
> look for the latest checkpoint in a given directory.
> - If more than one job checkpointed there before, this operation would fail.
> - We might also need a way to have jobs not create the UUID subdirectory,
> otherwise scanning for the latest checkpoint would not easily work.
>
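The "fail if more than one job checkpointed there" step from the quoted comments can be sketched similarly. This is a hypothetical local-filesystem illustration, not the PR's code; it assumes each job's checkpoints live under the shared checkpoint root in a subdirectory named after its 32-character hex JobID:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SingleJobGuard {

    // Returns the single job subdirectory under the shared checkpoint root,
    // or throws when zero or several jobs have checkpointed there: the
    // ambiguous case in which automatic recovery should refuse to guess.
    static Path singleJobDir(Path checkpointRoot) throws IOException {
        try (Stream<Path> children = Files.list(checkpointRoot)) {
            List<Path> jobDirs = children
                    .filter(Files::isDirectory)
                    .filter(p -> p.getFileName().toString().matches("[0-9a-f]{32}"))
                    .collect(Collectors.toList());
            if (jobDirs.size() != 1) {
                throw new IllegalStateException(
                        "expected exactly 1 job directory, found " + jobDirs.size());
            }
            return jobDirs.get(0);
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("root");
        Files.createDirectories(root.resolve("0123456789abcdef0123456789abcdef"));
        System.out.println(singleJobDir(root).getFileName()); // one job: unambiguous
        Files.createDirectories(root.resolve("fedcba9876543210fedcba9876543210"));
        try {
            singleJobDir(root);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // two jobs: refuse to pick one
        }
    }
}
```

Failing loudly here is the safer design: silently picking one of several jobs' checkpoints could restore the wrong state without any visible error.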
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)