- For [FLINK-9043](https://issues.apache.org/jira/browse/FLINK-9043)

  ## What is the purpose of the change

  We aim to recover a job automatically from its HDFS checkpoint path, using 
the job's latest completed checkpoint. Currently we can pass the metadata path 
to `run -s` manually, which is easy when recovering a single Flink job. But we 
manage many Flink jobs, and we want each of them to recover from its latest 
completed checkpoint automatically, much like Spark Streaming's `getOrCreate`, 
without losing records.
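  For reference, the manual recovery this change automates looks like the 
following (the checkpoint path and jar name below are hypothetical):

  ```shell
  # Today: the operator locates the latest completed checkpoint by hand
  # and passes its metadata path to the CLI via -s.
  flink run -s hdfs:///flink/checkpoints/<job-id>/chk-42/_metadata \
      -d my-streaming-job.jar
  ```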

  - Each Flink job has its own HDFS checkpoint path
  - Only HDFS is supported (hdfs:// or viewfs://)
  - Supports RocksDBStateBackend and FsStateBackend
  - Supports both legacy mode and the new mode (dynamic scaling)
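  The core of the discovery step can be sketched as follows. This is a hedged 
sketch, not the PR's actual code: Flink writes each completed checkpoint into a 
`chk-<id>` subdirectory containing a `_metadata` file, so the latest completed 
checkpoint is the `chk-<id>` directory with the highest id that has a 
`_metadata` file. The real utility would go through Hadoop's `FileSystem` API 
to handle `hdfs://` and `viewfs://`; `java.nio.file` is used here only to keep 
the sketch self-contained, and the class name is invented.

  ```java
  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.Comparator;
  import java.util.Optional;
  import java.util.stream.Stream;

  public class LatestCheckpointFinder {

      /** Returns the _metadata path of the newest completed chk-N directory, if any. */
      public static Optional<Path> latestMetadata(Path jobCheckpointDir) throws IOException {
          try (Stream<Path> entries = Files.list(jobCheckpointDir)) {
              return entries
                  // only checkpoint directories
                  .filter(p -> p.getFileName().toString().startsWith("chk-"))
                  // only completed checkpoints (in-progress ones have no _metadata yet)
                  .filter(p -> Files.exists(p.resolve("_metadata")))
                  // pick the highest checkpoint id, compared numerically
                  .max(Comparator.comparingLong(
                      p -> Long.parseLong(p.getFileName().toString().substring(4))))
                  .map(p -> p.resolve("_metadata"));
          }
      }

      public static void main(String[] args) throws IOException {
          // Demo layout: chk-2 and chk-12 are complete, chk-3 is still in progress.
          Path root = Files.createTempDirectory("ckpts");
          for (long id : new long[] {2, 12}) {
              Path d = Files.createDirectories(root.resolve("chk-" + id));
              Files.createFile(d.resolve("_metadata"));
          }
          Files.createDirectories(root.resolve("chk-3")); // no _metadata

          System.out.println(latestMetadata(root)
              .map(p -> p.getParent().getFileName().toString())
              .orElse("none"));
          // prints "chk-12"
      }
  }
  ```

  Comparing ids numerically (rather than lexicographically) matters once a job 
passes checkpoint 10, since `chk-9` sorts after `chk-12` as a string.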

  ## Brief change log

  - add an HDFS utility to find a job's latest completed checkpoint metadata path
  - recover from that metadata path in both legacy mode and the new mode

  ## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
  - The S3 file system connector: (no)

  ## Documentation

  - Does this pull request introduce a new feature? (yes)
  - If yes, how is the feature documented? (not documented)

[ Full content available at: https://github.com/apache/flink/pull/6633 ]