[
https://issues.apache.org/jira/browse/FLINK-27274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523499#comment-17523499
]
Zhu Zhu commented on FLINK-27274:
---------------------------------
Sorry but I do not quite understand the problem.
By design, if a cluster is shut down normally (e.g. via stop-cluster.sh), all
data will be cleaned up and no job will be recovered when a new Flink cluster
is launched later.
Only if the master node exited abnormally (e.g. it crashed due to a fatal error
or was killed by an external system) may the master node restart and recover
from a previous state (provided the HA environment is properly set up).
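(Purely for illustration, and not a statement about this cluster's actual configuration: a minimal ZooKeeper-based HA section in flink-conf.yaml might look like the sketch below, where the quorum hosts, storage directory, and cluster id are placeholders.)
# Sketch of a ZooKeeper-based HA setup in flink-conf.yaml
# (hostnames, storage directory, and cluster id below are placeholders)
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.cluster-id: /my-flink-cluster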
From the log you shared, I see that no job is recovered, which is expected
(see the log line "Successfully recovered 0 persisted job graphs").
Could you elaborate a bit more on the problem?
> Job cannot be recovered, after restarting cluster
> -------------------------------------------------
>
> Key: FLINK-27274
> URL: https://issues.apache.org/jira/browse/FLINK-27274
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API
> Affects Versions: 1.15.0
> Environment: Flink 1.15.0-rc3
> [https://github.com/apache/flink/archive/refs/tags/release-1.15.0-rc3.tar.gz]
> Reporter: macdoor615
> Priority: Blocker
> Fix For: 1.15.1
>
> Attachments: flink-conf.yaml,
> flink-gum-standalonesession-0-hb3-dev-flink-000.log.zip,
> flink-gum-taskexecutor-2-hb3-dev-flink-000.log,
> new_cf_alarm_no_recover.yaml.sql
>
>
> 1. execute new_cf_alarm_no_recover.yaml.sql with sql-client.sh
> config file: flink-conf.yaml
> the job runs properly
> 2. restart the cluster with the commands
> stop-cluster.sh
> start-cluster.sh
> 3. job cannot be recovered
> log files
> flink-gum-standalonesession-0-hb3-dev-flink-000.log
> flink-gum-taskexecutor-2-hb3-dev-flink-000.log
> 4. not all jobs fail to recover; at the same time, some can be recovered and some cannot
> 5. all jobs can be recovered on Flink 1.14.4