[
https://issues.apache.org/jira/browse/FLINK-27274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17524731#comment-17524731
]
macdoor615 commented on FLINK-27274:
------------------------------------
[~martijnvisser]
# It works with 1.14.4 and earlier versions. My understanding is that
stop-cluster.sh stops the cluster in an orderly manner, so that after a restart
it can be restored to its previous state, just as if it were recovering from a
failure.
# I am tuning the 1.15.0 flink-conf.yaml (see the sketch of the
recovery-related settings below); I upgraded to version 1.15.0 a few days ago.
I don't think
[https://nightlies.apache.org/flink/flink-docs-master/docs/ops/upgrading/] is
relevant to my question.
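For reference, a minimal sketch of the recovery-related flink-conf.yaml entries
involved; the hosts and paths below are placeholders, not the values from the
attached file:

  # persist job graphs and checkpoint metadata so the dispatcher can recover jobs
  high-availability: zookeeper
  high-availability.storageDir: hdfs:///flink/recovery
  high-availability.zookeeper.quorum: zk-host:2181
  # checkpoint storage used when restoring job state
  state.checkpoints.dir: hdfs:///flink/checkpoints
  execution.checkpointing.interval: 10s

With settings like these, a standalone session cluster persists submitted job
graphs, which is what allows start-cluster.sh to resubmit jobs after
stop-cluster.sh.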
> Job cannot be recovered, after restarting cluster
> -------------------------------------------------
>
> Key: FLINK-27274
> URL: https://issues.apache.org/jira/browse/FLINK-27274
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API
> Affects Versions: 1.15.0
> Environment: Flink 1.15.0-rc3
> [https://github.com/apache/flink/archive/refs/tags/release-1.15.0-rc3.tar.gz]
> Reporter: macdoor615
> Priority: Major
> Attachments: flink-conf.yaml,
> flink-gum-standalonesession-0-hb3-dev-flink-000.log.3.zip,
> flink-gum-standalonesession-0-hb3-dev-flink-000.log.zip,
> flink-gum-taskexecutor-2-hb3-dev-flink-000.log, log.recover.debug.zip,
> new_cf_alarm_no_recover.yaml.sql
>
>
> 1. Execute new_cf_alarm_no_recover.yaml.sql with sql-client.sh
> config file: flink-conf.yaml
> The job runs properly.
> 2. Restart the cluster with the commands
> stop-cluster.sh
> start-cluster.sh
> 3. The job cannot be recovered.
> Log files:
> flink-gum-standalonesession-0-hb3-dev-flink-000.log
> flink-gum-taskexecutor-2-hb3-dev-flink-000.log
> 4. Not all jobs fail to recover; at the same time, some jobs can be recovered
> and some cannot.
> 5. All jobs can be recovered on Flink 1.14.4.
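Spelled out, the commands behind steps 1 and 2 of the quoted description,
assuming they are run from the Flink distribution root and that the SQL script
sits in the working directory:

  # step 1: submit the job from the SQL script (the client picks up conf/flink-conf.yaml)
  ./bin/sql-client.sh -f new_cf_alarm_no_recover.yaml.sql

  # step 2: restart the standalone session cluster; the job should be recovered
  # afterwards, but on 1.15.0 it is not
  ./bin/stop-cluster.sh
  ./bin/start-cluster.sh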