Yes, if you delete the deployment directly, all the HA data will be
retained, and you can recover the Flink job by creating a new deployment.

You can also find this described in the documentation[1].


[1].
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/ha/kubernetes_ha/#high-availability-data-clean-up
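For reference, a minimal sketch of this recovery flow on native Kubernetes. The cluster ID, bucket path, and jar location below are placeholders, not values from this thread:

```shell
# Delete only the JobManager deployment; the HA ConfigMaps and the
# data under high-availability.storageDir are retained.
kubectl delete deployment my-flink-cluster

# Re-create the deployment with the SAME cluster-id and storageDir;
# Flink recovers the job from the retained HA data.
./bin/flink run-application \
    -t kubernetes-application \
    -Dkubernetes.cluster-id=my-flink-cluster \
    -Dhigh-availability=kubernetes \
    -Dhigh-availability.storageDir=s3://my-bucket/flink-ha \
    local:///opt/flink/usrlib/my-job.jar
```

Note that reusing the same `kubernetes.cluster-id` is what lets the new deployment find the retained ConfigMaps.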

Best,
Yang

Alexis Sarda-Espinosa <alexis.sarda-espin...@microfocus.com> wrote on
Friday, October 8, 2021 at 22:47:

> Hi Yang,
>
> thanks for the confirmation. If I manually stop the job by deleting the
> Kubernetes deployment before it completes, I suppose the files will not be
> cleaned up, right? That's a somewhat non-standard scenario, so I wouldn't
> expect Flink to clean up, I just want to be sure.
>
> Regards,
> Alexis.
>
> ------------------------------
> *From:* Yang Wang <danrtsey...@gmail.com>
> *Sent:* Friday, October 8, 2021 5:24 AM
> *To:* Alexis Sarda-Espinosa <alexis.sarda-espin...@microfocus.com>
> *Cc:* Flink ML <user@flink.apache.org>
> *Subject:* Re: Kubernetes HA - Reusing storage dir for different clusters
>
> When the Flink job reaches a globally terminal state (FAILED, CANCELED,
> FINISHED), all the HA-related data (including the pointers in the
> ConfigMaps and the concrete data in the DFS) is cleaned up automatically.
>
> Best,
> Yang
>
> Alexis Sarda-Espinosa <alexis.sarda-espin...@microfocus.com>
> wrote on Monday, October 4, 2021 at 15:59:
>
> Hello,
>
>
>
> If I deploy a Flink application on Kubernetes with HA, I need to set
> high-availability.storageDir. If my application is a batch job that may run
> multiple times with the same configuration, do I need to manually clean up
> the storage dir between each execution?
>
>
>
> Regards,
>
> Alexis.
>