Hi Yang,

thanks for the confirmation. If I manually stop the job by deleting the 
Kubernetes deployment before it completes, I suppose the files will not be 
cleaned up, right? That's a somewhat non-standard scenario, so I wouldn't 
expect Flink to clean up; I just want to be sure.
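In case manual cleanup turns out to be needed in that scenario, I imagine it would look roughly like the sketch below (the cluster id, labels, and bucket path are made-up placeholders; the ConfigMap label selector is the one described in the Flink Kubernetes HA docs, but please double-check it for the version in use):

```shell
# Delete the leftover HA ConfigMaps for the stopped cluster
# (cluster id "my-batch-job" is hypothetical).
kubectl delete configmaps -l app=my-batch-job,configmap-type=high-availability

# Remove the HA data Flink wrote under high-availability.storageDir
# (bucket and path are hypothetical).
aws s3 rm --recursive s3://my-bucket/flink-ha/my-batch-job
```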

Regards,
Alexis.

________________________________
From: Yang Wang <danrtsey...@gmail.com>
Sent: Friday, October 8, 2021 5:24 AM
To: Alexis Sarda-Espinosa <alexis.sarda-espin...@microfocus.com>
Cc: Flink ML <user@flink.apache.org>
Subject: Re: Kubernetes HA - Reusing storage dir for different clusters

When the Flink job reaches a global terminal state (FAILED, CANCELED, 
FINISHED), all the HA-related data (including the pointers in the ConfigMaps 
and the concrete data in the DFS) will be cleaned up automatically.

Best,
Yang

Alexis Sarda-Espinosa <alexis.sarda-espin...@microfocus.com> wrote on Mon, 
Oct 4, 2021 at 3:59 PM:

Hello,



If I deploy a Flink-Kubernetes application with HA, I need to set 
high-availability.storageDir. If my application is a batch job that may run 
multiple times with the same configuration, do I need to manually clean up the 
storage dir between each execution?
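For context, the setup in question would be something like the following flink-conf.yaml fragment (the cluster id and bucket path are placeholders I've made up; the factory class is the one from the Flink Kubernetes HA documentation):

```yaml
kubernetes.cluster-id: my-batch-job   # hypothetical cluster id
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-bucket/flink-ha   # hypothetical bucket
```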



Regards,

Alexis.
