Re: Cleaning old incremental checkpoint files

2021-09-17 Thread Yun Tang
To: rmetz...@apache.org  Cc: user@flink.apache.org  Subject: Re: Cleaning old incremental checkpoint files
Thanks Robert for your answer, this seems to be what we observed when we tried to delete the first time: Flink complained about missing files. I'm wondering then ho

Re: Cleaning old incremental checkpoint files

2021-09-07 Thread Robin Cassan
ould let new job get decoupled with older checkpoints. Do you think that could resolve your case? Best, Yun Tang
From: Robin Cassan  Sent: Wednesday, September 1, 2021 17:38  To: Robert Metzger  Cc: user  Sub

Re: Cleaning old incremental checkpoint files

2021-09-03 Thread Yun Tang
ember 1, 2021 17:38 To: Robert Metzger Cc: user Subject: Re: Cleaning old incremental checkpoint files Thanks Robert for your answer, this seems to be what we observed when we tried to delete the first time: Flink complained about missing files. I'm wondering then how are people cleaning the

Re: Cleaning old incremental checkpoint files

2021-09-01 Thread Robin Cassan
Thanks Robert for your answer, this seems to be what we observed when we tried to delete the first time: Flink complained about missing files. I'm wondering then how are people cleaning their storage for incremental checkpoints? Is there any guarantee when using TTLs that after the TTL has expired,

Re: Cleaning old incremental checkpoint files

2021-08-03 Thread Robert Metzger
Hi Robin, Let's say you have two checkpoints #1 and #2, where #1 has been created by an old version of your job, and #2 has been created by the new version. When can you delete #1? In #1, there's a directory "/shared" that contains data that is also used by #2, because of the incremental nature of
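The sharing Robert describes can be illustrated with a small reference-counting sketch. The checkpoint IDs and file names below are hypothetical and this is not Flink's real metadata format; the point is only that an incremental checkpoint references files in "/shared" that earlier checkpoints uploaded, so a file is safe to delete only when no retained checkpoint still references it:

```python
# Sketch: why checkpoint #1's directory cannot be deleted wholesale.
# Checkpoint IDs and file names are illustrative, not Flink's actual layout.

checkpoints = {
    "chk-1": {"shared/sst-001", "shared/sst-002"},  # created by the old job version
    "chk-2": {"shared/sst-002", "shared/sst-003"},  # new version reuses sst-002
}

def safe_to_delete(chk_id, checkpoints):
    """Files referenced by chk_id and by no other retained checkpoint."""
    others = set().union(
        *(files for cid, files in checkpoints.items() if cid != chk_id)
    )
    return checkpoints[chk_id] - others

# Deleting all of chk-1 would remove sst-002, which chk-2 still needs;
# only sst-001 is private to chk-1 and safe to remove.
print(sorted(safe_to_delete("chk-1", checkpoints)))  # ['shared/sst-001']
```

This also shows why a blanket storage TTL is risky: age alone does not tell you whether a file in "/shared" is still referenced by a newer checkpoint.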

Cleaning old incremental checkpoint files

2021-07-29 Thread Robin Cassan
Hi all! We've happily been running a Flink job in production for a year now, with the RocksDB state backend and incremental retained checkpointing on S3. We often release new versions of our jobs, which means we cancel the running one and submit another while restoring the previous jobId's last re
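The setup described here roughly corresponds to a flink-conf.yaml fragment like the following. The bucket path is a placeholder, and the key names are from the Flink 1.x configuration reference; verify them against the version you run:

```yaml
# RocksDB state backend with incremental, retained checkpoints on S3
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: s3://my-bucket/flink-checkpoints   # placeholder path
state.checkpoints.num-retained: 1
execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION
```

With retention set to RETAIN_ON_CANCELLATION, checkpoints survive job cancellation, which is what allows the restore-on-redeploy workflow described above and also what makes manual cleanup of old "/shared" files necessary.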