To: Robert Metzger <rmetz...@apache.org>
Cc: user <user@flink.apache.org>
Subject: Re: Cleaning old incremental checkpoint files
Thanks Robert for your answer, this seems to be what we observed when we
tried to delete the first time: Flink complained about missing files.
I'm wondering, then, how are people cleaning their storage for incremental
checkpoints? Is there any guarantee when using TTLs that after the TTL has
expired,
> let the new job get decoupled from older checkpoints. Do you think that could
> resolve your case?
>
> Best
> Yun Tang
> --
> *From:* Robin Cassan
> *Sent:* Wednesday, September 1, 2021 17:38
> *To:* Robert Metzger
> *Cc:* user
> *Subject:* Re: Cleaning old incremental checkpoint files
Hi Robin,
Let's say you have two checkpoints #1 and #2, where #1 has been created by
an old version of your job, and #2 has been created by the new version.
When can you delete #1?
In #1, there's a directory "/shared" that contains data that is also used
by #2, because of the incremental nature of checkpointing.
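To make the sharing concrete, here is a small Python sketch (the file names and the `CheckpointStore` class are illustrative, not Flink's actual on-disk format or API) that models incremental checkpoints as sets of shared files. It shows why the oldest checkpoint directory cannot simply be deleted wholesale: a file is only safe to remove once no live checkpoint references it, which is a reference-counting scheme similar in spirit to what Flink does internally.

```python
# Illustrative model only: each checkpoint references files in a shared
# pool; a file may be physically deleted only when no remaining
# checkpoint references it. Names are hypothetical, not Flink's layout.

class CheckpointStore:
    def __init__(self):
        self.refcount = {}     # file name -> number of referencing checkpoints
        self.checkpoints = {}  # checkpoint id -> set of file names

    def register(self, chk_id, files):
        """A new incremental checkpoint references both old and new files."""
        self.checkpoints[chk_id] = set(files)
        for f in files:
            self.refcount[f] = self.refcount.get(f, 0) + 1

    def discard(self, chk_id):
        """Drop a checkpoint; return the files now safe to delete."""
        deletable = []
        for f in self.checkpoints.pop(chk_id):
            self.refcount[f] -= 1
            if self.refcount[f] == 0:
                del self.refcount[f]
                deletable.append(f)
        return sorted(deletable)

store = CheckpointStore()
# Checkpoint 1 wrote sst-1 and sst-2; checkpoint 2 reuses sst-2.
store.register(1, ["sst-1", "sst-2"])
store.register(2, ["sst-2", "sst-3"])

# Deleting checkpoint 1's directory wholesale would remove sst-2 and
# corrupt checkpoint 2; the refcount says only sst-1 is safe to delete.
print(store.discard(1))  # → ['sst-1']
```

This also explains the "missing files" error from the thread: removing checkpoint #1's directory by hand deletes shared files that checkpoint #2's metadata still points at.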
Hi all!
We've happily been running a Flink job in production for a year now, with
the RocksDB state backend and incremental retained checkpointing on S3. We
often release new versions of our jobs, which means we cancel the running
one and submit another while restoring the previous jobId's last retained
checkpoint.
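For reference, the cancel-and-restore workflow described here is typically done with Flink's CLI, whose `-s` flag accepts a retained checkpoint path as well as a savepoint path. This is a sketch against a hypothetical cluster: the job id, bucket, and jar name below are placeholders, not values from this thread.

```shell
# Cancel the running job (job id is a placeholder).
flink cancel 00000000000000000000000000000000

# Submit the new version, restoring from the last retained checkpoint.
# With incremental checkpoints, the _metadata in chk-<n> still points at
# files under the job's shared/ directory, so those files must not have
# been deleted in the meantime.
flink run -d \
    -s s3://my-bucket/checkpoints/00000000000000000000000000000000/chk-42 \
    my-job-new-version.jar
```

Note that this restore path is exactly why old checkpoints cannot be cleaned by age alone: the new job id keeps referencing the previous job's shared files.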