elkhand commented on issue #2033:
URL: https://github.com/apache/iceberg/issues/2033#issuecomment-769952948
For anyone facing the same issue, here is a way to bypass it until this bug is fixed on the Flink side.
You do not need to repair corrupted metadata files whose `flink.max-committed-checkpoint-id` is set to `Long.MAX_VALUE`.
Just follow this workflow for stateful upgrades going forward; it works as expected and no longer produces corrupted metadata files:
- take a manual savepoint without stopping or canceling the job: `./bin/flink savepoint ${JOB_ID} /tmp/flink-savepoints`
- cancel your job
- make your code changes and upload the new jar of your job
- start your job from the savepoint taken in the first step (see the shell sketch after this list)
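
For reference, here is the whole flow as a shell sketch. The savepoint command is the one from the list above; the entry class and jar name in the last step are placeholders, and the actual savepoint path is printed by the savepoint command:

```bash
# 1. Trigger a savepoint while the job keeps running (do NOT stop/cancel yet)
./bin/flink savepoint ${JOB_ID} /tmp/flink-savepoints

# 2. Cancel the job only after the savepoint has completed successfully
./bin/flink cancel ${JOB_ID}

# 3. Deploy the new jar and resume from the savepoint path printed in step 1
#    (entry class and jar name below are placeholders)
./bin/flink run -s /tmp/flink-savepoints/savepoint-<id> -c com.example.MyJob my-job.jar
```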
After this flow, your `flink.max-committed-checkpoint-id` will be set to the correct `checkpointId`.
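
If you want to verify, the committed checkpoint id is recorded in the snapshot summary embedded in the table's metadata JSON files. For a Hadoop-catalog table you can grep the metadata directory directly (the table location below is an example):

```bash
# Snapshot summaries (including flink.max-committed-checkpoint-id) are
# serialized into the table metadata files; adjust the path to your table.
grep 'flink.max-committed-checkpoint-id' /path/to/table/metadata/v*.metadata.json
```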