We are still pretty new to Flink and I have a conceptual / DevOps question.

When a job is modified and we want to deploy the new version, what is the
preferred method?  Our jobs carry a lot of keyed state.

If we restore from a savepoint, we carry over old state that may no longer
apply to the new pipeline.
If we start a fresh job, we can reprocess historical data from Kafka, but that
can be very resource-heavy for a while.
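For context, our savepoint-based redeploy currently looks roughly like the
standard Flink CLI flow below (the savepoint path and jar name are
placeholders, not our actual setup):

```shell
# Stop the running job, taking a savepoint on the way out.
flink stop --savepointPath s3://our-bucket/savepoints <job-id>

# Restart the new build from that savepoint; --allowNonRestoredState lets
# the job start even if some old operator state has no home in the new graph.
flink run -s s3://our-bucket/savepoints/savepoint-xxxx \
    --allowNonRestoredState new-pipeline.jar
```

The --allowNonRestoredState flag only skips state for operators that were
removed; it does not help when an operator survives but its state schema or
semantics changed, which is the case we keep running into.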

Is there an option I am missing?  Are there facilities to selectively "patch"
or "purge" the keyed state?

Michael
