Hi,

this is awesome, and a really useful feature. If I might ask for one thing to consider: would it be possible to make the savepoint manipulation API (at least the part that writes the savepoint) less dependent on other Flink internals (e.g. KeyedStateBootstrapFunction) and provide something more general (e.g. a generic Writer)? The reason I'm asking is that I can easily imagine a situation where users want to bootstrap the state with some other runner (e.g. Apache Spark) and only then run Apache Flink on the state that has been created. This makes even more sense in the context of Apache Beam, which already provides the groundwork to make this happen. So the question is: would it be possible to design this feature so that writing the savepoint from a different runner is possible?
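To make that a bit more concrete, here is a very rough sketch of the kind of runner-agnostic writer I have in mind. None of these types exist in Flink today; the names are made up purely for illustration:

import java.io.IOException;

/**
 * Hypothetical, runner-agnostic savepoint writer. The point is only the
 * shape: another system (Spark, Beam, ...) could call something like this
 * without depending on Flink's runtime or on KeyedStateBootstrapFunction.
 */
public interface GenericSavepointWriter<K, V> extends AutoCloseable {

    /** Start writing state for the operator identified by the given uid. */
    void startOperator(String operatorUid, int maxParallelism) throws IOException;

    /**
     * Write one keyed-state entry; the writer takes care of key-group
     * assignment and of serializing into Flink's savepoint format.
     */
    void writeKeyedState(String stateName, K key, V value) throws IOException;

    /** Finalize the savepoint metadata and flush all state files. */
    @Override
    void close() throws IOException;
}

The exact shape does not matter; the important part is that writing the savepoint data and metadata is exposed behind something that does not require running a Flink job.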

Cheers,

 Jan

On 5/30/19 1:14 AM, Seth Wiesman wrote:
Hey Everyone!

Gordon and I have been discussing adding a savepoint connector to Flink for
reading, writing and modifying savepoints.

This is useful for:
     Analyzing state for interesting patterns
     Troubleshooting or auditing jobs by checking for discrepancies in state
     Bootstrapping state for new applications
     Modifying savepoints such as:
         Changing max parallelism
         Making breaking schema changes
         Correcting invalid state

We are looking forward to your feedback!

This is the FLIP:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-43%3A+Savepoint+Connector
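
To give a feel for the proposed API, here is a rough sketch of bootstrapping keyed state into a new savepoint. The class and method names follow the current draft in the FLIP and may still change:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

public class BootstrapStateExample {

    /** Writes each input element into a keyed ValueState named "count". */
    public static class CountBootstrapper extends KeyedStateBootstrapFunction<Integer, Integer> {

        private transient ValueState<Integer> state;

        @Override
        public void open(Configuration parameters) {
            state = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("count", Integer.class));
        }

        @Override
        public void processElement(Integer value, Context ctx) throws Exception {
            state.update(value);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();

        // The data we want to pre-load into the new application's state.
        DataSet<Integer> data = bEnv.fromElements(1, 2, 3);

        // Turn the data set into a bootstrap transformation over keyed state.
        BootstrapTransformation<Integer> transformation = OperatorTransformation
                .bootstrapWith(data)
                .keyBy(new KeySelector<Integer, Integer>() {
                    @Override
                    public Integer getKey(Integer value) {
                        return value;
                    }
                })
                .transform(new CountBootstrapper());

        // Assemble a new savepoint with max parallelism 128 and write it out;
        // depending on the final design an explicit bEnv.execute() may be
        // needed afterwards to actually run the write job.
        Savepoint
                .create(new MemoryStateBackend(), 128)
                .withOperator("my-operator-uid", transformation)
                .write("file:///tmp/new-savepoint");
    }
}

Reading and modifying an existing savepoint are meant to follow the same style; the full details are in the FLIP.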

Seth


