jlloh commented on issue #9256:
URL: https://github.com/apache/hudi/issues/9256#issuecomment-1652989720

   Ok, so I tried loading the specific parquet files written to the partitions 
where the new column should exist, and realised that the parquet files don't 
contain the new column. It seems to be an issue specific to my Flink job: 
when I restore from the savepoint, it isn't picking up the new column or 
modifying the job graph. Will dig into this further.
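   The check described above (reading a partition's parquet files and looking for the new column in their schema) can be sketched roughly as below. This is a hypothetical example, not from the issue: the file is generated locally for illustration, and `new_col` is a placeholder for whatever column was added; in practice you would point `read_schema` at the actual files under the Hudi partition path.

   ```python
   import os
   import tempfile

   import pyarrow as pa
   import pyarrow.parquet as pq

   # Write a small sample parquet file so the sketch is self-contained.
   # In the real check, skip this and use a file the Flink job wrote.
   table = pa.table({"id": [1, 2], "new_col": ["a", "b"]})
   path = os.path.join(tempfile.mkdtemp(), "part-0.parquet")
   pq.write_table(table, path)

   # read_schema only reads the footer, so it is cheap even for large files.
   schema = pq.read_schema(path)
   print("new_col" in schema.names)
   ```

   If this prints `False` for the files in the affected partitions, the column was dropped before the write, which points at the job graph rather than at Hudi.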
   
   One follow-up question, slightly off topic: what's the "safe" way to restart 
a Flink job from scratch and have it write to the same Hudi table? I.e., suppose 
the problem is with updating the job graph from the savepoint, and I want to 
stop my Flink job, accept some data loss, and continue writing to the Hudi table 
after redeploying. I couldn't find much documentation on how to let my Flink job 
simply resume writing to an existing Hudi table while ignoring the previous 
in-flight commits that never finished.
   

