jlloh commented on issue #9256:
URL: https://github.com/apache/hudi/issues/9256#issuecomment-1653434166

   Update, @voonhous, it turns out it was indeed an issue with my Flink setup. It was set up in Kubernetes in HA mode, and I was creating a savepoint, deleting the pods, and resuming the job by passing the savepoint path with the `-s` argument.
   
   But it seems that in HA mode, if the HA configmap still exists in Kubernetes, Flink ignores the savepoint passed via `-s` and restores from the checkpoint recorded in the configmap instead. Because of that, the job was not picking up the new columns I had added, so the new columns were not being inserted.
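   
   For anyone hitting the same thing, here is roughly the failure mode as shell commands. This is a minimal sketch, assuming a native-Kubernetes Flink cluster with `kubernetes.cluster-id: my-flink-cluster`; the job id, bucket, and jar name are made up:
   
   ```bash
   # Trigger a savepoint for the running job (job id is hypothetical)
   flink savepoint a1b2c3d4e5f6 s3://my-bucket/savepoints/
   
   # Flink's Kubernetes HA metadata lives in configmaps; this label
   # selector follows Flink's documented convention, assuming
   # kubernetes.cluster-id is set to "my-flink-cluster"
   kubectl get configmaps \
     -l app=my-flink-cluster,configmap-type=high-availability
   
   # While those configmaps still exist, restoring like this ignored the
   # savepoint and resumed from the checkpoint they point to:
   flink run -s s3://my-bucket/savepoints/savepoint-a1b2c3-xxxxxx my-job.jar
   ```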
   
   It worked as expected with this sequence (a command-line sketch follows below):
   1. create a savepoint
   2. delete the Flink deployment and clean up the HA configmaps
   3. `alter table add column` in Spark
   4. redeploy the new Flink job with the new column, making sure the new Flink jobgraph actually inserts the new column
   5. query the new column from Spark
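   
   A minimal command-line sketch of that sequence, assuming the same hypothetical cluster-id, a jobmanager deployment named `flink-jobmanager`, and a Hudi table `hudi_demo`; all names and paths are examples, not from my actual setup:
   
   ```bash
   # 1. Take a savepoint of the running job (job id is hypothetical)
   flink savepoint a1b2c3d4e5f6 s3://my-bucket/savepoints/
   
   # 2. Tear down the Flink deployment AND the HA configmaps, so the
   #    next start cannot silently restore from old checkpoint metadata
   kubectl delete deployment flink-jobmanager
   kubectl delete configmaps \
     -l app=my-flink-cluster,configmap-type=high-availability
   
   # 3. Evolve the table schema from Spark (table/column names are examples)
   spark-sql -e "ALTER TABLE hudi_demo ADD COLUMNS (new_col STRING)"
   
   # 4. Redeploy the updated job from the savepoint; the new jobgraph
   #    must actually write the new column
   flink run -s s3://my-bucket/savepoints/savepoint-a1b2c3-xxxxxx new-job.jar
   
   # 5. Verify the new column from Spark
   spark-sql -e "SELECT new_col FROM hudi_demo LIMIT 10"
   ```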
   
   Thanks again for the quick support, @voonhous, and apologies that it was a misconfiguration on my side.
   
   
   p.s. My minor question above still stands, about starting a Flink job from scratch and writing to the same Hudi table, if you know the answer.

