kiwi-chen commented on issue #11803:
URL: https://github.com/apache/hudi/issues/11803#issuecomment-2463590439

   @rangareddy 
   In your examples, I noticed that an insert statement is necessary after changing the table schema.
   I also tested with the following steps:
   1. Alter the table column type (from int to long).
   2. `SELECT * FROM table`, which throws an exception:
      Caused by: java.lang.NullPointerException
           at org.apache.spark.sql.execution.vectorized.WritableColumnVector.arrayData(WritableColumnVector.java:710)
   
   However, if I successfully execute an insert statement after altering the column type, the select statement succeeds, i.e. the exception above disappears.
   So it seems that we have to execute an insert statement after the alter statement, but I don't know why. If you do, please let me know, thank you.
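   For reference, the sequence above can be sketched in Spark SQL as follows (the table and column names are hypothetical, purely to illustrate the repro; this requires a Spark session with Hudi configured):

   ```sql
   -- Hypothetical Hudi table for illustration only
   CREATE TABLE t (id INT, val INT) USING hudi;
   INSERT INTO t VALUES (1, 10);

   -- Step 1: alter the column type (int -> long)
   ALTER TABLE t ALTER COLUMN val TYPE BIGINT;

   -- Step 2: reading immediately afterwards fails with the
   -- NullPointerException in WritableColumnVector.arrayData
   SELECT * FROM t;

   -- Observed workaround: after any successful insert,
   -- the same SELECT succeeds again
   INSERT INTO t VALUES (2, 20);
   SELECT * FROM t;
   ```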
   

