aditiwari01 commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-808839630


   Yes. My changes are also in HoodieSparkSqlWriter, where we generate the Avro 
schema from the DataFrame schema. The only difference is where the converter 
function lives. Instead of adding a separate DataSourceUtils.regenerateSchema 
function, I enhanced the convertStructTypeToAvroSchema method itself to handle 
null values. This seems to me to be a limitation of the 
convertStructTypeToAvroSchema function, so we should not need a separate util 
just to add default values. Anyone who derives a schema from a DataFrame 
should get one with defaults already handled.
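   To illustrate the idea, here is a minimal sketch in Python (not Hudi's actual 
Scala code; the helper name is hypothetical): when converting a nullable struct 
field to Avro, the field type becomes a union with "null" first and the field 
gets a null default, so readers can tolerate missing values:

```python
import json

def to_avro_field(name, avro_type, nullable):
    # Hypothetical helper: for a nullable column, wrap the type in a
    # union with "null" listed first and set the default to null, so
    # schema evolution and missing values don't break downstream readers.
    if nullable:
        return {"name": name, "type": ["null", avro_type], "default": None}
    return {"name": name, "type": avro_type}

schema = {
    "type": "record",
    "name": "ExampleRecord",
    "fields": [
        to_avro_field("id", "long", nullable=False),
        to_avro_field("note", "string", nullable=True),
    ],
}
print(json.dumps(schema, indent=2))
```

   Note that in Avro, a union field's default must match the first branch of 
the union, which is why "null" is listed first when the default is null.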
   
   Ultimately, from a flow point of view, this change improves 
HoodieSparkSqlWriter as well.
   
   Besides this, is there any documentation on running unit/integration tests for 
the project? Or is there a pipeline I can run my local branch against?
   Do we need a Docker setup for unit tests as well? I tried "mvn clean install 
-DskipITs", but it hangs indefinitely trying to connect to localhost:54522.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
