li-ang-666 commented on issue #9804:
URL: https://github.com/apache/hudi/issues/9804#issuecomment-1749301109

   > Yeah, I checked the code for schema resolution: the `TableSchemaResolver` 
first decodes the schema from the instant commit metadata, then from the table 
option `hoodie.table.create.schema` in hoodie.properties, and finally from the 
data files. I'm not sure why your run falls back to resolving from the data 
files, since your hoodie.properties already includes the table schema.
   > 
   > Here is the code snippet:
   > 
   > 
https://github.com/apache/hudi/blob/b77286f176f1a606c807139042c2bd1f56883016/hudi-common/src/main/java/org/apache/hudi/common/table/TableSchemaResolver.java#L193
   
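   The three-step fallback order quoted above can be sketched roughly as follows. This is not the real `TableSchemaResolver` code, just an illustration of the precedence; the class and method names here are made up, and the placeholder return values stand in for actual Avro schemas:

```java
import java.util.Optional;

// Rough sketch of the resolution order: commit metadata first, then the
// hoodie.table.create.schema property, then the data files themselves.
class SchemaResolutionSketch {
    // Placeholder sources; in Hudi these read real table metadata.
    static Optional<String> fromCommitMetadata() {
        return Optional.empty(); // no schema in the latest instant's metadata
    }

    static Optional<String> fromCreateSchemaProperty() {
        return Optional.of("schema-from-hoodie.properties");
    }

    static String fromDataFiles() {
        return "schema-from-parquet-footer"; // last resort: open a data file
    }

    static String resolve() {
        return fromCommitMetadata()
                .or(SchemaResolutionSketch::fromCreateSchemaProperty)
                .orElseGet(SchemaResolutionSketch::fromDataFiles);
    }
}
```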
   I guess the parquet files produced by bulk_insert contain INT96 timestamps. 
How can I write timestamp-millis instead of INT96?

