hudi-bot opened a new issue, #15959:
URL: https://github.com/apache/hudi/issues/15959

   When using Flink 1.13.6 and Hudi 0.13.0 in COW + append + clustering mode, if 
the field list contains a map type and an async clustering job is scheduled, the 
job throws the following exception:

   > The requested schema is not compatible with the file schema. 
incompatible types: required binary key (STRING) != optional binary key (STRING)
   The root cause is that [HUDI-3378](https://github.com/apache/hudi/pull/7345) 
changed the parquet reader. The new parquet reader is compatible with Spark but 
not fully compatible with Flink, because the parquet schema Flink writes differs 
from the one Spark writes.
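   The mismatch in the exception message can be illustrated with the parquet schemas the two engines produce for a MAP field. This is a hedged sketch: the message and field names (`tags`, `key_value`) are hypothetical, but the repetition levels mirror the error text, where the reader's requested schema declares the map key `required` while the Flink-written file declares it `optional`:

   ```
   // Requested (Spark-style) schema for a MAP column: map keys are required
   message spark_style {
     optional group tags (MAP) {
       repeated group key_value {
         required binary key (STRING);
         optional binary value (STRING);
       }
     }
   }

   // Schema as written in the Flink-produced file: the key is optional,
   // which the strict reader rejects as an incompatible type
   message flink_style {
     optional group tags (MAP) {
       repeated group key_value {
         optional binary key (STRING);
         optional binary value (STRING);
       }
     }
   }
   ```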
   
   We plan to make two patches: the first fixes this bug in 0.13.x; the second 
resolves the schema differences between the Flink and Spark parquet writers.
   
   ## JIRA info
   
   - Link: https://issues.apache.org/jira/browse/HUDI-6221
   - Type: Bug

