xiarixiaoyao opened a new pull request, #6816:
URL: https://github.com/apache/hudi/pull/6816

   ### Change Logs
   
   Add an integrity check of the merged parquet file in HoodieMergeHandle.
   
   In our current production environment, cluster instability makes it easy for Hudi to write a corrupt parquet file, which renders the entire table unavailable, e.g.:
   ```
   Caused by: java.io.IOException: can not read class org.apache.parquet.format.PageHeader: Required field 'uncompressed_page_size' was not found in serialized data! Struct: org.apache.parquet.format.PageHeader$PageHeaderStandardScheme@183284b
        at org.apache.parquet.format.Util.read(Util.java:365)
        at org.apache.parquet.format.Util.readPageHeader(Util.java:132)
        at org.apache.parquet.hadoop.ParquetFileReader$Chunk.readPageHeader(ParquetFileReader.java:1382)
   ```
   
   
   After Hudi finishes writing the parquet file, this change adds a simple parquet file integrity check to ensure that no corrupt file enters the Hudi table.
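   The idea of such a check can be illustrated with a minimal sketch that validates the parquet magic bytes (`PAR1`) at the head and tail of the file using only the JDK. This is an assumption for illustration, not the code in this PR: the class name `ParquetMagicCheck` and method `isLikelyValidParquet` are hypothetical, and an actual check would more likely reopen the file with parquet's own reader so the footer metadata is fully parsed.

   ```java
   import java.io.IOException;
   import java.io.RandomAccessFile;
   import java.nio.charset.StandardCharsets;
   import java.nio.file.Path;
   import java.util.Arrays;

   // Hypothetical sketch: a cheap sanity check that a parquet file was fully written.
   // A parquet file must begin with the 4-byte magic "PAR1" and end with the
   // 4-byte footer length followed by the same magic; a truncated write loses the tail.
   public class ParquetMagicCheck {
       private static final byte[] MAGIC = "PAR1".getBytes(StandardCharsets.US_ASCII);

       public static boolean isLikelyValidParquet(Path file) throws IOException {
           try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
               long len = raf.length();
               // Minimum size: header magic + 4-byte footer length + tail magic.
               if (len < MAGIC.length * 2 + 4) {
                   return false;
               }
               byte[] head = new byte[MAGIC.length];
               raf.readFully(head);
               byte[] tail = new byte[MAGIC.length];
               raf.seek(len - MAGIC.length);
               raf.readFully(tail);
               return Arrays.equals(head, MAGIC) && Arrays.equals(tail, MAGIC);
           }
       }

       public static void main(String[] args) throws IOException {
           System.out.println(isLikelyValidParquet(Path.of(args[0])));
       }
   }
   ```

   A magic-byte check catches truncation but not mid-file page corruption like the `PageHeader` error above, which is why re-reading the footer (or the pages) with `ParquetFileReader` is the stronger validation.
   
   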
   
   
   ### Impact
   
   Risk level: none
   
   
   ### Documentation Update
   
   N/A
   ### Contributor's checklist
   
   - [ ] Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [ ] Change Logs and Impact were stated clearly
   - [ ] Adequate tests were added if applicable
   - [ ] CI passed
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.