zhangyue19921010 opened a new pull request, #13365:
URL: https://github.com/apache/hudi/pull/13365

   ### Change Logs
   
   Details are described in the sections below.
   
   ### Impact
   
   no
   
   ### Risk level (write none, low medium or high below)
   
   low
   
   ### Documentation Update
   
   _Describe any necessary documentation update if there is any new feature, 
config, or user-facing change. If not, put "none"._
   
   - _The config description must be updated if new configs are added or the 
default value of the configs are changed_
   - _Any new feature or user-facing change requires updating the Hudi website. 
Please create a Jira ticket, attach the
     ticket number here and follow the 
[instruction](https://hudi.apache.org/contribute/developer-setup#website) to 
make
     changes to the website._
   
   ### Contributor's checklist
   
   - [ ] Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [ ] Change Logs and Impact were stated clearly
   - [ ] Adequate tests were added if applicable
   - [ ] CI passed
   
   
   # Hudi Parquet Binary Stream Copy Optimization
   
   ## Background
   - Hudi tables primarily use **Parquet format**.
   - During **Clustering**, data undergoes redundant "column-to-row-to-column" 
transformations, involving:
     - Compression/decompression
     - Encoding/decoding
     - Row/column conversions
     - Format translations between compute engines and Hudi
   - Similar inefficiencies exist in:
     - Low-update-ratio **compaction** scenarios
     - **CoW write amplification** with minimal data updates
   
   ## Optimization Approach
   <img width="1314" alt="image" src="https://github.com/user-attachments/assets/aa3385d6-a90e-40b3-9a55-f45b7604c3d4" />
   
   - **HoodieParquetRewriter**: a foundational implementation that:  
     ✅ Performs **column-chunk-level binary stream copying**  
     ✅ Copies unmodified Parquet column chunks directly into target files  
     ✅ Rewrites the Parquet footer to enable fast replication  
   - Applied to **Clustering scenarios** in this PR.
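The core idea — moving a column chunk's bytes verbatim, without decompressing, decoding, or converting rows — can be illustrated with a minimal, self-contained Java sketch. The offsets and file layout below are hypothetical stand-ins for the real locations recorded in a Parquet footer, and `copyRange` is an illustration, not Hudi's actual API:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkCopyDemo {
    // Copy a byte range (e.g., a column chunk located via footer offsets)
    // from source to target without decoding or decompressing it.
    static void copyRange(Path src, Path dst, long offset, long length) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE,
                     StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            long pos = offset;
            long remaining = length;
            while (remaining > 0) {
                long n = in.transferTo(pos, remaining, out);
                pos += n;
                remaining -= n;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".bin");
        Path dst = Files.createTempFile("dst", ".bin");
        Files.write(src, "headerCHUNKDATAfooter".getBytes());
        // Pretend the footer says the chunk starts at offset 6 with length 9.
        copyRange(src, dst, 6, 9);
        System.out.println(new String(Files.readAllBytes(dst))); // prints CHUNKDATA
    }
}
```

Because the chunk bytes never pass through decompression, decoding, or a row representation, the copy cost is pure I/O, which is what makes the approach so much cheaper than a full rewrite.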
   
   ## Supported Capabilities
   ### 1. Schema Evolution
   - Handles schema changes (e.g., adding fields, widening numeric types). 
**Behavior**:  
     - Uses the table's latest schema as output schema.  
     - Compares input file schemas against the latest schema.  
     - Fills missing fields with **null values** during binary copying.
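The schema comparison step can be sketched as follows. This is a simplified, hypothetical illustration using field names only (the real implementation compares full Parquet schemas); `missingFields` is not an actual Hudi method:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SchemaDiffDemo {
    // Given the table's latest schema and an input file's schema (field
    // names only, for illustration), return the fields absent from the
    // input file that must be filled with nulls during the binary copy.
    static List<String> missingFields(List<String> latestSchema, List<String> inputSchema) {
        List<String> missing = new ArrayList<>(latestSchema);
        missing.removeAll(inputSchema);
        return missing;
    }

    public static void main(String[] args) {
        List<String> latest = Arrays.asList("_hoodie_record_key", "id", "name", "email");
        List<String> input = Arrays.asList("_hoodie_record_key", "id", "name");
        System.out.println(missingFields(latest, input)); // prints [email]
    }
}
```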
   
   ### 2. Hudi Custom Metadata Merging
   - Merges critical Hudi metadata from input files:  
     ```plaintext
     hoodie_min_record_key
     hoodie_max_record_key
     org.apache.hudi.bloomfilter
     hoodie_bloom_filter_type_code
     ```
   
   - Process:
     - Collect metadata from all input files.
     - Compute aggregated values (e.g., take the global min/max record keys and merge the bloom filters).
     - Write merged metadata to output files.
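The aggregation step above can be sketched in plain Java. The `FileMeta` class and `merge` method are hypothetical simplifications: keys are compared lexicographically, and the bloom filter merge is modeled as a bitwise OR of a `BitSet` (real bloom filter merging additionally requires that all inputs share the same size and hash configuration):

```java
import java.util.Arrays;
import java.util.BitSet;
import java.util.List;

public class FooterMetadataMergeDemo {
    // Per-file metadata as read from each input Parquet footer (simplified).
    static class FileMeta {
        final String minKey, maxKey;
        final BitSet bloomBits; // stand-in for the serialized bloom filter
        FileMeta(String minKey, String maxKey, BitSet bloomBits) {
            this.minKey = minKey;
            this.maxKey = maxKey;
            this.bloomBits = bloomBits;
        }
    }

    // Aggregate across input files: global min/max record keys, and the
    // union (bitwise OR) of the bloom filter bits.
    static FileMeta merge(List<FileMeta> inputs) {
        String min = null, max = null;
        BitSet bits = new BitSet();
        for (FileMeta m : inputs) {
            if (min == null || m.minKey.compareTo(min) < 0) min = m.minKey;
            if (max == null || m.maxKey.compareTo(max) > 0) max = m.maxKey;
            bits.or(m.bloomBits);
        }
        return new FileMeta(min, max, bits);
    }

    public static void main(String[] args) {
        BitSet a = new BitSet(); a.set(1); a.set(4);
        BitSet b = new BitSet(); b.set(2); b.set(4);
        FileMeta merged = merge(Arrays.asList(
                new FileMeta("key_003", "key_120", a),
                new FileMeta("key_001", "key_090", b)));
        // prints: key_001 key_120 {1, 2, 4}
        System.out.println(merged.minKey + " " + merged.maxKey + " " + merged.bloomBits);
    }
}
```

ORing the bit arrays is safe for membership queries (no false negatives are introduced), though the merged filter's false-positive rate rises with the combined key count.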
    
   ## Future Roadmap
   Extend binary copy support to Merge-on-Read (MoR) Compaction and CoW Upsert 
operations.
   
   ## Optimization Results
   
   Using identical configurations, Clustering was performed on 100GB of data 
with both the default strategy and the binary stream copy strategy. The binary 
stream copy strategy achieved:
   
   - **93% reduction in execution time** (from 18 minutes to 1.2 minutes)
   
   - **95% reduction in computational workload** (from 28.7 task-hours to 1.3 
task-hours)
   
   **Before**
   <img width="1482" alt="image (1)" src="https://github.com/user-attachments/assets/ac7800c7-f033-43c0-a5e8-277cf9cd2a77" />
   ![image (2)](https://github.com/user-attachments/assets/51835cf7-4d9e-4f21-b5e5-2d7a5fc3cad8)
   
   
   **After**
   
   <img width="1638" alt="image (3)" src="https://github.com/user-attachments/assets/c78d9201-0ab9-449d-9d51-3654f3e49fc7" />

   ![image (4)](https://github.com/user-attachments/assets/b5fa09c0-7448-48fb-b075-3e3a8acb4973)
   
   

