TheR1sing3un opened a new pull request, #12961: URL: https://github.com/apache/hudi/pull/12961
For the records we need to delete, we only need to read the `hoodie_meta_fields`, the `record_keys`, and the columns involved in the delete condition, which greatly reduces the amount of data read during a delete.

> Benchmark in our production environment:
> - 1000 columns
> - execute `delete from table where col-a = 'xxx'`
> - 50,000,000 records per partition

> Before optimization:
> <img width="2508" alt="image" src="https://github.com/user-attachments/assets/79b1a53b-3ff4-4fdc-bc35-ee8989fb20b5" />

> After optimization:
> <img width="2524" alt="image" src="https://github.com/user-attachments/assets/350af7ee-38cf-477e-800b-7dfb6f16517c" />

In our wide-table scenario, column pruning greatly reduces the data scanning and shuffle overhead across the entire delete pipeline (a short sketch at the end of this description illustrates the idea).

### Change Logs

1. Introduce schema pruning for delete operations.

### Impact

Improves delete performance, especially for tables with many columns.

### Risk level (write none, low, medium or high below)

low

### Documentation Update

none

### Contributor's checklist

- [x] Read through [contributor's guide](https://hudi.apache.org/contribute/how-to-contribute)
- [x] Change Logs and Impact were stated clearly
- [x] Adequate tests were added if applicable
- [x] CI passed
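---

For illustration only, here is a minimal Spark/Scala sketch of why the narrow projection is sufficient for a delete. It is not the PR's implementation (the PR applies the pruning automatically inside the SQL `DELETE` path); the column names `record_key`, `partition_path`, `col_a` and the base path are hypothetical, and some table configurations may additionally require the precombine field.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions.col

object PrunedDeleteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("pruned-delete-sketch")
      .master("local[*]")
      .getOrCreate()

    val basePath = "file:///tmp/hudi_table" // illustrative path

    // Read only the key, the partition path, and the column referenced by
    // the delete predicate -- not all 1000 columns. Spark pushes this
    // projection down to the Parquet reader, so the wide columns are
    // never scanned or shuffled.
    val toDelete = spark.read.format("hudi")
      .load(basePath)
      .select(col("record_key"), col("partition_path"), col("col_a"))
      .filter(col("col_a") === "xxx")

    // A delete only needs record keys and partition paths, so the narrow
    // frame above already carries everything Hudi requires.
    toDelete.write.format("hudi")
      .option("hoodie.datasource.write.operation", "delete")
      .option("hoodie.datasource.write.recordkey.field", "record_key")
      .option("hoodie.datasource.write.partitionpath.field", "partition_path")
      .option("hoodie.table.name", "hudi_table")
      .mode(SaveMode.Append)
      .save(basePath)

    spark.stop()
  }
}
```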
