haggy opened a new pull request, #5965: URL: https://github.com/apache/hudi/pull/5965
## *Tips*
- *Thank you very much for contributing to Apache Hudi.*
- *Please review https://hudi.apache.org/contribute/how-to-contribute before opening a pull request.*

## What is the purpose of the pull request

This PR changes the `HoodieCleaner` utility so that errors propagate to Spark and YARN (or whichever resource manager you are using). Currently, if the cleaner fails for any reason, the application status (in YARN) is not set to `FAILED`. When running the cleaner outside of the write process (e.g. outside DeltaStreamer), this results in a job that cannot be tracked by the coordinating system.

We are using Apache Airflow to run the cleaner in parallel on our tables. If the cleaner fails, we need the application status to be set to `FAILED` so that the Airflow sensor for the job can alert us.

There was an [old issue and PR](https://issues.apache.org/jira/browse/HUDI-1749) created to catch all cleaner errors and log them. According to the ticket, that change was made because the cleaner process was hanging in certain scenarios. I have not been able to reproduce this behavior, and there is no information about the environment on the ticket (possibly it only occurred under Spark 2.x?).

## Brief change log

- Modify HoodieCleaner.java to remove the try/catch block

## Verify this pull request

*(Please pick either of the following options)*

This pull request is a trivial rework / code cleanup without any test coverage.

## Committer checklist

- [ ] Has a corresponding JIRA in PR title & commit
- [x] Commit message is descriptive of the change
- [ ] CI is green
- [x] Necessary doc changes done or have another open PR
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
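The mechanism behind the change: a caught-and-logged exception lets the driver's `main` return normally, so the JVM exits with code 0 and YARN reports the application as `SUCCEEDED`; letting the exception escape instead causes a non-zero exit that the resource manager maps to `FAILED`. Below is a minimal, self-contained sketch of the two patterns. The class and method names (`CleanerRunner`, `runSwallowing`, `runPropagating`) are hypothetical and are not Hudi's actual code.

```java
public class CleanerRunner {

    // Before the fix (sketch): the catch-all swallows the failure.
    // main() then returns normally, the JVM exits 0, and YARN marks
    // the application SUCCEEDED even though cleaning failed.
    public static void runSwallowing(Runnable clean) {
        try {
            clean.run();
        } catch (Exception e) {
            System.err.println("Cleaning failed: " + e.getMessage());
        }
    }

    // After the fix (sketch): no try/catch, so the exception propagates
    // out of main(), the JVM exits non-zero, and Spark/YARN report the
    // application status as FAILED.
    public static void runPropagating(Runnable clean) {
        clean.run();
    }

    public static void main(String[] args) {
        Runnable failing = () -> { throw new RuntimeException("clean failed"); };

        // Swallowed: execution continues, exit status unaffected.
        runSwallowing(failing);

        // Propagated: the throw reaches the caller. Here we catch it
        // only to demonstrate; in the real driver it would be uncaught.
        try {
            runPropagating(failing);
        } catch (RuntimeException e) {
            System.out.println("propagated: " + e.getMessage());
        }
    }
}
```

An uncaught exception is what tools like an Airflow `SparkSubmitOperator` sensor ultimately observe, via the non-zero exit code of `spark-submit`.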
