codecov-commenter commented on PR #14605: URL: https://github.com/apache/dolphinscheduler/pull/14605#issuecomment-1643569584
## [Codecov](https://app.codecov.io/gh/apache/dolphinscheduler/pull/14605?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) Report

> Merging [#14605](https://app.codecov.io/gh/apache/dolphinscheduler/pull/14605?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) (71caa9c) into [dev](https://app.codecov.io/gh/apache/dolphinscheduler/commit/311a71512302296cecdd36e1b8882ed4b7884b88?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) (311a715) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.

> :exclamation: Current head 71caa9c differs from pull request most recent head c933feb. Consider uploading reports for the commit c933feb to get more accurate results.

```diff
@@             Coverage Diff              @@
##                dev   #14605      +/-   ##
============================================
- Coverage     38.48%   38.48%   -0.01%
  Complexity     4546     4546
============================================
  Files          1254     1254
  Lines         43724    43724
  Branches       4825     4825
============================================
- Hits          16828    16825       -3
- Misses        25023    25026       +3
  Partials       1873     1873
```

[see 1 file with indirect coverage changes](https://app.codecov.io/gh/apache/dolphinscheduler/pull/14605/indirect-changes?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache)

:mega: We're building smart automated test selection to slash your CI/CD build times. [Learn more](https://about.codecov.io/iterative-testing/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache)

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at: [email protected]
