steveloughran opened a new pull request, #4728: URL: https://github.com/apache/hadoop/pull/4728
### Description of PR

Declares the committer's compatibility with the stream capability `mapreduce.job.committer.dynamic.partitioning`. Spark will need to cast the committer to `StreamCapabilities` and then probe for the capability.

### How was this patch tested?

I have a patch with matching changes in the Spark code, including unit tests there to verify that it is not an error to ask for dynamic partitioning if the committer's `hasCapability()` probe holds. Full integration tests could be added in my cloud-integration repo https://github.com/hortonworks-spark/cloud-integration; that is a matter of lifting some tests from Spark and making them retargetable at stores other than the local filesystem. Alternatively, given that the manifest committer works with `file://`, a unit test could be added there to run iff Spark is built against a Hadoop release containing the class.

### For code changes:

- [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
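The probe Spark would perform can be sketched as follows. This is a minimal illustration, not the actual Spark patch: the `StreamCapabilities` interface below is a local stand-in for `org.apache.hadoop.fs.StreamCapabilities` (which declares the same `hasCapability(String)` method), so the example compiles without Hadoop on the classpath, and `ManifestCommitterStub` is a hypothetical committer used only to demonstrate the pattern. The capability string is the one this PR declares.

```java
public class CapabilityProbe {

    /** Minimal stand-in for org.apache.hadoop.fs.StreamCapabilities. */
    interface StreamCapabilities {
        boolean hasCapability(String capability);
    }

    /** The capability string declared by this PR. */
    static final String DYNAMIC_PARTITIONING =
        "mapreduce.job.committer.dynamic.partitioning";

    /** Hypothetical committer that declares the capability. */
    static class ManifestCommitterStub implements StreamCapabilities {
        @Override
        public boolean hasCapability(String capability) {
            return DYNAMIC_PARTITIONING.equals(capability);
        }
    }

    /**
     * The probe pattern: check the committer implements the interface,
     * cast, then ask for the specific capability. Committers which do
     * not implement the interface simply report "unsupported".
     */
    static boolean supportsDynamicPartitioning(Object committer) {
        return committer instanceof StreamCapabilities
            && ((StreamCapabilities) committer)
                   .hasCapability(DYNAMIC_PARTITIONING);
    }

    public static void main(String[] args) {
        // A capable committer probes as true; anything else as false.
        System.out.println(supportsDynamicPartitioning(new ManifestCommitterStub()));
        System.out.println(supportsDynamicPartitioning(new Object()));
    }
}
```

The key point of the design is that the probe is safe against older committers: if the cast target does not implement the interface, the caller gets `false` rather than an error.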
