Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1919#issuecomment-52123315
There are a couple of ways we can add tests; ideally we would do a little
of both:
- Find [existing Hive
tests](https://github.com/apache/spark/tree/master/sql/hive/src/test/resources/ql/src/test/queries/clientpositive)
that exercise dynamic partitioning and add them to [our
whitelist](https://github.com/apache/spark/blob/master/sql/hive/compatibility/src/test/scala/org/apache/spark/sql/hive/execution/HiveCompatibilitySuite.scala#L209).
The test harness will automatically invoke Hive to compute the correct
answers. You need to make sure you have Hadoop and Hive compiled and the
environment variables set correctly, as described in the [dependencies for
developers](https://github.com/apache/spark/tree/master/sql).
- Add tests to
[HiveQuerySuite](https://github.com/apache/spark/blob/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala).
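
For illustration, the two approaches might look roughly like the sketch below. This is not from the PR itself; the query-file name and test name are hypothetical, though `whiteList` and `createQueryTest` are the actual hooks in `HiveCompatibilitySuite` and the `HiveComparisonTest` base class.

```scala
// Approach 1: in HiveCompatibilitySuite.scala, add the name of an existing
// Hive query file (from ql/src/test/queries/clientpositive, without the .q
// extension) to the whitelist. The harness runs Hive to generate golden answers.
override def whiteList = Seq(
  // ... existing entries ...
  "dynamic_partition_skip_default"  // hypothetical dynamic-partitioning test
)

// Approach 2: in HiveQuerySuite.scala, register a query directly with
// createQueryTest; the harness compares Spark SQL's output against Hive's.
createQueryTest("dynamic partition insert",  // hypothetical test name
  """
    |INSERT OVERWRITE TABLE dest PARTITION (part)
    |SELECT key, value, part FROM src
  """.stripMargin)
```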