HyukjinKwon commented on a change in pull request #28977:
URL: https://github.com/apache/spark/pull/28977#discussion_r458730964
##########
File path: project/SparkBuild.scala
##########
@@ -486,9 +485,11 @@ object SparkParallelTestGrouping {
)
private val DEFAULT_TEST_GROUP = "default_test_group"
+ private val HIVE_EXECUTION_TEST_GROUP = "hive_execution_test_group"
private def testNameToTestGroup(name: String): String = name match {
case _ if testsWhichShouldRunInTheirOwnDedicatedJvm.contains(name) => name
+ case _ if name.contains("org.apache.spark.sql.hive.execution") =>
+      HIVE_EXECUTION_TEST_GROUP
Review comment:
@xuanyuanking, shall we add a comment distinguishing
`testsWhichShouldRunInTheirOwnDedicatedJvm` from the new
`org.apache.spark.sql.hive.execution` case?
I think the point is that this case works differently from the opt-in test
cases defined above in `testsWhichShouldRunInTheirOwnDedicatedJvm`: we're
grouping all `org.apache.spark.sql.hive.execution.*` suites into a single
shared group, unlike the other cases above, and that might be difficult to
catch at a glance.
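To illustrate the distinction, here is a minimal standalone sketch of the two grouping behaviors (the set contents and object name are hypothetical; only the match structure mirrors `testNameToTestGroup` in the diff):

```scala
object TestGroupingSketch {
  // Hypothetical stand-in for testsWhichShouldRunInTheirOwnDedicatedJvm:
  // each suite listed here maps to its OWN group (one dedicated JVM each).
  private val dedicatedJvmTests = Set(
    "org.apache.spark.DistributedSuite"
  )

  private val DEFAULT_TEST_GROUP = "default_test_group"
  private val HIVE_EXECUTION_TEST_GROUP = "hive_execution_test_group"

  def testNameToTestGroup(name: String): String = name match {
    // Opt-in case: the group name IS the test name, so every listed
    // suite gets its own dedicated JVM.
    case _ if dedicatedJvmTests.contains(name) => name
    // Package case: ALL hive.execution suites collapse into one
    // shared group, i.e. one JVM for the whole package.
    case _ if name.contains("org.apache.spark.sql.hive.execution") =>
      HIVE_EXECUTION_TEST_GROUP
    case _ => DEFAULT_TEST_GROUP
  }
}
```

For example, two different `hive.execution` suites map to the same group, while a suite in `dedicatedJvmTests` maps to a group named after itself.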
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]