viirya commented on a change in pull request #31764:
URL: https://github.com/apache/spark/pull/31764#discussion_r589091797
##########
File path:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/plans/PlanTest.scala
##########
@@ -51,6 +51,15 @@ trait CodegenInterpretedPlanTest extends PlanTest {
super.test(testName + " (interpreted path)", testTags: _*)(testFun)(pos)
}
}
+
+ protected def testFallback(
+ testName: String,
+ testTags: Tag*)(testFun: => Any)(implicit pos: source.Position): Unit = {
+ val codegenMode = CodegenObjectFactoryMode.FALLBACK.toString
+ withSQLConf(SQLConf.CODEGEN_FACTORY_MODE.key -> codegenMode) {
+ super.test(testName, testTags: _*)(testFun)(pos)
+ }
Review comment:
`FALLBACK` mode is not a third path distinct from the codegen and interpreted
paths. Under `FALLBACK` mode, Spark first runs the codegen path and falls back
to the interpreted path only if codegen fails.
So it sounds odd that some test cases work only under `FALLBACK` mode and
therefore need to be run only with `FALLBACK` mode. Doesn't that just mean the
test cases may fail under codegen but succeed under the interpreted path?
Wrapping the test function with the `FALLBACK` config means we may run the
test only on the codegen path: if codegen succeeds, the interpreted path is
skipped entirely.
So it may unintentionally lose test coverage of the interpreted path. That is,
if we use `testFallback` and the codegen path succeeds, only the codegen path
is exercised; the interpreted path is never tested for that case.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]