LuciferYang commented on code in PR #47414:
URL: https://github.com/apache/spark/pull/47414#discussion_r1686150662


##########
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala:
##########
@@ -1275,11 +1275,32 @@ class JDBCV2Suite extends QueryTest with SharedSparkSession with ExplainSuiteHel
       "BONUS IS NOT NULL, SIN(BONUS) < -0.08, SINH(BONUS) > 200.0, COS(BONUS) > 0.9, " +
       "COSH(BONUS) > 200.0, TAN(BONUS) < -0.08, TANH(BONUS) = 1.0, COT(BONUS) < -11.0, " +
       "ASIN(BONUS) > 0.1, ACOS(BONUS) > 1.4, ATAN(BONUS) > 1.4, (ATAN2(BONUS, BONUS)) > 0.7],")
-    checkAnswer(df16, Seq(Row(1, "cathy", 9000, 1200, false),
-      Row(2, "alex", 12000, 1200, false), Row(6, "jen", 12000, 1200, true)))
 
-    // H2 does not support log2, asinh, acosh, atanh, cbrt
+    // When arguments for asin and acos are invalid (< -1 || > 1) in H2
+    val e = intercept[SparkException] {

Review Comment:
   Could the df16 test be fixed in place to use compliant data, and the check against the exception-triggering data be moved into a separate test case? Name the new test case with the prefix `SPARK-48943:`. That way there would also be no need to rename `df17`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

