caicancai commented on code in PR #3706:
URL: https://github.com/apache/calcite/pull/3706#discussion_r1505234803


##########
testkit/src/main/java/org/apache/calcite/test/SqlOperatorTest.java:
##########
@@ -6268,6 +6268,18 @@ void checkRegexpExtract(SqlOperatorFixture f0, FunctionAlias functionAlias) {
     f.checkScalar("rand_integer(2, 11)", 1, "INTEGER NOT NULL");
   }
 
+  /** Test case for
+   * <a href="https://issues.apache.org/jira/browse/CALCITE-6283">
+   * [CALCITE-6283] Function array_append with a NULL array argument
+   * crashes with NullPointerException</a>. */
+  @Test void testArrayNullFunc() {
+    final SqlOperatorFixture f = fixture().withLibrary(SqlLibrary.SPARK);
+    f.checkNull("array_append(null, 2)");
+    f.checkNull("array_prepend(null, 2)");
+    f.checkNull("array_remove(null, 2)");
+    f.checkNull("array_contains(null, 2)");

Review Comment:
   > I have changed the type checker to reject NULL literals for arrays.
However, it seems that Spark is inconsistent, since it allows NULL values
for arrays at runtime but not at compile time. This is probably because
Spark uses a different type checking/inference algorithm.
   
   Spark SQL, after generating the unresolved logical plan, runs a series of
analyzer rules to turn it into the resolved logical plan. Some type
conversions are handled by those rules, and the type handling of
array_append(null, 2) appears to happen in that phase.
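   For illustration, the NULL-propagating runtime behavior the test above
checks (a NULL array argument yields a NULL result rather than an NPE) can
be sketched with a hypothetical null-safe helper; this is not Calcite's
actual implementation, just a minimal sketch of the semantics:

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayAppendSketch {
  /** Hypothetical null-safe array_append: a NULL array argument
   * propagates to a NULL result, matching Spark's runtime behavior
   * exercised by f.checkNull("array_append(null, 2)"). */
  public static <E> List<E> arrayAppend(List<E> array, E element) {
    if (array == null) {
      return null; // NULL array in, NULL result out -- no NPE
    }
    List<E> result = new ArrayList<>(array);
    result.add(element);
    return result;
  }

  public static void main(String[] args) {
    System.out.println(arrayAppend(null, 2));          // null
    System.out.println(arrayAppend(List.of(1, 2), 3)); // [1, 2, 3]
  }
}
```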



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
