beliefer commented on code in PR #38867:
URL: https://github.com/apache/spark/pull/38867#discussion_r1063236793


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##########
@@ -4601,6 +4601,231 @@ case class ArrayExcept(left: Expression, right: Expression) extends ArrayBinaryL
     newLeft: Expression, newRight: Expression): ArrayExcept = copy(left = newLeft, right = newRight)
 }
 
+@ExpressionDescription(
+  usage = "_FUNC_(x, pos, val) - Places val into index pos of array x (array indices start at 1)",
+  examples = """
+    Examples:
+      > SELECT _FUNC_(array(1, 2, 3, 4), 5, 5);
+       [1,2,3,4,5]

Review Comment:
   @Daniel-Davies I'm sorry, I was wrong earlier. It seems the array functions Spark already supports start at index 1. So there are two options:
   1. Keep all the new array functions 1-based as well, for consistency with the existing ones.
   2. Alternatively, treat the legacy 1-based indexing as a mistake; to follow Snowflake we would have to make a breaking change.
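
   For reference, a minimal spark-shell sketch (not part of this PR; it assumes a running SparkSession bound to the usual `spark` value) showing that existing array functions such as `element_at` and `array_position` are already 1-based, which is the convention option 1 would keep:

   ```scala
   // Run in spark-shell, where `spark` and its implicits are available.
   import org.apache.spark.sql.functions.{array_position, col, element_at}
   import spark.implicits._

   val df = Seq(Seq(10, 20, 30)).toDF("arr")

   // element_at is 1-based: position 1 yields the first element, 10.
   df.select(element_at(col("arr"), 1)).show()

   // array_position also reports 1-based positions: 20 is found at position 2.
   df.select(array_position(col("arr"), 20)).show()
   ```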



