Daniel-Davies commented on code in PR #38867:
URL: https://github.com/apache/spark/pull/38867#discussion_r1063284902
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##########
@@ -4601,6 +4601,231 @@ case class ArrayExcept(left: Expression, right: Expression) extends ArrayBinaryL
    newLeft: Expression, newRight: Expression): ArrayExcept = copy(left = newLeft, right = newRight)
}
+@ExpressionDescription(
+ usage = "_FUNC_(x, pos, val) - Places val into index pos of array x (array indices start at 1)",
+ examples = """
+ Examples:
+ > SELECT _FUNC_(array(1, 2, 3, 4), 5, 5);
+ [1,2,3,4,5]
Review Comment:
No worries, I can revert to the last commit if we want to go with (1). My understanding is that the SQL:1999 standard specifies that arrays are 1-indexed, which is why Spark arrays are 1-indexed.
Personally, (1) sounds like a good option to me. However, I appreciate that this is outside my area of knowledge, so I'm happy to go with whichever of my commits works for you.
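For reference, the 1-based insert semantics from the `_FUNC_(array(1, 2, 3, 4), 5, 5)` example above can be sketched in plain Scala. This is a hypothetical helper for illustration only, not Spark's actual implementation (it ignores nulls, negative positions, and codegen):

```scala
// Hypothetical sketch of a 1-based array insert, mirroring the SQL:1999
// convention that array indices start at 1 (not Spark's real implementation).
def insertAt1Based[T](arr: Seq[T], pos: Int, elem: T): Seq[T] = {
  val idx = pos - 1 // convert the 1-based SQL position to a 0-based Scala index
  val (before, after) = arr.splitAt(idx)
  before ++ Seq(elem) ++ after
}
```

With this sketch, `insertAt1Based(Seq(1, 2, 3, 4), 5, 5)` yields `Seq(1, 2, 3, 4, 5)`, matching the `[1,2,3,4,5]` result in the ExpressionDescription example.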
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]