dbatomic commented on code in PR #45064:
URL: https://github.com/apache/spark/pull/45064#discussion_r1483558418
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/ColumnStats.scala:
##########
@@ -270,8 +273,8 @@ private[columnar] final class StringColumnStats extends ColumnStats {
}
def gatherValueStats(value: UTF8String, size: Int): Unit = {
Review Comment:
I was also planning for stats to be covered in a future PR. The thing is that
this code used to explicitly call `compareTo` on UTF8String, which is clearly
no longer allowed with collations, so I had to do some tuning. I tried to
follow the same approach as we took for `DecimalType` in this code path.
Are there any additional tests I can run, or stats people I can get to review
this? If I am not mistaken, this should be a relatively low-risk change.
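To make the intent concrete, here is a minimal self-contained sketch (plain Java with hypothetical names, not the actual `ColumnStats` code) of gathering min/max stats through an injected comparator instead of calling `compareTo` directly; the comparator stands in for whatever collation-aware ordering the `DecimalType`-style code path supplies:

```java
import java.util.Comparator;

// Hedged sketch, not Spark's StringColumnStats: track min/max via an injected
// comparator rather than String.compareTo, which is not collation-aware.
class StringStatsModel {
    private final Comparator<String> ordering;
    String upper = null;
    String lower = null;

    StringStatsModel(Comparator<String> ordering) {
        this.ordering = ordering;
    }

    void gatherValueStats(String value) {
        // All comparisons go through the collation-aware ordering.
        if (upper == null || ordering.compare(value, upper) > 0) upper = value;
        if (lower == null || ordering.compare(value, lower) < 0) lower = value;
    }
}
```

With a case-insensitive comparator, `"A"` can end up as the minimum even though binary ordering would place it before any lowercase letter for a different reason, which is exactly the kind of divergence collation-aware stats have to respect.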
##########
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##########
@@ -46,7 +47,7 @@
* <p>
* Note: This is not designed for general use cases, should not be used outside SQL.
*/
-public final class UTF8String implements Comparable<UTF8String>, Externalizable, KryoSerializable,
+public final class UTF8String implements Externalizable, KryoSerializable,
Review Comment:
Makes sense. Done.
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collationExpressions.scala:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.codegen._
+import org.apache.spark.sql.catalyst.util.CollationFactory
+import org.apache.spark.sql.types._
+import org.apache.spark.unsafe.types.UTF8String
+
+/**
+ * A function that marks a given expression with specified collation.
+ * This function is pass-through, it will not modify the input data.
+ * Only type metadata will be updated.
+ */
+@ExpressionDescription(
+ usage = "expr _FUNC_ collationName",
+ examples = """
+ Examples:
+ > SELECT 'Spark SQL' COLLATE 'UCS_BASIC_LCASE';
+ Spark SQL
+ """,
+ since = "4.0.0",
+ group = "string_funcs")
+case class Collate(child: Expression, collationName: String)
+ extends UnaryExpression with ExpectsInputTypes {
+ private val collationId = CollationFactory.collationNameToId(collationName)
+ override def dataType: DataType = StringType(collationId)
+ override def inputTypes: Seq[AbstractDataType] = Seq(StringType)
+
+ override protected def withNewChildInternal(
+ newChild: Expression): Expression = copy(newChild)
+
+ override def eval(row: InternalRow): Any = child.eval(row)
+
+  override protected def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode =
+    defineCodeGen(ctx, ev, (in) => in)
+}
+
+/**
+ * A function that returns the collation name of a given expression.
+ */
+@ExpressionDescription(
+ usage = "_FUNC_(expr)",
+ examples = """
+ Examples:
+ > SELECT _FUNC_('Spark SQL');
+ UCS_BASIC
+ """,
+ since = "4.0.0",
+ group = "string_funcs")
+case class Collation(child: Expression) extends UnaryExpression {
Review Comment:
I think it should be fine even with user-defined. Let's go with runtime
replaceable.
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala:
##########
@@ -650,7 +651,7 @@ class CodegenContext extends Logging {
case FloatType =>
val clsName = SQLOrderingUtil.getClass.getName.stripSuffix("$")
s"$clsName.compareFloats($c1, $c2)"
- // use c1 - c2 may overflow
+ case st: StringType => s"$c1.semanticCompare($c2, ${st.collationId})"
Review Comment:
Done. Did the same thing for both compare and equals.
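For illustration, the shape of that dispatch can be sketched as a tiny standalone model (hypothetical names, not Spark's `CodegenContext`): given a type tag, emit the Java source fragment that compares two values, routing strings through `semanticCompare` instead of `compareTo`:

```java
// Hedged sketch of genComp-style dispatch, not the actual CodeGenerator:
// produce the comparison code fragment for a given data type.
class GenCompModel {
    static final int FLOAT = 0;
    static final int STRING = 1;

    static String genComp(int typeTag, int collationId, String c1, String c2) {
        switch (typeTag) {
            case FLOAT:
                return "SQLOrderingUtil.compareFloats(" + c1 + ", " + c2 + ")";
            case STRING:
                // Collation-aware path: emit semanticCompare with the collation id
                // baked into the generated code.
                return c1 + ".semanticCompare(" + c2 + ", " + collationId + ")";
            default:
                throw new IllegalArgumentException("unsupported type tag " + typeTag);
        }
    }
}
```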
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala:
##########
@@ -650,7 +651,7 @@ class CodegenContext extends Logging {
case FloatType =>
val clsName = SQLOrderingUtil.getClass.getName.stripSuffix("$")
s"$clsName.compareFloats($c1, $c2)"
- // use c1 - c2 may overflow
Review Comment:
The comment felt outdated, so I removed it. I am not sure which case
`c1 - c2 may overflow` refers to.
##########
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##########
@@ -1410,6 +1422,13 @@ public boolean equals(final Object other) {
}
}
+ /**
+ * Collation-aware equality comparison of two UTF8String.
+ */
+ public boolean semanticEquals(final UTF8String other, int collationId) {
+    return CollationFactory.fetchCollation(collationId).equalsFunction.apply(this, other);
Review Comment:
In theory, the impact should be negligible. `fetchCollation` is an O(1)
operation, so this should only add a couple of extra instructions. Let's keep
it as is here, but add special handling for `UCS_BASIC` in `CodeGen`, as you
suggested?
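A minimal model of that trade-off (illustrative names and a hardcoded collation table, not Spark's `CollationFactory`): the general path does an O(1) lookup of the collation's equality function, while the default collation (`UCS_BASIC`, id 0) can take the binary-equality fast path that a `CodeGen` special case would emit:

```java
// Hedged sketch, not the real CollationFactory: fetchCollation amounts to an
// O(1) indexed lookup, and UCS_BASIC (id 0) short-circuits to binary equality.
class CollationEqualsModel {
    static final int UCS_BASIC = 0;
    static final int UCS_BASIC_LCASE = 1;

    static boolean semanticEquals(String a, String b, int collationId) {
        if (collationId == UCS_BASIC) {
            // Fast path the codegen special case would emit: plain binary equality,
            // no collation lookup at all.
            return a.equals(b);
        }
        // General path: resolve the collation's equality function and apply it.
        switch (collationId) {
            case UCS_BASIC_LCASE: return a.equalsIgnoreCase(b);
            default: throw new IllegalArgumentException("unknown collation " + collationId);
        }
    }
}
```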
##########
sql/api/src/main/scala/org/apache/spark/sql/types/StringType.scala:
##########
@@ -26,7 +27,27 @@ import org.apache.spark.annotation.Stable
* @param collationId The id of collation for this StringType.
*/
@Stable
-class StringType private(val collationId: Int) extends AtomicType {
+class StringType private(val collationId: Int) extends AtomicType with Serializable {
+  /**
+   * Returns whether assigned collation is the default spark collation (UCS_BASIC).
+   */
+  private def isDefaultCollation: Boolean = collationId == StringType.DEFAULT_COLLATION_ID
Review Comment:
Yeah, it is public now. I just didn't have a use case for this method prior to
the latest changes, so I kept it private.
##########
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##########
@@ -989,6 +989,7 @@ primaryExpression
| CASE whenClause+ (ELSE elseExpression=expression)? END #searchedCase
| CASE value=expression whenClause+ (ELSE elseExpression=expression)? END #simpleCase
| name=(CAST | TRY_CAST) LEFT_PAREN expression AS dataType RIGHT_PAREN #cast
+ | primaryExpression COLLATE stringLit #collate
Review Comment:
The left arg is an expression, the right one is a literal. E.g. you can say
`'hello' :: 'world' COLLATE 'UCS_BASIC_LCASE'`.
We can make the collation name an expression as well if needed, but it seems
like the standard way is to keep it as a literal.
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collationExpressions.scala:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.codegen._
+import org.apache.spark.sql.catalyst.util.CollationFactory
+import org.apache.spark.sql.types._
+import org.apache.spark.unsafe.types.UTF8String
+
+/**
+ * A function that marks a given expression with specified collation.
+ * This function is pass-through, it will not modify the input data.
+ * Only type metadata will be updated.
+ */
+@ExpressionDescription(
+ usage = "expr _FUNC_ collationName",
+ examples = """
+ Examples:
+ > SELECT 'Spark SQL' COLLATE 'UCS_BASIC_LCASE';
+ Spark SQL
Review Comment:
Yep, it would be more interesting :)
Not sure whether we should mix different expressions here, though?
I added what you suggested, but let's see what Wenchen is going to say here.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]