EnricoMi commented on code in PR #36150:
URL: https://github.com/apache/spark/pull/36150#discussion_r922713602
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala:
##########
@@ -92,6 +92,19 @@ private[sql] object QueryCompilationErrors extends QueryErrorsBase {
         pivotVal.toString, pivotVal.dataType.simpleString,
         pivotCol.dataType.catalogString))
   }
+  def unpivotRequiresValueColumns(ids: Seq[NamedExpression]): Throwable = {
+    new AnalysisException(
+      errorClass = "UNPIVOT_REQUIRES_VALUE_COLUMNS",
+      messageParameters = Array(ids.map(id => toSQLId(id.toString)).mkString(", ")))
+  }
+
+  def unpivotValDataTypeMismatchError(values: Seq[NamedExpression]): Throwable = {
+    val dataTypes = values.map(_.dataType).toSet.map((dt: DataType) => toSQLType(dt))
Review Comment:
For wide tables this would list a lot of types, possibly the same type multiple times, which makes it harder to spot the incompatible ones. And past the tenth ordinal the error message gets messy and less helpful.
Alternatively, the error could list a few (e.g. 3) columns per type. That would make it easier to identify the one or few column(s) that are incompatible with all the others.
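The grouping suggested above could be sketched roughly as follows. This is a standalone illustration, not Spark's actual implementation: the `(columnName, typeName)` pairs stand in for `NamedExpression`/`DataType`, and `summarize` is a hypothetical helper name.

```scala
// Sketch: group mismatched value columns by data type and list at most
// `maxPerType` column names per type, so the odd-one-out type is easy to spot.
object UnpivotErrorSketch {
  // columns: hypothetical (columnName, typeName) pairs standing in for
  // NamedExpression and its DataType's catalog string.
  def summarize(columns: Seq[(String, String)], maxPerType: Int = 3): String =
    columns
      .groupBy(_._2)                  // typeName -> columns of that type
      .toSeq
      .sortBy(-_._2.size)             // largest (likely "correct") group first
      .map { case (tpe, cols) =>
        val names = cols.map(_._1).take(maxPerType).mkString("`", "`, `", "`")
        val more = if (cols.size > maxPerType) ", ..." else ""
        s"$tpe: $names$more"
      }
      .mkString("; ")
}
```

With, say, columns `a` and `b` of type `int` and `c` of type `string`, this yields ``int: `a`, `b`; string: `c` ``, which points the reader at `c` directly instead of listing every column's type by ordinal.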
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]