cashmand commented on code in PR #45703:
URL: https://github.com/apache/spark/pull/45703#discussion_r1539495963
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaConverter.scala:
##########
@@ -404,6 +401,35 @@ class ParquetToSparkSchemaConverter(
}
}
+  private def convertVariantField(groupColumn: GroupColumnIO): ParquetColumn = {
+    if (groupColumn.getChildrenCount != 2) {
+      // We may allow more than two children in the future, so consider this unsupported.
+      throw QueryCompilationErrors.
+        parquetTypeUnsupportedYetError("variant with more than two fields")
+    }
+    // Find the binary columns, and validate that they have the correct type.
+    val valueAndMetadata = Seq("value", "metadata").map { colName =>
+      val idx = (0 until groupColumn.getChildrenCount)
+        .find(groupColumn.getChild(_).getName == colName)
Review Comment:
I'm not sure I understand the concern, or how you'd like it changed. Can you
clarify? I'd like to basically do the same search for "value" and "metadata"
that is done for struct fields in `convertInternal`, and then do validation on
each one. It's not clear to me what the easier approach would be.
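
For concreteness, this is roughly the shape I have in mind. It is only a sketch,
not the PR code: it is written against Parquet's public `GroupType` schema API
rather than the `ColumnIO` tree this converter works with, and
`findVariantFields` plus the error messages are invented for illustration.

```scala
// Sketch only: name-based lookup of the variant children followed by
// per-field validation, analogous to the struct-field search in convertInternal.
import org.apache.parquet.schema.{GroupType, PrimitiveType, Type}
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName

def findVariantFields(variantGroup: GroupType): Seq[PrimitiveType] = {
  Seq("value", "metadata").map { colName =>
    // Locate the child by name rather than assuming a fixed field order.
    val idx = (0 until variantGroup.getFieldCount)
      .find(i => variantGroup.getFieldName(i) == colName)
      .getOrElse(throw new IllegalArgumentException(
        s"variant group is missing required field '$colName'"))
    val field: Type = variantGroup.getType(idx)
    // Validate each field: it must be a non-repeated BINARY primitive.
    require(field.isPrimitive &&
      field.asPrimitiveType().getPrimitiveTypeName == PrimitiveTypeName.BINARY &&
      !field.isRepetition(Type.Repetition.REPEATED),
      s"variant field '$colName' must be a non-repeated binary column")
    field.asPrimitiveType()
  }
}
```

The point is just that the lookup is by name, the same way `convertInternal`
handles struct fields, with the type check then applied to each field
individually.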