viirya commented on a change in pull request #34199:
URL: https://github.com/apache/spark/pull/34199#discussion_r726582676
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
##########
@@ -609,7 +610,13 @@ private[parquet] class ParquetRowConverter(
//
// If the element type does not match the Catalyst type and the underlying repeated type
// does not belong to the legacy LIST type, then it is case 1; otherwise, it is case 2.
- val guessedElementType = schemaConverter.convertField(repeatedType)
+ //
+ // Since the `convertField` method requires a Parquet `ColumnIO` as input, here we first
+ // create a dummy message type which wraps the given repeated type, and then convert it to
+ // a `ColumnIO` using the Parquet API.
+ val messageType = Types.buildMessage().addField(repeatedType).named("foo")
+ val column = new ColumnIOFactory().getColumnIO(messageType)
+ val guessedElementType = schemaConverter.convertField(column.getChild(0)).sparkType
Review comment:
Why is this change needed? We still get the same (Spark) element type for the array type. What does `getColumnIO` give us here beyond what we had before?
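To make the question concrete, the wrapping step from the diff can be sketched in isolation. This is a hedged sketch assuming `parquet-column` is on the classpath; the shape of `repeatedType` below is hypothetical (the diff's actual value comes from the surrounding converter), and the message name `"foo"` mirrors the PR. The point of `getColumnIO` is that it walks the schema and attaches column I/O metadata (definition/repetition levels, column paths) that a raw `Type` alone does not carry:

```scala
import org.apache.parquet.io.ColumnIOFactory
import org.apache.parquet.schema.{MessageType, Types}
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName

// Hypothetical stand-in for the repeated group handled in the diff.
val repeatedType = Types.repeatedGroup()
  .addField(Types.optional(PrimitiveTypeName.INT32).named("element"))
  .named("list")

// Wrap the repeated type in a dummy message so ColumnIOFactory can process it;
// ColumnIOFactory only accepts a full MessageType, not an arbitrary Type.
val messageType: MessageType = Types.buildMessage().addField(repeatedType).named("foo")

// getColumnIO builds the ColumnIO tree, annotating each node with
// definition/repetition levels derived from its position in the schema.
val column = new ColumnIOFactory().getColumnIO(messageType)

// getChild(0) recovers the wrapped repeated type as a ColumnIO node,
// which is the input the new convertField signature expects.
val repeatedColumnIO = column.getChild(0)
```

Whether the extra level information is actually needed for guessing the element type is exactly what the question above is probing.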
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]