Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/22880#discussion_r229204016
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -182,18 +182,20 @@ private[parquet] class ParquetRowConverter(
   // Converters for each field.
   private val fieldConverters: Array[Converter with HasParentContainerUpdater] = {
-    parquetType.getFields.asScala.zip(catalystType).zipWithIndex.map {
-      case ((parquetFieldType, catalystField), ordinal) =>
-        // Converted field value should be set to the `ordinal`-th cell of `currentRow`
-        newConverter(parquetFieldType, catalystField.dataType, new RowUpdater(currentRow, ordinal))
+    parquetType.getFields.asScala.map {
+      case parquetField =>
--- End diff --
Do we really need a pattern match here? Or just:
```scala
parquetType.getFields.asScala.map { parquetField =>
...
}
```
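For anyone skimming the thread, here is a minimal standalone sketch (plain Scala collections, not the Spark converter code; all names are made up for illustration) of the distinction: the `case` keyword is only useful when the closure destructures its argument, as the removed zip-with-index code did; when the parameter is bound as a single name, a plain lambda does the same thing.
```scala
object CaseVsPlainLambda {
  def main(args: Array[String]): Unit = {
    val fields = Seq("a", "b", "c")

    // Pattern-match form: compiles, but the `case` is redundant here.
    val upper1 = fields.map { case field => field.toUpperCase }

    // Plain lambda form: same result, simpler, which is what the comment suggests.
    val upper2 = fields.map { field => field.toUpperCase }

    // `case` does earn its keep when destructuring, e.g. the zipped shape
    // that the old code pattern-matched on:
    val indexed = fields.zipWithIndex.map { case (field, ordinal) => s"$ordinal:$field" }

    assert(upper1 == upper2)
    println(indexed)
  }
}
```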
---