Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/8228#discussion_r37148466
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/CatalystRowConverter.scala ---
@@ -126,7 +169,24 @@ private[parquet] class CatalystRowConverter(
   // Converters for each field.
   private val fieldConverters: Array[Converter with HasParentContainerUpdater] = {
-    parquetType.getFields.zip(catalystType).zipWithIndex.map {
+    // In case of schema merging, `parquetType` can be a subset of `catalystType`. We need to pad
+    // those missing fields and create converters for them, although values of these fields are
+    // always null.
--- End diff ---
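To illustrate what the new comment describes, here is a minimal, self-contained sketch with hypothetical names (`FieldConverter`, `buildConverters`), not the actual `CatalystRowConverter` code: every field of the requested schema gets a converter, and fields missing from the file schema get one that always produces null.

```scala
// Hypothetical stand-in for parquet-mr's Converter hierarchy.
sealed trait FieldConverter
case class RealConverter(fieldName: String) extends FieldConverter
// Converter for a field absent from the Parquet file: it always yields null.
case object NullPaddingConverter extends FieldConverter

def buildConverters(fileFields: Set[String], requested: Seq[String]): Seq[FieldConverter] =
  requested.map { name =>
    if (fileFields.contains(name)) RealConverter(name)
    else NullPaddingConverter // pad the missing field with a null-producing converter
  }

// The file only contains `a`, but the merged schema asks for `a` and `b`:
buildConverters(Set("a"), Seq("a", "b"))
// => Seq(RealConverter("a"), NullPaddingConverter)
```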
This is a good question. I tested the following snippet against 1.4.1, and it doesn't work as expected:
```scala
import sqlContext.implicits._
import org.apache.spark.sql.types._

// Write a Parquet file with two columns, `a` and `b`.
sqlContext
  .range(1).select('id as 'a, 'id as 'b)
  .write.mode("overwrite").parquet("file:///tmp/schema")

// Read it back, requesting only column `a` via a user-specified schema.
sqlContext
  .read
  .schema(StructType(StructField("a", LongType, false) :: Nil))
  .parquet("file:///tmp/schema").show()
```
So at least this won't be a regression. It warrants further investigation, though.