dtenedor commented on code in PR #37501:
URL: https://github.com/apache/spark/pull/37501#discussion_r944913477
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala:
##########
@@ -282,14 +239,17 @@ private[parquet] class ParquetRowConverter(
// Create a RowUpdater instance for converting Parquet objects to Catalyst rows. If any fields
// in the Catalyst result schema have associated existence default values, maintain a boolean
// array to track which fields have been explicitly assigned for each row.
- val rowUpdater: RowUpdater =
-   if (catalystType.hasExistenceDefaultValues) {
-     resetExistenceDefaultsBitmask(catalystType)
-     new RowUpdaterWithBitmask(
-       currentRow, catalystFieldIndex, catalystType.existenceDefaultsBitmask)
-   } else {
-     new RowUpdater(currentRow, catalystFieldIndex)
+ val rowUpdater: RowUpdater = new RowUpdater(currentRow, catalystFieldIndex)
+ if (catalystType.hasExistenceDefaultValues) {
+   for (i <- 0 until catalystType.existenceDefaultValues.size) {
+     catalystType.existenceDefaultsBitmask(i) =
+       if (i < parquetType.getFieldCount) {
Review Comment:
We discussed this offline. I moved this code out of the
`parquetType.getFields.asScala.map { parquetField => ...` loop, and also ported
the explanation into a comment here:
```
// Assume the schema for a Parquet file-based table contains N fields. Then if we later
// run a command "ALTER TABLE t ADD COLUMN c DEFAULT <value>" on the Parquet table, this
// adds one field to the Catalyst schema. Then if we query the old files with the new
// Catalyst schema, we should only apply the existence default value to all columns > N.
```
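For illustration, the rule in that comment can be sketched in isolation (the object and method names below are hypothetical, not part of the PR): a column is eligible for its existence default only when its index falls at or beyond the field count of the Parquet file being read.

```scala
// Hypothetical standalone sketch of the rule quoted above: when an old Parquet
// file with N fields is read under a newer Catalyst schema, only the columns at
// index >= N (those added after the file was written) should have their
// existence default value applied.
object ExistenceDefaultsSketch {
  // Returns a bitmask where `true` marks a column eligible for its default.
  def computeBitmask(catalystFieldCount: Int, parquetFieldCount: Int): Array[Boolean] =
    Array.tabulate(catalystFieldCount)(i => i >= parquetFieldCount)

  def main(args: Array[String]): Unit = {
    // A file written with 3 columns, queried through a 5-column Catalyst schema:
    val mask = computeBitmask(catalystFieldCount = 5, parquetFieldCount = 3)
    println(mask.mkString(", "))  // prints: false, false, false, true, true
  }
}
```

Computing the whole bitmask once per row, outside the per-field conversion loop, is what the comment describes moving out of `parquetType.getFields.asScala.map { parquetField => ...`.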
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]