Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90968113
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java ---
@@ -107,7 +107,16 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
footer = readFooter(configuration, file, range(split.getStart(), split.getEnd()));
MessageType fileSchema = footer.getFileMetaData().getSchema();
FilterCompat.Filter filter = getFilter(configuration);
- blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
+ try {
+   blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
+ } catch (IllegalArgumentException e) {
+   // In the case where a particular Parquet file does not contain the
+   // column(s) in the filter, we don't do filtering at this level.
+   // PARQUET-389 will resolve this issue in Parquet 1.9, which may be used
+   // by future Spark versions. This is a workaround for the current Spark
+   // version. Also, the assumption here is that the predicates will be
+   // applied later.
+   blocks = footer.getBlocks();
--- End diff ---
Could you please add a TODO item in the comment so that we won't forget to
remove this after upgrading to 1.9? Thanks!
```
// TODO Remove this hack after upgrading to parquet-mr 1.9.
```
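For reference, a minimal sketch of how the workaround might look with the TODO folded in. This pulls the try/catch out of the diff into a hypothetical standalone helper (`safeFilterRowGroups` is not a name from the PR); the `filterRowGroups`, `ParquetMetadata`, and `BlockMetaData` signatures are the standard parquet-mr 1.7/1.8 ones:
```java
import java.util.List;

import org.apache.parquet.filter2.compat.FilterCompat;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.schema.MessageType;

import static org.apache.parquet.filter2.compat.RowGroupFilter.filterRowGroups;

// Hypothetical helper mirroring the PR's workaround, with the suggested TODO.
static List<BlockMetaData> safeFilterRowGroups(
    FilterCompat.Filter filter, ParquetMetadata footer, MessageType fileSchema) {
  try {
    return filterRowGroups(filter, footer.getBlocks(), fileSchema);
  } catch (IllegalArgumentException e) {
    // TODO Remove this hack after upgrading to parquet-mr 1.9 (PARQUET-389).
    // A file may lack the column(s) referenced by the filter; skip row-group
    // filtering here and rely on the predicates being applied again later.
    return footer.getBlocks();
  }
}
```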