Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90982394
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java ---
@@ -107,7 +107,16 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
      footer = readFooter(configuration, file, range(split.getStart(), split.getEnd()));
MessageType fileSchema = footer.getFileMetaData().getSchema();
FilterCompat.Filter filter = getFilter(configuration);
- blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
+      try {
+        blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
+      } catch (IllegalArgumentException e) {
+        // In the case where a particular parquet file does not contain
+        // the column(s) in the filter, we don't do filtering at this level.
+        // PARQUET-389 will resolve this issue in Parquet 1.9, which may be used
+        // by future Spark versions. This is a workaround for the current Spark version.
+        // Also, the assumption here is that the predicates will be applied later.
--- End diff --
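
For reference, once the truncated catch body above is filled in, the complete workaround presumably reads roughly as follows (a sketch only, inferred from the inline comment that filtering is skipped at this level and the predicates are re-applied later):

    try {
      blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
    } catch (IllegalArgumentException e) {
      // The filter references column(s) missing from this file's schema:
      // skip row-group filtering here and keep all row groups; the predicate
      // is assumed to be evaluated again on the rows returned later.
      blocks = footer.getBlocks();
    }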
@liancheng I just wonder if it is safe to rely on exception handling here, since reading multiple Parquet files with a merged schema might be a common case. To my understanding, we already know the schema of the target parquet file from its footer and the column(s) referenced by the filter. Could we check this upfront and simply use an if-else condition instead, roughly as in the sketch below?
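
Something along these lines is what I have in mind. This is only a sketch: it assumes org.apache.parquet.filter2.compat.FilterCompat and org.apache.parquet.filter2.predicate.FilterPredicate are imported in SpecificParquetRecordReaderBase, and referencedColumns(...) is a hypothetical helper (e.g. a small FilterPredicate.Visitor) that returns the column paths a predicate touches; parquet-mr does not expose this directly.

    FilterCompat.Filter filter = getFilter(configuration);
    if (filter instanceof FilterCompat.FilterPredicateCompat) {
      FilterPredicate predicate =
          ((FilterCompat.FilterPredicateCompat) filter).getFilterPredicate();
      // Check that every column referenced by the pushed-down predicate
      // actually exists in this file's schema before filtering row groups.
      boolean allColumnsPresent = true;
      for (String[] path : referencedColumns(predicate)) {  // hypothetical helper
        if (!fileSchema.containsPath(path)) {
          allColumnsPresent = false;
          break;
        }
      }
      blocks = allColumnsPresent
          ? filterRowGroups(filter, footer.getBlocks(), fileSchema)
          : footer.getBlocks();  // missing column(s): skip row-group filtering
    } else {
      blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
    }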