[ https://issues.apache.org/jira/browse/DRILL-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15127743#comment-15127743 ]
ASF GitHub Bot commented on DRILL-4287:
---------------------------------------
Github user amansinha100 commented on a diff in the pull request:
https://github.com/apache/drill/pull/345#discussion_r51528494
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetGroupScan.java ---
@@ -338,8 +354,14 @@ private boolean hasSingleValue(ColumnMetadata columnChunkMetaData) {
     return (columnChunkMetaData != null) && (columnChunkMetaData.hasSingleValue());
   }

+  @Override
--- End diff ---
Yes, I suppose you encountered a failure in some tests due to this...
> Do lazy reading of parquet metadata cache file
> ----------------------------------------------
>
> Key: DRILL-4287
> URL: https://issues.apache.org/jira/browse/DRILL-4287
> Project: Apache Drill
> Issue Type: Bug
> Components: Query Planning & Optimization
> Affects Versions: 1.4.0
> Reporter: Aman Sinha
> Assignee: Jinfeng Ni
>
> Currently, the parquet metadata cache file is read eagerly during creation of
> the DrillTable (as part of ParquetFormatMatcher.isReadable()). This is not
> desirable from a performance standpoint, since there are scenarios where we want
> to apply some up-front optimizations first - e.g. directory-based partition
> pruning (see DRILL-2517) or a potential LIMIT 0 optimization - and in such
> situations it is better to read the metadata cache file lazily.
> This issue is a placeholder for implementing such delayed reading, which is
> needed for the aforementioned optimizations.
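
To make the intent concrete, here is a minimal, purely illustrative sketch of deferring the read. The class, field, and method names below (LazyParquetMetadataCache, CachedMetadata, get()) are hypothetical and are not Drill's actual API: the idea is that the format matcher would only record the cache file path cheaply, and the expensive parse would happen on first access, after pruning has had a chance to run.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Illustrative sketch only: names are hypothetical, not Drill's actual classes.
    final class LazyParquetMetadataCache {

      // Stand-in for Drill's parsed representation of the metadata cache file.
      static final class CachedMetadata {
        final String rawJson;
        CachedMetadata(String rawJson) { this.rawJson = rawJson; }
      }

      private final Path cacheFilePath;          // recorded cheaply when the table is matched
      private volatile CachedMetadata metadata;  // parsed only on first access

      LazyParquetMetadataCache(Path cacheFilePath) {
        this.cacheFilePath = cacheFilePath;
      }

      // Double-checked locking so the expensive parse happens at most once, and only
      // when the planner actually needs row-group level metadata (e.g. after
      // directory-based partition pruning has already narrowed the scan).
      CachedMetadata get() throws IOException {
        CachedMetadata local = metadata;
        if (local == null) {
          synchronized (this) {
            local = metadata;
            if (local == null) {
              local = new CachedMetadata(new String(Files.readAllBytes(cacheFilePath)));
              metadata = local;
            }
          }
        }
        return local;
      }
    }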
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)