[
https://issues.apache.org/jira/browse/DRILL-6331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16449483#comment-16449483
]
ASF GitHub Bot commented on DRILL-6331:
---------------------------------------
Github user vdiravka commented on a diff in the pull request:
https://github.com/apache/drill/pull/1214#discussion_r183633623
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/easy/EasyFormatPlugin.java
---
@@ -147,10 +147,12 @@ CloseableRecordBatch getReaderBatch(FragmentContext
context, EasySubScan scan) t
List<RecordReader> readers = new LinkedList<>();
List<Map<String, String>> implicitColumns = Lists.newArrayList();
Map<String, String> mapWithMaxColumns = Maps.newLinkedHashMap();
+ boolean supportsFileImplicitColumns = scan.getSelectionRoot() != null;
for(FileWork work : scan.getWorkUnits()){
--- End diff ---
`for (`
> Parquet filter pushdown does not support the native hive reader
> ---------------------------------------------------------------
>
> Key: DRILL-6331
> URL: https://issues.apache.org/jira/browse/DRILL-6331
> Project: Apache Drill
> Issue Type: Improvement
> Components: Storage - Hive
> Affects Versions: 1.13.0
> Reporter: Arina Ielchiieva
> Assignee: Arina Ielchiieva
> Priority: Major
> Fix For: 1.14.0
>
>
> Initially HiveDrillNativeParquetGroupScan was based mainly on HiveScan; the
> core difference between them was
> that HiveDrillNativeParquetScanBatchCreator created a ParquetRecordReader
> instead of a HiveReader.
> This allowed reading Hive parquet files with Drill's native parquet reader,
> but did not expose Hive data to Drill optimizations such as filter push-down,
> limit push-down, and the count-to-direct-scan optimization.
> Hive code had to be refactored to use the same interfaces as
> ParquetGroupScan in order to be exposed to such optimizations.
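The refactoring idea above can be sketched in miniature: if the Hive-backed scan and the DFS-backed parquet scan share a common interface, a single planner rule (e.g. filter push-down) applies to both. This is a hypothetical, self-contained illustration; the class and method names below (`ParquetGroupScanLike`, `applyFilter`, `rowGroups`) are stand-ins, not Drill's actual APIs.

```java
import java.util.ArrayList;
import java.util.List;

public class PushdownSketch {

    // Common abstraction the planner's push-down rule targets.
    interface ParquetGroupScanLike {
        ParquetGroupScanLike applyFilter(String filterLiteral);
        List<String> rowGroups();
    }

    // Stand-in for a DFS-backed parquet group scan.
    static class DfsParquetScan implements ParquetGroupScanLike {
        final List<String> groups;
        DfsParquetScan(List<String> groups) { this.groups = groups; }

        public ParquetGroupScanLike applyFilter(String filterLiteral) {
            // Toy pruning: keep only row groups whose name matches the literal.
            List<String> kept = new ArrayList<>();
            for (String g : groups) {
                if (g.contains(filterLiteral)) {
                    kept.add(g);
                }
            }
            return new DfsParquetScan(kept);
        }

        public List<String> rowGroups() { return groups; }
    }

    // Stand-in for HiveDrillNativeParquetGroupScan: by sharing the same
    // interface, the identical push-down rule fires for Hive parquet data.
    static class HiveNativeParquetScan extends DfsParquetScan {
        HiveNativeParquetScan(List<String> groups) { super(groups); }
    }

    public static void main(String[] args) {
        ParquetGroupScanLike hiveScan =
            new HiveNativeParquetScan(List.of("rg-2018", "rg-2019"));
        ParquetGroupScanLike pruned = hiveScan.applyFilter("2019");
        System.out.println(pruned.rowGroups()); // [rg-2019]
    }
}
```

The point of the sketch is only the shape: once both scans implement one interface, optimizations written against that interface need no Hive-specific code path.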
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)