[
https://issues.apache.org/jira/browse/DRILL-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526935#comment-15526935
]
ASF GitHub Bot commented on DRILL-4905:
---------------------------------------
Github user jinfengni commented on a diff in the pull request:
https://github.com/apache/drill/pull/597#discussion_r80754698
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetScanBatchCreator.java
---
@@ -107,7 +107,7 @@ public ScanBatch getBatch(FragmentContext context, ParquetRowGroupScan rowGroupS
       if (!context.getOptions().getOption(ExecConstants.PARQUET_NEW_RECORD_READER).bool_val
           && !isComplex(footers.get(e.getPath()))) {
         readers.add(
             new ParquetRecordReader(
-                context, e.getPath(), e.getRowGroupIndex(), fs,
+                context, rowGroupScan.getBatchSize(), e.getPath(), e.getRowGroupIndex(), fs,
--- End diff ---
If this applies only to one type of Parquet reader, please document it in the
JIRA so that people are aware of it.
> Push down the LIMIT to the parquet reader scan to limit the numbers of
> records read
> -----------------------------------------------------------------------------------
>
> Key: DRILL-4905
> URL: https://issues.apache.org/jira/browse/DRILL-4905
> Project: Apache Drill
> Issue Type: Bug
> Components: Storage - Parquet
> Affects Versions: 1.8.0
> Reporter: Padma Penumarthy
> Assignee: Padma Penumarthy
> Fix For: 1.9.0
>
>
> Limit the number of records read from disk by pushing down the limit to
> parquet reader.
> For queries like
> select * from <table> limit N;
> where N < the size of a Parquet row group, we currently read 32K/64K rows or the
> entire row group. This should be optimized to read only N rows.
>
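The pushdown described above amounts to capping how many rows the record reader produces per batch and overall. A minimal sketch of that idea, in plain Java with hypothetical names (this is not Drill's actual reader API):

```java
// Illustrative sketch of LIMIT pushdown into a row-group reader.
// Class and method names are hypothetical, not Drill's API.
public class LimitPushdownSketch {

    /** A reader that stops producing rows once the pushed-down limit is met. */
    static class LimitedReader {
        private final int batchSize; // rows emitted per next() call (e.g. 32K)
        private int remaining;       // rows still needed to satisfy LIMIT N

        LimitedReader(int rowGroupSize, int batchSize, int limit) {
            this.batchSize = batchSize;
            // Never plan to read more rows than the row group holds.
            this.remaining = Math.min(limit, rowGroupSize);
        }

        /** Returns the number of rows produced by this batch; 0 means done. */
        int next() {
            int toRead = Math.min(batchSize, remaining);
            remaining -= toRead;
            return toRead;
        }
    }

    public static void main(String[] args) {
        // LIMIT 10 against a 100,000-row group with a 32K batch size:
        // only 10 rows are read instead of a full 32K batch.
        LimitedReader reader = new LimitedReader(100_000, 32_768, 10);
        int total = 0, n;
        while ((n = reader.next()) > 0) {
            total += n;
        }
        System.out.println(total); // prints 10
    }
}
```

Without the cap, the first `next()` call would materialize a full 32K-row batch; with it, the scan stops after exactly N rows.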
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)