[
https://issues.apache.org/jira/browse/DRILL-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15524754#comment-15524754
]
ASF GitHub Bot commented on DRILL-4905:
---------------------------------------
Github user jinfengni commented on a diff in the pull request:
https://github.com/apache/drill/pull/597#discussion_r80606955
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetGroupScan.java
---
@@ -115,6 +115,8 @@
private List<RowGroupInfo> rowGroupInfos;
private Metadata.ParquetTableMetadataBase parquetTableMetadata = null;
private String cacheFileRoot = null;
+ private int batchSize;
+ private static final int DEFAULT_BATCH_LENGTH = 256 * 1024;
--- End diff ---
Why do we have this DEFAULT_BATCH_LENGTH in addition to the new option
"store.parquet.record_batch_size"?
> Push down the LIMIT to the parquet reader scan to limit the numbers of
> records read
> -----------------------------------------------------------------------------------
>
> Key: DRILL-4905
> URL: https://issues.apache.org/jira/browse/DRILL-4905
> Project: Apache Drill
> Issue Type: Bug
> Components: Storage - Parquet
> Affects Versions: 1.8.0
> Reporter: Padma Penumarthy
> Assignee: Padma Penumarthy
> Fix For: 1.9.0
>
>
> Limit the number of records read from disk by pushing the limit down to
> the parquet reader.
> For queries like
> select * from <table> limit N;
> where N is smaller than the Parquet row group size, we currently read
> 32K/64K rows, or even the entire row group. This should be optimized to
> read only N rows (a sketch of the idea follows below).
>
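As a rough illustration of the requested optimization (hypothetical code, not
Drill's actual ParquetRecordReader), the sketch below caps each batch at the
remaining limit, so a LIMIT N query never materializes more than N rows even
when the row group holds far more:
{code:java}
// Hypothetical illustration of limit pushdown into a scan, not Drill's actual
// reader. Assumes rows are decoded in fixed-size batches.
public class LimitPushdownSketch {
    static final int BATCH_SIZE = 32 * 1024; // the 32K batch size mentioned above

    // Returns the number of rows read, never exceeding the pushed-down limit.
    static long scanWithLimit(long rowsInRowGroup, long limit) {
        long rowsRead = 0;
        while (rowsRead < rowsInRowGroup && rowsRead < limit) {
            // Cap the batch at whichever bound is nearer: end of data or the limit.
            long batch = Math.min(BATCH_SIZE,
                    Math.min(rowsInRowGroup - rowsRead, limit - rowsRead));
            rowsRead += batch; // a real reader would decode `batch` rows here
        }
        return rowsRead;
    }

    public static void main(String[] args) {
        // Without pushdown, a 1M-row group is read in full for LIMIT 10;
        // with the cap, only 10 rows are materialized.
        System.out.println(scanWithLimit(1_000_000, 10)); // 10
    }
}
{code}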
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)