[ https://issues.apache.org/jira/browse/DRILL-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533501#comment-15533501 ]
ASF GitHub Bot commented on DRILL-4905:
---------------------------------------
Github user jinfengni commented on a diff in the pull request:
https://github.com/apache/drill/pull/597#discussion_r81196052
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetGroupScan.java ---
@@ -926,16 +952,22 @@ public GroupScan applyLimit(long maxRecords) {
       fileNames.add(rowGroupInfo.getPath());
     }
-    if (fileNames.size() == fileSet.size() ) {
+    // If there is no change in fileSet and maxRecords is >= batchSize, no need to create new groupScan.
+    if (fileNames.size() == fileSet.size() && (maxRecords >= recommendedBatchSize) ) {
       // There is no reduction of rowGroups. Return the original groupScan.
       logger.debug("applyLimit() does not apply!");
       return null;
     }
+    // If limit maxRecords is less than batchSize, update batchSize to the limit size.
+    if (maxRecords < recommendedBatchSize) {
+      recommendedBatchSize = (int) maxRecords;
+    }
+
     try {
       FileSelection newSelection = new FileSelection(null, Lists.newArrayList(fileNames), getSelectionRoot(), cacheFileRoot, false);
       logger.debug("applyLimit() reduce parquet file # from {} to {}", fileSet.size(), fileNames.size());
-      return this.clone(newSelection);
+      return this.clone(newSelection, recommendedBatchSize);
--- End diff ---
I feel that when the file selection is unchanged and maxRecords < recommendedBatchSize, we do not have to re-create a new ParquetGroupScan. In that case, all we need is to reset the batch size. Re-creating a ParquetGroupScan with the same file selection would incur the overhead of reading the Parquet metadata again.
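For illustration only, a minimal sketch of how applyLimit() might look if this suggestion were applied. It reuses the names visible in the diff above (fileNames, fileSet, recommendedBatchSize, getSelectionRoot(), cacheFileRoot, the clone(FileSelection, int) overload); the rowGroupInfos field and the catch clause are assumptions, and whether returning null is enough for the planner to pick up the smaller batch size is left open here:

  // Sketch only, not the actual patch: skip the clone when the file selection did not change.
  public GroupScan applyLimit(long maxRecords) {
    Set<String> fileNames = Sets.newHashSet();
    for (RowGroupInfo rowGroupInfo : rowGroupInfos) {   // rowGroupInfos: assumed field name
      fileNames.add(rowGroupInfo.getPath());
    }
    boolean sameFileSelection = fileNames.size() == fileSet.size();

    if (sameFileSelection && maxRecords >= recommendedBatchSize) {
      logger.debug("applyLimit() does not apply!");
      return null;                                      // nothing to change at all
    }

    if (maxRecords < recommendedBatchSize) {
      recommendedBatchSize = (int) maxRecords;          // cap the batch size at the limit
    }

    if (sameFileSelection) {
      // Only the batch size changed; avoid this.clone(newSelection, ...), which
      // would re-read the Parquet metadata for the same set of files.
      return null;
    }

    try {
      FileSelection newSelection = new FileSelection(null,
          Lists.newArrayList(fileNames), getSelectionRoot(), cacheFileRoot, false);
      logger.debug("applyLimit() reduce parquet file # from {} to {}",
          fileSet.size(), fileNames.size());
      return this.clone(newSelection, recommendedBatchSize);
    } catch (IOException e) {                           // assumed error handling, not from the diff
      logger.warn("Could not apply the limit: {}", e);
      return null;
    }
  }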
> Push down the LIMIT to the parquet reader scan to limit the number of
> records read
> -----------------------------------------------------------------------------------
>
> Key: DRILL-4905
> URL: https://issues.apache.org/jira/browse/DRILL-4905
> Project: Apache Drill
> Issue Type: Bug
> Components: Storage - Parquet
> Affects Versions: 1.8.0
> Reporter: Padma Penumarthy
> Assignee: Padma Penumarthy
> Fix For: 1.9.0
>
>
> Limit the number of records read from disk by pushing the limit down to the
> parquet reader.
> For queries like
> select * from <table> limit N;
> where N is smaller than the Parquet row group size, we currently read 32K/64K
> rows or the entire row group. This needs to be optimized to read only N rows.
>
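As an illustration of the reader-side effect described above (none of the names below are from the actual patch; next(), readBatch() and rowsRemaining are hypothetical), capping each batch at the remaining limit could look roughly like this:

  // Hypothetical sketch: read at most N rows from the row group instead of the
  // default 32K/64K batch size.
  private long rowsRemaining;                   // initialized to the pushed-down limit N

  public int next() {
    int rowsToRead = (int) Math.min(recommendedBatchSize, rowsRemaining);
    if (rowsToRead == 0) {
      return 0;                                 // limit reached; stop reading this row group
    }
    int rowsRead = readBatch(rowsToRead);       // hypothetical helper that fills the value vectors
    rowsRemaining -= rowsRead;
    return rowsRead;
  }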
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)