[ https://issues.apache.org/jira/browse/DRILL-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15573624#comment-15573624 ]

ASF GitHub Bot commented on DRILL-4905:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/597#discussion_r83335163
  
    --- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/ParquetRecordReaderTest.java ---
    @@ -637,7 +637,7 @@ public void testPerformance(@Injectable final DrillbitContext bitContext,
         final FileSystem fs = new CachedSingleFileSystem(fileName);
         final BufferAllocator allocator = RootAllocatorFactory.newRoot(c);
         for(int i = 0; i < 25; i++) {
    -      final ParquetRecordReader rr = new ParquetRecordReader(context, 256000, fileName, 0, fs,
    +      final ParquetRecordReader rr = new ParquetRecordReader(context, 256000, -1, fileName, 0, fs,
    --- End diff --
    
    If omitting the "NumRecordsToRead" parameter means reading the full rowCount, we do not need to change this call here.
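
    A minimal sketch of the idea behind this comment, assuming (hypothetically) that a missing or negative "NumRecordsToRead" is treated as "read the whole row group". With such an overload, existing callers like the test in the diff would not need the extra -1 argument. Class and member names below are illustrative, not Drill's actual API.

    // Hypothetical sketch, not Drill's real ParquetRecordReader.
    public class ParquetRecordReaderSketch {

      // Sentinel mirroring the -1 in the diff: "no explicit limit".
      private static final long READ_ALL_ROWS = -1;

      private final long batchSize;
      private final long numRecordsToRead;

      // New constructor taking an explicit record limit.
      public ParquetRecordReaderSketch(long batchSize, long numRecordsToRead) {
        this.batchSize = batchSize;
        // A negative limit is interpreted as "read the full rowCount".
        this.numRecordsToRead = numRecordsToRead < 0 ? Long.MAX_VALUE : numRecordsToRead;
      }

      // Convenience overload: omitting the limit keeps the old behavior,
      // so pre-existing call sites would compile unchanged.
      public ParquetRecordReaderSketch(long batchSize) {
        this(batchSize, READ_ALL_ROWS);
      }
    }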


> Push down the LIMIT to the parquet reader scan to limit the number of 
> records read
> -----------------------------------------------------------------------------------
>
>                 Key: DRILL-4905
>                 URL: https://issues.apache.org/jira/browse/DRILL-4905
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Parquet
>    Affects Versions: 1.8.0
>            Reporter: Padma Penumarthy
>            Assignee: Padma Penumarthy
>             Fix For: 1.9.0
>
>
> Limit the number of records read from disk by pushing down the limit to the 
> parquet reader.
> For queries like
> select * from <table> limit N; 
> where N < the size of a Parquet row group, we currently read 32K/64K rows or the 
> entire row group. This needs to be optimized to read only N rows.
>  
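
To make the intended optimization concrete, here is a minimal sketch of a reader loop with a pushed-down LIMIT. Names such as nextBatchSize and recordsInRowGroup are assumptions for illustration, not Drill's actual scan internals: once the planner pushes LIMIT N into the scan, the reader stops after N records instead of materializing whole 32K/64K-row batches.

    // Hypothetical reader loop illustrating a pushed-down LIMIT; not Drill's real code.
    class LimitedParquetScanSketch {

      private final long recordsInRowGroup;   // rows available in the current row group
      private final long numRecordsToRead;    // N from "LIMIT N", pushed down by the planner
      private long recordsRead = 0;

      LimitedParquetScanSketch(long recordsInRowGroup, long numRecordsToRead) {
        this.recordsInRowGroup = recordsInRowGroup;
        this.numRecordsToRead = numRecordsToRead;
      }

      // Returns how many records the next batch should contain; 0 means stop reading.
      long nextBatchSize(long batchSize) {
        long remaining = Math.min(numRecordsToRead, recordsInRowGroup) - recordsRead;
        if (remaining <= 0) {
          return 0;  // limit (or row group) exhausted: no further disk reads
        }
        long toRead = Math.min(batchSize, remaining);
        recordsRead += toRead;
        return toRead;
      }
    }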


