[ https://issues.apache.org/jira/browse/DRILL-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15881390#comment-15881390 ]

ASF GitHub Bot commented on DRILL-5266:
---------------------------------------

Github user ppadma commented on a diff in the pull request:

    https://github.com/apache/drill/pull/749#discussion_r102827651
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/VarLenBinaryReader.java ---
    @@ -70,33 +87,21 @@ public long readFields(long recordsToReadInThisPass, ColumnReader<?> firstColumn
         return recordsReadInCurrentPass;
       }
     
    -
       private long determineSizesSerial(long recordsToReadInThisPass) throws IOException {
    -    int lengthVarFieldsInCurrentRecord = 0;
    -    boolean exitLengthDeterminingLoop = false;
    -    long totalVariableLengthData = 0;
    -    long recordsReadInCurrentPass = 0;
    -    do {
    +
    +    int recordsReadInCurrentPass = 0;
    +    top: do {
           for (VarLengthColumn<?> columnReader : columns) {
    -        if (!exitLengthDeterminingLoop) {
    -          exitLengthDeterminingLoop =
    -              columnReader.determineSize(recordsReadInCurrentPass, lengthVarFieldsInCurrentRecord);
    -        } else {
    -          break;
    +        // Return status is "done reading", meaning stop if true.
    +        if (columnReader.determineSize(recordsReadInCurrentPass, 0 /* unused */ )) {
    --- End diff --
    
    Why not remove the unused parameter?
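
    For illustration, the cleanup being suggested would look roughly like the
    sketch below. This assumes the second argument is genuinely dead at every
    call site, which the actual VarLengthColumn code would need to confirm;
    the signature shown is inferred from this diff, not copied from Drill.

    {code}
    // In VarLengthColumn<?>: drop the now-unused length parameter.
    // Returns true when the column is done reading (the "stop" signal).
    public boolean determineSize(long recordsReadInCurrentPass) throws IOException {
      // ... existing length-determining logic, unchanged ...
    }

    // Call site in VarLenBinaryReader.determineSizesSerial():
    if (columnReader.determineSize(recordsReadInCurrentPass)) {
      break top;
    }
    {code}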


> Parquet Reader produces "low density" record batches - bits vs. bytes
> ---------------------------------------------------------------------
>
>                 Key: DRILL-5266
>                 URL: https://issues.apache.org/jira/browse/DRILL-5266
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Parquet
>    Affects Versions: 1.10
>            Reporter: Paul Rogers
>            Assignee: Paul Rogers
>              Labels: ready-to-commit
>
> Testing with the managed sort revealed that, for at least one file,
> Parquet produces "low-density" batches: batches in which only 5% of each
> value vector contains actual data, with the rest being unused space. When
> fed into the sort, we end up buffering batches that are 95% wasted space,
> using only 5% of available memory to hold actual query data. The result
> is poor sort performance, as the sort must spill far more often than
> expected.
> The managed sort analyzes incoming batches to prepare good memory-use
> estimates. The following is the output for the Parquet file in question:
> {code}
> Actual batch schema & sizes {
>   T1¦¦cs_sold_date_sk(std col. size: 4, actual col. size: 4, total size: 196608, vector size: 131072, data size: 4516, row capacity: 32768, density: 4)
>   T1¦¦cs_sold_time_sk(std col. size: 4, actual col. size: 4, total size: 196608, vector size: 131072, data size: 4516, row capacity: 32768, density: 4)
>   T1¦¦cs_ship_date_sk(std col. size: 4, actual col. size: 4, total size: 196608, vector size: 131072, data size: 4516, row capacity: 32768, density: 4)
> ...
>   c_email_address(std col. size: 54, actual col. size: 27, total size: 53248, vector size: 49152, data size: 30327, row capacity: 4095, density: 62)
>   Records: 1129, Total size: 32006144, Row width: 28350, Density: 5}
> {code}
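> The per-column density above appears to be each column's data size as a
> percentage of its allocated vector memory, rounded up (a reading inferred
> from the numbers shown, not confirmed against Drill's code). A minimal
> sketch of that arithmetic, in illustrative Java:
> {code}
> // Hypothetical helper: density ~= ceil(100 * dataSize / vectorSize).
> static long densityPercent(long dataSize, long vectorSize) {
>   return (long) Math.ceil(100.0 * dataSize / vectorSize);
> }
> // densityPercent(4516, 131072) ->  4  (matches cs_sold_date_sk above)
> // densityPercent(30327, 49152) -> 62  (matches c_email_address above)
> {code}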



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
