[ https://issues.apache.org/jira/browse/DRILL-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627796#comment-15627796 ]

ASF GitHub Bot commented on DRILL-4706:
---------------------------------------

Github user sudheeshkatkam commented on a diff in the pull request:

    https://github.com/apache/drill/pull/639#discussion_r86074128
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetGroupScan.java ---
    @@ -822,10 +838,103 @@ private void getFiles(String path, List<FileStatus> fileStatuses) throws IOExcep
         }
       }
     
    +  /*
    +   * Figure out the best node to scan each of the rowGroups and update the preferredEndpoint.
    +   * Based on this, update the total work units assigned to the endpoint in the endpointAffinity.
    +   */
    +  private void computeRowGroupAssignment() {
    +    Map<DrillbitEndpoint, Integer> numEndpointAssignments = Maps.newHashMap();
    +    Map<DrillbitEndpoint, Long> numAssignedBytes = Maps.newHashMap();
    +
    +    // Do this for 2 iterations to adjust node assignments after first iteration.
    +    int numIterartions = 2;
    --- End diff --
    
    Iterartions -> iterations
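
The hunk under review sketches a two-pass, locality-aware assignment of row groups to Drillbit endpoints: pick the node holding the most local data for each row group, then rebalance so that no endpoint carries a disproportionate share of the bytes. Below is a minimal, self-contained sketch of that idea, assuming a hypothetical RowGroup type and a simple per-host byte budget; it is not the actual patch, which works on DrillbitEndpoint and EndpointAffinity objects.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /**
     * Illustrative greedy, locality-aware assignment (not Drill's implementation).
     * Each row group goes to the host holding the most of its bytes, unless that
     * host has already been assigned its fair share of the total scan.
     */
    public class LocalityAwareAssignment {

      /** Hypothetical row group: total size plus bytes resident on each host. */
      static final class RowGroup {
        final long totalBytes;
        final Map<String, Long> bytesPerHost; // host -> locally stored bytes

        RowGroup(long totalBytes, Map<String, Long> bytesPerHost) {
          this.totalBytes = totalBytes;
          this.bytesPerHost = bytesPerHost;
        }
      }

      static Map<RowGroup, String> assign(List<RowGroup> rowGroups, List<String> hosts) {
        long totalBytes = rowGroups.stream().mapToLong(rg -> rg.totalBytes).sum();
        // Fair share of bytes per host; used to keep the assignment balanced.
        long perHostBudget = (long) Math.ceil((double) totalBytes / hosts.size());

        Map<String, Long> assignedBytes = new HashMap<>();
        Map<RowGroup, String> assignment = new HashMap<>();

        for (RowGroup rg : rowGroups) {
          String best = null;
          long bestLocalBytes = -1;
          for (String host : hosts) {
            long local = rg.bytesPerHost.getOrDefault(host, 0L);
            long load = assignedBytes.getOrDefault(host, 0L);
            // Prefer the host with the most local bytes, skipping hosts at their budget.
            if (load + rg.totalBytes <= perHostBudget && local > bestLocalBytes) {
              best = host;
              bestLocalBytes = local;
            }
          }
          if (best == null) {
            // Every candidate is full: fall back to the least-loaded host (remote read).
            best = hosts.stream()
                .min((a, b) -> Long.compare(assignedBytes.getOrDefault(a, 0L),
                                            assignedBytes.getOrDefault(b, 0L)))
                .get();
          }
          assignment.put(rg, best);
          assignedBytes.merge(best, rg.totalBytes, Long::sum);
        }
        return assignment;
      }
    }

The per-host budget plays roughly the role of the second iteration mentioned in the patch's comment: without it, a greedy pass can pile most row groups onto a few data-heavy nodes; with it, the occasional row group is deliberately handed to a less local but less loaded endpoint.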


> Fragment planning causes Drillbits to read remote chunks when local copies 
> are available
> ----------------------------------------------------------------------------------------
>
>                 Key: DRILL-4706
>                 URL: https://issues.apache.org/jira/browse/DRILL-4706
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Query Planning & Optimization
>    Affects Versions: 1.6.0
>         Environment: CentOS, RHEL
>            Reporter: Kunal Khatua
>            Assignee: Sorabh Hamirwasia
>              Labels: performance, planning
>
> When a table (data size = 70 GB) of 160 Parquet files (each having a single 
> row group and fitting within one chunk) is available on a 10-node setup with 
> replication=3, a pure data-scan query causes about 2% of the data to be read 
> remotely.
> Even with a metadata cache created, the planner selects a sub-optimal plan for 
> executing the SCAN fragments, such that some of the data is served from a 
> remote server.
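
For context on where locality information comes from: the planner's affinity data is ultimately derived from HDFS block locations. The sketch below is illustrative only (it is not Drill's planner code) and uses the standard Hadoop FileSystem API to compute, for one file, how many of its bytes each host stores locally. With replication=3 and single-row-group files, every file has three candidate local hosts, so remote reads in this scenario point to an assignment problem rather than to missing replicas.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockAffinity {

      /** Returns host -> number of this file's bytes stored locally on that host. */
      static Map<String, Long> bytesPerHost(FileSystem fs, Path file) throws IOException {
        Map<String, Long> affinity = new HashMap<>();
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
          // Every replica host of a block is credited with that block's length.
          for (String host : block.getHosts()) {
            affinity.merge(host, block.getLength(), Long::sum);
          }
        }
        return affinity;
      }

      public static void main(String[] args) throws IOException {
        // The path below is a placeholder; point it at any file in the cluster.
        FileSystem fs = FileSystem.get(new Configuration());
        System.out.println(bytesPerHost(fs, new Path("/data/table/part-0.parquet")));
      }
    }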



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
