[ https://issues.apache.org/jira/browse/DRILL-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16088492#comment-16088492 ]

ASF GitHub Bot commented on DRILL-5616:
---------------------------------------

Github user paul-rogers commented on a diff in the pull request:

    https://github.com/apache/drill/pull/871#discussion_r127536980
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java ---
    @@ -401,7 +398,7 @@ private void delayedSetup() {
         } else { // two phase
          // Adjust down the number of partitions if needed - when the memory available can not hold as
          // many batches (configurable option), plus overhead (e.g. hash table, links, hash values))
    -      while ( numPartitions * ( estMaxBatchSize * minBatchesPerPartition + 8 * 1024 * 1024) > memAvail ) {
    +      while ( numPartitions * ( estMaxBatchSize * minBatchesPerPartition + 2 * 1024 * 1024) > memAvail ) {
    --- End diff --
    
    Why 2 MB? Should this be a constant, so that its name gives a hint about what the value represents?
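    The reviewer's suggestion could look roughly like the sketch below. The names `PER_PARTITION_OVERHEAD_BYTES` and `adjustPartitions` are illustrative, not the actual identifiers in `HashAggTemplate`, and the halving strategy is an assumption about how the loop reduces `numPartitions`:

```java
// Illustrative sketch only: names and the halving step are assumptions,
// not the actual Drill HashAggTemplate code.
public class PartitionSizing {

  // Named constant instead of the bare "2 * 1024 * 1024" in the diff;
  // covers per-partition overhead (hash table, links, hash values).
  static final long PER_PARTITION_OVERHEAD_BYTES = 2L * 1024 * 1024;

  /**
   * Reduce the number of partitions until the projected memory need
   * (batches plus per-partition overhead) fits in the available memory,
   * mirroring the loop condition shown in the diff.
   */
  static int adjustPartitions(int numPartitions, long estMaxBatchSize,
                              int minBatchesPerPartition, long memAvail) {
    while (numPartitions > 1
        && numPartitions * (estMaxBatchSize * minBatchesPerPartition
                            + PER_PARTITION_OVERHEAD_BYTES) > memAvail) {
      numPartitions /= 2; // assumed reduction step
    }
    return numPartitions;
  }
}
```

    With the numbers from the error message quoted below (estimated batch size ~12 MB, memory limit ~163 MB), starting from 8 partitions this loop would settle on 4.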


> Hash Agg Spill: OOM while reading irregular varchar data
> --------------------------------------------------------
>
>                 Key: DRILL-5616
>                 URL: https://issues.apache.org/jira/browse/DRILL-5616
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Relational Operators
>    Affects Versions: 1.11.0
>            Reporter: Boaz Ben-Zvi
>            Assignee: Boaz Ben-Zvi
>             Fix For: 1.11.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> An OOM while aggregating a table of two varchar columns whose sizes vary 
> significantly (about 8 bytes on average, but up to 250 bytes)
> {code}
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 327127360;
> select count( * ) from (select max(`filename`) from dfs.`/drill/testdata/hash-agg/data2` group by no_nulls_col, nulls_col) d;
> {code}
> {code}
> Error: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
> OOM at Second Phase. Partitions: 2. Estimated batch size: 12255232. Planned batches: 0. Rows spilled so far: 434127447 Memory limit: 163563680 so far allocated: 150601728.
> Fragment 1:0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)