[ https://issues.apache.org/jira/browse/IMPALA-11842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17742842#comment-17742842 ]

ASF subversion and git services commented on IMPALA-11842:
----------------------------------------------------------

Commit 9070895ed3b0ebb2506ddbf9d7bda9ffc1089bf6 in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=9070895ed ]

IMPALA-11842: Improve memory estimation for Aggregate

The planner often overestimates the aggregation node's memory since it
uses a simple multiplication of the NDVs of the contributing grouping
columns. This patch introduces new query options LARGE_AGG_MEM_THRESHOLD
and AGG_MEM_CORRELATION_FACTOR. If the estimated perInstanceDataBytes
from the NDV multiplication method exceeds LARGE_AGG_MEM_THRESHOLD,
perInstanceDataBytes is recomputed by comparing against the max(NDV) &
AGG_MEM_CORRELATION_FACTOR method.

perInstanceDataBytes is kept at a minimum of LARGE_AGG_MEM_THRESHOLD so
that a low max(NDV) will not negatively impact query execution. Unlike
PREAGG_BYTES_LIMIT, LARGE_AGG_MEM_THRESHOLD is evaluated on both the
preaggregation and the final aggregation, and it does not cap the max
memory reservation of the aggregation node (the node may still increase
its memory allocation beyond the estimate if memory is available).
However, if a plan node is a streaming preaggregation node and
PREAGG_BYTES_LIMIT is set, then PREAGG_BYTES_LIMIT overrides
LARGE_AGG_MEM_THRESHOLD as the threshold.
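
For illustration, the logic described above can be sketched as follows. This is a simplified reading of the commit message, not Impala's actual planner code: the function and variable names are hypothetical, and the linear interpolation between max(NDV) and the NDV product via AGG_MEM_CORRELATION_FACTOR is an assumption about how the correlation factor is applied.

{code:python}
def estimate_per_instance_data_bytes(
        grouping_ndvs,               # NDVs of the grouping columns
        bytes_per_row,               # estimated width of one aggregated row
        large_agg_mem_threshold,     # LARGE_AGG_MEM_THRESHOLD (bytes)
        agg_mem_correlation_factor,  # AGG_MEM_CORRELATION_FACTOR in [0, 1]
        preagg_bytes_limit=None,     # PREAGG_BYTES_LIMIT, if set
        is_streaming_preagg=False):
    # Baseline: simple multiplication of grouping-column NDVs, which
    # tends to overestimate when the columns are correlated.
    product_ndv = 1
    for ndv in grouping_ndvs:
        product_ndv *= ndv
    estimate = product_ndv * bytes_per_row

    # On a streaming preaggregation with PREAGG_BYTES_LIMIT set, that
    # limit overrides LARGE_AGG_MEM_THRESHOLD as the threshold.
    threshold = large_agg_mem_threshold
    if is_streaming_preagg and preagg_bytes_limit is not None:
        threshold = preagg_bytes_limit

    if estimate > threshold:
        # Alternative estimate anchored on max(NDV); the interpolation
        # toward the full NDV product is an assumed formula, for
        # illustration only.
        max_ndv = max(grouping_ndvs)
        alt_ndv = max_ndv + agg_mem_correlation_factor * (product_ndv - max_ndv)
        alt_estimate = int(alt_ndv * bytes_per_row)
        # Keep at least the threshold so a low max(NDV) cannot hurt
        # query execution, and never exceed the original estimate.
        estimate = max(threshold, min(estimate, alt_estimate))
    return estimate
{code}

For example, with two grouping columns of NDV 1000 each, 100-byte rows, a 1 MB threshold, and a correlation factor of 0.1, the 100 MB baseline estimate drops to roughly 10 MB; with the estimate below the threshold, the baseline is returned unchanged.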

Testing:
- Ran the patch on 10 nodes with MT_DOP=12 against TPC-DS 3TB scale.
  Among 103 queries, 20 have lower "Per-Host Resource Estimates", 11
  have lower "Cluster Memory Admitted", and 3 have over 10% reduced
  latency. No significant regression in query latency was observed.
- Passed core tests.

Change-Id: Ia4b4b2e519ee89f0a13fdb62d0471ee4047f6421
Reviewed-on: http://gerrit.cloudera.org:8080/20104
Reviewed-by: Impala Public Jenkins <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>


> Improve memory estimation for streaming aggregate operator
> ----------------------------------------------------------
>
>                 Key: IMPALA-11842
>                 URL: https://issues.apache.org/jira/browse/IMPALA-11842
>             Project: IMPALA
>          Issue Type: Bug
>            Reporter: Abhishek Rawat
>            Priority: Critical
>
> The streaming aggregate operator can overestimate peak memory, and as a 
> result Impala may request the maximum memory allowed by the admission 
> controller. This impacts query concurrency and causes unnecessary scaling.
> In the following profile snippet, the estimated peak memory (8.15 GB) is 
> roughly *20X* the actual peak memory (416.07 MB).
> {code:java}
>     Estimated Per-Host Mem: 247067273376
>     Request Pool: root.default
>     Per Host Min Memory Reservation: impala-executor-003-5:27010(1.34 GB) impala-executor-003-4:27010(1.35 GB) impala-executor-003-6:27010(1.34 GB) impala-executor-003-0:27010(1.35 GB) impala-executor-003-8:27010(1.35 GB) impala-executor-003-2:27010(1.34 GB) impala-executor-003-1:27010(1.35 GB) coordinator-0.coordinator-int.impala-dylan-impala.svc.cluster.local:27000(4.00 MB) impala-executor-003-3:27010(1.34 GB) impala-executor-003-9:27010(1.34 GB) impala-executor-003-7:27010(1.35 GB)
>     Per Host Number of Fragment Instances: impala-executor-003-5:27010(37) impala-executor-003-4:27010(38) impala-executor-003-6:27010(37) impala-executor-003-0:27010(38) impala-executor-003-8:27010(38) impala-executor-003-2:27010(37) impala-executor-003-1:27010(38) coordinator-0.coordinator-int.impala-dylan-impala.svc.cluster.local:27000(1) impala-executor-003-3:27010(37) impala-executor-003-9:27010(37) impala-executor-003-7:27010(38)
>     Latest admission queue reason: Not enough memory available on host impala-executor-003-5:27010. Needed 50.00 GB but only 33.06 GB out of 83.06 GB was available.
>     Admission result: Admitted (queued)
>     Initial admission queue reason: waited 83020 ms, reason: Not enough memory available on host impala-executor-003-5:27010. Needed 50.00 GB but only 33.06 GB out of 83.06 GB was available.
>     Cluster Memory Admitted: 500.10 GB
>     Executor Group: root.default-group-002
>     ExecSummary: 
> Operator                 #Hosts  #Inst   Avg Time   Max Time    #Rows  Est. #Rows   Peak Mem  Est. Peak Mem  Detail
> -----------------------------------------------------------------------------------------------------------------------------------------
> F04:ROOT                      1      1  146.192us  146.192us                          4.01 MB        4.00 MB
> 11:MERGING-EXCHANGE           1      1    3.297ms    3.297ms      100         100    1.88 MB      234.53 KB  UNPARTITIONED
> F03:EXCHANGE SENDER          10    120    4.196ms  374.168ms                          7.52 KB              0
> 05:TOP-N                     10    120   22.184ms    1s028ms   12.00K         100   16.00 KB        1.56 KB
> 10:AGGREGATE                 10    120       4m2s     12m14s  499.98K     487.66K    2.34 MB       10.00 MB  FINALIZE
> 09:EXCHANGE                  10    120    5s336ms    7s799ms   22.11B     487.66K   10.40 MB        3.09 MB  HASH(vendor_id)
> F02:EXCHANGE SENDER          10    120   28s199ms   48s974ms                          4.16 MB              0
> 04:AGGREGATE                 10    120      1m17s      1m34s   22.11B     487.66K   21.02 MB       10.00 MB  STREAMING
> 08:AGGREGATE                 10    120     12m29s     22m36s   50.00B      50.00B    3.85 GB       10.87 GB
> 07:EXCHANGE                  10    120   10s165ms   12s246ms   50.00B      50.00B   11.69 MB       12.34 MB  HASH(vendor_id,purchase_id)
> F00:EXCHANGE SENDER          10    120      1m28s      1m49s                          4.16 MB              0
> 03:AGGREGATE                 10    120      2m34s       5m5s   50.00B      50.00B  416.07 MB        8.15 GB  STREAMING
> 02:HASH JOIN                 10    120      1m17s      1m40s   50.00B      50.00B   46.12 KB              0  INNER JOIN, BROADCAST
> |--F05:JOIN BUILD            10     10  557.800ms  613.307ms                        408.02 MB      408.00 MB
> |  06:EXCHANGE               10     10   88.770ms  102.048ms    5.00M       5.00M   16.11 MB       10.10 MB  BROADCAST
> |  F01:EXCHANGE SENDER        5      5  190.673ms  212.904ms                         75.23 KB              0
> |  00:SCAN HDFS               5      5  774.428ms  965.865ms    5.00M       5.00M   12.92 MB       64.00 MB  tab.product
> 01:SCAN HDFS                 10    120      5m48s     13m14s   50.00B      50.00B   32.92 MB       88.00 MB  tab.pli {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
