[ 
https://issues.apache.org/jira/browse/IMPALA-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mala Chikka Kempanna updated IMPALA-7118:
-----------------------------------------
    Description: 
Similar to the per-node peak memory metric that already appears in the profile, shown below, we also need a metric listing the mem_limit set on each node in the profile.
{code:java}
Per Node Peak Memory Usage: x.y1.com:22000(36.14 GB) 

{code}
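For illustration, the new profile entry could look something like the following; the exact label and layout are hypothetical and up to the implementation, here simply mirroring the existing peak-memory line:
{code:java}
Per Node Memory Limit: x.y1.com:22000(50.00 GB)
{code}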
 

The background for this request:

We have seen performance issues reported across Impala releases where a cluster admin lowered the memory limits on all or a subset of the Impala daemons while also upgrading the version.

Query performance suffers if queries that previously ran entirely in memory start spilling because of the lower memory limit.

End users and application users then blame the upgraded version for the slowness when they share profiles for analysis, not knowing that the memory limits lowered by the cluster admins were the real cause.

Having this mem_limit metric in the profile will make it easy to spot the issue.

  was:
Similar to the per-node peak memory metric in the profile below, we need a metric listing the mem_limit set on each node.

{code}

Per Node Peak Memory Usage: p1ehowchp2d04.prudential.com:22000(36.14 GB) 
p1ehowchp2d07.prudential.com:22000(48.50 GB) 
p1ehowchp2d06.prudential.com:22000(44.67 GB) 
p1ehowchp2d02.prudential.com:22000(38.42 GB) 
p1ehowchp2d01.prudential.com:22000(43.19 GB) 
p1ehowchp2d08.prudential.com:22000(41.60 GB) 
p1ehowchp2d03.prudential.com:22000(49.72 GB) 
p1ehowchp2d05.prudential.com:22000(45.89 GB)

{code}

 

The background for this request:

We have seen performance issues reported across Impala releases where a cluster admin lowered the memory limits on all or a subset of the Impala daemons while also upgrading the version.

Query performance suffers if queries that previously ran entirely in memory start spilling because of the lower memory limit.

End users and application users then blame the upgraded version for the slowness when they share profiles for analysis, not knowing that the memory limits lowered by the cluster admins were the real cause.

Having this mem_limit metric in the profile will make it easy to spot the issue.


> Add mem_limit metric listing for all hosts in the profile
> ---------------------------------------------------------
>
>                 Key: IMPALA-7118
>                 URL: https://issues.apache.org/jira/browse/IMPALA-7118
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Perf Investigation
>            Reporter: Mala Chikka Kempanna
>            Priority: Major
>
> Similar to the per-node peak memory metric that already appears in the profile, shown below, we also need a metric listing the mem_limit set on each node in the profile.
> {code:java}
> Per Node Peak Memory Usage: x.y1.com:22000(36.14 GB) 
> {code}
>  
> The background for this request:
> We have seen performance issues reported across Impala releases where a cluster admin lowered the memory limits on all or a subset of the Impala daemons while also upgrading the version.
> Query performance suffers if queries that previously ran entirely in memory start spilling because of the lower memory limit.
> End users and application users then blame the upgraded version for the slowness when they share profiles for analysis, not knowing that the memory limits lowered by the cluster admins were the real cause.
> Having this mem_limit metric in the profile will make it easy to spot the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
