[ https://issues.apache.org/jira/browse/IMPALA-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152109#comment-17152109 ]

Tim Armstrong commented on IMPALA-9919:
---------------------------------------

The most likely explanation, given the large number of threads, is the thrift 
RPC stack causing problems. 
https://blog.cloudera.com/scalability-improvement-of-apache-impala-2-12-0-in-cdh-5-15-0/
talks a bit about the thread issue for context.
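
For illustration only (this is not Impala code), here is a minimal Python 
sketch of the generic thread-per-connection server pattern the old thrift 
stack used - each accepted connection pins a dedicated OS thread, so thread 
count and per-thread stack memory grow with the number of open peer 
connections:

    # Hypothetical sketch: one OS thread per accepted connection.
    import socketserver

    class EchoHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # Each handler runs on its own thread and blocks here for
            # the whole lifetime of the connection, even when idle.
            data = self.request.recv(4096)
            while data:
                self.request.sendall(data)
                data = self.request.recv(4096)

    if __name__ == "__main__":
        srv = socketserver.ThreadingTCPServer(("127.0.0.1", 9090), EchoHandler)
        srv.serve_forever()  # thread count scales with open connections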

IMPALA-2567 would probably fix this on its own, and the later scalability 
improvements (IMPALA-2990, IMPALA-7984) would help further. IMPALA-7239 could 
also be contributing.

Those symptoms would also be explained if your system is low on memory and 
swapping.
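
To rule that out, a quick check (a sketch assuming Linux /proc, which CentOS 
6.9 has) is to sample the kernel swap counters and see whether they climb 
while the cluster is slow:

    # Sketch: sample /proc/vmstat swap counters; a growing delta
    # between samples means the host is actively swapping.
    import time

    def swap_counters():
        with open("/proc/vmstat") as f:
            stats = dict(line.split() for line in f)
        return {k: int(stats[k]) for k in ("pswpin", "pswpout")}

    before = swap_counters()
    time.sleep(10)
    after = swap_counters()
    for key, start in before.items():
        print("%s: +%d pages over 10s" % (key, after[key] - start))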

I don't think there are any actions the Apache Impala project can take on this 
unless we have a lot more details or it reproduces on a later version.
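
If you can gather more details, one useful data point would be the daemon's 
thread count and resident memory over time - a rough sketch, assuming Linux 
and that you pass in the impalad pid:

    # Sketch: track impalad's thread count and RSS via /proc/<pid>/status.
    import sys
    import time

    def sample(pid):
        fields = {}
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                key, _, value = line.partition(":")
                fields[key] = value.strip()
        return fields["Threads"], fields["VmRSS"]

    if __name__ == "__main__":
        pid = sys.argv[1]  # e.g. from: pidof impalad
        while True:
            threads, rss = sample(pid)
            print("threads=%s rss=%s" % (threads, rss))
            time.sleep(60)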

> Bad Impala Performance after a period of time
> ---------------------------------------------
>
>                 Key: IMPALA-9919
>                 URL: https://issues.apache.org/jira/browse/IMPALA-9919
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Backend
>    Affects Versions: Impala 2.10.0
>         Environment: OS: CentOS 6.9
>            Reporter: Vagelis Nomikos
>            Priority: Major
>              Labels: performance
>
> Our cluster consists of about 60 Impala nodes. After a period of time, and 
> after executing some "heavy" queries, cluster performance degrades and 
> Impala eventually stops responding. We observed that, day after day, the 
> Impala resident memory and the machine's running thread count keep growing 
> even when we do not run queries. Every time we restart Impala, everything 
> works fine again for a while.


