[
https://issues.apache.org/jira/browse/IMPALA-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17205706#comment-17205706
]
ASF subversion and git services commented on IMPALA-10193:
----------------------------------------------------------
Commit a0a25a61c302d864315daa7f09827b37a37419d5 in impala's branch
refs/heads/master from fifteencai
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=a0a25a6 ]
IMPALA-10193: Limit the memory usage for the whole test cluster
This patch introduces a new way of limiting the memory usage of both
the mini-cluster and the CDH cluster.
Without this limit, clusters are prone to getting killed when running
in docker containers whose memory limit is lower than the host's
memory size. For example, the mini-cluster may be running in a
container limited to 32GB by cgroups, while the host machine has
128GB. Under these circumstances, if the container is started with
the '--privileged' flag, both the mini and CDH clusters compute their
mem_limit from 128GB rather than 32GB, and they get killed when they
try to allocate memory beyond the cgroup limit.
Currently, the mem-limit estimation algorithms for Impalad and the
Node Manager differ:
for Impalad: mem_limit = 0.7 * sys_mem / cluster_size (default
cluster_size is 3)
for the Node Manager:
1. Set aside 24GB, then fit the remainder into the thresholds below.
2. The minimum limit is 4GB and the maximum limit is 48GB.
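For reference, a rough sketch of the estimation described above
(assuming psutil is available; the function names are illustrative,
not the actual helpers in the Impala scripts):

    import psutil

    GB = 1024 ** 3

    def impalad_mem_limit_gb(cluster_size=3):
        # Impalad: 70% of system memory, split evenly across the cluster.
        sys_mem_gb = psutil.virtual_memory().total / float(GB)
        return 0.7 * sys_mem_gb / cluster_size

    def node_manager_mem_limit_gb():
        # Node Manager: set aside 24GB, clamp the remainder to [4GB, 48GB].
        sys_mem_gb = psutil.virtual_memory().total / float(GB)
        return min(max(sys_mem_gb - 24, 4), 48)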
To guard against over-consumption, we
- Added a new environment variable IMPALA_CLUSTER_MAX_MEM_GB.
- Modified the algorithm in 'bin/start-impala-cluster.py' so that it
takes IMPALA_CLUSTER_MAX_MEM_GB rather than sys_mem into account.
- Modified the logic in
'testdata/cluster/node_templates/common/etc/hadoop/conf/yarn-site.xml.py'
similarly, substituting IMPALA_CLUSTER_MAX_MEM_GB for sys_mem. A
sketch of the capping logic follows this list.
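A minimal sketch of the capping logic (the names are illustrative and
not necessarily the ones used in the patch; psutil is assumed to be
available):

    import os
    import psutil

    GB = 1024 ** 3

    def effective_sys_mem_gb():
        # Treat IMPALA_CLUSTER_MAX_MEM_GB as an upper bound on the detected
        # system memory, so a container whose cgroup limit is lower than
        # the host's memory size is not over-committed.
        sys_mem_gb = psutil.virtual_memory().total / float(GB)
        max_mem_gb = os.environ.get('IMPALA_CLUSTER_MAX_MEM_GB')
        if max_mem_gb:
            return min(sys_mem_gb, float(max_mem_gb))
        return sys_mem_gb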
Testing: this patch was verified in a 32GB docker container running
on a 128GB host machine. All 1188 unit tests passed.
Change-Id: I8537fd748e279d5a0e689872aeb4dbfd0c84dc93
Reviewed-on: http://gerrit.cloudera.org:8080/16522
Reviewed-by: Impala Public Jenkins <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>
> Limit the memory usage of the whole mini-cluster
> ------------------------------------------------
>
> Key: IMPALA-10193
> URL: https://issues.apache.org/jira/browse/IMPALA-10193
> Project: IMPALA
> Issue Type: Bug
> Components: Infrastructure
> Affects Versions: Impala 3.4.0
> Reporter: Fifteen
> Assignee: Fifteen
> Priority: Minor
> Attachments: image-2020-09-28-17-18-15-358.png
>
>
> The mini-cluster contains 3 virtual nodes, and all of them run on a single
> 'machine'. The quotes imply that the machine can be a docker container. If
> the container is started with `--privileged` and the actual memory is limited
> by cgroups, then the total memory shown by `htop` and the actual available
> memory can be different!
>
> For example, in the container below, `htop` reports a total memory of 128GB,
> while the total memory set in cgroups is actually 32GB. If the actual memory
> usage exceeds 32GB, processes (such as impalad, HiveServer2, etc.) get
> killed.
> !image-2020-09-28-17-18-15-358.png!
>
> So we may need a way to limit the whole mini-cluster's memory usage.
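> To see the mismatch from inside a container, one can compare the kernel's
> view with the cgroup limit (a minimal sketch, assuming a cgroup v1 memory
> controller; the path differs under cgroup v2):
>
>     GB = 1024.0 ** 3
>     # Total memory reported by the kernel (what htop shows).
>     with open('/proc/meminfo') as f:
>         mem_total_kb = int(f.readline().split()[1])
>     # Memory limit actually enforced by cgroups.
>     with open('/sys/fs/cgroup/memory/memory.limit_in_bytes') as f:
>         cgroup_limit_bytes = int(f.read())
>     print('kernel total: %.1f GB' % (mem_total_kb * 1024 / GB))
>     print('cgroup limit: %.1f GB' % (cgroup_limit_bytes / GB))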
>