Hi Naganarasimha,
                Thank you so much for your help.

1.       We are using Hadoop 2.5.0 and it's against the trunk code.

2.       It seems that we were not setting the properties you mentioned, and 
that is why the container was taking a larger CPU share than it was assigned. 
I have since tried to set up YARN to use cgroups (the cgroup-related 
yarn-site.xml properties I am referring to are sketched at the end of this 
mail), and while doing so I am facing the following issues.

3.       Do the container-executor binary and the container-executor.cfg file 
need root permission? With a non-root user it was throwing a permission denied 
exception, and with the root user I am getting an invalid 
container-executor.cfg file exception in the NodeManager log:
 Caused by: java.io.IOException: Linux container executor not configured 
properly (error=24)
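
Is the expected setup something like the following? (The paths below are from 
my install and may differ; "hadoop" is a placeholder for the group that runs 
the NodeManager.)

chown root:hadoop /opt/hadoop/bin/container-executor    # binary owned by root, group = NM group
chmod 6050 /opt/hadoop/bin/container-executor           # setuid/setgid so it can switch users
chown root:root /opt/hadoop/etc/hadoop/container-executor.cfg  # cfg (and the dirs above it) owned by root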

Following is my container-executor.cfg file:

yarn.nodemanager.linux-container-executor.group=<user name with which I start 
nodemanager daemon>
banned.users=#comma separated list of users who can not run applications
min.user.id=1000#Prevent other super-users
allowed.system.users=##comma separated list of system users who CAN run 
applications

Am I missing some configuration-related settings? Thanks again for writing.
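
For completeness, here is a minimal sketch of the cgroup-related properties I 
believe are needed in yarn-site.xml (written as name = value for brevity; the 
hierarchy, mount path and group are placeholders for my environment). Please 
correct me if any of these are wrong or missing:

yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
yarn.nodemanager.linux-container-executor.resources-handler.class = org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler
yarn.nodemanager.linux-container-executor.cgroups.hierarchy = /hadoop-yarn
yarn.nodemanager.linux-container-executor.cgroups.mount = true
yarn.nodemanager.linux-container-executor.cgroups.mount-path = /sys/fs/cgroup
yarn.nodemanager.linux-container-executor.group = hadoop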

Regards,
Smita

From: Naganarasimha G R (Naga) [mailto:[email protected]]
Sent: Wednesday, November 05, 2014 11:42 AM
To: [email protected]
Subject: RE: CPU usage of a container.

Hi Smita,
Can you please provide the following information:
1. Which version of Hadoop?
2. Is the Linux Container Executor with DRC and "CgroupsLCEResourcesHandler" 
configured?
3. If it's against the trunk code, have you configured 
"yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage", 
which is false by default?

In general, the CPU limit is not restrictive: cgroups tries to restrict a 
container's usage only when all the CPU cores are in use; otherwise the 
container is allowed to use the CPU while it is free.
Please refer to the comments from Chris Riccomini in
https://issues.apache.org/jira/browse/YARN-600, which give a rough idea of how 
CPU isolation can be validated, and also his blog
http://riccomini.name/posts/hadoop/2013-06-14-yarn-with-cgroups
which might help you in understanding cgroups and CPU isolation.
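
A quick way to see the fair-share weights on a node (assuming cgroups are 
mounted under /sys/fs/cgroup and YARN's hierarchy is /hadoop-yarn; adjust for 
your mount point):

cat /sys/fs/cgroup/cpu/hadoop-yarn/container_*/cpu.shares
# Each container's cpu.shares grows with its vcore ask; shares are only
# enforced under contention, which is why an otherwise idle node can show
# more than 100% CPU per vcore.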


After YARN-2531, 
"yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage" is 
supported, so if you are using the Hadoop trunk code you can restrict the CPU 
usage of a single container.
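
With strict resource usage enabled, my understanding is that a hard CFS 
bandwidth cap is also written per container, which you can inspect the same 
way:

cat /sys/fs/cgroup/cpu/hadoop-yarn/container_*/cpu.cfs_period_us
cat /sys/fs/cgroup/cpu/hadoop-yarn/container_*/cpu.cfs_quota_us
# quota/period is roughly the CPU fraction the container may use even on an
# idle machine; a quota of -1 means no hard cap (the non-strict default).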




Regards,

Naga



Mobile:  +91 9980040283
Email: [email protected]
Huawei Technologies Co., Ltd.
Bantian, Longgang District,Shenzhen 518129, P.R.China
http://www.huawei.com



________________________________
From: Smita Deshpande [[email protected]]
Sent: Wednesday, November 05, 2014 13:21
To: [email protected]<mailto:[email protected]>
Subject: CPU usage of a container.
Hi All,
                I am facing a somewhat weird issue in YARN. I am running a 
single container on a cluster whose CPU configuration is as follows:
                NODEMANAGER1 : 4 cpu cores
                NODEMANAGER2 : 4 cpu cores
                NODEMANAGER3 : 16 cpu cores
                All processors are hyperthreaded, so if I am using 1 CPU core 
then the maximum usage could be 200%.
                When I run different numbers of threads in that container 
(basically a CPU-intensive calculation), it shows CPU usage higher than the 
number of cores allotted to it. Please refer to the table below for the 
different test cases. The values highlighted in red seem to have exceeded the 
allotted usage. I am using DominantResourceCalculator in the CapacityScheduler.
                PFA the screenshot for the same.
                Any help would be appreciated.

Resource Ask   | %cpu Usage (from htop) | # of Threads launched in container
---------------+------------------------+------------------------------------
<1024,1>       | 176.8                  | 4
               | 108                    | 1
               | 177                    | 2
               | 291                    | 3
               | 342                    | 4
               | 337                    | 4  [container launched on NODEMANAGER3]
<1024,2>       | 177                    | 3
               | 182.6                  | 9
               | 336                    | 4  [container launched on NODEMANAGER3]
               | 189                    | 2  [container launched on NODEMANAGER2]
               | 291                    | 3
               | 337                    | 4
<1024,3>       | 283                    | 3
               | 329.7                  | 9
               | 343                    | 4  [container launched on NODEMANAGER3]
               | 122                    | 1
               | 216                    | 2
               | 290                    | 3
<1024,4>       | 289                    | 3
               | 123                    | 1
               | 217                    | 2
               | 292                    | 3
               | 338                    | 4
               | 177.3                  | 32

(Resource Ask is <memory in MB, vcores>.)
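
For reference, this is roughly how I sampled the per-container CPU (the 
container id below is a placeholder for the real one):

ps -ef | grep container_1415000000000_0001_01_000002   # find the container's pid
top -b -n 1 -p <pid>   # one-shot %CPU sample; htop reports roughly the same figure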


Regards,
Smita
