[
https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298192#comment-16298192
]
zhangshilong edited comment on YARN-7672 at 12/20/17 9:55 AM:
--------------------------------------------------------------
[~cxcw] I use two daemons deployed on two different hosts.
I start 1000~5000 threads to simulate the NMs/AMs, because I need to simulate 10000
apps running on 10000 NM nodes.
One task uses 1 vcore and 2304 MB, and one NM has 50 vcores and 50*2304 MB of
resources.
The NM and AM simulators are all CPU-bound tasks, so cpu.load goes up to 100+
(with only 32 cores). And, as we know, the scheduler also uses a process for
allocating resources.
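For reference, a minimal sketch (not part of the patch) of the scale those figures imply, using the stock YARN Resource record; the class name and the printed summary are just for illustration:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

public class SimulatedClusterScale {
  public static void main(String[] args) {
    // Figures from the comment above: one task uses 1 vcore / 2304 MB,
    // one simulated NM offers 50 vcores / 50 * 2304 MB.
    Resource task = Resource.newInstance(2304, 1);
    Resource nm = Resource.newInstance(50 * 2304, 50);

    int nmCount = 10000;   // simulated NM nodes
    int appCount = 10000;  // simulated running apps

    long totalVcores = (long) nmCount * nm.getVirtualCores();
    long totalMemoryMb = (long) nmCount * nm.getMemorySize();
    int tasksPerNm = nm.getVirtualCores() / task.getVirtualCores();

    System.out.printf("%d apps on %d NMs: %d vcores, %d MB total%n",
        appCount, nmCount, totalVcores, totalMemoryMb);
    System.out.printf("up to %d concurrent 1-vcore/2304 MB tasks (%d per NM)%n",
        (long) nmCount * tasksPerNm, tasksPerNm);
  }
}
{code}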
> hadoop-sls can not simulate huge scale of YARN
> ----------------------------------------------
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: zhangshilong
> Assignee: zhangshilong
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster has scaled to nearly 10 thousand nodes, and we need to do
> scheduler pressure tests.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but cpu.load goes
> very high, to 100+. I think that will affect the performance evaluation of
> the scheduler.
> So I want to separate the scheduler from the simulator:
> I start a real RM. Then SLS registers nodes to the RM and submits apps to the
> RM using RM RPC.
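As a rough sketch of what "submit apps to RM using RM RPC" could look like from the client side, here is a submission against a standalone RM with the standard YarnClient API; the RM address, queue, application name, and empty AM launch context are placeholders, and the actual patch may drive the RM protocols more directly:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SubmitSimulatedApp {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    // Placeholder: point the client at the real (standalone) RM.
    conf.set(YarnConfiguration.RM_ADDRESS, "rm-host:8032");

    YarnClient client = YarnClient.createYarnClient();
    client.init(conf);
    client.start();
    try {
      YarnClientApplication app = client.createApplication();
      ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
      ctx.setApplicationName("sls-simulated-app");   // placeholder name
      ctx.setQueue("default");                       // placeholder queue
      // AM container sized like one simulated task: 1 vcore / 2304 MB.
      ctx.setResource(Resource.newInstance(2304, 1));
      // Empty launch context; a real simulator would launch (or fake) an AM here.
      ctx.setAMContainerSpec(
          ContainerLaunchContext.newInstance(null, null, null, null, null, null));

      System.out.println("submitted " + client.submitApplication(ctx));
    } finally {
      client.stop();
    }
  }
}
{code}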