Nishkam Ravi commented on YARN-2041:

Experiments were done with FIFO and Fair schedulers under two different 
settings: single-job and multi-job modes. 
In the single-job mode, one instance of a workload executes in the cluster at a 
time. In the multi-job mode, multiple instances (10 in this case) of the same 
job are spawned and execute in parallel on the cluster. This is done for six 
different workloads. Average run time is measured and reported.

A performance drop was observed with FIFO in both single-job and multi-job modes 
(at the time this JIRA was opened), and that has since been verified. The Fair 
scheduler performs fine even with increased container size. 

It seems fair to conclude that the FIFO scheduler was the problem. There is no 
problem co-locating MR2 and Spark jobs using the Fair scheduler. Will try the 
Capacity scheduler as well while we are at it. 
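For reference, both settings discussed here live in yarn-site.xml: the scheduler is selected via yarn.resourcemanager.scheduler.class, and the NodeManager container memory via yarn.nodemanager.resource.memory-mb. A minimal sketch of the Fair-scheduler configuration (the memory value shown is illustrative only, not one of the values tested in these experiments):

```xml
<!-- yarn-site.xml -->
<configuration>
  <!-- Use the Fair scheduler instead of the default -->
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <!-- Total memory the NodeManager offers to containers;
       the parameter whose value is under test in this JIRA.
       16384 is an illustrative placeholder, not an experimental setting. -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>16384</value>
  </property>
</configuration>
```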

> Hard to co-locate MR2 and Spark jobs on the same cluster in YARN
> ----------------------------------------------------------------
>                 Key: YARN-2041
>                 URL: https://issues.apache.org/jira/browse/YARN-2041
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>    Affects Versions: 2.3.0
>            Reporter: Nishkam Ravi
> Performance of MR2 jobs falls drastically as YARN config parameter 
> yarn.nodemanager.resource.memory-mb  is increased beyond a certain value. 
> Performance of Spark falls drastically as the value of 
> yarn.nodemanager.resource.memory-mb is decreased beyond a certain value for a 
> large data set.
> This makes it hard to co-locate MR2 and Spark jobs in YARN.
> The experiments are being conducted on a 6-node cluster. The following 
> workloads are being run: TeraGen, TeraSort, TeraValidate, WordCount, 
> ShuffleText and PageRank.
> Will add more details to this JIRA over time.

This message was sent by Atlassian JIRA