Hi,
I'm running my cluster on Hadoop 2.2.0 with the CapacityScheduler. All
my jobs are uberized and run across 2 queues: one queue takes the
majority of the capacity (90%) and the other takes 10%. What I found is
that in the small queue only one job runs at any given time. I tried
tweaking the properties below, but no luck so far. Could you guys shed
some light on this?
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>1.0</value>
<description>
Maximum percent of resources in the cluster which can be used to run
application masters i.e. controls number of concurrent running
applications.
</description>
</property>
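For what it's worth, here is how I understand that description (a rough sketch of the per-queue AM limit, not the scheduler's actual code; the cluster size and AM container size below are made-up numbers, and real scheduling also rounds to the minimum allocation):

```python
# Rough sketch: how maximum-am-resource-percent caps concurrent apps in a
# queue. am_percent * queue share of cluster memory bounds the total memory
# available to ApplicationMasters, so it bounds how many apps run at once.
def max_concurrent_apps(cluster_mem_mb, queue_capacity, am_percent, am_container_mb):
    """Approximate count of ApplicationMasters a queue can run concurrently."""
    am_limit_mb = cluster_mem_mb * queue_capacity * am_percent
    return int(am_limit_mb // am_container_mb)

# Hypothetical 100 GB cluster, 10% queue, default am_percent of 0.1,
# 1.5 GB AM container: 102400 * 0.10 * 0.1 = 1024 MB of AM headroom,
# which is not even one full AM container.
print(max_concurrent_apps(102400, 0.10, 0.1, 1536))

# Raising am_percent to 1.0 on the same queue gives 10240 MB of AM
# headroom, i.e. room for several AMs.
print(max_concurrent_apps(102400, 0.10, 1.0, 1536))
```

Since my jobs are uberized, each AM is the whole job, so this limit would directly cap the number of jobs running in the small queue.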
<property>
<name>yarn.scheduler.capacity.root.queues</name>
<value>default,small</value>
<description>
The queues at this level (root is the root queue).
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.small.maximum-am-resource-percent</name>
<value>1.0</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.small.user-limit-factor</name>
<value>1</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.capacity</name>
<value>88</value>
<description>Default queue target capacity.</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.small.capacity</name>
<value>12</value>
<description>Small queue target capacity.</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
<value>88</value>
<description>
The maximum capacity of the default queue.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.small.maximum-capacity</name>
<value>12</value>
<description>Maximum queue capacity.</description>
</property>
Thanks
--
--Anfernee