Hi all,

I'm using Hadoop 2.6.0 after upgrading from Hadoop 2.2.0. Previously, I
didn't mess with any vcore settings as we weren't doing anything special
with containers and it seemed happy enough to only consider memory as a
contended resource. This is consistent with the behavior described in the
documentation for both releases:

```
minResources: minimum resources the queue is entitled to, in the form "X
mb, Y vcores". For the single-resource fairness policy, the vcores value is
ignored.
```

This is from
http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html

Unfortunately, after moving to 2.6.0 the default behavior actually does
consider vcores as a contended resource, and I can't figure out how to tell
it not to.

I was looking in FairScheduler.java and saw that some refactoring had
happened; one thing that stands out is:

```
  @Override
  public EnumSet<SchedulerResourceTypes> getSchedulingResourceTypes() {
    return EnumSet
      .of(SchedulerResourceTypes.MEMORY, SchedulerResourceTypes.CPU);
  }
```

So, I'm wondering the following:

1- How can I enable the single-resource fairness policy?
2- If this is a bug and/or feature drift, is it possible to trick the fair
scheduler into behaving as if memory is the only contended resource by
setting the vcores insanely high (say, to Integer.MAX_VALUE) and configuring
the memory as I had previously done?
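
To be concrete about question 2, I mean something like the following in
yarn-site.xml (property names are from the YARN docs; the values are just
placeholders meant to make vcores so plentiful they're never the scarce
resource):

```xml
<!-- Sketch of the workaround in question 2: advertise an absurdly
     large vcore capacity per NodeManager and allow containers to
     request up to that many, so memory stays the binding constraint.
     Values here are placeholders, not a recommendation. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>1000000</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>1000000</value>
</property>
```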

Thanks,
Bill