Hi,

I am trying to use the fair scheduler pools
(http://spark.apache.org/docs/latest/job-scheduling.html#fair-scheduler-pools)
to schedule two jobs at the same time.

In my simple example, I have configured Spark in local mode with 2 cores
("local[2]"). I have also configured two pools in fairscheduler.xml, each
with "minShare = 1". With this configuration, I would expect each pool to
get at least one core, so the two jobs should run at the same time. However,
after running some simple experiments and looking at the Spark UI, it
doesn't seem like this is the case.
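
One check that should at least confirm the properties are being set (just a
sketch, reusing the sc and the two threads from the code below):

println(sc.getConf.get("spark.scheduler.mode"))       // expect "FAIR"
// and inside each thread, right after setLocalProperty:
println(sc.getLocalProperty("spark.scheduler.pool"))  // "pool1" / "pool2"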

Is my understanding incorrect? If not, am I configuring things wrong? I have
copied my code and xml below.

Thanks,
Nick


code:

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("Test")
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", "/etc/tercel/fairscheduler.xml")
val sc = new SparkContext(conf)

val input = sc.parallelize(1 to 10)

// Submit the first job from its own thread; the pool property is
// per-thread, so it has to be set inside the submitting thread.
new Thread(new Runnable() {
  override def run(): Unit = {
    sc.setLocalProperty("spark.scheduler.pool", "pool1")
    val output1 = input.map { x => Thread.sleep(1000); x }
    output1.count()
  }
}).start()

// Submit an identical second job from another thread, into pool2.
new Thread(new Runnable() {
  override def run(): Unit = {
    sc.setLocalProperty("spark.scheduler.pool", "pool2")
    val output2 = input.map { x => Thread.sleep(1000); x }
    output2.count()
  }
}).start()
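
For completeness, here is a variant that waits on both jobs and prints how
long each took, which makes any overlap (or lack of it) easier to see than
the UI alone. It is just a sketch reusing sc and input from above, and it
assumes the default 2 partitions that local[2] gives parallelize:

def runJob(pool: String): Thread = {
  val t = new Thread(new Runnable() {
    override def run(): Unit = {
      // the pool property is per-thread, so set it in the submitting thread
      sc.setLocalProperty("spark.scheduler.pool", pool)
      val t0 = System.currentTimeMillis()
      input.map { x => Thread.sleep(1000); x }.count()
      println(s"$pool finished in ${System.currentTimeMillis() - t0} ms")
    }
  })
  t.start()
  t
}

Seq(runJob("pool1"), runJob("pool2")).foreach(_.join())
// Each job is 2 tasks of ~5 s. With one core per pool, both jobs should
// report roughly ~10 s. If one job grabs both cores first (FIFO-like
// behavior), it should report ~5 s and the other ~10 s.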


fairscheduler.xml:

<?xml version="1.0"?>
<allocations>
  <pool name="pool1">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>1</minShare>
  </pool>
  <pool name="pool2">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>1</minShare>
  </pool>
</allocations>
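
One thing worth ruling out is a pool-name mismatch: per the job-scheduling
docs linked above, setting spark.scheduler.pool to a name that is not in
the XML does not fail; Spark just creates that pool with default settings
(FIFO scheduling, weight 1, minShare 0). A quick echo of what the context
was handed (a sketch; this only confirms the setting, not that the file was
found and parsed):

println(sc.getConf.get("spark.scheduler.allocation.file"))
// expect "/etc/tercel/fairscheduler.xml"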



