[ https://issues.apache.org/jira/browse/SPARK-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen updated SPARK-6325:
-----------------------------
Assignee: Marcelo Vanzin
> YarnAllocator crash with dynamic allocation on
> ----------------------------------------------
>
> Key: SPARK-6325
> URL: https://issues.apache.org/jira/browse/SPARK-6325
> Project: Spark
> Issue Type: Bug
> Components: Spark Core, YARN
> Affects Versions: 1.3.0
> Reporter: Marcelo Vanzin
> Assignee: Marcelo Vanzin
> Priority: Critical
> Fix For: 1.4.0, 1.3.1
>
>
> Run spark-shell like this:
> {noformat}
> spark-shell --conf spark.shuffle.service.enabled=true \
>     --conf spark.dynamicAllocation.enabled=true \
>     --conf spark.dynamicAllocation.minExecutors=1 \
>     --conf spark.dynamicAllocation.maxExecutors=20 \
>     --conf spark.dynamicAllocation.executorIdleTimeout=10 \
>     --conf spark.dynamicAllocation.schedulerBacklogTimeout=5 \
>     --conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=5
> {noformat}
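> For reference, the same settings can also be applied programmatically when
> building the context in a standalone app (a minimal sketch; the config keys
> match the command line above, the app name is just a placeholder):
> {code}
> import org.apache.spark.{SparkConf, SparkContext}
>
> // Equivalent dynamic allocation settings, set on a SparkConf instead of
> // on the spark-shell command line.
> val conf = new SparkConf()
>   .setAppName("DynAllocRepro")
>   .set("spark.shuffle.service.enabled", "true")
>   .set("spark.dynamicAllocation.enabled", "true")
>   .set("spark.dynamicAllocation.minExecutors", "1")
>   .set("spark.dynamicAllocation.maxExecutors", "20")
>   .set("spark.dynamicAllocation.executorIdleTimeout", "10")
>   .set("spark.dynamicAllocation.schedulerBacklogTimeout", "5")
>   .set("spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", "5")
> val sc = new SparkContext(conf)
> {code}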
> Then run this simple test:
> {code}
> scala> val verySmallRdd = sc.parallelize(1 to 10, 10).map { i =>
> | if (i % 2 == 0) { Thread.sleep(30 * 1000); i } else 0
> | }
> verySmallRdd: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at map at <console>:21
> scala> verySmallRdd.collect()
> {code}
> When Spark starts ramping down the number of allocated executors, it will hit
> an assert in YarnAllocator.scala:
> {code}
> assert(targetNumExecutors >= 0, "Allocator killed more executors than are allocated!")
> {code}
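> To illustrate why the assert can fire: the allocator decrements a target
> count for every kill request, so if idle-timeout kills race ahead of the
> allocator's view of what is actually allocated, the counter can go below
> zero. A standalone toy model of that bookkeeping (simplified, made-up names;
> not the real YarnAllocator code):
> {code}
> // `targetNumExecutors` mirrors the field the assert checks.
> class ToyAllocator(var targetNumExecutors: Int) {
>   def killExecutor(): Unit = {
>     targetNumExecutors -= 1
>     // Enough kill requests without matching allocations drive the
>     // counter negative, tripping the same assert as in the report:
>     assert(targetNumExecutors >= 0,
>       "Allocator killed more executors than are allocated!")
>   }
> }
>
> val alloc = new ToyAllocator(targetNumExecutors = 1)
> alloc.killExecutor()   // fine: target goes 1 -> 0
> alloc.killExecutor()   // fires the assert: target would go to -1
> {code}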
> This assert will cause the Akka backend to die, but not the AM itself. So the
> app ends up in a zombie-like state, where the driver is alive but can't talk
> to the AM. Sadness ensues.
> I have a working fix; I just need to add unit tests. Stay tuned.
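> For anyone hitting this before the fix lands, one defensive pattern (a
> sketch only, not necessarily what the actual patch does) is to clamp the
> target at zero and log instead of asserting, so the backend survives:
> {code}
> // Hypothetical guard: clamp rather than crash on a stray kill request.
> def adjustTarget(targetNumExecutors: Int, killed: Int): Int = {
>   val newTarget = targetNumExecutors - killed
>   if (newTarget < 0) {
>     // Log and clamp instead of killing the Akka backend with an assert.
>     println(s"Ignoring kill beyond allocation: target would be $newTarget")
>     0
>   } else newTarget
> }
> {code}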
> Thanks to [~wypoon] for finding the problem, and for the test case.