Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17088#discussion_r106335566
--- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -394,6 +394,32 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with Timeou
     assertDataStructuresEmpty()
   }
+  test("All shuffle files should on the slave should be cleaned up when slave lost") {
+    // reset the test context with the right shuffle service config
+    afterEach()
+    val conf = new SparkConf()
+    conf.set("spark.shuffle.service.enabled", "true")
+    init(conf)
+    runEvent(ExecutorAdded("exec-hostA1", "hostA"))
+    runEvent(ExecutorAdded("exec-hostA2", "hostA"))
+    runEvent(ExecutorAdded("exec-hostB", "hostB"))
+    val shuffleMapRdd = new MyRDD(sc, 3, Nil)
+    val shuffleDep = new ShuffleDependency(shuffleMapRdd, new HashPartitioner(1))
+    val shuffleId = shuffleDep.shuffleId
+    val reduceRdd = new MyRDD(sc, 1, List(shuffleDep), tracker = mapOutputTracker)
+    submit(reduceRdd, Array(0))
+    complete(taskSets(0), Seq(
+      (Success, makeMapStatus("hostA", 1)),
+      (Success, makeMapStatus("hostA", 1)),
+      (Success, makeMapStatus("hostB", 1))))
+    scheduler.handleExecutorLost("exec-hostA1", fileLost = false, hostLost = true, Some("hostA"))
+    runEvent(ExecutorLost("exec-hostA1", SlaveLost("", true)))
+    val mapStatus = mapOutputTracker.mapStatuses.get(0).get.filter(_!= null)
--- End diff --
I think there are a couple of problems with this test.
* you are trying to change the behavior on a fetch failure, so really you should have tasks completing with a `FetchFailed`
* `makeMapStatus` is actually doing the wrong thing in this case, since it's expecting executor ids to be "exec-$host", but you've got a "1" or "2" appended to some of them (see the sketch of the helper after this list)
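For context, this is roughly what the suite's helpers look like (a paraphrase of the existing `DAGSchedulerSuite` companion object; treat the exact signatures as approximate):
```scala
// Approximate shape of the existing test helpers in DAGSchedulerSuite.
// Key detail: the executor id is derived from the host as "exec-$host", so
// statuses built this way can never claim to come from "exec-hostA1" or
// "exec-hostA2" -- they all report "exec-hostA".
def makeBlockManagerId(host: String): BlockManagerId =
  BlockManagerId("exec-" + host, host, 12345)

def makeMapStatus(host: String, reduces: Int): MapStatus =
  MapStatus(makeBlockManagerId(host), Array.fill[Long](reduces)(2))
```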
I think this is better:
```scala
submit(reduceRdd, Array(0))
// map stage completes successfully, with one task on each executor
complete(taskSets(0), Seq(
  (Success,
    MapStatus(BlockManagerId("exec-hostA1", "hostA", 12345), Array.fill[Long](1)(2))),
  (Success,
    MapStatus(BlockManagerId("exec-hostA2", "hostA", 12345), Array.fill[Long](1)(2))),
  (Success, makeMapStatus("hostB", 1))
))

// make sure our test setup is correct
val initialMapStatus = mapOutputTracker.mapStatuses.get(0).get
assert(initialMapStatus.count(_ != null) === 3)
assert(initialMapStatus.map{_.location.executorId}.toSet ===
  Set("exec-hostA1", "exec-hostA2", "exec-hostB"))

// reduce stage fails with a fetch failure from one host
complete(taskSets(1), Seq(
  (FetchFailed(BlockManagerId("exec-hostA2", "hostA", 12345), shuffleId, 0, 0, "ignored"),
    null)
))

// Here is the main assertion -- make sure that we de-register the map
// output from both executors on hostA
val mapStatus = mapOutputTracker.mapStatuses.get(0).get
assert(mapStatus.count(_ != null) === 1)
assert(mapStatus(2).location.executorId === "exec-hostB")
assert(mapStatus(2).location.host === "hostB")
```
This version fails until you reverse the if / else I pointed out in the DAGScheduler; roughly, the branch needs to look like the sketch below.
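(A sketch only -- the variable and method names here are my approximation of the PR's code, not verbatim:)
```scala
// Sketch of the host-lost vs. executor-lost branch in the DAGScheduler's
// executor-loss handling (names approximate). The point is that losing a
// whole host must take the host branch, not the single-executor branch.
hostToUnregisterOutputs match {
  case Some(host) =>
    // The whole node is gone (e.g. its external shuffle service died), so
    // drop map output from *every* executor that ran on that host.
    mapOutputTracker.removeOutputsOnHost(host)
  case None =>
    // Only one executor is gone; drop just that executor's map output.
    mapOutputTracker.removeOutputsOnExecutor(execId)
}
```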
It would also be nice if this included map output from multiple stages registered on the given host, so you could check that *all* output is deregistered, not just the one shuffleId which had an error. Something like the sketch below.
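(Again just a sketch, continuing the test above -- the second shuffle would have to be registered before the fetch failure is injected, and the taskSet index and partition counts are illustrative assumptions:)
```scala
// Hypothetical second shuffle whose map output also lands on hostA.
val shuffleMapRdd2 = new MyRDD(sc, 2, Nil)
val shuffleDep2 = new ShuffleDependency(shuffleMapRdd2, new HashPartitioner(1))
val reduceRdd2 = new MyRDD(sc, 1, List(shuffleDep2), tracker = mapOutputTracker)
submit(reduceRdd2, Array(0))
complete(taskSets(2), Seq(
  (Success,
    MapStatus(BlockManagerId("exec-hostA2", "hostA", 12345), Array.fill[Long](1)(2))),
  (Success, makeMapStatus("hostB", 1))
))

// ... then trigger the FetchFailed on the first shuffle as above, and check
// that the second shuffle's hostA output was deregistered as well:
val mapStatus2 = mapOutputTracker.mapStatuses.get(shuffleDep2.shuffleId).get
assert(mapStatus2.count(_ != null) === 1)
assert(mapStatus2(1).location.executorId === "exec-hostB")
```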