holdenk commented on a change in pull request #29001:
URL: https://github.com/apache/spark/pull/29001#discussion_r449958191
##########
File path: core/src/test/scala/org/apache/spark/scheduler/WorkerDecommissionExtendedSuite.scala
##########
@@ -32,17 +32,17 @@ import org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend
class WorkerDecommissionExtendedSuite extends SparkFunSuite with LocalSparkContext {
private val conf = new org.apache.spark.SparkConf()
.setAppName(getClass.getName)
- .set(SPARK_MASTER, "local-cluster[20,1,512]")
+ .set(SPARK_MASTER, "local-cluster[10,1,512]")
Review comment:
I think this test came from a situation where we were hitting a deadlock,
and we wanted the test to re-create the conditions for that potential
deadlock, which showed up when we decommissioned most of the executors.
That deadlock never made it into OSS Spark, but having the test here to
catch it just in case is good. I think we could catch the same deadlock
with 5 executors and decommissioning 4 of them, but @dongjoon-hyun is the
one who found this potential issue, so I'll let him clarify :)
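To make that concrete, here's a rough sketch of what the scaled-down
version could look like. Treat it as illustrative rather than a drop-in
patch: the suite name `WorkerDecommissionSmallClusterSuite` is hypothetical,
and it assumes the `decommissionExecutor(executorId)` call on the scheduler
backend and the `WORKER_DECOMMISSION_ENABLED` config as they exist around
the time of this PR (both have been evolving across branches):

```scala
package org.apache.spark.scheduler

import org.apache.spark.{LocalSparkContext, SparkConf, SparkContext, SparkFunSuite, TestUtils}
import org.apache.spark.LocalSparkContext.withSpark
import org.apache.spark.internal.config.SPARK_MASTER
import org.apache.spark.internal.config.Worker.WORKER_DECOMMISSION_ENABLED
import org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend

// Hypothetical scaled-down suite: 5 executors, decommission 4 of them.
// This keeps the "decommission most of the executors" shape of the original
// repro while using half the resources of local-cluster[10,1,512].
class WorkerDecommissionSmallClusterSuite extends SparkFunSuite with LocalSparkContext {
  private val conf = new SparkConf()
    .setAppName(getClass.getName)
    .set(SPARK_MASTER, "local-cluster[5,1,512]")
    .set(WORKER_DECOMMISSION_ENABLED, true)

  test("Decommission 4 executors from 5 executors in total") {
    sc = new SparkContext(conf)
    withSpark(sc) { sc =>
      // Wait until all 5 executors have registered with the driver.
      TestUtils.waitUntilExecutorsUp(sc, 5, 60000)

      // Run a shuffle so every executor holds some task/shuffle state.
      val rdd = sc.parallelize(1 to 100000, 200)
        .map(x => (x % 100, x))
        .reduceByKey(_ + _)
      assert(rdd.sortByKey().collect().length === 100)

      // Keep one executor alive and decommission the remaining 4, so the
      // cluster is left in the "most executors gone" state the original
      // deadlock needed.
      val sched = sc.schedulerBackend.asInstanceOf[StandaloneSchedulerBackend]
      sc.getExecutorIds().tail.foreach(sched.decommissionExecutor)
    }
  }
}
```

If 5/4 turns out not to exercise the contention reliably, going back up to
10/9 is a one-line change to the `local-cluster[...]` string.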