holdenk commented on a change in pull request #28331:
URL: https://github.com/apache/spark/pull/28331#discussion_r432110087



##########
File path: core/src/test/scala/org/apache/spark/storage/BlockManagerDecommissionSuite.scala
##########
@@ -69,36 +84,64 @@ class BlockManagerDecommissionSuite extends SparkFunSuite with LocalSparkContext
     })
 
     // Cache the RDD lazily
-    sleepyRdd.persist()
+    if (persist) {
+      testRdd.persist()
+    }
 
     // Start the computation of RDD - this step will also cache the RDD
-    val asyncCount = sleepyRdd.countAsync()
+    val asyncCount = testRdd.countAsync()
 
     // Wait for the job to have started
     sem.acquire(1)
 
+    // Give Spark a tiny bit to start the tasks after the listener says hello
+    Thread.sleep(100)

Review comment:
       I've added a wait for all of the executors to come up before starting the job, but I think this sleep is OK: we know it's shorter than the length of the job, and we're essentially trying to test what happens in the middle of a job. I can't think of a reasonable way to avoid this sleep.
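For reference, here is a minimal, self-contained sketch of the timing pattern described above. It is an illustration, not the suite's actual code: the local-cluster size, the timeouts, and the polling loop (standing in for the test-only helper the suite uses to wait for executors) are all assumptions.

    import java.util.concurrent.Semaphore

    import scala.concurrent.Await
    import scala.concurrent.duration._

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart}

    object MidJobTimingSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setMaster("local-cluster[3, 1, 1024]").setAppName("sketch"))
        try {
          // Wait for all three executors to register before submitting the job,
          // so the short sleep below only has to cover task launch rather than
          // executor startup. (A simple poll stands in for the test helper here;
          // getExecutorMemoryStatus also counts the driver, hence the + 1.)
          val deadline = System.currentTimeMillis() + 60000
          while (sc.getExecutorMemoryStatus.size < 3 + 1 &&
              System.currentTimeMillis() < deadline) {
            Thread.sleep(100)
          }

          // Release a permit once the scheduler reports the job has started.
          val sem = new Semaphore(0)
          sc.addSparkListener(new SparkListener {
            override def onJobStart(jobStart: SparkListenerJobStart): Unit =
              sem.release()
          })

          // Make each task slow enough that the job far outlasts the 100 ms sleep.
          val testRdd = sc.parallelize(1 to 10, 10).map { x => Thread.sleep(1000); x }
          val asyncCount = testRdd.countAsync()

          sem.acquire(1)    // the listener says the job has started...
          Thread.sleep(100) // ...now give the tasks a moment to actually launch

          // <trigger decommissioning here, while tasks are mid-flight>

          assert(Await.result(asyncCount, 60.seconds) == 10L)
        } finally {
          sc.stop()
        }
      }
    }

The point is that the 100 ms sleep is bounded by something the test controls: every task sleeps for a full second, so the job is guaranteed to still be running when the sleep returns, which is exactly the mid-job window the test needs.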



