Github user attilapiros commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20546#discussion_r167276875
  
    --- Diff: core/src/test/scala/org/apache/spark/DistributedSuite.scala ---
    @@ -160,10 +160,6 @@ class DistributedSuite extends SparkFunSuite with Matchers with LocalSparkContex
         val data = sc.parallelize(1 to 1000, 10)
         val cachedData = data.persist(storageLevel)
         assert(cachedData.count === 1000)
    -    assert(sc.getExecutorStorageStatus.map(_.rddBlocksById(cachedData.id).size).sum ===
    --- End diff --
    
    Using sc.statusStore here would also trigger the bug I mentioned above (the rddStorageInfo.numCachedPartitions difference), which is why I left this part untouched.
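
    For context, a minimal sketch of the two counting approaches being contrasted. This is not runnable on its own: it assumes the DistributedSuite test context from the diff above (a live `sc` and the persisted `cachedData`, with Spark's private[spark] members visible), and the reading that the difference shows up when the storage level replicates blocks is my interpretation, not stated in the comment.

    ```scala
    // Executor-level view (the deprecated API kept in the diff): sums RDD
    // blocks across all executors, so a replicated block is counted once
    // per replica.
    val blocksAcrossExecutors =
      sc.getExecutorStorageStatus.map(_.rddBlocksById(cachedData.id).size).sum

    // AppStatusStore view: numCachedPartitions counts each cached partition
    // once regardless of replication, so with a replicated storage level the
    // two sums can disagree -- plausibly the "difference" referred to above.
    val cachedPartitions = sc.statusStore
      .rddList()
      .find(_.id == cachedData.id)
      .map(_.numCachedPartitions)
      .getOrElse(0)
    ```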


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
