srowen commented on a change in pull request #23426: [SPARK-26527][CORE] Let 
acquireUnrollMemory fail fast if required space exceeds memory limit
URL: https://github.com/apache/spark/pull/23426#discussion_r245440605
 
 

 ##########
 File path: core/src/test/scala/org/apache/spark/storage/MemoryStoreSuite.scala
 ##########
 @@ -291,11 +291,11 @@ class MemoryStoreSuite
     blockInfoManager.removeBlock("b3")
     putIteratorAsBytes("b3", smallIterator, ClassTag.Any)
 
-    // Unroll huge block with not enough space. This should fail and kick out b2 in the process.
+    // Unroll huge block with not enough space. This should fail.
     val result4 = putIteratorAsBytes("b4", bigIterator, ClassTag.Any)
     assert(result4.isLeft) // unroll was unsuccessful
     assert(!memoryStore.contains("b1"))
-    assert(!memoryStore.contains("b2"))
+    assert(memoryStore.contains("b2"))
 
 Review comment:
   FWIW, while changing this test to use UnifiedMemoryManager for a different purpose, I also found that block b2 would not be evicted. I think the original assertion may not be correct, so this change looks OK to me.
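
The fail-fast idea under discussion can be sketched as follows. This is only a minimal illustration, not Spark's actual `MemoryStore` implementation; the method signature and the names `maxMemory`/`used` are assumptions made for the example:

```scala
// Minimal sketch (assumed names, not Spark's real API): if the space
// required to unroll a block exceeds the store's total capacity, fail
// immediately instead of evicting other blocks -- so a block like "b2"
// survives an oversized unroll attempt.
object UnrollSketch {
  def acquireUnrollMemory(required: Long, maxMemory: Long, used: Long): Boolean = {
    if (required > maxMemory) {
      // Fail fast: no amount of eviction could ever free enough space.
      false
    } else {
      // Simplified: succeed only if the currently free space suffices
      // (the real store may evict other blocks to make room).
      required <= maxMemory - used
    }
  }

  def main(args: Array[String]): Unit = {
    // Huge block larger than the whole store: fails without evicting anything.
    println(acquireUnrollMemory(required = 2000L, maxMemory = 1000L, used = 300L))
    // Small block that fits in the free space: succeeds.
    println(acquireUnrollMemory(required = 500L, maxMemory = 1000L, used = 300L))
  }
}
```

Under this sketch, the oversized request returns `false` before any eviction decision is made, which matches the test's expectation that `b2` remains in the store.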

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
