srowen commented on a change in pull request #23457: [SPARK-26539][CORE] Remove
spark.memory.useLegacyMode and StaticMemoryManager
URL: https://github.com/apache/spark/pull/23457#discussion_r245426245
##########
File path: core/src/test/scala/org/apache/spark/storage/MemoryStoreSuite.scala
##########
@@ -291,11 +290,11 @@ class MemoryStoreSuite
blockInfoManager.removeBlock("b3")
putIteratorAsBytes("b3", smallIterator, ClassTag.Any)
- // Unroll huge block with not enough space. This should fail and kick out b2 in the process.
+ // Unroll huge block with not enough space.
val result4 = putIteratorAsBytes("b4", bigIterator, ClassTag.Any)
assert(result4.isLeft) // unroll was unsuccessful
assert(!memoryStore.contains("b1"))
- assert(!memoryStore.contains("b2"))
+ assert(memoryStore.contains("b2")) // not necessarily evicted
Review comment:
This behavior changed after the change above, but I'm not sure the new
assertion is correct either. There are 4800 bytes available and b2 takes 400.
An attempt to put 4000 may fail (that was both the previous and the current
behavior), but I don't see why that necessarily requires evicting b2; the
allocation simply failed. We recently had a very similar discussion about a
different change that also flagged this assertion as an issue.
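To illustrate the reasoning above, here is a minimal toy model (not Spark's actual MemoryStore; the class and method names are invented for illustration) of an eviction policy where blocks are only evicted when evicting them could actually make the requested allocation fit. Under that policy, a put that is too large to ever succeed fails without evicting anything, which is why b2 surviving the failed put of b4 is plausible:

```scala
import scala.collection.mutable

// Toy model: evict oldest blocks only when doing so can make the put fit.
class ToyMemoryStore(capacity: Long) {
  private val blocks = mutable.LinkedHashMap[String, Long]() // id -> size
  private var used = 0L

  def contains(id: String): Boolean = blocks.contains(id)

  def put(id: String, size: Long): Boolean = {
    // If the block cannot fit even with every other block evicted,
    // fail immediately and leave existing blocks untouched.
    if (size > capacity) return false
    // Otherwise evict in insertion order until there is room.
    while (used + size > capacity) {
      val (oldestId, oldestSize) = blocks.head
      blocks.remove(oldestId)
      used -= oldestSize
    }
    blocks(id) = size
    used += size
    true
  }
}

object Demo extends App {
  val store = new ToyMemoryStore(4800)
  store.put("b2", 400)
  // A put larger than total capacity fails without touching b2.
  val ok = store.put("b4", 5000)
  println(ok)                  // false: allocation failed
  println(store.contains("b2")) // true: b2 was not evicted
}
```

Whether Spark's unroll path behaves exactly like this depends on the unified memory manager's accounting, but the sketch shows why a failed allocation and "b2 was evicted" are logically independent outcomes.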
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]