Github user ConeyLiu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19285#discussion_r140490824
--- Diff: core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -354,63 +401,30 @@ private[spark] class MemoryStore(
       ser.serializeStream(serializerManager.wrapForCompression(blockId, redirectableStream))
     }
-    // Request enough memory to begin unrolling
-    keepUnrolling = reserveUnrollMemoryForThisTask(blockId, initialMemoryThreshold, memoryMode)
-
-    if (!keepUnrolling) {
-      logWarning(s"Failed to reserve initial memory threshold of " +
-        s"${Utils.bytesToString(initialMemoryThreshold)} for computing block $blockId in memory.")
-    } else {
-      unrollMemoryUsedByThisBlock += initialMemoryThreshold
+    def storeValue(value: T): Unit = {
+      serializationStream.writeObject(value)(classTag)
     }
-    def reserveAdditionalMemoryIfNecessary(): Unit = {
-      if (bbos.size > unrollMemoryUsedByThisBlock) {
-        val amountToRequest = (bbos.size * memoryGrowthFactor - unrollMemoryUsedByThisBlock).toLong
-        keepUnrolling = reserveUnrollMemoryForThisTask(blockId, amountToRequest, memoryMode)
-        if (keepUnrolling) {
-          unrollMemoryUsedByThisBlock += amountToRequest
-        }
-      }
-    }
-
-    // Unroll this block safely, checking whether we have exceeded our threshold
-    while (values.hasNext && keepUnrolling) {
-      serializationStream.writeObject(values.next())(classTag)
-      elementsUnrolled += 1
-      if (elementsUnrolled % memoryCheckPeriod == 0) {
-        reserveAdditionalMemoryIfNecessary()
+    def estimateSize(precise: Boolean): Long = {
+      if (precise) {
+        serializationStream.flush()
--- End diff ---
@cloud-fan Sorry for my previous comment, I read the code again. It seems calling `serializationStream.close` here would also be OK, because the iterator has no more values to write, which means the `serializationStream` is not needed anymore.
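
For illustration only, a minimal sketch of the alternative being suggested: `close()` in place of `flush()` on the precise path. It assumes `estimateSize(precise = true)` is only reached once the iterator has no more values to write, and that the non-precise path simply returns the current buffer size; `bbos` and `serializationStream` refer to the names in the diff above, the surrounding holder class is not shown.

```scala
// Sketch only, not the PR's actual code.
def estimateSize(precise: Boolean): Long = {
  if (precise) {
    // close() also flushes the buffered data, and releasing the stream is
    // fine here because no further writeObject calls will follow.
    serializationStream.close()
  }
  // bbos is the byte output stream from the diff above; after a flush or
  // close, bbos.size reflects all serialized bytes written so far.
  bbos.size
}
```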