Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11874#discussion_r57084302
  
    --- Diff: core/src/main/scala/org/apache/spark/memory/StorageMemoryPool.scala ---
    @@ -56,11 +56,13 @@ private[memory] class StorageMemoryPool(lock: Object) extends MemoryPool(lock) w
     
       /**
        * Acquire N bytes of memory to cache the given block, evicting existing ones if necessary.
    - *
    +   *
        * @return whether all N bytes were successfully granted.
        */
    -  def acquireMemory(blockId: BlockId, numBytes: Long): Boolean = lock.synchronized {
    -    val numBytesToFree = math.max(0, numBytes - memoryFree)
    +  def acquireMemory(blockId: BlockId, numBytes: Long): Boolean = {
    +    val numBytesToFree = lock.synchronized {
    +      math.max(0, numBytes - memoryFree)
    +    }
         acquireMemory(blockId, numBytes, numBytesToFree)
    --- End diff ---
    
    As discussed, the fact that we release the lock here could lead to two race conditions:
    
    (1) If another thread comes in and uses up some free memory before this line, then `numBytesToFree` will be too low, causing the memory acquisition to fail.
    
    (2) If another thread comes in and frees up a lot of memory before this line, then `numBytesToFree` will be too high, causing more blocks to be evicted than necessary.
    
    We can remedy (1) by acquiring as much free space as is needed and available before releasing the lock, and then trying to acquire the rest later. Fixing (2) may be a little more difficult.
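    
    A minimal sketch of that remedy for (1), assuming a simplified pool: the `Pool` class below is hypothetical and only mimics the shape of `StorageMemoryPool` (`poolSize`, the `memoryUsed` bookkeeping, and `evictBlocks` are stand-ins, not the real Spark API). The idea is to reserve whatever memory is already free while still holding the lock, so a concurrent thread cannot take it, and only run eviction for the shortfall outside the lock:
    
    ```scala
    // Hypothetical sketch, not the real StorageMemoryPool: it only illustrates
    // reserving the currently free memory under the lock before evicting.
    class Pool(lock: Object) {
      private val poolSize: Long = 1000L
      private var memoryUsed: Long = 0L
    
      private def memoryFree: Long = poolSize - memoryUsed
    
      // Stand-in for the eviction path; the real code asks the MemoryStore to
      // evict blocks. Here we pretend eviction always frees the requested bytes.
      private def evictBlocks(numBytesToFree: Long): Unit = lock.synchronized {
        memoryUsed = math.max(0L, memoryUsed - numBytesToFree)
      }
    
      def acquireMemory(numBytes: Long): Boolean = {
        // Grab as much of the request as is free right now, while still
        // holding the lock, so no other thread can consume it (race (1)).
        val reservedNow = lock.synchronized {
          val grant = math.min(numBytes, memoryFree)
          memoryUsed += grant
          grant
        }
        val shortfall = numBytes - reservedNow
        if (shortfall == 0L) {
          true
        } else {
          // Evict without holding the pool lock across the (potentially slow)
          // call, then claim what was freed. On failure, give back the
          // earlier reservation.
          evictBlocks(shortfall)
          lock.synchronized {
            if (memoryFree >= shortfall) {
              memoryUsed += shortfall
              true
            } else {
              memoryUsed -= reservedNow
              false
            }
          }
        }
      }
    }
    ```
    
    Note that this sketch only addresses (1): if another thread frees memory between the first `synchronized` block and `evictBlocks`, we still evict the full shortfall and end up evicting more than necessary, which is exactly why (2) is the harder race to fix.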

