GitHub user suyanNone commented on the pull request:

    https://github.com/apache/spark/pull/4887#issuecomment-155758546
  
    @andrewor14, I did not make my point clearly.
    This issue is caused by a `memory_and_disk` level block not releasing
    its `unrollMemory` after that block has been successfully put to disk.
    
    In the current logic:
    - There is an `unrollMemoryMap` that holds all unroll memory, both for
    blocks that are still unrolling and for blocks that failed to unroll.
    - Due to SPARK-4777, we added a `pendingUnrollMemoryMap` that reserves
    unroll memory only for blocks that unrolled successfully.
    - `pendingUnrollMemoryMap` + `unrollMemoryMap` is the total unroll
    memory in use.
    
    Now, to resolve this issue: after Spark fails to unroll a
    `memory_and_disk` level block, we need to know the specific memory size
    (e.g. 199MB) of *this* block.
    **Important:** we can't just call `releaseUnrollMemoryForThisTask` after
    we have put this block to disk, because the task's unroll memory may
    contain other blocks' unroll memory as well. For example, with two cached
    RDDs that share the same partitioner and some `cogroup` ops, one task can
    be unrolling two blocks at once; see the sketch below. Right?
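
    A minimal sketch of that failure mode, with hypothetical block ids and
    sizes, reusing the model above:

    ```scala
    import UnrollAccounting._

    // One task (attempt id 7) unrolls two blocks at once, e.g. partitions
    // of two co-partitioned cached RDDs feeding a cogroup.
    unrollMemoryMap(7L) = 199L * 1024 * 1024    // block A: fails to unroll
    unrollMemoryMap(7L) += 50L * 1024 * 1024    // block B: still unrolling

    // Releasing the task's entire unroll memory after writing block A to
    // disk would free 249MB, i.e. it would also drop block B's 50MB and
    // corrupt the accounting. We need to release exactly block A's 199MB.
    ```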
    
    So we need to know the specific memory size of the unroll-failed
    `memory_and_disk` block, and that size should be tracked separately from
    the `unrollMemoryMap`.
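
    One hypothetical shape for that bookkeeping (only a sketch of the idea,
    not the code in this PR): key the failed block's size by (task attempt
    id, block id), so exactly that amount can be released once the disk
    write completes:

    ```scala
    import scala.collection.mutable

    // Hypothetical per-block tracking for unroll-failed blocks.
    val failedUnrollMemoryMap = mutable.HashMap[(Long, String), Long]()

    // Block "rdd_1_3" of task 7 failed to unroll at 199MB: record its size.
    failedUnrollMemoryMap((7L, "rdd_1_3")) = 199L * 1024 * 1024

    // Once the block is on disk, release exactly that amount and subtract
    // it from the task's entry in unrollMemoryMap.
    val released = failedUnrollMemoryMap.remove((7L, "rdd_1_3")).getOrElse(0L)
    ```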