Github user suyanNone commented on the pull request:

    https://github.com/apache/spark/pull/6586#issuecomment-108174658
  
    @srowen
    I think in any system we want physical memory usage to be more
    controllable.
    We run Spark on YARN, and we regularly see direct buffer memory grow
    out of control until the executor is killed by YARN. That causes a lot
    of tasks to fail and be retried.
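
    In our clusters we bound this explicitly rather than relying on
    defaults. A minimal sketch of the two knobs involved (the values are
    illustrative, not from this patch):

    ```scala
    import org.apache.spark.SparkConf

    // Sketch: cap off-heap usage so YARN's accounting matches reality.
    // Values are illustrative; tune them per workload.
    val conf = new SparkConf()
      // Hard JVM-wide cap on direct ByteBuffer allocations, which Netty's
      // direct buffers count against by default.
      .set("spark.executor.extraJavaOptions", "-XX:MaxDirectMemorySize=512m")
      // Extra off-heap headroom (in MB) requested from YARN on top of the
      // heap, so direct buffers don't push the container over its limit.
      .set("spark.yarn.executor.memoryOverhead", "1024")
    ```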
    
    The reasons:
    1. When a memory-mapped block is sent to a remote node, the direct
    buffer is not released while the thread stays alive. This is a Netty
    4.0.23-Final bug; I already reported it to the Netty community, and
    they provided a solution.
    2. This patch covers blocks stored at a level that includes disk:
    2.1. Reading a block without memory mapping, because it is smaller
    than the memory-mapping threshold (see the sketch after this list).
    2.2. Dropping a MEMORY_AND_DISK_SER-level block to disk.
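
    For 2.1, here is a simplified, standalone sketch of the threshold
    check; the real logic lives in Spark's DiskStore, and `readBlock` and
    the 2 MB default below are illustrative:

    ```scala
    import java.io.{File, IOException, RandomAccessFile}
    import java.nio.ByteBuffer
    import java.nio.channels.FileChannel.MapMode

    // Blocks below the memory-map threshold are read into an ordinary
    // heap buffer, so small reads never hold on to a mapped direct
    // buffer; only large blocks get memory-mapped.
    def readBlock(file: File, offset: Long, length: Long,
                  memoryMapThreshold: Long = 2L * 1024 * 1024): ByteBuffer = {
      val channel = new RandomAccessFile(file, "r").getChannel
      try {
        if (length < memoryMapThreshold) {
          // Small block: plain heap read, reclaimed by the GC normally.
          val buf = ByteBuffer.allocate(length.toInt)
          channel.position(offset)
          while (buf.hasRemaining) {
            if (channel.read(buf) == -1) {
              throw new IOException(s"Unexpected EOF reading ${file.getName}")
            }
          }
          buf.flip()
          buf
        } else {
          // Large block: the mapping is a direct buffer that stays
          // resident until it is unmapped.
          channel.map(MapMode.READ_ONLY, offset, length)
        }
      } finally {
        channel.close()
      }
    }
    ```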
    
    Because the channel holds a thread-local direct buffer pool that is
    kept through soft references, the buffers cannot be released when I
    want them to be.
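
    As a workaround on our side (separate from this patch), the transport
    layer can at least be steered away from direct buffers; a sketch using
    the existing `spark.shuffle.io.preferDirectBufs` setting:

    ```scala
    import org.apache.spark.SparkConf

    // Prefer heap buffers in Spark's Netty transport, so pooled memory
    // is heap-allocated and visible to the GC instead of sitting in the
    // soft-referenced thread-local direct pool.
    val conf = new SparkConf()
      .set("spark.shuffle.io.preferDirectBufs", "false")
    ```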


