GitHub user jinxing64 opened a pull request:

    https://github.com/apache/spark/pull/18388

    [SPARK-21175] Reject OpenBlocks when memory shortage on shuffle service.

    ## What changes were proposed in this pull request?
    
    A shuffle service can serves blocks from multiple apps/tasks. Thus the 
shuffle service can suffers high memory usage when lots of shuffle-reads happen 
at the same time. In my cluster, OOM always happens on shuffle service. 
Analyzing heap dump, memory cost by Netty(chunks) can be up to 2~3G. It might 
make sense to reject "open blocks" request when memory usage is high on shuffle 
service.
    
    
https://github.com/apache/spark/commit/93dd0c518d040155b04e5ab258c5835aec7776fc 
and 
https://github.com/apache/spark/commit/85c6ce61930490e2247fb4b0e22dfebbb8b6a1ee 
tried to alleviate the memory pressure on the shuffle service but did not 
address the root cause. This PR proposes to control the concurrency of 
shuffle reads.
    
    ## How was this patch tested?
    Added unit test.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jinxing64/spark SPARK-21175

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/18388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #18388
    
----
commit ed889b96d938cc8d0dbd9f6b153a4f7c5b44c7d4
Author: jinxing <[email protected]>
Date:   2017-06-22T05:00:23Z

    Reject OpenBlocks when memory shortage on shuffle service.

----

