Jason Lowe updated YARN-2314:
    Attachment: disable-cm-proxy-cache.patch

Yeah, I don't think there's a good way to fix this short of running a bigger 
container than necessary or patching the code.

Attaching a patch we've been running with recently that disables the CM proxy 
cache completely and reinstates the fix from MAPREDUCE-3333.  It's not an ideal 
fix, but it effectively restores the behavior of Hadoop 0.23, which worked OK 
for us.

> ContainerManagementProtocolProxy can create thousands of threads for a large 
> cluster
> ------------------------------------------------------------------------------------
>                 Key: YARN-2314
>                 URL: https://issues.apache.org/jira/browse/YARN-2314
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 2.1.0-beta
>            Reporter: Jason Lowe
>            Priority: Critical
>         Attachments: disable-cm-proxy-cache.patch, 
> nmproxycachefix.prototype.patch
>
> ContainerManagementProtocolProxy has a cache of NM proxies, and the size of 
> this cache is configurable.  However the cache can grow far beyond the 
> configured size when running on a large cluster and blow AM address/container 
> limits.  More details in the first comment.
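For readers looking for the cache-size knob the description refers to: the proxy cache limit is set in yarn-site.xml. A minimal sketch, assuming the `yarn.client.max-cached-nodemanagers-proxies` property from yarn-default.xml (name and semantics may differ by release; in releases carrying the eventual fix, a value of 0 is meant to disable caching entirely):

```xml
<!-- yarn-site.xml: caps the ContainerManagementProtocolProxy NM proxy cache.
     Property name taken from yarn-default.xml; verify against your release. -->
<property>
  <name>yarn.client.max-cached-nodemanagers-proxies</name>
  <!-- 0 reportedly disables proxy caching in later releases; a small
       positive value merely bounds the cache size. -->
  <value>0</value>
</property>
```

Note this only bounds the cache; the thread growth described in this issue comes from cached proxies exceeding the configured limit, which is what the attached patch works around by disabling the cache in code.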
