[ https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571732#comment-16571732 ]

Jason Lowe commented on YARN-8609:
----------------------------------

bq. Indeed, it would not take up too much memory if running with YARN-3998.

Then I propose this be closed as a duplicate.  Those looking for a JIRA whose 
summary matches their symptoms should be directed to YARN-3998, since that 
alone is sufficient to address the problem.

bq. if we do truncation in the for loop, all kinds of diagnostic info will be 
retained. This is what I want to say and it is a small improvement.

We can add the ability to truncate individual diagnostic messages in a separate 
improvement JIRA.  However, as I mentioned above, 5000 may be too small a 
default, since it could end up truncating a critical "Caused by" towards the 
end of a large stack trace.
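
As a rough illustration of that per-message approach: a minimal sketch, 
assuming a hypothetical DiagnosticsTruncator helper and an illustrative 
5000-character limit (not an actual YARN default), that keeps both the head 
and the tail of each message so a trailing "Caused by" in a long stack trace 
is not lost.

{code:java}
// Hypothetical sketch, not the YARN-8609 patch: truncate one diagnostic
// message while preserving its head and tail.  The limit and the head/tail
// split are illustrative values only.
public final class DiagnosticsTruncator {

  private static final int MAX_DIAGNOSTICS_LENGTH = 5000;   // assumed limit
  private static final String OMISSION_MARKER =
      "\n...[diagnostics truncated]...\n";

  private DiagnosticsTruncator() {
  }

  /** Returns the input unchanged if it fits; otherwise keeps roughly half
   *  of the budget from the start and half from the end of the message. */
  public static String truncate(String diagnostics) {
    if (diagnostics == null
        || diagnostics.length() <= MAX_DIAGNOSTICS_LENGTH) {
      return diagnostics;
    }
    int budget = MAX_DIAGNOSTICS_LENGTH - OMISSION_MARKER.length();
    int headLen = budget / 2;
    int tailLen = budget - headLen;
    return diagnostics.substring(0, headLen)
        + OMISSION_MARKER
        + diagnostics.substring(diagnostics.length() - tailLen);
  }

  public static void main(String[] args) {
    // Demonstration with an artificially long message ending in "Caused by".
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 10000; i++) {
      sb.append("at some.Frame.method(Frame.java:").append(i).append(")\n");
    }
    sb.append("Caused by: java.lang.OutOfMemoryError: Java heap space");
    String truncated = truncate(sb.toString());
    System.out.println("truncated length = " + truncated.length());
    // The tail, including the "Caused by" line, is still present.
    System.out.println(truncated.substring(truncated.length() - 60));
  }
}
{code}

Keeping the tail rather than only the head is the point of the sketch: 
whatever limit is chosen, a head-only cut is what risks dropping the final 
"Caused by".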

> NM oom because of large container statuses
> ------------------------------------------
>
>                 Key: YARN-8609
>                 URL: https://issues.apache.org/jira/browse/YARN-8609
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>            Reporter: Xianghao Lu
>            Priority: Major
>         Attachments: YARN-8609.001.patch, contain_status.jpg, oom.jpeg
>
>
> Sometimes, the NodeManager will send very large container statuses to the 
> ResourceManager when it starts up with recovery enabled; as a result, the 
> NodeManager fails to start because of an OOM.
>  In my case, the recovered container statuses total 135 MB across 11 
> container statuses, and I found that the diagnostics of 5 containers are 
> very large (27 MB), so I truncate the container diagnostics as in the patch.
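
A similarly hedged illustration of what the per-container truncation described 
in the report could look like during recovery: loop over each recovered status 
and cap its diagnostics before the statuses are reported. ContainerRecord and 
the surrounding names are invented for this sketch (the real patch works on 
YARN's own classes), and it reuses the truncate helper from the sketch above.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical example only; the class and field names below are made up
// and do not correspond to NodeManager recovery code.
public class RecoveryDiagnosticsExample {

  // Stand-in for a recovered container status that carries diagnostics text.
  static final class ContainerRecord {
    final String containerId;
    String diagnostics;

    ContainerRecord(String containerId, String diagnostics) {
      this.containerId = containerId;
      this.diagnostics = diagnostics;
    }
  }

  // Truncate every recovered status in the loop so each container keeps a
  // bounded amount of its own diagnostic info instead of a few huge
  // messages dominating the report.
  static void truncateAll(List<ContainerRecord> recovered) {
    for (ContainerRecord record : recovered) {
      record.diagnostics = DiagnosticsTruncator.truncate(record.diagnostics);
    }
  }

  public static void main(String[] args) {
    List<ContainerRecord> recovered = new ArrayList<>();
    recovered.add(new ContainerRecord("container_1", "exited normally"));
    recovered.add(new ContainerRecord("container_2",
        new String(new char[100_000]).replace('\0', 'x')));
    truncateAll(recovered);
    for (ContainerRecord r : recovered) {
      System.out.println(r.containerId + " -> "
          + r.diagnostics.length() + " chars");
    }
  }
}
{code}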


