[ 
https://issues.apache.org/jira/browse/HADOOP-17438?focusedWorklogId=526922&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526922
 ]

ASF GitHub Bot logged work on HADOOP-17438:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Dec/20 19:14
            Start Date: 21/Dec/20 19:14
    Worklog Time Spent: 10m 
      Work Description: ericbadger commented on pull request #2560:
URL: https://github.com/apache/hadoop/pull/2560#issuecomment-749148544


   I'm not sure how to profile this because I don't really know what access we 
have to the machines that run Jenkins, what else runs on them, or anything 
else. Logging memory over time would be a good first step to help narrow it 
down. Then, once we know which module is suspect, we can either investigate on 
our own local machines or enable further debugging on that specific 
module.
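The "logging memory over time" first step could be sketched roughly as below: sample available memory at a fixed interval and append it to a log that can later be lined up against the build's module timeline. The log path, sample count, and interval are illustrative, not anything from the actual Hadoop build infrastructure.

```shell
#!/bin/sh
# Hedged sketch: periodically record a timestamp plus available memory
# (from /proc/meminfo, Linux-only) so OOM events can be correlated with
# whichever Maven module was running at the time.
LOGFILE="${LOGFILE:-/tmp/build-mem.log}"
SAMPLES="${SAMPLES:-3}"      # a real run would loop until the build finishes
INTERVAL="${INTERVAL:-1}"    # seconds between samples

i=0
while [ "$i" -lt "$SAMPLES" ]; do
    # one line per sample: ISO timestamp, then MemAvailable in kB
    printf '%s %s\n' \
        "$(date '+%Y-%m-%dT%H:%M:%S')" \
        "$(awk '/^MemAvailable:/ {print "avail_kb=" $2}' /proc/meminfo)" \
        >> "$LOGFILE"
    i=$((i + 1))
    sleep "$INTERVAL"
done
```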


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 526922)
    Time Spent: 1h 10m  (was: 1h)

> Increase docker memory limit in Jenkins
> ---------------------------------------
>
>                 Key: HADOOP-17438
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17438
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build, scripts, test, yetus
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Yetus keeps failing with OOM.
>  
> {code:bash}
> unable to create new native thread
> java.lang.OutOfMemoryError: unable to create new native thread
>       at java.lang.Thread.start0(Native Method)
>       at java.lang.Thread.start(Thread.java:717)
>       at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>       at java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1603)
>       at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:334)
>       at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
>       at org.apache.maven.surefire.booter.ForkedBooter.launchLastDitchDaemonShutdownThread(ForkedBooter.java:369)
>       at org.apache.maven.surefire.booter.ForkedBooter.acknowledgedExit(ForkedBooter.java:333)
>       at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:145)
>       at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
>  
> This jira is to increase the memory limit from 20g to 22g.
> *Note: this is only a workaround to keep things productive. If this 
> change reduces the frequency of the OOM failures, there must be a follow-up 
> to profile the runtime and figure out which components are causing the 
> Docker container to run out of memory.*
> CC: [~aajisaka], [~elgoiri], [~weichiu], [~ebadger], [~tasanuma], 
> [~iwasakims], [~ayushtkn], [~inigoiri]
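The change the issue describes amounts to raising the memory cap handed to the Docker container that runs the Yetus precommit build. A hedged sketch, assuming the Yetus `test-patch.sh --dockermemlimit` option is how the cap is passed (the variable names here are illustrative, not the actual Jenkinsfile contents):

```shell
#!/bin/sh
# Hedged sketch of HADOOP-17438: bump the Docker memory cap 20g -> 22g.
# Yetus' test-patch.sh accepts --dockermemlimit to cap container memory;
# the surrounding shell here is illustrative only.
OLD_LIMIT="20g"
NEW_LIMIT="22g"
YETUS_ARGS="--docker --dockermemlimit=${NEW_LIMIT}"
echo "test-patch.sh ${YETUS_ARGS}"
```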



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
