[ https://issues.apache.org/jira/browse/HBASE-24072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083792#comment-17083792 ]

Michael Stack commented on HBASE-24072:
---------------------------------------

Found that HBASE-23956 "Use less resources running tests" (#1266) was missing 
from branch-2.3. Added it (along with the follow-up correction, HBASE-23987 
"NettyRpcClientConfigHelper will not share event loop by default which is 
incorrect" (#1288)). Let's see if that helps.
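
For context, a minimal sketch of what sharing the client-side event loop looks 
like (assuming the NettyRpcClientConfigHelper API and HBase's shaded Netty; 
illustrative only, not the patch itself):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ipc.NettyRpcClientConfigHelper;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;
import org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup;
import org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel;

public class SharedEventLoop {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // One small event loop group shared by every NettyRpcClient built from
    // this conf; without sharing, each client spins up its own group and
    // its own threads.
    EventLoopGroup group = new NioEventLoopGroup(1);
    NettyRpcClientConfigHelper.setEventLoopConfig(conf, group, NioSocketChannel.class);
  }
}
{code}

Fewer per-client event loops means fewer native threads, which is exactly what 
the OOM below complains about.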

> Nightlies reporting OutOfMemoryError: unable to create new native thread
> ------------------------------------------------------------------------
>
>                 Key: HBASE-24072
>                 URL: https://issues.apache.org/jira/browse/HBASE-24072
>             Project: HBase
>          Issue Type: Task
>          Components: test
>            Reporter: Michael Stack
>            Assignee: Michael Stack
>            Priority: Major
>             Fix For: 3.0.0, 2.3.0
>
>         Attachments: 
> 0001-HBASE-24072-Nightlies-reporting-OutOfMemoryError-una.patch, 
> print_ulimit.patch
>
>
> Seeing this kind of thing in nightly...
> {code}
> java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
>       at org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.beforeClass(TestMultithreadedTableMapper.java:83)
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>       at org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.beforeClass(TestMultithreadedTableMapper.java:83)
> {code}
> Chatting w/ Nick and Huaxiang, doing the math, we are likely oversubscribing 
> our docker container. It is set to 20G (the hosts are 48G). Fork count is 
> 0.5C on a 16-CPU machine, which means 8 forks at 2.8G each, our current 
> forked JVM size. Add the maven JVM's 4G and we could be over the top.
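> A quick worked version of that math (numbers as above; the 4G maven figure 
> is this description's own estimate):
> {code}
> 0.5C fork count * 16 CPUs = 8 forked JVMs
> 8 forks * 2.8G per fork   = 22.4G
> 22.4G + 4G (maven JVM)    = 26.4G  >  20G container limit
> {code}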
> Will play w/ lowering the fork size (in an earlier study we didn't seem to 
> need this much RAM when running a fat, long test). Let me also take the -Xms 
> off the mvn allocation to see if that helps.
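> A hedged sketch of the kind of knob in play (property names as in the HBase 
> root pom of this era; treat exact names and values as assumptions):
> {code}
> # Halve the fork count: 0.25C on a 16-CPU box is 4 forks instead of 8.
> mvn test -Dsurefire.firstPartForkCount=0.25C \
>          -Dsurefire.secondPartForkCount=0.25C
> {code}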



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
