[
https://issues.apache.org/jira/browse/HBASE-24072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Michael Stack resolved HBASE-24072.
-----------------------------------
Fix Version/s: 2.3.0
3.0.0
Release Note: Hadoop hosts have had their ulimit -u raised from 10000 to
30000 (per user, by INFRA). The Docker build container has had its limit raised
from 10000 to 12500. (was: Hadoop hosts have had their ulimit -u raised from
10000 to 30000 (per user). The build container has had its limit raised from
10000 to 12500.)
Assignee: Michael Stack
Resolution: Fixed
We don't see these anymore. Two containers running on a single hadoop host
seems the likely culprit. We shouldn't see this again since we won't be upping
our parallelism on jenkins builds; we seem to be operating at the edge of
what's fair, i.e. a max of half the CPUs on a hadoop host (hadoop hosts run
two jenkins executors per host).
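For context, this error is about native thread creation, not heap: every Java
thread is backed by a native thread, so once the per-user process limit
(ulimit -u) is exhausted the JVM throws this OutOfMemoryError even with plenty
of heap free, which is why raising nproc (rather than -Xmx) is the fix. A
minimal standalone sketch of that failure mode (not HBase code; only worth
running inside a container with a deliberately low ulimit -u):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class NativeThreadLimitDemo {
  public static void main(String[] args) throws InterruptedException {
    CountDownLatch done = new CountDownLatch(1);
    List<Thread> threads = new ArrayList<>();
    try {
      // Keep starting parked threads until the OS refuses to hand out another one.
      while (true) {
        Thread t = new Thread(() -> {
          try {
            done.await();
          } catch (InterruptedException ignored) {
          }
        });
        t.start();
        threads.add(t);
      }
    } catch (OutOfMemoryError e) {
      // "unable to create new native thread": a process/thread limit, not a heap limit.
      System.out.println("Failed after " + threads.size() + " threads: " + e.getMessage());
    } finally {
      done.countDown();
      for (Thread t : threads) {
        t.join();
      }
    }
  }
}
{code}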
> Nightlies reporting OutOfMemoryError: unable to create new native thread
> ------------------------------------------------------------------------
>
> Key: HBASE-24072
> URL: https://issues.apache.org/jira/browse/HBASE-24072
> Project: HBase
> Issue Type: Task
> Components: test
> Reporter: Michael Stack
> Assignee: Michael Stack
> Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments:
> 0001-HBASE-24072-Nightlies-reporting-OutOfMemoryError-una.patch,
> print_ulimit.patch
>
>
> Seeing this kind of thing in nightly...
> {code}
> java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
>   at org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.beforeClass(TestMultithreadedTableMapper.java:83)
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>   at org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.beforeClass(TestMultithreadedTableMapper.java:83)
> {code}
> Chatting w/ Nick and Huaxiang, doing the math, we are likely oversubscribing
> our docker container. It is set to 20G (the hosts are 48G). Fork count is
> 0.5C on a 16-CPU machine, i.e. 8 forks * 2.8G (our current forked JVM size)
> = 22.4G. Add the maven 4G and we could be over the top.
> Let me play w/ lowering the fork JVM size (in an earlier study we didn't seem
> to need this much RAM even when running a fat, long test). Let me also take
> the -Xms off the mvn allocation to see if that helps.
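> Rough back-of-the-envelope sketch of that math (the class below is just an
> illustration using the figures above, not anything in the build):
> {code}
> // Sketch of the memory budget described above; figures are the ones quoted
> // in this issue, not measured here.
> public class ForkMemoryBudget {
>   public static void main(String[] args) {
>     int cpus = 16;                 // CPUs visible to the container
>     double forkCountFactor = 0.5;  // surefire forkCount=0.5C
>     double perForkHeapGb = 2.8;    // heap of each forked test JVM
>     double mavenHeapGb = 4.0;      // heap of the top-level mvn JVM
>     double containerGb = 20.0;     // Docker container memory limit
>
>     int forks = (int) (cpus * forkCountFactor);            // 8 forks
>     double demandGb = forks * perForkHeapGb + mavenHeapGb; // 8 * 2.8 + 4 = 26.4G
>     System.out.printf("forks=%d, demand=%.1fG, container=%.1fG, over by %.1fG%n",
>         forks, demandGb, containerGb, demandGb - containerGb);
>     // forks=8, demand=26.4G, container=20.0G, over by 6.4G
>   }
> }
> {code}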
--
This message was sent by Atlassian Jira
(v8.3.4#803005)