Dear Wiki user, You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The "Hbase/Troubleshooting" page has been changed by AndrewPurtell. The comment on this change is: Item 20. http://wiki.apache.org/hadoop/Hbase/Troubleshooting?action=diff&rev1=45&rev2=46 -------------------------------------------------- 1. [[#A17|Problem: My shell or client application throws lots of scary exceptions during normal operation]] 1. [[#A18|Problem: The HBase or Hadoop daemons crash after some days of uptime with no errors logged]] 1. [[#A19|Problem: Running a Scan or a MapReduce job over a full table fails with "xceiverCount xx exceeds..." or OutOfMemoryErrors in the HDFS datanodes]] + 1. [[#A20|Problem: System instability, and the presence of "java.lang.OutOfMemoryError: unable to create new native thread" exceptions in HDFS datanode logs or that of any system daemon]] <<Anchor(1)>> @@ -338, +339 @@ * Mess with configuration that effects RAM -- i.e. thread stack sizes or, dependent on what your query path looks like, shrink size given over to block cache (will slow your reads though) * Add machines to your cluster. + <<Anchor(20)>> + + == 20. Problem: System instability, and the presence of "java.lang.OutOfMemoryError: unable to create new native thread in exceptions" HDFS datanode logs or that of any system daemon == + + === Causes === + + The user under which the daemons are running has an nproc limit (default) set too low. The default on recent Linux distributions is 1024. + + === Resolution === + + Set this to 16K or higher. We recommend at least the configured number of DataNode xceivers plus 1K. Add the following lines to /etc/security/limits.conf. Substitute the actual user name for {{{<user>}}}. + . {{{ + <user> soft nproc 32768 + <user> hard nproc 32768 + }}} +
