[ https://issues.apache.org/jira/browse/HBASE-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898157#comment-13898157 ]

Nick Dimiduk commented on HBASE-10500:
--------------------------------------

Looks like the same kind of issue crops up with LoadIncrementalHFiles:

{noformat}
2014-02-11 18:14:30,021 ERROR [main] mapreduce.LoadIncrementalHFiles: Unexpected execution exception during splitting
  java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Direct buffer memory
  at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
  at java.util.concurrent.FutureTask.get(FutureTask.java:111)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:407)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:288)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:822)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:827)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
  at java.nio.Bits.reserveMemory(Bits.java:658)
  at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
  at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
  at org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
  at org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:44)
  at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:270)
  at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:210)
  at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:399)
  at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:166)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:476)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:397)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:395)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
  at java.util.concurrent.FutureTask.run(FutureTask.java:166)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:722)
{noformat}
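
Until the tools get a proper fix, one way to keep a client-side run of LoadIncrementalHFiles from tripping over this is to zero out the block cache in the Configuration before CacheConfig is constructed. A minimal sketch, assuming only the stock config keys {{hfile.block.cache.size}} and {{hbase.bucketcache.ioengine}}; the class and its main() are just a demo harness, not existing code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;

public class NoBlockCacheDemo {

  // Returns a copy of the given configuration with block caching switched off,
  // so that new CacheConfig(conf) never reaches BucketCache / allocateDirect.
  public static Configuration withoutBlockCache(Configuration base) {
    Configuration conf = new Configuration(base);
    conf.setFloat("hfile.block.cache.size", 0f);  // no on-heap LRU block cache
    conf.unset("hbase.bucketcache.ioengine");     // no off-heap bucket cache
    return conf;
  }

  public static void main(String[] args) {
    Configuration conf = withoutBlockCache(HBaseConfiguration.create());
    // With caching disabled, constructing CacheConfig should not allocate
    // any direct buffers in this client JVM.
    CacheConfig cacheConf = new CacheConfig(conf);
    System.out.println("block cache instantiated: " + (cacheConf.getBlockCache() != null));
  }
}
{code}

The same idea would apply anywhere groupOrSplit builds its CacheConfig from the tool's configuration; whether it belongs in the tool or in {{bin/hbase}} is the open question below.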

> hbck and OOM when BucketCache is enabled
> ----------------------------------------
>
>                 Key: HBASE-10500
>                 URL: https://issues.apache.org/jira/browse/HBASE-10500
>             Project: HBase
>          Issue Type: Bug
>          Components: hbck
>    Affects Versions: 0.98.0
>            Reporter: Nick Dimiduk
>            Assignee: Nick Dimiduk
>
> Running {{hbck --repair}} when BucketCache is enabled in offheap mode can 
> cause an OOM. This is apparently because {{bin/hbase}} does not include 
> $HBASE_REGIONSERVER_OPTS for hbck. hbck instantiates an HRegion instance as 
> part of HDFSIntegrityFixer.handleHoleInRegionChain, and that HRegion 
> initializes its CacheConfig, which tries to allocate the off-heap BucketCache 
> in a JVM that has not been given the direct memory to back it.
> Possible solutions include:
>  - disable blockcache in the config used by hbck when running its repairs
>  - include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments
> I'm leaning toward the former because it's possible that hbck is run on a 
> host with a different hardware profile than the RS.
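
A rough sketch of the first option: give HRegion a Configuration copy with caching disabled when hbck creates it during a repair. The class and method names, and the exact call site in HBaseFsck, are assumptions; only the config keys ({{hfile.block.cache.size}}, {{hbase.bucketcache.ioengine}}) come from the shipped defaults.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper for the hbck repair path: hand HRegion a Configuration
// copy with caching disabled so its CacheConfig never builds a BucketCache
// (and never calls ByteBuffer.allocateDirect) inside the hbck JVM.
public final class HbckRepairConf {
  public static Configuration withoutBlockCache(Configuration base) {
    Configuration conf = new Configuration(base); // don't mutate the caller's conf
    conf.setFloat("hfile.block.cache.size", 0f);  // disables the on-heap LRU cache
    conf.unset("hbase.bucketcache.ioengine");     // disables the off-heap bucket cache
    return conf;
  }
}
{code}

HDFSIntegrityFixer could then pass the returned conf when it creates the HRegion, so the repair never touches direct memory regardless of the host's hardware profile.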



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
