I am running a MapReduce program that processes a lot of data (100+ million rows). I hate to admit it, but I am running it in "Standalone Localhost" mode... I know, I know... not much money in our budget at this time :(
Anyway, I am getting a lot of WARN messages from my MapReduce program that look like this:
java.lang.RuntimeException: Failed HTable construction
    at com.xxx.yy.CustomersLoader$CustomerReducer.setup(CustomersLoader.java:77)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:172)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:563)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:215)
Caused by: org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:922)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:573)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:555)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:686)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:582)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:555)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:686)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:586)
    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:549)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:125)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:103)
The job keeps going, so I was assuming these were just warnings.
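For reference, the setup() at CustomersLoader.java:77 is essentially just constructing an HTable from the job configuration and wrapping any failure in a RuntimeException. A rough sketch of that pattern (the class body, table name, and types below are placeholders, not my actual code):

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch only: names and generic types are placeholders, not the real CustomersLoader code.
public class CustomerReducer extends Reducer<Text, Text, Text, Text> {

    private HTable table;

    @Override
    protected void setup(Context context) {
        try {
            // Build an HBase client config from the job's Configuration and open the table.
            // new HTable(...) is the call that times out locating the root region when no
            // HBase cluster is reachable, which is what the NoServerForRegionException shows.
            HBaseConfiguration conf = new HBaseConfiguration(context.getConfiguration());
            table = new HTable(conf, "customers");
        } catch (IOException e) {
            throw new RuntimeException("Failed HTable construction", e);
        }
    }
}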
But when I start a shell (hbase shell) and run the 'list' command, I get this:
NativeException: org.apache.hadoop.hbase.MasterNotRunningException: null
    from org/apache/hadoop/hbase/client/HConnectionManager.java:347:in `getMaster'
    from org/apache/hadoop/hbase/client/HBaseAdmin.java:72:in `<init>'
    from sun/reflect/NativeConstructorAccessorImpl.java:-2:in `newInstance0'
    from sun/reflect/NativeConstructorAccessorImpl.java:57:in `newInstance'
    from sun/reflect/DelegatingConstructorAccessorImpl.java:45:in `newInstance'
    from java/lang/reflect/Constructor.java:532:in `newInstance'
    from org/jruby/javasupport/JavaConstructor.java:226:in `new_instance'
    from org/jruby/java/invokers/ConstructorInvoker.java:100:in `call'
    from org/jruby/java/invokers/ConstructorInvoker.java:180:in `call'
    from org/jruby/RubyClass.java:372:in `finvoke'
    from org/jruby/javasupport/util/RuntimeHelpers.java:376:in `invoke'
    from org/jruby/java/proxies/ConcreteJavaProxy.java:48:in `call'
    from org/jruby/runtime/callsite/CachingCallSite.java:119:in `callBlock'
    from org/jruby/runtime/callsite/CachingCallSite.java:126:in `call'
    from org/jruby/RubyClass.java:554:in `call'
    from org/jruby/internal/runtime/methods/DynamicMethod.java:152:in `call'
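From that backtrace, the shell's 'list' command is failing while constructing an HBaseAdmin, i.e. while looking up the master. The plain-Java equivalent of what the shell does at that point would be roughly this (just a sketch to show where the exception originates, assuming the same-era HBase client API; not code from my project):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CheckMaster {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-default.xml / hbase-site.xml from the classpath, like the shell does.
        HBaseConfiguration conf = new HBaseConfiguration();
        try {
            // HBaseAdmin's constructor calls HConnectionManager.getMaster(), the same
            // frame the shell's NativeException points at (HConnectionManager.java:347).
            HBaseAdmin admin = new HBaseAdmin(conf);
            System.out.println("Master is reachable");
        } catch (MasterNotRunningException e) {
            System.out.println("Master is not running: " + e);
        }
    }
}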
This makes me believe HBase is down. Additionally, when I run ps -eaf | grep 'hbase', the only process I see running is my MapReduce program, which makes me wonder:
1) Did my MapReduce job bring HBase down?
2) If it did, how is the job still running?
Please help. Thanks.