Hi Henning,
Try running flush '.META.' and major_compact '.META.' in the hbase
shell. It worked for me once; I hope it helps.
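For reference, a sketch of that sequence (run against your live cluster from the hbase install directory; the table name 'tir_items' is taken from your rowcounter command, so substitute your own if it differs):

```shell
# Open the HBase shell
bin/hbase shell

# Inside the shell:
# flush the in-memory edits of the catalog table to disk,
# then force a major compaction to rewrite its store files.
flush '.META.'
major_compact '.META.'

# Optionally do the same for the affected user table.
flush 'tir_items'
major_compact 'tir_items'
```

After that, re-run the rowcounter job and check whether the region now has a server address listed in .META.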
Cheers,
hari
On Fri, Nov 12, 2010 at 6:43 PM, Henning Blohm <[email protected]> wrote:
> Hi again,
>
> we have a Hadoop+HBase cluster with 1 master and 3 data nodes for a PoC.
> I ran into "Too many open files" errors on the region server during load
> testing. No problem as such. But now, after shutting down and starting
> up again, when trying to count how many rows actually made it with
>
> > bin/hadoop jar ../hbase/hbase-0.20.6.jar rowcounter tir_items
>
> I get
>
> Exception in thread "main"
> org.apache.hadoop.hbase.client.NoServerForRegionException: No server
> address listed in .META. for region
>
> tir_items,customer/0/FB032E3983F42455E0ACCFE61A0C3385A37975A42439430F0993D4D3DC5F76E,1289565977870
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:726)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:634)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:601)
>     at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionLocation(HConnectionManager.java:428)
>     at org.apache.hadoop.hbase.client.HTable.getRegionLocation(HTable.java:207)
>     at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:289)
>     at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>     at org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:126)
> ...
>
> This situation persists even after a reboot. Any idea how to fix this?
> (All other advice I have found says that things should be fine after a
> restart at the latest.)
>
> Thanks,
> Henning
>