Sorry for the mix-up, but I followed this particular link and not the one
mentioned in the mail above:

http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/31308


On Tue, Jun 11, 2013 at 12:56 AM, divye sheth <[email protected]> wrote:

> Hi,
>
> There are tables we are trying to bring ONLINE, but their regions always
> end up stuck in transition.
>
> We are running HBase 0.94.2 on the append-enabled version of Hadoop 0.20.2.
>
> The regionserver logs show :
> 2013-06-10 14:57:31,041 ERROR
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open
> of
> region=content_810032,10014-810032-03F59981A6584F3ABDA2426B2D8B0A81,1370851254794.7bde935148f6f52003b3237e5510f683.,
> starting to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.IOException: Cannot open
> filename
> /hbase/content_810032/9350f1616ad689de1307eed1b9efa8aa/pt/097ea6c56c5e4a2a9fed9b07f5d51428
>     at
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:548)
>     at
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:461)
>     at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3813)
>     at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3761)
>     at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>     at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>     at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>     at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: java.io.IOException: Cannot open filename
> /hbase/content_810032/9350f1616ad689de1307eed1b9efa8aa/pt/097ea6c56c5e4a2a9fed9b07f5d51428
>     at
> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:403)
>     at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:256)
>     at
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2995)
>     at
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:523)
>     at
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:521)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>     at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>     ... 3 more
>
>
> The file named in the logs is not present in HDFS. This may have been
> caused by the exception below:
>
> java.io.InterruptedIOException: Aborting compaction of store pt in region
> content_810032,10014-810032-D80608D7006996E5CF67D95D7C67F2DD,1370031570408.1c83129238657afb75e1b6039e0e0edb.
> because user requested stop.
>     at
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1614)
>     at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1011)
>     at
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1208)
>     at
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>     at java.lang.Thread.run(Thread.java:662)
>
>
> Things we tried:
> 1. hbase hbck -repair (it tries to bring the region out of transition but
> times out)
> 2. Created the file in question manually in HDFS and performed step 1
> again, but the *FileNotFoundException* then turned into a
> *CorruptHFileException*.
> 3. Cleared the ZooKeeper data and tried again, but the regionserver still
> tries to access the non-existent file.
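
For anyone hitting this later: the missing store file's HDFS path can be reconstructed from the quoted error. A minimal sketch (all four path components are copied from the log above; the `hadoop fs` commands are left as comments since they need the live cluster):

```shell
# Rebuild the store-file path from the error message, piece by piece.
TABLE="content_810032"
REGION="9350f1616ad689de1307eed1b9efa8aa"
FAMILY="pt"
HFILE="097ea6c56c5e4a2a9fed9b07f5d51428"
STOREFILE="/hbase/${TABLE}/${REGION}/${FAMILY}/${HFILE}"
echo "${STOREFILE}"

# On the cluster you would then check (not executed here):
#   hadoop fs -ls "${STOREFILE}"                          # does the file exist at all?
#   hadoop fs -ls "/hbase/${TABLE}/${REGION}/${FAMILY}/"  # what does the store dir actually hold?
```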
>
> I found this particular link; the poster had the same issue and resolved
> it by creating a blank HFile, placing it at the locations where the
> FileNotFoundException occurred, and running hbase hbck -repair:
> http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/32949
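
A hedged sketch of a verification step for that linked workaround: before re-running hbck against a placeholder file, the file can be checked with the HFile pretty-printer that ships with HBase 0.94. A zero-byte placeholder fails this check, which is consistent with the CorruptHFileException seen in step 2 above. The path is the one from the quoted log, and the commands are only echoed here since they need a live cluster:

```shell
# Assumed store-file path, copied from the quoted log; adjust to your cluster.
STOREFILE="/hbase/content_810032/9350f1616ad689de1307eed1b9efa8aa/pt/097ea6c56c5e4a2a9fed9b07f5d51428"

# HBase's HFile pretty-printer parses the file and prints its metadata
# (-v verbose, -m print meta); a plain empty file will not parse.
CHECK_CMD="hbase org.apache.hadoop.hbase.io.hfile.HFile -v -m -f ${STOREFILE}"

# Only once the placeholder parses cleanly, re-run the repair.
REPAIR_CMD="hbase hbck -repair"

echo "${CHECK_CMD}"
echo "${REPAIR_CMD}"
```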
>
> Is this the only way to resolve the issue, or is there something I have
> not done?
>
> P.S. Auto-compaction is on, as is the auto-balancer.
> I do not want to lose data.
>
>
> Thanks
> Divye Sheth
>
