It seems I also ran into a similar issue, with this complaint:

org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for t_system_rec,,99999999999999 after 10 tries.
        at org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:38)
        at org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:265)
        at org.apache.hadoop.hbase.client.HTablePool.findOrCreateTable(HTablePool.java:195)
        at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:174)

But after restarting and rerunning the application, it disappeared.
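(Side note: the trailing 99999999999999 in that message is not data from the table. The client locates a region by scanning .META. with a key of the form table,startRow,timestamp, where 99999999999999 is a max-timestamp sentinel that sorts after any real region entry. A rough sketch of the key format, my own helper rather than actual HBase code:)

```python
def meta_search_key(table, row):
    """Approximate the .META. scan key an HBase 0.94 client builds when
    locating the region holding `row` in `table`; 99999999999999 is a
    sentinel timestamp that sorts after any real region's id."""
    return "{},{},{}".format(table, row, "99999999999999")

# The two keys seen in this thread:
print(meta_search_key("t_system_rec", ""))   # t_system_rec,,99999999999999
print(meta_search_key("data2", "04684015"))  # data2,04684015,99999999999999
```

So the exception only says the meta scan found no live server for that region, which is consistent with it clearing up after a restart once the regions were reassigned.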

On 9/26/2013 11:03 AM, Ted Yu wrote:
Can you check the NameNode log?

What Hadoop / HBase releases are you using?

Thanks

On Sep 25, 2013, at 7:52 PM, kun yan <[email protected]> wrote:

I checked the region server logs.
What should I do? I only know a little about HLog.

2013-09-26 10:37:13,478 WARN org.apache.hadoop.hbase.util.FSHDFSUtils:
Cannot recoverLease after trying for 900000ms
(hbase.lease.recovery.timeout); continuing, but may be DATALOSS!!!;
attempt=16 on
file=hdfs://hydra0001:8020/hbase/.logs/hydra0006,60020,1379926437471-splitting/hydra0006%2C60020%2C1379926437471.1380157500804
after 921109ms
2013-09-26 10:37:13,519 WARN org.apache.hadoop.hbase.regionserver.wal.HLog:
Lease should have recovered. This is not expected. Will retry
java.io.IOException: Cannot obtain block length for
LocatedBlock{BP-1087715125-192.5.1.50-1378889582109:blk_-8658284328699269340_21570;
getBlockSize()=0; corrupt=false; offset=0; locs=[192.5.1.56:50010,
192.5.1.52:50010, 192.5.1.55:50010]}
        at
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:319)
        at
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:263)
        at
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:205)
        at
org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:198)
        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1117)
        at
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
        at
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
        at
org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1787)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:62)
        at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1707)
        at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:177)
        at
org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:713)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:846)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:759)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:403)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:371)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:115)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:283)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:214)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:182)
        at java.lang.Thread.run(Thread.java:722)
2013-09-26 10:40:05,900 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=2.02 MB,
free=243.83 MB, max=245.84 MB, blocks=0, accesses=0, hits=0, hitRatio=0,
cachingAccesses=0, cachingHits=0, cachingHitsRatio=0, evictions=0,
evicted=0, evictedPerRun=NaN
2013-09-26 10:40:07,770 DEBUG
org.apache.hadoop.hbase.regionserver.LogRoller: Hlog roll period 3600000ms
elapsed
2013-09-26 10:42:14,404 ERROR
org.apache.hadoop.hbase.regionserver.wal.HLog: Can't open after 300
attempts and 300926ms  for
hdfs://hydra0001:8020/hbase/.logs/hydra0006,60020,1379926437471-splitting/hydra0006%2C60020%2C1379926437471.1380157500804
2013-09-26 10:42:14,404 INFO
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Processed 0 edits
across 0 regions threw away edits for 0 regions; log
file=hdfs://hydra0001:8020/hbase/.logs/hydra0006,60020,1379926437471-splitting/hydra0006%2C60020%2C1379926437471.1380157500804
is corrupted = false progress failed = false
2013-09-26 10:42:14,404 WARN
org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of
hdfs://hydra0001:8020/hbase/.logs/hydra0006,60020,1379926437471-splitting/hydra0006%2C60020%2C1379926437471.1380157500804
failed, returning error
java.io.IOException: Cannot obtain block length for
LocatedBlock{BP-1087715125-192.5.1.50-1378889582109:blk_-8658284328699269340_21570;
getBlockSize()=0; corrupt=false; offset=0; locs=[192.5.1.56:50010,
192.5.1.52:50010, 192.5.1.55:50010]}
        at
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:319)
        at
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:263)
        at
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:205)
        at
org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:198)
        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1117)
        at
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
        at
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
        at
org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1787)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:62)
        at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1707)
        at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:177)
        at
org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:713)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:846)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:759)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:403)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:371)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:115)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:283)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:214)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:182)
        at java.lang.Thread.run(Thread.java:722)
2013-09-26 10:42:14,418 INFO
org.apache.hadoop.hbase.regionserver.SplitLogWorker: successfully
transitioned task
/hbase/splitlog/hdfs%3A%2F%2Fhydra0001%3A8020%2Fhbase%2F.logs%2Fhydra0006%2C60020%2C1379926437471-splitting%2Fhydra0006%252C60020%252C1379926437471.1380157500804
to final state err
2013-09-26 10:42:14,419 INFO
org.apache.hadoop.hbase.regionserver.SplitLogWorker: worker
hydra0006,60020,1380159605759 done with task
/hbase/splitlog/hdfs%3A%2F%2Fhydra0001%3A8020%2Fhbase%2F.logs%2Fhydra0006%2C60020%2C1379926437471-splitting%2Fhydra0006%252C60020%252C1379926437471.1380157500804
in 1222058ms
2013-09-26 10:42:14,427 DEBUG
org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or
departed
2013-09-26 10:42:44,450 DEBUG
org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or
departed
2013-09-26 10:42:44,497 INFO
org.apache.hadoop.hbase.regionserver.SplitLogWorker: worker
hydra0006,60020,1380159605759 acquired task
/hbase/splitlog/hdfs%3A%2F%2Fhydra0001%3A8020%2Fhbase%2F.logs%2Fhydra0003%2C60020%2C1379926447350-splitting%2Fhydra0003%252C60020%252C1379926447350.1380157487667
2013-09-26 10:42:44,499 INFO
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Splitting hlog:
hdfs://hydra0001:8020/hbase/.logs/hydra0003,60020,1379926447350-splitting/hydra0003%2C60020%2C1379926447350.1380157487667,
length=63861780
2013-09-26 10:42:44,506 INFO org.apache.hadoop.hbase.util.FSHDFSUtils:
Recovering lease on dfs file
hdfs://hydra0001:8020/hbase/.logs/hydra0003,60020,1379926447350-splitting/hydra0003%2C60020%2C1379926447350.1380157487667
2013-09-26 10:42:44,507 INFO org.apache.hadoop.hbase.util.FSHDFSUtils:
recoverLease=true, attempt=0 on
file=hdfs://hydra0001:8020/hbase/.logs/hydra0003,60020,1379926447350-splitting/hydra0003%2C60020%2C1379926447350.1380157487667
after 1ms
2013-09-26 10:42:44,512 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
connect to /192.5.1.56:50010 for block, add to deadNodes and continue.
java.io.IOException: Got error for OP_READ_BLOCK, self=/192.5.1.56:37058,
remote=/192.5.1.56:50010, for file
/hbase/.logs/hydra0003,60020,1379926447350-splitting/hydra0003%2C60020%2C1379926447350.1380157487667,
for pool BP-1087715125-192.5.1.50-1378889582109 block
785178848034028699_21541
java.io.IOException: Got error for OP_READ_BLOCK, self=/192.5.1.56:37058,
remote=/192.5.1.56:50010, for file
/hbase/.logs/hydra0003,60020,1379926447350-splitting/hydra0003%2C60020%2C1379926447350.1380157487667,
for pool BP-1087715125-192.5.1.50-1378889582109 block
785178848034028699_21541
        at
org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:429)
        at
org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:394)
        at
org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
        at
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
        at
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538)
        at
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
        at
org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at java.io.DataInputStream.readFully(DataInputStream.java:169)
        at
org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1800)
        at
org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1765)
        at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1714)
        at
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
        at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:177)
        at
org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:713)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:846)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:759)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:403)
        at
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:371)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:115)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:283)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:214)
        at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:182)
        at java.lang.Thread.run(Thread.java:722)
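(For anyone reading the /hbase/splitlog znode names above: they are just the WAL file path URL-encoded once; the %252C runs are commas that were already %-encoded in the file name itself. A small decoding sketch, using the task name from the log above:)

```python
from urllib.parse import unquote

ZNODE = ("/hbase/splitlog/"
         "hdfs%3A%2F%2Fhydra0001%3A8020%2Fhbase%2F.logs"
         "%2Fhydra0006%2C60020%2C1379926437471-splitting"
         "%2Fhydra0006%252C60020%252C1379926437471.1380157500804")

def task_to_wal_path(znode):
    # The task name is the full WAL path URL-encoded once; %252C decodes
    # to %2C, a comma that was already escaped in the file name itself,
    # so a single unquote() restores the path the region server logs.
    return unquote(znode.rsplit("/", 1)[-1])

print(task_to_wal_path(ZNODE))
```

Decoding it gives exactly the hdfs://hydra0001:8020/hbase/.logs/... path that the split worker reports failing on, which makes it easier to match splitlog tasks in ZooKeeper against files in the -splitting directories.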


2013/9/26 Jimmy Xiang <[email protected]>

In the region server log, you should see the details about the failure.


On Wed, Sep 25, 2013 at 7:31 PM, kun yan <[email protected]> wrote:

I was using importtsv to import data into HDFS when a power outage occurred. Horrible; I then re-imported the data. (HBase 0.94)

The exception:
org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find
region for data2,04684015,99999999999999 after 10 tries.

The HMaster logs look as follows:

2013-09-26 10:21:17,874 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:18,875 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:19,875 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:20,875 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:21,876 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:22,875 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:23,875 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:24,875 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1 unassigned = 0
2013-09-26 10:21:25,436 INFO
org.apache.hadoop.hbase.master.SplitLogManager: task
/hbase/splitlog/hdfs%3A%2F%2Fhydra0001%3A8020%2Fhbase%2F.logs%2Fhydra0006%2C60020%2C1379926437471-splitting%2Fhydra0006%252C60020%252C1379926437471.1380157500804
entered state err hydra0004,60020,1380159614688
2013-09-26 10:21:25,436 WARN
org.apache.hadoop.hbase.master.SplitLogManager: Error splitting
/hbase/splitlog/hdfs%3A%2F%2Fhydra0001%3A8020%2Fhbase%2F.logs%2Fhydra0006%2C60020%2C1379926437471-splitting%2Fhydra0006%252C60020%252C1379926437471.1380157500804
2013-09-26 10:21:25,436 WARN
org.apache.hadoop.hbase.master.SplitLogManager: error while splitting
logs
in
[hdfs://hydra0001:8020/hbase/.logs/hydra0003,60020,1379926447350-splitting,
hdfs://hydra0001:8020/hbase/.logs/hydra0004,60020,1379926440171-splitting,
hdfs://hydra0001:8020/hbase/.logs/hydra0006,60020,1379926437471-splitting]
installed = 2 but only 0 done
2013-09-26 10:21:25,436 WARN
org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting of
[hydra0003,60020,1379926447350, hydra0004,60020,1379926440171,
hydra0006,60020,1379926437471]
java.io.IOException: error or interrupted while splitting logs in
[hdfs://hydra0001:8020/hbase/.logs/hydra0003,60020,1379926447350-splitting,
hdfs://hydra0001:8020/hbase/.logs/hydra0004,60020,1379926440171-splitting,
hdfs://hydra0001:8020/hbase/.logs/hydra0006,60020,1379926437471-splitting]
Task = installed = 2 done = 0 error = 2
        at
org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:282)
        at
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:300)
        at
org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:242)
        at
org.apache.hadoop.hbase.master.HMaster.splitLogAfterStartup(HMaster.java:661)
        at
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:580)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:396)
        at java.lang.Thread.run(Thread.java:722)
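(Not from the thread itself, but after a power outage the usual first steps are to ask HDFS which files are corrupt or still open for write, then let hbck check table integrity. A sketch of the commands one might run on this cluster; they only make sense on the cluster, and the sideline directory name is my own invention:)

```shell
# 1. Report WAL files still open for write / blocks with no finalized
#    length ("Cannot obtain block length" usually points here)
hadoop fsck /hbase/.logs -openforwrite -files -blocks -locations

# 2. General corruption report for the HBase root directory
hadoop fsck /hbase

# 3. Once the master is up, let HBase check table integrity
hbase hbck

# 4. Last resort only, accepting loss of that WAL's edits: move the
#    unreadable file aside so log splitting can finish. The target
#    directory is hypothetical; pick one outside /hbase/.logs.
# hadoop fs -mv \
#   "/hbase/.logs/hydra0006,60020,1379926437471-splitting/hydra0006%2C60020%2C1379926437471.1380157500804" \
#   /hbase/sidelined-wals/
```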

--

In the Hadoop world I am just a novice, exploring the entire Hadoop ecosystem; I hope one day I can contribute my own code.

YanBit
[email protected]



--
Best Regards, Julian
